dw2

15 May 2022

A year-by-year timeline to 2045

The ground rules for the worldbuilding competition were attractive:

  • The year is 2045.
  • AGI has existed for at least 5 years.
  • Technology is advancing rapidly and AI is transforming the world sector by sector.
  • The US, EU and China have managed a steady, if uneasy, power equilibrium.
  • India, Africa and South America are quickly on the rise as major players.
  • Despite ongoing challenges, there have been no major wars or other global catastrophes.
  • The world is not dystopian and the future is looking bright.

Entrants were asked to submit four pieces of work. One was a new media piece. I submitted this video:

Another required piece was:

a timeline with entries for each year between 2022 and 2045 giving at least two events (e.g. “X invented”) and one data point (e.g. “GDP rises by 25%”) for each year.

The timeline I created dovetailed with the framework from the above video. Since I enjoyed creating it, I’m sharing my submission here, in the hope that it may inspire readers.

(Note: the content was submitted on 11th April 2022.)

2022

US mid-term elections result in log-jammed US governance, widespread frustration, and a groundswell of desire for more constructive approaches to politics.

The collapse of a major crypto “stablecoin” results in much wider adverse repercussions than was generally expected, and a new social appreciation of the dangers of flawed financial systems.

Data point: Number of people killed in violent incidents (including homicides and armed conflicts) around the world: 590,000

2023

Fake news spread on social media by a new variant of AI provokes riots in which more than 10,000 people die, leading to much greater interest in a set of “Singularity Principles” that had previously been proposed to steer the development of potentially world-transforming technologies.

G7 transforms into the D16, consisting of the world’s 16 leading democracies, proclaiming a profound shared commitment to champion norms of: openness; free and fair elections; the rule of law; independent media, judiciary, and academia; power being distributed rather than concentrated; and respect for autonomous decisions of groups of people.

Data point: Proportion of world population living in countries that are “full democracies” as assessed by the Economist: 6.4%

2024

South Korea starts a trial of a nationwide UBI scheme – the first of what will become, in later years, a long line of increasingly robust “universal citizens’ dividends” schemes around the world.

A previously unknown offshoot of ISIS releases a bioengineered virus. Fortunately, vaccines are quickly developed and deployed against it. In parallel, a bitter cyber war takes place between Iran and Israel. These incidents lead to international commitments to prevent future recurrences.

Data point: Proportion of people of working age in US who are not working and who are not looking for a job: 38%

2025

Extreme weather – floods and storms – kills tens of thousands in both North America and Europe. A major trial of geo-engineering is rushed through, with reflection of solar radiation in the stratosphere – causing global political disagreement and then a renewed determination for tangible shared action on climate change.

The US President appoints a Secretary for the Future as a top-level cabinet position. More US states adopt ranked-choice voting, allowing third parties to grow in prominence.

Data point: Proportion of earth’s habitable land used to rear animals for human food: 38%

2026

A song created entirely by an AI tops the hit parade, and initiates a radical new musical genre.

Groundswell opposition to autocratic rule in Russia leads to the fall from power of the president and a new dedication to democracy throughout countries formerly perceived as being within Russia’s sphere of direct influence.

Data point: Net greenhouse gas emissions (including those from land-use changes): 59 billion tons of CO2 equivalent – an unwelcome record.

2027

Metformin approved for use as an anti-aging medicine in a D16 country. Another D16 country recommends nationwide regular usage of a new nootropic drug.

Exchanges of small numbers of missiles between North and South Korea lead to regime change inside North Korea and a rapprochement between the long-bitter enemies.

Data point: Proportion of world population living in countries that are “full democracies” as assessed by the Economist: 9.2%

2028

An innovative nuclear fusion system, with its design assisted by AI, runs for more than one hour and generates significantly more energy than was put in.

As a result of disagreements about the future of an independent Taiwan, an intense destructive cyber battle takes place. At the end, the nations of the world commit more seriously than before to avoiding any future cyber battles.

Data point: Proportion of world population experiencing mental illness or dissatisfied with the quality of their mental health: 41%

2029

A trial of an anti-aging intervention in middle-aged dogs is confirmed to have increased remaining life expectancy by 25% without causing any adverse side effects. Public interest in similar interventions in humans skyrockets.

The UK rejoins a reconfigured EU, as an indication of support for sovereignty that is pooled rather than narrow.

Data point: Proportion of world population with formal cryonics arrangements: 1 in 100,000

2030

Russia is admitted into the D40 – a newly expanded version of the D16. The D40 officially adopts the “Index of Human Flourishing” as a more important metric than GDP, and agrees a revised version of the Universal Declaration of Human Rights, brought up to date with transhuman issues.

First permanent implant in a human of an artificial heart with a new design that draws all required power from the biology of the body rather than any attached battery, and whose pace of operation is under the control of the brain.

Data point: Net greenhouse gas emissions (including those from land-use changes): 47 billion tons of CO2 equivalent – a significant improvement

2031

An AI discovers and explains a profound new way of looking at mathematics, DeepMath, leading in turn to dramatically successful new theories of fundamental physics.

Widespread use of dynamically re-programmed nanobots to treat medical conditions that would previously have been fatal.

Data point: Proportion of world population regularly taking powerful anti-aging medications: 23%

2032

First person reaches the age of 125. Her birthday celebrations are briefly disrupted by a small group of self-described “naturality advocates” who chant “120 is enough for anyone”, but that group has little public support.

D40 countries put in place a widespread “trustable monitoring system” to cut down on existential risks (such as spread of WMDs) whilst maintaining citizens’ trust.

Data point: Proportion of world population living in countries that are “full democracies” as assessed by the Economist: 35.7% 

2033

For the first time since the 1850s, the US President comes from a party other than the Republicans and the Democrats.

An AI system convincingly passes the Turing test, impressing even the previously staunchest critics with its apparent grasp of general knowledge and common sense. The answers it gives to questions involving moral dilemmas also impress previous sceptics.

Data point: Proportion of people of working age in US who are not working and who are not looking for a job: 58%

2034

The D90 (expanded from the D40) agrees to vigorously impose Singularity Principles rules to avoid inadvertent creation of dangerous AGI.

Atomically precise synthetic nanoscale assembly factories have come of age, in line with the decades-old vision of nanotechnology visionary Eric Drexler, and are proving to have just as consequential an impact on human society as AI.

Data point: Net greenhouse gas *removals*: 10 billion tons of CO2 equivalent – a dramatic improvement

2035

A novel written entirely by an AI reaches the top of the New York Times bestseller list, and is widely celebrated as being the finest piece of literature ever produced.

Successful measures to remove greenhouse gases from the atmosphere, coupled with wide deployment of clean energy sources, lead to a declaration of “victory over runaway climate change”.

Data point: Proportion of earth’s habitable land used to rear animals for human food: 4%

2036

A film created entirely by an AI, without any real human actors, wins Oscar awards.

The last major sceptical holdout, a philosophy professor from an Ivy League university, accepts that AGI now exists. The pope gives his blessing too.

Data point: Proportion of world population with cryonics arrangements: 24%

2037

The last instances of the industrial scale slaughter of animals for human consumption, on account of the worldwide adoption of cultivated (lab-grown) meat.

AGI convincingly explains that it is not sentient, and that it has a very different fundamental structure from that of biological consciousness.

Data point: Proportion of world population who are literate: 99.3%

2038

Rejuvenation therapies are in wide use around the world. “Eighty is the new fifty”. First person reaches the age of 130.

Improvements made by AGI upon itself effectively raise its IQ a hundredfold, taking it far beyond the comprehension of human observers. However, the AGI provides explanatory educational material that allows people to understand vast new sets of ideas.

Data point: Proportion of world population who consider themselves opposed to AGI: 0.1%

2039

An extensive set of “vital training” sessions has been established by the AGI, with all citizens over the age of ten participating for a minimum of seven hours per day on 72 days each year, to ensure that humans develop and maintain key survival skills.

Menopause reversal is commonplace. Women who had long ago given up any idea of bearing another child happily embrace motherhood again.

Data point: Proportion of world population regularly taking powerful anti-aging medications: 99.2%

2040

The use of “mind phones” is widespread: new brain-computer interfaces that allow communication between people by mental thought alone.

People regularly opt to have several of their original biological organs replaced by synthetic alternatives that are more efficient, more durable, and more reliable.

Data point: Proportion of people of working age in US who are not working and who are not looking for a job: 96%

2041

Shared immersive virtual reality experiences include hyper-realistic simulations of long-dead individuals – including musicians, politicians, royalty, saints, and founders of religions.

The number of miles travelled in small “flying cars” exceeds that travelled in ground-based powered transport.

Data point: Proportion of world population living in countries that are “full democracies” as assessed by the Economist: 100.0%

2042

First successful revival of mammal from cryopreservation.

AGI presents a proof of the possibility of time travel, but safe transit of humans through time would require resources equivalent to building a Dyson sphere around the sun.

Data point: Proportion of world population experiencing mental illness or dissatisfied with the quality of their mental health: 0.4%

2043

First person reaches the age of 135, and declares herself to be healthier than at any time in the preceding four decades.

As a result of virtual reality encounters with avatars of founders of religions, a number of new systems of philosophical and mystical thinking grow in popularity.

Data point: Proportion of world’s energy provided by earth-based nuclear fusion: 75%

2044

First human baby born from an ectogenetic pregnancy.

Family holidays on the Moon are an increasingly common occurrence.

Data point: Average amount of their waking time that people spend in a metaverse: 38%

2045

First revival of human from cryopreservation – someone who had been cryopreserved ten years previously.

Subtle messages decoded by AGI from far distant stars in the galaxy confirm that other intelligent civilisations exist, and are on their way to reveal themselves to humanity.

Data point: Number of people killed in violent incidents around the world: 59

Postscript

My thanks go to the competition organisers, the Future of Life Institute, for providing the inspiration for the creation of the above timeline.

Readers are likely to have questions as they browse the timeline above. More details of the reasoning behind the scenarios involved are contained in three follow-up posts:

10 February 2013

Fixing bugs in minds and bugs in societies

Suppose we notice what appears to be bugs in our thinking processes. Should we try to fix these bugs?

Or how about bugs in the way society works? Should we try to fix these bugs too?

As examples of bugs of the first kind, I return to a book I reviewed some time ago, “Kluge: The Haphazard Construction of the Human Mind”. I entitled my review “The human mind as a flawed creation of nature”, and I still stick by that description. In that review, I pulled out the following quote from near the end of the book:

In this book, we’ve discussed several bugs in our cognitive makeup: confirmation bias, mental contamination, anchoring, framing, inadequate self-control, the ruminative cycle, the focussing illusion, motivated reasoning, and false memory, not to mention absent-mindedness, an ambiguous linguistic system, and vulnerability to mental disorders. Our memory, contextually driven as it is, is ill suited to many of the demands of modern life, and our self-control mechanisms are almost hopelessly split. Our ancestral mechanisms were shaped in a different world, and our more modern deliberative mechanisms can’t shake the influence of that past. In every domain we have considered, from memory to belief, choice, language, and pleasure, we have seen that a mind built largely through the progressive overlay of technologies is far from perfect…

These bugs in our mental makeup are far from being harmless quirks or curiosities. They can lead us:

  • to overly trust people who have visual trappings of authority,
  • to fail to make adequate provision for our own futures,
  • to keep throwing money into bad investments,
  • and to jump to all kinds of dangerous premature conclusions.

But should we try to fix these bugs?

The field where the term ‘bug’ was first used in this sense of a mistake, software engineering, provides many cautionary tales of bug fixing going wrong:

  • Sometimes what appears to be a ‘bug’ in a piece of software turns out to be a useful ‘feature’, with a good purpose after all
  • Sometimes a fix introduces unexpected side-effects, which are worse than the bug which was fixed.

I shared an example of the second kind in the “Managing defects” chapter of the book I wrote in 2004-5, “Symbian for software leaders: principles of successful smartphone development projects”:

An embarrassing moment with defects

The first million-selling product that I helped to build was the Psion Series 3a handheld computer. This was designed as a distinct evolutionary step-up from its predecessor, the original Series 3 (often called the “Psion 3 classic” in retrospect)…

At last the day came (several weeks late, as it happened) to ship the software to Japan, where it would be flashed into large numbers of chips ready to assemble into production Series 3a devices. It was ROM version 3.20. No sooner was it sent than panic set in among the development team. Two of us had independently noticed a new defect in the agenda application. If a user set an alarm on a repeating entry, and then adjusted the time of this entry, in some circumstances the alarm would fail to ring. We reasoned that this was a really bad defect – after all, two of us had independently found it.

The engineer who had written the engine for the application – the part dealing with all data manipulation algorithms, including calculating alarm times – studied his code, and came up with a fix. We were hesitant, since it was complex code. So we performed a mass code review: lots of the best brains in the team talked through the details of the fix. After twenty-four hours, we decided the fix was good. So we recalled 3.20, and released 3.21 in its place. To our relief, no chips were lost in the process: the flashing had not yet started.

Following standard practice, we upgraded the prototype devices of everyone in the development team, to run 3.21. As we waited for the chips to return, we kept using our devices – continuing (in the jargon of the team) to “eat our own dog food”. Strangely, there were a few new puzzling problems with alarms on entries. Actually, it soon became clear these problems were a lot worse than the problem that had just been fixed. As we diagnosed these new problems, a sinking feeling grew. Despite our intense care (but probably because of the intense pressure) we had failed to fully consider all the routes through the agenda engine code; the change made for 3.21 was actually a regression on previous behaviour.

Once again, we made a phone call to Japan. This time, we were too late to prevent some tens of thousands of wasted chips. We put the agenda engine code back to its previous state, and decided that was good enough! (Because of some other minor changes, the shipping version number was incremented to 3.22.) We decided to live with this one defect, in order not to hold up production any longer.

We were expecting to hear more news about this particular defect from the Psion technical support teams, but the call never came. This defect never featured on the list of defects reported by end users. In retrospect, we had been misled by the fact that two of us had independently found this defect during the final test phase: this distorted our priority call…

That was an expensive mistake, which seared a cautionary attitude into my own brain, regarding the dangers of last-minute changes to complex software. All seasoned software engineers have similar tales they can tell, from their own experience.
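To make the trap concrete, here is a minimal, hypothetical sketch – in Python, and nothing like the actual Psion agenda engine code – of a next-alarm calculation for a repeating entry, together with the kind of boundary-case regression tests whose absence lets a last-minute “fix” slip through:

```python
# A hypothetical sketch (not the actual Psion code): computing the next
# alarm time for a repeating agenda entry. Several paths interact, so a
# change that fixes one path can silently regress another - hence the
# regression tests below covering every boundary case.

from datetime import datetime, timedelta

def next_alarm(entry_start: datetime, repeat_days: int, now: datetime) -> datetime:
    """Return the next time the alarm should ring for a repeating entry."""
    if now <= entry_start:
        return entry_start  # first occurrence is still ahead
    period = timedelta(days=repeat_days)
    periods_elapsed = (now - entry_start) // period
    candidate = entry_start + periods_elapsed * period
    if candidate < now:  # that occurrence has already rung; move to the next one
        candidate += period
    return candidate

def test_next_alarm():
    start = datetime(2024, 1, 1, 9, 0)  # weekly entry, alarm at 09:00
    assert next_alarm(start, 7, datetime(2023, 12, 25, 12, 0)) == start
    assert next_alarm(start, 7, datetime(2024, 1, 3, 12, 0)) == datetime(2024, 1, 8, 9, 0)
    # Boundary cases: 'now' landing exactly on an occurrence must not skip it.
    assert next_alarm(start, 7, start) == start
    assert next_alarm(start, 7, datetime(2024, 1, 8, 9, 0)) == datetime(2024, 1, 8, 9, 0)

test_next_alarm()
```

The point is not this particular code; it is that any routine with several interacting paths – before the first occurrence, mid-cycle, exactly on an occurrence – deserves a full automated test pass before a late change ships, not just an intense code review.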

If attempts to fix defects in software are often counter-productive, how much more dangerous are attempts to fix defects in our thinking processes – or defects in how our societies operate! At least in the first case, we generally still have access to the source code, and to the design intention of the original software authors. For the other examples, the long evolutionary history that led to particular designs is something at which we can only guess. It would be like trying to fix a software bug that somehow results from the combination of many millions of lines of source code, written decades ago by people who left no documentation and who are not available for consultation.

What I’ve just stated is a version of an argument that conservative-minded thinkers often give, against attempts to conduct “social engineering” or “improve on nature”. Tinkering with ages-old thinking processes – or with structures within societies – carries the risk that we fail to appreciate many hidden connections. Therefore (the argument runs) we should desist from any such experimentation.

Versions of this argument appeared, from two different commentators, in responses to my previous blogpost. One put it like this:

The trouble is that ‘cognitive biases and engrained mistakes’ may appear dysfunctional but they are, in fact, evolutionarily successful adaptations of humanity to its highly complex environment. These, including prejudice, provide highly effective means for the resolution of really existing problems in human capacity…

Rational policies to deal with human and social complexity have almost invariably been proved to be inhumane and brutal, fine for the theoretician in the British Library, but dreadful in the field.

Another continued the theme:

I have much sympathy for [the] point about “cognitive biases and engrained mistakes”. The belief that one has identified cognitive bias in another or has liberated oneself from such can be a “Fatal Conceit,” to borrow a phrase from Hayek, and has indeed not infrequently given rise to inhumane treatment even of whole populations. One of my favourite sayings is David Hume’s “the rules of morality are not conclusions of our reason,” which is at the heart of Hayek’s Fatal Conceit argument.

But the conclusion I draw is different. I don’t conclude, “Never try to fix bugs”. After all, the very next sentence from my chapter on “Managing defects” stated, “We eventually produced a proper fix several months later”. Indeed, many bugs do demand urgent fixes. Instead, my conclusion is that bug fixing in complex systems needs a great deal of careful thought, including cautious experimentation, data analysis, and peer review.

The analogy can be taken one more step. Suppose that a software engineer has a bad track record in his or her defect fixes. Despite claiming, each time, to be exercising care and attention, the results speak differently: the fixes usually make things worse. Suppose, further, that this software engineer comes from a particular company, and that fixes from that company have the same poor track record. (To make this more vivid, the name of this company might be “Technocratic solutions” or “Socialista” or “Utopia software”. You can probably see where this argument is going…) That would be a reason for especial discomfort if someone new from that company submits code changes in an attempt to fix a given bug.

Well, something similar happens in the field of social change. History has shown, in many cases, that attempts at mental engineering and social engineering were counter-productive. For that reason, many conservatives support various “precautionary principles”. They are especially fearful of any social changes proposed by people they can tar with labels such as “technocratic” or “socialist” or “utopian”.

These precautionary principles presuppose that the ‘cure’ will be worse than the ‘disease’. However, I personally have greater confidence in the fast-improving power of new fields of science, including the fields that study our mind and brain. These improvements are placing ever greater understanding in our hands – and hence, ever greater power to fix bugs without introducing nasty side-effects.

For these reasons, I do look forward (as I said in my previous posting) to these improvements

helping individuals and societies rise above cognitive biases and engrained mistakes in reasoning… and accelerating a reformation of the political and economic environment, so that the outcomes that are rationally best are pursued, instead of those which are expedient and profitable for the people who currently possess the most power and influence.

Finally, let me offer some thoughts on the observation that “the rules of morality are not conclusions of our reason”. That observation is vividly supported by the disturbing “moral dumbfounding” examples discussed by Jonathan Haidt in his excellent book “The Happiness Hypothesis: Finding Modern Truth in Ancient Wisdom” (which I briefly reviewed here). But does that observation mean that we should stop trying to reason with people about moral choices?

Here, I’ll adapt comments from my review of “The Moral Landscape: How Science Can Determine Human Values”, by Sam Harris.

That book considers how we might go about finding answers to big questions such as “how should I live?” and “what makes some ways of life more moral than others?”  As some specific examples, how should we respond to:

  • The Taliban’s insistence that the education of girls is an abomination?
  • The stance by Jehovah’s Witnesses against blood transfusion?
  • The prohibition by the Catholic Church of the use of condoms?
  • The legalisation of same-sex relationships?
  • The use of embryonic stem cells in the search for cures of diseases such as Alzheimer’s and Parkinson’s?
  • A would-be Islamist suicide bomber who is convinced that his intended actions will propel him into a paradise of abundant mental well-being?

One response is that such questions are the province of religion. The correct answers are revealed via prophets and/or holy books.  The answers are already clear, to those with the eye of faith. It is a divine being that tells us, directly or indirectly, the difference between good and evil. There’s no need for experimental investigations here.

A second response is that the main field to study these questions is that of philosophy. It is by abstract reason that we can determine the difference between good and evil.

But Sam Harris, instead, primarily advocates the use of the scientific method. Science enters the equation because it is increasingly able to identify:

  • Neural correlates (or other physical or social underpinnings) of sentient well-being
  • Cause-and-effect mechanisms whereby particular actions typically bring about particular changes in these neural correlates.

With the help of steadily improving scientific understanding, we can compare different actions based on their likely effects on sentient well-being. Actions which are likely to magnify sentient well-being are good, and those which are likely to diminish it are evil. That’s how we can evaluate, for example, the Taliban’s views on girls’ education.

As Harris makes clear, this is far from being an abstract, other-worldly discussion. Cultures are clashing all the time, with lots of dramatic consequences for human well-being. Seeing these clashes, are we to be moral relativists (saying “different cultures are best for different peoples, and there’s no way to objectively compare them”) or are we to be moral realists (saying “some cultures promote significantly more human flourishing than others, and are to be objectively preferred as a result”)? And if we are to be moral realists, do we resolve our moral arguments by deference to religious tradition, or by open-minded investigation of real-world connections?

In the light of these questions, here are some arguments from Harris’s book that deserve thought:

  • There’s a useful comparison between the science of human values (the project espoused by Harris), and a science of diets (what we should eat, in order to enjoy good health).  In both cases, we’re currently far from having all the facts.  And in both cases, there are frequently several right answers.  But not all diets are equally good.  Similarly, not all cultures are equally good.  And what makes one diet better than another will be determined by facts about the physical world – such as the likely effects (direct and indirect) of different kinds of fats and proteins and sugars and vitamins on our bodies and minds.  While people still legitimately disagree about diets, that’s not a reason to say that science can never answer such questions.  Likewise, present-day disagreements about specific causes of happiness, mental flourishing, and general sentient well-being, do not mean these causes fail to exist, or that we can never know them.
  • Likewise with the science of economics.  We’re still far from having a complete understanding of how different monetary and financial policies impact the long-term health of the economy.  But that doesn’t mean we should throw up our hands and stop searching for insight about likely cause and effect.  The discipline of economics, imperfect though it is, survives in an as-yet-incomplete state.  The same goes for political science too.  And, likewise, for the science of the moral landscape.
  • Attempts to reserve some special area of “moral insight” for religion are indefensible.  As Harris says, “How is it that most Jews, Christians, and Muslims are opposed to slavery? You don’t get this moral insight from scripture, because the God of Abraham expects us to keep slaves. Consequently, even religious fundamentalists draw many of their moral positions from a wider conversation about human values that is not, in principle, religious.” That’s the conversation we need to progress.

PS I’ve written more about cognitive biases and cognitive dissonance – and how we can transcend these mistakes – in my blogpost “Our own entrenched enemies of reason”.

1 April 2012

Why good people are divided by politics and religion

Filed under: books, collaboration, evolution, motivation, passion, politics, psychology, RSA — David Wood @ 10:58 pm

I’ve lost count of the number of people who have thanked me over the years for drawing their attention to the book “The Happiness Hypothesis: Finding Modern Truth in Ancient Wisdom” written by Jonathan Haidt, Professor of Social Psychology at the University of Virginia. That was a book with far-reaching scope and penetrating insight. Many of the ideas and metaphors in it have since become fundamental building blocks for other writers to use – such as the pithy metaphor of the human mind being divided like a rider on an elephant, with the job of the rider (our stream of conscious reasoning) being to serve the elephant (the other 99% of our mental processes).

This weekend, I’ve been reading Haidt’s new book, “The Righteous Mind: Why Good People Are Divided by Politics and Religion”. It’s a great sequel. Like its predecessor, it ranges across more than 2,400 years of thought, highlighting how recent research in social psychology sheds clear light on age-old questions.

Haidt’s analysis has particular relevance for two deeply contentious sets of debates that each threaten to destabilise and divide contemporary civil society:

  • The “new atheism” critique of the relevance and sanctity of religion in modern life
  • The political fissures that are coming to the fore in the 2012 US election year – fissures I see reflected in messages full of contempt and disdain in the Facebook streams of several generally sensible US-based people I know.

There’s so much in this book that it’s hard to summarise it without doing an injustice to huge chunks of fascinating material:

  • the importance of an empirical approach to understanding human morality – an approach based on observation, rather than on a priori rationality
  • moral intuitions come first, strategic reasoning comes second, to justify the intuitions we have already reached
  • there’s more to morality than concerns over harm and fairness; Haidt memorably says that “the righteous mind is like a tongue with six taste receptors”
  • the limitations of basing research findings mainly on ‘WEIRD’ participants (people who are Western, Educated, Industrialised, Rich, and Democratic)
  • the case for how biological “group selection” helped meld humans (as opposed to natural selection just operating at the level of individual humans)
  • a metaphor that “human beings are 90 percent chimp and 10 percent bee”
  • the case that “The most powerful force ever known on this planet is human cooperation — a force for construction and destruction”
  • methods for flicking a “hive switch” inside human brains that open us up to experiences of self-transcendence (including a discussion of rave parties).

The first chapter of the book is available online – as part of a website dedicated to the book. You can also get a good flavour of some of the ideas in the book from two talks Haidt has given at TED: “Religion, evolution, and the ecstasy of self-transcendence” (watch it full screen to get the full benefits of the video effects):

and (from a few years back – note that Haidt has revised some of his thinking since the date of this talk) “The moral roots of liberals and conservatives”:

Interested to find out more? I strongly recommend that you read the book itself. You may also enjoy watching a wide-ranging hour-long interview between Haidt and Robert Wright – author of Nonzero: The Logic of Human Destiny and The Evolution of God.

Footnote: Haidt is talking at London’s Royal Society of Arts at lunchtime on Tuesday 10th April; you can register to be included on the waiting list in case more tickets become available. The same evening, he’ll be speaking at the Royal Institution; happily, the Royal Institution website says that there is still “good availability” for tickets:

Jonathan Haidt, the highly influential psychologist, is here to show us why we all find it so hard to get along. By examining where morality comes from, and why it is the defining characteristic of humans, Haidt will show why we cannot dismiss the views of others as mere stupidity or moral corruption. Our moral roots run much deeper than we realize. We are hardwired not just to be moral, but moralistic and self-righteous. From advertising to politics, morality influences all aspects of behaviour. It is the key to understanding everybody. It explains why some of us are liberals, others conservatives. It is often the difference between war and peace. It is also why we are the only species that will kill for an ideal.

Haidt argues we are always talking past each other because we are appealing to different moralities: it is not just about justice and fairness – for some people authority, sanctity or loyalty are more important. With new evidence from his own empirical research, Haidt will show it is possible to liberate us from the disputes that divide good people. We can either stick to comforting delusions about others, or learn some moral psychology. His hope is that ultimately we can cooperate with those whose morals differ from our own.

27 July 2011

Eclectic guidance for big life choices

Filed under: books, challenge, Economics, evolution, leadership, market failure, psychology, risks, strategy — David Wood @ 10:34 pm

“If you’re too busy to write your normal blog posts, at least tell us what books you’ve liked reading recently.”

That’s a request I’ve heard in several forms over the last month or so, as I’ve been travelling widely on work-related assignments.  On these travels, I’ve met several people who were kind enough to mention that they enjoyed reading my blog posts – especially those postings recommending books to read.

In response to this suggestion, let me highlight four excellent books that I’ve read recently, which have each struck me as having something profound to say on the Big Topic of how to make major life choices.

Adapt: Why Success Always Starts with Failure, by Tim Harford

Adapt: Why Success Always Starts with Failure draws out all sorts of surprising “aha!” connections between different areas of life, work, and society.  The analysis ranges across the wars in Iraq, the comparative strengths and weaknesses of Soviet-style centrally planned economies, the unorthodox way the development of the Spitfire fighter airplane was funded, the “Innovator’s Dilemma” whereby one-time successful companies are often blindsided by emerging new technologies, different approaches to measuring the effectiveness of charitable aid donations, the risk of inadvertently encouraging perverse behaviours when setting grand over-riding incentives, the over-bearing complexity of modern technology, the causes of the great financial crash of 2008-2009, reasons why safety systems break down, approaches to tackling climate change, and the judicious use of prizes to encourage successful breakthrough innovation.  Yes, this is a real intellectual roller-coaster, with some unexpected twists along the way – revelations that had me mouthing “wow, wow” under my breath.

And as well as heroes, there are villains.  (Donald Rumsfeld comes out particularly badly in these pages – even though he’s clearly in some ways a very bright person.  That’s an awful warning to the others among us who rejoice in above-average IQs.)

The author, Tim Harford, is an economist, but this book is grounded in observations about Darwinian evolution.  Three pieces of advice pervade the analysis – advice that Harford dubs “Palchinsky Principles”, in honour of Peter Palchinsky, a Russian mining engineer who was incarcerated and executed by Stalin’s government in 1929 after many years of dissent against the human cost of the Soviet top-down command and control approach to industrialisation.  These principles are designed to encourage stronger innovation, better leadership, and more effective policies, in the face of complexity and unknowns.  The principles can be summarised as follows:

  1. Variation – seek out new ideas and try new things
  2. Survivability – when trying something new, do it on a scale where failure is survivable
  3. Selection – seek out feedback and learn from mistakes as you go along, avoiding an instinctive reaction of denial.

Harford illustrates these principles again and again, in the context of the weighty topics already listed, including major personal life choices as well as choices for national economies and international relations.  The illustrations are full of eye-openers.  The book’s subtitle is a succinct summary: “success always starts with failure”.  The notion that it’s always possible to “get it right the first time” is a profound obstacle to surviving the major crises that lie ahead of us.  We all need a greater degree of openness to smart experimentation and unexpected feedback.
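As an aside from a software engineer: the three principles map quite naturally onto a trial-and-error search loop.  The following toy Python restatement is my own sketch, not anything from Harford’s book; an objective function stands in for real-world feedback:

```python
# A toy restatement of the Palchinsky Principles as a search loop.
# evaluate(x) stands in for real-world feedback on a trial; it may
# report failure (a negative score).

import random

def palchinsky_search(evaluate, seed, rounds=200, step=0.5):
    """Variation: try new things. Survivability: each trial is a small
    perturbation of a known-good position, and a failed trial is simply
    discarded, never adopted. Selection: keep what the feedback favours."""
    best, best_score = seed, evaluate(seed)
    for _ in range(rounds):
        variant = best + random.gauss(0, step)  # Principle 1: variation
        score = evaluate(variant)               # Principle 2: a small, survivable trial
        if score > best_score:                  # Principle 3: selection
            best, best_score = variant, score
    return best, best_score

# Toy example: hill-climb towards the (unknown) optimum at x = 3.
random.seed(1)
print(palchinsky_search(lambda x: -(x - 3.0) ** 2, seed=0.0))
```

Failure is survivable here because the working baseline is always retained: a bad variant costs one trial, never the whole system.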

The Moral Landscape: How Science Can Determine Human Values, by Sam Harris

That thought provides a strong link to the second book I wish to mention: The Moral Landscape: How Science Can Determine Human Values.  It’s written by Sam Harris, who I first came to respect when I devoured his barnstorming The End of Faith: Religion, Terror, and the Future of Reason a few years ago.

In some ways, the newer book is even more audacious.  It considers how we might go about finding answers to big questions such as “how should I live?” and “what makes some ways of life more moral than others?”  As some specific examples, how should we respond to:

  • The Taliban’s insistence that the education of girls is an abomination?
  • The stance by Jehovah’s Witnesses against blood transfusion?
  • The prohibition by the Catholic Church of the use of condoms?
  • The legalisation of same-sex relationships?
  • The use of embryonic stem cells in the search for cures of diseases such as Alzheimer’s and Parkinson’s?
  • A would-be Islamist suicide bomber who is convinced that his intended actions will propel him into a paradise of abundant mental well-being?

One response is that such questions are the province of religion.  The correct answers are revealed via prophets and/or holy books.  The answers are already clear, to those with the eye of faith.  It is a divine being that tells us, directly or indirectly, the difference between good and evil.  There’s no need for experimental investigations here.

A second response is that the main field to study these questions is that of philosophy.  It is by reason that we can determine the difference between good and evil.

But Sam Harris, instead, primarily advocates the use of the scientific method.  Science enters the equation because it is increasingly able to identify:

  • Neural correlates (or other physical or social underpinnings) of sentient well-being
  • Cause-and-effect mechanisms whereby particular actions typically bring about particular changes in these neural correlates.

With the help of steadily improving scientific understanding, we can compare different actions based on their likely effects on sentient well-being.  Actions which are likely to magnify sentient well-being are good, and those which are likely to diminish it are evil.  It’s no defense of an action that it makes sense within an archaic, pre-scientific view of the world – a view in which misfortunes are often caused by witches’ spells, angry demons, or spiteful disembodied minds.

Here, “science” means more than the findings of any one branch of science, whether that is physics, biology, psychology, or sociology.  Instead, it is the general disciplined outlook on life that seeks to determine objective facts and connections, and which is open to making hypotheses, gathering data in support of these hypotheses, and refining hypotheses in the light of experimental findings.  As science finds out more about the causes of human well-being in a wide variety of circumstances, we can speak with greater confidence about matters which, formerly, caused people to defer to either religion or philosophy.
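To be concrete about what “comparing actions by their likely effects on sentient well-being” means at its very simplest, here is a toy expected-value sketch.  It is my own illustration, not anything from Harris’s book, and every number is an invented placeholder for what empirical study would have to supply; the point is only the shape of the calculation:

```python
# A toy expected-value comparison of actions by their likely effect on
# well-being. All probabilities and well-being scores are invented
# placeholders standing in for empirical estimates.

def expected_wellbeing(outcomes):
    """outcomes: list of (probability, change_in_wellbeing) pairs."""
    assert abs(sum(p for p, _ in outcomes) - 1.0) < 1e-9, "probabilities must sum to 1"
    return sum(p * delta for p, delta in outcomes)

actions = {
    "fund girls' education": [(0.9, +5.0), (0.1, +1.0)],
    "forbid girls' education": [(0.8, -4.0), (0.2, -1.0)],
}

# Rank the candidate actions, best first.
for name in sorted(actions, key=lambda n: -expected_wellbeing(actions[n])):
    print(f"{name}: expected change in well-being = {expected_wellbeing(actions[name]):+.2f}")
```

Real disagreements would of course be about the inputs – the probabilities, and the effects on flourishing – and those are exactly the facts that improving science can progressively pin down.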

Unsurprisingly, the book has stirred up a raucous hornet’s nest of criticism.  Harris addresses most of these criticisms inside the book itself (which suggests that many reviewers were failing to pay attention) and picks up the discussion again on his blog. He summarises his view as follows:

Morality and values depend on the existence of conscious minds—and specifically on the fact that such minds can experience various forms of well-being and suffering in this universe. Conscious minds and their states are natural phenomena… fully constrained by the laws of Nature (whatever these turn out to be in the end). Therefore, there must be right and wrong answers to questions of morality and values that potentially fall within the purview of science. On this view, some people and cultures will be right (to a greater or lesser degree), and some will be wrong, with respect to what they deem important in life.

As Harris makes clear, this is far from being an abstract, other-worldly discussion.  Cultures are clashing all the time, with lots of dramatic consequences for human well-being.  Seeing these clashes, are we to be moral relativists (saying “different cultures are best for different peoples, and there’s no way to objectively compare them”) or are we to be moral realists (saying “some cultures promote significantly more human flourishing than others, and are to be objectively preferred as a result”)?  And if we are to be moral realists, do we resolve our moral arguments by deference to religious tradition, or by open-minded investigation of real-world connections (investigations such as those proposed, indeed,  by Tim Harford in “Adapt”)?  In the light of these questions, here are some arguments that deserve thought:

  • There’s a useful comparison between the science of human values (the project espoused by Harris), and a science of diets (what we should eat, in order to enjoy good health).  In both cases, we’re currently far from having all the facts.  And in both cases, there are frequently several right answers.  But not all diets are equally good.  Similarly, not all cultures are equally good.  And what makes one diet better than another will be determined by facts about the physical world – such as the likely effects (direct and indirect) of different kinds of fats and proteins and sugars and vitamins on our bodies and minds.  While people still legitimately disagree about diets, that’s not a reason to say that science can never answer such questions.  Likewise, present-day disagreements about specific causes of happiness, mental flourishing, and general sentient well-being, do not mean these causes fail to exist, or that we can never know them.
  • Likewise with the science of economics.  We’re still far from having a complete understanding of how different monetary and financial policies impact the long-term health of the economy.  But that doesn’t mean we should throw up our hands and stop searching for insight about likely cause and effect.  The discipline of economics, imperfect though it is, survives in an as-yet-incomplete state.  The same goes for political science too.  And, likewise, for the science of the moral landscape.
  • Attempts to reserve some special area of “moral insight” for religion are indefensible.  As Harris says, “How is it that most Jews, Christians, and Muslims are opposed to slavery? You don’t get this moral insight from scripture, because the God of Abraham expects us to keep slaves. Consequently, even religious fundamentalists draw many of their moral positions from a wider conversation about human values that is not, in principle, religious.”  (I especially recommend Harris’s excoriating demolition of surprisingly spurious arguments given by Francis Collins in his surprisingly widely respected book “The Language of God: A Scientist Presents Evidence for Belief”.)

Mindsight: The New Science of Personal Transformation, by Daniel Siegel

The next book on my list serves as a vivid practical illustration of the kind of scientifically-informed insight that Harris talks about – new insight about connections between the brain and mental well-being.  Mindsight: The New Science of Personal Transformation contains numerous case histories of people who:

  • Started off lacking one or more elements of mental well-being
  • Became a patient of the author, Dr Daniel Siegel – a Harvard-trained physician
  • Followed one or other program of mindfulness – awareness and monitoring of the patterns of energy and information flowing in the brain
  • Became more integrated and fulfilled as a result.

To quote from the book’s website:

“Mindsight” [is] the potent skill that is the basis for both emotional and social intelligence. Mindsight allows you to make positive changes in your brain–and in your life.

  • Is there a memory that torments you, or an irrational fear you can’t shake?
  • Do you sometimes become unreasonably angry or upset and find it hard to calm down?
  • Do you ever wonder why you can’t stop behaving the way you do, no matter how hard you try?
  • Are you and your child (or parent, partner, or boss) locked in a seemingly inevitable pattern of conflict?

What if you could escape traps like these and live a fuller, richer, happier life?  This isn’t mere speculation but the result of twenty-five years of careful hands-on clinical work by Daniel J. Siegel, M.D… one of the revolutionary global innovators in the integration of brain science into the practice of psychotherapy. Using case histories from his practice, he shows how, by following the proper steps, nearly everyone can learn how to focus their attention on the internal world of the mind in a way that will literally change the wiring and architecture of their brain.

Siegel is, of course, aware that drugs can often play a role in addressing mental issues.  However, his preference in many cases is for patients to learn and practice various skills in mental introspection.  His belief – which he backs up by reference to contemporary scientific findings – is that practices such as meditation can change the physical structure of brain in significant ways.  (And there are times when it can relieve recurring back pain too, as in one case history covered.)

Siegel defines the mind as “an embodied and relational process that regulates the flow of energy and information”.  He goes on to say:

So how would you regulate the mind?  By developing the ability to see mental activity with more clarity and then modify it with more effectiveness… there’s something about being able to see and influence your internal world that creates more health.

Out of the many books on psychotherapy that I’ve read over the years, this is one of the very best.  The case studies are described in sufficient depth to make them absorbing.  They’re varied, as well as unpredictable.  The neuroscience in the book is no doubt simplified at times, but gels well with what I’ve picked up elsewhere.  And the repeated emphasis on “integration” provides a powerful unifying theme:

[Integration is] a process by which separate elements are linked together into a working whole…  For example, integration is at the heart of how we connect to one another in healthy ways, honoring one another’s differences while keeping our lines of communication wide open. Linking separate entities to one another—integration—is also important for releasing the creativity that emerges when the left and right sides of the brain are functioning together.

Integration enables us to be flexible and free; the lack of such connections promotes a life that is either rigid or chaotic, stuck and dull on the one hand or explosive and unpredictable on the other. With the connecting freedom of integration comes a sense of vitality and the ease of well-being. Without integration we can become imprisoned in behavioral ruts—anxiety and depression, greed, obsession, and addiction.

By acquiring mindsight skills, we can alter the way the mind functions and move our lives toward integration, away from these extremes of chaos or rigidity. With mindsight we are able to focus our mind in ways that literally integrate the brain and move it toward resilience and health.

The sections in the book on meditation are particularly interesting.  As Siegel has become aware, the techniques he recommends have considerable alignment with venerable practices from various eastern traditions – such as the practice of “mindfulness”.  However, the attraction of these techniques isn’t that they are venerable.  It is that there’s a credible scientific explanation of why they work – an explanation that is bolstered by contemporary clinical experience.

Good Strategy Bad Strategy: The Difference and Why It Matters, by Richard Rumelt

From a great book on psychotherapy, let me finish by turning to a great book on strategy – perhaps the best book on strategy that I’ve ever read: Good Strategy Bad Strategy: The Difference and Why It Matters.  The author, Richard Rumelt, Professor of Business and Society at UCLA Anderson School of Management, is a veteran analyst of strategy, who gained his first degree as long ago as 1963 (in Electrical Engineering from the University of California, Berkeley).  He speaks with an accumulated lifetime of wisdom, having observed countless incidents of both “bad strategy” and “good strategy” over five decades of active participation in industry.

“Strategy” is the word which companies often use when justifying their longer-term actions.  They do various things, they say, in pursuit of their strategic objectives.  Here, “strategy” goes beyond “business case”.  Strategy is a reason for choosing between different possible business cases – and can provide reasons for undertaking projects even in the absence of a strong business case.  By the way, it’s not just companies that talk about strategy.  Countries can have them too, as well as departments within governments.  And the same applies to individuals: someone’s personal strategy can be an explicit reason for them choosing between different possible courses of action.

It’s therefore a far from ideal situation that much of what people think of as a strategy is instead, in Rumelt’s words, “fluff” or “wishful thinking”:

It’s easy to tell a bad [strategy] from a good one. A bad one is full of fluff: fancy language covering up the lack of content. Enron’s so-called strategy was littered with meaningless buzzwords explaining its aim to evolve to a state of “sophisticated value extraction”. But in reality its chief strategies could be summed up as having an electronic trading platform, being an over-the-counter broker and acting as an information provider. These are not strategies, they are just names, like butcher, baker and candlestick maker…

Bad strategy is long on goals and short on policy or action.  It assumes that goals are all you need.  It puts forward strategic objectives that are incoherent and, sometimes, totally impractical.  It uses high-sounding words and phrases to hide these failings…

The core of [good] strategy work is always the same: discovering the critical factors in a situation and designing a way of coordinating and focusing actions to deal with those factors…

Bad strategy tends to skip over pesky details such as problems.  It ignores the power of choice and focus, trying instead to accommodate a multitude of conflicting demands and interests.  Like a quarterback whose only advice to teammates is “Let’s win”, bad strategy covers up its failure to guide by embracing the language of broad goals, ambition, vision, and values.  Each of these elements is, of course, an important part of human life.  But, by themselves, they are not substitutes for the hard work of strategy…

If you fail to identify and analyse the obstacles, you don’t have a strategy.  Instead, you have either a stretch goal, a budget, or a list of things you wish would happen.

The mention of a specific company above – Enron – is an example of a striking pattern Rumelt follows throughout his book: he names guilty parties.  Other “guilty parties” identified in the midst of fascinating narratives include CEOs of Lehman Brothers, International Harvester, Ford Motor Company, DEC, Telecom Italia, and metal box manufacturer Crown Cork & Seal.

Individuals that are highlighted, in contrast, as examples of good strategy include titans from military history – General Norman Schwarzkopf, Admiral Nelson, Hannibal, and Hebrew shepherd boy David (in his confrontation with Goliath) – as well as industry figures such as Sam Walton, Steve Jobs, Intel’s Andy Grove, IBM’s Lou Gerstner, and a range of senior managers at Cisco.  The tales recounted are in many ways already well known, but in each case Rumelt draws out surprising insight.  (Rumelt’s extended account of Hannibal’s victory over the Roman army at Cannae in 216 BC indicates many unexpected implications.)

Why do so many companies, government departments, and individuals have “bad strategy”?  Rumelt identifies four underlying reasons:

  • A psychological unwillingness or inability to make choices (this can be linked with an organisation being too decentralised)
  • A growing tide of “template style” strategic planning, which gives too much attention to vision, mission, and values, rather than to hard analysis of a company’s situation
  • An over-emphasis on charismatic qualities in leaders
  • The superficially appealing “positive thinking” movement.

Rumelt’s treatment of “positive thinking” is particularly illuminating – especially for a reader like me who harbours many sympathies for the idea that it’s important to maintain a positive, upbeat attitude.  Rumelt traces the evolution of this idea over more than a century:

This fascination with positive thinking, and its deep connection to inspirational and spiritual thought, was invented around 150 years ago in New England as a mutation of Protestant Christian individualism…

The amazing thing about [the ideology of positive thinking] is that it is always presented as if it were new!  And no matter how many times the same ideas are repeated, they are received by many listeners with fresh nods of affirmation.  These ritual recitations obviously tap into a deep human capacity to believe that intensely focused desire is magically rewarded…

I do not know whether meditation and other inward journeys perfect the human soul.  But I do know that believing … that by thinking only of success you can become a success, is a form of psychosis and cannot be recommended as an approach to management or strategy.  All [good] analysis starts with the consideration of what might happen, including unwelcome events.  I would not care to fly in an aircraft designed by people who focused only on an image of a flying machine and never considered modes of failure…

The doctrine that one can impose one’s visions and desires on the world by thought alone retains a powerful appeal to many people.  Its acceptance displaces critical thinking and good strategy.

As well as pointing out flaws in bad strategy, Rumelt provides wide-ranging clear advice on what good strategy contains:

A good strategy works by harnessing power and applying it where it will have the greatest effect.  In the short term, this may mean attacking a problem or rival with adroit combinations of policy, actions, and resources.  In the longer term, it may involve cleverly using policies or resource commitments to develop capabilities that will be of value in future contests.  In either case, a “good strategy” is an approach that magnifies the effectiveness of actions by finding and using sources of power…

Strategic leverage arises from a mixture of anticipation, insight into what is most pivotal or critical in a situation, and making a concentrated application of effort…

A much more effective way to compete is the discovery of hidden power in the situation.

Later chapters amplify these ideas by providing many illuminating suggestions for how to build an effective strategy.  Topics covered include proximate objectives, chain-link systems, design, focus (“pivot points”), competitive advantage, anticipation and exploitation of industry trends (“dynamics”), and inertia and entropy.  Here are just a few illustrative snippets from these later chapters:

In building sustained strategic advantage, talented leaders seek to create constellations of activities that are chain-linked.  This adds extra effectiveness to the strategy and makes competitive imitation difficult…

Many effective strategies are more designs than decisions – are more constructed than chosen…

When faced with a corporate success story, many people ask, “How much of the success was skill and how much was luck?”  The saga of Cisco Systems vividly illustrates that the mix of forces is richer than just skill and luck.  Absent the powerful waves of change sweeping through computing and telecommunications, Cisco would have remained a small niche player.  Cisco’s managers and technologists were very skillful at identifying and exploiting these waves of change…

An organisation’s greatest challenge may not be external threats or opportunities, but instead the effects of entropy and inertia.  In such a situation, organisational renewal becomes a priority.  Transforming a complex organisation is an intensely strategic challenge.  Leaders must diagnose the causes and effects of entropy and inertia, create a sensible guiding policy for effecting change, and design a set of coherent actions designed to alter routines, culture, and the structure of power and influence.

You can read more on the book’s website.

The book is addressed to people with responsibility for strategy within organisations.  However, most of the advice is highly applicable to individuals too.  Are the big personal goals we set ourselves merely “wishful thinking”, or are they grounded in a real analysis of our own personal situation?  Do they properly take account of our personal trends, inertia, entropy, and sources of competitive power?

28 December 2010

Some suggested books for year-end reading

Looking for suggestions on books to read, perhaps over the year-end period of reflection and resolution for renewal?

Here are my comments on five books I’ve finished over the last few months, each of which has given me a lot to think about.

Switch: How to change things when change is hard – by Chip & Dan Heath

I had two reasons for expecting I would like this book, and I was not disappointed.  The book is full of advice that seems highly practical – advice that can be used to overcome all kinds of obstacles that people encounter when trying to change something for the better.  The book helpfully lists some of these obstacles in a summary chapter near its end.  They include:

  • “People here don’t see the need for change”
  • “People resist my idea because they say, ‘We’ve never done it like that before'”
  • “We should be doing something, but we’re getting bogged down in analysis”
  • “The environment has shifted, and we need to overcome our old patterns of behaviour”
  • “People here simply aren’t motivated to change”
  • “People here keep saying ‘It will never work'”
  • “I know what I should be doing, but I’m not doing it”
  • “I’ll change tomorrow”…

Each chapter has profound insights.  I particularly liked the insight that, from the right perspective, the steps to create a solution are often simpler than the problem itself.  This is a pleasant antidote to the oft-repeated assertion that solutions need to be more profound, more complex, or more sophisticated than the problems they address.  On the contrary, change efforts frequently fail because they focus on the wrong part of the big picture.  You can try to influence either the “rider”, the “elephant”, or the “path” down which the elephant moves.  Spend your time trying to influence the wrong part of this combo, and you can waste a great deal of energy.  But get the analysis right, and even people who appear to hate change can embrace a significant transformation.  It all depends on the circumstance.

The book offers nine practical steps – three each for the three different parts of this model:

  • Direct the rider: Find the bright spots; Script the critical moves; Point to the destination
  • Motivate the elephant: Find the feeling; Shrink the change; Grow your people
  • Shape the path: Tweak the environment; Build habits; Rally the herd.

These steps may sound trite, but in each case the simple words summarise a series of inspirational examples of real-world change.

The happiness advantage: The seven principles of positive psychology that fuel success and performance at work – by Shawn Achor

“The happiness advantage” shares with “Switch” the fact that it is rooted in the important emerging discipline of positive psychology.  But whereas “Switch” addresses the particular area of change management, “The happiness advantage” has a broader sweep.  It seeks to show how a range of recent findings from positive psychology can be usefully applied in a work setting, to boost productivity and performance.  The author, Shawn Achor, describes many of these findings in the context of the 10 years he spent at Harvard.  These findings include:

  • Rather than the model in which people work hard and then achieve success and then become happy, the causation goes the other way round: people with a happy outlook are more creative, more resilient, and more productive, are able to work both harder and smarter, and are therefore more likely to achieve success in their work (Achor compares this reversal of causation to the “Copernican revolution” which saw the sun as the centre of the solar system, rather than the earth)
  • Our character (including our degree of predisposition to a happy outlook) is not fixed, but can be changed by activity – this is an example of neural plasticity
  • “The Tetris effect”: once you train your brain to spot positive developments (things that merit genuine praise), that attitude increasingly becomes second nature, with lots of attendant benefits
  • Rather than a vibrant social support network being a distraction from our core activities, it can provide us with the enthusiasm and the community to make greater progress
  • “Falling up”: the right mental attitude can gain lots of advantage from creative responses to situations of short-term failure
  • “The Zorro circle”: rather than focussing on large changes, which could take a long time to accomplish, there’s great merit in restricting attention to a short period of time (perhaps one hour, or perhaps just five minutes), and to a small incremental improvement on the status quo.  Small improvements can accumulate a momentum of their own, and lead on to big wins!
  • Will power is limited – and is easily drained.  So, follow the “20 second rule”: take the time to rearrange your environment – such as your desk, or your office – so that the behaviour you’d like to happen is the easiest (“the default”).  When you’re running on auto-pilot, anything that requires a detour of more than 20 seconds is much less likely to happen.  (Achor gives the example of taking the batteries out of his TV remote control, to make it less likely he would sink into his sofa on returning home and inadvertently watch TV, rather than practice the guitar as he planned.  And – you guessed it – he made sure the guitar was within easy reach.)

You might worry that this is “just another book about the power of positive thinking”.  However, I see it as a definite step beyond that genre.  This is not a book that seeks to paint on a happy face, or to pretend that problems don’t exist.  As Achor says, “Happiness is not the belief that we don’t need to change.  It is the realization that we can”.

Nonsense on stilts: how to tell science from bunk – by Massimo Pigliucci

Many daft, dangerous ideas are couched in language that sounds scientific.  Being able to distinguish good science from “pseudoscience” is sometimes called the search for a “demarcation principle”.

The author of this book, evolutionary biologist Massimo Pigliucci, has strong views about the importance of distinguishing science from pseudoscience.  To set the scene, he gives disturbing examples such as people who use scientific-sounding language to deny the connection between HIV and AIDS (and who often advocate horrific, bizarre treatments for AIDS), or who frighten parents away from vaccinating their children by quoting spurious statistics about links between vaccination and autism.  This makes it clear that the subject is far from being an academic one, just for armchair philosophising.  On the other hand, attempts by philosophers of science such as Karl Popper to identify a clear, watertight demarcation principle all seem to fail.  Science is too varied an enterprise to be capable of a simple definition.  As a result, it can take lots of effort to distinguish good science from bad science.  Nevertheless, this effort is worth it.  And this book provides a sweeping, up-to-date survey of the issues that arise.

The book brought me back to my own postgraduate studies from 1982-1986.  My research at that time covered the philosophy of mind, the characterisation of pseudo-science, creationism vs. Darwinism, and the shocking implications of quantum mechanics.  All four of these areas were covered in this book – and more besides.

It’s a book with many opinions.  I think it gets them about 85% right.  I particularly liked:

  • His careful analysis of why “Intelligent Design” is bad science
  • His emphasis on how pseudoscience produces no new predictions, and is intellectually infertile
  • His explanation of the problems of parapsychology (studies of extrasensory perception)
  • The challenges he lays down to various fields which appear grounded in mainstream science, but which are risking divergence away from scientific principles – fields such as superstring theory and SETI (the search for extraterrestrial intelligence).

Along the way, Pigliucci shares lots of fascinating anecdotes about the history of science, and about the history of philosophy of science.  He’s a great story-teller.

The master switch: the rise and fall of information empires – by Tim Wu

Whereas “Nonsense on stilts” surveys the history of science, and draws out lessons about the most productive ways to continue to find out deeper truths about the world, “The master switch” surveys many aspects of the modern history of business, and draws out lessons about the most productive ways to organise society so that information can be shared in the most effective way.

The author, Tim Wu, is a professor at Columbia Law School, and (if anything) is an even better story-teller than Pigliucci.  He gives riveting accounts of many of the key episodes in various information businesses, such as those based on the telephone, radio, TV, cinema, cable TV, the personal computer, and the Internet.  Lots of larger-than-life figures stride across the pages.  The accounts fit together as constituents of an over-arching narrative:

  • Control over information technologies is particularly important for the well-being of society
  • There are many arguments in favour of centralised control, which avoids wasteful inefficiencies of competition
  • Equally, there are many arguments in favour of decentralised control, with open access to the various parts of the system
  • Many information industries went through one (or more) phases of decentralised control, with numerous innovators working independently, before centralisation took place (or re-emerged)
  • Government regulation sometimes works to protect centralised infrastructure, and sometimes to ensure that adequate competition takes place
  • Opening up an industry to greater competition often introduces a period of relative chaos and increased prices for consumers, before the greater benefits of richer innovation have a chance to emerge (often in unexpected ways)
  • The Internet is by no means the first information industry for which commentators had high, idealistic hopes: similar near-utopian visions also accompanied the emergence of broadcast radio and of cable television
  • A major drawback of centralised control is that too much power is vested in just one place – in what can be called a “master switch” – allowing vested interests to drastically interfere with the flow of information.

AT&T – the company founded by Bell – features prominently in this book, both as a hero, and as a villain.  Wu describes how AT&T suppressed various breakthrough technologies (including magnetic disk recording, usable in answering machines) for many years, out of a fear that they would damage the company’s main business.  Similarly, RCA suppressed FM radio for many years, and also delayed the adoption of electronic television.  Legal action was often a primary means of delaying and frustrating competitors, who lacked such deep pockets.

Wu often highlights ways in which business history could have taken different directions.  The outcome that actually transpired was often a close-run thing, rather than what seemed most likely at the time.  This emphasises the contingent nature of much of history: events were not inevitable.  (I know this from my own experiences at Symbian.  Recent articles in The Register emphasise how Symbian nearly died at birth, well before powering more than a quarter of a billion smartphones.  Other stories, as yet untold, could emphasise how the eventual relative decline of Symbian was by no means a foregone conclusion either.)

But the biggest implications Wu highlights are when the stories come up to date, in what he sees as a huge conflict between powers that want to control modern information technology resources, and those that prefer greater degrees of openness.  As Wu clarifies, it’s a complex landscape, but Apple’s iPhone approach aims at greater centralised design control, whereas Google’s Android approach aims at enabling a much wider range of connections – connections where many benefits arise, without the need to negotiate and maintain formal partnerships.

Compared to previous information technologies, the Internet has greater elements of decentralisation built into it.  However, the lessons of the previous chapters in “The master switch” are that even this decentralisation is vulnerable to powerful interests seizing control and changing its nature.  That gives greater poignancy to present-day debates over “network neutrality” – a term that was coined by Wu in a paper he wrote in 2002.

Sex at dawn: the prehistoric origins of modern sexuality – by Christopher Ryan and Cacilda Jetha

(Sensitive readers should probably stop reading now…)

In terms of historical sweep, this last book outdoes all the others on my list.  It traces the origins of several modern human characteristics far into prehistory – to the time before agriculture, when humans existed as nomadic hunter-gatherers, with little sense of personal exclusive ownership.

This book reminds me of this oft-told story:

It is said that when the theory of evolution was first announced it was received by the wife of the Canon of Worcester Cathedral with the remark, “Descended from the apes! My dear, we will hope it is not true. But if it is, let us pray that it may not become generally known.”

I’ve read a lot on evolution over the years, and I think the evidence husband and wife authors Christopher Ryan and Cacilda Jetha accumulate chapter after chapter, in “Sex at dawn”, is reasonably convincing – even though elements of present day “polite society” may well prefer this evidence not to become “generally known”.  The authors tell a story with many jaw-dropping episodes.

Among other things, the book systematically challenges the famous phrase from Thomas Hobbes in Leviathan that, absent a government, people would lead lives that were “solitary, poor, nasty, brutish, and short”.  On the contrary, the book marshals evidence, direct and indirect, that pre-agricultural people could enjoy relatively long lives, with ample food, and a strong sense of community.  Key to this mode of existence was “fierce sharing”, in which everyone felt a strong obligation to share food within the group … and not only food.  The X-rated claim in the book is that the sharing extended to “parallel multi-male, multi-female sexual relationships”, which bolstered powerful community identities.  Monogamy is, therefore, far from being exclusively “natural”.  Evidence in support of this conclusion includes:

  • Comparisons to behaviour in bonobos and chimps – the apes which are our closest evolutionary cousins
  • The practice in several contemporary nomadic tribes, in which children are viewed as having many fathers
  • Various human anatomical features, copulatory behaviour, aspects of sperm wars, etc.

In this analysis, human sexual nature developed under one set of circumstances for several million years, until dramatic changes in relatively recent times with the advent of agriculture, cities, and widespread exclusive ownership.  Social philosophies (including religions) have sought to change the norms of behaviour, with mixed success.

I’ll leave the last words to Ryan and Jetha, from their online FAQ:

We’re not recommending anything other than knowledge, introspection, and honesty. In fact, as we say in the book, we’re not really sure what to do with this information ourselves.

15 October 2010

Radically improving nature

Filed under: death, evolution, UKH+ — David Wood @ 10:50 pm

The reasonable man adapts himself to the world; the unreasonable one persists in trying to adapt the world to himself. Therefore all progress depends on the unreasonable man – George Bernard Shaw

Changing the world is ambitious.  Changing nature is even more ambitious.

After all, nature is the output of countless generations of refinement by natural selection.  Evolution has found many wonderful solutions.  But natural selection generally only finds local optima.  As I’ve written on a previous occasion:

In places where an intelligent (e.g. human) designer would “go back to the drawing board” and introduce a new design template, biological evolution has been constrained to keep working with the materials that are already in play.  Biological evolution lacks true foresight, and cannot do what human designers would call “re-factoring an existing design”.

And as I covered in my review “The human mind as a flawed creation of nature” of the book by Gary Marcus, “Kluge – the haphazard construction of the human mind”:

The basic claim of the book is that many aspects of the human mind operate in clumsy and suboptimal ways – ways which betray the haphazard and often flawed evolutionary history of the mind….

The framework is, to me, both convincing and illuminating.  It provides a battery of evidence relevant to what might be called “The Nature Delusion” – the pervasive yet often unspoken belief that things crafted by nature are inevitably optimal and incapable of serious improvement.

For these reasons, I applaud thoughtful attempts to improve human nature – whether by education, meditation, diet, smart drugs, silicon co-processors for our biological brains, genetic re-engineering, or other means.  With sufficient overall understanding, we can use the best outputs of human thought to create even better humans.

But what about the rest of nature?  If we can consider creating better humans, what about creating better animals? If the technology of the near future can add 50 points, or more, to our human IQs, could we consider applying similar technological enhancements to dolphins, dogs, parrots, and so on?

There are various motivations for considering this question.  First, there are people who deeply love their pets, and who might wish to enhance the capabilities of their pets, in a manner akin to enhancing the capabilities of their children.  Someone might wonder: if my dog could speak to me, what would it say?

In a way, the experiments to teach chimps sign language have already taken steps in this direction.  (Some chimps that learned sign language seem in turn to have taught elements of it to their own children.)

A different motivation for considering changes to animal nature is the sheer amount of horrific pain and trauma throughout the animal kingdom.  Truly, this is “nature, red in tooth and claw”.

In his essay “The end of suffering“, British philosopher David Pearce quotes Richard Dawkins from the 1995 book River Out of Eden: A Darwinian View of Life:

During the minute it takes me to compose this sentence, thousands of animals are being eaten alive; others are running for their lives, whimpering with fear; others are being slowly devoured from within by rasping parasites; thousands of all kinds are dying from starvation, thirst and disease. It must be so. If there is ever a time of plenty, this very fact will automatically lead to an increase in population until the natural state of starvation and misery is restored.

But Pearce takes issue with Dawkins:

“It must be so.” Is Richard Dawkins right? Are the cruelties of the food chain an inescapable fact of Nature: no more changeable than, say, Planck’s constant or the Second Law of Thermodynamics? The Transhumanist Declaration expresses our commitment to the “well-being of all sentience”. Yet do these words express merely a pious hope – or an engineering challenge?

My own recent work involves exploring some of the practical steps entailed by compassionate ecosystem redesign – cross-species immunocontraception, genomic rewrites, cultured meat, neurochips, global surveillance and wildlife tracking technologies, and the use of nanorobots for marine ecosystems. Until this century, most conceivable interventions to mitigate the horrors of Nature “red in tooth and claw” would plausibly do more harm than good. Rescue a herbivore [“prey”] and a carnivore [“predator”] starves. And if, for example, we rescue wild elephants dying from hunger or thirst, the resultant population explosion would lead to habitat degradation, Malthusian catastrophe and thus even greater misery. Certainly, the computational power needed to micromanage the ecosystem of a medium-sized wildlife park would be huge by today’s standards. But recall that Nature supports only half a dozen or so “trophic levels”; and only a handful of “keystone predators” in any given habitat. Creating a truly cruelty-free living world may cost several trillion dollars or more. But the problem is computationally tractable within this century – if we acknowledge that wild animal suffering matters.

David’s fan page on Facebook boldly includes the forecast:

“I predict we will abolish suffering throughout the living world”

Unreasonable? Probably. Scientifically credible? Perhaps. Noble? Definitely. Radical? This is about as radical as it gets. Thoughtful? Read David’s own writings and make up your own mind.

Alternatively, if you’re in or nearby London, come along to this month’s UKH+ meeting (tomorrow, Saturday 16th October), where David will be the main speaker.  He wrote the following words to introduce what he’ll be talking about:

The Transhumanist Declaration advocates “the well-being of all sentience, including humans, non-human animals, and any future artificial intellects, modified life forms, or other intelligences to which technological and scientific advance may give rise.” Yet is “the well-being of all sentience” serious science – or just utopian dreaming? What does such a commitment entail? On what kind of realistic timeframe might we command enough computational power to police an entire ecosystem?

In this talk, the speaker wants to review recent progress in understanding the neurobiology of pleasure, pain and our core emotions. Can mastery of our reward circuitry ever deliver socially responsible, intelligent bliss rather than crude wireheading? He also wants to examine and respond to criticisms of the abolitionist project that have been levelled over the past decade – and set out the biggest challenges, as he sees them, to the prospect of a totally cruelty-free world.

19 September 2010

Our own entrenched enemies of reason

Filed under: books, deception, evolution, intelligence, irrationality, psychology — David Wood @ 3:39 pm

I’m a pretty normal, observant guy.  If there was something as large as an elephant in that room, then I would have seen it – sure as eggs are eggs.  I don’t miss something as large as that.  So someone who says, afterwards, that there was an elephant there, must have some kind of screw loose, or some kind of twisted ulterior motivation.  Gosh, what kind of person are they?

Here’s another version of the same, faulty, line of reasoning:

I’m a pretty good police detective.  Over the years, I’ve developed the knack of knowing when people are telling the truth.  That’s what my experience has taught me.  I know when a confession is for real.  I don’t get things like that wrong.  So someone who says, afterwards, that the confession was forced, or that the criminal should get off on a technicality, must have some kind of screw loose, or some kind of twisted ulterior motivation.  Gosh, what kind of person are they?

And another:

I’m basically a moral person.  I don’t knowingly cause serious harm to my fellow human beings.  I don’t get things as badly wrong as that.  I’m not that kind of person.  So if undeniable evidence subsequently emerges that I really did seriously harm a group of people, well, these people must have deserved it.  They were part of a bad crowd.  I was actually doing society a favour.  Gosh, don’t you know, I’m one of the good guys.

Finally, consider this one:

I’m basically a savvy, intelligent person.  I don’t make major errors in reasoning.  If I take the time to investigate a religion and believe in it, I must be right.  All that investment of time and belief can’t have been wrong.  Perish the thought.  If that religion makes a prophecy – such as the end of the world on a certain date – then I must be right to believe it.  If the world subsequently appears not to have ended on that date, then it must have been our faith, and our actions, that saved the world after all.  Or maybe the world ended in an invisible, but more important way.  The kingdom of heaven has been established within. Either way, how right we were!

It can sometimes be fun to observe the self-delusions of the over-confident.  Psychologists talk about “cognitive dissonance”, when someone’s deeply held beliefs appear to be contradicted by straightforward evidence.  That person is forced to hold two incompatible viewpoints in mind at the same time: I deeply believe X, but I seem to observe not-X.  Most people are troubled by this kind of dissonance.  It’s psychologically uncomfortable.  And because it can be hard for them to give up their underlying self-belief that “If I deeply believe X, I must have good reasons to do so”, it can lead them to jump through outlandish hoops, and make illogical leaps, in order to deny the straightforward evidence.  For them, rather than “seeing is believing”, the saying becomes inverted: “believing is seeing”.

As I said, it can be fun to see the daft things people have done to resolve their cognitive dissonance in favour of maintaining their belief in their own essential soundness, morality, judgement, and/or reasoning.  It can be especially fun to observe the mental gymnastics of people with fundamentalist religious and/or political faith, who refuse to accept plain facts that contradict their certainty.  The same goes for believers in alien abduction, for fan boys of particular mobile operating systems, and for lots more besides.

But this can also be a deadly serious topic:

  • It can result in wrongful imprisonments, with the prosecutors unwilling to face up to the idea that their over-confidence was misplaced.  As a result, people spend many years of their lives unjustly incarcerated.
  • It can result in families being shattered under the pressures of false “repressed memories” of childhood abuse, seemingly “recovered” by hypnotists and subsequently passionately believed by the apparent victims.
  • It can split up previously happy couples, who end up being besotted, not with each other, but with dreadful ideas about each other (even though “there’s always two sides to a story”).
  • Perhaps worst of all, it can result in generations-long feuds and wars – such as the disastrous entrenched enmity of the Middle East – with each side staunchly holding onto the view “we’re the good guys, and anything we did to these other guys was justified”.

Above, I’ve retold some of the thoughts that occurred to me as I recently listened to the book “Mistakes Were Made (But Not by Me): Why We Justify Foolish Beliefs, Bad Decisions, and Hurtful Acts”, by veteran social psychologists Carol Tavris and Elliot Aronson.  (See here for this book’s website.)  At first, I found the book to be a very pleasant intellectual voyage.  It described, time and again, experimental research that should undermine anyone’s over-confidence about their abilities to observe, remember, and reason.  (I’ll come back to that research in a moment).  It reviewed real-life examples of cognitive dissonance – both personal examples and well-known historical examples.  So far, so good.  But later chapters made me more and more serious – and, frankly, more and more angry – as they explored horrific examples of miscarriages of justice (the miscarriage being subsequently demonstrated by the likes of DNA evidence), family breakups, and escalating conflicts and internecine violence.  All of this stemmed from faulty reasoning, brought on by self-justification (I’m not the kind of person who could make that kind of mistake) and by over-confidence in our own thinking skills.

Some of the same ground is covered in another recent book, “The invisible gorilla – and other ways our intuition deceives us”, by Christopher Chabris and Daniel Simons.  (See here for the website accompanying this book.)  The gorilla in the title refers to the celebrated experiment where viewers are asked to concentrate on one set of activity – counting the number of passes made by a group of basketball players – and often totally fail to notice someone in a gorilla suit wandering through the crowd of players.  Gorilla?  What gorilla?  Don’t be stupid!  If there had been a gorilla there, I would have seen it, sure as eggs are eggs.

Chapter by chapter, “The invisible gorilla” reviews evidence that we tend to be over-confident in our own abilities to observe, remember, and reason.  The chapters cover:

  • Our bias to think we would surely observe anything large and important that happened
  • Our bias to think our memories are reliable
  • Our bias to think that people who express themselves confidently are more likely to be trustworthy
  • Our bias to think that we would give equal weight to evidence that contradicts our beliefs, as to evidence that supports our beliefs (the reality is that we search high and low for confirming evidence, and quickly jump to reasons to justify ignoring disconfirming evidence)
  • Our bias to think that correlation implies causation: that if event A is often followed by event B, then A must be the cause of B
  • Our bias to think there are quick fixes that will allow significant improvements in our thinking power – such as playing classical music to babies (an effect that has been systematically discredited)
  • Our bias to think we can do many things simultaneously (“multi-task”) without any individual task being affected detrimentally.

These biases were probably all useful to Homo sapiens at an early phase of our evolutionary history.  But in the complex society of the present day, they do us more harm than good.

Added together, the two books provide sobering material about our cognitive biases, and about the damage that all too often follows from us being unaware of these biases.

“Mistakes were made (but not by me)” adds the further insight that we tend to descend gradually into a state of gross over-confidence.  The book frequently refers to the metaphor of a pyramid.  Before we make a strong commitment, we are often open-minded.  We could go in several different directions.  But once we start down any of the faces of the pyramid, it becomes harder and harder to retract – and we move further away from people who, initially, were in the very same undecided state as us.  The more we follow a course of action, the greater our commitment to defend all the time and energy we’ve invested down that path.  I can’t have taken a wrong decision, because if I had, I would have wasted all that time and energy, and that’s not the kind of person I am. So we invest even more time and energy, walking yet further down the pyramid of over-confidence, in order to maintain our own self-image.

At root, what’s going wrong here is what psychologists call self-justification.  Once upon a time, the word pride would have been used.  We can’t bear to realise that our own self-image is at fault, so we continue to take actions – often harmful actions – in support of our self-image.

The final chapters of both books offer hope.  They give examples of people who are able to break out of this spiral of self-justification.  It isn’t easy.

An important conclusion is that we should put greater focus on educating people about cognitive biases.  Knowing about a cognitive bias doesn’t make us immune to it, but it does help – especially when we are still only a few rungs down the face of the pyramid.  As stated in the conclusion of “The invisible gorilla”:

One of our messages in this book is indeed negative: Be wary of your intuitions, especially intuitions about how your own mind works.  Our mental systems for rapid cognition excel at solving the problems they evolved to solve, but our cultures, societies, and technologies today are much more complex than those of our ancestors.  In many cases, intuition is poorly adapted to solving problems in the modern world.  Think twice before you decide to trust intuition over rational analysis, especially in important matters, and watch out for people who tell you intuition can be a panacea for decision-making ills…

But we also have an affirmative message to leave you with.  You can make better decisions, and maybe even get a better life, if you do your best to look for the invisible gorillas in the world around you…  There may be important things right in front of you that you aren’t noticing due to the illusion of attention.  Now that you know about this illusion, you’ll be less apt to assume you’re seeing everything there is to see.  You may think you remember some things much better than you really do, because of the illusion of memory.  Now that you understand this illusion, you’ll trust your own memories, and those of others, a bit less, and you’ll try to corroborate your memory in important situations.  You’ll recognise that the confidence people express often reflects their personalities rather than their knowledge, memory, or abilities…  You’ll be skeptical of claims that simple tricks can unleash the untapped potential in your mind, but you’ll be aware that you can develop phenomenal levels of expertise if you study and practice the right way.

Similarly, we should also take more care to explain widely the benefits of the scientific approach, which searches for disconfirming evidence as much as it searches for confirming evidence.

That’s the pro-reason approach to encouraging better reasoning.  But reason, by itself, often isn’t enough.  If we are going to face up to the fact that we’ve made grave errors of judgement, which have caused pain, injustice, and sometimes even death and destruction, we frequently need powerful emotional support.  To enable us to admit to ourselves that we’ve made major mistakes, it greatly helps if we can find another image of ourselves, which sees us as making better contributions in the future.  That’s the pro-hope approach to encouraging better reasoning.  The two books have examples of each approach.  Both books are well worth reading.  At the very least, you may get some new insight as to why discussions on Internet forums often descend into people seemingly talking past each other, or why formerly friendly colleagues can get stuck in an unhelpful rut of deeply disliking each other.

11 September 2010

No escape from technology

Filed under: books, evolution, Kurzweil, UKH+ — David Wood @ 1:51 am

We can never escape the bio-technological nexus and get “back to nature” – because we have never lived in nature.

That sentence, from the final chapter of Timothy Taylor’s “The Artificial Ape: How technology changed the course of human evolution“, sums up one of my key takeaways from this fine book.

It’s a book that’s not afraid to criticise giants.  Aspects of Charles Darwin’s thinking are examined and found wanting.  Modern day technology visionary Ray Kurzweil also comes under criticism:

The claims of Ray Kurzweil (that we are approaching a critical moment when biology will be overtaken by artificial constructs) … lack a critical historical – and prehistoric – perspective…

Kurzweil argues that the age of machines is upon us …  and that technology is reaching a point where it can innovate itself, producing ever more complex forms of artificial intelligence.  My argument in this book is that, scary or not, none of this is new.  Not only have we invented technology, from the stone tools to the wheeled wagon, from spectacles to genetic engineering, but that technology, within a framework of some 2 to 3 million years, has, physically and mentally, made us.

Taylor’s book portrays the emergence of humanity as a grand puzzle.  From a narrow evolutionary perspective, humans should not have come into existence.  Our heads are too large. In many cases, they’re too large to pass through the narrow gap in the mother’s pelvis.  Theory suggests, and fossils confirm, that the prehistoric change from walking on all fours to walking upright had the effect of narrowing this gap in the pelvis.  The resulting evolutionary pressures should have resulted in smaller brains.  Yet, after several eons, the brain, instead, became larger and larger.

That’s just the start of the paradox.  The human baby is astonishingly vulnerable.  Worse, it makes its mother increasingly vulnerable too.  How could “survival of the fittest” select this ridiculously unfit outcome?

Of course, a larger brain has survival upsides as well as survival downsides.  It enables greater sociality, and the creation of sophisticated tools, including weapons.  But Taylor marshals evidence that suggests that the first use of tools by pre-humans long pre-dated the growth in head size.  This leads to the suggestion that two tools, in particular, played vital roles in enabling the emergence of the larger brain:

  • The invention of the sling, made from fur, that enabled mothers to carry their infants hands-free
  • The invention of cooking, with fire, that made it easier for nourishment to be quickly obtained from food.

To briefly elaborate the second point: walking upright means the digestive gut becomes compressed.  It becomes shorter.  There’s less time for nourishment to be extracted from food.  Moreover, a larger head increases the requirements for fast delivery of nourishment.  Again, from a narrow evolutionary point of view, the emergence of big-brained humans makes little sense.  But cooking comes to the rescue.  Cooking and the child-carrying sling are two examples of technology that enabled the emergence of humans.

The resulting creatures – us – are weaker in a pure biological sense than our evolutionary forebears.  Without our technological aids, we would fare poorly in any contest of survival with other apes.  It is only the combination of technology-plus-nature that makes us stronger.

We’re used to thinking that the development of tools took place in parallel with increasing pre-human intelligence.  Taylor’s argument is that, in a significant way, the former preceded the latter.  Without the technology, the pre-human brain could not expand.

The book uses this kind of thinking to address various other puzzles:

  • For example, the technology-impoverished natives from the tip of South America whom Darwin met on his voyage of discovery on the Beagle had eyesight that was far better than even the keenest-eyed sailor on the ship.  Technological progress went hand-in-hand with a weakening of biological power.
  • Taylor considers the case of the aborigines of Tasmania, who were technologically backward compared to those of mainland Australia: they lacked all clothing, and apparently could not make fire for themselves.  The archeological record indicates that the Tasmanian aborigines actually lost the use of various technologies over the course of several millennia.  Taylor reaches a different conclusion from popular writer Jared Diamond, who seems to take it for granted that this loss of technology made the aborigines weaker.  Taylor suggests that, in many ways, these aborigines became stronger and fitter, in their given environment, as they abandoned their clothing and their fishing tools.

There are many other examples – but I’ll leave it to you to read the book to find out more.  The book also has some fascinating examples of ancient tools.

I think that Taylor’s modifications of Darwin’s ideas are probably right.  What of his modifications of Kurzweil’s ideas?  Is the technological spurt of the present day really “nothing new”?  Well, yes and no.  I believe Kurzweil is correct to point out that the kinds of changes that are likely to be enabled by technology in the relatively near future – perhaps in the lifetime of many people who are already alive – are qualitatively different from anything that has gone before:

  • Technology might extend our lifespans, not just by a percentage, but by orders of magnitude (perhaps indefinitely)
  • Technology might create artificial intelligences that are orders of magnitude more powerful than any intelligence that has existed on this planet so far.

As I’ve already mentioned in my previous blogpost – which I wrote before starting to read Taylor’s book – Timothy Taylor is the guest speaker at the September meeting of the UK chapter of Humanity+.  People who attend will have the chance to hear more details of these provocative theories, and to query them directly with the author.  There will also be an opportunity to purchase signed copies of his book.  I hope to see some of you there!

I’ll give the last words to Dr Taylor:

Technology, especially the baby-carrying sling, allowed us to push back our biological limits, trading in our physical strength for an increasingly retained infantile early helplessness that allowed our brains to expand, forming themselves under increasingly complex artificial conditions…  In terms of brain growth, the high-water mark was passed some 40,000 years ago.  The pressure on that organ has been off ever since we started outsourcing intelligence in the form of external symbolic storage.  That is now so sophisticated through the new world information networking systems that what will emerge in future may no longer be controlled by our own volition…

[Technology] could also destroy our planet.  But there is no back-to-nature solution.  There never has been for the artificial ape.

29 August 2010

Understanding humans better by understanding evolution better

Filed under: collaboration, deception, evolution, RSA, UKH+ — David Wood @ 5:54 am

Many aspects of human life that at first seem weird and hard to explain can make a lot more sense once you see them from the viewpoint of evolution.

It was Richard Dawkins’ book “The Selfish Gene” which first led me to that conclusion, whilst I was still at university.  After “The Selfish Gene”, I read “Sociobiology: the new synthesis“, by E.O. Wilson, which gave other examples.  I realised it was no longer necessary to refer to concepts such as “innate wickedness” or “original sin” to explain why people often did daft things.  Instead, people do things (in part) because of underlying behavioural patterns which tended to make their ancestors more likely to leave successful offspring.

In short, you can deepen your understanding of humans if you understand evolution.  On the whole, attempts to get humans to change their behaviour will be more likely to succeed if they are grounded in an understanding of the real factors that led humans to tend to behave as they do.

What’s more, you can understand humans better if you understand evolution better.

In a moment, I’ll come to some interesting new ideas about the role played by technology in evolution.  But first, I’ll mention two other ways in which an improved understanding of evolution sheds richer light on the human condition.

1. Evolution often results in sub-optimal solutions

In places where an intelligent (e.g. human) designer would “go back to the drawing board” and introduce a new design template, biological evolution has been constrained to keep working with the materials that are already in play.  Biological evolution lacks true foresight, and cannot do what human designers would call “re-factoring an existing design”.

I’ve written on this subject before, in my review “The human mind as a flawed creation of nature” of the book by Gary Marcus, “Kluge – the haphazard construction of the human mind” – so I won’t say much more about that particular topic right now.  But I can’t resist including a link to a fascinating video in which Richard Dawkins demonstrates the absurdly non-optimal route taken by the laryngeal nerve of the giraffe.  As Dawkins says in the video, this nerve “is a beautiful example of historical legacy, as opposed to design”.  If you haven’t seen this clip before, it’s well worth watching, and thinking about the implications.

2. Evolution can operate at multiple levels

For a full understanding of evolution, you have to realise it can operate at multiple levels:

  • At the level of individual genes
  • At the level of individual organisms
  • At the level of groups of cooperating organisms.

At each level, there are behaviours which exist because they made it more likely for an entity (at that level) to leave descendants.  For example, groups of animals tend to survive as a group, if individuals within that group are willing, from time to time, to sacrifice themselves for the sake of the group.

The notion of group selection is, however, controversial among evolutionary theorists.  Part of the merit of books such as The Selfish Gene was that they showed how altruistic behaviour could be explained, in at least some circumstances, from the point of view of the survival of individual genes.  If individual A sacrifices himself for the sake of individuals B and C within the same group, it may well be that B and C carry many of the same genes as individual A.  This analysis seems to deal with the major theoretical obstacle to the idea of group selection, which is as follows:

  • If individuals A1, A2, A3,… all have an instinct to sacrifice themselves for the sake of their wider group, it may well mean, other things being equal, that this group is initially more resilient than competing groups
  • However, an individual A4 who is individually selfish, within that group, will get the benefit of the success of the group, and the benefit of individual survival
  • So, over time, the group will tend to contain more individuals like the “free-rider” A4, and fewer like A1, A2, and A3
  • Therefore the group will degenerate into selfish behaviour … and this shows that the notion of “group selection” is flawed.

Nevertheless, I’ve been persuaded by writer David Sloan Wilson that the notion of group selection can still apply.  He gives an easy-to-read account of his ideas in his wide-ranging book “Evolution for Everyone: How Darwin’s Theory Can Change the Way We Think About Our Lives“.  In summary:

  • Group selection can apply, provided the group also has mechanisms to reduce free-riding behaviour by individuals
  • For example, people in the group might have strong instincts to condemn and punish people who try to take excess advantage of the generosity of others
  • So long as these mechanisms keep the prevalence of free-riding below a certain threshold, a group can reach a stable situation in which the altruism of the majority continues to benefit the group as a whole.
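To make the threshold idea concrete, here is a minimal toy simulation in Python – my own illustrative sketch, not a model taken from Wilson’s book.  Altruists pay a fixed cost to boost a shared group payoff; free-riders pay nothing; an optional punishment term docks free-riders in proportion to the public good they exploit.  All the parameter values (benefit, cost, punishment) are arbitrary assumptions, chosen only to show the qualitative effect:

```python
import random

def simulate(generations=200, n_groups=20, group_size=50,
             benefit=0.4, cost=0.1, punishment=0.0, p0=0.5, seed=1):
    """Toy model of group selection with (optional) punishment.

    Each individual is True (altruist) or False (free-rider).
    Altruists pay `cost` and contribute to a shared per-member public
    good; free-riders contribute nothing but, if punishment is enabled,
    are docked in proportion to the public good they are exploiting.
    Returns the final fraction of altruists in the population.
    All parameter values are arbitrary, for illustration only.
    """
    rng = random.Random(seed)
    groups = [[rng.random() < p0 for _ in range(group_size)]
              for _ in range(n_groups)]
    for _ in range(generations):
        pool = []  # (fitness, is_altruist) for every individual
        for g in groups:
            share = benefit * sum(g) / len(g)  # public good per member
            for altruist in g:
                fitness = 1.0 + share - (cost if altruist
                                         else punishment * share)
                pool.append((max(fitness, 0.0), altruist))
        # Selection: next generation sampled in proportion to fitness,
        # then re-assorted at random into fresh groups
        weights = [f for f, _ in pool]
        picks = rng.choices([a for _, a in pool], weights=weights,
                            k=n_groups * group_size)
        groups = [picks[i * group_size:(i + 1) * group_size]
                  for i in range(n_groups)]
    return sum(a for g in groups for a in g) / (n_groups * group_size)

print("no punishment   :", simulate(punishment=0.0))  # free-riders take over
print("with punishment :", simulate(punishment=1.5))  # altruism persists
```

With these made-up numbers, the first run drifts towards all free-riders, while the second typically fixes on altruism.  But start the second run with only a small minority of altruists – e.g. simulate(punishment=1.5, p0=0.1) – and the punishment term is too weak to overcome the fixed cost, matching the threshold behaviour described in the list above.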

(To be clear: this kind of altruism generally looks favourably only at others within the same group.  People who are outside your group won’t benefit from it.  An injunction such as “love your neighbour as yourself” applied in practice only to people within your group – not to people outside it.)

To my mind, this makes sense of a great deal of the mental gymnastics that we can observe: people combine elements of surreptitiously trying to benefit themselves (and their own families) whilst seeking to appear to the group as a whole as “good citizens”.  In turn, we are adept at seeing duplicity and hypocrisy in others.  There’s been a long “arms race” in which brains have been selected that are better at playing both sides of this game.

Incidentally, for another book that takes an entertaining and audacious “big picture” view of evolution and group selection, see the barn-storming “The Lucifer Principle: A Scientific Expedition into the Forces of History” by Howard Bloom.

3. The role of technology in evolution

At first sight, technology has little to do with evolution.  Evolution occurred in bygone times, whilst technology is a modern development – right?

Not true. First, evolution is very much a present-day phenomenon (as well as something that has been at work throughout the whole history of life).  Diseases evolve rapidly, under the pressure of different regimes of anti-bacterial cocktails.  And there is evidence that biological evolution still occurs for humans.  A 2009 article in Time magazine was entitled “Darwin Lives! Modern Humans Are Still Evolving“.  Here’s a brief extract:

One study, published in PNAS in 2007 and led by John Hawks, an anthropologist at the University of Wisconsin at Madison, found that some 1,800 human gene variations had become widespread in recent generations because of their modern-day evolutionary benefits. Among those genetic changes, discovered by examining more than 3 million DNA variants in 269 individuals: mutations that allow people to digest milk or resist malaria and others that govern brain development.

Second, technology is itself an ancient phenomenon – including creative use of sticks and stones.  Benefits of very early human use of sticks and stones included fire, weapons, and clothing.  What’s more, the advantages of tool use had a strange side-effect on human genetic evolution: as we became technologically stronger, we also became biologically weaker.  The Time magazine article mentioned above goes on to state the following:

According to anthropologist Peter McAllister, author of “Manthropology: the Science of Inadequate Modern Man“, the contemporary male has evolved, at least physically, into “the sorriest cohort of masculine Homo sapiens to ever walk the planet.” Thanks to genetic differences, an average Neanderthal woman, McAllister notes, could have whupped Arnold Schwarzenegger at his muscular peak in an arm-wrestling match. And prehistoric Australian Aborigines, who typically built up great strength in their joints and muscles through childhood and adolescence, could have easily beat Usain Bolt in a 100-m dash.

Timothy Taylor, Reader in Archaeology at the University of Bradford and editor-in-chief of the Journal of World Prehistory, tackles this same topic in his recent book “The Artificial Ape: How Technology Changed the Course of Human Evolution“.

Amazon.com describes this book as follows:

A breakthrough theory that tools and technology are the real drivers of human evolution.

Although humans are one of the great apes, along with chimpanzees, gorillas, and orangutans, we are remarkably different from them. Unlike our cousins who subsist on raw food, spend their days and nights outdoors, and wear a thick coat of hair, humans are entirely dependent on artificial things, such as clothing, shelter, and the use of tools, and would die in nature without them. Yet, despite our status as the weakest ape, we are the masters of this planet. Given these inherent deficits, how did humans come out on top?

In this fascinating new account of our origins, leading archaeologist Timothy Taylor proposes a new way of thinking about human evolution through our relationship with objects. Drawing on the latest fossil evidence, Taylor argues that at each step of our species’ development, humans made choices that caused us to assume greater control of our evolution. Our appropriation of objects allowed us to walk upright, lose our body hair, and grow significantly larger brains. As we push the frontiers of scientific technology, creating prosthetics, intelligent implants, and artificially modified genes, we continue a process that started in the prehistoric past, when we first began to extend our powers through objects.

Weaving together lively discussions of major discoveries of human skeletons and artifacts with a reexamination of Darwin’s theory of evolution, Taylor takes us on an exciting and challenging journey that begins to answer the fundamental question about our existence: what makes humans unique, and what does that mean for our future?

In an interview in the New Scientist, Timothy Taylor gives more details of his ideas:

Upright female hominins walking the savannah had a real problem: their babies couldn’t cling to them the way a chimp baby could cling to its mother. Carrying an infant would have been the highest drain on energy for a hominin female – higher than lactation. So what did they do? I believe they figured out how to carry their newborns using a loop of animal tissue. Evidence of the slings hasn’t survived, but in the same way that we infer lungs and organs from the bones of fossils that survive, it is from the stone tools that we can infer the bits that don’t last: things made from sinew, wood, leather and grasses…

Once you have slings to carry babies, you have broken a glass ceiling – it doesn’t matter whether the infant is helpless for a day, a month or a year. You can have ever more helpless young and that, as far as I can see, is how encephalisation took place in the genus Homo. We used technology to turn ourselves into kangaroos. Our children are born more and more underdeveloped because they can continue to develop outside the womb – they become an extra-uterine fetus in the sling. This means their heads can continue to grow after birth, solving the smart biped paradox. In that sense technology comes before the ascent to Homo. Our brain expansion only really took off half a million years after the first stone tools. And they continued to develop within an increasingly technological environment…

I’ve ordered Taylor’s book from Amazon and I expect it to be waiting for me at my home in the UK once I return from my current trip in Asia.  I’m also looking forward to hosting a discussion meeting on Saturday 11th Sept under the auspices of Humanity+ UK in London, where Timothy Taylor himself will be the main speaker. People on Facebook can register their interest in this meeting by RSVPing here.  There’s no charge to attend.

Another option to see Timothy Taylor lecture in person – for those able to spare time in the middle of the day on a Thursday (9th Sept) – will be at the RSA.  I expect there will be good discussion at both events, but the session at H+UK is longer (two hours, as opposed to just one at the RSA), and I expect more questions there about matters such as the likely role of technology in radically re-shaping the future development of humans.

Footnote: of course, the fact that evolution guided our ancestors to behave in certain ways is no reason for us to want to continue to behave in these ways.  But understanding the former is, in my view, very useful background knowledge for devising practical measures to change ourselves.

29 August 2009

The human mind as a flawed creation of nature

Filed under: books, evolution, happiness, intelligence, unconscious — David Wood @ 11:38 am

I’m sharing these thoughts after finishing reading Kluge – the haphazard construction of the human mind by NYU Professor of Psychology, Gary Marcus.

I bought this book after seeing it on the recommended reading list for the forthcoming 2009 Singularity Summit.  The quote from Bertrand Russell at the top of chapter 1 gave me warm feelings towards the book as soon as I started reading:

It has been said that man is a rational animal.  All my life I have been searching for evidence which could support this.

A few days later, I’ve finished the book, still with warm feelings.

(Alas, although I’ve started at least 20 books this year, I can only remember two others that I finished – reviewed here and here.  In part, I blame the hard challenges of my work life this year for putting unusual stress and strain on my reading habits.  In part, I blame the ease-of-distraction of Twitter, for cutting into time that I would previously have spent on reading.  Anyway, it’s a sign of how readable Kluge is, that I’ve made it all the way to the end so quickly.)

I first knew the word “Kluge” as “Kludge”, a term my software engineering colleagues at Psion often used.  This book explores the history of the term, as well as its different spellings.  The definition given is as follows:

Kluge – noun, pronounced klooj (engineering): a solution that is clumsy or inelegant yet surprisingly effective.

Despite their surface effectiveness, kluges have many limitations in practice.  Engineers who have sufficient time prefer to avoid kluges, and instead to design solutions that work well under a wider range of circumstances.
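To make the engineering sense of the word concrete, here’s a minimal sketch in Python – my own hypothetical example, not anything from the book.  It contrasts a klugey pluraliser, which patches each awkward case individually as it is discovered, with a cleaner design that separates the general rule from an explicit table of exceptions:

    # A purely hypothetical illustration of a kluge: clumsy and inelegant,
    # yet surprisingly effective, until an unanticipated input arrives.

    def pluralise_kluge(count, word):
        # Kluge: patch each awkward case individually as it is discovered.
        if word == "child":
            return f"{count} children" if count != 1 else "1 child"
        if word == "box":
            return f"{count} boxes" if count != 1 else "1 box"
        return f"{count} {word}s" if count != 1 else f"1 {word}"

    # Cleaner design: one general rule plus an explicit table of exceptions,
    # so new cases extend the data rather than the branching logic.
    IRREGULAR_PLURALS = {"child": "children", "person": "people"}

    def pluralise_clean(count, word):
        if count == 1:
            return f"1 {word}"
        plural = IRREGULAR_PLURALS.get(word)
        if plural is None:
            suffix = "es" if word.endswith(("s", "x", "z", "ch", "sh")) else "s"
            plural = word + suffix
        return f"{count} {plural}"

    print(pluralise_kluge(2, "child"))   # 2 children
    print(pluralise_clean(2, "box"))     # 2 boxes
    print(pluralise_kluge(2, "person"))  # 2 persons: the kluge misses this case

The kluge works well enough for the cases its author happened to meet, which is exactly why it survives, and exactly why it fails quietly on the next irregular input.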

The basic claim of the book is that many aspects of the human mind operate in clumsy and suboptimal ways – ways which betray the haphazard and often flawed evolutionary history of the mind.  Many of the case studies quoted are familiar to me from previous reading (eg from Jonathan Haidt’s The Happiness Hypothesis and Timothy Wilson’s Strangers to Ourselves), but Gary Marcus fits the case studies together into a different framework.

The framework is, to me, both convincing and illuminating.  It provides a battery of evidence relevant to what might be called “The Nature Delusion” – the pervasive yet often unspoken belief that things crafted by nature are inevitably optimal and incapable of serious improvement.

A good flavour of the book is conveyed by some extracts from near the end:

In this book, we’ve discussed several bugs in our cognitive makeup: confirmation bias, mental contamination, anchoring, framing, inadequate self-control, the ruminative cycle, the focussing illusion, motivated reasoning, and false memory, not to mention absent-mindedness, an ambiguous linguistic system, and vulnerability to mental disorders.  Our memory, contextually driven as it is, is ill suited to many of the demands of modern life, and our self-control mechanisms are almost hopelessly split.  Our ancestral mechanisms were shaped in a different world, and our more modern deliberative mechanisms can’t shake the influence of that past.  In every domain we have considered, from memory to belief, choice, language, and pleasure, we have seen that a mind built largely through the progressive overlay of technologies is far from perfect.  None of these aspects of human psychology would be expected from an intelligent designer; instead, the only reasonable way to interpret them is as relics, leftovers of evolution.

In a sense, the argument I have presented here is part of a long tradition.  Stephen Jay Gould‘s notion of remnants of history, a key inspiration of this book, goes back to Darwin, who started his legendary work The Descent of Man with a list of a dozen “useless, or nearly useless” features – body hair, wisdom teeth, the vestigial tail bone known as the coccyx.  Such quirks of nature were essential to Darwin’s argument.

Yet imperfections of the mind have rarely been discussed in the context of evolution…

Scientifically, every kluge contains a clue to our past; wherever there is a cumbersome solution, there is insight into how nature layered our brain together; it is no exaggeration to say that the history of evolution is a history of overlaid technologies, and kluges help expose the seams.

Every kluge also underscores what is fundamentally wrong-headed about creationism: the presumption that we are the product of an all-seeing entity.  Creationists may hold on to the bitter end, but imperfection (unlike perfection) beggars the imagination.  It’s one thing to imagine an all-knowing engineer designing a perfect eyeball, another to imagine that engineer slacking off and building a half-baked spine.

There’s a practical side too: investigations into human idiosyncrasies can provide a great deal of useful insight into the human condition.  As they say at Alcoholics Anonymous, recognition is the first step.  The more we can understand our clumsy nature, the more we can do something about it.

The final chapter of the book is entitled “True Wisdom”.  In that chapter, the author provides a list of practical suggestions for dealing with our mental imperfections.

Some of these suggestions entail changes in our education processes.  For example, I was intrigued by the description of Harry Stottlemeier’s Discovery – a book intended to help teach children skills in critical thinking:

The eponymous Harry is asked to write an essay called “The most interesting thing in the world”.  Harry, a boy after my own heart, chooses to write his on thinking.  “To me, the most interesting thing in the whole world is thinking…”

Kids of ages 10-12 who were exposed to a version of this curriculum for 16 months, for just an hour a week, showed significant gains in verbal intelligence, nonverbal intelligence, self-confidence, and independence.

The core of the final chapter is a list of 13 pieces of individual-level advice, for how we can all “do better as thinkers”, despite the kluges in our design.  Each suggestion is founded (the author says) on careful empirical research:

  1. Whenever possible, consider alternative hypotheses
  2. Reframe the question
  3. Always remember that correlation does not entail causation
  4. Never forget the size of your sample
  5. Anticipate your own impulsivity and pre-commit
  6. Don’t just set goals.  Make contingency plans
  7. Whenever possible, don’t make important decisions when you are tired or have other things on your mind
  8. Always weigh benefits against costs
  9. Imagine that your decisions may be spot-checked
  10. Distance yourself
  11. Beware the vivid, the personal, and the anecdotal
  12. Pick your spots
  13. Remind yourself frequently of the need to be rational.

You’ll need to read the book itself for further details (often thought-provoking) about each of these suggestions.

A different kind of suggestion is that we can augment our own mental processes, imperfect though they are, with electronic mental processes that are much more reliable.  The book touches on that idea in places too, mentioning the author’s reliance on the memory powers of his Palm Pilot and the contacts application on a mobile phone.  I think there’s lots more to come, along similar lines.
