dw2

29 August 2010

Understanding humans better by understanding evolution better

Filed under: collaboration, deception, evolution, RSA, UKH+ — David Wood @ 5:54 am

Many aspects of human life that at first seem weird and hard to explain can make a lot more sense once you see them from the viewpoint of evolution.

It was Richard Dawkins’ book “The Selfish Gene” which first led me to that conclusion, whilst I was still at university.  After “The Selfish Gene”, I read “Sociobiology: the new synthesis”, by E.O. Wilson, which gave other examples.  I realised it was no longer necessary to refer to concepts such as “innate wickedness” or “original sin” to explain why people often did daft things.  Instead, people do things (in part) because of underlying behavioural patterns which tended to make their ancestors more likely to leave successful offspring.

In short, you can deepen your understanding of humans if you understand evolution.  On the whole, attempts to get humans to change their behaviour will be more likely to succeed if they are grounded in an understanding of the real factors that led humans to tend to behave as they do.

What’s more, you can understand humans better if you understand evolution better.

In a moment, I’ll come to some interesting new ideas about the role played by technology in evolution.  But first, I’ll mention two other ways in which an improved understanding of evolution sheds richer light on the human condition.

1. Evolution often results in sub-optimal solutions

In places where an intelligent (e.g. human) designer would “go back to the drawing board” and introduce a new design template, biological evolution has been constrained to keep working with the materials that are already in play.  Biological evolution lacks true foresight, and cannot do what human designers would call “re-factoring an existing design”.

I’ve written on this subject before, in my review “The human mind as a flawed creation of nature” of the book by Gary Marcus, “Kluge – the haphazard construction of the human mind” – so I won’t say much more about that particular topic right now.  But I can’t resist including a link to a fascinating video in which Richard Dawkins demonstrates the absurdly non-optimal route taken by the laryngeal nerve of the giraffe.  As Dawkins says in the video, this nerve “is a beautiful example of historical legacy, as opposed to design”.  If you haven’t seen this clip before, it’s well worth watching, and thinking about the implications.

2. Evolution can operate at multiple levels

For a full understanding of evolution, you have to realise it can operate at multiple levels:

  • At the level of individual genes
  • At the level of individual organisms
  • At the level of groups of cooperating organisms.

At each level, there are behaviours which exist because they made it more likely for an entity (at that level) to leave descendants.  For example, groups of animals tend to survive as a group, if individuals within that group are willing, from time to time, to sacrifice themselves for the sake of the group.

The notion of group selection is, however, controversial among evolutionary theorists.  Part of the merit of books such as The Selfish Gene was that they showed how altruistic behaviour could be explained, in at least some circumstances, from the viewpoint of the survival of individual genes.  If individual A sacrifices himself for the sake of individuals B and C within the same group, it may well be that B and C carry many of the same genes as individual A.  This analysis seems to deal with the major theoretical obstacle to the idea of group selection, which is as follows:

  • If individuals A1, A2, A3,… all have an instinct to sacrifice themselves for the sake of their wider group, it may well mean, other things being equal, that this group is initially more resilient than competing groups
  • However, an individual A4 who is individually selfish, within that group, will get the benefit of the success of the group, and the benefit of individual survival
  • So, over time, the group will tend to contain more individuals like the “free-rider” A4, and fewer like A1, A2, and A3
  • Therefore the group will degenerate into selfish behaviour … and this shows that the notion of “group selection” is flawed.

Nevertheless, I’ve been persuaded by the evolutionary biologist David Sloan Wilson that the notion of group selection can still apply.  He gives an easy-to-read account of his ideas in his wide-ranging book “Evolution for Everyone: How Darwin’s Theory Can Change the Way We Think About Our Lives”.  In summary:

  • Group selection can apply, provided the group also has mechanisms to reduce free-riding behaviour by individuals
  • For example, people in the group might have strong instincts to condemn and punish people who try to take excess advantage of the generosity of others
  • So long as these mechanisms keep the prevalence of free-riding below a certain threshold, a group can reach a stable situation in which the altruism of the majority continues to benefit the group as a whole.

(To be clear: this kind of altruism generally looks favourably only at others within the same group.  People who are outside your group won’t benefit from it.  An injunction such as “love your neighbour as yourself” applied in practice only to people within your group – not to people outside it.)
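To make the free-rider argument and its resolution concrete, here is a minimal simulation sketch, in Python (my own illustration – nothing like it appears in Wilson’s book, and all the parameter values are arbitrary).  Altruists pay a personal cost to add to a pooled benefit that everyone in their group shares; a fine stands in for the group’s punishment mechanisms.  Set FINE to zero and the free-riders take over, exactly as the four-step argument above predicts; make the fine more expensive than altruism, and the altruistic majority remains stable:

    import random

    COST, BENEFIT, FINE = 1.0, 3.0, 2.0   # illustrative values only

    def payoffs(group):
        """Fitness score for each member ('A' altruist, 'F' free-rider)."""
        share = BENEFIT * group.count("A") / len(group)   # pooled benefit, shared by all
        scores = []
        for member in group:
            score = 5.0 + share                           # baseline fitness plus group benefit
            score -= COST if member == "A" else FINE      # altruism costs; free-riding is fined
            scores.append(score)
        return scores

    def next_generation(groups):
        """Reproduce within each group, in proportion to fitness."""
        return [random.choices(g, weights=payoffs(g), k=len(g)) for g in groups]

    groups = [["A"] * 8 + ["F"] * 2 for _ in range(20)]   # 20 groups, each 20% free-riders
    for _ in range(50):
        groups = next_generation(groups)
    altruists = sum(g.count("A") for g in groups)
    print(f"Altruist frequency after 50 generations: {altruists / 200:.2f}")

One simplification to note: in this sketch, punishing free-riders costs the punishers nothing, whereas in real groups the policing itself has to be paid for – a further wrinkle that the sketch ignores.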

To my mind, this makes sense of a great deal of the mental gymnastics that we can observe: people combine elements of surreptitiously trying to benefit themselves (and their own families) whilst seeking to appear to the group as a whole as “good citizens”.  In turn, we are adept at seeing duplicity and hypocrisy in others.  There’s been a long “arms race” in which brains have been selected that are better at playing both sides of this game.

Incidentally, for another book that takes an entertaining and audacious “big picture” view of evolution and group selection, see the barn-storming “The Lucifer Principle: A Scientific Expedition into the Forces of History” by Howard Bloom.

3. The role of technology in evolution

At first sight, technology has little to do with evolution.  Evolution occurred in bygone times, whilst technology is a modern development – right?

Not true. First, evolution is very much a present-day phenomenon (as well as something that has been at work throughout the whole history of life).  Diseases evolve rapidly, under the pressure of different regimes of antibiotic cocktails.  And there is evidence that biological evolution still occurs in humans.  A 2009 article in Time magazine was entitled “Darwin Lives! Modern Humans Are Still Evolving”.  Here’s a brief extract:

One study, published in PNAS in 2007 and led by John Hawks, an anthropologist at the University of Wisconsin at Madison, found that some 1,800 human gene variations had become widespread in recent generations because of their modern-day evolutionary benefits. Among those genetic changes, discovered by examining more than 3 million DNA variants in 269 individuals: mutations that allow people to digest milk or resist malaria and others that govern brain development.

Second, technology is itself an ancient phenomenon – stretching back to the creative use of sticks and stones, which gave very early humans benefits including fire, weapons, and clothing.  What’s more, the advantages of tool use had a strange side-effect on human genetic evolution: as we became technologically stronger, we also became biologically weaker.  The Time magazine article mentioned above goes on to state the following:

According to anthropologist Peter McAllister, author of “Manthropology: the Science of Inadequate Modern Man“, the contemporary male has evolved, at least physically, into “the sorriest cohort of masculine Homo sapiens to ever walk the planet.” Thanks to genetic differences, an average Neanderthal woman, McAllister notes, could have whupped Arnold Schwarzenegger at his muscular peak in an arm-wrestling match. And prehistoric Australian Aborigines, who typically built up great strength in their joints and muscles through childhood and adolescence, could have easily beat Usain Bolt in a 100-m dash.

Timothy Taylor, Reader in Archaeology at the University of Bradford and editor-in-chief of the Journal of World Prehistory, tackles this same topic in his recent book “The Artificial Ape: How Technology Changed the Course of Human Evolution”.

Amazon.com describes this book as follows:

A breakthrough theory that tools and technology are the real drivers of human evolution.

Although humans are one of the great apes, along with chimpanzees, gorillas, and orangutans, we are remarkably different from them. Unlike our cousins who subsist on raw food, spend their days and nights outdoors, and wear a thick coat of hair, humans are entirely dependent on artificial things, such as clothing, shelter, and the use of tools, and would die in nature without them. Yet, despite our status as the weakest ape, we are the masters of this planet. Given these inherent deficits, how did humans come out on top?

In this fascinating new account of our origins, leading archaeologist Timothy Taylor proposes a new way of thinking about human evolution through our relationship with objects. Drawing on the latest fossil evidence, Taylor argues that at each step of our species’ development, humans made choices that caused us to assume greater control of our evolution. Our appropriation of objects allowed us to walk upright, lose our body hair, and grow significantly larger brains. As we push the frontiers of scientific technology, creating prosthetics, intelligent implants, and artificially modified genes, we continue a process that started in the prehistoric past, when we first began to extend our powers through objects.

Weaving together lively discussions of major discoveries of human skeletons and artifacts with a reexamination of Darwin’s theory of evolution, Taylor takes us on an exciting and challenging journey that begins to answer the fundamental question about our existence: what makes humans unique, and what does that mean for our future?

In an interview in the New Scientist, Timothy Taylor gives more details of his ideas:

Upright female hominins walking the savannah had a real problem: their babies couldn’t cling to them the way a chimp baby could cling to its mother. Carrying an infant would have been the highest drain on energy for a hominin female – higher than lactation. So what did they do? I believe they figured out how to carry their newborns using a loop of animal tissue. Evidence of the slings hasn’t survived, but in the same way that we infer lungs and organs from the bones of fossils that survive, it is from the stone tools that we can infer the bits that don’t last: things made from sinew, wood, leather and grasses…

Once you have slings to carry babies, you have broken a glass ceiling – it doesn’t matter whether the infant is helpless for a day, a month or a year. You can have ever more helpless young and that, as far as I can see, is how encephalisation took place in the genus Homo. We used technology to turn ourselves into kangaroos. Our children are born more and more underdeveloped because they can continue to develop outside the womb – they become an extra-uterine fetus in the sling. This means their heads can continue to grow after birth, solving the smart biped paradox. In that sense technology comes before the ascent to Homo. Our brain expansion only really took off half a million years after the first stone tools. And they continued to develop within an increasingly technological environment…

I’ve ordered Taylor’s book from Amazon and I expect it to be waiting for me at my home in the UK once I return from my current trip in Asia.  I’m also looking forward to hosting a discussion meeting on Saturday 11th Sept under the auspices of Humanity+ UK in London, where Timothy Taylor himself will be the main speaker. People on Facebook can register their interest in this meeting by RSVPing here.  There’s no charge to attend.

Another option to see Timothy Taylor lecture in person – for those able to spare time in the middle of the day on a Thursday (9th Sept) – will be at the RSA.  I expect there will be good discussion at both events, but the session at H+UK is longer (two hours, as opposed to just one at the RSA), and I expect more questions there about matters such as the likely role of technology in radically re-shaping the future development of humans.

Footnote: of course, the fact that evolution guided our ancestors to behave in certain ways is no reason for us to want to continue to behave in these ways.  But understanding the former is, in my view, very useful background knowledge for being able to devise practical measures to change ourselves.

27 August 2010

Reconsidering recruitment

Filed under: Accenture, Psion, recruitment, Symbian — David Wood @ 5:12 am

The team at ITjoblog (‘the blog for IT professionals’) recently asked me to write a guest column for them.  It has just appeared: “Reconsidering recruitment”.

With a few slight edits, here’s what I had to say…

Earlier in my career, I was involved in lots of recruitment.  The software team inside Psion followed a steep headcount trajectory through the process of transforming into Symbian, and continued to grow sharply in subsequent years as many new technology areas were added to the scope of Symbian OS.  As one of the senior software managers in the company throughout this period, I found myself time and again in interviewing and recruitment situations.  I was happy to give significant amounts of my time to these tasks, since I knew what a big impact good (or bad) recruitment can have on organisational dynamics.

In recent weeks, I’ve once again found myself in a situation where considerable headcount growth is expected.  I’m working on a project at Accenture, assisting their Embedded Mobility Services group.  Mobile is increasingly a hot topic, and there’s strong demand for people providing expert consultancy in a variety of mobile development project settings. This experience has led me to review my beliefs about the best way to carry out recruitment in such situations.  Permit me to think aloud…

To start with, I remain a huge fan of graduate recruitment programs.  The best graduates bring fire in their bellies: a “we can transform the world” attitude that doesn’t know what’s meant to be impossible – and often achieves it!  Of course, graduates typically take some time before they can be deployed in the frontline of commercial software development.  But if you plan ahead, and have effective “bootcamp” courses, you’ll have new life in your teams soon enough.  There will be up-and-coming stars ready to step into the shoes left by any unexpected staff departures or transfers.  If you can hire a group of graduates at the same time, so much the better.  They can club together and help each other, sharing and magnifying what they each individually learn from their assigned managers and mentors.  That’s the beauty of the network effect.

That’s just one example of the importance of networks in hiring.  I place a big value on having prior knowledge of someone who is joining your team.  Rather than having to trust your judgement during a brief interviewing process, and whatever you can distill from references, you can rely on actual experience of what someone is like to work with.  This effect becomes more powerful when several of your current workforce can attest to the qualities of a would-be recruit, based on having all worked together at a previous company.  I saw Symbian benefit from this effect via networks of former Nortel employees who all knew each other and who could vouch for each other’s capabilities during the recruitment process.  Symbian also had internal networks of high-calibre former employees of SCO, and of Ericsson, among other companies.  The benefit here isn’t just that you know that someone is a great professional.  It’s that you already know what their particular special strengths are.  (“I recommend that you give this task to Mike.  At our last company, he did a fantastic job of a similar task.”)

Next, I recommend hiring for flexibility, rather than simply trying to fit a current task description.  I like to see evidence of people coping with ambiguity, and delivering good results in more than one kind of setting.  That’s because projects almost always change; likewise for organisational structures.  So while interviewing, I’m not trying to assess if the person I’m interviewing is the world expert in, say, C++ templates.  Instead, I’m looking for evidence that they could turn their hand to mastering whole new skill areas – including areas that we haven’t yet realised will be important to future projects.

Similarly, rather than just looking for rational intelligence skills, I want to see evidence that someone can fit well into teams.  “Soft skills”, such as inter-personal communication and grounded optimism, aren’t just an optional extra, even for roles with intense analytic content.  The best learning and the best performance come from … networks (to use that word again) – but you can’t build high-functioning networks if your employees lack soft skills.

Finally, high-performing teams that address challenging problems benefit from internal variation.  So don’t just look for near-clones of people who already work for you.  When scanning CVs, keep an eye open for markers of uniqueness and individuality.  At interview, these markers provide good topics to explore – where you can find out something of the underlying character of the candidate.

Inevitably, you’ll sometimes make mistakes with recruitment, despite taking lots of care in the process.  To my mind, that’s OK.  In fact, it’s better to take a few risks, since you can find some excellent new employees in the process.  But you need to have in place a probation period, during which you pay close attention to how your hires are working out.  If a risky candidate turns out disappointing, even after some coaching and support, then you should act fast – for the sake of everyone concerned.

In summary, I see recruitment and induction as a task that deserves high focus from some of the most skilled and perceptive members of your existing workforce.  Skimp on these tasks and your organisation will suffer – sooner or later.  Invest well in these tasks, and you should see the calibre of your workforce steadily grow.

For further discussion, let me admit that rules tend to have limits and exceptions.  You might find it useful to identify limits and counter-examples to the rules of thumb I’ve outlined above!

15 August 2010

Co-existing with Android

Filed under: Android, Nokia — David Wood @ 9:51 pm

For some time, I’ve wanted to learn more about Android.

Not just theory, or second hand knowledge.  I wanted my own, direct, practical knowledge – obtained by using an Android device “in anger” (as the saying goes).

Playing with a device for a few minutes – for example, at a trade show – fails to convey many of the real-world strengths and weaknesses of that device.  But it’s the real-world strengths and weaknesses that I want to experience.

It’s important for me for work reasons: Accenture Embedded Mobility Services are involved in a stream of different Android projects.  (Among other things, I want to be able to install and use various experimental Android apps that some of my colleagues have been writing.)

It’s also important for me for personal productivity reasons.  If an Android phone turns out to be a smarter phone than any I’ve been using so far, I want to know about it – so I can use it more often, and become a smarter person as a result.

But there are sooo many Android devices.  Carphone Warehouse had a large selection to choose from.  For a while, I struggled to decide which one to pick.

In the end, I chose a Nexus One.  That’s because it is the device most likely to be quickly updated to whatever the latest version of Android is.  (Other Android phones include customisation layers from device manufacturers, which seem to need to be re-done – and painstakingly re-tested – whenever there’s a new Android version.  Unsurprisingly, that introduces a delay.)

For help with a Nexus One, I owe a big debt of gratitude to Kenton Price of Little Fluffy Toys Ltd.  I first met Kenton at a recent meeting of the London GTUG (Google Technology Users Group), where we both listened to Google’s Wesley Chun give an upbeat, interesting talk about Google App Engine.  Later that evening, we got talking.  A few days afterwards, Little Fluffy Toys became famous, on account of widespread publicity for their very timely London Cycle Hire Widget.  Kenton & I exchanged a few more emails, and the outcome was that we met in a coffee shop next to the Accenture building in Old Bailey.  Kenton kindly lent me a Nexus One for a few weeks, for me to find out how I get on with it.  Just as important, Kenton quickly showed me a whole raft of fascinating things that the device could do.

But then I got cold feet.  Did I really want to stop using the Nokia E72, which has been my “third brain” for the best part of a year? My fingers have learned all kinds of quick, useful methods for me to get the best out of this device.  (Many of these methods are descendants of usage patterns from even earlier devices in the same general family, including the E71 and the E61i.)  I also heard from many people that the battery life on the Nexus One was poor.  What’s more, during the very first proper phone call I made with this phone, the person at the other end told me several times “you’re breaking up – I can’t hear you”.  (Of course, a sample size of one proves nothing.)

It was the transfer of all my “phonebook contacts” from the E72, to be merged (apparently) with my email contacts on the Nexus One, that gave me even more reason to hesitate.  I wasn’t sure I was ready for that kind of potential restructuring of my personal data.

So I’ve compromised.  I already have two SIMs.  One lives in my E72, and the other usually sits inside my laptop.  Well, I’ve taken the SIM from the laptop and put it into the Nexus One.  For the time being, I’ll keep using the E72 for phone calls and text messages.  And probably for lots more too.  But I’ll use the Nexus One for lots of interesting experiments.  (Like showing friends and family members Google Goggles…).

I expect this usage pattern will change over the weeks ahead.  Let’s see how things evolve!

Earlier this evening, I used my E72 to take the following picture of the Nexus One perched next to my “second brain” – my Psion Series 5mx.  Hmm, that Nexus One battery indicator does look worryingly low.  (Maybe I should turn down the screen brightness…)

Seeing probabilities

Filed under: aging, risks, Ultralase — David Wood @ 12:59 am

I thought of entitling this blogpost “Blinded by technology”.  Or, perhaps, “Almost blinded by technology”.  But that would have been unfair.

It’s now just over five weeks since I had my eyes lasered at the Ultralase clinic in Guildford, Surrey.  For more than 40 years, I had worn spectacles, to correct short sightedness.  My hope with the surgery was that I could dispense with spectacles and all the inconvenience that goes with them.

I had an idea what to expect.  Back in 2005, my wife had a similar operation, also from Ultralase, and has been very happy with the result.  I remember her being pleased with the outcome just a few moments after the operation, when, from the room next to the operating theatre, I could hear her excited voice on opening her eyes.  But my own experience turned out different.

One complicating factor is that I received a treatment called “monovision”, in which the two eyes are given treatments that optimise them for different viewing tasks.  My left eye was optimised for short-distance reading (such as computer screens, books, phone screens).  My right eye was optimised for medium-distance and long-distance.

The rationale for monovision is to address a decline in the power of the eyes to change the distance at which they focus.  This is a condition called “Presbyopia” – sometimes known as “Aging eye”.  To quote from “The Eye Digest”:

A presbyopic eye loses its innate ability to clearly see all objects that are located at different distances. It can see some objects clearly but not all. In individuals who are less than 40 years of age, the eye can be thought of as an ‘auto-focus’ camera. In an auto-focus camera, all one has to do to get sharp pictures is to point the camera in that direction, the auto-focus mechanism kicks in and you get sharp pictures. After age 40, the presbyopic eye can be thought of as a ‘fixed-focus’ camera. Fixed-focus cameras, the most basic of all cameras, have a nonadjustable lens. In general, a fixed-focus camera can take satisfactory photographs but it may produce a blurred picture if the subject is moving or is less than 6 feet (1.8 meters) away.

The presbyopic eye is also in a more or less ‘fixed-focus’ state. This means that a presbyopic eye will see clearly only at a particular distance. If you correct the presbyopic eye for distance with glasses or contact lenses, then it will clearly see all the distant objects and may read 20/20 on the distance vision eye chart, but there is no way it would be able to clearly read up-close with the distance vision correction. On the other hand if you correct the eye for reading up-close, then you will be able to read clearly, but there is no way you will be able to see distance objects clearly with the same correction. So reading vision is at the cost of distance vision and vice versa.

And as Wikipedia puts it:

Presbyopia is a health condition where the eye  exhibits a progressively diminished ability to focus on near objects with age. Presbyopia’s exact mechanisms are not known with certainty; the research evidence most strongly supports a loss of elasticity of the crystalline lens, although changes in the lens’s curvature from continual growth and loss of power of the ciliary muscles (the muscles that bend and straighten the lens) have also been postulated as its cause.

Similar to grey hair and wrinkles, presbyopia is a symptom caused by the natural course of aging. The first symptoms (described below) are usually first noticed between the ages of 40-50. The ability to focus on near objects declines throughout life, from an accommodation of about 20 dioptres (ability to focus at 50 mm away) in a child, to 10 dioptres at 25 (100 mm), and levels off at 0.5 to 1 dioptre at age 60 (ability to focus down to 1–2 meters only).

The word presbyopia comes from the Greek word presbys (πρέσβυς), meaning “old man” or “elder”, and the Neolatin suffix -opia, meaning “sightedness”.

I can’t deny it: by these measures, I’m aging!  I turned 51 in February.  And I have presbyopia to show for my age.   (Not to mention wrinkles…)
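Incidentally, the dioptre figures in that Wikipedia extract all follow a single formula: a focusing power of D dioptres corresponds to a nearest focal distance of 1/D metres.  Here is a quick check of the quoted numbers, in Python (my own sketch; the ages are the approximate ones given above):

    # Accommodation in dioptres is the reciprocal of the near-point distance in metres,
    # so 20 dioptres means the eye can focus as close as 1/20 m = 50 mm, and so on.
    for age, dioptres in [("a child", 20.0), ("25", 10.0), ("60", 1.0), ("60", 0.5)]:
        near_point_cm = 100.0 / dioptres   # nearest focusable distance, in centimetres
        print(f"age {age}: {dioptres:>4} dioptres -> nearest focus {near_point_cm:.0f} cm")

Running this reproduces the quoted progression: 50 mm for a child, 100 mm at 25, and 1-2 metres at 60.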

Monovision is one of the options offered to patients with presbyopia.  Not everyone copes well with monovision treatment.  Apparently, some people get headaches, from the two eyes having different preferred focal lengths.  For this reason, Ultralase gave me special spectacles to wear, as an experiment, for six weeks before the intended date of the operation.  These spectacles mimicked the intended outcome of the operation: left eye great for short-distance, right eye great for everything else.  Happily, I had no headache, and was pleased with how these spectacles worked for me.

So I approached the operation itself with high hopes.  And I can report that my left eye has turned out exactly as hoped.  Without glasses, my short-range sight is excellent.

But my right eye has ended up in a less optimal state.  Subsequent tests by Ultralase, repeated on several occasions, confirm that my right eye is about -0.75 compared to what was intended.  When I look into the middle distance or long distance, without wearing glasses, I see things as much fuzzier than before (when I wore glasses).  To see things more clearly, I have to squint, or stand up and walk closer.  In practical terms, it causes inconvenience when I’m in meetings at work.  I can’t see what’s displayed on screens in conference rooms.  I sometimes struggle to see the prices on the menus behind the counter at coffee shops.  And so on.

But to say that I have literally been “blinded by technology” (by the short blast of a laser) would be putting things much too strongly.  I can get by fine, most of the time.

Nor was I figuratively “blinded by technology” – in the sense of being naively over-optimistic about the outcome of a technical fix to address the symptoms of aging.  The Ultralase surgeon had carefully explained matters to me before the operation.  He even got me to fill in some blank paragraphs in a form, using my own words to confirm that I understood the risks associated with the surgery.  One blank paragraph was headed, “Four risks with the operation”.  Another was headed, “How will I cope, if the treatment doesn’t work as well as I hope”.  It was sobering.

I knew, before the operation, that there was a one-in-six chance that I would need a “top up” operation six months (or so) further down the line.  And that looks like what will happen to me.  The risks were significantly higher in my case than for most patients, because of the monovision treatment, and because my eyesight was starting from such a poor baseline (around -8.0).

Medical treatments frequently involve probabilities.  As with many other difficult decisions in life, it’s important to be able to understand probabilities, and to plan ahead for possible unwanted outcomes.
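To make that concrete with the number from my own case, here’s a toy sketch in Python (the one-in-six figure is the risk I was quoted; the cohort size is invented for illustration).  Across a clinic-sized population, a one-in-six individual risk translates into a very predictable workload of follow-up operations, even though no individual patient knows in advance which group they will fall into:

    import random

    # Toy sketch: how a quoted 1-in-6 individual risk plays out across a cohort.
    random.seed(2010)
    RISK, COHORT = 1 / 6, 600
    top_ups = sum(random.random() < RISK for _ in range(COHORT))
    print(f"Expected top-ups in a cohort of {COHORT}: {COHORT * RISK:.0f}")   # 100
    print(f"One simulated cohort: {top_ups}")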

It’s still possible that my right eye will continue to improve by itself.  I read of cases where it took several months, after laser eye surgery, for an eye to completely settle down.  That’s why Ultralase require several months of stability in eyesight before doing any follow-up surgery.  My current guess is that I’ll be visiting the surgery again some time around January.  In the meantime, I’m putting up with some haziness in my middle-distance and long-distance vision.

Has this experience changed my attitude towards the wonder-powers of technology (for example, to address the problems of aging)?

Not really.  I already know, viscerally, from my many years in the hi-tech smartphone industry, that technical solutions frequently fail.  A team can have many thoughtful, experienced, super-smart people, developing new technology in a careful way, but still the results can go wrong.  You can take measures to try to reduce risks, but you can’t make all the risks go away.  And, in many cases, you shouldn’t seek to make all the risks go away.  That way, you’d miss out on many of the benefits that arise when risky projects turn out well.  But you should be aware of the risks beforehand, and try to quantify them.

For me, a one in six chance of needing the inconvenience of a second operation was a risk well worth taking.  And I still see things that way.
