dw2

16 October 2011

Human regeneration – limbs and more

Filed under: healthcare, medicine, rejuveneering, risks, Singularity — David Wood @ 1:57 am

Out of the many interesting presentations on Day One of the 2011 Singularity Summit here in New York, the one that left me with the most to think about was “Regenerative Medicine: Possibilities and Potential” by Dr. Stephen Badylak.

Dr Badylak is deputy director of the McGowan Institute for Regenerative Medicine, and a Professor in the Department of Surgery at the University of Pittsburgh. In his talk at the Singularity Summit, he described some remarkable ways in which the human body can heal itself – provided we supply it with suitable “scaffolding” that triggers the healing.

One of the examples Dr Badylak discussed is also covered in a recent article in Discover Magazine, How Pig Guts Became the Next Bright Hope for Regenerating Human Limbs.  The article deserves to be read all the way through. Here are some short extracts from the beginning:

When he first arrived in the trauma unit of San Antonio’s Brooke Army Medical Center in December 2004, Corporal Isaias Hernandez’s leg looked to him like something from KFC. “You know, like when you take a bite out of the drumstick down to the bone?” Hernandez recalls. The 19-year-old Marine, deployed in Iraq, had been trying to outfit his convoy truck with a makeshift entertainment system for a long road trip when the bomb exploded. The 12-inch TV he was clutching to his chest shielded his vital organs; his buddy carrying the DVDs wasn’t so lucky.

The doctors kept telling Hernandez he would be better off with an amputation. He would have more mobility with a prosthetic, less pain. When he refused, they took a piece of muscle from his back and sewed it into the hole in his thigh. He did all he could to make it work. He grunted and sweated his way through the agony of physical therapy with the same red-faced determination that got him through boot camp. He even sneaked out to the stairwell, something they said his body couldn’t handle, and dragged himself up the steps until his leg seized up and he collapsed.

Generally people never recovered from wounds like his. Flying debris had ripped off nearly 70 percent of Hernandez’s right thigh muscle, and he had lost half his leg strength. Remove enough of any muscle and you might as well lose the whole limb, the chances of regeneration are so remote. The body kicks into survival mode, pastes the wound over with scar tissue, and leaves you to limp along for life….

Hernandez recalled that one of his own doctors—Steven Wolf, then chief clinical researcher for the United States Army Institute of Surgical Research in Texas—had once mentioned some kind of experimental treatment that could “fertilize” a wound and help it heal. At the time, Hernandez had dismissed the therapy as too extreme. The muscle transplant sounded safer, easier. Now he changed his mind. He wanted his leg back, even if it meant signing himself up as a guinea pig for the U.S. Army.

So Hernandez tracked down Wolf, and in February 2008 the two got started. First, Wolf put Hernandez through another grueling course of physical therapy to make sure he had indeed pushed any new muscle growth to the limit. Then he cut open Hernandez’s thigh and inserted a paper-thin slice of the same material used to make the pixie dust: part of a pig’s bladder known as the extracellular matrix, or ECM, a fibrous substance that occupies the spaces between cells. Once thought to be a simple cellular shock absorber, ECM is now understood to contain powerful proteins that can reawaken the body’s latent ability to regenerate tissue.

A few months after the surgery healed, Wolf assigned the young soldier another course of punishing physical therapy. Soon something remarkable began to happen. Muscle that most scientists would describe as gone forever began to grow back. Hernandez’s muscle strength increased by 30 percent from what it was before the surgery, and then by 40 percent. It hit 80 percent after six months. Today it is at 103 percent—as strong as his other leg. Hernandez can do things that were impossible before, like ease gently into a chair instead of dropping into it, or kneel down, ride a bike, and climb stairs without collapsing, all without pain.

The challenge now is replicating Hernandez’s success in other patients. The U.S. Department of Defense, which received a congressional windfall of $80 million to research regenerative medicine in 2008, is funding a team of scientists based at the University of Pittsburgh’s McGowan Institute for Regenerative Medicine to oversee an 80-patient study of ECM at five institutions. The scientists will attempt to use the material to regenerate the muscle of patients who have lost at least 40 percent of a particular muscle group, an amount so devastating to limb function that it often leads doctors to perform an amputation.

If the trials are successful, they could fundamentally change the way we treat patients with catastrophic limb injuries. Indeed, the treatment might someday allow patients to regrow missing or mangled body parts. With an estimated 1.7 million people in the United States alone missing limbs, promoters of regenerative medicine eagerly await the day when therapies like ECM work well enough to put the prosthetics industry out of business.

The interesting science is the explanation of the role of the ECM – the extracellular matrix, which provides the scaffolding that allows the healing to take place. The healing turns out to involve the body directing stem cells to the scaffolding. These stem cells then differentiate into muscle cells, nerve cells, blood cells, and so on. There’s also some interesting science to explain why the body doesn’t reject the ECM that’s inserted into it.

Badylak speaks with confidence of the treatment one day allowing the regeneration of damaged human limbs, akin to what happens with salamanders.  He also anticipates the healing of brain tissue damaged by strokes.

Later that morning, another speaker at the Singularity Summit, Michael Shermer, referred to Dr Badylak’s presentation. Shermer is a well-known sceptic – indeed, he’s the publisher of Skeptic magazine.  Shermer often participates in public debates with believers in various religions and new-age causes.  Shermer mentioned that, at these debates, his scientific open-mindedness is sometimes challenged.  “OK, if you are open-minded, as you claim, what evidence would make you believe in God?”  Shermer typically gives the answer that, if someone with an amputated limb were to have that limb regrow, that would be reason for him to become a believer:

Most religious claims are testable, such as prayer positively influencing healing. In this case, controlled experiments to date show no difference between prayed-for and not-prayed-for patients. And beyond such controlled research, why does God only seem to heal illnesses that often go away on their own? What would compel me to believe would be something unequivocal, such as if an amputee grew a new limb. Amphibians can do it. Surely an omnipotent deity could do it. Many Iraqi War vets eagerly await divine action.

However, Shermer joked with the Singularity Summit audience, it now appears that Dr Badylak might be God.  The audience laughed.

But there’s a serious point at stake here. The Singularity Summit is full of talks about humans being on the point of gaining powers that, in previous ages, would have been viewed as Divine. With great power comes great responsibility. As veteran ecologist and environmentalist Stewart Brand wrote at the very start of his recent book “Whole Earth Discipline”,

We are as gods and HAVE to get good at it.

In the final talk of the day, cosmologist Professor Max Tegmark addressed the same theme.  He gave an estimate of “between 1/10 and 1/10,000” for the probability of human extinction during any decade in the near-term future – extinction arising from (for example) biochemical warfare, runaway global warming, nanotech pollution, or a bad super-intelligence singularity. In contrast, he said, only a tiny fraction of global GDP is devoted to the management of existential risks.  That inattention meant that humanity deserved, in Tegmark’s view, a “mid-term rating” of just D-.  Our focus, far too much of the time, is on the next election cycle, or the next quarterly financial results, or other short-term questions.
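Those per-decade figures compound alarmingly over longer horizons. Here is a minimal arithmetic sketch (my own illustration, not a calculation Tegmark presented):

```python
# Illustrative arithmetic only: compound a constant per-decade
# extinction risk over ten decades (one century), for the two
# bounds Tegmark quoted.
for p_decade in (1 / 10, 1 / 10_000):
    p_century = 1 - (1 - p_decade) ** 10  # chance of at least one such event
    print(f"per-decade risk {p_decade:g} -> per-century risk {p_century:.3f}")
```

At the pessimistic end of Tegmark’s range, the compounded risk over a century comes to about 65%; even the optimistic end gives roughly 0.1%.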

One person seeking to encourage greater attention to existential risks is Skype co-founder Jaan Tallinn (who earlier in the year gave a very fine talk at a Humanity+ event I organised in London).  Jaan’s main presentation at the 2011 Singularity Summit will be on Day Two, but he briefly popped up on stage on Day One to announce a significant new fundraising commitment: he will personally match any donations made over the weekend to the Singularity Institute, up to a total of $100,000.

With the right resources, wisely deployed, we ought to see collective human intelligence achieve lots more regeneration – not just of broken limbs, but also of troubled societies and frustrated lives – whilst at the same time steering humanity away from the existential risks latent in these super-powerful technologies.  The discussion will continue tomorrow.

2 October 2011

Prioritising the best peer pressure

Filed under: BHAG, catalysts, collaboration, futurist, Humanity Plus — David Wood @ 9:36 am

In a world awash with conflicting influences and numerous potentially interesting distractions, how best to keep “first things first”?

A big part of the answer is to ensure that the influences closest to us are influences:

  • Whose goals are aligned with our own
  • Who can give us prompt, helpful feedback when we are falling short of our own declared intentions
  • Who can provide us with independent viewpoints that enrich, complement, and challenge our current understanding.

In my own case, that’s the reason why I have been drawn to the community known as “Humanity+”:

Humanity+ is an international nonprofit membership organization which advocates the ethical use of technology to expand human capacities. We support the development of and access to new technologies that enable everyone to enjoy better minds, better bodies and better lives. In other words, we want people to be better than well.

I deeply share the goals of Humanity+, and I find some of the world’s most interesting thinkers within that community.

It’s also the reason I have sought to aid the flourishing of the Humanity+ community, particularly in the UK, by organising a series of speaker meetings in London.  The speakers at these meetings are generally fascinating, but it’s the extended networking that follows (offline and online) which provides the greatest value.

My work life has been very busy in the last few months, leaving me less time to organise regular H+UK meetings.  However, to keep myself grounded in a community that contains many people who can teach me a great deal – a community that can provide powerful positive peer pressure – I’ve worked with some H+UK colleagues to pull together an all-day meeting taking place on the Saturday at the end of this week (8th October).

The theme of this meeting is “Beyond Human: Rethinking the Technological Extension of the Human Condition”.  It splits into three parts:

  • Beyond human: The science and engineering
  • Beyond human: Implications and controversies
  • Beyond human: Getting involved

The event is free to attend.  There’s no need to register in advance. The meeting is taking place in lecture room B34 in the Malet Street building (the main building) of Birkbeck College.  This is located in Torrington Square (which is a pedestrian-only square), London WC1E 7HX.

Full details are on the official event website.  In this blogpost, to give a flavour of what will be covered, I’ll just list the agenda with the speakers and panellists.

09.30 – Finding the room, networking
Opening remarks
Session one – Beyond human: The science and engineering
11.40 – Audience Q&A with a panel of that session’s four speakers
12.00 – Lunch break: people make their own arrangements (there are some suggestions on the event website)
Session two – Beyond human: Implications and controversies
14.40 – Audience Q&A with a panel of that session’s four speakers
15.00 – Extended DIY coffee break, with a further chance for networking
Session three – Beyond human: Getting involved
17.25 – Audience Q&A with a panel of that session’s four speakers
17.45 – End of conference: hard stop – the room needs to be empty by 18.00

You can follow the links to find out more information about each speaker. You’ll see that several are eminent university professors. Several have written key articles or books on the theme of technology that significantly enhances human potential. Some complement their technology savvy with an interest in performance art.  All are distinguished and interesting futurists in their own way.

I don’t expect I’ll agree with everything that’s said, but I do expect that great personal links will be made – and strengthened – during the course of the day.  I also expect that some of the ideas shared at the conference – some of the big, hairy, audacious goals unveiled – will take on a major life of their own, travelling around the world, offline and online, catalysing very significant positive change.

29 July 2011

Towards a mind-stretching weekend in New York

Filed under: AGI, futurist, leadership, nanotechnology, robots, Singularity — David Wood @ 9:19 pm

I’ve attended the annual Singularity Summit twice before – in 2008 and in 2009.  I’ve just registered to attend the 2011 event, which is taking place in New York on 15th-16th October.  Here’s why.

On both previous occasions, the summits featured presentations that gave me a great deal to think about, on arguably some of the most significant topics in human history.  These topics include the potential emergence, within the lifetimes of many people alive today, of:

  • Artificial intelligence which far exceeds the capabilities of even the smartest group of humans
  • Robots which far exceed the dexterity, balance, speed, strength, and sensory powers of even the best human athletes, sportspeople, or soldiers
  • Super-small nanobots which can enter the human body and effect far more thorough repairs and enhancements – to both body and mind – than even the best current medical techniques.

True, at the previous events, there were some poor presentations too – which is probably inevitable given the risky cutting-edge nature of the topics being covered.  But the better presentations far outweighed the worse ones.

And as well as the presentations, I greatly enjoyed the networking with the unusual mix of attendees – people who had taken the time to explore many of the fascinating hinterlands of modern technology trends.  If someone is open-minded enough to give serious thought to the ideas listed above, they’re often open-minded enough to entertain lots of other unconventional ideas too.  I frequently found myself in disagreement with these attendees, but the debate was deeply refreshing.

Take a look at the list of confirmed speakers so far: which of these people would you most like to bounce ideas off?

The summit registration page is now open.  As I type these words, that page states that the cost of tickets is going to increase after 31 July.  That’s an argument for registering sooner rather than later.

To provide more information, here’s a copy of the press release for the event:

Singularity Summit 2011 in New York City to Explore Watson Victory in Jeopardy

New York, NY – This October 15-16th in New York City, a TED-style conference gathering innovators from science, industry, and the public will discuss IBM’s ‘Watson’ computer and other exciting developments in emerging technologies. Keynote speakers at Singularity Summit 2011 include Jeopardy! champion Ken Jennings and famed futurist and inventor Ray Kurzweil. After losing to an IBM computer in Jeopardy!, Jennings wrote, “Just as factory jobs were eliminated in the 20th century by new assembly-line robots, Brad and I were the first knowledge-industry workers put out of work by the new generation of ‘thinking’ machines. ‘Quiz show contestant’ may be the first job made redundant by Watson, but I’m sure it won’t be the last.”

In February, Watson defeated two human champions in Jeopardy!, the game show famous for its mind-bending trivia questions. Surprising millions of TV viewers, Watson took down champions Ken Jennings and Brad Rutter for the $1 million first prize. Facing defeat on the final show, competitor Ken Jennings jokingly wrote in parentheses on his last answer: “I for one welcome our new computer overlords.” Besides Watson, the Singularity Summit 2011 will feature speakers on robotics, nanotechnology, biotechnology, futurism, and other cutting-edge technologies, and is the only conference to focus on the technological Singularity.

Responding to Watson’s victory, leading computer scientist Ray Kurzweil said, “Watson is a stunning example of the growing ability of computers to successfully invade this supposedly unique attribute of human intelligence.” In Kurzweil’s view, the combination of language understanding and pattern recognition that Watson displays would make its descendants “far superior to a human”. Kurzweil is known for predicting computers whose conversations will be indistinguishable from people by 2029.

Beyond artificial intelligence, the Singularity Summit will also focus on high-tech and where it is going. Economist Tyler Cowen will examine the economic impacts of emerging technologies. Cowen argued in his recent book The Great Stagnation that modern society is on a technological plateau where “a lot of our major innovations are springing up in sectors where a lot of work is done by machines, not by human beings.” Tech entrepreneur and investor Peter Thiel, who sits on the board of directors of Facebook, will share his thoughts on innovation and jumpstarting the economy.

Other speakers include MIT cosmologist Max Tegmark, Allen Brain Institute chief scientist Christof Koch, co-founder of Skype Jaan Tallinn, robotics professors James McLurkin and Robin Murphy, Bionic Builders host Casey Pieretti, the MIT Media Lab’s Riley Crane, MIT polymath Alexander Wissner-Gross, filmmaker and television personality Jason Silva, and Singularity Institute artificial intelligence researcher Eliezer Yudkowsky.

27 July 2011

Eclectic guidance for big life choices

Filed under: books, challenge, Economics, evolution, leadership, market failure, psychology, risks, strategy — David Wood @ 10:34 pm

“If you’re too busy to write your normal blog posts, at least tell us what books you’ve liked reading recently.”

That’s a request I’ve heard in several forms over the last month or so, as I’ve been travelling widely on work-related assignments.  On these travels, I’ve met several people who were kind enough to mention that they enjoyed reading my blog posts – especially those postings recommending books to read.

In response to this suggestion, let me highlight four excellent books that I’ve read recently, which have each struck me as having something profound to say on the Big Topic of how to make major life choices.

Adapt: Why Success Always Starts with Failure, by Tim Harford

Adapt: Why Success Always Starts with Failure draws out all sorts of surprising “aha!” connections between different areas of life, work, and society.  The analysis ranges across the wars in Iraq, the comparative strengths and weaknesses of Soviet-style centrally planned economies, the unorthodox way the development of the Spitfire fighter airplane was funded, the “Innovator’s Dilemma” whereby one-time successful companies are often blindsided by emerging new technologies, different approaches to measuring the effectiveness of charitable aid donations, the risk of inadvertently encouraging perverse behaviours when setting grand over-riding incentives, the over-bearing complexity of modern technology, the causes of the great financial crash of 2008-2009, reasons why safety systems break down, approaches to tackling climate change, and the judicious use of prizes to encourage successful breakthrough innovation.  Yes, this is a real intellectual roller-coaster, with some unexpected twists along the way – revelations that had me mouthing “wow, wow” under my breath.

And as well as heroes, there are villains.  (Donald Rumsfeld comes out particularly badly in these pages – even though he’s clearly in some ways a very bright person.  That’s an awful warning to the others among us who rejoice in above-average IQs.)

The author, Tim Harford, is an economist, but this book is grounded in observations about Darwinian evolution.  Three pieces of advice pervade the analysis – advice that Harford dubs “Palchinsky Principles”, in honour of Peter Palchinsky, a Russian mining engineer who was incarcerated and executed by Stalin’s government in 1929 after many years of dissent against the human cost of the Soviet top-down command and control approach to industrialisation.  These principles are designed to encourage stronger innovation, better leadership, and more effective policies, in the face of complexity and unknowns.  The principles can be summarised as follows:

  1. Variation – seek out new ideas and try new things
  2. Survivability – when trying something new, do it on a scale where failure is survivable
  3. Selection – seek out feedback and learn from mistakes as you go along, avoiding an instinctive reaction of denial.

Harford illustrates these principles again and again, in the context of the weighty topics already listed, including major personal life choices as well as choices for national economies and international relations.  The illustrations are full of eye-openers.  The book’s subtitle is a succinct summary: “success always starts with failure”.  The notion that it’s always possible to “get it right the first time” is a profound obstacle to surviving the major crises that lie ahead of us.  We all need a greater degree of openness to smart experimentation and unexpected feedback.

The Moral Landscape: How Science Can Determine Human Values, by Sam Harris

That thought provides a strong link to the second book I wish to mention: The Moral Landscape: How Science Can Determine Human Values.  It’s written by Sam Harris, who I first came to respect when I devoured his barnstorming The End of Faith: Religion, Terror, and the Future of Reason a few years ago.

In some ways, the newer book is even more audacious.  It considers how we might go about finding answers to big questions such as “how should I live?” and “what makes some ways of life more moral than others?”  As some specific examples, how should we respond to:

  • The Taliban’s insistence that the education of girls is an abomination?
  • The stance by Jehovah’s Witnesses against blood transfusion?
  • The prohibition by the Catholic Church of the use of condoms?
  • The legalisation of same-sex relationships?
  • The use of embryonic stem cells in the search for cures of diseases such as Alzheimer’s and Parkinson’s?
  • A would-be Islamist suicide bomber who is convinced that his intended actions will propel him into a paradise of abundant mental well-being?

One response is that such questions are the province of religion.  The correct answers are revealed via prophets and/or holy books.  The answers are already clear, to those with the eye of faith.  It is a divine being that tells us, directly or indirectly, the difference between good and evil.  There’s no need for experimental investigations here.

A second response is that the proper field for studying these questions is philosophy.  It is by reason that we can determine the difference between good and evil.

But Sam Harris, instead, primarily advocates the use of the scientific method.  Science enters the equation because it is increasingly able to identify:

  • Neural correlates (or other physical or social underpinnings) of sentient well-being
  • Cause-and-effect mechanisms whereby particular actions typically bring about particular changes in these neural correlates.

With the help of steadily improving scientific understanding, we can compare different actions based on their likely effects on sentient well-being.  Actions which are likely to magnify sentient well-being are good, and those which are likely to diminish it are evil.  It’s no defence of an action that it makes sense within an archaic, pre-scientific view of the world – a view in which misfortunes are often caused by witches’ spells, angry demons, or spiteful disembodied minds.

Here, “science” means more than the findings of any one branch of science, whether that is physics, biology, psychology, or sociology.  Instead, it is the general disciplined outlook on life that seeks to determine objective facts and connections, and which is open to making hypotheses, gathering data in support of these hypotheses, and refining hypotheses in the light of experimental findings.  As science finds out more about the causes of human well-being in a wide variety of circumstances, we can speak with greater confidence about matters which, formerly, caused people to defer to either religion or philosophy.

Unsurprisingly, the book has stirred up a raucous hornet’s nest of criticism.  Harris addresses most of these criticisms inside the book itself (which suggests that many reviewers were failing to pay attention) and picks up the discussion again on his blog. He summarises his view as follows:

Morality and values depend on the existence of conscious minds—and specifically on the fact that such minds can experience various forms of well-being and suffering in this universe. Conscious minds and their states are natural phenomena… fully constrained by the laws of Nature (whatever these turn out to be in the end). Therefore, there must be right and wrong answers to questions of morality and values that potentially fall within the purview of science. On this view, some people and cultures will be right (to a greater or lesser degree), and some will be wrong, with respect to what they deem important in life.

As Harris makes clear, this is far from being an abstract, other-worldly discussion.  Cultures are clashing all the time, with lots of dramatic consequences for human well-being.  Seeing these clashes, are we to be moral relativists (saying “different cultures are best for different peoples, and there’s no way to objectively compare them”) or are we to be moral realists (saying “some cultures promote significantly more human flourishing than others, and are to be objectively preferred as a result”)?  And if we are to be moral realists, do we resolve our moral arguments by deference to religious tradition, or by open-minded investigation of real-world connections (investigations such as those proposed, indeed,  by Tim Harford in “Adapt”)?  In the light of these questions, here are some arguments that deserve thought:

  • There’s a useful comparison between the science of human values (the project espoused by Harris), and a science of diets (what we should eat, in order to enjoy good health).  In both cases, we’re currently far from having all the facts.  And in both cases, there are frequently several right answers.  But not all diets are equally good.  Similarly, not all cultures are equally good.  And what makes one diet better than another will be determined by facts about the physical world – such as the likely effects (direct and indirect) of different kinds of fats and proteins and sugars and vitamins on our bodies and minds.  While people still legitimately disagree about diets, that’s not a reason to say that science can never answer such questions.  Likewise, present-day disagreements about specific causes of happiness, mental flourishing, and general sentient well-being, do not mean these causes fail to exist, or that we can never know them.
  • Likewise with the science of economics.  We’re still far from having a complete understanding of how different monetary and financial policies impact the long-term health of the economy.  But that doesn’t mean we should throw up our hands and stop searching for insight about likely cause and effect.  The discipline of economics, imperfect though it is, survives in an as-yet-incomplete state.  The same goes for political science too.  And, likewise, for the science of the moral landscape.
  • Attempts to reserve some special area of “moral insight” for religion are indefensible.  As Harris says, “How is it that most Jews, Christians, and Muslims are opposed to slavery? You don’t get this moral insight from scripture, because the God of Abraham expects us to keep slaves. Consequently, even religious fundamentalists draw many of their moral positions from a wider conversation about human values that is not, in principle, religious.”  (I especially recommend Harris’s excoriating demolition of the spurious arguments given by Francis Collins in his surprisingly widely respected book “The Language of God: A Scientist Presents Evidence for Belief”.)

Mindsight: The New Science of Personal Transformation, by Daniel Siegel

The next book on my list serves as a vivid practical illustration of the kind of scientifically-informed insight that Harris talks about – new insight about connections between the brain and mental well-being.  Mindsight: The New Science of Personal Transformation contains numerous case histories of people who:

  • Started off lacking one or more elements of mental well-being
  • Became a patient of the author, Dr Daniel Siegel – a Harvard-trained physician
  • Followed one or other program of mindfulness – awareness and monitoring of the patterns of energy and information flowing in the brain
  • Became more integrated and fulfilled as a result.

To quote from the book’s website:

“Mindsight” [is] the potent skill that is the basis for both emotional and social intelligence. Mindsight allows you to make positive changes in your brain–and in your life.

  • Is there a memory that torments you, or an irrational fear you can’t shake?
  • Do you sometimes become unreasonably angry or upset and find it hard to calm down?
  • Do you ever wonder why you can’t stop behaving the way you do, no matter how hard you try?
  • Are you and your child (or parent, partner, or boss) locked in a seemingly inevitable pattern of conflict?

What if you could escape traps like these and live a fuller, richer, happier life?  This isn’t mere speculation but the result of twenty-five years of careful hands-on clinical work by Daniel J. Siegel, M.D… one of the revolutionary global innovators in the integration of brain science into the practice of psychotherapy. Using case histories from his practice, he shows how, by following the proper steps, nearly everyone can learn how to focus their attention on the internal world of the mind in a way that will literally change the wiring and architecture of their brain.

Siegel is, of course, aware that drugs can often play a role in addressing mental issues.  However, his preference in many cases is for patients to learn and practice various skills in mental introspection.  His belief – which he backs up by reference to contemporary scientific findings – is that practices such as meditation can change the physical structure of the brain in significant ways.  (And there are times when it can relieve recurring back pain too, as in one case history covered.)

Siegel defines the mind as “an embodied and relational process that regulates the flow of energy and information”.  He goes on to say:

So how would you regulate the mind?  By developing the ability to see mental activity with more clarity and then modify it with more effectiveness… there’s something about being able to see and influence your internal world that creates more health.

Out of the many books on psychotherapy that I’ve read over the years, this is one of the very best.  The case studies are described in sufficient depth to make them absorbing.  They’re varied, as well as unpredictable.  The neuroscience in the book is no doubt simplified at times, but gels well with what I’ve picked up elsewhere.  And the repeated emphasis on “integration” provides a powerful unifying theme:

[Integration is] a process by which separate elements are linked together into a working whole…  For example, integration is at the heart of how we connect to one another in healthy ways, honoring one another’s differences while keeping our lines of communication wide open. Linking separate entities to one another—integration—is also important for releasing the creativity that emerges when the left and right sides of the brain are functioning together.

Integration enables us to be flexible and free; the lack of such connections promotes a life that is either rigid or chaotic, stuck and dull on the one hand or explosive and unpredictable on the other. With the connecting freedom of integration comes a sense of vitality and the ease of well-being. Without integration we can become imprisoned in behavioral ruts—anxiety and depression, greed, obsession, and addiction.

By acquiring mindsight skills, we can alter the way the mind functions and move our lives toward integration, away from these extremes of chaos or rigidity. With mindsight we are able to focus our mind in ways that literally integrate the brain and move it toward resilience and health.

The sections in the book on meditation are particularly interesting.  As Siegel has become aware, the techniques he recommends have considerable alignment with venerable practices from various eastern traditions – such as the practice of “mindfulness”.  However, the attraction of these techniques isn’t that they are venerable.  It is that there’s a credible scientific explanation of why they work – an explanation that is bolstered by contemporary clinical experience.

Good Strategy Bad Strategy: The Difference and Why It Matters, by Richard Rumelt

From a great book on psychotherapy, let me finish by turning to a great book on strategy – perhaps the best book on strategy that I’ve ever read: Good Strategy Bad Strategy: The Difference and Why It Matters.  The author, Richard Rumelt, Professor of Business and Society at UCLA Anderson School of Management, is a veteran analyst of strategy, who gained his first degree as long ago as 1963 (in Electrical Engineering from the University of California, Berkeley).  He speaks with an accumulated lifetime of wisdom, having observed countless incidents of both “bad strategy” and “good strategy” over five decades of active participation in industry.

“Strategy” is the word which companies often use when justifying their longer-term actions.  They do various things, they say, in pursuit of their strategic objectives.  Here, “strategy” goes beyond “business case”.  Strategy is a reason for choosing between different possible business cases – and can provide reasons for undertaking projects even in the absence of a strong business case.  By the way, it’s not just companies that talk about strategy.  Countries can have strategies too, as can departments within governments.  And the same applies to individuals: someone’s personal strategy can be an explicit reason for choosing between different possible alternative courses of action.

It’s therefore a far from ideal situation that much of what people think of as a strategy is instead, in Rumelt’s words, “fluff” or “wishful thinking”:

It’s easy to tell a bad [strategy] from a good one. A bad one is full of fluff: fancy language covering up the lack of content. Enron’s so-called strategy was littered with meaningless buzzwords explaining its aim to evolve to a state of “sophisticated value extraction”. But in reality its chief strategies could be summed up as having an electronic trading platform, being an over-the-counter broker and acting as an information provider. These are not strategies, they are just names, like butcher, baker and candlestick maker…

Bad strategy is long on goals and short on policy or action.  It assumes that goals are all you need.  It puts forward strategic objectives that are incoherent and, sometimes, totally impractical.  It uses high-sounding words and phrases to hide these failings…

The core of [good] strategy work is always the same: discovering the critical factors in a situation and designing a way of coordinating and focusing actions to deal with those factors…

Bad strategy tends to skip over pesky details such as problems.  It ignores the power of choice and focus, trying instead to accommodate a multitude of conflicting demands and interests.  Like a quarterback whose only advice to teammates is “Let’s win”, bad strategy covers up its failure to guide by embracing the language of broad goals, ambition, vision, and values.  Each of these elements is, of course, an important part of human life.  But, by themselves, they are not substitutes for the hard work of strategy…

If you fail to identify and analyse the obstacles, you don’t have a strategy.  Instead, you have either a stretch goal, a budget, or a list of things you wish would happen.

The mention of a specific company above – Enron – is an example of a striking pattern Rumelt follows throughout his book: he names guilty parties.  Other “guilty parties” identified in the midst of fascinating narratives include CEOs of Lehman Brothers, International Harvester, Ford Motor Company, DEC, Telecom Italia, and metal box manufacturer Crown Cork & Seal.

Individuals who are highlighted, in contrast, as examples of good strategy include titans from military history – General Norman Schwarzkopf, Admiral Nelson, Hannibal, and Hebrew shepherd boy David (in his confrontation with Goliath) – as well as industry figures such as Sam Walton, Steve Jobs, Intel’s Andy Grove, IBM’s Lou Gerstner, and a range of senior managers at Cisco.  The tales recounted are in many ways already well known, but in each case Rumelt draws out surprising insight.  (Rumelt’s extended account of Hannibal’s victory over the Roman army at Cannae in 216 BC reveals many unexpected lessons.)

Why do so many companies, government departments, and individuals have “bad strategy”?  Rumelt identifies four underlying reasons:

  • A psychological unwillingness or inability to make choices (this can be linked with an organisation being too decentralised)
  • A growing tide of “template style” strategic planning, which gives too much attention to vision, mission, and values, rather than to hard analysis of a company’s situation
  • An over-emphasis on charismatic qualities in leaders
  • The superficially appealing “positive thinking” movement.

Rumelt’s treatment of “positive thinking” is particularly illuminating – especially for a reader like me who harbours many sympathies for the idea that it’s important to maintain a positive, upbeat attitude.  Rumelt traces the evolution of this idea over more than a century:

This fascination with positive thinking, and its deep connection to inspirational and spiritual thought, was invented around 150 years ago in New England as a mutation of Protestant Christian individualism…

The amazing thing about [the ideology of positive thinking] is that it is always presented as if it were new!  And no matter how many times the same ideas are repeated, they are received by many listeners with fresh nods of affirmation.  These ritual recitations obviously tap into a deep human capacity to believe that intensely focused desire is magically rewarded…

I do not know whether meditation and other inward journeys perfect the human soul.  But I do know that believing … that by thinking only of success you can become a success, is a form of psychosis and cannot be recommended as an approach to management or strategy.  All [good] analysis starts with the consideration of what might happen, including unwelcome events.  I would not care to fly in an aircraft designed by people who focused only on an image of a flying machine and never considered modes of failure…

The doctrine that one can impose one’s visions and desires on the world by thought alone retains a powerful appeal to many people.  Its acceptance displaces critical thinking and good strategy.

As well as pointing out flaws in bad strategy, Rumelt provides wide-ranging clear advice on what good strategy contains:

A good strategy works by harnessing power and applying it where it will have the greatest effect.  In the short term, this may mean attacking a problem or rival with adroit combinations of policy, actions, and resources.  In the longer term, it may involve cleverly using policies or resource commitments to develop capabilities that will be of value in future contests.  In either case, a “good strategy” is an approach that magnifies the effectiveness of actions by finding and using sources of power…

Strategic leverage arises from a mixture of anticipation, insight into what is most pivotal or critical in a situation, and making a concentrated application of effort…

A much more effective way to compete is the discovery of hidden power in the situation.

Later chapters amplify these ideas by providing many illuminating suggestions for how to build an effective strategy.  Topics covered include proximate objectives, chain-link systems, design, focus (“pivot points”), competitive advantage, anticipation and exploitation of industry trends (“dynamics”), and inertia and entropy.  Here are just a few illustrative snippets from these later chapters:

In building sustained strategic advantage, talented leaders seek to create constellations of activities that are chain-linked.  This adds extra effectiveness to the strategy and makes competitive imitation difficult…

Many effective strategies are more designs than decisions – are more constructed than chosen…

When faced with a corporate success story, many people ask, “How much of the success was skill and how much was luck?”  The saga of Cisco Systems vividly illustrates that the mix of forces is richer than just skill and luck.  Absent the powerful waves of change sweeping through computing and telecommunications, Cisco would have remained a small niche player.  Cisco’s managers and technologists were very skillful at identifying and exploiting these waves of change…

An organisation’s greatest challenge may not be external threats or opportunities, but instead the effects of entropy and inertia.  In such a situation, organisational renewal becomes a priority.  Transforming a complex organisation is an intensely strategic challenge.  Leaders must diagnose the causes and effects of entropy and inertia, create a sensible guiding policy for effecting change, and design a set of coherent actions designed to alter routines, culture, and the structure of power and influence.

You can read more on the book’s website.

The book is addressed to people working within organisations, with responsibility for strategy in these organisations.  However, most of the advice is highly valid for individuals too.  Are the big personal goals we set ourselves merely “wishful thinking”, or are they grounded in a real analysis of our own personal situation?  Do they properly take account of our personal trends, inertia, entropy, and sources of competitive power?

8 May 2011

Future technology: merger or trainwreck?

Filed under: AGI, computer science, futurist, Humanity Plus, Kurzweil, malware, Moore's Law, Singularity — David Wood @ 1:35 pm

Imagine.  You’ve been working for many decades, benefiting from advances in computing.  The near miracles of modern spreadsheets, Internet search engines, collaborative online encyclopaedias, pattern recognition systems, dynamic 3D maps, instant language translation tools, recommendation engines, immersive video communications, and so on, have been steadily making you smarter and increasing your effectiveness.  You  look forward to continuing to “merge” your native biological intelligence with the creations of technology.  But then … bang!

Suddenly, much faster than you expected, a new breed of artificial intelligence is bearing down on you, like a huge intercity train rushing forward at several hundred kilometres per hour.  Is this the kind of thing you can easily hop onto, and incorporate into your own evolution?  Care to stand in front of this train, sticking out your thumb to try to hitch a lift?

This image comes from a profound set of slides used by Jaan Tallinn, one of the programmers behind Kazaa and a founding engineer of Skype.  Jaan was speaking last month at the Humanity+ UK event which reviewed the film “Transcendent Man” – the film made by director Barry Ptolemy about the ideas and projects of serial inventor and radical futurist Ray Kurzweil.  You can find a video of Jaan’s slides on blip.tv, and videos (but with weaker audio) of talks by all five panelists on KoanPhilosopher’s YouTube channel.

Jaan was commenting on a view that was expressed again and again in the Kurzweil film – the view that humans and computers/robots will be able to merge into some kind of hybrid “post-human”.

This “merger” viewpoint has a lot of attractions:

  • It builds on the observation that we have long co-existed with the products of technology – such as clothing, jewellery, watches, spectacles, heart pacemakers, artificial hips, cochlear implants, and so on
  • It provides a reassuring answer to the view that computers will one day be much smarter than (unmodified) humans, and that robots will be much stronger than (unmodified) humans.

But this kind of merger presupposes that the pace of improvement in AI algorithms will remain slow enough that we humans can remain in charge.  In short, it presupposes what people call a “soft take-off” for super-AI, rather than a sudden “hard take-off”.  In his presentation, Jaan offered three arguments in favour of a possible hard take-off.

The first argument is a counter to a counter.  The counter-argument, made by various critics of the concept of the singularity, is that Kurzweil’s views on the emergence of super-AI depend on the continuation of exponential curves of technological progress.  Since few people believe that these exponential curves really will continue indefinitely, the whole argument is suspect.  The counter to the counter is that the emergence of super-AI makes no assumption about the shape of the curve of progress.  It just depends upon technology eventually reaching a particular point – namely, the point where computers are better than humans at writing software.  Once that happens, all bets are off.
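To see why “all bets are off” at that particular point, here is a toy simulation – entirely my own sketch, with arbitrary constants, not anything from Jaan’s slides – contrasting steady engineering progress with progress that feeds back on itself once software can improve software:

```python
# Toy model, purely illustrative: capability under steady human-driven
# improvement versus recursive self-improvement after a crossover point.
def simulate(years: int, crossover: float) -> None:
    steady = recursive = 1.0
    for year in range(1, years + 1):
        steady += 0.5  # fixed annual improvement from human engineers
        if recursive < crossover:
            recursive += 0.5  # pre-crossover: same pace as the human-driven line
        else:
            recursive *= 2.0  # post-crossover: each generation improves the next
        print(f"year {year:2d}: steady={steady:6.1f}  recursive={recursive:10.1f}")

simulate(years=15, crossover=5.0)
```

The exact constants are meaningless; the point is the shape. The two lines track each other for years, and then one abruptly leaves the other far behind – which is why the shape of the progress curve before the crossover tells you very little.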

The second argument is that getting the right algorithm can make a tremendous difference.  Computer performance isn’t just dependent on improved hardware.  It can, equally, be critically dependent upon finding the right algorithms.  And sometimes the emergence of the right algorithm takes the world by surprise.  Here, Jaan gave the example of the unforeseen announcement in 1993 by mathematician Andrew Wiles of a proof of the centuries-old Fermat’s Last Theorem.  What Andrew Wiles did for the venerable problem of Fermat’s last theorem, another researcher might do for the even more venerable problem of superhuman AI.

The third argument is that AI researchers are already sitting on what can be called a huge “hardware overhang”.

As Jaan states:

It’s important to note that with every year the AI algorithm remains unsolved, the hardware marches to the beat of Moore’s Law – creating a massive hardware overhang.  The first AI is likely to find itself running on a computer that’s several orders of magnitude faster than needed for human level intelligence.  Not to mention that it will find an Internet worth of computers to take over and retool for its purpose.
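A rough back-of-the-envelope sketch of the size of that overhang (my own arbitrary numbers, not Jaan’s): if hardware price-performance doubles roughly every two years, then every extra year the algorithm remains unsolved multiplies the surplus compute available to the first AI.

```python
# Illustrative: hardware price-performance doubling every 2 years means
# the first human-level AI inherits a large compute surplus.
DOUBLING_PERIOD_YEARS = 2.0

for years_unsolved in (5, 10, 20):
    overhang = 2 ** (years_unsolved / DOUBLING_PERIOD_YEARS)
    print(f"{years_unsolved:2d} years unsolved -> roughly {overhang:,.0f}x "
          "more compute than human-level intelligence requires")
```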

Imagine.  The worst set of malware so far created – exploiting a combination of security vulnerabilities, other software defects, and social engineering.  Think how quickly that can spread around the Internet.  Now imagine the author of that malware being 100 times smarter.  Human users will find themselves almost unable to resist clicking on tempting links and unthinkingly providing passwords to screens that look identical to the ones they were half-expecting to see.  Vast computing resources will quickly become available to the rapidly evolving, intensely self-improving algorithms.  It will be the mother of all botnets, ruthlessly pursuing whatever are the (probably unforeseen) logical conclusions of the software that gave it birth.

OK, so the risk of hard take-off is very difficult to estimate.  At the H+UK meeting, the panelists all expressed significant uncertainty about their predictions for the future.  But that’s not a reason for inaction.  If we thought the risk of super-AI hard take-off in the next 20 years was only 5%, that would still merit deep thought from us.  (Would you get on an airplane if you were told the risk of it plummeting out of the sky was 5%?)

I’ll end with another potential comparison, which I’ve written about before.  It’s another example about underestimating the effects of breakthrough new technology.

On 1st March 1954, the US military performed their first test of a dry-fuel hydrogen bomb, at the Bikini Atoll in the Marshall Islands.  The explosive yield was expected to be from 4 to 6 megatons.  But when the device was exploded, the yield was 15 megatons, two and a half times the expected maximum.  As the Wikipedia article on this test explosion explains:

The cause of the high yield was a laboratory error made by designers of the device at Los Alamos National Laboratory.  They considered only the lithium-6 isotope in the lithium deuteride secondary to be reactive; the lithium-7 isotope, accounting for 60% of the lithium content, was assumed to be inert…

Contrary to expectations, when the lithium-7 isotope is bombarded with high-energy neutrons, it absorbs a neutron then decomposes to form an alpha particle, another neutron, and a tritium nucleus.  This means that much more tritium was produced than expected, and the extra tritium in fusion with deuterium (as well as the extra neutron from lithium-7 decomposition) produced many more neutrons than expected, causing far more fissioning of the uranium tamper, thus increasing yield.

This resultant extra fuel (both lithium-6 and lithium-7) contributed greatly to the fusion reactions and neutron production and in this manner greatly increased the device’s explosive output.
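In symbols, the two reactions at the heart of that explanation are as follows (standard nuclear notation; the 17.6 MeV figure for deuterium-tritium fusion is textbook physics rather than something from the quoted article):

```latex
% Lithium-7, assumed inert, actually breaks up under fast-neutron
% bombardment, yielding extra tritium:
\[
{}^{7}\mathrm{Li} + n \longrightarrow {}^{4}\mathrm{He} + {}^{3}\mathrm{H} + n
\]
% The extra tritium then fuses with deuterium, releasing energy and
% further neutrons (which go on to fission the uranium tamper):
\[
{}^{3}\mathrm{H} + {}^{2}\mathrm{H} \longrightarrow {}^{4}\mathrm{He} + n
  \quad (+\,17.6\ \mathrm{MeV})
\]
```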

Sadly, this calculation error resulted in much more radioactive fallout than anticipated.  Many of the crew in a nearby Japanese fishing boat, the Lucky Dragon No. 5, became ill in the wake of direct contact with the fallout.  One of the crew subsequently died from the illness – the first human casualty from thermonuclear weapons.

Suppose the error in calculation had been significantly worse – perhaps by a factor of thousands rather than 2.5.  This might seem unlikely, but when we deal with powerful unknowns, we cannot rule out powerful unforeseen consequences.  For example, imagine if extreme human activity somehow interfered with the incompletely understood mechanisms governing supervolcanoes – such as the one that exploded around 73,000 years ago at Lake Toba (Sumatra, Indonesia) and which is thought to have reduced the worldwide human population at the time to perhaps as few as several thousand people.

The more quickly things change, the harder it is to foresee and monitor all the consequences.  The more powerful our technology becomes, the more drastic the unintended consequences become.  Merger or trainwreck?  I believe the outcome is still wide open.

7 May 2011

Workers beware: the robots are coming

Filed under: books, challenge, disruption, Economics, futurist, robots — David Wood @ 9:07 pm

What’s your reaction to the suggestion that, at some stage in the next 10-30 years, you will lose your job to a robot?

Here, by the word “robot”, I’m using shorthand for “automation” – a mixture of improvements in hardware and software. The suggestion is that automation will continue to improve until it reaches the stage when it is cheaper for your employer to use computers and/or robots to do your job, than it is to continue employing you. This change has happened in the past with all manner of manual and/or repetitive work. Could it happen to you?

People typically have one of three reactions to this suggestion:

  1. “My job is too complex, too difficult, too human-intense, etc, for a robot to be able to do it in the foreseeable future. I don’t need to worry.”
  2. “My present job may indeed be outsourced to robots, but over the same time period, new kinds of job will be created, and I’ll be able to do one of these instead. I don’t need to worry.”
  3. “When the time comes that robots can do all the kinds of work that I can do, better than me, we’ll be living in an economy of plenty. I won’t actually need to work – I’ll be happy to enjoy lots more leisure time. I don’t need to worry.”

Don’t need to worry? Think again. That’s effectively the message in Martin Ford’s 2009 book “The Lights in the Tunnel”. (If you haven’t heard of that book, perhaps it’s because the title is a touch obscure. After all, who wants to read about “lights in a tunnel”?)

The subtitle gives a better flavour of the content: “Automation, accelerating technology, and the economy of the future“. And right at the top of the front cover, there’s yet another subtitle: “A journey to the economic landscape of the coming decades“. But neither of these subtitles conveys the challenge which the book actually addresses. This is a book that points out real problems with increasing automation:

  • Automation will cause increasing numbers of people to lose their current jobs
  • Accelerating automation will mean that robots can quickly become able to do more jobs – their ability to improve and learn will far outpace that of human workers – so the proportion of people who are unemployed will grow and grow
  • Without proper employment, a large proportion of consumers will be deprived of income, and will therefore lack the spending power which is necessary for the continuing vibrancy of the economy
  • Even as technology improves, the economy will stagnate, with disastrous consequences
  • This is likely to happen long before technologies such as nanotech have reached their full potential – so that any ideas of us existing at that time in an economy of plenty are flawed.

Although the author could have chosen a better title for his book, the contents are well argued, and easy to read. They deserve a much wider hearing.  They underscore the important theme that the process of ongoing technological improvement is far from being an inevitable positive.

There are essentially two core threads to the book:

  • A statement of the problem – this effectively highlights issues with each of the reactions 1-3 listed earlier;
  • Some tentative ideas for a possible solution.

The book looks backwards in history, as well as forwards to the future. For example, it includes interesting short commentaries on both Marx and Keynes. One of the most significant backward glances considers the case of the Luddites – the early 19th century manufacturing workers in the UK who feared that their livelihoods would be displaced by factory automation. Doesn’t history show us that such fears are groundless? Didn’t the Luddites (and their descendants) in due course find new kinds of employment? Didn’t automation create new kinds of work, at the same time as it destroyed some existing kinds of work? And won’t that continue to happen?

Well, it’s a matter of pace.  One of the most striking pictures in the book is a rough sketch of the variation over time of the comparative ability of computers and humans to perform routine jobs:

As Martin Ford explains:

I’ve chosen an arbitrary point on the graph to indicate the year 1812. After that year, we can reasonably assume that human capability continued to rise quite steeply until we reach modern times. The steep part of the graph reflects dramatic improvements to our overall living conditions in the world’s more advanced countries:

  • Vastly improved nutrition, public health, and environmental regulations have allowed us to remain relatively free from disease and reach our full biological potential
  • Investment in literacy and in primary and secondary education, as well as access to college and advanced education for some workers, has greatly increased overall capability
  • A generally richer and more varied existence, including easy access to books, media, new technologies and the ability to travel long distances, has probably had a positive impact on our ability to comprehend and deal with complex issues.

A free download of the entire book is available from the author’s website.  I’ll leave it to you to evaluate the author’s arguments for why the two curves in this sketch have the shape that they do.  To my mind, these arguments have a lot of merit.
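
For readers who like to see the pace argument made concrete, here is a toy numerical sketch of my own – the starting values and growth rates below are illustrative assumptions, not figures taken from Ford’s book:

    # Toy model (illustrative assumptions only, not Ford's figures):
    # machine capability grows exponentially, while human capability -
    # already close to its biological ceiling - improves only slowly.
    def machine_capability(year):
        """Relative capability, assumed to double every 10 years from 2010."""
        return 1.0 * 2 ** ((year - 2010) / 10.0)

    def human_capability(year):
        """Assumed to start 10x ahead, improving just 0.2% per year."""
        return 10.0 * 1.002 ** (year - 2010)

    year = 2010
    while machine_capability(year) < human_capability(year):
        year += 1
    print(f"Under these assumptions, the curves cross around {year}")
    # -> around 2045

Change the assumptions and the crossing date moves, but under any sustained exponential the crossing arrives eventually – which is the book’s central point.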

The point where these two curves cross – potentially a few decades into the future – will represent a new kind of transition point for the economy – perhaps the mother of all economic disruptions.  Yes, there will still be some new jobs created.  Indeed, in a blogpost last year, “Accelerating automation and the future of work“, I listed 20 new occupations that people could be doing in the next 20 years:

  1. Body part maker
  2. Nano-medic
  3. Pharmer of genetically engineered crops and livestock
  4. Old age wellness manager/consultant
  5. Memory augmentation surgeon
  6. ‘New science’ ethicist
  7. Space pilots, tour guides and architects
  8. Vertical farmers
  9. Climate change reversal specialist
  10. Quarantine enforcer
  11. Weather modification police
  12. Virtual lawyer
  13. Avatar manager / devotees / virtual teachers
  14. Alternative vehicle developers
  15. Narrowcasters
  16. Waste data handler
  17. Virtual clutter organiser
  18. Time broker / Time bank trader
  19. Social ‘networking’ worker
  20. Personal branders

However, the lifetimes of these jobs (before they too can be handled by improved robots) will shrink and shrink.  For a less esoteric example, consider the likely fate of a relatively new profession, radiology.  As Martin Ford explains:

A radiologist is a medical doctor who specializes in interpreting images generated by various medical scanning technologies. Before the advent of modern computer technology, radiologists focused exclusively on X-rays. This has now been expanded to include all types of medical imaging, including CT scans, PET scans, mammograms, etc.

To become a radiologist you need to attend college for four years, and then medical school for another four. That is followed by another five years of internship and residency, and often even more specialized training after that. Radiology is one of the most popular specialties for newly minted doctors because it offers relatively high pay and regular work hours; radiologists generally don’t need to work weekends or handle emergencies.

In spite of the radiologist’s training requirement of at least thirteen additional years beyond high school, it is conceptually quite easy to envision this job being automated. The primary focus of the job is to analyze and evaluate visual images. Furthermore, the parameters of each image are highly defined since they are often coming directly from a computerized scanning device. Visual pattern recognition software is a rapidly developing field that has already produced significant results…

Radiology is already subject to significant offshoring to India and other places. It is a simple matter to transmit digital scans to an overseas location for analysis. Indian doctors earn as little as 10 percent of what American radiologists are paid… Automation will often come rapidly on the heels of offshoring, especially if the job focuses purely on technical analysis with little need for human interaction. Currently, U.S. demand for radiologists continues to expand because of the increase in use of diagnostic scans such as mammograms. However, this seems likely to slow as automation and offshoring advance and become bigger players in the future. The graduating medical students who are now rushing into radiology for its high pay and relative freedom from the annoyances of dealing with actual patients may eventually come to question the wisdom of their decision
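
To give a flavour of how routine the core machinery of visual pattern recognition has become, here is a minimal sketch – my own illustration, not anything from Ford’s book – using the small handwritten-digit dataset that ships with the scikit-learn library.  Real medical imaging is vastly harder, but the shape of the pipeline is the same: images in, labels out.

    # Minimal visual pattern recognition sketch (illustrative only).
    # Trains a stock support-vector classifier on 8x8 digit images.
    from sklearn.datasets import load_digits
    from sklearn.model_selection import train_test_split
    from sklearn.svm import SVC

    digits = load_digits()  # 1797 labelled 8x8 grayscale images
    X_train, X_test, y_train, y_test = train_test_split(
        digits.data, digits.target, random_state=0)

    model = SVC(gamma=0.001)  # off-the-shelf classifier, default settings
    model.fit(X_train, y_train)
    print(f"Held-out accuracy: {model.score(X_test, y_test):.1%}")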

Radiology is far from being the only “high-skill” occupation at risk from this trend.  Jobs which involve a high degree of “expert system” knowledge will come under threat from increasingly expert AI systems.  Jobs which involve listening to human speech will come under threat from increasingly accurate voice recognition systems.  And so on.

This leaves two questions:

  1. Can we look forward, as some singularitarians and radical futurists assert, to incorporating increasing technological smarts within our own human nature, allowing us in a sense to merge with the robots of the future?  In that case, a scenario of “the robots will take all our jobs” might change to “substantially enhanced humans will undertake new types of work”
  2. Alternatively, if robots do much more of the work needed within society, how will the transition be handled, to a society in which humans have much more leisure time?

I’ll return to the first of these questions in a subsequent blogpost.  Martin Ford’s book has a lot to say about the second of these questions.  And he recommends a series of ideas for consideration:

  • Without large numbers of well-paid consumers able to purchase goods, the global economy risks going into decline, even as technology radically improves
  • With fewer people working, there will be much less income tax available to governments.  Taxation will need to switch towards corporation tax and consumption taxes (a toy calculation of the scale involved follows this list)
  • With more people receiving handouts from the state, there’s a risk of losing many aspects of economic structure that have previously been thought essential
  • We need to give more thought, now, to ideas for differential state subsidy of different kinds of non-work activity – incentivising some activities over others.  That way, we’ll be ready for the increasing disturbances placed on our economy by the rise of the robots.
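
Here’s that toy calculation.  Every number below is an assumption of mine, chosen only to show the shape of the problem, and it deliberately ignores second-order effects such as reduced spending by the newly unemployed:

    # Toy fiscal arithmetic (all figures are illustrative assumptions).
    workers          = 30_000_000         # assumed workforce
    avg_income       = 30_000             # assumed average taxable income
    income_tax_rate  = 0.25
    consumption_base = 1_000_000_000_000  # assumed annual taxable spending

    receipts_before = workers * avg_income * income_tax_rate
    receipts_after  = receipts_before * 0.75  # a quarter of jobs automated away
    shortfall = receipts_before - receipts_after

    print(f"Income tax shortfall: {shortfall / 1e9:.0f}bn per year")
    print(f"Consumption tax rise needed to cover it: "
          f"~{100 * shortfall / consumption_base:.1f} percentage points")
    # -> 56bn per year, ~5.6 percentage points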

For further coverage of these and related ideas, see Martin Ford’s blog on the subject, http://econfuture.wordpress.com/.

17 April 2011

Towards inner humanity+

Filed under: challenge, films, Humanity Plus, intelligence, vision — David Wood @ 11:06 am

There’s a great scene near the beginning of the film “Limitless“.  The central character, Eddie (played by Bradley Cooper), has just been confronted by his neighbour, Valerie. It’s made clear to the viewers that Valerie is generally nasty and hostile to Eddie. Worse, Eddie owes money to Valerie, and is overdue payment. It seems that a fruitless verbal confrontation looms. Or perhaps Eddie will try to quickly evade her.

But this time it’s different.  Eddie’s brain has been switched into a super-fast enhanced mode (which is the main theme of the film).  Does he take the opportunity to weaken Valerie with fast verbal gymnastics and put-downs?

Instead, he uses his new-found rocket-paced analytic abilities to much better purpose.  Picking up the tiniest of clues, he realises that Valerie’s foul mood is caused by something unconnected with Eddie himself: Valerie is having a particular problem with her legal studies.  Dredging up from the depths of his memory long-past discussions with former student friends, Eddie is able to suggest ideas to Valerie that rouse her interest and defuse her hostility.  Soon, she’s more receptive.  The two sit down together, and Eddie guides her in the swift completion of a brilliant essay for the tricky homework assignment that has been preying on Valerie’s nerves.

Anyone who watches Limitless is bound to wonder: can technology – such as a smart drug – really have that kind of radical transformative effect on human ability?

Humanity+ is the name of the worldview that says, not only is that kind of technology feasible (within the lifetimes of many people now alive), but it is desirable.  If you watch Limitless right through to the end, you’ll find plenty in the film that offers broad support to the Humanity+ mindset.  That’s a pleasant change from the usual Hollywood conviction that technology-induced human enhancement typically ends up in dysfunction and loss of important human characteristics.

But the question remains: if we become smarter, does it mean we would be better people?  Or would we tend to use accelerated mental faculties to advance our own self-centred personal agendas?

A similar question was raised by an audience member at the “Post Transcendent Man” event in Birkbeck in London last weekend.  Is it appropriate to consider intellectual enhancement without also considering moral enhancement?  Or is it like giving a five year old the keys to a sports car?  Or like handing a bunch of Mujahideen terrorists the instructions to create advanced nuclear weaponry?

Take another example of accelerating technology: the Internet.  This can be used to spy and to hassle, as well as to educate and uplift.  Consider the chilling examples mentioned in the recent Telegraph article “The toxic rise of internet bullies“:

At first glance, Natasha MacBryde’s Facebook page is nothing unusual. A pretty, slightly self-conscious blonde teenager gazes out, posed in the act of taking her own picture. But unlike other pages, this has been set up in commemoration, following her death under a train earlier this month. Now though it has had to be moderated after it was hijacked by commenters who mocked both Natasha and the manner of her death heartlessly.

“Natasha wasn’t bullied, she was just a whore,” said one, while another added: “I caught the train to heaven LOL [laugh out loud].” Others clicked on the “like” symbol, safe in their anonymity, to indicate that they agreed. The messages were removed after a matter of hours, but Natasha’s grieving father Andrew revealed that Natasha’s brother had also discovered a macabre video – entitled “Tasha The Tank Engine” on YouTube (it has since been removed). “I simply cannot understand how or why these people get any enjoyment or satisfaction from making such disgraceful comments,” he said.

He is far from alone. Following the vicious sexual assault on NBC reporter Lara Logan in Cairo last week, online debate on America’s NPR website became so ugly that moderator Mark Memmott was forced to remove scores of comments and reiterate the organisation’s stance on offensive message-posting…

It’s not just anonymous comments that cause concern.  As Richard Adhikari notes in his article “The Internet’s Destruction of Critical Thinking“,

Prior to the dawn of the Internet Age, anyone who wanted to keep up with current events could pretty much count on being exposed to a diversity of subjects and viewpoints. News consumers were passive recipients of content delivered by print reporters or TV anchors, and choices were few. Now, it’s alarmingly easy to avoid any troublesome information that might provoke one to really think… few people do more than skim the surface — and as they do with newspapers, most people tend to read only what interests them. Add to that the democratization of the power to publish, where anyone with access to the Web can put up a blog on any topic whatsoever, and you have a veritable Tower of Babel…

Of course, the more powerful the technology, the bigger the risks if it is used in pursuit of our lower tendencies.  For a particularly extreme example, review the plot of the 1956 science fiction film “Forbidden planet”, as covered here.  As Roko Mijic has explained:

Here are two ways in which the amplification of human intelligence could go disastrously wrong:

  1. As in the Forbidden Planet scenario, this amplification could unexpectedly magnify feelings of ill-will and negativity – feelings which humans sometimes manage to suppress, but which can still exert strong influence from time to time;
  2. The amplification could magnify principles that generally work well in the usual context of human thought, but which can have bad consequences when taken to extremes.

For all these reasons, it’s my strong conviction that any quest for what might be called “outer Humanity+” must be accompanied (and, indeed, preceded) by a quest for “inner Humanity+”.  Both these quests consider the ways in which accelerating technology can enhance human capabilities.  However, the differences are summed up in the following comparison:

Outer Humanity+                               Inner Humanity+

  • Seeks greater strength                    • Seeks greater kindness
  • Seeks greater speed                       • Seeks deeper insight
  • Seeks to transcend limits                 • Seeks self-mastery
  • Seeks life extension                      • Seeks life expansion
  • Seeks individual progress                 • Seeks cooperation
  • Seeks more experiences                    • Seeks more fulfilment
  • Seeks greater intelligence                • Seeks greater wisdom
  • Generally optimistic about technology     • Has major concerns about technology
  • Generally hostile to goals and            • Has some sympathy to goals and
    practice of religion and meditation         practice of religion and meditation

Back to Eddie in Limitless.  It’s my hunch he was basically a nice guy to start with – except that he was ineffectual.  Once his brainpower was enhanced, he could be a more effectual nice guy.  His brain provided rapid insight on the problems and issues being faced by his neighbour – and proposed effective solutions.  In this example, greater strength led to a more effective kindness.  But if real-life technology delivers real-life intellect enhancement any time soon, all bets are off regarding whether it will result in greater kindness or greater unkindness.  In other words, all bets are off as to whether we’ll create a heaven-like state, or hell on earth.  For this reason, the quest to achieve Inner Humanity+ must overtake the quest to achieve Outer Humanity+.

2 April 2011

Virtual futures and digital natives

Filed under: disruption, Events, futurist, Humanity Plus — David Wood @ 6:34 pm

A child born today will be immersed in a world that is, more than ever, virtual…  With a single Google search, a child has instant access to a plethora of information. With Google Earth the entire globe can be navigated with little travel-cost endured. And languages can be translated without a single understanding of the complex linguistics of other cultures…

These words are taken from the blog for the forthcoming University of Warwick Virtual Futures 2.0’11 conference.  The stated theme of the conference is “Digital natives: fear of the flesh?”.  The phrase “digital native” refers to someone young enough (in body or in spirit) to find themselves at home in the fast-evolving digital connected world.

But is anyone truly at home in this world?  The author of the blog, Luke Robert Mason, continues as follows, drawing on comments made by performance artist Stelarc who took part in an earlier Virtual Futures conference:

But this virtual world is also plagued by complexity – a complexity born of information which the  biological brain is not designed to comprehend.  As performance artist Stelarc stated in his early work, “It is time to question whether a bipedal, breathing body with binocular vision and a 1400cc brain is an adequate biological form. It cannot cope with the quantity, complexity and quality of information it has accumulated; it is intimidated by the precision, speed and power of technology and it is biologically ill-equipped to cope with its new extraterrestrial environment.”

The Virtual Futures 2.0 conference rekindles a series of trailblazing conferences that the University of Warwick hosted in 1994, 1995, and 1996, attracting upwards of 300 attendees:

These conferences questioned the future possibility of the ‘virtual’ and alluded towards the impact of emerging technologies on society and culture. They were, at their time, revolutionary…

The topics discussed at the conferences in the 90’s included chaos theory, geopolitics, feminism, nanotechnology, cyberpunk fiction, machine music, net security, military strategy, plastic surgery, hacking, bio-computation, cognition, cryptography & capitalism. These topics are still poignant today with perhaps the addition of genetics, bio-engineering, neuroscience, artificial intelligence, bio-ethics and social media.

Call for papers

The conference organisers have now issued a call for papers:

The revival aims to reignite the debates over the implications of new and future communication technologies on art, society and politics. The conference will take place on the 18th-19th June 2011 and include paper presentations, panels, performances, screenings and installations.

We welcome researchers, scholars and artists to submit proposals for papers and/or performances around this year’s theme of: “Digital Natives: Fear of the Flesh?”…

Please send proposals (250 words max) to papers@virtualfutures.co.uk by 1st May 2011.

Interested in presenting or performing at the event?  As for myself, I’m preparing a proposal to speak at the conference.  I’m thinking about speaking on the topic “Beyond super phones to super humans – a journey along the spectrum of personal commitment to radical technological transformation“.

I like the conference focus on “digital natives” but I’m less convinced about the “Fear of the flesh?” coda.  Yes, my human flesh has lots of limitations.  But I look ahead to far-reaching bodily improvement, rather than to leaving my flesh altogether behind.  Other radical futurists, in contrast, seem to eagerly anticipate a time when their mind will be entirely uploaded into a virtual world.  There’s ground for lots of debate here:

  • Are these visions credible?
  • Are these visions desirable?
  • How should such visions be evaluated, in a world full of pressing everyday problems?
  • Which of these personal futures should we prioritise?

No doubt these questions, along with many others, will be tackled at the event.

Note: Virtual Futures 2.0 is organised at the University of Warwick with support from the Institute for Advanced Teaching and Learning, the School of Theatre, Performance and Cultural Policy Studies, and the Centre for History of Medicine, in association with Humanity+ UK.

19 March 2011

A singularly fine singularitarian panel?

Filed under: futurist, Humanity Plus, Kurzweil, Singularity — David Wood @ 12:37 pm

In a moment, I’ll get to the topic of a panel discussion on the Singularity – a panel I’ve dubbed (for reasons which should become clear) “Post Transcendent Man“. It’s a great bunch of speakers, and I’m expecting an intellectual and emotional mindfest.  But first, some background.

In the relatively near future, I expect increasing numbers of people to navigate the sea change described recently by writer Philippe Verdoux in his article Transhumanists coming out of the closet:

It wasn’t that long ago that listing transhumanism, human enhancement, the Singularity, technology-driven evolution, existential risks, and so on, as interests on one’s CV might result in a bit of embarrassment.

Over just the past decade and a half, though, there seems to have been a sea change in how these issues are perceived by philosophers and others: many now see them as legitimate subjects of research; they have, indeed, acquired a kind of academic respectability that they didn’t previously possess.

There are no doubt many factors behind this shift. For one, it seems to be increasingly apparent, in 2011, that technology and biology are coming together to form a new kind of cybernetic unity, and furthermore that such technologies can be used to positively enhance (rather than merely alter) features of our minds and bodies.

In other words, the claim that humans can “transcend” (a word I don’t much like, by the way) our biological limitations through the use of enhancement technologies seems to be increasingly plausible – that is, empirically speaking.

Thus, it seems to be a truism about our contemporary world that technology will, in the relatively near future, enable us to alter ourselves in rather significant ways. This is one reason, I believe, that more philosophers are taking transhumanism seriously…

On a personal note, when I first discovered transhumanism, I was extremely skeptical about its claims (which, by the way, I think every good scientific thinker should be). I take it that transhumanism makes two claims in particular, the first “descriptive” and the second “normative”: (i) that future technologies will make it possible for us to radically transform the human organism, potentially enabling us to create a new species of technologized “posthumans”; and (ii) that such a future scenario is preferable to all other possible scenarios. In a phrase: we not only can but ought to pursue a future marked by posthumanity…

One factor that leads people to pay more serious attention to this bundle of ideas – transhumanism, human enhancement, the Singularity, technology-driven evolution, existential risks, and so on – is the increasing coverage of these ideas in thoughtful articles in the mainstream media.  In turn, many of these articles have been triggered by the film Transcendent Man by director Barry Ptolemy, featuring the groundbreaking but controversial ideas and projects of inventor and futurist Ray Kurzweil.

The film has received interesting commentary in a number of mainstream outlets.

I had mixed views when watching the movie myself:

  • On the one hand, it contains a large number of profound sound bites – statements made by many of the talking heads on screen; any of these sound bites could, potentially, change someone’s life, if they reflect on the implications;
  • The film also covers many details of Kurzweil’s own biography, with archive footage of him at different stages of his career – this filled in many gaps in my own understanding, and gave me renewed respect for what he has accomplished as a professional;
  • On the other hand, although there are plenty of critical comments among the sound bites – comments highlighting potential problems or issues with Kurzweil’s ideas – the film never really lets the debate fly;
  • I found myself thinking – yes, that’s an interesting and important point, now let’s explore this further – but then the movie cut away to a different scene.

The movie has its official UK premiere at the London Science Museum on Tuesday 5th April.  Kurzweil himself will be in attendance, to answer questions raised by the audience.  The last time I checked, tickets were sold out.

Post Transcendent Man

To drill down more deeply into the potentially radical implications of Kurzweil’s ideas and projects, the UK chapter of Humanity+ has arranged an event at Birkbeck College (WC1E 7HX), Torrington Square in Central London on the afternoon (2pm-4.15pm) of Saturday 9th April.  We’ll be in Malet Street lecture room B34 – which seats an audience of up to 177 people.  For more details about logistics, registration, and so on, see the official event website, or the associated Facebook page.

The event is privileged to feature an outstanding set of speakers and panellists who represent a range of viewpoints about the Singularity, transhumanism, and human transcendence.  In alphabetical order by first name:

Dr Anders Sandberg is a James Martin research fellow at the Future of Humanity Institute at Oxford University. As a part of the Oxford Martin School he is involved in interdisciplinary research on cognitive enhancement, neurotechnology, global catastrophic risks, emerging technologies and applied rationality. He has been writing about and debating transhumanism, future studies, neuroethics and related questions for a long time. He is also an associate of the Oxford Centre for Neuroethics and the Uehiro Centre for Practical Ethics, as well as co-founder of the Swedish think tank Eudoxa.

Jaan Tallinn is one of the programmers behind Kazaa and a founding engineer of Skype. He is also a partner in Ambient Sound Investments as well as a member of the Estonian President’s Academic Advisory Board. He describes himself as singularitarian/hacker/investor/physicist (in that order). In recent years Jaan has found himself closely following and occasionally supporting the work that SIAI and FHI are doing. He agrees with Kurzweil that the topic of the Singularity can be extremely counterintuitive to the general public, and has tried to address this problem in a few public presentations at various venues.

Nic Brisbourne is a partner at venture capital fund DFJ Esprit and blogger on technology and startup issues at The Equity Kicker. As such he’s interested in when technology and science projects become products and businesses. He has a personal interest in Kurzweil’s ideas and longevity in particular and he says he’s keen to cross the gap from personal to professional and find exciting startups generating products in this area, although he thinks that the bulk of the commercialisation opportunities are still a year or two out.

Paul Graham Raven is a writer, literary critic and bootstrap big-picture futurist; he prods regularly at the fuzzy boundary of the unevenly-distributed future at futurismic.com. He is Editor-in-Chief and Publisher of The Dreaded Press, a rock music reviews webzine, and Publicist and PR officer for PS Publishing – perhaps the UK’s foremost boutique genre publisher. He says he’s also a freelance web-dev to the publishing industry, a cack-handed fuzz-rock guitarist, and in need of a proper haircut.

Russell Buckley is a leading practitioner, speaker and thinker about mobile and mobile marketing. MobHappy, his blog about mobile technology, is one of the most established blogs focusing on this area. He is also a former Global Chairman of the Mobile Marketing Association, a founder of Mobile Monday in Germany and holds numerous non-executive positions in mobile technology companies. Russell learned about the mobile advertising startup AdMob soon after its launch, and joined as its first employee in 2006, with the remit of launching AdMob into the EMEA market. Four years later, AdMob was sold to Google for $750m. By night though, Russell is fascinated by the socio-political implications of technology and recently graduated from the Executive Program at the Singularity University, founded by Ray Kurzweil and Peter Diamandis to “educate and inspire leaders who strive to understand and facilitate the development of exponentially advancing technologies in order to address humanity’s grand challenges”.

The discussion continues

The event will start, at 2pm, with the panellists introducing themselves, and their core thinking about the topics under discussion.  As chair, I’ll ask a few questions, and then we’ll open up for questions and comments from the audience.  I’ll be particularly interested to explore:

  • How people see the ideas of accelerating technology making a difference in their own lives – both personally and professionally.  Three of us on the stage were on founding teams of companies that made sizeable waves in the technology world (Jaan Tallinn, Skype; Russell Buckley, AdMob; myself, Symbian).  Where do we see rapidly evolving technology (as often covered by Kurzweil) taking us next?
  • People’s own experiences with bodies such as the Singularity University, the Singularity Institute, and the Future of Humanity Institute at Oxford University.  Are these bodies just talking shops?  Are they grounded in reality?  Are they making a substantial positive difference in how humanity responds to the issues and challenges of technology?
  • Views as to the best way to communicate ideas like the Singularity – favourite films, science fiction, music, and other media.  How does the movie “Transcendent Man” compare?
  • Reservations and worries (if any) about the Singularity movement and the ways in which Kurzweil expresses his ideas.  Are the parallels with apocalyptic religions too close for comfort?
  • Individuals’ hopes and aspirations for the future of technology.  What role do they personally envision playing in the years ahead?  And what timescales do they see as credible?
  • Calls to action – what (if anything) should members of the audience change about their lives, in the light of analysing technology trends?

Which questions do you think are the most important to raise?

Request for help

If you think this is an important event, I have a couple of suggestions for you:

The discussion continues (more)

Dean Bubley, founder of Disruptive Analysis and a colleague of mine from the mobile telecomms industry, has organised the “Inaugural UK Humanity+ Evening Salon” on Wednesday April 13th, from 7pm to 10pm.  Dean describes it as follows:

Interested in an evening discussing the future of the human species & society? Aided by a drink or two?

This is the first “salon” event for the London branch of “Humanity Plus”, or H+ for short. It’s going to be an informal evening event involving a stimulating guest speaker, Q&A and lively discussion, all aided by a couple of drinks. It fits alongside UKH+’s larger Saturday afternoon lecture sessions, and occasional all-day major conferences…

It will be held in central London, in a venue TBC closer to the time. Please contact Dean Bubley (facebook.com/bubley), the convener & moderator, for more details.

For more details, see the corresponding Facebook page, and RSVP there so that Dean has an idea of the likely numbers.

2 March 2011

On turkeys and eagles

Filed under: Microsoft, Nokia, partners, software management — David Wood @ 12:39 am

Two turkeys do not make an eagle“.  That was the response given in June 2005 by outspoken Nokia EVP Anssi Vanjoki, to the news that two of Nokia’s rivals – Siemens and BenQ – were to merge their mobile phone divisions.  “The integration of the handset units of the two companies is equivalent to one big problem meeting another”, Vanjoki remarked.  It was a remark that caused some consternation in the Symbian boardroom at the time, where both Siemens and Nokia were represented.

Six years later, the same pointed saying bounced around the Internet again, this time in a tweet from Vic Gundotra, VP of Engineering at Google.  On this occasion, the alleged “turkeys” were Microsoft (where Gundotra himself had worked for 15 years) and Nokia.  The tweet anticipated the February 11th announcement that Nokia had chosen Microsoft’s Windows Phone as its new primary smartphone platform.  Gundotra’s implication was that Nokia’s adoption of Windows Phone would end in tears.

Behind the barb and the humour, there’s a very serious question.  When do partnerships and mergers create genuine synergies that add up to more than the sum of their parts?  And when do these partnerships, instead, just pile one set of problems onto another – akin to tethering two out-of-form runners together as a team in a three-legged race?

That question is particularly challenging in the world of software, where Brooks’s Law tends to apply: Adding manpower to a late software project makes it later.  This statement featured in Fred Brooks’s 1975 book “The mythical man month” – a book that my first software director asked me to read, as I was just starting on my professional career.  I vividly remember another statement by Brooks in that book: “Nine women can’t make a baby in one month.”

Having extra people in a project risks increasing the number of interconnections and the varieties of communications needed.  The homely proverb “Many hands make light work” implies that more software engineers would get the job done more quickly, but the equally venerable saying, “Too many cooks spoil the broth”, comes down on the side of Brooks’s Law.
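
The underlying arithmetic is easy to sketch (my illustration, rather than a passage from Brooks’s book): with n people, the number of potential pairwise communication channels is n(n-1)/2, which grows quadratically rather than linearly:

    # Communication channels in a team of n people: n * (n - 1) / 2.
    def channels(n: int) -> int:
        """Potential one-to-one communication paths between team members."""
        return n * (n - 1) // 2

    for team_size in (7, 15, 50, 150):
        print(f"{team_size:4d} people -> {channels(team_size):6d} channels")
    #    7 people ->     21 channels
    #   15 people ->    105 channels
    #   50 people ->   1225 channels
    #  150 people ->  11175 channels

Going from a team of 7 to a team of 50 multiplies the possible misunderstandings by nearly sixty.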

Many of my industry colleagues and acquaintances tell me that the most productive, effective software teams they ever worked in were small teams of perhaps 7-50 people.  Seen in that light, there is some sense in Nokia’s decision to significantly reduce the size of its software development teams.  But what are the prospects for successful synergies from the newly minted software development partnership between Nokia and Microsoft?

Some differences between the development styles of Microsoft and Nokia are well known.  For example, Nokia spreads software development across many sites in many countries, whereas Microsoft prefers concentrated development at a smaller number of sites.  Relationships between developers and testers are much closer at Microsoft than at Nokia.  Microsoft keeps tight control of source code, whereas Nokia has been centrally involved in several open source communities (such as Qt, MeeGo, and Symbian).  Viewing these differences, what steps can be taken to heighten the chances of positive and lasting cooperation?

Given this background, I offer ten suggestions, for any pair of companies embarking on a major software partnership or merger:

  1. Don’t insist on uniformity. There’s no special need for the two partners to adopt the same tools, same processes, or same internal language.  It’s the interface between the two software teams that matters, rather than whether the teams look like clones of each other.
  2. Do insist on quality. Where the software from the two teams meet, that software needs to work well.  Quality comes well ahead of quantity.  Otherwise, huge amounts of time can be wasted in debugging issues arising during the integration of the two sets of software.
  3. Be ready to revise opinions. Expect there to be misunderstandings, or for new insight to emerge as software from the two teams comes into contact.  Be sure there’s sufficient nimbleness, on both sides, to revise their software quickly in the light of learnings at the point of contact.
  4. Become experts in interface management. This is the discipline of seeking to preserve compatibility of interface, even when new features are added, or performance is improved.  Here, the “interface” includes non-functional aspects of the definition of the software (including behaviour under stress or error conditions), as well as the original set of functionality.  And when interfaces do need to change, be sure that this fact is properly understood and properly communicated.  (A minimal sketch of this discipline follows the list.)
  5. Avoid feelings of “us and them”. Find ways to build positive bonds of trust between people on the two teams.  This can include staff interchange and secondments, and shared online community tools.
  6. Practice “tough love”. Where problems arise in the interaction between the two teams, it must be possible to speak frankly about these issues, rather than keeping mum for the sake of some kind of dysfunctional “peace and quiet”.  Provided a good fulcrum of shared trust has been established, robust discussion should lead to light rather than heat – a focus on “what is best” rather than “who is best”.
  7. Keep the end-to-end experience firmly in mind. Mainstream market adoption of software needs more than just neat technology.  The larger the overall software system, the greater the need for world-class product managers and designers to keep on top of the development project.  There’s no point in optimising individual parts of this system, if there’s a fundamentally defective link in the overall chain.
  8. Paint a compelling, credible picture of the benefits that the partnership will bring. Members of both teams must be inspired by the “why” of the partnership – something that will give them sufficient reason to dig deep when times become tough.  In practice, this picture will involve a roadmap of envisioned shared success points.
  9. Design and celebrate early successes from the relationship. Identify potential “quick wins” along the path to the anticipated future larger successes.  By the way, in case these quick wins fail to materialise, don’t sweep this fact under a carpet.  This is when the tough love needs to kick in, with both sides in the relationship being ready to revise opinions.
  10. Keep investing in the relationship. Don’t take it for granted.  Recognise that old habits and old mindsets will tend to re-assert themselves, as soon as management focus moves elsewhere.  Early wins are an encouraging sign, but relationship managers need to keep the momentum running, to stop the partnership from stagnating.
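
As for the interface-management discipline of suggestion 4, here is a minimal sketch of the basic move – extending an interface so that every existing caller keeps working unchanged.  The API names and data below are entirely hypothetical, invented for illustration:

    # Minimal interface-management sketch (hypothetical API, illustrative only).
    _CONTACT_STORE = [
        {"name": "Ada",  "archived": False},
        {"name": "Adam", "archived": True},
    ]

    def fetch_contacts(query, max_results=50, include_archived=False):
        """v2 of a hypothetical API one partner exposes to the other.

        include_archived is new in v2.  Because it defaults to the old
        behaviour, every v1 call site keeps working unchanged - the
        interface has been extended, not broken.
        """
        results = [c for c in _CONTACT_STORE if query in c["name"]]
        if not include_archived:
            results = [c for c in results if not c["archived"]]
        return results[:max_results]

    print(fetch_contacts("Ad"))                         # v1-era caller: unchanged
    print(fetch_contacts("Ad", include_archived=True))  # v2 caller opts in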

Think that’s too much advice? Believe you could give attention to one or two of these pieces of advice, and “wing it” with the others? Think again. A major software partnership is a major effort. Skimp on the details, and you’ll end up with two turkeys tied together as unhelpfully as in a three-legged race. Invest profoundly in the partnership, however, and you may indeed see the birth of a new eagle.

Note: the above advice would still apply, even in the taking-things-one-step-further “acquisition scenario” discussed by Andreas Constantinou in his latest VisionMobile article “Is Microsoft buying Nokia? An analysis of the acquisition endgame“.
