dw2

1 May 2010

Costs of complexity: in healthcare, and in the mobile industry

Filed under: books, business model, disruption, healthcare, innovation, modularity, simplicity — David Wood @ 11:56 am

While indeed there are economies of scale, there are countervailing costs of complexity – the more product families produced in a plant, the higher the overhead burden rates.

That sentence comes from page 92 of “The Innovator’s Prescription: A disruptive solution for health care”, co-authored by Clayton Christensen, Jerome Grossman, and Jason Hwang.  Like all the books authored (or co-authored) by Christensen, the book is full of implications for fields outside the particular industry being discussed.

In the case of this book, the subject matter is critically important in its own right: how can we find ways to allow technological breakthroughs to reduce the spiralling costs of healthcare?

In the book, the authors brilliantly extend and apply Christensen’s well-known ideas on disruptive change to the field of healthcare.  But the book should be recommended reading for anyone interested in either strategy or operational effectiveness in any hi-tech industry.  (It’s also recommended reading for anyone interested in the future of medicine – which probably includes all of us, since most of us can anticipate spending increasing amounts of time in hospitals or doctor’s surgeries as we become older.)

I’m still less than halfway through reading this book, but the section I’ve just read speaks loudly to issues in the mobile industry, as well as in healthcare.

It describes a manufacturing plant which was struggling with overhead costs.  At this plant, 6.2 dollars were spent in overhead expenses for every dollar spent on direct labour:

These overhead costs included not just utilities and depreciation, but the costs of scheduling, expediting, quality control, repair and rework, scrap maintenance, materials handling, accounting, computer systems, and so on.  Overhead comprised all costs that were not directly spent in making products.

The quality of products made at that plant was also causing concern:

About 15 percent of all overhead costs were created by the need to repair and rework products that failed in the field, or had been discovered by inspectors as faulty before shipment.

However, it didn’t appear to the manager that any money was being wasted:

The plant hadn’t been painted inside or out in 20 years.  The landscaping was now overrun by weeds.  The receptionist in the bare-bones lobby had been replaced long ago with a paper directory and a phone.  The manager had no secretarial assistance, and her gray World War II vintage steel desk was dented by a kick from some frustrated predecessor.

Nevertheless, this particular plant had considerably higher overhead burden rates than the other plants from the same company.  What was the difference?

The difference was in the complexity.  This particular plant was set up to cope with large numbers of different product designs, whereas the other plants (which had been created later) had been able to optimise for particular design families.

The original plant essentially had the value proposition,

We’ll make any product that anyone designs

In contrast, the newer plants had the following kind of value proposition:

If you need a product that can be made through one of these two sequences of operations and activities, we’ll do it for you at the lowest possible cost and the highest possible quality.

Further analysis, across a number of different plants, produced the following results:

Each time the scale of a plant doubled, holding the degree of pathway complexity constant, the overhead rate could be expected to fall by 15 percent.  So, for example, a plant that made two families and generated $40 million in sales would be expected to have an overhead burden ratio of about 2.85, while the burden rate for a plant making two families with $80 million in sales would be 15% lower (2.85 x 0.85 = 2.42).  But every time the number of families produced in a plant of a given scale doubled, the overhead burden rate soared 27 percent.  So if a two-pathway, $40 million plant accepted products that required two additional pathways, but that did not increase its sales volume, its overhead burden rate would increase by 2.85 x 1.27, to 3.62…
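The arithmetic in the quoted passage can be sketched as a simple power law: a 15% drop in the burden rate per doubling of sales, and a 27% rise per doubling of the number of product families.  This is my reading of the book’s worked figures, not a formula the authors themselves write down; the function name and baseline parameters below are illustrative.

```python
import math

def burden_rate(sales_musd, families, base_rate=2.85, base_sales=40, base_families=2):
    """Overhead burden rate as a function of plant scale and pathway complexity.

    Extrapolates the book's figures as a continuous power law:
    each doubling of sales cuts the rate by 15%; each doubling of
    the number of product families raises it by 27%.
    """
    scale_doublings = math.log2(sales_musd / base_sales)
    family_doublings = math.log2(families / base_families)
    return base_rate * (0.85 ** scale_doublings) * (1.27 ** family_doublings)

print(round(burden_rate(40, 2), 2))   # baseline two-family, $40M plant -> 2.85
print(round(burden_rate(80, 2), 2))   # scale doubled -> 2.42
print(round(burden_rate(40, 4), 2))   # complexity doubled -> 3.62
```

This reproduces the passage’s examples exactly, and also makes the asymmetry vivid: growing complexity (+27% per doubling) costs more than growing scale (−15% per doubling) saves.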

This is just one aspect of a long and fascinating analysis.  Modern day general purpose hospitals support huge numbers of different patient care pathways, so high overhead rates are inevitable.  The solution is to allow the formation of separate specialist units, where practitioners can then focus on iteratively optimising particular lines of healthcare.  We can already see this in firms that specialise in laser eye surgery, in hernia treatment, and so on.  Without these new units separating and removing some of the complexity of the original unit, it becomes harder and harder for innovation to take place.  The innovation becomes stifled under conflicting business models.  (I’m simplifying the argument here: please take a look at the book for the full picture.)

In short: reducing overhead costs isn’t just a matter of “eliminating obvious inefficiencies, spending less time on paperwork, etc”.  It often requires initially painful structural changes, in which overly complex multi-function units are simplified by the removal and separation of business lines and product pathways.  Only with the new, simplified set up – often involving new companies, and sometimes involving “creative destruction” – can disruptive innovations flourish.

Rising organisational complexity impacts the mobile industry too.  I’ve written about this before.  For example, in May last year I wrote an article “Platform strategy failure modes”:

The first failure mode is when a device manufacturer fails to have a strategy towards mobile software platforms.  In this case, the adage holds true that a failure to strategise is a strategy to fail.  A device manufacturer that simply “follows the wind” – picking platform P1 for device D1 because customer C1 expressed a preference for P1, picking platform P2 for device D2 because customer C2 expressed a preference for P2, etc – is going to find that the effort of interacting successfully with all these different platforms far exceeds their expectations.  Mobile software platforms require substantial investment from manufacturers, before the manufacturer can reap commercial rewards from these platforms.  (Getting a device ready to demo is one thing.  That can be relatively easy.  Getting a device approved to ship onto real networks – a device that is sufficiently differentiated to stand out from a crowd of lookalike devices – can take a lot longer.)

The second failure mode is similar to the first one.  It’s when a device manufacturer spreads itself too thinly across multiple platforms.  In the previous case, the manufacturer ended up working with multiple platforms, without consciously planning that outcome.  In this case, the manufacturer knows what they are doing.  They reason to themselves as follows:

  • We are a highly competent company;
  • We can manage to work with (say) three significant mobile software platforms;
  • Other companies couldn’t cope with this diversification, but we are different.

But the outcome is the same as the previous case, even though different thinking gets the manufacturer into that predicament.  The root failure is, again, a failure to appreciate the scale and complexity of mobile software platforms.  These platforms can deliver tremendous value, but require significant ongoing skill and investment to yield that kind of result.

The third failure mode is when a manufacturer seeks re-use across several different mobile software platforms.  The idea is that components (whether at the application or system level) are developed in a platform-agnostic way, so they can fit into each platform equally well.

To be clear, this is a fine goal.  Done right, there are big dividends.  But my observation is that this strategy is hard to get right.  The strategy typically involves some kind of additional “platform independent layer”, that isolates the software in the component from the particular programming interfaces of the underlying platform.  However, this additional layer often introduces its own complications…
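To make the shape of such a “platform independent layer” concrete, here is a minimal structural sketch.  It’s in Python for brevity (real layers in the mobile world would be written in C or C++ against each platform’s native SDK), and every class and method name is purely illustrative, not taken from any real platform:

```python
from abc import ABC, abstractmethod

class PlatformServices(ABC):
    """The platform-independent layer: the component codes against this
    interface instead of against any one platform's native APIs."""

    @abstractmethod
    def read_contact(self, name: str) -> str: ...

    @abstractmethod
    def send_message(self, recipient: str, body: str) -> None: ...

class MessagingComponent:
    """A platform-agnostic component: depends only on PlatformServices,
    so in principle it drops unchanged into any platform's build."""

    def __init__(self, services: PlatformServices):
        self.services = services

    def forward_contact(self, name: str, recipient: str) -> None:
        details = self.services.read_contact(name)
        self.services.send_message(recipient, details)

# One adapter is then written per platform.  The complication noted above
# is that each adapter must paper over behavioural differences between
# platforms (threading, memory, lifecycle), not just syntactic ones.
class FakePlatform(PlatformServices):
    """A stand-in adapter, useful for testing the component in isolation."""

    def __init__(self):
        self.contacts = {"Alice": "alice@example.com"}
        self.sent = []

    def read_contact(self, name: str) -> str:
        return self.contacts[name]

    def send_message(self, recipient: str, body: str) -> None:
        self.sent.append((recipient, body))
```

The diagram-level appeal is obvious: one component, N thin adapters.  The catch, as the article says, is that the isolation layer becomes a platform in its own right, with its own maintenance, testing, and versioning burden per underlying platform.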

Seeking clever economies of scale is commendable.  But there often comes a time when growing scale is bedevilled by growing complexity.  It’s as mentioned at the beginning of this article:

While indeed there are economies of scale, there are countervailing costs of complexity – the more product families produced in a plant, the higher the overhead burden rates.

Even more than a drive to scale, companies in the mobile space need a drive towards simplicity. That means organisational simplicity as well as product simplicity.

As I stated in my article “Simplicity, simplicity, simplicity”:

The inherent complexity of present-day smartphones risks all kinds of bad outcomes:

  • Smartphone device creation projects may become time-consuming and delay-prone, and the smartphones themselves may compromise on quality in order to try to hit a fast-receding market window;
  • Smartphone application development may become difficult, as developers need to juggle different programming interfaces and optimisation methods;
  • Smartphone users may fail to find the functionality they believe is contained (somewhere!) within their handset, and having found that functionality, they may struggle to learn how to use it.

In short, smartphone system complexity risks impacting manufacturability, developability, and usability.  The number one issue for the mobile industry, arguably, is to constantly find better ways to tame this complexity.

The companies that are successfully addressing the complexity issue seem, on the whole, to be the ones on the rise in the mobile space.

Footnote: It’s a big claim, but it may well be true that of all the books on the subject of innovation in the last 20 years, Clayton Christensen’s writings are the most consistently important.  The subtitle of his first book, “The Innovator’s Dilemma”, is a reminder why: “When new technologies cause great firms to fail”.

25 April 2010

Practical magic

Filed under: communications, Events, Humanity Plus, magic, marketing, UKH+ — David Wood @ 10:26 pm

I won’t reveal the content of the tricks.  That would be unfair on the performer.

Our dining group at Soho’s Little Italy restaurant had been pleasantly surprised by the unannounced entrance of a lady magician, before the orders for coffee were taken.  Where were we from, she asked.  Answers followed, hesitatingly: Belgium, Germany, Sweden, New York, London…

The atmosphere changed from guarded politeness to unguarded amazement as the magician blazed her way through some fast-paced sleight of hand with newspapers, water, money, ribbons, and playing cards.  Many of our group of hardened rationalists and technophiles were gasping with astonishment.  How did she do that?

It was a fitting end to a day that had seen a fair share of other kinds of magic.

Despite my nervous forebodings from earlier in the week, the Humanity+ UK2010 event saw a 100% turnout of speakers, ran (near enough) to time, and covered a vast range of intriguing ideas about forthcoming new technology and the enhancement of humanity.  An audience approaching 200 people in London’s Conway Hall seemed to find much to think about from what they’d heard.  Here’s a brief sample of online feedback so far:

Awesome conference – all your work paid off and then some!

Great conference today #hplusuk : thank you!

Enjoyed H+ event, esp @anderssandberg preso. Learnt about singularity, AI+, wireheads, future shock, SENS, protocells & more

Most enjoyable conference today. Thanks to the organisers and speakers

A few hours literally day dreaming, blown away by human cleverness.  These people should be allowed to talk on prime time on BBC regularly

Humanity+ today was terrific. I particulary enjoyed the talks from Amon Twyman – Expanding perception and transhumanist art, Natasha Vita-More – DIY Enhancement, Aubrey de Grey’s Life Expansion and Rachel Armstrong’s Living Technology

Great talk @davidorban how the #internetofthings could free us to be human again. Couldn’t agree more. #hplusuk

Love David Pearce, a true visionary! #hplusuk

Behind the scenes, a team of volunteers were ensuring that things ran as smoothly as possible – with a very early start in the morning following a late evening the previous day.  In my professional life over the years I’ve often been responsible for major events, such as the Symbian developer events and smartphone shows, where I had visibility of the amount of work required to make an event a success.  But in all these cases, I had a team of events managers working for me – including first-class professionals such as Amy Graller, Jo Butler, Liza Fox, and Alice Kenny, as well as brand managers, PR managers, and so on.  These teams shielded me from a great deal of the underlying drama of managing events.  In contrast, this time, our entire team were volunteers, and there was no alternative to getting our own hands dirty!  Huge amounts of thanks are due to everyone involved in pulling off this piece of magic.

Needless to say, some things fell short of perfection.  I heard mild-mannered grumbles:

  • That there wasn’t enough time for audience Q&A – and that too many of the questions that were raised from the floor were imprecise or unfocused;
  • That the audio from our experimental live streaming from the event was too choppy – due to shortcomings in the Internet connectivity from the event (something that will need to be fixed before I consider holding another similar event there);
  • That some of the presentations had parts that were too academic for some members of the audience, or assumed more background knowledge than people actually possessed;
  • That there should have been more journalists present, hearing material that deserves wide coverage.

The mail list used by the Humanity+ UK organising team is already reflecting on “what went well” and “what could be improved”.  Provisionally, we have in mind a follow-up event early next year.  We’re open for suggestions!  What scale should we have in mind?  What key objectives?

Because I was rushing around on the day, trying to ensure everything was ready for the next phase of the event, I found myself unable to concentrate for long on the presentations themselves.  (I’ll need to watch the videos of the talks, once they’re available.)  However, a few items successfully penetrated my mental fog.  I was particularly struck by descriptions of potential engineering breakthroughs:

This kind of information appeals to the engineer in me.  It’s akin to “practical magic”.

I was also struck by discussions of flawed societal priorities, covering instances where publications give undue prominence to matters of low importance, to the exclusion of more accurate coverage of technological issues.  For example, Nick Bostrom reported, during his talk “Reducing Existential Risks” that there are more scholarly papers on dung beetle reproduction than on the possibilities of human extinction.  And Aubrey de Grey gave examples of sensationalist headlines even in a normally responsible newspaper, for anti-aging news of little intrinsic value, whilst genuinely promising news receives scant coverage.

What is the solution to this kind of broken prioritisation? The discussion among the final speaker panel of the day helped to distill an answer.  The Humanity+ organisation, along with those who support its aims, needs to become better at the discipline of marketing. Once we convey our essential messages more effectively, society as a whole should hear and understand what we are saying, and respond positively.  There’s a great art – and great skill – to the practice of communication.

Some people dislike the term “marketing”, as if it’s a swear word.  But I see it as follows.  In general terms, “marketing” for any organisation means:

  • Deciding on a strategic focus – as opposed to a scattergun approach;
  • Understanding how various news items or other pieces of information or activism might be received by people in the wider community;
  • Finding better ways to convey the chosen key messages;
  • Engaging within the wider community – listening more than talking – and learning in the light of that conversation;
  • Repeating the above steps, with increasingly better understanding and better execution.

At 5pm, we had to hurriedly leave the venue, because it was needed for another function starting at 6pm.  It was hard to move everyone outside the main hall, since there were so many intense group discussions happening.  Eventually, some of us started on a 20 minute walk through central London, from Holborn to Soho, for the post-event dinner at Little Italy.  The food was delicious, the waitresses coped well (and with many friendly smiles) with all our many requests, and the conversation was first class.  The magician provided a great interlude.  I left the restaurant, several hours later, with a growing list of suggestions for topics for talks in the normal UKH+ monthly meetings that could bring in a good audience.  Happily, I also have a growing list of names of people who want to provide more active assistance in building an enhanced community of supporters of the aims of Humanity+.

21 April 2010

Designing the Internet of Things

Filed under: Internet of Things, mashup* event, Mobile Monday — David Wood @ 2:10 pm
  • Computers; smartphones; … smart things.
  • The Internet; the mobile Internet; … the Internet of Things.

These two epic trends tell aspects of the same grand story.  First, computing power is becoming more widespread, more affordable, and more compact.  Second, networked intelligence is becoming more widespread, more affordable, more effective, and more informed.

As a result, we can look forward to a time, in just a few years, when each of us owns more than a dozen different devices that communicate with each other, wirelessly and transparently.  That will take the number of wireless modems in use in the world to upwards of 50 billion.

Going further, there are forecasts of no fewer than one trillion wirelessly connected “things”, where this time the connection will often involve simpler connectivity such as RFID.  As reported recently in Wireless Week:

There will be 1 trillion devices connected to the Internet by 2013, said Cisco Chief Technology Officer Padma Warrior during her Wednesday keynote address at CTIA.

Warrior argued the boom in connected devices, applications and mobile broadband would change not only the wireless industry but society in general.

“The Internet is no longer just an information superhighway, it’s a platform,” Warrior said, citing the increased adoption of M2M technologies and the exponential growth of apps…

To prove her point, Warrior moved through a series of technology demonstrations with a Cisco colleague that detailed what it would be like to interact with next-generation mobile technology.

The pair showed off augmented reality in a subway system; location-based advertising and mobile coupons; and a mobile telepresence app.

“The next big revolution that will happen is the Internet of things,” Warrior said…

Evidently, the ever lower costs and increased quality of computing and connectivity are opening all kinds of new opportunities.  It’s easy to speculate on possibilities:

  • Distributed arrays of sensors that can more reliably – and more quickly – highlight the changing concentrations of volcanic ash;
  • Luggage tags that know (and can report) where your luggage is;
  • Air conditioning units and heating units that can coordinate to act in concert, rather than independently;
  • A handheld toothbrush that can let you know if you’re not putting enough effort into cleaning the inner sides of your lower right molars;
  • Smart sticking plasters that detect microscopic changes in skin condition or blood flow;
  • A monitor that can detect if you are too distracted (or too dozy) to drive safely.  (Even better, put the driving intelligence into the car itself, rather than rely on human drivers.)
  • Surveillance cameras that can analyse what they are filming, being alert for security abnormalities;
  • Audio recording devices that can understand what they are hearing;
  • Smart glasses that can interpret what you’re looking at;
  • Smart digital signs that change their display depending on who’s looking at them;
  • And all of these devices connected together…

Does this sound good to you? We can debate some of the points, but overall, it’s clear this grand technology trend has great power to improve health, education, transport, the environment, and human experience generally.

So why isn’t it happening faster? Why is it that, as an industry colleague said to me recently, this whole field is in a state of chaos?

It’s chaotic, because we don’t know what’s happening next, nor how fast it’s happening.  The use cases developers identify as important at the start of a project often turn out to be less significant than ones that turn up, unexpected, part way through the project.  (That’s not necessarily a problem.  It is, of course, a large opportunity.)

In short, although lots of the underlying technology is mature, the emerging industry of the “Internet of Things” is still far from mature.  The roadmaps of product development remain tentative and sketchy.  Speedy progress will depend on:

  • A few underlying technology issues – such as network interoperability, smart distribution of tasks across multiple processors, power management, power harvesting, and security;
  • Some pressing business model issues – since not all existing players are excited by the prospects of cost-savings which would in turn reduce their profits from existing products;
  • Some ecosystem management issues – to solve “chicken and egg” scenarios where multiple parts of a compound solution all need to be in place, before the full benefits can be realised;
  • Some project development agility issues – to avoid wasted investment in cases where project goals change part way through, due to the uncertain territory being navigated;
  • Some significant design issues – to ensure that the resulting products can be widely accepted and embraced by “normal people” (as opposed just to early adopter technology enthusiasts).

These are some of the themes that I will seek to explore as one of the speakers at the mashup* event in London on Tuesday 4th May, entitled “Internet of things: Rise of the machines“.  In addition to myself, the other announced speakers are:

Many thanks to the mashup* team for organising this get-together!  I hope to see you there.

Footnote: ReadWriteWeb publishes a useful series of articles about the Internet of Things.  At the time of writing, the most recent article in this series is “Internet of Things Can Make Us Human Again”, which highlights the ideas and work of David Orban, Founder of WideTag.  Here’s a brief extract:

Orban’s dream is that thousands of years of human subservience to machines will end because we will teach our machines how to not only take care of themselves, but how to take care of us as well…

These new machine networks will be so redundant and reliable that we will be freed from most of our machine-operating duties. We will get to be human again…

Extending these ideas, David recently spoke on the theme “Free to be human” at Mobile Monday Amsterdam.  It’s a great introduction to many Internet of Things ideas:

By good fortune, David will be in London this weekend, since he’s one of the speakers at the Humanity+ UK2010 event.  He’ll be addressing the Internet of Things once again, along with some thoughts on the progress of the Singularity University.  In my experience, he’s a fascinating person to talk with.

20 April 2010

Creative chaos under the ash cloud

Filed under: challenge, chaos, Humanity Plus, precautionary principle, risks, volcano — David Wood @ 11:45 pm

Seven months of careful planning looked like they were unravelling, in the final seven days.

Discussions about a gathering of futurist and transhumanist thinkers in London’s Conway Hall, on April 24th, have been underway for seven months.  Behind the scenes, we’ve had a planning wiki, a mailing list, and a small group of volunteers each chipping in with suggestions and undertaking different tasks.  A website for the event went live on 19th January, and we started taking registrations a week after that.  Registrations built up, and up, so that I could finally feel comfortable putting my name to the following quote on a press release we issued, “Unprecedented gathering of futurist and transhumanist thinkers in London”:

The UK chapter of Humanity+, an organisation dedicated to promoting understanding, interest and participation in fields of emerging innovation that can radically benefit the human condition, announced today that registrations are on track for record attendance at the Humanity+ UK2010 conference taking place in Conway Hall, Holborn, London, on April 24th.

“Approaching 200 attendees are expected to take part in a full day of thought-provoking lectures, discussions, Q&A, and breakouts, led by a line-up of world class futurist speakers”, said David Wood, H+UK meetings secretary.  “Participants have registered from as far afield as Poland, Sweden, Croatia, Portugal, Germany, Belgium, Holland, Ireland, and the USA.  The Humanity+ movement, previously known as the World Transhumanist Association, is coming of age.”…

However, on the very day of the press release, airplane flight restrictions were announced, for fear of damage from volcanic ash from Eyjafjallajökull in Iceland.

At first, I wasn’t particularly worried.  I thought that only three of the ten speakers were overseas, and that there would be plenty of time for flights to resume before the conference.

But the speakers are actors on the global stage, much in demand around the world.  And I gradually learned that no fewer than six of the ten were stranded overseas – in Venice, Montreal, San Francisco, and so on.  And the airplane flight restrictions kept getting extended.  My heart sank.

I half-imagined that nature was saying:

You Humanity+ people think you can do ‘better than nature‘. Pah!  Take that!

What depressed me most was that initial tests at the venue had already suggested that Internet connectivity in Conway Hall was poor.  So ideas of speakers delivering their presentations via video link seemed impractical.

But necessity is the mother of invention.  Since there was a real possibility that both speakers and audience members wouldn’t be able to travel to London, we were obliged to reconsider options for Internet connectivity.  And this opens the possibility of the meeting rising above being a London-based event, into a happening with a real-time online presence.

I tweeted: What’s the best way to install, for one day, a temporary high bandwidth connection to a conference venue (in London, UK)?

Answers came, fast and varied.  With help from a couple of people from the H+UK event planning team, I followed up about half a dozen different ideas.  The Conway Hall administrators also proved very flexible and helpful.  In a way, it’s still too early to say, but it now looks as though we’re set up:

  • To support remote speakers doing Skype video calls into the event, with the screen on stage showing sometimes their face and sometimes their slides;
  • And, to broadcast a live video stream of the event, on a service such as Ustream.tv.

So maybe technology can work around the ravages of nature after all! (At least on a small scale.  And, in the decades ahead, on an ever larger scale.)

The ash cloud raises other questions relevant to transhumanism – especially how to deal with risk.

One moment, I was in email correspondence about conference logistics with the opening keynote speaker for the event, Max More.  Max is on public record as being critical of the precautionary principle.  A few moments later, I was watching the BBC news, where a “Cambridge volcano scientist” (I didn’t catch his name) was explaining that there’s something called the precautionary principle, which means that aircraft flights through the ash cloud had to be forbidden.  My mind did a quick double take.

To back up: Wikipedia describes the precautionary principle as follows:

The precautionary principle states that if an action or policy has a suspected risk of causing harm to the public or to the environment, in the absence of scientific consensus that the action or policy is not harmful, the burden of proof that it is not harmful falls on those who advocate taking the action.

This principle allows policy makers to make discretionary decisions in situations where there is evidence of potential harm in the absence of complete scientific proof. The principle implies that there is a social responsibility to protect the public from exposure to harm, when scientific investigation has found a plausible risk. These protections can be relaxed only if further scientific findings emerge that provide sound evidence that no harm will result.

In his 2005 article “THE PROACTIONARY PRINCIPLE”, Max offers the following criticisms of the precautionary principle:

The precautionary principle has at least six major weak spots. It serves us badly by:

  1. assuming worst-case scenarios
  2. distracting attention from established threats to health, especially natural risks
  3. assuming that the effects of regulation and restriction are all positive or neutral, never negative
  4. ignoring potential benefits of technology and inherently favoring nature over humanity
  5. illegitimately shifting the burden of proof and unfavorably positioning the proponent of the activity
  6. conflicting with more balanced, common-law approaches to risk and harm.

What should we conclude about the wisdom of shutting down the airspace above the UK, on precautionary grounds?  That’s a good question to ask.  If you take part in the event this Saturday, you’ll have the chance to ask Max himself about that point.  (Especially since it now appears the airplanes are flying again, after all.)

Footnote: while writing this blog post, I came across, for the first time, Max’s fine 1999 essay “A Letter to Mother Nature”.  It’s well worth reading.  Here’s how it starts:

Dear Mother Nature:

Sorry to disturb you, but we humans—your offspring—come to you with some things to say. (Perhaps you could pass this on to Father, since we never seem to see him around.) We want to thank you for the many wonderful qualities you have bestowed on us with your slow but massive, distributed intelligence. You have raised us from simple self-replicating chemicals to trillion-celled mammals. You have given us free rein of the planet. You have given us a life span longer than that of almost any other animal. You have endowed us with a complex brain giving us the capacity for language, reason, foresight, curiosity, and creativity. You have given us the capacity for self-understanding as well as empathy for others.

Mother Nature, truly we are grateful for what you have made us. No doubt you did the best you could. However, with all due respect, we must say that you have in many ways done a poor job with the human constitution. You have made us vulnerable to disease and damage. You compel us to age and die—just as we’re beginning to attain wisdom. You were miserly in the extent to which you gave us awareness of our somatic, cognitive, and emotional processes. You held out on us by giving the sharpest senses to other animals. You made us functional only under narrow environmental conditions. You gave us limited memory, poor impulse control, and tribalistic, xenophobic urges. And, you forgot to give us the operating manual for ourselves!

What you have made us is glorious, yet deeply flawed. You seem to have lost interest in our further evolution some 100,000 years ago. Or perhaps you have been biding your time, waiting for us to take the next step ourselves. Either way, we have reached our childhood’s end.

We have decided that it is time to amend the human constitution.

We do not do this lightly, carelessly, or disrespectfully, but cautiously, intelligently, and in pursuit of excellence. We intend to make you proud of us. Over the coming decades we will pursue a series of changes to our own constitution, initiated with the tools of biotechnology guided by critical and creative thinking. In particular, we declare the following seven amendments to the human constitution…

16 April 2010

Mobile Developer TV: riffs on the future of technology

Filed under: Barcelona, futurist, Humanity Plus, YouTube — David Wood @ 3:03 pm

On the last day of the Mobile World Congress (MWC) industry tradeshow in Barcelona a few weeks ago, Ewan MacLeod of Mobile Industry Review and Rafe Blandford of AllAboutSymbian caught up with me.  They explained:

We’re asking people what they see as the highlights of Mobile World Congress.  Would you mind saying a few words to camera?

I have lots of respect for both Ewan and Rafe, so I was happy to respond.  I expressed a few top-of-mind thoughts about Microsoft Windows Phone 7, the networking opportunities at the event itself, and about the growing interest in embedded connectivity (also known as “machine to machine” communications).  The result is here, as Episode 148 of MobileDeveloperTV.com: “David Wood’s take on MWC“:

As you can see, I had the opportunity to say a few words at the end of the clip about the Humanity+ UK2010 event I’ve been organising.  Once the filming stopped, the three of us continued chatting informally about this topic – which is (of course) big and fascinating.  Never one to miss an opportunity, Ewan started filming again.  The first question this time was “What films about the future do you like?”  One answer led on to “just one more question” and then to “a final question” and even “a really final question”…

This became episode 149 of  MobileDeveloperTV.com: “David Wood speculates on the future of (mobile) technology“.  Ewan explains:

I grabbed the opportunity to ask David what his top 3 sci-fi movies were. What follows is an absolutely fascinating ‘real-time’ riff from David on where he sees the future going — in terms of technology augmentation — and what to do about the human race becoming far too reliant on technology that may well turn against us. Or that we simply couldn’t do without.

Many thanks to Ewan and Rafe for taking the time to edit and publish this second video, even though it’s some way outside their normal field of coverage!

15 April 2010

Accelerating automation and the future of work

Filed under: AGI, Economics, futurist, Google, politics, regulation, robots — David Wood @ 2:45 am

London is full of pleasant surprises.

Yesterday evening, I travelled to The Book Club in Shoreditch, EC2A, and made my way to the social area downstairs.  “What’s your name?” asked the person at the door.  I gave my name, and in return received a stick-on badge saying

Hi, I’m David.

Talk to me about the future of humanity!

I was impressed.  How do they know I like to talk to people about the future of humanity?

Then I remembered that the whole event I was attending was under the aegis of a newly formed group calling itself “Future Human“.  It was their third meeting, over the course of just a few weeks – but the first I had heard about (and decided to attend).  Everyone’s badge had the same message.  About 120 people crammed into the downstairs room – making it standing room only (since there were only around 60 seats).  Apart from the shortage of seats, the event was well run, with good use of roaming mikes from the floor.

The event started with a quick-fire entertaining presentation by author and sci-fi expert Sam Jordison.  His opening question was blunt:

What can you do that a computer can’t do?

He then listed lots of occupations from the past which technology had rendered obsolete.  Since one of my grandfathers was the village blacksmith, I found a personal resonance with this point.  It will soon be the same for many existing professions, Sam said: computers are becoming better and better at all sorts of tasks which previously would have required creative human input.  Journalism is particularly under threat.  Likewise accountancy.  And so on, and so on.

In general terms, that’s a thesis I agree with.  For example, I anticipate a time before long when human drivers will be replaced by safer robot alternatives.

I quibble with the implication that, as existing jobs are automated, there will be no jobs left for humans to do.  Instead, I see that lots of new occupations will become important.  “Shape of Jobs to Come”, a report (PDF) by Fast Future Research, describes 20 jobs that people could be doing in the next 20 years:

  1. Body part maker
  2. Nano-medic
  3. Pharmer of genetically engineered crops and livestock
  4. Old age wellness manager/consultant
  5. Memory augmentation surgeon
  6. ‘New science’ ethicist
  7. Space pilots, tour guides and architects
  8. Vertical farmers
  9. Climate change reversal specialist
  10. Quarantine enforcer
  11. Weather modification police
  12. Virtual lawyer
  13. Avatar manager / devotees / virtual teachers
  14. Alternative vehicle developers
  15. Narrowcasters
  16. Waste data handler
  17. Virtual clutter organiser
  18. Time broker / Time bank trader
  19. Social ‘networking’ worker
  20. Personal branders

(See the original report for explanations of some of these unusual occupation names!)

In other words, as technology improves to remove existing occupations, new occupations will become significant – occupations that build in unpredictable ways on top of new technology.

But only up to a point.  In the larger picture, I agree with Sam’s point that even these new jobs will quickly come under the scope of rapidly improving automation.  The lifetime of occupations will shorten and shorten.  And people will typically spend fewer hours working each week (on paid tasks).

Is this a worry?  Yes, if we assume that we need to work long hours to justify our existence, or to earn sufficient income to look after our families.  But I disagree with these assumptions.  Improved technology, wisely managed, should result not just in less labour left over for humans to do, but also in great material abundance – plenty of energy, food, and other resources for everyone.  We’ll become able – at last – to spend more of our time on activities that we deeply enjoy.

The panel discussion that followed touched on many of these points. The panellists – Peter Kirwan from Wired, Victor Henning from Mendeley, and Carsten Sorensen and Jannis Kallinikos from the London School of Economics – sounded lots of notes of optimism:

  • We shouldn’t create unnecessary distinctions between “human” and “machine”.  After all, humans are kinds of machines too (“meat machines“);
  • The best kind of intelligence combines human elements and machine elements – in what Google have called “hybrid intelligence“;
  • Rather than worrying about computers displacing humans, we can envisage computers augmenting humans;
  • In case computers become troublesome, we should be able to regulate them, or even to switch them off.

Again, in general terms, these are points I agree with.  However, I believe these tasks will be much harder to accomplish than the panel implied. To that extent, I believe that the panel were too optimistic.

After all, if we can barely regulate rapidly changing financial systems, we’ll surely find it even harder to regulate rapidly changing AI systems.  Before we’ve been able to work out if such-and-such an automated system is an improvement on its predecessors, that system may have caused too many rapid irreversible changes.

Worse, there could be a hard-to-estimate “critical mass” effect.  Rapidly accumulating intelligent automation is potentially akin to accumulating nuclear material until it unexpectedly reaches an irreversible critical mass.  The resulting “super cloud” system will presumably state very convincing arguments to us, for why such and such changes in regulations make great sense.  The result could be outstandingly good – but equally, it could be outstandingly bad.

Moreover, it’s likely to prove very hard to “switch off the Internet” (or “switch off Google”).  We’ll be so dependent on the Internet that we’ll be unable to disconnect it, even though we recognise there are bad consequences.

If all of this happened in slow motion, we would be OK.  We’d be able to review it and debug it in real time.  However, the lesson from the recent economic crisis is that these changes can take place too quickly for human governments to intervene.  That’s why we need to ensure, ahead of time, that we have a good understanding of what’s happening.  And that’s why there should be lots more discussions of the sort that took place at Future Human last night.

The final question from the floor raised a great point: why isn’t this whole subject receiving prominence in the current UK general election debates?  My answer: It’s down to those of us who do see the coming problems to ensure that the issues get escalated appropriately.

Footnote: Regular readers will not be surprised if I point out, at this stage, that many of these same topics will be covered in the Humanity+ UK2010 event happening in Conway Hall, Holborn, London, on Saturday 24 April.  The panellists at the Future Human event were good, but I believe that the H+UK speakers will be even better!

8 April 2010

Video: The case for Artificial General Intelligence

Filed under: AGI, flight, Humanity Plus, Moore's Law, presentation, YouTube — David Wood @ 11:19 am

Here’s another short (<10 minute) video from me, building on one of the topics I’ve listed in the Humanity+ Agenda: the case for artificial general intelligence (AGI).

The discipline of having to fit a set of thoughts into a ten minute video is a good one!

Further reading: I’ve covered some of the same topics, in more depth, in previous blogposts, including:

For anyone who prefers to read the material as text, I append an approximate transcript.

My name is David Wood.  I’m going to cover some reasons for paying more attention to Artificial General Intelligence (AGI) – also known as super-human machine intelligence.  This field deserves significantly more analysis, resourcing, and funding over the coming decade.

Machines with super-human levels of general intelligence will include hardware and software, as part of a network of connected intelligence.  Their task will be to analyse huge amounts of data, review hypotheses about this data, discern patterns, propose new hypotheses, propose experiments which will provide valuable new data, and in this way, recommend actions to solve problems or take advantage of opportunities.

If that sounds too general, I’ll have some specific examples in a moment, but the point is to create a reasoning system that is, indeed, applicable to a wide range of problems.  That’s why it’s called Artificial General Intelligence.

In this way, these machines will provide a powerful supplement to existing human reasoning.

Here are some of the deep human problems that could benefit from the assistance of enormous silicon super-brains:

  • What uses of nanotechnology can be recommended, to safely boost the creation of healthy food?
  • What are the causes of different diseases – and how can we cure them?
  • Can we predict earthquakes – and even prevent them?
  • Are there safe geo-engineering methods that will head off the threat of global warming, without nasty side effects?
  • What changes, if any, should be made to the systems of regulating the international economy, to prevent dreadful market failures?
  • Which existential risks – risks that could drastically impact human civilisation – deserve the most attention?

You get the idea.  I’m sure you could add some of your own favourite questions to this list.

Some people may say that this is an unrealistic vision.  So, in answer, let me spell out the factors I see as enabling this kind of super-intelligence within the next few decades.  First is the accelerating pace of improvements in computer hardware.

This chart is from University of London researcher Shane Legg.  On a log-axis, it shows the exponentially increasing power of super-computers, all the way from 1960 to the present day and beyond.  It shows FLOPS – the number of floating point operations per second that a computer can do.  It goes all the way from kiloflops through megaflops, gigaflops, teraflops, petaflops, and is pointing towards exaflops.  If this trend continues, we’ll soon have supercomputers with at least as much computational power as a human brain.  Perhaps within less than 20 years.
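As a rough illustration of how quickly this kind of extrapolation covers the remaining ground (note: the starting figure and doubling time below are my own assumptions, not taken from Legg’s chart), a few lines of Python suffice:

```python
import math

# Hedged sketch: extrapolating supercomputer performance growth.
# Assumptions (mine, not the chart's): roughly 1.75 petaflops for the
# fastest machine in 2010, and a doubling time of about 1.2 years.
def years_until(target_flops, current_flops=1.75e15, doubling_years=1.2):
    """Years for capacity to grow from current_flops to target_flops."""
    return doubling_years * math.log2(target_flops / current_flops)

# Reaching an exaflop (10^18 FLOPS) from the assumed 2010 baseline:
print(round(years_until(1e18)))   # about 11 years

# If the human brain is equivalent to somewhere between 10^16 and 10^18
# FLOPS (a widely debated estimate), brain-scale hardware arrives well
# within 20 years on these assumptions:
print(round(years_until(1e16)))   # about 3 years
```

The exact answers depend entirely on the assumed baseline and doubling time, but the qualitative point is robust: on a log scale, the gap between petaflops and brain-scale estimates is only a handful of doublings.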

But will this trend continue?  Of course, there are often slowdowns in technological progress.  Skyscraper heights and the speeds of passenger airliners are two examples.  The slowdown is sometimes due to intrinsic technical difficulties, but more often reflects a lack of sufficient customer or public interest in even bigger or faster products.  After all, the technical skills that took mankind to the moon in 1969 could have taken us to Mars long before now, if there had been sufficient continuing public interest.

Specifically, in the case of Moore’s Law for exponentially increasing hardware power, industry experts from companies like Intel state that they can foresee at least 10 more years’ continuation of this trend, and there are plenty of ideas for innovative techniques to extend it even further.  It comes down to two things:

  • Is there sufficient public motivation for continuing this work?
  • And can some associated system integration issues be solved?

Mention of system issues brings me back to the list of factors enabling major progress with super-intelligence.  Next is improvement with software.  There’s lots of scope here.  There’s also additional power from networking ever larger numbers of computers together.  Another factor is the ever-increasing number of people with engineering skills, around the world, who are able to contribute to this area.  We have more and more graduates in relevant topics all the time.  Provided they can work together constructively, the rate of progress should increase.  We can also learn more about the structure of intelligence by analysing biological brains at ever finer levels of detail – by scanning and model-building.  Last, but not least, we have the question of motivation.

As an example of the difference that a big surge in motivation can make, consider the example of progress with another grand, historical engineering challenge – powered flight.

This example comes from Artificial Intelligence researcher J. Storrs Hall in his book “Beyond AI”.  People who had ideas about powered flight were, for centuries, regarded as cranks and madmen – a bit like people who, in our present day, have ideas about superhuman machine intelligence.  Finally, after many false starts, the Wright brothers made the necessary engineering breakthroughs at the start of the last century.  But even after they first flew, the field of aircraft engineering remained a sleepy backwater for five more years, while the Wright brothers kept quiet about their work and secured patent protection.  They did some sensational public demos in 1908, in Paris and in America.  Overnight, aviation went from a screwball hobby to the rage of the age and kept that status for decades.  Huge public interest drove remarkable developments.  It will be the same with demonstrated breakthroughs with artificial general intelligence.

Indeed, the motivation for studying artificial intelligence is growing all the time.  In addition to the deep human problems I mentioned earlier, we have a range of commercially-significant motivations that will drive business interest in this area.  This includes ongoing improvements in search, language translation, intelligent user interfaces, games design, and spam detection systems – where there’s already a rapid “arms race” between writers of ever more intelligent “bots” and people who seek to detect and neutralise these bots.

AGI is also commercially important to reduce costs from support call systems, and to make robots more appealing in a wide variety of contexts.  Some people will be motivated to study AGI for more philosophical reasons, such as to research ideas about minds and consciousness, to explore the possibility of uploading human consciousness into computer systems, and for the sheer joy of creating new life forms.  Last, there’s also the powerful driver that if you think a competitor may be near to a breakthrough in this area, you’re more likely to redouble your efforts.  That adds up to a lot of motivation.

To put this on a diagram:

  • We have increasing awareness of human-level reasons for developing AGI.
  • We also have maturing sub-components for AGI, including improved algorithms, improved models of the mind, and improved hardware.
  • With the Internet and open collaboration, we have an improved support infrastructure for AGI research.
  • Then, as mentioned before, we have powerful commercial motivations.
  • Adding everything up, we should see more and more people working in this space.
  • And we should see rapid progress in the coming decade.

An increased focus on Artificial General Intelligence is part of what I’m calling the Humanity+ Agenda.  This is a set of 20 inter-linked priority areas for the next decade, spread over five themes: Health+, Education+, Technology+, Society+, and Humanity+.  Progress in the various areas should reinforce and support progress in other areas.

I’ve listed Artificial General Intelligence as part of the project to substantially improve our ability to reason and learn: Education+.  One factor that strongly feeds into AGI is improvements with ICT – including ongoing improvements in both hardware and software.  If you’re not sure what to study or which field to work in, ICT should be high on your list of fields to consider.  You can also consider the broader topic of helping to publicise information about accelerating technology – so that more and more people become aware of the associated opportunities, risks, context, and options.  To be clear, there are risks as well as opportunities in all these areas.  Artificial General Intelligence could have huge downsides as well as huge upsides, if not managed wisely.  But that’s a topic for another day.

In the meantime, I eagerly look forward to working with AGIs to help address all of the top priorities listed as part of the Humanity+ Agenda.

5 April 2010

The ascent of money: huge opportunities and huge risks

Filed under: books, Economics, predictability — David Wood @ 9:36 pm

The turning point of the American Civil War.  The defeat of Napoleon.  The lead-up to the French Revolution.  The decline of Imperial Spain.  These chapters of history all have intriguing back stories – according to Harvard professor Niall Ferguson, in his book “The Ascent of Money: A Financial History of the World“.

The back stories, each time, refer to the strengths and weaknesses of evolving financial systems.

Appreciating these back stories isn’t just an intellectual curiosity.  It provides rich context for the view that financial systems are sophisticated and complex entities that deserve much wider understanding.  Without this understanding, it’s all too easy for people to hold one or other overly-simplistic understanding of financial systems, such as:

  • Financial systems are all fundamentally flawed;
  • Financial systems are all fundamentally beneficial;
  • There are “sure thing” investments which people can learn about;
  • Financial systems should be picked apart – the world would be better off without them;
  • Markets are inherently insane;
  • Markets are inherently sane;
  • Bankers (and their ilk) deserve our scorn;
  • Bankers (and their ilk) deserve our deep gratitude.

As the book progresses, Ferguson sweeps forwards and backwards throughout history, gradually building up a fuller picture of evolving financial systems:

  • The banking system;
  • Government bonds;
  • Stock markets;
  • Insurance and securities;
  • The housing market;
  • Hedge funds;
  • Globalisation;
  • The growing role of China in financial systems.

Like me, Ferguson was born in Scotland.  I was struck by the number of Scots-born heroes and villains the book introduces, including an infamous Glaswegian loan shark, the creators of the first true insurance company, officers of the companies involved in the Anglo-China “Opium Wars”, and John Law – instigator in France of one of history’s first great stock market bubbles.  Of course, many non-Scots have starring roles too – including Shakespeare’s Shylock, the Medicis, the Rothschilds, George Soros, the managers of Enron, Milton Friedman, and John Maynard Keynes.

Time and again, Ferguson highlights lessons for the present day.  Yes, new financial systems can liberate great amounts of creativity.  Innovation in financial systems can provide significant benefits for society.  But, at the same time, financial systems can be mis-managed, with dreadful consequences.  One major contributory cause of mis-managing these systems is when people lack a proper historical perspective – for example, when the experience of leading financiers is just of times of growth, rather than times of savage decline.

Among many fascinating episodes covered in the book, I found two to be particularly chilling:

  • The astonishing (in retrospect) over-confidence of observers in the period leading up to the First World War, that any such war could not possibly happen;
  • The astonishing (in retrospect) over-confidence of the managers of the Long Term Capital Management (LTCM) hedge fund, that their fund could not possibly fail.

Veteran journalist Hamish McRae describes some of the pre-WWI thinking in his review of Ferguson’s book in The Independent:

The 19th-century globalisation ended with the catastrophe of the First World War. It is really scary to realise how unaware people were of the fragility of those times. In 1910, the British journalist Norman Angell published The Great Illusion, in which he argued that war between the great powers had become an economic impossibility because of “the delicate interdependence of international finance”.

In spring 1914 an international commission reported on the Balkan Wars of 1912-13. The British member of the commission, Henry Noel Brailsford, wrote: “In Europe the epoch of conquest is over and save in the Balkans perhaps on the fringes of the Austrian and Russian empires, it is as certain as anything in politics that the frontiers of our national states are finally drawn. My own belief is that there will be no more war among the six powers.”

And Ferguson re-tells the story of LTCM in his online article “Wall Street Lays Another Egg” (which also covers many of the other themes from his book):

…how exactly do you price a derivative? What precisely is an option worth? The answers to those questions required a revolution in financial theory. From an academic point of view, what this revolution achieved was highly impressive. But the events of the 1990s, as the rise of quantitative finance replaced preppies with quants (quantitative analysts) all along Wall Street, revealed a new truth: those whom the gods want to destroy they first teach math.

Working closely with Fischer Black, of the consulting firm Arthur D. Little, M.I.T.’s Myron Scholes invented a groundbreaking new theory of pricing options, to which his colleague Robert Merton also contributed. (Scholes and Merton would share the 1997 Nobel Prize in economics.) They reasoned that a call option’s value depended on six variables: the current market price of the stock (S), the agreed future price at which the stock could be bought (L), the time until the expiration date of the option (t), the risk-free rate of return in the economy as a whole (r), the probability that the option will be exercised (N), and—the crucial variable—the expected volatility of the stock, i.e., the likely fluctuations of its price between the time of purchase and the expiration date (s). With wonderful mathematical wizardry, the quants reduced the price of a call option to this formula (the Black-Scholes formula).

Feeling a bit baffled? Can’t follow the algebra? That was just fine by the quants. To make money from this magic formula, they needed markets to be full of people who didn’t have a clue about how to price options but relied instead on their (seldom accurate) gut instincts. They also needed a great deal of computing power, a force which had been transforming the financial markets since the early 1980s. Their final requirement was a partner with some market savvy in order to make the leap from the faculty club to the trading floor. Black, who would soon be struck down by cancer, could not be that partner. But John Meriwether could. The former head of the bond-arbitrage group at Salomon Brothers, Meriwether had made his first fortune in the wake of the S&L meltdown of the late 1980s. The hedge fund he created with Scholes and Merton in 1994 was called Long-Term Capital Management.

In its brief, four-year life, Long-Term was the brightest star in the hedge-fund firmament, generating mind-blowing returns for its elite club of investors and even more money for its founders. Needless to say, the firm did more than just trade options, though selling puts on the stock market became such a big part of its business that it was nicknamed “the central bank of volatility” by banks buying insurance against a big stock-market sell-off. In fact, the partners were simultaneously pursuing multiple trading strategies, about 100 of them, with a total of 7,600 positions. This conformed to a second key rule of the new mathematical finance: the virtue of diversification, a principle that had been formalized by Harry M. Markowitz, of the Rand Corporation. Diversification was all about having a multitude of uncorrelated positions. One might go wrong, or even two. But thousands just could not go wrong simultaneously.

The mathematics were reassuring. According to the firm’s “Value at Risk” models, it would take a 10-s (in other words, 10-standard-deviation) event to cause the firm to lose all its capital in a single year. But the probability of such an event, according to the quants, was 1 in 10^24—or effectively zero. Indeed, the models said the most Long-Term was likely to lose in a single day was $45 million. For that reason, the partners felt no compunction about leveraging their trades. At the end of August 1997, the fund’s capital was $6.7 billion, but the debt-financed assets on its balance sheet amounted to $126 billion, a ratio of assets to capital of 19 to 1.

There is no need to rehearse here the story of Long-Term’s downfall, which was precipitated by a Russian debt default. Suffice it to say that on Friday, August 21, 1998, the firm lost $550 million—15 percent of its entire capital, and vastly more than its mathematical models had said was possible. The key point is to appreciate why the quants were so wrong.

The problem lay with the assumptions that underlie so much of mathematical finance. In order to construct their models, the quants had to postulate a planet where the inhabitants were omniscient and perfectly rational; where they instantly absorbed all new information and used it to maximize profits; where they never stopped trading; where markets were continuous, frictionless, and completely liquid. Financial markets on this planet followed a “random walk,” meaning that each day’s prices were quite unrelated to the previous day’s, but reflected no more and no less than all the relevant information currently available. The returns on this planet’s stock market were normally distributed along the bell curve, with most years clustered closely around the mean, and two-thirds of them within one standard deviation of the mean. On such a planet, a “six standard deviation” sell-off would be about as common as a person shorter than one foot in our world. It would happen only once in four million years of trading.

But Long-Term was not located on Planet Finance. It was based in Greenwich, Connecticut, on Planet Earth, a place inhabited by emotional human beings, always capable of flipping suddenly and en masse from greed to fear. In the case of Long-Term, the herding problem was acute, because many other firms had begun trying to copy Long-Term’s strategies in the hope of replicating its stellar performance. When things began to go wrong, there was a truly bovine stampede for the exits. The result was a massive, synchronized downturn in virtually all asset markets. Diversification was no defense in such a crisis. As one leading London hedge-fund manager later put it to Meriwether, “John, you were the correlation.”

There was, however, another reason why Long-Term failed. The quants’ Value at Risk models had implied that the loss the firm suffered in August 1998 was so unlikely that it ought never to have happened in the entire life of the universe. But that was because the models were working with just five years of data. If they had gone back even 11 years, they would have captured the 1987 stock-market crash. If they had gone back 80 years they would have captured the last great Russian default, after the 1917 revolution. Meriwether himself, born in 1947, ruefully observed, “If I had lived through the Depression, I would have been in a better position to understand events.” To put it bluntly, the Nobel Prize winners knew plenty of mathematics but not enough history.
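The tail probabilities in Ferguson’s account are easy to check for yourself.  As a sketch (assuming, as those Value at Risk models did, that daily returns follow a standard normal distribution, and assuming roughly 252 trading days per year), a few lines of Python reproduce both the “once in four million years” figure for a six-standard-deviation event and the roughly one-in-10^24 figure for a ten-standard-deviation one:

```python
import math

# Upper-tail probability of a standard normal distribution: P(Z > x)
def normal_tail(x):
    return 0.5 * math.erfc(x / math.sqrt(2))

# A 6-standard-deviation daily sell-off, at ~252 trading days per year:
p6 = normal_tail(6)              # roughly 1e-9 per day
years = 1 / (p6 * 252)
print(f"{years:.2e}")            # about 4 million years

# A 10-standard-deviation event -- "effectively zero" probability:
p10 = normal_tail(10)
print(f"{p10:.1e}")              # roughly 1 in 10^24
```

Of course, the whole point of the episode is that the normality assumption itself was wrong: real markets have far fatter tails than the bell curve allows.
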

These episodes should remind us of the fragility of our current situation.  Indeed, as one of many potential future scenarios, Ferguson candidly discusses the prospects for a serious breakdown in relations between China and the west, akin to the breakdown of relations that precipitated the First World War.

In summary: I recommend this book, not only because it is full of intriguing anecdotes, but because it will help to raise awareness of the complex impacts of financial systems.  It will help boost general literacy about all aspects of money – and should, therefore, help us to be more effective in how we collectively manage financial innovation.

Note: There are two editions of this book: one released in 2008, and one released in 2009.  The latter has a fuller account of the recent global financial crisis, and for that reason, is the better one to read.

31 March 2010

Shorter and sharper: improved video on priorities

Filed under: communications, futurist, Humanity Plus, presentation, YouTube — David Wood @ 1:06 pm

The above video provides context for the Humanity+ UK2010 event happening on 24th April.

It’s the second version of this video.  In the spirit of continuous improvement, this version:

  • Has better audio (I found out how to get my laptop to accept input from a jack mic);
  • Is shorter (it needs to be under 10 minutes in length to be accepted onto YouTube);
  • Has some improved layout and logic.

As a video, it’s still far from perfect!  As you can see, my video creation skills are still rudimentary.  But hopefully people will find the contents interesting.

It’s probably foolhardy of me to try to cover so much material in just 10 minutes.  I’m considering creating a short book on this topic, in order to do fuller justice to these ideas.

Video transcript

In case anyone would prefer a written version of what I said, I append a transcript.  Everyone else can stop reading now.

(Note: this transcript doesn’t match the video exactly, since I ad-libbed here and there.)

My name is David Wood.  I’m going to briefly describe the Humanity+ UK2010 event that will be taking place in London on Saturday 24th April.

As context, let me outline what I’m calling “The Humanity+ Agenda”:

  • This is a proposed set of 20 priorities – 20 items that in my view deserve significantly more attention, analysis, resourcing, and funding, over the coming decade.
  • These priorities are proposed responses to an interlinked set of major challenges that confront society.

The first of these challenges is the threat of environmental catastrophe – lack of clean, sustainable energy and other critical resources.  Second is the threat of economic collapse.  We’re still in the midst of the most serious economic crisis of the last 60 years.  Third is the risk of some fundamentalist terrorists getting their hands on fearsome weapons of mass destruction.  Fourth is a more subtle point: the growing sense of alienation and discontent as individuals all over the world increasingly realise that their own share of possible peak experiences is very limited and transitory.  All this adds up to a radically uncertain future, made all the more challenging due to the need to drastically cut back activities to pay for the ongoing economic crisis.

The single thing that will make the biggest difference to whether we overcome these deep challenges is technology.  Accelerating technology can supply many far-reaching solutions.  But technology cannot stand alone.  Improved technology depends on improved education and improved rationality.  The relationship goes both ways.  There’s another two-way relation with improved health and improved vitality.  Likewise for improved social structure; and for the full expression of human potential.

The 20 priorities fall into these five themes.  These are five areas where there’s already a lot of expenditure – from both government and industry.  But we have to raise our game in each of these areas.  We need to become smarter and more effective in each area.  Rather than “health” I’d like to talk about “super health”, or “health plus”.  Similarly, we need substantially improved education and reasoning ability, substantially improved technology, and substantially improved social structure.  All this will take human experience and capability to a significantly higher level – “Humanity plus”.

So let’s start listing the 20 priorities.  You’ll notice many interconnections.

In the field of Health+, we need to accelerate the progress of preventive medicine.  Fixing medical problems at early stages can be a much more cost effective way of spending a limited health budget.  Healthy individuals contribute to society more, rather than being a drain on its resources.  Going further, the slogan “better than well” should also become a priority.  People with exceptional levels of fitness, strength, perseverance, and vitality, can contribute even more to society.

Anti-aging treatments are an important special case of the previous priorities.  Many diseases are exacerbated because our bodies have accumulated different kinds of damage over the years – which we call “aging”.  Systematically removing or repairing this damage will have many benefits.

Education+ refers to people improving their skillsets and reasoning ability, all throughout their lives.  Behavioural psychology is pointing out many kinds of irrational bias in how all of us reach decisions.  We all need help in identifying and overcoming these biases.

One example is the undue influence that fundamentalist thinking can hold over people – when dogma from “scripture” or “tradition” or a “prophet” overrides the conclusions of rational debate.  The world is, today, too dangerous a place to allow dogma-driven people to hold positions of great power.

An important part of freeing people from limited thinking is to boost education about the status of accelerating technology – covering the opportunities, risks, context, and options.

Another way we can become smarter – and more sociable – is via cognitive enhancement and intelligence augmentation.  This includes drugs that improve our thinking and/or our mood, and silicon accompaniments to our biological brains.  Being connected to the Internet, via the likes of Google and Wikipedia, already boosts our knowledge significantly.

Before long, we could have at our fingertips access to Artificial General Intelligence, whereby computers can provide first class answers to tough questions that previously eluded even the smartest teams of people.  For example, I expect that many cures for diseases will be developed in collaboration with increasingly intelligent silicon super-brains.

That takes us to Technology+, the set of technologies underpinning the other changes I am describing.  Improved robots could provide unmatched precision and manual dexterity, as well as great diligence and power.

Nanotechnology could enable the creation of highly useful new materials, compounds, and tools.  Synthetic biology, in turn, could apply techniques from manufacturing and software to create new biological forms, with huge benefits for health, food, energy, and more.  Research into large-scale clean energy could finally solve our energy sustainability issues.  And underpinning all these technologies should be new generations of ICT – information and communications technologies, especially improvements in software.

But technology requires support from society in order to advance quickly and wisely.  Under the heading “Society+” I identify four priority areas: patent system reform, smart market regulation, the expansion of the domain of collaborative voluntary enterprise, and vibrant democratic involvement and oversight, which enables an inclusive open discussion on the best way to manage the future.

Finally, under the heading “Humanity+” we have three priorities: expansion of human choice and autonomy, developing new ways of measuring human accomplishment – that avoid the well-known drawbacks of purely economic measurements – and “geo-engineering capability”.  I’m reminded of the recent statement by veteran ecologist Stewart Brand: “We are as gods, and HAVE to get good at it”.  It’s a frightening responsibility, but there is no alternative.

In summary, 20 interlinked priority areas in five themes: health+, education+, technology+, society+, and humanity+.  In each case, we must reach new levels of achievement.  Happily, we have in our hands the means to do so.  But let’s not imagine that things will be easy.  The next 10-20 years will probably be the most critical in the history of humanity.

In the midst of great difficulties, we’ll no doubt be sorely tempted by six dangerous distractions.

First is the idea that human progress is somehow inevitable, as if governed by some kind of cosmic law.  Alas, I see nothing pre-determined.  We need to become activists, rather than passive bystanders.

Second is the idea that the free market economy, if set up properly and then left to its own devices, will automatically generate the kinds of improvement in technology and product that I am talking about.  Sorry, although markets have been a powerful force for development over history, they’re far from perfect.

Nature – and evolution by natural selection – is another force which has accomplished a great deal, but which is far from optimal.  Nature is full of horrors as well as beauty.  Humans have been augmenting nature with enhancements from technology from before the beginning of recorded history.  This process absolutely needs to continue.

Risk aversion is another dangerous temptation.  Yet if we do nothing, we’re going to be in significant trouble anyway.  Either way, we can’t avoid risk – we just have to become better at evaluating it and managing it.

Next on this list is religion – any view that all the important answers have already been revealed.  I see religion as akin to several of the other temptations on this list: it has achieved a great deal in the past, but is far from being the sole guide to what we must do next.

Last on this list is humanism – the idea that humans, with our present set of attributes and skills, will be sufficient to build the best possible future environment.  However, present-day humans are no more the end point of progress than were simians – monkeys – or mammals.  In my view, it is only the significantly enhanced humans of the near future who will, collectively, be able to guide society and civilisation to reach our true potential.

We can succeed by progress, not by standing still.  We can succeed by transcending nature with enhanced technology, and by restructuring society in ways more favourable to innovation, collaboration, choice, and participation.

If these ideas strike you as interesting, one way you can continue the discussion is at the Humanity+ UK2010 event, on the 24th of April.  This will be held in Conway Hall, in Holborn, London.  You can register for the event at the website humanityplus dash uk dot com.  There will be 10 speakers, including many of the pioneering thinkers of the modern transhumanist or Humanity+ movement.

  • In the morning, the key speakers are Max More, Anders Sandberg, and Rachel Armstrong.
  • After lunch, the speakers will be Aubrey de Grey, David Pearce, and Amon Twyman.
  • Later in the afternoon, we’ll hear from Natasha Vita-More, David Orban, and Nick Bostrom.

You can find more details on the conference website.  If you’re quick, you may also be able to book one of the few remaining places at the post-event dinner, where all the speakers will be attending.  I hope to see you there.

I look forward to continuing this important discussion!

28 March 2010

A video experiment: 20 priorities

Filed under: communications, futurist, Humanity Plus, presentation, UKH+ — David Wood @ 9:38 am


Video: 20 priorities for the coming decade

The video linked above is my attempt to address several different requirements:

  1. To follow up some ideas about the list of priorities I mentioned previously, tentatively named “The Humanity+ Agenda”;
  2. To find an interesting new way to help publicise the forthcoming (April 24th) “Humanity+ UK2010” event;
  3. To experiment with creating videos, to use for communications purposes, as a complement to textual blog posts.

As you can see, it’s based on Powerpoint – a tool I know well.

What I didn’t appreciate about Powerpoint, before, is the fact that you can embed an audio narrative, to playback automatically as the slides and animations progress.  So that’s what I decided to do.

First time round, I tried to ad lib remarks, as I progressed through the slides, but that didn’t work well.  Next, I wrote down an entire script, and read from that.  The result is a bit flat and jaded in places, and there are a few too many verbal fluffs for my liking.  When I try this again, I’ll set aside more time, and make myself re-do the narration for a slide each time I fluff a few words.

I also hit some bugs (and quirks) when using the “Record narration” features of PowerPoint.  Some of these seem to be known features, but not all:

  • A few seconds of the narration often gets truncated from the end of each slide.  The workaround is to wait three seconds after finishing speaking, before advancing to the next slide;
  • The audio quality for the first slide was very crackly every time, no matter what I tried.  The workaround is to insert an extra “dummy” slide at the beginning, and to discard that slide before publishing;
  • There’s a pair of loud audible cracks at the start of each slide.  I don’t know any workaround for that;
  • Some of the timing, during playback, is slightly out of synch with what I recorded: animations on screen sometimes happen a few seconds before the accompanying audio stream is ready for them.

I used authorSTREAM as the site to store the presentation.  They offer the following features:

  • Support for playback of presentations containing audio narration;
  • Support for converting the presentation into video format.

The authorSTREAM service looks promising – I expect to use it again!

Footnote: I’ll update this posting shortly, with a copy of the video embedded, rather than linked.  (I still find video embedding to be a bit of a hit-or-miss process…)

