dw2

2 December 2009

The next IT industry: Synthetic biology

Filed under: risks, Singularity University, Synthetic biology — David Wood @ 2:30 am

Synthetic biology will be “the next IT industry”, and will even be “more important than the last one”.  These are two of the claims in the extraordinary video from the Singularity University featuring Andrew Hessel.

The video lasts nearly one hour, and is full of thought-provoking material.  The subtitle of the video is “hacking genomes”.

Here are just a few of the highlights and topics I noted while watching it:

  • Cells inside organisms are in many ways akin to computers inside networks
  • People with engineering backgrounds are bringing engineering ideas into biology
  • push-button biology: “dream is to design … press a button, and have the design translated to DNA sequences that can be synthesised and put to work in living cells”
  • “DNA printers” will become better and better
  • iGEM: the International Genetically Engineered Machine competition
  • DIYbio: “an organization dedicated to making biology an accessible pursuit for citizen scientists, amateur biologists, and DIY biological engineers”
  • Developing a genetic programming language
  • Creating the conditions for the emergence of a new generation of “computing whiz kids” – the synthetic biotech equivalents of Steve Wozniak, Steve Jobs, Paul Allen, and Bill Gates
  • “We’ll soon see molecular biological labs on iPhones”
  • Cost decrease curve for DNA synthesis (“writing DNA”) is tracking that for DNA sequencing (“reading DNA”), lagging it by around 8 years
  • “The human genome synthesis project is coming”
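The "lagging it by around 8 years" point can be made concrete with a toy exponential model.  On any cost curve that halves every fixed number of years, a curve lagging another by a fixed interval is more expensive by a constant factor.  A minimal sketch (the 2-year halving time below is an assumption for illustration, not a figure from the talk):

```python
# Toy model: if a technology's cost halves every `halving_years`,
# a curve lagging another by `lag_years` is costlier at any moment
# by the constant factor 2 ** (lag_years / halving_years).
def lag_cost_ratio(lag_years, halving_years):
    return 2 ** (lag_years / halving_years)

# With an assumed 2-year halving time, an 8-year lag would mean DNA
# synthesis costs ~16x what sequencing costs at the same moment.
print(lag_cost_ratio(8, 2))  # 16.0
```

The interesting consequence is that the ratio stays constant even as both absolute costs plummet.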

This is the same field where Craig Venter (famous from the first human genome project) is now working.  To quote from the website of his company, Synthetic Genomics:

The Global Challenge: Sustainably meeting the increasing demand for critical resources

The world is facing increasingly difficult challenges today. Population growth resulting in the growing demand for critical resources such as energy, clean water, food and medicine are taxing our fragile planet. To fulfill these needs we need disruptive technologies. We believe genomic advances offer the world viable, sustainable alternatives.

At Synthetic Genomics Inc. we are creating genomic-driven commercial solutions to revolutionize many industries. We have started by focusing on energy, but we imagine a future where our science could be used to produce a variety of products, from synthetically derived vaccines to prevent human diseases to efficient cost effective ways to create clean drinking water. The world is dependent on science and we’re leading the way in turning novel science into life-changing solutions.

Three possible reactions to the idea of synthetic biology

One reaction to the idea of synthetic biology is to say, “Wow – I’d love to become involved!”

A second reaction is to point out the potential huge risks if the process creates dangerous new life forms, such as a fast-spreading new virus.  One of the audience members in the video lecture asked about this; I wasn’t fully convinced by the answer Andrew Hessel gave.

A third reaction is to say that it’s very unlikely that we will, in fact, be able to improve on nature.  This is similar to a comment made by Mark Wilcox in response to my previous blogpost, “The single biggest problem”.  I wrote that:

rather than seeing “natural” as somehow akin to “the best imaginable”, we must be prepared to engineer solutions that are “better than natural”

Mark replied:

I actually find it rather arrogant given millions of years of evolution and our relatively short spell of technological development that any of us presume to know what “better than natural” actually is

This last point in turn poses two questions:

  • Is the outcome of “millions of years of evolution” the best outcome possible?
  • If not, is there any reliable way to try to do better than evolution?

For a discussion of the imperfect output of evolution, see (for example) my earlier blogpost, “The human mind as a flawed creation of nature”.

It’s also well worth reading the paper by Nick Bostrom and Anders Sandberg, “The Wisdom of Nature: An Evolutionary Heuristic for Human Enhancement” (PDF).  Here’s a copy of the abstract of that paper:

Human beings are a marvel of evolved complexity. Such systems can be difficult to enhance. When we manipulate complex evolved systems, which are poorly understood, our interventions often fail or backfire.

It can appear as if there is a ‘‘wisdom of nature’’ which we ignore at our peril. Sometimes the belief in nature’s wisdom—and corresponding doubts about the prudence of tampering with nature, especially human nature—manifest as diffusely moral objections against enhancement. Such objections may be expressed as intuitions about the superiority of the natural or the troublesomeness of hubris, or as an evaluative bias in favor of the status quo. This chapter explores the extent to which such prudence-derived anti-enhancement sentiments are justified. We develop a heuristic, inspired by the field of evolutionary medicine, for identifying promising human enhancement interventions. The heuristic incorporates the grains of truth contained in ‘‘nature knows best’’ attitudes while providing criteria for the special cases where we have reason to believe that it is feasible for us to improve on nature.

In conclusion, I personally see this emerging field as being full of tremendous promise, though I will seek to ensure that it is approached with great care and thoughtfulness (as well as excitement).

29 November 2009

The single biggest problem

Filed under: green, solar energy, UKH+, vision — David Wood @ 2:35 pm

Petra Söderling, my good friend and former colleague on the Symbian Foundation launch team, raises some important questions in a blogpost yesterday, Transhumans H+.  Petra remarked on the fact that I had included the text “UKH+ meetings secretary” on my new business card.  A TV program she watched recently had reminded her of the topic of transhumanism (often abbreviated to H+ or h+) – prompting her blogpost:

…I haven’t changed my mind, David. I still think this is not pressingly important or urgent. In my view, the single biggest problem we have at hand is that people are breeding like rabbits, and the planet cannot feed us all. Us rich westerners consume so much natural resources that just supporting our lifestyle would be a burden. But, we are not only idiots in our own consumption manners, we are idiots in showing the rest of the world that this is the preferred lifestyle. Our example leads to billions of people in developing and underdeveloped countries pursuing our way of living. This is done by unprecedented exploitation of resources everywhere.

We’re in a process of eating our home planet away, and helping the richest of us to live healthier and longer is no solution. What’s the point of living 150 years if you’re breathing manufactured air, all migrated to north and south poles from desert lands, and eating tomatos that are clone of a clone of a clone of a clone of a clone? As rich and clever as we are, I think we should solve first things first…

The mention of “first things first” and “single biggest problem” is music to my ears.  I’m currently engaged on a personal research program to try to clarify what, for me, should be the “first things” that deserve my own personal focus.  Having devoted the last 21 years of my work life to mobile software, particularly for smartphones, I’m now looking to determine where I should apply my skills and resources for the next phase of my professional life.

I completely agree with Petra that the current “western consumer lifestyle” is not sustainable.  As more and more people throughout the developing world adopt similar lifestyles, consuming more and more resources, the impact on our planet is becoming colossal.  It’s a very high priority to address this lack of sustainability.

But is the number of people on the planet – our population – the most important leverage point, to address this lack of sustainability?  There are at least four factors to consider:

  1. World population
  2. The resource consumption of the average person on the planet
  3. The outcome of processes for creating resources
  4. Side-effects of processes for creating resources.

Briefly, we are in big trouble if (1) × (2) exceeds (3), and/or if the side-effects (4) are problematic in their own right.

My view is that the biggest leverage will come from addressing factors (3) and (4), rather than (1) and (2).
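The bookkeeping behind these four factors can be sketched in a few lines of code.  All the numbers below are purely illustrative placeholders in arbitrary units, not real-world estimates:

```python
def in_big_trouble(population, consumption_per_person,
                   resource_output, side_effects_harmful):
    """Factors (1)-(4): trouble if total demand exceeds resource output,
    and/or if the side-effects are problematic in their own right."""
    total_demand = population * consumption_per_person
    return total_demand > resource_output or side_effects_harmful

# Illustrative placeholders only: demand of 1.4e10 exceeds output of 1e10.
print(in_big_trouble(7e9, 2.0, 1e10, False))  # True
```

The point of the sketch is simply that there are two independent ways to fail, and (as argued above) two corresponding levers: raising the output term, and cleaning up the side-effects term.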

For example, huge amounts of energy from the sun are hitting the earth the whole time.  To quote from chapter 25 of David MacKay’s first-class book “Sustainable energy without the hot air”,

…the correct statement about power from the Sahara is that today’s [global energy] consumption could be provided by a 1000 km by 1000 km square in the desert, completely filled with concentrating solar power. That’s four times the area of the UK. And if we are interested in living in an equitable world, we should presumably aim to supply more than today’s consumption. To supply every person in the world with an average European’s power consumption (125 kWh/d), the area required would be two 1000 km by 1000 km squares in the desert…
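MacKay’s “two 1000 km by 1000 km squares” figure can be checked with back-of-envelope arithmetic.  The 15 W/m² delivered by desert concentrating solar power is MacKay’s own estimate elsewhere in the book; the round 6 billion population figure is my assumption for the sketch:

```python
# Back-of-envelope check of MacKay's "two 1000 km squares" figure.
kwh_per_day = 125                           # average European consumption per person
watts_per_person = kwh_per_day * 1000 / 24  # ~5.2 kW continuous
population = 6e9                            # round 2009-era world population (assumption)
csp_watts_per_m2 = 15                       # MacKay's estimate for desert CSP

area_m2 = population * watts_per_person / csp_watts_per_m2
area_km2 = area_m2 / 1e6
print(round(area_km2 / 1e6, 2), "million km^2")  # ~2.08, i.e. roughly two 1000x1000 km squares
```

The arithmetic lands within a few percent of the quoted figure, which is reassuring for a calculation this crude.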

In parallel with thoughtfully investigating this kind of massive-scale solar energy harvesting, it also makes sense to thoughtfully investigate massive-scale CO2 removal from the atmosphere (the topic of a blogpost I plan to write shortly) as well as other geo-engineering initiatives.  In line with the transhumanist philosophy I espouse, I’m keen to

support and encourage the thoughtful development and application of technology to significantly enhance human mental and physical capabilities – with profound possible consequences on both personal and global scales

There are, of course, large challenges facing attempts to create massive-scale solar energy harvesting and massive-scale CO2 removal from the atmosphere.  These challenges span technology, politics, economics, and, dare I say it, philosophy.

In a previous posting, The trend beyond green, I’ve spelt out some desired changes in mindset that I see as required, on a global scale:

  • rather than decrying technology as “just a technical fix”, we must be willing to embrace the new resources and opportunities that these technologies make available;
  • rather than seeking to somehow reverse human lifestyle and aspiration to that of a “simpler” time, we must recognise and support the deep and valid interests in human enhancements;
  • rather than thinking of death and decay as something that gives meaning to life, we must recognise that life reaches its fullest meaning and value in the absence of these scourges;
  • rather than seeing the status quo as somehow the pinnacle of existence, we must recognise the deep drawbacks in current society and philosophies, and be prepared to move forwards;
  • rather than seeing “natural” as somehow akin to “the best imaginable”, we must be prepared to engineer solutions that are “better than natural”;
  • rather than seeking to limit expectations, with comments such as “this kind of enhancements might become possible in 100-200 years time”, we should recognise the profound possible synergies arising from the interplay of technologies that are individually accelerating and whose compound impact can be much larger.

Helping to accelerate these changes in mindset is one of the big challenges I’d like to adopt, in the next phase of my professional life.

Whatever course society adopts to address our sustainability crisis, there will need to be some very substantial changes.  People embrace change much more willingly if they see upside as well as downside in it.  The H+ vision of the future I see is one of abundance (generated by the super-technology of the near future), along with societal harmony (peaceful coexistence) and ample opportunities for new growth and exploration.

To return in closing to the question raised earlier: what is the “single biggest problem” that most deserves our collective attention?  Is it population growth and demographics, global warming, shortage of energy, the critical instability of the world economic order, the potential for a new global pandemic, nuclear terrorism, or some other global existential risk?

In a way, the answer is “none of the above”.  Rather, the single biggest problem is that, globally, we are unable to collaborate sufficiently deeply and productively to develop and deploy solutions to the above issues.  This is a second-level problem.  The economic, political, and philosophical structures we have inherited from the past have very many positive aspects, but many drawbacks as well – drawbacks that are becoming ever more pressing as we see accelerating change in technology, resource usage, and communications.

26 November 2009

Forthcoming speaking engagements

Filed under: Bangalore, Cambridge, developer experience, openness, presentation — David Wood @ 8:50 pm

Cambridge Wireless, 3rd December

Next Thursday, 3rd December, I’ll be participating in a meeting of the Software & Open Source SIG (Special Interest Group) of Cambridge Wireless.  Cambridge Wireless is a community of companies in and around Cambridge with the following declared ambition:

Our objective is to establish Cambridge Wireless as the leading wireless community in the world and where we are at the leading edge of thought, leadership and wireless technology discussions

Symbian Software Ltd was one of the founding members of Cambridge Wireless, and the Symbian Software Ltd office on the outskirts of Cambridge hosted several Cambridge Wireless events over the years.  The ones I attended provided excellent networking and stimulating conversation.

Cambridge Wireless organise a series of SIGs – such as the one on Software & Open Source.  Next Thursday, this SIG will be gathering to review a set of presentations addressing the topic,

Open handset ecosystems – can they deliver handsets that consumers want?

My own presentation at this event has the title,

Open Ecosystems – a Good Thing?

Here’s the abstract for my presentation:

David will look at the various ways in which openness is changing the way that handsets are being developed through the use of open ecosystems, how developer ecosystems are transforming the way that applications and services are being created, and give his view on the impact of this on consumers. Openness brings challenges as well as triumphs. Done right, however, the open community approach will (over time) generate better solutions than any system of tight control – and consumers will reap the benefits.

The other speakers at the event will be:

  • Alberto Bonamico of Symsource – talking on “Adapting to the Open Source Ecosystem”
  • Andrew Savory of LiMo Foundation – talking on “Open Apps – Good, Bad or Ugly?”

There will be a panel discussion after the talks, with plenty of opportunity for Q&A from the floor.

Registration for this event is still open, via the Cambridge Wireless website.

Special thanks are due to Peter Montgomery of Ogma Solutions and Phillip Burr of Octymo, the champions of this SIG.

Forum Nokia Developer Conference, Bangalore, 7th December

The following Monday, 7th December, I’ll have the privilege to join another illustrious panel of speakers, at the Forum Nokia Developer ’09 conference in Bangalore, India.  This conference has the theme,

Unlock Star within you

(Anyone familiar with the UI on Nokia phones will appreciate the double significance of this name.)

As stated on the event website:

Unlock possibilities!

A world of infinite possibilities is waiting for you. It opens up when you press the Unlock Star keys of a mobile phone!

Discover the amazing new ways in which Mobiles are simplifying life, helping people to connect, communicate and access information at Forum Nokia Developer Conference ’09, the biggest forum in India for mobile application developers. See how everything anyone can imagine can be possible, at the touch of a fingertip and from the top of the palm.

Unlock Learning!

Get the right inputs, insights and network at Forum Nokia Developer Conference ’09. Access all the resources you will ever need to turn your ideas into reality. Get insights from industry leaders. Learn tricks and tips to create applications, faster. Explore the value OVI store has to offer to you as a developer. And more…

You and your mobile application is all it takes!

Your innovation can play a key role in shaping the future and enabling people. Create it to run on hundreds of millions of mobile phones and transform yourself into a global star.

Keynote speakers at this event include:

D. Shivakumar, Managing Director, Nokia Mobile phones and Vice President, Nokia

Purnima Kochikar, Vice President, Forum Nokia and Developer Community

On this occasion, I’ll be speaking on the topic

Winning habits of star developers

People interested to attend can register via the conference website.

The secrets of consulting

Filed under: books, consulting — David Wood @ 12:38 pm

One thing I’m likely to want to do in the weeks and months ahead is to earn some income via consulting (perhaps on an interim basis).  I’ve therefore updated my own (still rudimentary) “business” website, http://deltawisdom.com, to mention that I can “provide high-value facilitation, consultancy, and presentations”.

Responding to this, my good friend and long-term Symbian colleague, John Pagonis of Pragmaticomm, sent me a short piece of advice:

may I suggest you study the “Secrets of Consulting” by G. M. Weinberg again if you haven’t done this already

I took John’s advice and have just finished reading the book – full title is “The secrets of consulting: a guide to giving and getting advice successfully”.

It contains a lot of interesting and useful ideas for an aspiring consultant, expressed with good humour, and memorably summed up in pithily-stated laws.

Here are just a few examples:

You can make buffalo go anywhere just so long as they want to go there

Trust takes years to win, moments to lose

The trick of earning trust is to avoid all tricks

Nobody but you cares about the reason you let them down

Spend at least one fourth of your time doing nothing

Pricing has many functions, only one of which is the exchange of money

In spite of what your client says, there’s always a problem

No matter how it looks at first, it’s always a people problem

Clients always know how to solve their problems, and always tell the solution in the first five minutes

Consultants should not care who gets the credit… When an effective consultant is present, the client solves problems

(This is just a small fraction of the laws stated – and explained – in the book.)

I think I already held views similar to those the author explains, so I didn’t get any blinding “aha” insight from the book.  However, the laws are very handy reminders.  Indeed, Weinberg states a law about that too:

What you don’t know may not hurt you, but what you don’t remember always does

For me, the chapters “Marketing yourself” and “Putting a price on your head” were probably the most useful 🙂

22 November 2009

Timescales for Human Body Version 2.0

Filed under: aging, Kurzweil, nanotechnology — David Wood @ 7:21 pm

In the coming decades, a radical upgrading of our body’s physical and mental systems, already underway, will use nanobots to augment and ultimately replace our organs. We already know how to prevent most degenerative disease through nutrition and supplementation; this will be a bridge to the emerging biotechnology revolution, which in turn will be a bridge to the nanotechnology revolution. By 2030, reverse-engineering of the human brain will have been completed and nonbiological intelligence will merge with our biological brains.

The paragraph above is the abstract for the chapter by Ray Kurzweil in the book “The Scientific Conquest of Death“.  In that chapter, Ray sets out a vision for a route to indefinite human lifespans.

Here are a few highlights from the essay:

It’s All About Nanobots

In a famous scene from the movie, The Graduate, Benjamin’s mentor gives him career advice in a single word: “plastics.”  Today, that word might be “software,” or “biotechnology,” but in another couple of decades, the word is likely to be “nanobots.”  Nanobots—blood-cell-sized robots—will provide the means to radically redesign our digestive systems, and, incidentally, just about everything else.

In an intermediate phase, nanobots in the digestive tract and bloodstream will intelligently extract the precise nutrients we need, call for needed additional nutrients and supplements through our personal wireless local area network, and send the rest of the food we eat on its way to be passed through for elimination.

If this seems futuristic, keep in mind that intelligent machines are already making their way into our blood stream.  There are dozens of projects underway to create blood-stream-based “biological microelectromechanical systems” (bioMEMS) with a wide range of diagnostic and therapeutic applications.  BioMEMS devices are being designed to intelligently scout out pathogens and deliver medications in very precise ways…

A key question in designing this technology will be the means by which these nanobots make their way in and out of the body.  As I mentioned above, the technologies we have today, such as intravenous catheters, leave much to be desired.  A significant benefit of nanobot technology is that unlike mere drugs and nutritional supplements, nanobots have a measure of intelligence.  They can keep track of their own inventories, and intelligently slip in and out of our bodies in clever ways.  One scenario is that we would wear a special “nutrient garment” such as a belt or undershirt.  This garment would be loaded with nutrient bearing nanobots, which would make their way in and out of our bodies through the skin or other body cavities.

At this stage of technological development, we will be able to eat whatever we want, whatever gives us pleasure and gastronomic fulfillment, and thereby unreservedly explore the culinary arts for their tastes, textures, and aromas.  At the same time, we will provide an optimal flow of nutrients to our bloodstream, using a completely separate process.  One possibility would be that all the food we eat would pass through a digestive tract that is now disconnected from any possible absorption into the bloodstream.

Elimination

This would place a burden on our colon and bowel functions, so a more refined approach will dispense with the function of elimination.  We will be able to accomplish this using special elimination nanobots that act like tiny garbage compactors.  As the nutrient nanobots make their way from the nutrient garment into our bodies, the elimination nanobots will go the other way.  Periodically, we would replace the nutrition garment for a fresh one.  One might comment that we do obtain some pleasure from the elimination function, but I suspect that most people would be happy to do without it.

Ultimately we won’t need to bother with special garments or explicit nutritional resources.  Just as computation will eventually be ubiquitous and available everywhere, so too will basic metabolic nanobot resources be embedded everywhere in our environment.  In addition, an important aspect of this system will be maintaining ample reserves of all needed resources inside the body.  Our version 1.0 bodies do this to only a very limited extent, for example, storing a few minutes of oxygen in our blood, and a few days of caloric energy in glycogen and other reserves.  Version 2.0 will provide substantially greater reserves, enabling us to be separated from metabolic resources for greatly extended periods of time.

Once perfected, we will no longer need version 1.0 of our digestive system at all.  I pointed out above that our adoption of these technologies will be cautious and incremental, so we will not dispense with the old-fashioned digestive process when these technologies are first introduced.  Most of us will wait for digestive system version 2.1 or even 2.2 before being willing to dispense with version 1.0.  After all, people didn’t throw away their typewriters when the first generation of word processors was introduced.  People held onto their vinyl record collections for many years after CDs came out (I still have mine).  People are still holding onto their film cameras, although the tide is rapidly turning in favor of digital cameras.

However, these new technologies do ultimately dominate, and few people today still own a typewriter.  The same phenomenon will happen with our reengineered bodies.  Once we’ve worked out the inevitable complications that will arise with a radically reengineered gastrointestinal system, we will begin to rely on it more and more.

Programmable Blood

As we reverse-engineer (learn the principles of operation of) our various bodily systems, we will be in a position to engineer new systems that provide dramatic improvements.  One pervasive system that has already been the subject of a comprehensive conceptual redesign is our blood…

I’ve personally watched (through a microscope) my own white blood cells surround and devour a pathogen, and I was struck with the remarkable sluggishness of this natural process.  Although replacing our blood with billions of nanorobotic devices will require a lengthy process of development, refinement, and regulatory approval, we already have the conceptual knowledge to engineer substantial improvements over the remarkable but very inefficient methods used in our biological bodies…

Have a Heart, or Not

The next organ on my hit list is the heart.  It’s a remarkable machine, but it has a number of severe problems.  It is subject to a myriad of failure modes, and represents a fundamental weakness in our potential longevity.  The heart usually breaks down long before the rest of the body, and often very prematurely.

Although artificial hearts are beginning to work, a more effective approach will be to get rid of the heart altogether.  Designs include nanorobotic blood cell replacements that provide their own mobility.  If the blood system moves with its own movement, the engineering issues of the extreme pressures required for centralized pumping can be eliminated.  As we perfect the means of transferring nanobots to and from the blood supply, we can also continuously replace the nanobots comprising our blood supply…

So What’s Left?

Let’s consider where we are.  We’ve eliminated the heart, lungs, red and white blood cells, platelets, pancreas, thyroid and all the hormone-producing organs, kidneys, bladder, liver, lower esophagus, stomach, small intestines, large intestines, and bowel.  What we have left at this point is the skeleton, skin, sex organs, mouth and upper esophagus, and brain…

Redesigning the Human Brain

The process of reverse engineering and redesign will also encompass the most important system in our bodies: the brain.  The brain is at least as complex as all the other organs put together, with approximately half of our genetic code devoted to its design.  It is a misconception to regard the brain as a single organ.  It is actually an intricate collection of information-processing organs, interconnected in an elaborate hierarchy, as is the accident of our evolutionary history.

The process of understanding the principles of operation of the human brain is already well under way.  The underlying technologies of brain scanning and neuron modeling are scaling up exponentially, as is our overall knowledge of human brain function.  We already have detailed mathematical models of a couple dozen of the several hundred regions that comprise the human brain.

The age of neural implants is also well under way.  We have brain implants based on “neuromorphic” modeling (i.e., reverse-engineering of the human brain and nervous system) for a rapidly growing list of brain regions.  A friend of mine who became deaf while an adult can now engage in telephone conversations again because of his cochlear implant, a device that interfaces directly with the auditory nervous system.  He plans to replace it with a new model with a thousand levels of frequency discrimination, which will enable him to hear music once again.  He laments that he has had the same melodies playing in his head for the past 15 years and is looking forward to hearing some new tunes.  A future generation of cochlear implants now on the drawing board will provide levels of frequency discrimination that go significantly beyond that of “normal” hearing…

And the essay continues.  It’s well worth reading in its entirety.  A short websearch finds a slightly longer version of the same essay online, on Kurzweil’s own website, along with a conceptual illustration by media artist and philosopher Natasha Vita-More.

Evaluating the vision: the questions

Three main questions arise in response to this vision of “Human Body Version 2.0”:

  1. Is the vision technologically feasible?
  2. Is the vision morally attractive?
  3. Within what timescales might the vision become feasible?

Progress: encouraging, but not rocket-paced

A recent article in the New Scientist, Medibots: The world’s smallest surgeons, takes up the theme of nanobots with medical usage, and reports on some specific progress:

It was the 1970s that saw the arrival of minimally invasive surgery – or keyhole surgery as it is also known. Instead of cutting open the body with large incisions, surgical tools are inserted through holes as small as 1 centimetre in diameter and controlled with external handles. Operations from stomach bypass to gall bladder removal are now done this way, reducing blood loss, pain and recovery time.

Combining keyhole surgery with the da Vinci system means the surgeon no longer handles the instruments directly, but via a computer console. This allows greater precision, as large hand gestures can be scaled down to small instrument movements, and any hand tremor is eliminated…

There are several ways that such robotic surgery may be further enhanced. Various articulated, snake-like tools are being developed to access hard-to-reach areas. One such device, the “i-Snake”, is controlled by a vision-tracking device worn over the surgeon’s eyes…

With further advances in miniaturisation, the opportunities grow for getting medical devices inside the body in novel ways. One miniature device that is already tried and tested is a camera in a capsule small enough to be swallowed…

The 20-millimetre-long HeartLander has front and rear foot-pads with suckers on the bottom, which allow it to inch along like a caterpillar. The surgeon watches the device with X-ray video or a magnetic tracker and controls it with a joystick. Alternatively, the device can navigate its own path to a spot chosen by the surgeon…

While the robot could in theory be used in other parts of the body, in its current incarnation it has to be introduced through a keyhole incision thanks to its size and because it trails wires to the external control box. Not so for smaller robots under wireless control.

One such device in development is 5 millimetres long and just 1 millimetre in diameter, with 16 vibrating legs. Early versions of the “ViRob” had on-board power, but the developers decided that made it too bulky. Now it is powered externally, by a nearby electromagnet whose field fluctuates about 100 times a second, causing the legs to flick back and forth. The legs on the left and right sides respond best to different frequencies, so the robot can be steered by adjusting the frequency…

While the ViRob can crawl through tubes or over surfaces, it cannot swim. For that, the Israeli team are designing another device, called SwiMicRob, which is slightly larger than ViRob at 10 millimetres long and 3 millimetres in diameter. Powered by an on-board motor, the device has two tails that twirl like bacteria’s flagella. SwiMicRob may one day be used inside fluid-filled spaces such as those within the spine, although it is at an earlier stage of development than ViRob.

Another group has managed to shrink a medibot significantly further – down to 0.9 millimetres by 0.3 millimetres – by stripping out all propulsion and steering mechanisms. It is pulled around by electromagnets outside the body. The device itself is a metal shell shaped like a finned American football and it has a spike on the end…

The Swiss team is also among several groups who are trying to develop medibots at a vastly smaller scale, just nanometres in size, but these are at a much earlier development stage. Shrinking to this scale brings a host of new challenges, and it is likely to be some time before these kinds of devices reach the clinic.

Brad Nelson, a roboticist at the Swiss Federal Institute of Technology (ETH) in Zurich, hopes that if millimetre-sized devices such as his ophthalmic robot prove their worth, they will attract more funding to kick-start nanometre-scale research. “If we can show small devices that do something useful, hopefully that will convince people that it’s not just science fiction.”

In summary: nanoscale medibots appear plausible, but there’s still a large amount of research and development required.

Kurzweil’s prediction on timescales

The book “The Scientific Conquest of Death”, containing Kurzweil’s essay, was published in 2004.  The online version is dated 2003.  In 2003, 2010 – the end of the decade – presumably looked a long way off.  In the essay, Kurzweil makes some predictions about the speed of progress towards Human Body Version 2.0:

By the end of this decade, computing will disappear as a separate technology that we need to carry with us.  We’ll routinely have high-resolution images encompassing the entire visual field written directly to our retinas from our eyeglasses and contact lenses (the Department of Defense is already using technology along these lines from Microvision, a company based in Bothell, Washington).  We’ll have very-high-speed wireless connection to the Internet at all times.  The electronics for all of this will be embedded in our clothing.  Circa 2010, these very personal computers will enable us to meet with each other in full-immersion, visual-auditory, virtual-reality environments as well as augment our vision with location- and time-specific information at all times.

Progress with miniaturisation of computers – and the adoption of smartphones – has been impressive since 2003.  However, it’s now clear that some of Kurzweil’s predictions were over-optimistic.  If his predictions for 2010 were over-optimistic, what should we conclude about his predictions for 2030?

The conflicting pace of technological progress

My own view of predictions is that they are far from “black and white”.  I’ve made my own share of predictions over the years, about the rate of progress with smartphone technologies.  I’ve also reflected on the fact that it’s difficult to draw conclusions about the rate of change.

For example, from my “Insight” essay from November 2006, “The conflicting pace of mobile technology“:

What’s the rate of improvement of mobile phones?  Disconcertingly, the answer is both “surprisingly fast” and “surprisingly slow”…

A good starting point is the comment made by Monitor’s Bhaskar Chakravorti in his book “The slow pace of fast change”, when he playfully dubbed a certain phenomenon as “Demi Moore’s Law”.  The phenomenon is that technology’s impact in an interconnected marketplace often proceeds at only half the pace predicted by Moore’s Law.  The reasons for this slower-than-expected impact are well worth pondering:

  • New applications and services in a networked marketplace depend on simultaneous changes being coordinated at several different points in the value chain
  • Although the outcome would be good for everyone if all players kept on investing in making the required changes, these changes make much less sense when viewed individually.

Sometimes this is called “the prisoner’s dilemma”.  It’s also known as “the chicken and egg problem”.

The most interesting (and the most valuable) smartphone services will require widespread joint action within the mobile industry, including maintaining openness to new ideas, new methods, and new companies.  It also requires a spirit of “cooperate before competing”.  If adjacent players in the still-formative smartphone value chain focus on fighting each other for dominance in our current small pie, it will prevent the stage-by-stage emergence of killer new services that will make the pie much larger for everyone’s benefit.

Thankfully, although the network effects of a complex marketplace can act to slow down the emergence of new innovations while that market is still being formed, those same effects can work in the opposite direction once all the pieces of the smartphone open virtuous cycle have learned to collaborate with maximum effectiveness.  When that happens, the pace of mobile change can even exceed that predicted by Moore’s Law…
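The coordination trap described in that extract can be sketched as a simple two-player game (the payoffs below are entirely invented for illustration, not drawn from the essay): if both players in a value chain invest, everyone gains; if only one invests, the investor bears the cost with no ecosystem to reward it.

```python
# Illustrative payoff matrix for two value-chain players deciding whether
# to invest in a joint enabling technology. Payoffs are (player_a, player_b).
PAYOFFS = {
    ("invest", "invest"): (5, 5),    # joint investment: the pie grows for both
    ("invest", "hold"):   (-2, 1),   # lone investor bears cost, no ecosystem forms
    ("hold", "invest"):   (1, -2),
    ("hold", "hold"):     (0, 0),    # nobody moves: the status quo
}

def best_reply(opponent_move: str, player_index: int) -> str:
    """The move that maximises a player's payoff against a fixed opponent move."""
    moves = ["invest", "hold"]
    def payoff(my_move: str) -> int:
        pair = (my_move, opponent_move) if player_index == 0 else (opponent_move, my_move)
        return PAYOFFS[pair][player_index]
    return max(moves, key=payoff)

# Each player's best reply to an opponent who holds back is also to hold back,
# even though (invest, invest) would leave both strictly better off.
print(best_reply("hold", 0))
print(best_reply("hold", 1))
```

Both (invest, invest) and (hold, hold) are stable outcomes in this toy game, which is exactly the "chicken and egg" character of the problem: the good equilibrium is reachable only if the players coordinate.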

And from another essay in the same series, “A celebration of incremental improvement“, from February 2006:

We all know that it’s a perilous task to predict the future of technology.  The mere fact that a technology can be conceived is no guarantee that it will happen.

If I think back thirty-something years to my days as a teenager, I remember being excited to read heady forecasts about a near-future world featuring hypersonic jet airliners, nuclear fusion reactors, manned colonies on the Moon and Mars, extended human lifespans, control over the weather and climate, and widespread usage of environmentally friendly electric cars.  These technology forecasts all turned out, in retrospect, to be embarrassing rather than visionary.  Indeed, history is littered with curious and amusing examples of flawed predictions of the future.  You may well wonder, what’s different about smartphones, and about all the predictions made about them at 3GSM?

With the advantage of hindsight, it’s clear that many technology forecasts have over-emphasised technological possibility and under-estimated the complications of wider system effects.  Just because something is technically possible, it does not mean it will happen, even though technology enthusiasts earnestly cheer it on.  Technology is not enough.  Especially for changes that are complex and demanding, no fewer than six other criteria should be satisfied as well:

  • The technological development has to satisfy a strong human need
  • The development has to be possible at a sufficiently attractive price to individual end users
  • The outcome of the development has to be sufficiently usable, that is, not requiring prolonged learning or disruptive changes in lifestyle
  • There must be a clear evolutionary path whereby the eventual version of the technology can be attained through a series of incremental steps that are, individually, easier to achieve
  • When bottlenecks arise in the development process, sufficient amounts of fresh new thinking must be brought to bear on the central problems – that is, the development process must be both open (to accept new ideas) and commercially attractive (to encourage the generation of new ideas, and, even more important, to encourage companies to continue to search for ways to successfully execute their ideas; after all, execution is the greater part of innovation)…

Interestingly, whereas past forecasts of the future have often over-estimated the development of technology as a whole, they have frequently under-estimated the progress of two trends: computer miniaturisation and mobile communications.  For example, some time around 1997 I was watching a repeat of the 1960s “Thunderbirds” TV puppet show with my son.  The show, about a family of brothers devoted to “international rescue” using high-tech machinery, was set around the turn of the century.  The plot denouement of this particular episode was the shocking existence of a computer so small that it could (wait for it) be packed into a suitcase and transported around the world!  As I watched the show, I took from my pocket my Psion Series 5 PDA and marvelled at it – a real-life example of a widely available computer more powerful yet more miniature than that foreseen in the programme.

As I said, the pace of technological development is far from being black-and-white.  Sometimes it proceeds slower than you expect, and at other times, it can proceed much quicker.

The missing ingredient

With the advantage of even more hindsight, there’s one more element that should be elevated, as frequently making the difference between new products arriving sooner and them arriving later: the degree of practical focus and effective priority placed by the relevant ecosystem on creating these products.  For medibots and other lifespan-enhancing technologies to move from science fiction to science fact will probably require changes in both public opinion and public action.

It’s All About Nanobots

In a famous scene from the movie, The Graduate, Benjamin’s mentor gives him career advice in a single word: “plastics.”  Today, that word might be “software,” or “biotechnology,” but in another couple of decades, the word is likely to be “nanobots.”  Nanobots—blood-cell-sized robots—will provide the means to radically redesign our digestive systems, and, incidentally, just about everything else.

In an intermediate phase, nanobots in the digestive tract and bloodstream will intelligently extract the precise nutrients we need, call for needed additional nutrients and supplements through our personal wireless local area network, and send the rest of the food we eat on its way to be passed through for elimination.

19 November 2009

Progress at the Singularity University

Filed under: Singularity University — David Wood @ 10:30 am

I wonder if you’re going to be teaching at the Singularity University?

That was one of the questions a colleague asked me, when news broke recently that I would be leaving Symbian to explore alternative career options and future scenarios.

I was flattered.  The Singularity University is a recently founded “interdisciplinary university whose mission is to assemble, educate and inspire a cadre of leaders who strive to understand and facilitate the development of exponentially advancing technologies in order to address humanity’s grand challenges“.  As the Singularity University website continues,

With the support of a broad range of leaders in academia, business and government, SU hopes to stimulate groundbreaking, disruptive thinking and solutions aimed at solving some of the planet’s most pressing challenges. SU is based at the NASA Ames campus in Silicon Valley.

There’s already an impressive list of SU faculty and advisors.  I replied to my colleague that there were no plans for me to join this group – though I’ve been keeping an eye, from afar, on progress at the SU.

A few days ago, Business Week reported on the successful completion of the SU’s first 9-day executive program:

Singularity University Gives Execs a View of the Future

The school’s executive program offers participants the chance to learn and discuss how technology is changing, or even disrupting, their industries

In his various roles as a computer programmer, an emergency-medicine physician, and the director of Microsoft Medical Media Lab, Michael Gillam stays well ahead of the advances that are transforming health care. Yet even he can be caught unawares by the pace of technological change.

Gillam was reminded of this recently during a nine-day boot camp aimed at instructing professionals on how robotics, nanotechnology, biotechnology, and other cutting-edge disciplines are affecting industries. Gillam, one of 20 participants in Singularity University’s inaugural program for executives, was listening to futurist Ray Kurzweil. “We will have plenty of computation as we go through the 21st century,” Kurzweil told attendees in the small dining room featuring Spanish Mission-style decor. “That is not so controversial. The more controversial aspect is really, will we have the software?”

Watching the presentation, Gillam realized that the medical industry is woefully unprepared to handle and analyze the vast amounts of data likely to be unleashed in coming years as health records are digitized and physicians are able to track more information. “[I realized] we have to do this quickly,” Gillam says. “You look at those graphs and you feel a strong sense of urgency.”

That’s the kind of conceptual shift Singularity University’s creators hope to provoke. Kurzweil, author of The Singularity Is Near, and X Prize founder Peter Diamandis began Singularity earlier this year. Singularity offers a nine-week summer program for graduate students and the compressed session Gillam attended.

Preparing for Disruptive Innovation

Singularity’s founders and its executive director Salim Ismail, formerly head of Yahoo’s Brickhouse product incubator, want participants to leave with a sense of where opportunities lie—and the dangers of failing to prepare for them. “We want to help them avoid becoming the next Kodak,” Ismail says in reference to the film company that failed to prepare for the advent of digital photography.

The 9-day course looks attractive – but carries a $15k price tag.  Despite this price, it seems there’s already considerable interest in the next run of the course, happening 26th Feb to 7th March:

18 companies, the governments of six countries, and representatives from four U.S. agencies have expressed interest in the next executive session, scheduled to start in February. Some companies have also approached him about creating an in-house version on their own sites, an option Ismail is considering. “The world is completely changing in every domain at a very fast pace,” Ismail says. The companies that are interested in Singularity University “think we have a finger on the pulse of how it’s going to change and how you can navigate that.”

In the meantime, videos of sessions from the executive program have started to appear.  David Orban has posted 8 videos at www.wired.it/video/persone.

ELF09: energy, sustainability, and more

Filed under: Economics, Energy, green, solar energy — David Wood @ 3:12 am

On Tuesday I attended the ninth Business Week “European Leadership Forum”, also known by its Twitter hash tag #elf09.  Business Week are to be congratulated for bringing together a fascinating group of industry leaders.

Here are a few of the points from the course of the day that made me think.

The threat of a new economic crisis

Professor Urs Muller, Managing Director and Chief Economist at BAK Basel Economics, had some worrying thoughts about the state of the global economy:

The good news is that the economic crisis is over.  The bad news is that the conditions responsible for the crisis are still intact, and the next crisis is already brewing.

Like various other speakers and panellists, Professor Muller was concerned about the state of regulation of banking activities.  As we discussed afterwards: “Who would be a regulator?”

It’s hard to identify and agree which elements of banking need new regulation regimes, and which don’t.  However, action by one country alone (for example, by the UK) would fail, since it would merely drive key lines of business elsewhere.  Coordination is needed – but hard!

I asked, how much time do we have?  Do governments have around ten years to reach agreement and take action, or are things more urgent?  Professor Muller replied that if matters were not resolved during 2010, it might already be too late.  Unfortunately, a side effect of the current crisis appearing to be over is that government attention is liable to diminish.  Everyone is breathing a sigh of relief, prematurely.

This ominous discussion reminded me of remarks made by eminent economist and FT columnist John Kay a few days earlier, at a lunchtime meeting at the RSA, “Banking in the Wake of the Crisis: how will confidence be restored?”  That meeting addressed the questions:

  • Have banks and bankers really learned the lessons of the crisis?
  • Are we in danger of falling into a dangerous cycle once more?

John Kay gave the answers No and Yes.

On a more positive note, Professor Muller highlighted the FSB (Financial Stability Board) as a cross-border organisation with a strong potential to address banking system vulnerabilities and to develop and implement strong regulatory, supervisory and other policies in the interest of financial stability.  John Kay’s recommendations – in favour of what is called “Narrow banking” – are contained in a 95-page PDF “The Reform of Banking Regulation” available from his website.

In search of the European Bill Gates

Earlier in the day, INSEAD Professor Soumitra Dutta and serial technology entrepreneur Niklas Zennström led a discussion “INNOVATION – What is the next generation? The next wave?”

Questions posed included why there was no real equivalent, in Europe, to Bill Gates, and which field of technology is likely to prove the most important in the near-term future.

I liked the answer given by Professor Dutta:

The next big wave of hitech innovation is improving the quality of life – including both improving the environment, and improving healthcare.

However, these technologies should not be viewed as alternatives to ICT (Information and Communications Technology).  Instead, these technology areas will succeed by implementing the next wave of ICT.  But instead of just experiencing “the Internet of websites”, we will see “the Internet of things”.

Alternatives to dependency on growth

Running near the surface of much of the discussion during the day was the theme of growth and sustainability.

Opening keynote speaker Stephen Green, Group Chairman of HSBC Holdings Plc, put it as follows:

The biggest change arising from the economic crisis is that companies must stop focussing on short-term value maximisation, and should instead focus on sustainable value maximisation.

Later, from the floor, Professor Dutta posed the simple question,

Is growth good?

I didn’t hear a satisfactory answer.  I did hear the answer that “business needs growth”, but that just skirts the issue.

Interestingly, Mikhail Gorbachev addressed the same issue in his keynote address at the General Assembly conference of the Club of Rome on 26 October 2009, in Amsterdam.  Here’s an extract:

A low-carbon economy is only a part of this new economic model we need so badly today. The model that has been around for the past five decades should be replaced. Of course, it cannot be achieved overnight, but I think we can already discuss reference points and general contours of this new model.

It means, above all, the overcoming of the economy’s ‘addiction’ to super-profits and hyper-consumption, which is not possible unless societies reshape their values. It means shifting of the increasingly larger swaths of the economy to production of ‘social goods’, among which the sustainable environment takes a centre stage.

These social goods also include human health in the broad sense of the word, education, culture, equal opportunities, and social unity, including the elimination of the glaring gaps between the rich and the poor.

Society needs all this not only because ethical imperatives dictate it. The economic benefits to be brought by these “goods” are enormous. However, economists are yet to learn how to measure them. An intellectual breakthrough is needed here. A new model of economy can not be built without it.

Energy and sustainability

The #elf09 gathering split up during the afternoon into a series of six parallel discussions.  Along with around 40 other people, I took part in a roundtable discussion on “Energy and sustainability”.

The discussion was led by Mark Williams, Downstream Director of Royal Dutch Shell, and Sophia Tickell, Executive Director of SustainAbility.

Mark Williams made the following points (I apologise in advance for condensing a much richer set of messages):

  • Almost certainly, the total energy needs of the world will double by 2050;
  • It seems highly unlikely that this vast energy requirement can be met by non-fossil fuels;
  • We need to prepare for a scenario in which at least 70% of the world’s energy needs in 2050 will still be met by fossil fuels;
  • In other words, “we have to come to grips with carbon”;
  • Even as we continue to rely on fossil fuels, we have to “decarbonise” the system;
  • There’s no reasonable alternative to developing and deploying technology for widespread CCS (Carbon Capture and Storage);
  • It’s already possible to store CO2 underground, safely, “for geological amounts of time”;
  • It’s true that there is public concern over the prospect of leaks of stored CO2, and over failures in warning systems to detect leaks, but “governments will have to take the lead in public education”.

Timescales to adopt new sources of energy

Mark Williams made the point that, so far, it has taken any new source of energy at least 25 years to achieve 1% of global energy delivery.  That point should be kept in mind, to avoid anyone becoming “too optimistic about new energy sources”.
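Simple growth arithmetic shows why that 25-year figure is plausible (the starting share and growth rate below are my own invented illustration, not Shell's figures):

```python
import math

# Back-of-envelope: how long does exponential growth take to lift a new
# energy source from a tiny initial share to 1% of global delivery?
def years_to_share(start_share: float, target_share: float, annual_growth: float) -> float:
    """Years needed for start_share to reach target_share at a fixed growth rate."""
    return math.log(target_share / start_share) / math.log(1.0 + annual_growth)

# Even sustained 20%-per-year growth takes roughly a quarter-century to move
# a source from 0.01% of the market to 1%.
print(round(years_to_share(0.0001, 0.01, 0.20)))  # ~25 years
```

The point is that even growth rates that would be spectacular for any single company still translate into decades at the scale of the whole energy system.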

In response, people around the table asked:

  • Would the equivalent of a war-time situation provide a different kind of reaction from both markets and governments?  Do we have to accept that we’ll have the same mindsets as before?

Mark answered:

  • Don’t underestimate “the tyranny of the installed base”;
  • Alternative energy sources have to face very significant issues with storage and transport: “electricity is not easily stored”.

I tried a different tack:

  • Consider the fact that, 25 years ago, there were virtually no mobile phones in use.  Over that timescale, enormous infrastructure has been put in place around the planet, and nowadays more than half of the world’s population use mobile phones.  Countless technical difficulties were solved en route;
  • Key to this build-out has been the fact that many companies were prepared to make huge financial investments, anticipating even larger financial paybacks as people use mobile technology;
  • If energy pricing is set properly (including full consideration for “negative externalities“), won’t companies find sufficient incentives to invest heavily in sustainable energy sources, and develop solutions – roughly similar to what happened for the mobile industry?
  • As a specific example, what about the prospects for gigantic harvesting of solar energy from a scheme such as Desertec (as described here)?

Mark answered:

  • The investment needed for new energy sources (at the scale required) dwarfs the investment even of the mobile telephony industry;
  • New energy sources have too much ground to catch up.  For example, every year, China installs as many additional coal-based energy generators as the entire existing UK installed base of such generators.

Around the table, it seemed generally agreed that we do need to prepare for a scenario in which fossil fuels remain in very substantial use over the decades ahead.

The role of green subsidies

Sophia Tickell raised the question of whether government subsidies could make a significant difference to the speed of transition to renewable energy sources.  South Korea is perhaps the leading example of where a government green stimulus package is having a significant effect.

Attractive beneficiaries for government subsidies (to recap earlier discussion) would presumably include products for electrical storage and CCS.

On the other hand, it’s possible for governments to pick losers as well as winners, with consequent waste of public funds.  Also, government subsidies can in some cases lead to technology failing to develop as efficiently and as innovatively as it ought to.  For this reason, it was suggested that “the environmental movement may have oversold the idea of a Green New Deal”.

Discussion continued:

  • Government should be putting the right framework in place, for market mechanisms to drive the selection and development of desirable products.  This includes identifying and allocating the costs of negative externalities, and establishing a proper “level playing field”;
  • When a desirable momentum is emerging in the marketplace, governments should be getting behind it.

I asked: is it already clear what is this “desirable momentum” that governments should be getting behind?  People around the table started listing options.  It quickly became a long list.  This provoked the following insightful comment from Juan Pablo Crespi, COO Europe of Alkol – to whom I’ll give the final word:

There are too many momentums – but not enough permanentums!

16 November 2009

Essays on unlimited lifespans

Filed under: aging, UKH+ — David Wood @ 1:27 am

In a couple of weekends’ time, on Saturday 28th November, I’ll be chairing a UKH+ meeting,

  • Successes and challenges en route to unlimited human lifespans: Q&A on the Immortality Institute

The main speaker at the event will be Shannon Vyff, Chair of the strikingly-named “Immortality Institute” – which describes its purpose on its website as “advocacy and research for unlimited lifespans”.  I’ve briefly met Shannon a couple of times at conferences, and found her to be articulate and well-informed.  Earlier this year, I read and enjoyed the book Shannon wrote primarily for teenage readers, “21st century kids: a trip from the future to you” (see here for my review).

To prepare myself for the meeting on 28th November, I’ve started reading another book: “The scientific conquest of death: essays on infinite lifespans”.  This book is published by the Immortality Institute and consists of a series of essays by 19 different authors (including a chapter by Shannon).

Here’s an extract from the introduction to the book:

The mission of the Immortality Institute is to conquer the blight of involuntary death. Some would consider this goal as scientifically impossible. Some would regard it as hubris…

Is it possible that scientists – or at least humankind – will “conquer the blight of involuntary death?” If so, to what extent will we succeed? What is in fact possible today, and what do the experts predict for the future? Is such a thing as ‘immortality’ feasible? Moreover, is it desirable? What would it mean from a political, social, ethical and religious perspective?  This book will help to explore these questions…

How would this book be special? After careful consideration, the answer seemed clear: This should be the first truly multidisciplinary approach to the topic. We would discuss not only biological theories of aging, but also biomedical strategies to counter it. Moreover, we would consider alternative approaches such as medical nanotechnology, digitalization of personhood, and cryobiological preservation. But this would only be part of the whole.

We also wanted to tackle some of the questions that are usually left unanswered in the last chapter of scientific books: If we accept that radical life extension is a real scientific possibility, then where does that leave us? Would it create overpopulation, stagnation and perpetual boredom? How would it change our society, our culture, our values and our spirituality? If science allows us to vastly extend our life span, should we do so?

I plan to write another blogpost once I’m further through the book.

In the meantime, I’d like to share a comment I made a few months back on the online letter pages of The Times.  I was writing in response to a leader article “Live For Ever: The promise of more and more life will bring us all problems”, and in particular, to answer a question posed to me by another correspondent.  Here’s my reply:

To answer your question, what do I personally see as the benefits of extending healthy human lifespan?

In short, life is good. Healthy, vibrant life is particularly good. While I have so many things I still look forward to doing, I don’t want my life to end.

For example, I’d like to be able to share in the wonder and excitement of the scientific, engineering, artistic, and cultural improvements all throughout the present century – especially the development of “friendly super AI”. I’d like to have the time to explore many more places in the world, read many more books, learn much more mathematics, play golf on all the fine courses people talk about, and develop and deepen relations with wonderful people all over the world. I’d like to see and help my grandchildren to grow up, and their grandchildren to grow up.

Extending healthy lifespan will also have the benefit that the living wisdom and creativity of our elders will continue to be available to guide the rest of us through challenges and growth, rather than being extinguished.

In summary, I want to be alive and to actively participate when humankind moves to a higher level of consciousness, opportunity, accomplishment, wisdom, and civilisation – when we can (at last) systematically address the dreadful flaws that have been holding so many people back from their true potential.

I believe that most people have similar aspirations, but they learn to suppress them, out of a view that they are impractical. But science and engineering are on the point of making these aspirations practical, and we need new thinking to guide us through this grand, newly feasible opportunity.

I expect to revisit these topics during the meeting on 28th November.  I’m looking to gather a series of key questions that will highlight the core issues.

12 November 2009

Can Open Innovation help to save the world?

Filed under: climate change, Open Innovation — David Wood @ 1:19 am

One of the highlights at the FT Innovate 2009 conference in London this week was the presentation by UC Berkeley adjunct professor Henry Chesbrough on the topic “Open Innovation: Can it save the world?”

Dr Chesbrough is Executive Director of the Center for Open Innovation at the Haas School of Business at UC Berkeley, and inaugurated the whole field of research into Open Innovation with his 2003 book, “Open Innovation: The New Imperative for Creating and Profiting from Technology”.

Today’s talk was divided into two parts:

  1. A recap of previously published work – providing a whistlestop introduction to the concepts of Open Innovation;
  2. A proposal that the ideas of Open Innovation could usefully be applied in the context of log-jammed discussions over technologies to address climate change and renewable energy sources.

The background to Open Innovation was research that Henry Chesbrough did into research projects within Xerox PARC.  All companies need to make regular “tollgate review” decisions about which innovative research projects to cancel, and which to progress.  These decisions can go wrong in two different ways:

  • A “type one error” is when a project is continued for too long.  It looks promising, but it eventually fails to deliver.  In the process, it consumes budget, personnel, and management attention, which could (instead) have been applied to other projects;
  • A “type two error” is when a project that actually had the capability to generate lots of value is cancelled.

Any process that decreases the chance of type one errors is likely, at the same time, to increase the chance of type two errors – and vice versa.  That’s a fact of life.  No company can have perfect foresight – given that markets change, technologies change, and projects change, all in unpredictable ways.
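The trade-off mirrors the familiar statistical one: wherever a review board sets its bar, lowering one kind of error raises the other. A minimal sketch (the project scores and thresholds are entirely invented):

```python
# Projects get a noisy "promise score" at tollgate review; the board continues
# anything scoring above a threshold. True winners tend to score higher, but the
# distributions overlap, so no threshold eliminates both error types at once.
winners = [0.9, 0.8, 0.6, 0.4]   # scores of projects that would have succeeded
duds    = [0.7, 0.5, 0.3, 0.2]   # scores of projects that would have failed

def error_rates(threshold: float) -> tuple:
    """(type one, type two): fraction of duds continued, fraction of winners cancelled."""
    type_one = sum(1 for s in duds if s >= threshold) / len(duds)
    type_two = sum(1 for s in winners if s < threshold) / len(winners)
    return type_one, type_two

print(error_rates(0.35))  # lenient bar: more duds continued, no winners cancelled
print(error_rates(0.75))  # strict bar: no duds continued, more winners cancelled
```

Because the score distributions overlap, sliding the threshold simply trades one error for the other, which is why Open Innovation looks for ways to recover value from type two errors rather than trying to prevent them all.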

Chesbrough noted that cancellation is often surprisingly ineffective for innovation projects.  A company may withdraw its formal support, but the project can continue nevertheless.  For example, people inside the company who believe strongly in the project may work on that project outside of formal work hours, and may even cease employment at the company, in order to continue working on the idea in a new startup.

What happened to the projects that were shut down by the company (Xerox, in this case), but which had at least a temporary external lease of life?  The majority of these projects failed – providing an element of vindication for the company’s decision-making process.  But a number turned into spectacular successes, generating more stock market value in new companies outside Xerox than the value of Xerox itself.  (These startups include 3Com, VLSI, and Adobe.)  This again raises the question: in retrospect, can a parent company (Xerox, in this case) improve its decision-making and other innovation-review processes so as to reduce the impact of these type two errors?

The answer given by the theory of Open Innovation is that companies cannot and should not strive to avoid all such type two errors.  It is inevitable that some good ideas will be unable to flourish inside the company.  However, a change in mindset is required.  This new mindset makes it more likely that the company can still benefit from the fruit of the idea, even though development of the idea passes outside the company.  The new mindset (“Open Innovation”) can be contrasted as follows with a “Closed Innovation” mindset:

The “closed innovation” mindset:

  1. The smart people in our field work for us
  2. To profit from R&D we must discover it, develop it, and ship it ourselves
  3. If we discover it ourselves, we will get to the market first
  4. The company that gets an innovation to market first will win
  5. If we create the most and the best ideas in the industry, we will win
  6. We should control our IP, so that our competitors don’t profit from our ideas.

The “open innovation” mindset:

  1. Not all the smart people work for us. We need to work with smart people inside and outside our company
  2. External R&D can create significant value; internal R&D is needed to claim some portion of that value
  3. We don’t have to originate the research to profit from it
  4. Building a better business model is better than getting to market first
  5. If we make the best use of internal and external ideas, we will win
  6. We should profit from others’ use of our IP, and we should buy others’ IP whenever it advances our own business model.

A couple of diagrams (not reproduced here) help to highlight the contrast between the two mindsets.

To be successful, the new mindset requires different skills from before – particularly skills in ecosystem management and IP management.

The really interesting question addressed by Chesbrough in today’s presentation is as follows: can these new skills help address issues of failed innovation management in the context of ideas for addressing runaway climate change, and the adoption of sustainable energy sources?

Chesbrough mentioned the GreenXchange supported by Science Commons.  To quote from their website:

Patent Strategies for Promoting Open Innovation

Nike and Creative Commons are calling upon other companies and stakeholders to bring the network efficiencies of open innovation to solving the problems of sustainability. GreenXchange will seek to bring together stakeholders in working groups to discuss strategies for advancing the commons by exploring ideas such as using patent pools, research non-assertions, and using technologies that support networked and community-based knowledge transfer and sharing.

Networks work best with a standardized and simple set of protocols. The Internet is one example of a network based on the TCP/IP Protocol. The Creative Commons community is a network based on users of Creative Commons licenses who share content under these standard transfer regimes. For the proposed network of sustainability innovation, the core protocols relate to the freedom to experiment and conduct research, the standardization of transfer of ideas, and the use of technology to monitor and quantify downstream impact.

Building a Better Innovation Ecosystem

Nike and Creative Commons share a vision of creating an open innovation platform that promotes the creation and adoption of technologies that have the potential to solve important global or industry-wide challenges. Open innovation is characterized by leveraging knowledge shared across many participants in a market, including companies, individuals, suppliers, distributors, academia, and many others to solve common problems and to assist internal innovation. Open innovation is an investment in the capacity of the market to support a firm’s ability to innovate and implement revolutionary technologies. It enables the development of new business models that leverage the creative output made possible by open collaboration to create new value and products. Open innovation is also a key component of engaging the resources and capabilities of large communities in finding ways to create sustainability, such as developing new ways to promote efficient resource use, implementing green manufacturing techniques, and delivery of products to consumers with lower impact to the environment.

Traditional collaboration is face-to-face. However, increasingly, modern collaboration, powered by the Web, is distributed. Examples of distributed collaboration include the Google search, the Wikipedia article, and the eBay auction, all of which bring together disparate and distributed sources of information into a collaborative network mediated by common rules. Network mediated collaboration is based on small transactions, built upon standard technical and policy platforms, that enable low transaction costs both at a technical and legal level. By doing so, network mediated collaboration has a democratizing impact and therefore can engage mass audiences of users, contributors, and mediators, in ways that would otherwise be impossible. Likewise, open innovation is based on the mediated network collaboration concept: by making it easier to share documents, music, software, data, ideas, discoveries, and other kinds of knowledge, it has the potential to engage mass communities in the creative process. That brings with it innovation potential that no single company can match through internally funded R&D…

The particular problem that Chesbrough mentioned as likely to obstruct progress in ongoing talks about measures to avoid runaway climate change is the following one.  Companies are, understandably, trying to develop new technologies that could help with processes such as carbon capture and storage, or moving to new sources of energy.  Being accountable to shareholders, these companies are driven to gain maximal financial return from the intellectual property they invest into these technologies.  With such a mindset, there is a risk that these companies will take decisions that result in the rough equivalent of the type two errors mentioned earlier: projects are stopped, because companies don’t see how to gain adequate financial return from them.

One response to this dilemma is to decry the financial motivation.  But another response is to seek a more enlightened operating model – one which will deliver both financial returns and highly worthwhile products.  This deserves more thought!

Footnote: The “Open Innovation blog”, by Joel West, one of Henry Chesbrough’s co-authors, is a mine of useful ideas about Open Innovation.

9 November 2009

Sustainable energy without the hot air

Filed under: books, Energy, Nuclear energy, solar energy — David Wood @ 1:00 am

Over the last ten days, I’ve been reading “Sustainable energy – without the hot air“.

It’s no surprise that the reviews for it on Amazon.com are, at time of writing, 95% 5-star, and only 5% 4-star.  In many ways, this is an exemplary book:

  1. The book is made up of easily digestible chunks;
  2. Each chunk contains numbers.  Anyone who disagrees with the conclusions of the book is therefore invited to identify the numbers that they disagree with;
  3. In each case, the author explains where the various numbers come from;
  4. The author makes the numbers seem plausible, but also provides copious references for people to investigate by themselves;
  5. Mathematical formulae are provided too – but separated into appendices at the end of the book, to avoid detracting from the main flow of the argument;
  6. The author punctures a lot of what might be called “hot air” – which he also calls “twaddle”: wishful thinking about how sustainable energy might be achieved;
  7. There are many “mythconceptions” sections where various widespread notions are gently but firmly dismantled;
  8. The text is accompanied by a set of very clear diagrams;
  9. The author sets out a range of possible solutions, rather than identifying a single way forwards;
  10. The author makes it clear that none of the solutions are going to be easy, and each will require substantial (“country-sized”) changes.

Since publishing the book, the author – David JC MacKay, physics professor at Cambridge University – has been appointed Chief Scientific Advisor at the UK’s Department of Energy and Climate Change – an appointment that took effect on 1st October 2009.

The author says he seeks to avoid being labelled as “pro-wind” or “pro-nuclear”, declaring instead that he wishes to be known as “pro-arithmetic”.  Whatever solutions are contemplated, he says, must meet the test of adding up.  He disagrees with those who say that “if everyone does a little, it will add up to a lot”.  Instead, he says, if everyone does a little, it will add up to a little.  That’s because of the scale of the total amount of energy used by an entire country.  Actions need to be effective:

Here are two simple individual actions. One is useless, one is very effective.

Turning phone chargers off when they are not in use is a feeble gesture, like bailing the Titanic with a teaspoon.

The widespread inclusion of “switching off phone chargers” in lists of “10 things you can do” is a bad thing, because it distracts attention from more effective actions that people could be taking.

In contrast, turning the thermostat down (or the air-conditioning in hot climates) is the single most effective energy-saving technology available to a typical person.

Every degree you turn it down will reduce your heating costs by 10%; and, speaking of Britain at least, heating is likely to be the biggest form of energy consumption in most buildings.
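MacKay’s contrast can be checked with a few lines of arithmetic.  This is a sketch with assumed figures: the charger’s 0.5 W idle draw and the 12,000 kWh/year household heating demand are illustrative numbers of mine, not taken from the book; only the “10% per degree” figure is MacKay’s.

```python
# Back-of-envelope comparison of the two actions.
CHARGER_STANDBY_W = 0.5          # assumed idle draw of a plugged-in charger
HEATING_KWH_PER_YEAR = 12_000    # assumed annual heating energy for one home
SAVING_PER_DEGREE = 0.10         # MacKay's ~10% saving per degree of thermostat

# Energy saved in a year by unplugging the charger whenever it is idle
charger_kwh_per_year = CHARGER_STANDBY_W * 24 * 365 / 1000

# Energy saved in a year by turning the thermostat down one degree
thermostat_kwh_per_year = HEATING_KWH_PER_YEAR * SAVING_PER_DEGREE

print(f"Unplugging the charger saves ~{charger_kwh_per_year:.1f} kWh/year")
print(f"Thermostat down 1 degree saves ~{thermostat_kwh_per_year:.0f} kWh/year")
print(f"Ratio: ~{thermostat_kwh_per_year / charger_kwh_per_year:.0f}x")
```

On these assumptions the thermostat action saves a few hundred times more energy than the charger gesture – “bailing the Titanic with a teaspoon” indeed.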

The entire book is available free online.  The online summaries (eg pages 238-239) reiterate the following point:

We have a clear conclusion: the non-solar renewables may be “huge,” but they are not huge enough. To complete a plan that adds up, we must rely on one or more forms of solar power. Or use nuclear power. Or both.

Any viable solar power solutions need to consider collecting energy from sunnier climates, and then transporting huge amounts of that energy to sun-deprived countries like the UK.  From page 178:

…focusing on Europe, what area is required in the North Sahara to supply everyone in Europe and North Africa with an average European’s power consumption? Taking the population of Europe and North Africa to be 1 billion, the area required drops to 340 000 km2, which corresponds to a square 600 km by 600 km. This area is equal to one Germany, to 1.4 United Kingdoms, or to 16 Waleses.

The UK’s share of this 16-Wales area would be one Wales: a 145 km by 145 km square in the Sahara would provide all the UK’s current primary energy consumption.
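The Sahara arithmetic can be reproduced in a few lines.  The inputs are assumptions drawn from elsewhere in MacKay’s book rather than from the excerpt above: an average European consumption of 125 kWh per day per person, and concentrating solar power delivering roughly 15 W per square metre of desert.

```python
import math

KWH_PER_DAY_PER_PERSON = 125   # assumed average European consumption (MacKay)
SOLAR_W_PER_M2 = 15            # assumed desert concentrating-solar yield (MacKay)

# Average continuous power demand per person, in watts (~5.2 kW)
power_per_person_w = KWH_PER_DAY_PER_PERSON * 1000 / 24

def solar_area_km2(population):
    """Desert area needed to supply `population` at European consumption."""
    area_m2 = population * power_per_person_w / SOLAR_W_PER_M2
    return area_m2 / 1e6

europe_area = solar_area_km2(1e9)    # Europe plus North Africa
uk_area = solar_area_km2(60e6)       # UK alone

print(f"1 billion people: {europe_area:,.0f} km2")
print(f"UK (60 million):  {uk_area:,.0f} km2, "
      f"a square {math.sqrt(uk_area):.0f} km on a side")
```

The result comes out close to the book’s figures: about 350 000 km2 for a billion people, and a square roughly 145 km on a side for the UK.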

Backing up this idea, David MacKay speaks favourably about the Desertec concept.  From the Desertec website:

In the upcoming decades, several global developments will create new challenges for mankind. We will be confronted with problems and obstacles such as climate change, population growth beyond earth’s capacity, and an increase in demand for energy and water caused by a strive for prosperity and expansion.

The DESERTEC Concept provides a way to solve these challenges…

The DESERTEC Concept describes the perspective of a sustainable supply of electricity for Europe (EU), the Middle East (ME) and North Africa (NA) up to the year 2050. It shows that a transition to competitive, secure and compatible supply is possible using renewable energy sources and efficiency gains, and fossil fuels as backup for balancing power.

A close cooperation between EU and MENA for market introduction of renewable energy and interconnection of electricity grids by high-voltage direct-current transmission are keys for economic and physical survival of the whole region. However, the necessary measures will take at least two decades to become effective. Therefore, adequate policy and economic frameworks for their realization must be introduced immediately. The role of sustainable energy to secure freshwater supplies based on seawater desalination is also addressed.

David MacKay’s chapter on nuclear energy is also an eye-opener.  It ably addresses the objections that have been made against nuclear energy.  Among the positive messages in this chapter:

…the nuclear energy available per atom is roughly one million times bigger than the chemical energy per atom of typical fuels. This means that the amounts of fuel and waste that must be dealt with at a nuclear reactor can be up to one million times smaller than the amounts of fuel and waste at an equivalent fossil-fuel power station.

…I conclude that ocean extraction of uranium would turn today’s once-through reactors into a “sustainable” option

…Japanese researchers have found a technique for extracting uranium from seawater at a cost of $100–300 per kilogram of uranium, in comparison with a current cost of about $20/kg for uranium from ore. Because uranium contains so much more energy per ton than traditional fuels, this 5-fold or 15-fold increase in the cost of uranium would have little effect on the cost of nuclear power: nuclear power’s price is dominated by the cost of power-station construction and decommissioning, not by the cost of the fuel. Even a price of $300/kg would increase the cost of nuclear energy by only about 0.3 p per kWh. The expense of uranium extraction could be reduced by combining it with another use of seawater – for example, power-station cooling.

…we must not let ourselves be swept off our feet in horror at the danger of nuclear power. Nuclear power is not infinitely dangerous. It’s just dangerous, much as coal mines, petrol repositories, fossil-fuel burning and wind turbines are dangerous. Even if we have no guarantee against nuclear accidents in the future, I think the right way to assess nuclear is to compare it objectively with other sources of power. Coal power stations, for example, expose the public to nuclear radiation, because coal ash typically contains uranium. Indeed, according to a paper published in the journal Science, people in America living near coal-fired power stations are exposed to higher radiation doses than those living near nuclear power plants.

…Spurred on by worries about nuclear accidents, engineers have devised many new reactors with improved safety features. The GT-MHR power plant, for example, is claimed to be inherently safe; and, moreover it has a higher efficiency of conversion of heat to electricity than conventional nuclear plants

…the volumes are so small, I feel nuclear waste is only a minor worry, compared with all the other forms of waste we are inflicting on future generations. At 25 ml per year, a lifetime’s worth of high-level nuclear waste would amount to less than 2 litres. Even when we multiply by 60 million people, the lifetime volume of nuclear waste doesn’t sound unmanageable: 105 000 cubic metres. That’s the same volume as 35 olympic swimming pools. If this waste were put in a layer one metre deep, it would occupy just one tenth of a square kilometre.

There are already plenty of places that are off-limits to humans. I may not trespass in your garden. Nor should you in mine. We are neither of us welcome in Balmoral. “Keep out” signs are everywhere. Downing Street, Heathrow airport, military facilities, disused mines – they’re all off limits. Is it impossible to imagine making another one-square-kilometre spot – perhaps deep underground – off limits for 1000 years?

…the assertion that “civil nuclear construction on this scale is a pipe dream, and completely unfeasible” is poppycock. Yes, it’s a big construction rate, but it’s in the same ballpark as historical construction rates.
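The waste-volume arithmetic quoted above is easy to verify.  Two of the inputs are my assumptions rather than figures stated in the excerpt: a 70-year lifetime, and an Olympic swimming pool volume of about 3,000 cubic metres.

```python
WASTE_LITRES_PER_PERSON_PER_YEAR = 0.025   # 25 ml per year, from the quote
LIFETIME_YEARS = 70                        # assumed lifetime
UK_POPULATION = 60e6                       # 60 million, from the quote
POOL_M3 = 3_000                            # assumed Olympic pool volume

# Lifetime high-level waste per person, in litres ("less than 2 litres")
lifetime_litres = WASTE_LITRES_PER_PERSON_PER_YEAR * LIFETIME_YEARS

# National total, converting litres to cubic metres (1 m^3 = 1000 litres)
total_m3 = lifetime_litres * UK_POPULATION / 1000

print(f"Per person, lifetime: {lifetime_litres:.2f} litres")
print(f"UK total: {total_m3:,.0f} m^3")
print(f"Olympic pools: {total_m3 / POOL_M3:.0f}")
print(f"Area at 1 m deep: {total_m3 / 1e6:.2f} km^2")
```

The numbers check out: 1.75 litres per person, 105,000 cubic metres in total, 35 pools, and about a tenth of a square kilometre at one metre deep.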

So far, I haven’t found any significant criticism of the points made in this book.  It’s highly recommended.  You may also enjoy David MacKay’s blog.

Footnote: for further reading on nuclear energy, take a look at “10 reasons to support nuclear power“; and for more about Desertec, see their FAQ.
