dw2

2 January 2010

Vital for futurists: hacking the earth

Filed under: books, climate change, futurist, geoengineering — David Wood @ 1:16 am

Here’s a tip, for anyone seriously interested in the big issues that will dominate discussion in the next 5-10 years.  You should become familiar (if you’re not already) with the work of Jamais Cascio.  Jamais is someone who consistently has deep, interesting, and challenging things to say about the large changes that are likely to sweep over the planet in the decades ahead.

In 2003, Jamais co-founded WorldChanging.com, a website dedicated to finding and calling attention to models, tools and ideas for building a “bright green” future. In March 2006, he started Open the Future.

One topic that Jamais has often addressed is geoengineering – sometimes also called “climate engineering”, “planetary engineering”, or “terraforming”.  Geoengineering covers a range of large-scale projects that could, conceivably, be deployed to head off the effects of runaway global warming.  Examples include launching large mirrors into space to reflect sunlight away from the earth, injecting sulphate particles into the stratosphere, brightening clouds or deserts to increase their reflectivity, and extracting greenhouse gases from the atmosphere.  It’s a thoroughly controversial topic.  But Jamais treads skilfully and thoughtfully through the controversies.

A collection of essays by Jamais on the topic of geoengineering is available in book format, under the title “Hacking the earth: understanding the consequences of geoengineering”.  It’s a slim volume, with just over 100 pages, but it packs lots of big thoughts.  While reading, I found myself nodding in agreement throughout the book.

At present, this book is only available from Lulu.com.  As Jamais says, the book is, for him:

an experiment in self-publishing…

… in recent weeks various friends have tried out – and given high marks to – web-based self-publishing outfits like Lulu.com… I thought I’d give this method a shot.

The material in the book is derived from articles published online at Open the Future and elsewhere.  Some of the big themes are as follows (the following bullet points are all excerpts from Jamais’ writing):

  • Feedback effects ranging from methane released from melting permafrost to carbon emissions from decaying remnants of forests devoured by pine beetles risk boosting greenhouse gases faster than natural compensation mechanisms can handle.  The accumulation of non-linear drivers can lead to “tipping point” events causing functionally irreversible changes to geophysical systems (such as massive sea-level increases).  Some of these can have feedback effects of their own, such as the elimination of ice caps reducing global albedo, thereby accelerating heating.
  • None of the bright green solutions — ultra-efficient buildings and vehicles, top-to-bottom urban redesigns, local foods, renewable energy systems, and the like — will do anything to reduce the anthropogenic greenhouse gases that have already been emitted. The best result we get is stabilizing at an already high greenhouse gas level. And because of ocean thermal inertia and other big, slow climate effects, the Earth will continue to warm for a couple of decades even after we stop all greenhouse gas emissions. Transforming our civilization into a bright green wonderland won’t be easy, and under even the most optimistic estimates will take at least a decade; by the time we finally stop putting out additional greenhouse gases, we could well have gone past a point where globally disastrous results are inevitable. In fact, given the complexity of climate feedback systems, we may already have passed such a tipping point, even if we stopped all emissions today.
  • Geoengineering, should it be tried, would not be a replacement for making the economic, social, and technological changes needed to eliminate anthropogenic greenhouse gases. It would only be a way of giving us more time to make those changes. It’s not an either-or situation; geo is a last-ditch prop for making sure that we can do what needs to be done.
  • We don’t know enough about how the various geoengineering proposals would play out to make a persuasive case for trying any of them.  There needs to be far more study before making any even moderate-scale experimental effort. This is not something to try today. The most important task for current geoengineering research is to identify the approaches that might look attractive at first, but have devastating results — we need to know what we should avoid even if desperate.
  • Like it or not, we’ve entered the era of intentional geoengineering. The people who believe that (re)terraforming is a bad idea need to be part of the discussion about specific proposals, not simply sources of blanket condemnations. We need their insights and intelligence. The best way to make that happen, the best way to make sure that any terraforming effort leads to a global benefit, not harm, is to open the process of studying and developing geotechnological tools.
  • Geoengineering presents more than just an environmental question. It also presents a geopolitical dilemma. With processes of this magnitude and degree of uncertainty, countries would inevitably argue over control, costs, and liability for mistakes. More troubling, however, is the possibility that states may decide to use geoengineering efforts and technologies as weapons. Two factors make this a danger we dismiss at our peril: the unequal impact of climate changes, and the ability of small states and even nonstate actors to attempt geoengineering.
  • It is possible that, should the international community refrain from geoengineering strategies, one or more smaller, non-hegemonic, actors could undertake geoengineering projects of their own. This could be out of a legitimate fear that prevention and mitigation strategies would be insufficient, out of a disagreement with the consensus over geoengineering safety or results, or—most troublingly—out of a desire to use geoengineering tools to achieve a relative increase in competitive power over adversaries.

I particularly liked Jamais’ suggestion of a “Reversibility Principle” as an alternative to the “Precautionary Principle” and “Proactionary Principle” that have previously been suggested as guidelines for deciding which actions to take, regarding the application of technology.

Geoengineering is, by its nature, a huge topic.  The “Technology Review” magazine contains a substantial analysis entitled “The Geoengineering Gambit” in its Jan-Feb 2010 edition. And the authors of Freakonomics, Stephen J Dubner and Steven Levitt, included a chapter on geoengineering in their follow-up book, “Superfreakonomics”.  As it happens, there seems to be wide consensus that the Freakonomics team were considerably too hasty in their analysis – see for example the Guardian article “Why Superfreakonomics’ authors are wrong on geo-engineering”.  But the fact that there were mistakes in that analysis doesn’t mean the topic itself should fade from view.

Far from it: I’m sure we’re going to be hearing more and more about geoengineering.  It deserves our attention!

24 December 2009

Predictions for the decade ahead

Before highlighting some likely key trends for the decade ahead – the 2010’s – let’s pause a moment to review some of the most important developments of the last ten years.

  • Technologically, the 00’s were characterised by huge steps forward with social computing (“web 2.0”) and with mobile computing (smartphones and more);
  • Geopolitically, the biggest news has been the ascent of China to becoming the world’s #2 superpower;
  • Socioeconomically, the world is reaching a deeper realisation that current patterns of consumption cannot be sustained (without major changes), and that the foundations of free-market economics are more fragile than was previously widely thought to be the case;
  • Culturally and ideologically, the threat of militant Jihad, potentially linked to dreadful weaponry, has given the world plenty to think about.

Looking ahead, the 10’s will very probably see the following major developments:

  • Nanotechnology will progress in leaps and bounds, enabling increasingly systematic control, assembling, and reprogramming of matter at the molecular level;
  • In parallel, AI (artificial intelligence) will rapidly become smarter and more pervasive, and will be manifest in increasingly intelligent robots, electronic guides, search assistants, navigators, drivers, negotiators, translators, and so on.

We can say, therefore, that the 2010’s will be the decade of nanotechnology and AI.

We’ll see the following applications of nanotechnology and AI:

  • Energy harvesting, storage, and distribution (including via smart grids) will be revolutionised;
  • Reliance on existing means of oil production will diminish, being replaced by greener energy sources, such as next-generation solar power;
  • Synthetic biology will become increasingly commonplace – newly designed living cells and organisms that have been crafted to address human, social, and environmental need;
  • Medicine will provide more and more new treatments that are less invasive and more comprehensive than before, using compounds closely tailored to the specific biological needs of individual patients;
  • Software-as-a-service, provided via next-generation cloud computing, will become more and more powerful;
  • Experience of virtual worlds – for the purposes of commerce, education, entertainment, and self-realisation – will become extraordinarily rich and stimulating;
  • Individuals who can make wise use of these technological developments will end up significantly cognitively enhanced.

In the world of politics, we’ll see more leaders who combine toughness with openness and a collaborative spirit.  The awkward international institutions from the 00’s will either reform themselves, or will be superseded by newer, more informal, more robust and effective institutions that draw a lot of inspiration from emerging best practice in open source and social networking.

But perhaps the most important change is one I haven’t mentioned yet.  It’s a growing change of attitude, towards the question of the role of technology in enabling fuller human potential.

Instead of people decrying “technical fixes” and “loss of nature”, we’ll increasingly hear widespread praise for what can be accomplished by thoughtful development and deployment of technology.  As technology is seen to be able to provide unprecedented levels of health, vitality, creativity, longevity, autonomy, and all-round experience, society will demand a reprioritisation of resource allocation.  Previously sacrosanct cultural norms will fall under intense scrutiny, and many age-old beliefs and practices will fade away.  Young and old alike will move to embrace these more positive and constructive attitudes towards technology, human progress, and a radical reconsideration of how human potential can be fulfilled.

By the way, there’s a name for this mental attitude.  It’s “transhumanism”, often abbreviated H+.

My conclusion, therefore, is that the 2010’s will be the decade of nanotechnology, AI, and H+.

As for the question of which countries (or regions) will play the role of superpowers in 2020: it’s too early to say.

Footnote: Of course, there are major possible risks from the deployment of nanotechnology and AI, as well as major possible benefits.  Discussion of how to realise the benefits without falling foul of the risks will be a major feature of public discourse in the decade ahead.

12 November 2009

Can Open Innovation help to save the world?

Filed under: climate change, Open Innovation — David Wood @ 1:19 am

One of the highlights at the FT Innovate 2009 conference in London this week was the presentation by UC Berkeley adjunct professor Henry Chesbrough on the topic “Open Innovation: Can it save the world?”

Dr Chesbrough is Executive Director of the Center for Open Innovation at the Haas School of Business at UC Berkeley, and inaugurated the whole field of research into Open Innovation with his 2003 book, “Open Innovation: The New Imperative for Creating And Profiting from Technology”.

Today’s talk was divided into two parts:

  1. A recap of previously published work – providing a whistlestop introduction to the concepts of Open Innovation;
  2. A proposal that the ideas of Open Innovation could usefully be applied in the context of log-jammed discussions over technologies to address climate change and renewable energy sources.

The background to Open Innovation was research that Henry Chesbrough did into research projects within Xerox PARC.  All companies need to make regular “tollgate review” decisions about which innovative research projects to cancel, and which to progress.  These decisions can go wrong in two different ways:

  • A “type one error” is when a project is continued for too long.  It looks promising, but it eventually fails to deliver.  In the process, it consumes budget, personnel, and management attention, which could (instead) have been applied to other projects;
  • A “type two error” is when a project is cancelled, that actually had the capability to generate lots of value.

Any process that decreases the chance of type one errors is likely, at the same time, to increase the chance of type two errors – and vice versa.  That’s a fact of life.  No company can have perfect foresight – given that markets change, technologies change, and projects change, all in unpredictable ways.
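
The tradeoff described above can be illustrated with a small simulation (a hypothetical sketch of my own, not from Chesbrough’s work): suppose each project has a hidden true value, a tollgate review observes that value plus noise, and projects are continued only when their observed score clears a threshold. Raising the bar funds fewer duds but cancels more winners, and lowering it does the reverse.

```python
import random

random.seed(42)

# Hypothetical model: hidden true value per project, plus review noise.
N = 10_000
true_values = [random.gauss(0, 1) for _ in range(N)]
observed = [(t, t + random.gauss(0, 1)) for t in true_values]

def error_counts(threshold):
    """Continue a project only when its observed score clears the threshold."""
    type_one = sum(1 for t, o in observed if o >= threshold and t < 0)  # funded duds
    type_two = sum(1 for t, o in observed if o < threshold and t >= 0)  # cancelled winners
    return type_one, type_two

strict = error_counts(1.0)    # a high bar for continuing projects
lenient = error_counts(-1.0)  # a low bar

# The strict bar produces fewer type one errors but more type two errors.
print("strict (type1, type2):", strict)
print("lenient (type1, type2):", lenient)
```

No choice of threshold eliminates both error counts at once; it only moves the balance between them, which is the “fact of life” the post refers to.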

Chesbrough noted that cancellation is often surprisingly ineffective for innovation projects.  A company may withdraw its formal support, but the project can continue nevertheless.  For example, people inside the company who believe strongly in the project may work on that project outside of formal work hours, and may even cease employment at the company, in order to continue working on the idea in a new startup.

What happened to the projects that were shut down by the company (Xerox, in this case), but which had at least a temporary external lease of life?  The majority of these projects failed – providing an element of vindication for the company’s decision-making process.  But a number turned into spectacular successes, generating more stock market value in new companies outside Xerox than the value of Xerox itself.  (These startups include 3Com, VLSI, and Adobe.)  This again raises the question: in retrospect, can a parent company (Xerox, in this case) improve its decision-making and other innovation-review processes so as to reduce the impact of these type two errors?

The answer given by the theory of Open Innovation is that companies cannot and should not strive to avoid all such type two errors.  It is inevitable that some good ideas will be unable to flourish inside the company.  However, a change in mindset is required.  This new mindset makes it more likely that the company can still benefit from the fruit of the idea, even though development of the idea passes outside the company.  The new mindset (“Open Innovation”) can be contrasted as follows with a “Closed Innovation” mindset:

The “closed innovation” mindset:

  1. The smart people in our field work for us
  2. To profit from R&D we must discover it, develop it, and ship it ourselves
  3. If we discover it ourselves, we will get to the market first
  4. The company that gets an innovation to market first will win
  5. If we create the most and the best ideas in the industry, we will win
  6. We should control our IP, so that our competitors don’t profit from our ideas.

The “open innovation” mindset:

  1. Not all the smart people work for us. We need to work with smart people inside and outside our company
  2. External R&D can create significant value; internal R&D is needed to claim some portion of that value
  3. We don’t have to originate the research to profit from it
  4. Building a better business model is better than getting to market first
  5. If we make the best use of internal and external ideas, we will win
  6. We should profit from others’ use of our IP, and we should buy others’ IP whenever it advances our own business model.

A couple of diagrams help to highlight the contrast:

To be successful, the new mindset requires different skills from before – particularly skills in ecosystem management and IP management.

The really interesting question addressed by Chesbrough in today’s presentation is as follows: can these new skills help address issues of failed innovation management in the context of ideas for addressing runaway climate change, and the adoption of sustainable energy sources?

Chesbrough mentioned the GreenXchange supported by Science Commons.  To quote from their website:

Patent Strategies for Promoting Open Innovation

Nike and Creative Commons are calling upon other companies and stakeholders to bring the network efficiencies of open innovation to solving the problems of sustainability. GreenXchange will seek to bring together stakeholders in working groups to discuss strategies for advancing the commons by exploring ideas such as using patent pools, research non-assertions, and using technologies that support networked and community-based knowledge transfer and sharing.

Networks work best with a standardized and simple set of protocols. The Internet is one example of a network based on the TCP/IP Protocol. The Creative Commons community is a network based on users of Creative Commons licenses who share content under these standard transfer regimes. For the proposed network of sustainability innovation, the core protocols relate to the freedom to experiment and conduct research, the standardization of transfer of ideas, and the use of technology to monitor and quantify downstream impact.

Building a Better Innovation Ecosystem

Nike and Creative Commons share a vision of creating an open innovation platform that promotes the creation and adoption of technologies that have the potential to solve important global or industry-wide challenges. Open innovation is characterized by leveraging knowledge shared across many participants in a market, including companies, individuals, suppliers, distributors, academia, and many others to solve common problems and to assist internal innovation. Open innovation is an investment in the capacity of the market to support a firm’s ability to innovate and implement revolutionary technologies. It enables the development of new business models that leverage the creative output made possible by open collaboration to create new value and products. Open innovation is also a key component of engaging the resources and capabilities of large communities in finding ways to create sustainability, such as developing new ways to promote efficient resource use, implementing green manufacturing techniques, and delivery of products to consumers with lower impact to the environment.

Traditional collaboration is face-to-face. However, increasingly, modern collaboration, powered by the Web, is distributed. Examples of distributed collaboration include the Google search, the Wikipedia article, and the eBay auction, all which bring together disparate and distributed sources of information into a collaborative network mediated by common rules. Network mediated collaboration is based on small transactions, built upon standard technical and policy platforms, that enable low transaction costs both at a technical and legal level. By doing so, network mediated collaboration has a democratizing impact and therefore can engage mass audiences of users, contributors, and mediators, in ways that would otherwise be impossible. Likewise, open innovation is based on the mediated network collaboration concept: by making it easier to share documents, music, software, data, ideas, discoveries, and other kinds of knowledge, it has the potential to engage mass communities in the creative process. That brings with it innovation potential that no single company can match through internally funded R&D…

The particular problem that Chesbrough mentioned as likely to obstruct progress in ongoing talks about measures to avoid runaway climate change is the following one.  Companies are, understandably, trying to develop new technologies that could help with processes such as carbon capture and storage, or moving to new sources of energy.  Being accountable to shareholders, these companies are driven to gain maximal financial return from the intellectual property they invest into these technologies.  With such a mindset, there is a risk that these companies will take decisions that result in the rough equivalent of the type two errors mentioned earlier: projects are stopped, because companies don’t see how to gain adequate financial return from them.

One response to this dilemma is to decry the financial motivation.  But another response is to seek a more enlightened operating model – one which will deliver both financial returns and highly worthwhile products.  This deserves more thought!

Footnote: The “Open Innovation blog”, by Joel West, one of Henry Chesbrough’s co-authors, is a mine of useful ideas about Open Innovation.

25 June 2008

A tale of two meetings

Filed under: climate change, collaboration, Nuclear energy, SitP, solar energy, Spiked — David Wood @ 10:31 pm

In the past, I’ve enjoyed several meetings of the London Skeptics in the Pub (“SitP”). More than 100 people cram into the basement meeting space of Penderel’s Oak in Holborn, and listen to a speaker cover a contentious topic – such as alternative medicine, investigating the paranormal, or the “moon landings hoax”. What’s typically really enjoyable is the extended Q&A session in the second half of the meeting, when the audience often dissect the speaker’s viewpoint. Attendee numbers have crept up steadily over the nine years the group has existed. It’s little surprise that the group was voted into the Top Ten London Communities 2008 by Time Out.

Last night, the billed speaker was the renowned (many would say “infamous”) climate change denier, Fred Singer. The talk was advertised as follows:

Global Warming: Science, Economics, and some Moral Issues: What Al Gore Never Told You.

The science is settled: Evidence clearly demonstrates that carbon dioxide contributes insignificantly to Global Warming and is therefore not a ‘pollutant.’ This fact has not yet been widely recognized, and irrational Global Warming fears continue to distort energy policies and foreign policy. All efforts to curtail CO2 emissions, whether global, federal, or at the state level, are pointless — and in any case, ineffective and very costly. On the whole, a warmer climate is beneficial. Fred will comment on the vast number of implications.

Since this viewpoint is so far removed from consensus scientific thinking, I was hoping for a cracking debate. And indeed, the evening started well. Singer turned out to be a better speaker than I expected. Even though he’s well into his 80s, he spoke with confidence, courtesy, and good humour. And he had some interesting material:

  • A graph that seemed to show that global temperature has not been rising over the last ten years (even though atmospheric CO2 has incontrovertibly been rising over that time period)
  • A claim that all scientific models of atmospheric warming are significantly at variance with observed data (and therefore, we shouldn’t give these models much credence)
  • Suggestions that global warming is more strongly influenced by cosmic rays than by atmospheric CO2.

(The contents of the talk were similar to what’s in this online article.)

So I eagerly anticipated the Q&A. But oh, what a disappointment. I found myself more and more frustrated:

  • Quite a few of the audience members seemed incapable of asking a clear, relevant, concise question. Instead, they tended to go off on tangents, or went round and round in circles. (To my mind, the ability to identify and ask the key question, without distraction, is an absolutely vital skill for the modern age.)
  • Alas, the speaker could not hear the questions (being, I guess, slightly deaf from his advanced age); so they had to be repeated by the meeting moderator, who was standing at the front next to the speaker
  • The moderator often struggled to capture the question from what the audience member had said, so there were several iterations here
  • Then the speaker frequently took a LONG time to answer the question. (He was patient and polite, but he was also painstakingly SLOW.)

Result: lots of time wasted, in my view. No one landed anything like a decisive refutation of the speaker’s claims. There were lots of good questions that should have been asked, but time didn’t allow it. I also blamed myself, for not having done any research prior to the meeting (but I had been pretty busy on other matters for the last few days), and for not being able to do my usual trick of looking up information on my smartphone during a meeting (via Google, Wikipedia, etc) because network reception was very poor in the part of the basement where I was standing. In conclusion, although the discussion was fun, I don’t think we got anything like the best possible discussion that the speaker’s presentation deserved.

I mention all this, not just because I’m deeply concerned about the fearsome prospects of runaway global warming, but also because I’m interested in the general question of how to organise constructive debates that manage to reach to the heart of the matter (whatever the matter is).

As an example of a meeting that did have a much better debate, let me mention the one I attended this evening. It was hosted by Spiked, and was advertised as follows:

Nuclear power: what’s the alternative? The future of energy in Britain

As we seek to overcome our reliance on fossil fuels, what are the alternatives? Offshore turbines and wind farms are often cited as options but can they really meet more than a fraction of the UK’s energy needs? If not, is nuclear power a viable alternative? Public anxieties about nuclear plants’ safety, their susceptibility to terrorist attacks, and the problem of safely disposing of radioactive waste persist. But to what extent are these concerns justified? Is the real issue the public’s perception of both the risks and potential of nuclear energy? Ultimately, does nuclear energy, be it the promise of fusion or the reality of fission, finally mean we can stop guilt-tripping about energy consumption?

Instead of just one speaker, there were five, who had a range of well-argued but differing viewpoints. And the chairperson, Timandra Harkness (Director of Cheltenham Science Festival’s Fame Lab) was first class:

  • She made it clear that each speaker was restricted to 7 minutes for their opening speech (and they all kept to this limit, with good outcomes: focus can have wonderful results)
  • Then there were around half a dozen questions from the floor, asked one after the other, before the speaker panel were invited to reply
  • There were several more rounds of batched up questions followed by responses
  • Because of the format, the speakers had the option of ignoring the (few) irrelevant questions, and could concentrate on the really interesting ones.

For the record, I thought that all the speakers made good points, but Keith Barnham, co-founder of the solar cell manufacturing company Quantasol, was particularly interesting, with his claims for the potential of new generation photovoltaic concentrator solar cells. (This topic also featured in an engrossing recent Time article.) He recommended that we put our collective hope for near-future power generation “in the [silicon] industry that gave us the laptop and the mobile phone, rather than the industry that gave us Chernobyl and Sellafield”. (Ouch!) Advances in silicon have time and again driven down the prices of mobile phones; these benefits will also come quickly (Barnham claimed) to the new generation solar cells.

But the conclusion I want to draw is that the best way to ensure a great debate is to have a selection of speakers with complementary views, to insist on focus, and to chair the meeting particularly well. Yes, collaboration is hard – but when it works, it’s really worth it!

Footnote: the comparison between the Skeptics in the Pub meeting and the Spiked one is of course grossly unfair, since the former is run on a shoestring (there’s a £2 charge to attend) whereas the latter has a larger apparatus behind it (the entry charge was £10, payable in advance; and there’s corporate sponsorship from Clarke Mulder Purdie). But hey, I still think there are valid learnings from this tale of two different meetings – each interesting and a good use of time, but one ultimately proving much more satisfactory than the other.

