dw2

28 December 2009

Alternative perspectives on 21C technology innovation and life

Filed under: futurist, innovation — David Wood @ 6:26 pm

While browsing through Andrew Maynard’s stimulating blog “2020 science: Providing a clear perspective on developing science and technology responsibly”, I’ve run across a fascinating series of articles, authored by 10 different guest bloggers, entitled

Technology innovation, life, and the 21st century – ten alternative perspectives

Andrew introduces the series as follows:

Life in the 21st century is going to depend increasingly on technology innovation.  Governments believe this.  Industry believes this.  Scientists believe this.  But is technology innovation really the solution to every challenge facing humanity, or have we got hooked on an innovation habit so deeply we don’t even see the addiction?  And even if it is important – essential even – who decides which innovations are nurtured and how they are used?

I must confess I’m a staunch believer in the importance of technology innovation.  But I was reminded recently that not everyone sees the world in the same way, and that there are very different but equally valid perspectives on how science and technology should be used within society.

As a result, I decided to commission ten guest blogs on technology innovation from people working for, associated with or generally reflecting the views of Civic Society groups.  The aim was twofold – to expose readers to perspectives on technology innovation that are sometimes drowned out in mainstream conversations, and to give a sense of the breadth of opinions and perspectives that are often lumped under the banners of “civic society” or “Non Government Organizations”…

Before I pull out some comments from these ten guest blogs, let me declare my own bias.

Briefly, it is my view that

  • Humanity in the 21st century is facing both enormous challenges and enormous opportunities;
  • Wise application of technology is the factor that will make the single biggest difference to successfully addressing these challenges and opportunities;
  • If we get things right, human experience all over the world in just a few decades time will be very substantially better than it is today;
  • Technology can be accelerated by commercial factors, such as the operation of free markets, but these forces need review, supervision, and regulation, to increase the likelihood that the outcomes are truly beneficial;
  • Technology can also be accelerated by supporting, educating, encouraging, inspiring, and enabling larger numbers of skilled engineers worldwide, to work in open and collaborative ways;
  • At the same time as we involve more people in the development of technology, we should be involving more people in informed open deep debates about the management of the development of technology.

In other words, I am a strong optimist about the joint potential for technology, markets, engineers, and open collaboration, but I deeply fear allowing any one of these factors to become overly dominant.

Any view, for example, that “markets are always right” or that “technology will always be beneficial”, is a dangerous simplification.  The application of technology will only be “wise” (to return to the word I used above) if the powerful engines of market economics and technology development are overseen and directed by suitable review bodies, acting on behalf of society as a whole.  The manner of operation of these review bodies, in turn, needs to be widely debated.  In these debates, there can be nothing sacrosanct or above question.

The essays in the guest series on the 2020 science blog make some good contributions to these debates.   I don’t agree with all the points made, but the points all deserve a hearing.

The series starts with an essay “Biopolitics for the 21st Century” written by Marcy Darnovsky, Associate Executive Director of the Center for Genetics and Society.  Here are some extracts:

One challenge we face … is a tendency toward over-enthusiasm about prospective technologies. Another is the entanglement of technology innovation and commercial dynamics. Neither of these is brand new.

Back in the last century, the 1933 Chicago World’s Fair took “technological innovation” as its theme and “A Century of Progress” as its formal name. Its official motto was “Science Finds, Industry Applies, Man Conforms.” The slogan shamelessly depicts “science” and “industry” as dictator – or at least drill sergeant – of humanity. It anoints industrial science as a rightful decision-maker about human ends, and an inevitable purveyor of societal uplift.

Today the 1933 World’s Fair slogan seems altogether crass. But have we earned our cringe? We’d like to think that we’re more realistic about science and technology innovations. We want to believe that, in some collective sense, we’re in control of their broad direction. But are we less giddy about the techno-future now than we were back then?  Does technology innovation now serve human needs rather than the imperatives of commerce? Have we devised social and cultural innovations for shaping new technologies – do we have robust democratic mechanisms that encourage citizens and communities to participate meaningfully in decisions about their development, use and regulation?

I’m afraid that the habits of exaggerating the benefits of new technologies and minimizing their unwanted down sides are with us still…

Technology innovation is increasingly dominated by large-scale commercial imperatives.  Over the past century, and ever more so since the 1980 Bayh-Dole Act (an attempt to spur innovation by allowing publicly funded researchers to profit from their work), innovators have become scientist-entrepreneurs, and universities something akin to corporate incubators.

Commercial dynamics have become particularly influential in the biosciences. It’s hard to imagine any scientist today responding as Jonas Salk did in 1955, when he said with a straight face that “the people” own the polio vaccine. “There is no patent,” he told legendary news broadcaster Edward R. Murrow. “Could you patent the sun?”

Of course, entrepreneurial activity in technology and science often delivers important benefits. It can bring new discoveries and techniques to fruition quickly, and make them available rapidly. Some recent commercial technologies, most notably in digital communication and computing, are stunning indeed.

But how far have we come from the slogan of the 1933 World’s Fair? Technology developers still routinely present their plans either as “inevitable” or as crucial for economic growth. As for the rest of us, we have few opportunities to deliberate – especially as citizens, but also as consumers – about the risks as well as the benefits of technology innovations. Twenty-first century societies and communities too often wind up conforming to new technologies rather than finding ways to shape their goals and direction…

Georgia Miller, who coordinates Friends of the Earth Australia’s Nanotechnology Project, wrote an essay “Beyond safety: some bigger questions about new technologies” for the series.  The essay includes the following:

The promise that a given new technology will deliver environmentally benign electricity too cheap to meter, end hunger and poverty, or cure disease is very seductive. That is why the claims are made with many emerging technologies – nuclear power, biotechnology and nanotechnology, to name a few.

However history shows that such optimistic predictions are never achieved in reality. In addition to benefits, new technologies come with social, economic and environmental costs, and sometimes significant political implications.

Still, when it comes to public communication or policy making about nanotechnology, we’re often presented with the limited notion of weighing up predicted ‘benefits’ versus ‘risks’…

This framing ignores the broader costs and transformative potential of new technologies. It suggests that if we can only make nanotechnology ‘safe’, its development will necessarily deliver wealth, health, social opportunities and even environmental gains.

Ensuring technology safety is clearly very important. But simply assuming that ‘safe’ technology will deliver nothing but benefits, and that these benefits will be available to everyone, is – to put it mildly – quite optimistic.

To evaluate whether or not new technologies will help or hinder efforts to address the great ecological and social challenges of our time, we need to dig a little deeper…

Our experience also teaches us that environmentally or socially promising technologies will not necessarily be adopted, especially if they challenge the status quo. The government of Australia, one of the sunniest countries on earth, has pledged billions of dollars to cushion the coal industry from the effects of a proposed carbon trading system, while offering scant support to the fledgling solar energy sector.

There is a tendency to focus on the potential of new technologies to address our most pressing problems, rather than to seek better deployment of existing technologies, better design of existing systems, or changes in production and consumption. This reflects a preference to avoid systemic change. It also reflects an unfounded optimism that the ‘solution’ lies just over the horizon.

But sometimes ensuring better deployment of existing technologies is the most effective way to deal with a problem. Just as wider accessibility of existing drugs and medical treatments could prevent a huge number of deaths world-wide, improving urban storm water harvesting and re-use, housing insulation and mass transit public transport could go a long way to reducing our ecological footprint – potentially at a lower cost and at lower risk than mooted high tech options.

If evaluating the implementation or performance failures of previous technologies reveals economic or social obstacles or constraints, it’s probably these factors that warrant our attention. There is no reason to believe they will magically disappear once new technologies arrive…

Geoff Tansey of the Food Ethics Council weighs into the debate with his essay, “Innovation for a well-fed world – what role for technology?”

Andrew [Maynard] posed the question, “How should technology innovation contribute to life in the 21st century?”

For me, working on creating a well-fed world, the short answer is: in a way that supports a diverse, fair and sustainable food system in which everyone, everywhere can eat a healthy, safe, culturally appropriate diet. For that to happen, we need a change of direction in which the key innovations needed are social, economic and political, not technological. And the questions are: what kind of technology, developed by whom, for whom, will help; who has what power to decide what to do and to control it; and who carries the risks and gets the benefits?

Take the debate on GM technology, for example. We in the Food Ethics Council … argue that instead of asking, ‘how can GM technology help secure global food supplies’, we need to ask ‘what can be done – by scientists but also by others – to help the world’s hungry?’…

Remember, too, that you do not have to have a correct scientific understanding of something to develop technologies that work. But sometimes we need a revolution in the history of science to conceive of new ways of engineering things – from Einstein’s insight that matter could be converted to energy, to Watson and Crick’s discovery of the structure of DNA and our understanding that life – and information – is digital and can be manipulated and re-engineered as such. That leads to new technological possibilities, as do nano-tech and synthetic biology – but all new technologies are generally over-hyped and invariably have unintended consequences. Indeed, global warming is the unintended consequence of a fossil-fuel driven industrial revolution…

In her essay, “Stop and Think: A Luddite Perspective”, Jennifer Sass, Senior Scientist at the Natural Resources Defense Council, makes some pained comments about technology and progress, before raising some specific concerns about nanotechnology:

Is there a role for technology in progressive social movements? Sure.

It wasn’t until the mechanization of cotton harvesting in the 1980s that Missouri enacted compulsory education laws. New technology meant children were no longer needed in the field.

Lead wasn’t forced out of auto fuel when it was shown to destroy kids’ brains (known by the 1920s). It was removed when it was found to destroy catalytic converters introduced in the mid-1970s. Technology not only saved future generations from leaded gasoline, but it reduced other harmful pollution from auto exhaust.

Nano-scale chemicals, intentionally designed to take advantage of unique properties at the small scale, are already offering social benefits, but at what costs?

Traditional treatment of hazardous waste sites is predominantly done with technologies such as carbon adsorption, chemical precipitation, filtration, steam, or bioremediation. Nanoremediation (can you believe there is already a new word for this?) can mean treatment with nanoscale metal oxides, carbon nanotubes, enzymes, or the already popular nanoscale zero-valent iron. The advantage is that the nanoparticles are more chemically reactive and so may be designed to be more effective with less material…

But, what happens to the nanoparticles in the treated groundwater once they’ve completed their intended task? Do they just go away? Poof?

Carbon nanotubes are 100 times stronger than steel and six times lighter. Research to weave them into protective clothing is already underway, although nothing is on the market yet. A nano-carbon vest could make our soldiers bullet-proof and stab-proof while remaining light-weight.

But, what happens when the nanotubes are freed from the material, such as during the manufacturing of the textiles, fabrication of the clothing, or when it is damaged or destroyed in an explosion? Breathable nanotubes can be like asbestos fibers, causing deadly lung diseases.

If nano-scale elements are used extensively in electronics and computers, does this mean that most of the hazardous exposures associated with manufacturing and end-of-life stripping will fall to workers in the global south, whereas most of the advantages of improved technology will be reaped by the global north?

I’m not against new technologies per se. In fact, as a scientist I favor innovation. I love cool new stuff. But, will it make jobs more hazardous? Will it contaminate the environment? Will it contribute to social and economic injustices by distributing the risks and benefits unequally?…

Richard Owen, Chair in Environmental Risk Assessment at the University of Westminster, and Co-ordinator of the UK Environmental Nanoscience Initiative, raises some dark worries in his essay “A new era of responsible innovation”:

In 1956 one of my favourite films hit the big screen: a classic piece of science fiction called Forbidden Planet. It tells the story of a mission in the 23rd century to a distant planet, to find out what has happened to an earlier scientific expedition. On arrival the crew encounter the sole survivors, Dr Morbius and his daughter: the rest of the expedition has mysteriously disappeared. Morbius lives in a world of dazzling technology, the like of which the crew have never seen.

He had discovered the remnants of a highly advanced civilisation, the Krell, and an astonishing machine they had developed, the Plastic Educator. This could radically enhance their intellect, allowing them to materialise any thought, to develop new and wondrous technologies. Morbius had done the same. But something terrible had happened to the Krell: not only did the Plastic Educator develop their intellect, it also unwittingly heightened the darker sides of their subconscious minds, ‘Monsters from the Id’. In one night of savage destruction they were taken over by their own dark forces, leaving their advanced society extinct.

Now I’m not going to tell you how it ends; you’ll have to watch the film yourself. And it would be fanciful to say that we are heading for the same fate as the Krell. But it is fair to say that our relationship with innovation can at times be troublesome, with consequences that can on occasion be global in nature.

You may have heard for example of a clever financial innovation called ‘securitisation’: you may also know that this has helped leave a legacy of toxic debt that all of us will play a part in cleaning up. This is dwarfed by the legacy that our relationship with fossil fuel burning technology will leave not only for our children, but also for their grandchildren. These examples show that it is important that we innovate, to drive our economy, to improve our lifestyles and wellbeing, to find solutions to the big issues we face – but it is critical that we innovate responsibly. And public demands to be responsible, to avoid excessive risks, go beyond banks: they also apply to research.

In his inaugural speech in January, Barack Obama called for a ‘new era of responsibility’. I want to know what this new era will look like. For a number of years I worked for a regulator, the Environment Agency. I discovered that regulation is an incredibly powerful tool to promote responsible innovation, and there is no doubt that it will continue to play an important role. Development of policies and regulation, for new technologies for example, tends to be ‘evidence based’ – that is, evidence is acquired to make the case for amending or bringing in new legislation, and here the research councils play an important role.

I’m fascinated by how this process works. Take for example nanotechnology, which has been described as the first industrial revolution of the 21st century. It’s small stuff, but big business, taking advantage of the fact that materials at the nanoscale (a billionth of a metre) can have fundamentally different properties compared to other (perhaps larger) forms of the same material. So while carbon nanotubes resemble tiny rolled-up sheets of graphite, they behave very differently – indeed, they have been called ‘the hottest thing in physics’.

Nanotechnology has a projected market value of many billions of pounds, potentially providing important solutions for renewable energy, healthcare, and the environment. But if these nanomaterials behave so differently, do they present greater risks, to the environment or to human health? If so, do they need to be regulated differently? How do we balance economic growth with preventing harm to people and the environment?…

I’m convinced there is a way to link innovation with responsibility more efficiently, to make it more anticipatory. And I’ve been struck by how willing and open the people I have worked with at NERC, EPSRC and ESRC have been to consider these approaches. Maybe there is a silver lining in the black cloud of the recent financial chaos; maybe we are learning that responsible innovation is sustainable innovation, that it’s a good thing, and that a commitment to it will help build resilient and responsible economies. Maybe Barack Obama was right, maybe we are about to enter a new era of responsibility. I hope so.

The final essay in the series is “21st Century Tech Governance? What would Ned Ludd do?“, by Jim Thomas of the ETC Group:

What if we could drag emerging technologies into a modern court of public deliberation and democratic oversight? What might that look like?

I’ve been turning over that question for about 15 years now while active in global debates on emerging technologies – particularly GM Crops, Nanotechnology, Synthetic Biology and Geo-engineering – debates in which I’ve encountered the term Luddite, meant as a slur, more times than I care to count. Language like this tumbles carelessly out of history… but I find the parallels striking. Once again we are in the early phases of a new industrial revolution. Once again powerful technologies (Converging Technologies) are physically remaking and sometimes disintegrating our societies. Those of us in civil society carrying out bit-part campaigns, issuing press releases and launching legal challenges are in a sense attempting to drag technology governance away from the darkness of narrow expert committees and into the sunny court of public deliberation for a broader hearing. It seems a perfectly reasonable and democratic urge. But surely there’s got to be a better and more systematic way to do that?

So far I’ve found three sets of proposals that might begin to put technology oversight into the open and back in the hands of a wider public:

1.) Public Engagement: Citizens Juries, Knowledge exchanges, People’s Commissions…

2.) Global Oversight: ICENT.

ICENT stands for the International Convention for the Evaluation of New Technologies – a UN-level body for foresighting emerging technology trends and then applying a wide-ranging assessment process that would consider the social, environmental and justice implications of the innovation being scrutinised. It doesn’t exist yet and maybe it never will, but at ETC Group we have dedicated a lot of time to imagining what such a body could look like (we even have some nifty organograms – see pp. 36-40 of this). For example, there would be bodies scanning the technological horizon and others making a rough reckoning of whether a new technology needed a strong oversight framework or not…

3.) Popular assessment : Technopedia?

The only governance and regulations that work are those where somebody is paying attention – so rather than hide technology assessment in rarefied committees, why not hand it to the wisdom of the crowds? Wikipedia may not be the most perfectly accurate source of all knowledge, but it is comprehensive, up to date and flexible, and it provides an interesting model. Actually, Wikipedia entries are often not a bad place to start if you want to suss out the societal and environmental issues raised by new technologies. How about a dedicated wiki site for collaborative monitoring and judging of emerging technologies? Such a site could be structured so that, unlike the halls of power, marginal voices have a space and are welcome…

It’s good to see this range of spirited and thoughtful contributions to the debate about the future of technological innovation.  Of course, this is just the tip of a very large iceberg of discussion, happening all over the Internet.  The really hard question, perhaps, is: what is the optimal method and location for this debate?  Jim Thomas’ suggestion of a new wiki has some merit – provided it could become an authoritative and definitive wiki on emerging technologies, one that rises above the vast crowd of existing websites. Is there already such a wiki in existence?
