dw2

7 January 2010

Mobiles manifesting AI

Filed under: AGI, Apple, futurist, intelligence, m2020, vision — David Wood @ 12:15 am

If you get lists from 37 different mobile industry analysts of “five game-changing mobile trends for the next decade“, how many overlaps will there be?  And will the most important ideas be found in the “bell” of the aggregated curve of predictions, or instead in the tails of the curve?

Of the 37 people who took part in the “m2020” exercise conducted by Rudy De Waele, I think I was the only person to mention either of the terms “AI” (Artificial Intelligence) or “PDA” (Personal Digital Assistant), as in the first of my five predictions for the 2010s:

  • Mobiles manifesting AI – fulfilling, at last, the vision of “personal digital assistants”

However, there were some close matches:

  • Rich Wong predicted “Smart Agents 2.0 (thank you Patty Maes) become real; the ability to deduce/impute context from blend of usage and location data”;
  • Marshall Kirkpatrick predicted “Mobile content recommendation”;
  • Carlo Longino predicted “The mobile phone will evolve into an enabler device, carrying users’ digital identities, preferences and possessions around with them”;
  • Steve O’Hear predicted “People will share more and more personal information. Both explicit e.g. photo and video uploads or status updates, and implicit data. Location sharing via GPS (in the background) is one current example of implicit information that can be shared, but others include various sensory data captured automatically via the mobile phone e.g. weather, traffic and air quality conditions, health and fitness-related data, spending habits etc. Some of this information will be shared privately and one-to-one, some anonymously and in aggregate, and some increasingly made public or shared with a user’s wider social graph. Companies will provide incentives, both at the service level or financially, in exchange for users sharing various personal data”;
  • Robert Rice predicted “Artificial Life + Intelligent Agents (holographic personalities)”.

Of course, these predictions cover a spread of different ideas.  Here’s what I had in mind for mine:

  • Our mobile electronic companions will know more and more about us, and will be able to put that information to good use to assist us better;
  • For example, these companion devices will be able to make good recommendations (e.g. mobile content, or activities) for us, suggest corrections and improvements to what we are trying to do, and generally make us smarter all-round.

The idea is similar to one that John Sculley often talked about during his tenure as CEO of Apple.  From a history review article about the Newton PDA:

John Sculley, Apple’s CEO, had toyed with the idea of creating a Macintosh-killer in 1986. He commissioned two high budget video mockups of a product he called Knowledge Navigator. Knowledge Navigator was going to be a tablet the size of an opened magazine, and it would have very sophisticated artificial intelligence. The machine would anticipate your needs and act on them…

Sculley was enamored with Newton, especially Newton Intelligence, which allowed the software to anticipate the behavior of the user and act on those assumptions. For example, Newton would filter an AppleLink email, hyperlink all of the names to the address book, search the email for dates and times, and ask the user if it should schedule an event.

As we now know, the Apple Newton fell seriously short of expectations.  The performance of its “intelligent assistance” became something of a joke.  However, there’s nothing wrong with the concept itself.  It just turned out to be a lot harder to implement than originally imagined.  The passage of time is bringing us closer to genuinely useful systems.
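
To make the concept concrete, here’s a minimal sketch of the style of assistance the Newton was aiming at – scanning a message for names that appear in an address book and for date-like phrases, then proposing follow-up actions.  This is purely illustrative Python of my own; the toy address book, the crude regular expression, and the suggestion wording are assumptions for the sketch, not anything taken from the Newton (or from any current product).

```python
import re

# Toy address book and a crude date pattern -- both purely illustrative.
ADDRESS_BOOK = ["John Sculley", "Jonas Salk"]
DATE_PATTERN = re.compile(
    r"\b\d{1,2}\s+(?:January|February|March|April|May|June|July|"
    r"August|September|October|November|December)\s+\d{4}\b")

def suggest_actions(message):
    """Scan a message and return Newton-style assistance suggestions."""
    suggestions = []
    # Offer to hyperlink any name that matches an address book entry.
    for name in ADDRESS_BOOK:
        if name in message:
            suggestions.append(f"Link '{name}' to their address book entry?")
    # Offer to schedule an event for each date-like phrase found.
    for date_text in DATE_PATTERN.findall(message):
        suggestions.append(f"Create a calendar event on {date_text}?")
    return suggestions

if __name__ == "__main__":
    email = "John Sculley proposes a review meeting on 7 January 2010 at the office."
    for suggestion in suggest_actions(email):
        print(suggestion)
```

The real difficulty, of course, lies not in this kind of pattern matching but in judging which suggestions are actually helpful in context – which is exactly where the Newton struggled.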

Many of the interfaces on desktop computers already show an intelligent understanding of what the user may be trying to accomplish:

  • Search bars frequently ask, “Did you mean to search for… instead of…?” when I misspell a search term;
  • I’ve almost stopped browsing through my list of URL bookmarks; I just type a few characters into the URL bar and the web-browser lists websites it thinks I might be trying to find – including some from my bookmarks, some pages I visit often, and some pages I’ve visited recently;
  • It’s the same for finding a book on Amazon.com – the list of “incrementally matching books” can be very useful, even after typing only part of a book’s title;
  • And it’s the same using the Google search bar – the list of “suggested search phrases” contains, surprisingly often, something I want to click on;
  • The set of items shown in “context-sensitive menus” often seems a much smarter fit to my needs nowadays than it did when the concept was first introduced.

On mobile, search is frequently improved further by filtering results according to the user’s location.  As another example, typing a few characters into the home screen of the Nokia E72 smartphone brings up a list of possible actions for the people whose contact details match what’s been typed.
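
As a rough illustration of what sits behind these suggestion lists, here’s a small sketch of the kind of scoring such an interface might do: candidates that match the typed prefix are ranked by how often and how recently they’ve been used, with an optional bonus for being physically nearby.  It’s illustrative Python only – the field names, the weights, and the scoring formula are my own assumptions, not any real browser’s or phone’s algorithm.

```python
from dataclasses import dataclass
from datetime import datetime
from math import hypot
from typing import Optional, Tuple

@dataclass
class Candidate:
    label: str            # bookmark title, contact name, book title...
    uses: int             # how often the user has picked it before
    last_used: datetime   # when it was last picked
    location: Optional[Tuple[float, float]] = None  # rough (x, y) position in km, if relevant

def score(candidate: Candidate, typed: str, here=None, now=None) -> float:
    """Blend prefix match, frequency, recency and proximity into one score.
    The weights are arbitrary; a real engine would tune them from usage data."""
    if not candidate.label.lower().startswith(typed.lower()):
        return 0.0
    now = now or datetime.now()
    frequency = min(candidate.uses, 50) / 50.0               # cap heavy hitters
    recency = 1.0 / (1 + (now - candidate.last_used).days)   # decays as days pass
    proximity = 0.0
    if here is not None and candidate.location is not None:
        distance_km = hypot(candidate.location[0] - here[0],
                            candidate.location[1] - here[1])
        proximity = 1.0 / (1 + distance_km)
    return 0.4 * frequency + 0.4 * recency + 0.2 * proximity

def suggest(candidates, typed, here=None, top_n=3):
    """Return the best-scoring matches for the characters typed so far."""
    scored = [(score(c, typed, here), c.label) for c in candidates]
    return [label for s, label in sorted(scored, reverse=True) if s > 0][:top_n]
```

Feeding such a function with browsing history, the contact list, or nearby points of interest would give the short, context-sensitive lists described above; the interesting engineering is in choosing and tuning the signals, not in the ranking loop itself.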

Improving the user experience with increasingly complex mobile devices, therefore, will depend not just on clearer graphical interfaces (though that will help too), but on powerful search engines that are able to draw upon contextual information about the user and his/her purpose.

Over time, it’s likely that our mobile devices will be constantly carrying out background processing of clues, making sense of visual and audio data from the environment – including processing the stream of nearby spoken conversation.  With the right algorithms, and with powerful hardware capabilities – and provided issues of security and privacy are handled in a satisfactory way – our devices will fulfill more and more of the vision of being a “personal digital assistant”.

That’s part of what I mean when I describe the 2010s as “the decade of nanotechnology and AI”.

6 January 2010

Mobile trends for the next decade

Filed under: futurist, m2020, smartphones, vision — David Wood @ 5:04 pm

A few days ago, I received an interesting invitation from mobile strategist and innovator Rudy De Waele:

It’s the end of the decade and for many of us it has been a very actively ‘mobile’ decade, a lot of the efforts and projects of our peers have become real and successful during this decade.

As for the start of a new decade, I’ve had this idea of asking some of the people I met during the last decade  to write down their five game-changing mobile trends for the next decade.

The format is to list your 5 trends for the next decade, in words, a sentence or a paragraph, no links.

It was a great question – especially the requirement to stick to just five trends.  Here’s the set which, after some thought, I emailed back to Rudy:

  1. Mobiles manifesting AI – fulfilling, at last, the vision of “personal digital assistants”.
  2. Powerful, easily wearable head-mounted accessories: audio, visual, and more.
  3. Mobiles as gateways into vivid virtual reality – present-day AR is just the beginning.
  4. Mobiles monitoring personal health – the second brains of our personal networks.
  5. Mobiles as universal remote controls for life – a conductor’s baton as much as a viewing portal.

No fewer than 37 different people from throughout the mobile and IT industries contributed answers.  The entire set of answers is now available for viewing on Rudy’s m-trends.org blog and is also posted onto slideshare.net, from where you can download a PDF version.

Each of the 37 sets of answers has at least one item (usually more!) that’s a good conversation starter.  The ongoing “#m2020” dialog that Rudy has started is likely to cast a long shadow.

Some of the predictions are very encouraging – like the set from Katrin Verclas covering mobiles in social development, transformation of politics (for example, in Africa), mobile payments, mobile healthcare, and mobile environmental monitoring.  Other sets of predictions foresee difficulties and backlashes as well as progress.  Some of the destruction foreseen could be counted as “creative destruction”, as in the prediction by Alan Moore:

The communications revolution accelerates, destroying businesses that refuse to think the unthinkable.

The predictions include many “first order effects” (technologies or products that people already foresee and desire, and which are already under development), but also several interesting comments on what Tom Hume calls “second order effects“.  Tom comments:

No-one predicted the loosening of time and space that Mimi Ito has noted. Similarly, what happens to our social arrangements when every photo can be face-recognised, geolocated and individuals tracked? What happens to shops when every price can be compared? What happens to conversation when it’s all recorded, or any fact is a 5-second voice-search away from being checked?

The full effects of ever-wider usage of mobile technology are, indeed, hard to predict – especially when we bear in mind the following forecast from Carlos Domingo:

Ubiquity of mobile broadband will lead to an explosion of connected devices (à la Kindle, not just phones) and M2M services (machine to machine services, without a human behind the device). In 10 years, there will be more devices/machines connected to the mobile network than humans.

In a similar vein, Nicolas Nova predicts:

Non-humans (objects, animals, places) will generate more data than humans.

Mobile handsets will very likely look quite different, at the end of the decade, than they do at the beginning.  As Marek Pawlowski forecasts:

Keyboard dimensions and screen size cease to be the primary limiting factors in handset design as new input and display technologies free designers to radically change the form factor of personal communication devices.

I’ll end by sharing one of the predictions from Jonathan MacDonald, which seems to me particularly compelling:

Convergence of physical, augmented and virtual reality: augmented and virtual reality will become an increasingly standard method for search, discovery, gaming, eyesight, healthcare, retail, entertainment and most other experiences in life. Location and other contextual functions will grow so our 2D mobile experiences become 3D and ‘real’. To such an extent that the prefixes ‘augmented’ and ‘virtual’ will eventually become redundant.

The items I’ve picked out above are just scratching the surface.  There’s much, much more to read and ponder in the entire slideshow – click over to Rudy’s blog to explore further!


2 January 2010

Vital for futurists: hacking the earth

Filed under: books, climate change, futurist, geoengineering — David Wood @ 1:16 am

Here’s a tip, for anyone seriously interested in the big issues that will dominate discussion in the next 5-10 years.  You should become familiar (if you’re not already) with the work of Jamais Cascio.  Jamais is someone who consistently has deep, interesting, and challenging things to say about the large changes that are likely to sweep over the planet in the decades ahead.

In 2003, Jamais co-founded WorldChanging.com, a website dedicated to finding and calling attention to models, tools and ideas for building a “bright green” future. In March, 2006, he started Open the Future.

One topic that Jamais has often addressed is geoengineering – sometimes also called “climate engineering”, “planetary engineering”, or “terraforming”.  Geoengineering covers a range of large-scale projects that could, conceivably, be deployed to head off the effects of runaway global warming.  Examples include launching large mirrors into space to reflect sunlight away from the earth, injecting sulphate particles into the stratosphere, brightening clouds or deserts to increase their reflectivity, and extracting greenhouse gases from the atmosphere.  It’s a thoroughly controversial topic.  But Jamais treads skilfully and thoughtfully through the controversies.

A collection of essays by Jamais on the topic of geoengineering is available in book format, under the title “Hacking the earth: understanding the consequences of geoengineering“.  It’s a slim volume, with just over 100 pages, but it packs lots of big thoughts.  While reading, I found myself nodding in agreement throughout the book.

At present, this book is only available from Lulu.com.  As Jamais says, the book is, for him:

an experiment in self-publishing…

… in recent weeks various friends have tried out – and given high marks to – web-based self-publishing outfits like Lulu.com… I thought I’d give this method a shot.

The material in the book is derived from articles published online at Open the Future and elsewhere.  Some of the big themes are as follows (the following bullet points are all excerpts from Jamais’ writing):

  • Feedback effects ranging from methane released from melting permafrost to carbon emissions from decaying remnants of forests devoured by pine beetles risk boosting greenhouse gases faster than natural compensation mechanisms can handle.  The accumulation of non-linear drivers can lead to “tipping point” events causing functionally irreversible changes to geophysical systems (such as massive sea-level increases).  Some of these can have feedback effects of their own, such as the elimination of ice caps reducing global albedo, thereby accelerating heating.
  • None of the bright green solutions — ultra-efficient buildings and vehicles, top-to-bottom urban redesigns, local foods, renewable energy systems, and the like — will do anything to reduce the anthropogenic greenhouse gases that have already been emitted. The best result we get is stabilizing at an already high greenhouse gas level. And because of ocean thermal inertia and other big, slow climate effects, the Earth will continue to warm for a couple of decades even after we stop all greenhouse gas emissions. Transforming our civilization into a bright green wonderland won’t be easy, and under even the most optimistic estimates will take at least a decade; by the time we finally stop putting out additional greenhouse gases, we could well have gone past a point where globally disastrous results are inevitable. In fact, given the complexity of climate feedback systems, we may already have passed such a tipping point, even if we stopped all emissions today.
  • Geoengineering, should it be tried, would not be a replacement for making the economic, social, and technological changes needed to eliminate anthropogenic greenhouse gases. It would only be a way of giving us more time to make those changes. It’s not an either-or situation; geo is a last-ditch prop for making sure that we can do what needs to be done.
  • We don’t know enough about how the various geoengineering proposals would play out to make a persuasive case for trying any of them.  There needs to be far more study before making any even moderate-scale experimental effort. This is not something to try today. The most important task for current geoengineering research is to identify the approaches that might look attractive at first, but have devastating results — we need to know what we should avoid even if desperate.
  • Like it or not, we’ve entered the era of intentional geoengineering. The people who believe that (re)terraforming is a bad idea need to be part of the discussion about specific proposals, not simply sources of blanket condemnations. We need their insights and intelligence. The best way to make that happen, the best way to make sure that any terraforming effort leads to a global benefit, not harm, is to open the process of studying and developing geotechnological tools.
  • Geoengineering presents more than just an environmental question. It also presents a geopolitical dilemma. With processes of this magnitude and degree of uncertainty, countries would inevitably argue over control, costs, and liability for mistakes. More troubling, however, is the possibility that states may decide to use geoengineering efforts and technologies as weapons. Two factors make this a danger we dismiss at our peril: the unequal impact of climate changes, and the ability of small states and even nonstate actors to attempt geoengineering.
  • It is possible that, should the international community refrain from geoengineering strategies, one or more smaller, non-hegemonic, actors could undertake geoengineering projects of their own. This could be out of a legitimate fear that prevention and mitigation strategies would be insufficient, out of a disagreement with the consensus over geoengineering safety or results, or—most troublingly—out of a desire to use geoengineering tools to achieve a relative increase in competitive power over adversaries.

I particularly liked Jamais’ suggestion of a “Reversibility Principle” as an alternative to the “Precautionary Principle” and “Proactionary Principle” that have previously been suggested as guidelines for deciding which actions to take, regarding the application of technology.

Geoengineering is, by its nature, a huge topic.  The “Technology Review” magazine contains a substantial analysis entitled “The Geoengineering Gambit” in its Jan-Feb 2010 edition. And the authors of Freakonomics, Stephen J. Dubner and Steven Levitt, included a chapter on geoengineering in their follow-up book, “Superfreakonomics“.  As it happens, there seems to be wide consensus that the Freakonomics team were considerably too hasty in their analysis – see for example the Guardian article “Why Superfreakonomics’ authors are wrong on geo-engineering“.  But the fact that there were mistakes in that analysis doesn’t mean the topic itself should fade from view.

Far from it: I’m sure we’re going to be hearing more and more about geoengineering.  It deserves our attention!

28 December 2009

Wired’s top 7 mobile disruptions of 2009

Filed under: change, disruption, futurist — David Wood @ 9:00 pm

Wired.com today provide their list of the “top 7 disruptions” for mobile in 2009:

  1. Google Stack
  2. Mobile App Stores
  3. HTML5
  4. A New FCC
  5. Streaming Music
  6. The Real-Time Web
  7. Augmented Reality

You can read the details on Wired.com.  It’s a pretty good list: all the items included are important.

To nitpick, it’s not clear that they all count as “disruptive” rather than “evolutionary”.  And it seems at least some of the items are on the list because of what they’ll accomplish in 2010 rather than in 2009.   Never mind.

Wired asks: “What did we miss?”

With the same two provisos as before, I offer four additional candidates for inclusion:

1.) Mobile maps

Mobile maps seem to be getting better and better, and to be used more and more widely.  With 3D as well as 2D, with improved routing, and with plug-in integration from numerous third party apps and services, this trend is likely to continue.

2.) Mobile payments

From the perspective of the so-called developed world, use of mobile phones in payment transactions still seems relatively unexciting.  But from the perspective of the developing world – in countries where bank accounts and credit cards are comparatively scarce – mobile payments are already making a decisive difference.

3.) User Experience

For a while, technologists could tell themselves that good user experience (UX) was an optional extra, necessary for some mobile products but not for all.  This view is fading fast.  It’s now clear that users have become aware that good UX is possible on mobile – even for complex services – and they take an increasingly dim view of any mobile product that scores weakly on UX.

4.) Open Source

It’s still early days for people to see the benefits of applying open source methods to creating mobile tools, applications, services, and middleware, and (last but far from least) to improving the underlying platform.  But the transformational potential is enormous – both in the improvements that end users will notice, and in the skillsets best suited to take advantage of the new innovation engine.  For further discussion of these points, see my recent presentation “Open ecosystems: a good thing?” (PDF) to the Cambridge Wireless network.

Alternative perspectives on 21C technology innovation and life

Filed under: futurist, innovation — David Wood @ 6:26 pm

While browsing through Andrew Maynard’s stimulating blog “2020 science: Providing a clear perspective on developing science and technology responsibly“, I’ve run across a fascinating series of articles, authored by 10 different guest bloggers, entitled

Technology innovation, life, and the 21st century – ten alternative perspectives

Andrew introduces the series as follows:

Life in the 21st century is going to depend increasingly on technology innovation.  Governments believe this.  Industry believes this.  Scientists believe this.  But is technology innovation really the solution to every challenge facing humanity, or have we got hooked on an innovation habit so deeply we don’t even see the addiction?  And even if it is important – essential even – who decides which innovations are nurtured and how they are used?

I must confess I’m a staunch believer in the importance of technology innovation.  But I was reminded recently that not everyone sees the world in the same way, and that there are very different but equally valid perspectives on how science and technology should be used within society.

As a result, I decided to commission ten guest blogs on technology innovation from people working for, associated with or generally reflecting the views of Civic Society groups.  The aim was twofold – to expose readers to perspectives on technology innovation that are sometimes drowned out in mainstream conversations, and to give a sense of the breadth of opinions and perspectives that are often lumped under the banners of “civic society” or “Non Government Organizations”…

Before I pull out some comments from these ten guest blogs, let me declare my own bias.

Briefly, it is my view that

  • Humanity in the 21st century is facing both enormous challenges and enormous opportunities;
  • Wise application of technology is the factor that will make the single biggest difference to successfully addressing these challenges and opportunities;
  • If we get things right, human experience all over the world in just a few decades’ time will be very substantially better than it is today;
  • Technology can be accelerated by commercial factors, such as the operation of free markets, but these forces need review, supervision, and regulation, to increase the likelihood that the outcomes are truly beneficial;
  • Technology can also be accelerated by supporting, educating, encouraging, inspiring, and enabling larger numbers of skilled engineers worldwide to work in open and collaborative ways;
  • At the same time as we involve more people in the development of technology, we should be involving more people in informed open deep debates about the management of the development of technology.

In other words, I am a strong optimist about the joint potential for technology, markets, engineers, and open collaboration, but I deeply fear allowing any one of these factors to become overly dominant.

Any view, for example, that “markets are always right” or that “technology will always be beneficial”, is a dangerous simplification.  The application of technology will only be “wise” (to return to the word I used above) if the powerful engines of market economics and technology development are overseen and directed by suitable review bodies, acting on behalf of society as a whole.  The manner of operation of these review bodies, in turn, needs to be widely debated.  In these debates, there can be nothing sacrosanct or above question.

The essays in the guest series on the 2020 science blog make some good contributions to these debates.   I don’t agree with all the points made, but the points all deserve a hearing.

The series starts with an essay “Biopolitics for the 21st Century” written by Marcy Darnovsky, Associate Executive Director of the Center for Genetics and Society.  Here are some extracts:

One challenge we face … is a tendency toward over-enthusiasm about prospective technologies. Another is the entanglement of technology innovation and commercial dynamics. Neither of these is brand new.

Back in the last century, the 1933 Chicago World’s Fair took “technological innovation” as its theme and “A Century of Progress” as its formal name. Its official motto was “Science Finds, Industry Applies, Man Conforms.” The slogan shamelessly depicts “science” and “industry” as dictator – or at least drill sergeant – of humanity. It anoints industrial science as a rightful decision-maker about human ends, and an inevitable purveyor of societal uplift.

Today the 1933 World’s Fair slogan seems altogether crass. But have we earned our cringe? We’d like to think that we’re more realistic about science and technology innovations. We want to believe that, in some collective sense, we’re in control of their broad direction. But are we less giddy about the techno-future now than we were back then?  Does technology innovation now serve human needs rather than the imperatives of commerce? Have we devised social and cultural innovations for shaping new technologies – do we have robust democratic mechanisms that encourage citizens and communities to participate meaningfully in decisions about their development, use and regulation?

I’m afraid that the habits of exaggerating the benefits of new technologies and minimizing their unwanted down sides are with us still…

Technology innovation is increasingly dominated by large-scale commercial imperatives.  Over the past century, and ever more so since the 1980 Bayh-Dole Act (an attempt to spur innovation by allowing publicly funded researchers to profit from their work), innovators have become scientist-entrepreneurs, and universities something akin to corporate incubators.

Commercial dynamics have become particularly influential in the biosciences. It’s hard to imagine any scientist today responding as Jonas Salk did in 1955, when he said with a straight face that “the people” own the polio vaccine. “There is no patent,” he told legendary news broadcaster Edward R. Murrow. “Could you patent the sun?”

Of course, entrepreneurial activity in technology and science often delivers important benefits. It can bring new discoveries and techniques to fruition quickly, and make them available rapidly. Some recent commercial technologies, most notably in digital communication and computing, are stunning indeed.

But how far have we come from the slogan of the 1933 World’s Fair? Technology developers still routinely present their plans either as “inevitable” or as crucial for economic growth. As for the rest of us, we have few opportunities to deliberate – especially as citizens, but also as consumers – about the risks as well as the benefits of technology innovations. Twenty-first century societies and communities too often wind up conforming to new technologies rather than finding ways to shape their goals and direction…

Georgia Miller, who coordinates Friends of the Earth Australia’s Nanotechnology Project, wrote an essay “Beyond safety: some bigger questions about new technologies” for the series.  The essay includes the following:

The promise that a given new technology will deliver environmentally benign electricity too cheap to meter, end hunger and poverty, or cure disease is very seductive. That is why the claims are made with many emerging technologies – nuclear power, biotechnology and nanotechnology, to name a few.

However history shows that such optimistic predictions are never achieved in reality. In addition to benefits, new technologies come with social, economic and environmental costs, and sometimes significant political implications.

Still, when it comes to public communication or policy making about nanotechnology, we’re often presented with the limited notion of weighing up predicted ‘benefits’ versus ‘risks’…

This framing ignores the broader costs and transformative potential of new technologies. It suggests that if we can only make nanotechnology ‘safe’, its development will necessarily deliver wealth, health, social opportunities and even environmental gains.

Ensuring technology safety is clearly very important. But simply assuming that ‘safe’ technology will deliver nothing but benefits, and that these benefits will be available to everyone, is – to put it mildly – quite optimistic.

To evaluate whether or not new technologies will help or hinder efforts to address the great ecological and social challenges of our time, we need to dig a little deeper…

Our experience also teaches us that environmentally or socially promising technologies will not necessarily be adopted, especially if they challenge the status quo. The government of Australia, one of the sunniest countries on earth, has pledged billions of dollars to cushion the coal industry from the effects of a proposed carbon trading system, while offering scant support to the fledgling solar energy sector.

There is a tendency to focus on the potential of new technologies to address our most pressing problems, rather than to seek better deployment of existing technologies, better design of existing systems, or changes in production and consumption. This reflects a preference to avoid systemic change. It also reflects an unfounded optimism that the ‘solution’ lies just over the horizon.

But sometimes ensuring better deployment of existing technologies is the most effective way to deal with a problem. Just as wider accessibility of existing drugs and medical treatments could prevent a huge number of deaths world-wide, improving urban storm water harvesting and re-use, housing insulation and mass transit public transport could go a long way to reducing our ecological footprint – potentially at a lower cost and at lower risk than mooted high tech options.

If evaluating the implementation or performance failures of previous technologies reveals economic or social obstacles or constraints, it’s probably these factors that warrant our attention. There is no reason to believe they will magically disappear once new technologies arrive…

Geoff Tansey of the Food Ethics Council weighs into the debate with his essay, “Innovation for a well-fed world – what role for technology?”

Andrew [Maynard] posed the question, “How should technology innovation contribute to life in the 21st century?”

For me, working on creating a well-fed world, the short answer is: in a way that supports a diverse, fair and sustainable food system in which everyone, everywhere can eat a healthy safe, culturally appropriate diet. For that to happen, we need a change of direction in which the key innovations needed are social, economic and political, not technological. And the question is:  what kind of technology, developed by whom, for whom, will help; who has what power to decide on what to do and to control it, who carries the risks and gets the benefits.

Take the debate on GM technology, for example. We in the Food Ethics Council … argue that instead of asking, ‘how can GM technology help secure global food supplies’, we need to ask ‘what can be done – by scientists but also by others – to help the world’s hungry?’…

Remember, too, that you do not have to have a correct scientific understanding of something to develop technologies that work, but sometimes we need a revolution in the history of science to conceive of new ways of engineering things – from Einstein’s insight that matter could be converted to energy, and Watson and Crick’s discovery of DNA and our understanding that life – and information – is digital and can be manipulated and re-engineered as such. That leads to new technological possibilities, as does nano-tech and synthetic biology – but all new technologies are generally over-hyped and invariably have unintended consequences. Indeed, global warming is the unintended consequence of a fossil-fuel driven industrial revolution…

In her essay, “Stop and Think: A Luddite Perspective“, Jennifer Sass, Senior Scientist at the Natural Resources Defense Council, makes some pained comments about technology and progress, before raising some specific concerns about nanotechnology:

Is there a role for technology in progressive social movements? Sure.

It wasn’t until the mechanization of cotton harvesting in the 1980’s that Missouri enacted compulsory education laws. New technology meant children were no longer needed in the field.

Lead wasn’t forced out of auto fuel when it was shown to destroy kid’s brains (known by the 1920s). It was removed when it was found to destroy catalytic converters introduced in the mid-1970’s. Technology not only saved future generations from leaded gasoline, but it reduced other harmful pollution from auto exhaust.

Nano-scale chemicals, intentionally designed to take advantage of unique properties at the small scale, are already offering social benefits, but at what costs?

Traditional treatment of hazardous waste sites is predominantly done with technologies such as carbon adsorption, chemical precipitation, filtration, steam, or bioremediation. Nanoremediation (can you believe there is already a new word for this?) can mean treatment with nanoscale metal oxides, carbon nanotubes, enzymes, or the already popular nanoscale zero-valent iron. The advantage is that the nano particles are more chemically-reactive and so may be designed to be more effective with less material…

But, what happens to the nanoparticles in the treated groundwater once they’ve completed their intended task? Do they just go away? Poof?

Carbon nanotubes are 100 times stronger than steel and six times lighter. Research to weave them into protective clothing is already underway, although nothing is on the market yet. Wearing a nano-carbon vest could make our soldiers bullet-proof, stab-proof, and still be light-weight.

But, what happens when the nanotubes are freed from the material, such as during the manufacturing of the textiles, fabrication of the clothing, or when it is damaged or destroyed in an explosion? Breathable nanotubes can be like asbestos fibers, causing deadly lung diseases.

If nano-scale elements are used extensively in electronics and computers, does this mean that most of the hazardous exposures associated with manufacturing and end-of-life stripping will fall to workers in the global south, whereas most of the advantages of improved technology will be reaped by the global north?

I’m not against new technologies per se. In fact, as a scientist I favor innovation. I love cool new stuff. But, will it make jobs more hazardous? Will it contaminate the environment? Will it contribute to social and economic injustices by distributing the risks and benefits unequally?…

Richard Owen, Chair in Environmental Risk Assessment at the University of Westminster, and Co-ordinator of the UK Environmental Nanoscience Initiative, raises some dark worries in his essay “A new era of responsible innovation“:

In 1956 one of my favourite films hit the big screen: a classic piece of science fiction called Forbidden Planet. It tells the story of a mission in the 23rd century to a distant planet, to find out what has happened to an earlier scientific expedition. On arrival the crew encounter the sole survivors, Dr Morbius and his daughter: the rest of the expedition has mysteriously disappeared. Morbius lives in a world of dazzling technology, the like of which the crew have never seen.

He had discovered the remnants of a highly advanced civilisation, the Krell, and an astonishing machine they had developed, the Plastic Educator. This could radically enhance their intellect, allowing them to materialise any thought, to develop new and wondrous technologies. Morbius had done the same. But something terrible had happened to the Krell: not only did the Plastic Educator develop their intellect, it also unwittingly heightened the darker sides of their subconscious minds, ‘Monsters from the Id’. In one night of savage destruction they were taken over by their own dark forces, leaving their advanced society extinct.

Now I’m not going to tell you how it ends; you’ll have to watch the film yourself. And it would be fanciful to say that we are heading for the same fate as the Krell. But it is fair to say that our relationship with innovation can at times be troublesome, with consequences that can on occasion be global in nature.

You may have heard for example of a clever financial innovation called ’securitisation’: you may also know that this has helped leave a legacy of toxic debt that all of us will play a part in cleaning up. This is dwarfed by the legacy that our relationship with fossil fuel burning technology will leave not only for our children, but also for their grandchildren. These examples show that it is important that we innovate, to drive our economy, to improve our lifestyles and wellbeing, to find solutions to the big issues we face – but it is critical that we innovate responsibly. And public demands to be responsible, to avoid excessive risks, go beyond banks: they also apply to research.

In his inaugural speech in January Barack Obama called for a ‘new era of responsibility’. I want to know what this new era will look like. For a number of years I worked for a regulator, the Environment Agency. I discovered that regulation is an incredibly powerful tool to promote responsible innovation, and there is no doubt that it will continue to play an important role. Development of policies and regulation, for new technologies for example, tends to be ‘evidence based’ – that is evidence is acquired to make the case for amending or bringing in new legislation, and here the research councils play an important role.

I’m fascinated by how this process works. Take for example nanotechnology, which has been described as the first industrial revolution of the 21st century. It’s small stuff, but big business, taking advantage of the fact that materials at the nanoscale (a billionth of a metre) can have fundamentally different properties compared to other (perhaps larger) forms of the same material. So while carbon nanotubes resemble tiny rolled-up sheets of graphite, they behave very differently – indeed, they have been called ‘the hottest thing in physics’.

Nanotechnology has a projected market value of many billions of pounds, potentially providing important solutions for renewable energy, healthcare, for the environment. But if these nanomaterials behave so differently, do they present greater risks, to the environment or to human health? If so, do they need to be regulated differently? How do we balance economic growth with preventing harm to people and the environment?…

I’m convinced there is a way to link innovation with responsibility more efficiently, to make it more anticipatory. And I’ve been struck by how willing and open the people I have worked with at NERC, EPSRC and ESRC have been to consider these approaches. Maybe there is a silver lining in the black cloud of the recent financial chaos; maybe we are learning that responsible innovation is sustainable innovation, that it’s a good thing, and that a commitment to it will help build resilient and responsible economies. Maybe Barack Obama was right, maybe we are about to enter a new era of responsibility. I hope so.

The final essay in the series is “21st Century Tech Governance? What would Ned Ludd do?“, by Jim Thomas of the ETC Group:

What if we could drag emerging technologies into a modern court of public deliberation and democratic oversight? What might that look like?

I’ve been turning over that question for about 15 years now while active in global debates on emerging technologies –  particularly GM Crops, Nanotechnology, Synthetic Biology and  Geo-engineering – debates in which I’ve encountered the term Luddite, meant as a slur, more times than I care to count. Language like this tumbles carelessly out of history .. but I find the parallels striking. Once again we are in the early phases of a new industrial revolution. Once again powerful technologies (Converging Technologies) are physically remaking and sometimes disintegrating our societies. Those  of us in civil society carrying out bit-part campaigns, issuing press releases and launching legal challenges are in a sense attempting to drag technology governance away from the darkness of narrow expert committees and into the sunny court of public deliberation for a broader hearing.. It seems a perfectly reasonable and democratic urge. But there’s got to be a better and more systematic way to do that?

So far I’ve found three sets of proposals that might begin to put technology oversight into the open and back in the hands of a wider public:

1.) Public Engagement: Citizens Juries, Knowledge exchanges, People’s Commissions…

2.) Global Oversight: ICENT.

ICENT stands for the International Convention for the Evaluation of New Technologies – a UN level body for foresighting emerging technology trends and then applying a wide-ranging assessment process that will consider the social, environmental and justice implications of the innovation being scrutinised. It doesn’t exist yet and maybe it never will but at ETC Group we have dedicated a lot of time to imagining what such a body could look like (we even have some nifty organagrams – see pg 36-40 of this). For example there would be bodies scanning the technological horizon and others making a rough reckoning of whether a new technology needed a strong oversight framework or not…

3.) Popular assessment: Technopedia?

The only governance and regulations that work are those where somebody is paying attention – so  rather than hide technology assessment in rarefied committees why not hand it to the wisdom of the crowds. Wikipedia may not be the most perfectly accurate source of all knowledge but it is comprehensive, up to date and flexible and provides an interesting model. Actually Wikipedia entries are often not a bad place to start if you want to suss out the societal and environmental issues raised by the zeitgeist regarding new technologies. How about a dedicated wiki site for collaborative monitoring and judging of emerging technologies? Such a site could be structured so that, unlike the halls of power, marginal voices have a space and are welcome…

It’s good to see this range of spirited and thoughtful contributions to the debate about the future of technological innovation.  Of course, this is just the tip of a very large iceberg of discussion, happening all over the Internet.  The really hard question, perhaps, is this: what is the optimal method and location for this debate?  Jim Thomas’ suggestion of a new wiki has some merit – provided it could become an authoritative and definitive wiki on emerging technologies, one that rises above the vast crowd of existing websites.  Is there already such a wiki in existence?

20 July 2009

An engaging family-friendly vision of the future

Filed under: books, cryonics, futurist — David Wood @ 7:59 pm

When I was around 11-15 years old, I devoured almost all the science fiction books in the local village library. The experience not only inspired me and stretched my imagination, but pre-disposed me to be open-minded about possible large impacts by technology on how life would be lived in the future.

Much of the technology that will have the biggest impact on the 21st century remains as yet undiscovered. Some of these discoveries will, presumably, be made by people who are currently still children. My hope is that these children will take interest in the kinds of ideas that permeate Shannon Vyff’s fine book “21st century kids: a trip from the future to you”.

The majority of the action in this book is set 180 years in the future – although there are several loop-backs to the present day. Here are just a few of the themes that are woven together in this fast-moving book:

  • Cryonic suspension, and the problems of eventual re-animation;
  • Brain implants, that enable a kind of telepathic communication;
  • Implications if human brains and human bodies could be dramatically improved;
  • Options for improving the brains of other animal species, even to the point of enabling rich communications between these creatures and humans;
  • Humans co-existing with self-aware robots and other AIs;
  • Friendly versus unfriendly AI;
  • Transferring human consciousness into robots (and back again);
  • Coping with the drawbacks of environmental degradation;
  • Future modes of manufacturing, transport, recreation, education, and religion;
  • Circumstances in which alien civilisations might take an active interest in developments on the Earth.

Adults can enjoy reading “21st century kids”, but there are parts of the book that speak more directly to children as the primary intended readership. Since I’ve long left my own adolescent days behind, I’m not able to fully judge the likely reactions of that target audience. My expectation is that many of them will find the contents engaging, thought-provoking, and exciting. It’s family-friendly throughout.

One unusual aspect of the book is that several of the main characters have the same names (and early life histories) as three of the author’s own children: Avianna, Avryn, and Avalyse. The author herself features in the book, as the (unnamed) “Mom”. I found this occasionally unsettling, but it adds to the book’s vividness and immediacy.

As regards the vision the book paints of the future, it’s certainly possible to take issue with some of the details. However, the bigger picture is that the book is sufficiently interesting that it is highly likely to provoke a lot of valuable debate and discussion. Hopefully it will stretch the imagination of many potential future technologists and engineers, and inspire them to keep an open mind about what innovative technology can accomplish.

10 April 2009

The future: neuroengineering and virtual minds

Filed under: books, futurist, neuroengineering — David Wood @ 8:25 pm

Because things have been so absorbing and demanding at work, during the setup phase of the Symbian Foundation, I’ve had little time over the last few months for a couple of activities that I usually greatly enjoy.

First, I’ve had little time to write articles for this blog (my personal blog). Any time and energy that I’ve had available for blogging has tended to go, instead, to postings in my work blog. For example, over the last fortnight I’ve written work-related postings entitled A new software journey, Collaboration at the heart, The first hardware reference design, Who wants to join a movement?, and Simpler and cleaner code. In principle, this blog here is for more personal reflections, and for matters removed from my day-to-day work responsibilities.

Second, I’ve had little time to read books. Last year, I probably finished on average at least one book and/or audio-book every two weeks. This year, so far, I’ve only made it to the end of one book: Darwin’s Cathedral: Evolution, Religion, and the Nature of Society, by David Sloan Wilson. (It’s a fine book, which is both intellectually challenging and intellectually satisfying, and which also happens to be very relevant to the ongoing debates over “the new atheism”. My review of it can be found on the LivingSocial site.)

However, earlier today, in the course of a long flight, I took the time to open a book I’ve been carrying with me on several previous trips, and I made a good start on it. From what I’ve read so far, it already seems clear to me that this is a tremendous piece of work, about a field that deserves a significant increase in attention. The author is Bruce F. Katz, adjunct professor at Drexel University, and Chief Artificial Intelligence Scientist at ColdLight. The book is Neuroengineering the future: virtual minds and the creation of immortality.

Wikipedia gives the following definition of the term “Neuroengineering”:

Neural engineering, also known as Neuroengineering, is a discipline that uses engineering techniques to understand, repair, replace, enhance, or treat the diseases of neural systems. Neural engineers are uniquely qualified to solve design problems at the interface of living neural tissue and non-living constructs… Prominent goals in the field include restoration and augmentation of human function via direct interactions between the nervous system and artificial devices.

That’s an ambitious set of goals, but Bruce sets out an even grander vision. To give a flavour, here’s an extract from the Preface of his book:

I am not the first, and certainly will not be the last, to stress the importance of coming developments in neural engineering. This field has all the hallmarks of a broad technological revolution, but larger in scope and with deeper tentacles than those accompanying both computers and the Internet…

To modify the brain is to modify not only how we perceive but what we are, our consciousnesses and our identities. The power to be able to do so cannot be over-stated, and the consequences can scarcely be imagined, especially with our current unmodified evolutionarily provided mental apparatuses…

Here are just a few topics that we will cover…

  1. Brain-machine interfaces to control computers, exoskeletons, robots, and other devices with thought alone;
  2. Mind-reading devices that will project the conscious contents of one’s brain onto a screen as if it was a movie;
  3. Devices to enhance intellectual ability and to increase concentration;
  4. Devices to enhance creativity and insight;
  5. Mechanisms to upload the mind to a machine, thus preserving it from bodily decay and bodily death.

Other writers have addressed these topics before – both in science fiction and in technology review books. But it looks to me that Bruce brings a greater level of rigour and a wider set of up-to-date research information. To continue quoting from the Preface:

The book is divided into three sections:

  1. The first develops the neurophysiological as well as philosophical foundations on which these advances may be made;
  2. The second describes the current state of the art, and neuroengineering developments that will be with us in the near term;
  3. The final part of the book speculates on what will happen in the long-term, and what it will be like to be a post-evolutionary entity…

The futurist will naturally be drawn to the final section, but in their case it is all the more imperative that the initial development be mastered, especially the chapters with a philosophical bent. The uploading of the soul to a machine is not just a matter of creating the proper technology; it is first and foremost figuring out what it means to have a soul…

As an unabashed futurist, I’m greatly looking forward to finding more time (somehow!) to read further into this book!
