dw2

28 December 2009

Wired’s top 7 mobile disruptions of 2009

Filed under: change, disruption, futurist — David Wood @ 9:00 pm

Wired.com today provide their list of the “top 7 disruptions” for mobile in 2009:

  1. Google Stack
  2. Mobile App Stores
  3. HTML5
  4. A New FCC
  5. Streaming Music
  6. The Real-Time Web
  7. Augmented Reality

You can read the details on Wired.com.  It’s a pretty good list: all the items included are important.

To nitpick, it’s not clear that they all count as “disruptive” rather than “evolutionary”.  And it seems at least some of the items are on the list because of what they’ll accomplish in 2010 rather than in 2009.   Never mind.

Wired asks: “What did we miss?”

With the same two provisos as before, I offer four additional candidates for inclusion:

1.) Mobile maps

Mobile maps seem to be getting better and better, and to be used more and more widely.  With 3D as well as 2D, with improved routing, and with plug-in integration from numerous third-party apps and services, this trend is likely to continue.

2.) Mobile payments

From the perspective of the so-called developed world, use of mobile phones in payment transactions still seems relatively unexciting.  But from the perspective of the developing world – in countries where bank accounts and credit cards are comparatively scarce – mobile payments are already making a decisive difference.

3.) User Experience

For a while, technologists could tell themselves that good user experience (UX) was an optional extra, necessary for some mobile products but not for all.  This view is fading fast.  It’s now clear that users have become aware that good UX is possible on mobile – even for complex services – and they take an increasingly dim view of any mobile product that scores weakly on UX.

4.) Open Source

It’s still early days for people to see the benefits of applying open source methods to creating mobile tools, applications, services, and middleware, and (last but far from least) to improving the underlying platform.  But the transformational potential is enormous – both in the improvements that end users will notice, and in the skillsets best suited to take advantage of the new innovation engine.  For further discussion of these points, see my recent presentation “Open ecosystems: a good thing?” (PDF) to the Cambridge Wireless network.

Alternative perspectives on 21C technology innovation and life

Filed under: futurist, innovation — David Wood @ 6:26 pm

While browsing through Andrew Maynard’s stimulating blog “2020 science: Providing a clear perspective on developing science and technology responsibly”, I’ve run across a fascinating series of articles, authored by 10 different guest bloggers, entitled

Technology innovation, life, and the 21st century – ten alternative perspectives

Andrew introduces the series as follows:

Life in the 21st century is going to depend increasingly on technology innovation.  Governments believe this.  Industry believes this.  Scientists believe this.  But is technology innovation really the solution to every challenge facing humanity, or have we got hooked on an innovation habit so deeply we don’t even see the addiction?  And even if it is important – essential even – who decides which innovations are nurtured and how they are used?

I must confess I’m a staunch believer in the importance of technology innovation.  But I was reminded recently that not everyone sees the world in the same way, and that there are very different but equally valid perspectives on how science and technology should be used within society.

As a result, I decided to commission ten guest blogs on technology innovation from people working for, associated with or generally reflecting the views of Civic Society groups.  The aim was twofold – to expose readers to perspectives on technology innovation that are sometimes drowned out in mainstream conversations, and to give a sense of the breadth of opinions and perspectives that are often lumped under the banners of “civic society” or “Non Government Organizations”…

Before I pull out some comments from these ten guest blogs, let me declare my own bias.

Briefly, it is my view that

  • Humanity in the 21st century is facing both enormous challenges and enormous opportunities;
  • Wise application of technology is the factor that will make the single biggest difference to successfully addressing these challenges and opportunities;
  • If we get things right, human experience all over the world in just a few decades’ time will be very substantially better than it is today;
  • Technology can be accelerated by commercial factors, such as the operation of free markets, but these forces need review, supervision, and regulation, to increase the likelihood that the outcomes are truly beneficial;
  • Technology can also be accelerated by supporting, educating, encouraging, inspiring, and enabling larger numbers of skilled engineers worldwide, to work in open and collaborative ways;
  • At the same time as we involve more people in the development of technology, we should be involving more people in informed open deep debates about the management of the development of technology.

In other words, I am a strong optimist about the joint potential for technology, markets, engineers, and open collaboration, but I deeply fear allowing any one of these factors to become overly dominant.

Any view, for example, that “markets are always right” or that “technology will always be beneficial”, is a dangerous simplification.  The application of technology will only be “wise” (to return to the word I used above) if the powerful engines of market economics and technology development are overseen and directed by suitable review bodies, acting on behalf of society as a whole.  The manner of operation of these review bodies, in turn, needs to be widely debated.  In these debates, there can be nothing sacrosanct or above question.

The essays in the guest series on the 2020 science blog make some good contributions to these debates.   I don’t agree with all the points made, but the points all deserve a hearing.

The series starts with an essay “Biopolitics for the 21st Century” written by Marcy Darnovsky, Associate Executive Director of the Center for Genetics and Society.  Here are some extracts:

One challenge we face … is a tendency toward over-enthusiasm about prospective technologies. Another is the entanglement of technology innovation and commercial dynamics. Neither of these is brand new.

Back in the last century, the 1933 Chicago World’s Fair took “technological innovation” as its theme and “A Century of Progress” as its formal name. Its official motto was “Science Finds, Industry Applies, Man Conforms.” The slogan shamelessly depicts “science” and “industry” as dictator – or at least drill sergeant – of humanity. It anoints industrial science as a rightful decision-maker about human ends, and an inevitable purveyor of societal uplift.

Today the 1933 World’s Fair slogan seems altogether crass. But have we earned our cringe? We’d like to think that we’re more realistic about science and technology innovations. We want to believe that, in some collective sense, we’re in control of their broad direction. But are we less giddy about the techno-future now than we were back then?  Does technology innovation now serve human needs rather than the imperatives of commerce? Have we devised social and cultural innovations for shaping new technologies – do we have robust democratic mechanisms that encourage citizens and communities to participate meaningfully in decisions about their development, use and regulation?

I’m afraid that the habits of exaggerating the benefits of new technologies and minimizing their unwanted down sides are with us still…

Technology innovation is increasingly dominated by large-scale commercial imperatives.  Over the past century, and ever more so since the 1980 Bayh-Dole Act (an attempt to spur innovation by allowing publicly funded researchers to profit from their work), innovators have become scientist-entrepreneurs, and universities something akin to corporate incubators.

Commercial dynamics have become particularly influential in the biosciences. It’s hard to imagine any scientist today responding as Jonas Salk did in 1955, when he said with a straight face that “the people” own the polio vaccine. “There is no patent,” he told legendary news broadcaster Edward R. Murrow. “Could you patent the sun?”

Of course, entrepreneurial activity in technology and science often delivers important benefits. It can bring new discoveries and techniques to fruition quickly, and make them available rapidly. Some recent commercial technologies, most notably in digital communication and computing, are stunning indeed.

But how far have we come from the slogan of the 1933 World’s Fair? Technology developers still routinely present their plans either as “inevitable” or as crucial for economic growth. As for the rest of us, we have few opportunities to deliberate – especially as citizens, but also as consumers – about the risks as well as the benefits of technology innovations. Twenty-first century societies and communities too often wind up conforming to new technologies rather than finding ways to shape their goals and direction…

Georgia Miller, who coordinates Friends of the Earth Australia’s Nanotechnology Project, wrote an essay “Beyond safety: some bigger questions about new technologies” for the series.  The essay includes the following:

The promise that a given new technology will deliver environmentally benign electricity too cheap to meter, end hunger and poverty, or cure disease is very seductive. That is why the claims are made with many emerging technologies – nuclear power, biotechnology and nanotechnology, to name a few.

However history shows that such optimistic predictions are never achieved in reality. In addition to benefits, new technologies come with social, economic and environmental costs, and sometimes significant political implications.

Still, when it comes to public communication or policy making about nanotechnology, we’re often presented with the limited notion of weighing up predicted ‘benefits’ versus ‘risks’…

This framing ignores the broader costs and transformative potential of new technologies. It suggests that if we can only make nanotechnology ‘safe’, its development will necessarily deliver wealth, health, social opportunities and even environmental gains.

Ensuring technology safety is clearly very important. But simply assuming that ‘safe’ technology will deliver nothing but benefits, and that these benefits will be available to everyone, is – to put it mildly – quite optimistic.

To evaluate whether or not new technologies will help or hinder efforts to address the great ecological and social challenges of our time, we need to dig a little deeper…

Our experience also teaches us that environmentally or socially promising technologies will not necessarily be adopted, especially if they challenge the status quo. The government of Australia, one of the sunniest countries on earth, has pledged billions of dollars to cushion the coal industry from the effects of a proposed carbon trading system, while offering scant support to the fledgling solar energy sector.

There is a tendency to focus on the potential of new technologies to address our most pressing problems, rather than to seek better deployment of existing technologies, better design of existing systems, or changes in production and consumption. This reflects a preference to avoid systemic change. It also reflects an unfounded optimism that the ‘solution’ lies just over the horizon.

But sometimes ensuring better deployment of existing technologies is the most effective way to deal with a problem. Just as wider accessibility of existing drugs and medical treatments could prevent a huge number of deaths world-wide, improving urban storm water harvesting and re-use, housing insulation and mass transit public transport could go a long way to reducing our ecological footprint – potentially at a lower cost and at lower risk than mooted high tech options.

If evaluating the implementation or performance failures of previous technologies reveals economic or social obstacles or constraints, it’s probably these factors that warrant our attention. There is no reason to believe they will magically disappear once new technologies arrive…

Geoff Tansey of the Food Ethics Council weighs into the debate with his essay, “Innovation for a well-fed world – what role for technology?”

Andrew [Maynard] posed the question, “How should technology innovation contribute to life in the 21st century?”

For me, working on creating a well-fed world, the short answer is: in a way that supports a diverse, fair and sustainable food system in which everyone, everywhere can eat a healthy, safe, culturally appropriate diet. For that to happen, we need a change of direction in which the key innovations needed are social, economic and political, not technological. And the question is:  what kind of technology, developed by whom, for whom, will help; who has what power to decide on what to do and to control it, who carries the risks and gets the benefits.

Take the debate on GM technology, for example. We in the Food Ethics Council … argue that instead of asking, ‘how can GM technology help secure global food supplies’, we need to ask ‘what can be done – by scientists but also by others – to help the world’s hungry?’…

Remember, too, that you do not have to have a correct scientific understanding of something to develop technologies that work, but sometimes we need a revolution in the history of science to conceive of new ways of engineering things – from Einstein’s insight that matter could be converted to energy, and Watson and Crick’s discovery of DNA and our understanding that life – and information – is digital and can be manipulated and re-engineered as such. That leads to new technological possibilities, as does nano-tech and synthetic biology – but all new technologies are generally over-hyped and invariably have unintended consequences. Indeed, global warming is the unintended consequence of a fossil-fuel driven industrial revolution…

In her essay, “Stop and Think: A Luddite Perspective”, Jennifer Sass, Senior Scientist at the Natural Resources Defense Council, makes some pained comments about technology and progress, before raising some specific concerns about nanotechnology:

Is there a role for technology in progressive social movements? Sure.

It wasn’t until the mechanization of cotton harvesting in the 1980’s that Missouri enacted compulsory education laws. New technology meant children were no longer needed in the field.

Lead wasn’t forced out of auto fuel when it was shown to destroy kid’s brains (known by the 1920s). It was removed when it was found to destroy catalytic converters introduced in the mid-1970’s. Technology not only saved future generations from leaded gasoline, but it reduced other harmful pollution from auto exhaust.

Nano-scale chemicals, intentionally designed to take advantage of unique properties at the small scale, are already offering social benefits, but at what costs?

Traditional treatment of hazardous waste sites is predominantly done with technologies such as carbon adsorption, chemical precipitation, filtration, steam, or bioremediation. Nanoremediation (can you believe there is already a new word for this?) can mean treatment with nanoscale metal oxides, carbon nanotubes, enzymes, or the already popular nanoscale zero-valent iron. The advantage is that the nano particles are more chemically-reactive and so may be designed to be more effective with less material…

But, what happens to the nanoparticles in the treated groundwater once they’ve completed their intended task? Do they just go away? Poof?

Carbon nanotubes are 100 times stronger than steel and six times lighter. Research to weave them into protective clothing is already underway, although nothing is on the market yet. Wearing a nano-carbon vest could make our soldiers bullet-proof, stab-proof, and still be light-weight.

But, what happens when the nanotubes are freed from the material, such as during the manufacturing of the textiles, fabrication of the clothing, or when it is damaged or destroyed in an explosion? Breathable nanotubes can be like asbestos fibers, causing deadly lung diseases.

If nano-scale elements are used extensively in electronics and computers, does this mean that most of the hazardous exposures associated with manufacturing and end-of-life stripping will fall to workers in the global south, whereas most of the advantages of improved technology will be reaped by the global north?

I’m not against new technologies per se. In fact, as a scientist I favor innovation. I love cool new stuff. But, will it make jobs more hazardous? Will it contaminate the environment? Will it contribute to social and economic injustices by distributing the risks and benefits unequally?…

Richard Owen, Chair in Environmental Risk Assessment at the University of Westminster, and Co-ordinator of the UK Environmental Nanoscience Initiative, raises some dark worries in his essay “A new era of responsible innovation”:

In 1956 one of my favourite films hit the big screen: a classic piece of science fiction called Forbidden Planet. It tells the story of a mission in the 23rd century to a distant planet, to find out what has happened to an earlier scientific expedition. On arrival the crew encounter the sole survivors, Dr Morbius and his daughter: the rest of the expedition has mysteriously disappeared. Morbius lives in a world of dazzling technology, the like of which the crew have never seen.

He had discovered the remnants of a highly advanced civilisation, the Krell, and an astonishing machine they had developed, the Plastic Educator. This could radically enhance their intellect, allowing them to materialise any thought, to develop new and wondrous technologies. Morbius had done the same. But something terrible had happened to the Krell: not only did the Plastic Educator develop their intellect, it also unwittingly heightened the darker sides of their subconscious minds, ‘Monsters from the Id’. In one night of savage destruction they were taken over by their own dark forces, leaving their advanced society extinct.

Now I’m not going to tell you how it ends; you’ll have to watch the film yourself. And it would be fanciful to say that we are heading for the same fate as the Krell. But it is fair to say that our relationship with innovation can at times be troublesome, with consequences that can on occasion be global in nature.

You may have heard for example of a clever financial innovation called ‘securitisation’: you may also know that this has helped leave a legacy of toxic debt that all of us will play a part in cleaning up. This is dwarfed by the legacy that our relationship with fossil fuel burning technology will leave not only for our children, but also for their grandchildren. These examples show that it is important that we innovate, to drive our economy, to improve our lifestyles and wellbeing, to find solutions to the big issues we face – but it is critical that we innovate responsibly. And public demands to be responsible, to avoid excessive risks, go beyond banks: they also apply to research.

In his inaugural speech in January Barack Obama called for a ‘new era of responsibility’. I want to know what this new era will look like. For a number of years I worked for a regulator, the Environment Agency. I discovered that regulation is an incredibly powerful tool to promote responsible innovation, and there is no doubt that it will continue to play an important role. Development of policies and regulation, for new technologies for example, tends to be ‘evidence based’ – that is evidence is acquired to make the case for amending or bringing in new legislation, and here the research councils play an important role.

I’m fascinated by how this process works. Take for example nanotechnology, which has been described as the first industrial revolution of the 21st century. It’s small stuff, but big business, taking advantage of the fact that materials at the nanoscale (a billionth of a metre) can have fundamentally different properties compared to other (perhaps larger) forms of the same material. So while carbon nanotubes resemble tiny rolled-up sheets of graphite, they behave very differently – indeed, they have been called ‘the hottest thing in physics’.

Nanotechnology has a projected market value of many billions of pounds, potentially providing important solutions for renewable energy, healthcare, for the environment. But if these nanomaterials behave so differently, do they present greater risks, to the environment or to human health? If so, do they need to be regulated differently? How do we balance economic growth with preventing harm to people and the environment?…

I’m convinced there is a way to link innovation with responsibility more efficiently, to make it more anticipatory. And I’ve been struck by how willing and open the people I have worked with at NERC, EPSRC and ESRC have been to consider these approaches. Maybe there is a silver lining in the black cloud of the recent financial chaos; maybe we are learning that responsible innovation is sustainable innovation, that it’s a good thing, and that a commitment to it will help build resilient and responsible economies. Maybe Barack Obama was right, maybe we are about to enter a new era of responsibility. I hope so.

The final essay in the series is “21st Century Tech Governance? What would Ned Ludd do?”, by Jim Thomas of the ETC Group:

What if we could drag emerging technologies into a modern court of public deliberation and democratic oversight. What might that look like?

I’ve been turning over that question for about 15 years now while active in global debates on emerging technologies – particularly GM Crops, Nanotechnology, Synthetic Biology and Geo-engineering – debates in which I’ve encountered the term Luddite, meant as a slur, more times than I care to count. Language like this tumbles carelessly out of history… but I find the parallels striking. Once again we are in the early phases of a new industrial revolution. Once again powerful technologies (Converging Technologies) are physically remaking and sometimes disintegrating our societies. Those of us in civil society carrying out bit-part campaigns, issuing press releases and launching legal challenges are in a sense attempting to drag technology governance away from the darkness of narrow expert committees and into the sunny court of public deliberation for a broader hearing… It seems a perfectly reasonable and democratic urge. But there’s got to be a better and more systematic way to do that?

So far I’ve found three sets of proposals that might begin to put technology oversight into the open and back in the hands of a wider public:

1.) Public Engagement: Citizens Juries, Knowledge exchanges, People’s Commissions…

2.) Global Oversight: ICENT.

ICENT stands for the International Convention for the Evaluation of New Technologies – a UN level body for foresighting emerging technology trends and then applying a wide-ranging assessment process that will consider the social, environmental and justice implications of the innovation being scrutinised. It doesn’t exist yet and maybe it never will but at ETC Group we have dedicated a lot of time to imagining what such a body could look like (we even have some nifty organagrams – see pg 36-40 of this). For example there would be bodies scanning the technological horizon and others making a rough reckoning of whether a new technology needed a strong oversight framework or not…

3.) Popular assessment : Technopedia?

The only governance and regulations that work are those where somebody is paying attention – so rather than hide technology assessment in rarefied committees, why not hand it to the wisdom of the crowds? Wikipedia may not be the most perfectly accurate source of all knowledge but it is comprehensive, up to date and flexible and provides an interesting model. Actually Wikipedia entries are often not a bad place to start if you want to suss out the societal and environmental issues raised by the zeitgeist regarding new technologies. How about a dedicated wiki site for collaborative monitoring and judging of emerging technologies? Such a site could be structured so that, unlike the halls of power, marginal voices have a space and are welcome…

It’s good to see this range of spirited and thoughtful contributions to the debate about the future of technological innovation.  Of course, this is just the tip of a very large iceberg of discussion, happening all over the Internet.  The really hard question, perhaps, is: what is the optimal method and location for this debate?  Jim Thomas’ suggestion of a new wiki has some merit – provided it could become an authoritative and definitive wiki on emerging technologies, one that rises above the vast crowd of existing websites. Is there already such a wiki in existence?

Ten emerging technology trends to watch in the 2010’s

Filed under: AGI, nanotechnology, vision — David Wood @ 12:38 pm

On his “2020 science” blog, Andrew Maynard of the Woodrow Wilson International Center for Scholars has published an excellent article, “Ten emerging technology trends to watch over the next decade”, that’s well worth reading.

To whet appetites, here’s his list of the ten emerging technologies:

  1. Geoengineering
  2. Smart grids
  3. Radical materials
  4. Synthetic biology
  5. Personal genomics
  6. Bio-interfaces
  7. Data interfaces
  8. Solar power
  9. Nootropics
  10. Cosmeceuticals

For the details, head over to the original article.

I see Andrew’s article as a more thorough listing of what I tried to cover in my own recent article, Predictions for the decade ahead, where I wrote:

We can say, therefore, that the 2010’s will be the decade of nanotechnology and AI.

Neither “nanotechnology” nor “AI” appears in Andrew’s list.  Here’s what he has to say about nanotechnology:

Nanotech has been a dominant emerging technology over the past ten years.  But in many ways, it’s a fake.  Advances in the science of understanding and manipulating matter at the nanoscale are indisputable, as are the early technology outcomes of this science.  But nanotechnology is really just a convenient shorthand for a whole raft of emerging technologies that span everything from semiconductors to sunscreens, and often share nothing more than an engineered structure that is somewhere between 1 and 100 nanometers in scale.  So rather than focus on nanotech, I decided to look at specific technologies which I think will make a significant impact over the next decade.  Perhaps not surprisingly though, many of them depend in some way on working with matter at nanometer scales.

I think we are both right 🙂

Regarding AI, Andrew’s comments under the heading “Data interfaces” cover some of what I had in mind:

The amount of information available through the internet has exploded over the past decade.  Advances in data storage, transmission and processing have transformed the internet from a geek’s paradise to a supporting pillar of 21st century society.  But while the last ten years have been about access to information, I suspect that the next ten will be dominated by how to make sense of it all.  Without the means to find what we want in this vast sea of information, we are quite literally drowning in data.  And useful as search engines like Google are, they still struggle to separate the meaningful from the meaningless.  As a result, my sense is that over the next decade we will see some significant changes in how we interact with the internet.  We’re already seeing the beginnings of this in websites like Wolfram Alpha that “computes” answers to queries rather than simply returning search hits,  or Microsoft’s Bing, which helps take some of the guesswork out of searches.  Then we have ideas like The Sixth Sense project at the MIT Media Lab, which uses an interactive interface to tap into context-relevant web information.  As devices like phones, cameras, projectors, TV’s, computers, cars, shopping trolleys, you name it, become increasingly integrated and connected, be prepared to see rapid and radical changes in how we interface with and make sense of the web.

It looks like there’s lots of other useful material on the same blog.  I particularly like its subtitle “Providing a clear perspective on developing science and technology responsibly”.

Hat tip to @vangeest for the pointer!

24 December 2009

Predictions for the decade ahead

Before highlighting some likely key trends for the decade ahead – the 2010’s – let’s pause a moment to review some of the most important developments of the last ten years.

  • Technologically, the 00’s were characterised by huge steps forward with social computing (“web 2.0”) and with mobile computing (smartphones and more);
  • Geopolitically, the biggest news has been the ascent of China to becoming the world’s #2 superpower;
  • Socioeconomically, the world is reaching a deeper realisation that current patterns of consumption cannot be sustained (without major changes), and that the foundations of free-market economics are more fragile than was previously widely thought to be the case;
  • Culturally and ideologically, the threat of militant Jihad, potentially linked to dreadful weaponry, has given the world plenty to think about.

Looking ahead, the 10’s will very probably see the following major developments:

  • Nanotechnology will progress in leaps and bounds, enabling increasingly systematic control, assembling, and reprogramming of matter at the molecular level;
  • In parallel, AI (artificial intelligence) will rapidly become smarter and more pervasive, and will be manifest in increasingly intelligent robots, electronic guides, search assistants, navigators, drivers, negotiators, translators, and so on.

We can say, therefore, that the 2010’s will be the decade of nanotechnology and AI.

We’ll see the following applications of nanotechnology and AI:

  • Energy harvesting, storage, and distribution (including via smart grids) will be revolutionised;
  • Reliance on existing means of oil production will diminish, as oil is replaced by greener energy sources, such as next-generation solar power;
  • Synthetic biology will become increasingly commonplace – newly designed living cells and organisms that have been crafted to address human, social, and environmental need;
  • Medicine will provide more and more new forms of treatment that are less invasive and more comprehensive than before, using compounds closely tailored to the specific biological needs of individual patients;
  • Software-as-a-service, provided via next-generation cloud computing, will become more and more powerful;
  • Experience of virtual worlds – for the purposes of commerce, education, entertainment, and self-realisation – will become extraordinarily rich and stimulating;
  • Individuals who can make wise use of these technological developments will end up significantly cognitively enhanced.

In the world of politics, we’ll see more leaders who combine toughness with openness and a collaborative spirit.  The awkward international institutions from the 00’s will either reform themselves, or be superseded by newer, more informal, more robust and effective institutions that draw a lot of inspiration from emerging best practice in open source and social networking.

But perhaps the most important change is one I haven’t mentioned yet.  It’s a growing change of attitude towards the question of the role of technology in enabling fuller human potential.

Instead of people decrying “technical fixes” and “loss of nature”, we’ll increasingly hear widespread praise for what can be accomplished by thoughtful development and deployment of technology.  As technology is seen to be able to provide unprecedented levels of health, vitality, creativity, longevity, autonomy, and all-round experience, society will demand a reprioritisation of resource allocation.  Previously sacrosanct cultural norms will fall under intense scrutiny, and many age-old beliefs and practices will fade away.  Young and old alike will move to embrace these more positive and constructive attitudes towards technology, human progress, and a radical reconsideration of how human potential can be fulfilled.

By the way, there’s a name for this mental attitude.  It’s “transhumanism”, often abbreviated H+.

My conclusion, therefore, is that the 2010’s will be the decade of nanotechnology, AI, and H+.

As for the question of which countries (or regions) will play the role of superpowers in 2020: it’s too early to say.

Footnote: Of course, there are major possible risks from the deployment of nanotechnology and AI, as well as major possible benefits.  Discussion of how to realise the benefits without falling foul of the risks will be a major feature of public discourse in the decade ahead.

How markets fail – part two

Filed under: books, Economics, market failure, regulation — David Wood @ 2:46 am

Free markets have been a tremendous force for progress.  However, they need oversight and regulation.  Lack of appreciation of this point is the fundamental cause of the Great Crunch that the world financial systems recently experienced.  That’s the essential message of the important book by the New Yorker journalist John Cassidy, “How markets fail: the logic of economic calamities”.

I call this book “important” because it contains a sweeping but compelling survey of a notion Cassidy dubs “Utopian economics”, before providing layer after layer of decisive critique of that notion.  As such, the book provides a very useful (if occasionally drawn out) guide to the history of economic thinking, covering Adam Smith, Friedrich Hayek, Milton Friedman, John Maynard Keynes, Arthur Pigou, Hyman Minsky, and many, many others.

The key theme in the book is that markets do fail from time to time, potentially in disastrous ways, and that some element of government oversight and intervention is both critical and necessary, to avoid calamity.  This theme is hardly new, but many people resist it, and the book has the merit of marshalling the arguments more comprehensively than I have seen elsewhere.

As Cassidy describes it, “utopian economics” is the widespread view that the self-interest of individuals and agencies, allowed to express itself via a free market economy, will inevitably produce results that are good for the whole economy.  The book starts with eight chapters that sympathetically outline the history of thinking about utopian economics.  Along the way, he regularly points out instances when free market champions nevertheless described cases when government intervention and control was required.  For example, referring to Adam Smith, Cassidy writes:

Smith and his successors … believed that the government had a duty to protect the public from financial swindles and speculative panics, which were both common in 18th and 19th century Britain…

To prevent a recurrence of credit busts, Smith advocated preventing banks from issuing notes to speculative lenders.  “Such regulations may, no doubt, be considered as in some respects a violation of natural liberty”, he wrote.  “But these exertions of the natural liberty of a few individuals, which might endanger the security of the whole society, are, and ought to be, restrained by the laws of all governments…  The obligation of building party walls [between adjacent houses], in order to prevent the communication of a fire, is a violation of natural liberty, exactly of the same kind with the regulations of the banking trade which are here proposed.”

The book identifies long-time Federal Reserve chairman Alan Greenspan as one of the villains of the Great Crunch.  Near the beginning of the book, Cassidy quotes a reply given by Greenspan to the question “Were you wrong?”, asked of him in October 2008 by the US House Committee on Oversight and Government Reform:

“I made a mistake in presuming that the self-interest of organizations, specifically banks and others, were such that they were best capable of protecting their own shareholders and their equity in the firms…”

Greenspan was far from alone in his belief in the self-correcting power of economies in which self-interest is allowed to flourish.  There were many reasons for people to hold that belief.  It appeared to be justified both theoretically and empirically.  As Greenspan remarked,

“I had been going for forty years, or more, with very considerable evidence that it was working exceptionally well.”

Cassidy devotes another eight chapters to reviewing the history of criticisms of utopian economics.  This part of the book is entitled “Reality-based economics”.  It is full of fascinating and enlightening material, covering topics such as:

  • game theory (“the prisoner’s dilemma”),
  • behavioural economics (pioneered by Daniel Kahneman and Amos Tversky) – including disaster myopia,
  • problems of spillovers and externalities (such as pollution) – which can only be fully addressed by centralised collective action,
  • drawbacks of hidden information and the failure of “price signalling”,
  • loss of competitiveness when monopoly conditions are approached,
  • flaws in banking risk management policies (which drastically under-estimated the consequences of larger deviations from “business as usual”),
  • problems with asymmetric bonus structure,
  • and the perverse psychology of investment bubbles.

In summary, Cassidy lists four “illusions” of utopian economics:

  1. The illusion of harmony: that free markets always generate good outcomes;
  2. The illusion of stability: that a free market economy is sturdy;
  3. The illusion of predictability: that the distribution of returns can be foreseen;
  4. The illusion of Homo Economicus: that individuals are rational and act on perfect information.

The common theme of this section is that of “rational irrationality”: circumstances in which it is rational for people to choose courses of action that end up producing a bad outcome for society as a whole.  You can read more about “rational irrationality” in a recent online New Yorker article of the same name, written by Cassidy:

A number of explanations have been proposed for the great boom and bust, most of which focus on greed, overconfidence, and downright stupidity on the part of mortgage lenders, investment bankers, and Wall Street C.E.O.s. According to a common narrative, we have lived through a textbook instance of the madness of crowds. If this were all there was to it, we could rest more comfortably: greed can be controlled, with some difficulty, admittedly; overconfidence gets punctured; even stupid people can be educated. Unfortunately, the real causes of the crisis are much scarier and less amenable to reform: they have to do with the inner logic of an economy like ours. The root problem is what might be termed “rational irrationality”—behavior that, on the individual level, is perfectly reasonable but that, when aggregated in the marketplace, produces calamity.

Consider the [lending] freeze that started in August of 2007. Each bank was adopting a prudent course by turning away questionable borrowers and holding on to its capital. But the results were mutually ruinous: once credit stopped flowing, many financial firms—the banks included—were forced to sell off assets in order to raise cash. This round of selling caused stocks, bonds, and other assets to decline in value, which generated a new round of losses.

A similar feedback loop was at work during the boom stage of the cycle, when many mortgage companies extended home loans to low- and middle-income applicants who couldn’t afford to repay them. In hindsight, that looks like reckless lending. It didn’t at the time. In most cases, lenders had no intention of holding on to the mortgages they issued. After taking a generous fee for originating the loans, they planned to sell them to Wall Street banks, such as Merrill Lynch and Goldman Sachs, which were in the business of pooling mortgages and using the monthly payments they generated to issue mortgage bonds. When a borrower whose home loan has been “securitized” in this way defaults on his payments, it is the buyer of the mortgage bond who suffers a loss, not the issuer of the mortgage.

This was the climate that produced business successes like New Century Financial Corporation, of Orange County, which originated $51.6 billion in subprime mortgages in 2006, making it the second-largest subprime lender in the United States…

The book then provides a seven-chapter, blow-by-blow run-through of the events of the Great Crunch itself.  Much of this material is familiar from recent news coverage and from other books, but the context provided by the prior discussion of utopian economics and reality-based economics provides new insight into the individual twists and turns of the unfolding crisis.  It becomes clear that the roots of the crunch go back much further than the “subprime mortgage crisis”.

The more worrying conclusion is that many of the conditions responsible for the Great Crunch remain in place:

In the world of utopian economics, the latest crisis of capitalism is always a blip.

As memories of September 2008 fade, revisionism and disaster myopia will become increasingly common.  Many will say that the Great Crunch wasn’t so bad, downplaying the government intervention that prevented a much, much worse outcome.  Incentives for excessive risk-taking will revive, and so will the lobbying power of banks and other financial firms.  If these special interests succeed in blocking meaningful reform, we could well end up with the worst of all worlds.

As Cassidy explains:

It won’t be as easy to deal with the bouts of instability to which our financial system is prone. But the first step is simply to recognize that they aren’t aberrations; they are the inevitable result of individuals going about their normal business in a relatively unfettered marketplace. Our system of oversight fails to account for how sensible individual choices can add up to collective disaster. Rather than blaming the pedestrians for swarming the footway, governments need to reinforce the foundations of the structure, by installing more stabilizers. “Our system failed in basic fundamental ways,” Treasury Secretary Timothy Geithner acknowledged earlier this year. “To address this will require comprehensive reform. Not modest repairs at the margin, but new rules of the game.”

Despite this radical statement of intent, serious doubts remain over whether the Obama Administration’s proposed regulatory overhaul goes far enough in dealing with the problem of rational irrationality…

In his final chapter, addressing the question “What is to be done?”, Cassidy advocates a handful of proposals, ranging from the specific to the over-arching:

  • Banks that create and distribute mortgage securities should be forced to keep some of them on their books (perhaps as much as a fifth) – to make them monitor more closely the types of loan they purchase;
  • Mortgage brokers and mortgage lenders should be regulated at the federal level;
  • The government should outlaw stated-income loans, and enforce the existing fraud laws for mortgage applicants, which make it a crime to misrepresent your personal finances;
  • Wall Street needs taming … the more systemic risk an institution poses, the more tightly it should be controlled;
  • The Federal Reserve should set rules for Wall Street compensation and bonuses that all firms would have to follow … the aim must be to prevent rationally irrational behaviour.  Unless some restrictions are placed on people’s actions, they will inevitably revert to it.

Footnote: For more by John Cassidy, see his online blog.

20 December 2009

Integration is hard

Filed under: integration, Linux — David Wood @ 5:29 pm

One phrase which I’ve uttered on numerous occasions in the last two years is,

Fragmentation is easy, integration is hard

Recently, I’ve taken to putting more emphasis on the “integration is hard” part of this phrase.  I keep seeing evidence, everywhere I look, of the importance of skills and systems that allow different variations and innovations to be easily incorporated back into wider systems.  A new technology, by itself, is generally insufficient to make an impact in society.  Lots of time-consuming system changes need to happen as well, before that technology can deliver “end-to-end” results.

In a small way, I’ve been seeing more evidence of this same point in the last few days, as I’ve been struggling to assemble a system of software to support a new application.

The application I’d like to support is the “IdeaTorrent” that powers, among other things, the Ubuntu Brainstorm site.

This software is an open source (free of charge) implementation of a voting and review system for ideas.  I’d like to provide a version for my own purposes.  The Ubuntu site helpfully has a link to the providers of the software, IdeaTorrent.

In principle, the installation instructions look simple enough:

  1. Use a webserver with PHP installed
  2. Install the database software PostgreSQL (v8.3)
  3. Install Drupal (content management software)
  4. Install IdeaTorrent
  5. Do some configuration.
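Translated into shell terms, the happy path might look roughly like the following sketch.  (The package names and the Drupal file name here are assumptions for a Red Hat-style server using the yum package manager, not a transcript of any real session; other distributions will differ.)

  # steps 1 and 2: web server, PHP, and PostgreSQL (assumed package names)
  yum install httpd php php-pgsql postgresql-server
  service httpd start
  service postgresql start
  # steps 3 and 4: unpack Drupal under the web root, then IdeaTorrent into Drupal's modules directory
  cd /var/www/html
  wget http://ftp.drupal.org/files/projects/drupal-6.14.tar.gz   # hypothetical version number
  tar xzf drupal-6.14.tar.gz
  # step 5: the remaining configuration happens in the browser and in Drupal's settings.php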

In practice, things are proving tougher.  The word “however” appears quite a few times in what follows…

The first webserver I obtained had a different database installed, MySQL.  Although I heard from many people that, by various criteria, MySQL is preferable to PostgreSQL, the IdeaTorrent installation notes were clear that the system would work only if PostgreSQL were present.

However, with this webserver, I failed at the first hurdle.  I found I couldn’t install any new software, since it was a “shared” server that other people were using too.  That environment (not unreasonably) prohibited any user from gaining “root access” security permissions.  If I wanted to change the components, I would need a different working environment.

I therefore purchased a more expensive hosting system.  (I am hopeful of a refund for the first one – from the same supplier, GoDaddy.)  This system does allow installation of new software.  And, to make things easier, it comes with PostgreSQL already installed.  That took me to the task of installing Drupal.  That task ended up taking me more than 24 hours of elapsed time, as I went through a crash course in Linux file permissions, the BASH command shell, PHP, and Drupal itself.  So far, so good.
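As one example of what that crash course covered: Drupal needs its files directory writable by the web server, and its settings.php briefly writable while the installer runs.  The lines below are a sketch only – the paths and the apache user name are assumptions about a typical setup, not my exact server:

  # let the web server write to Drupal's files directory (assumed path and user)
  chown -R apache:apache /var/www/html/drupal/sites/default/files
  # settings.php must be writable while the installer runs...
  chmod 666 /var/www/html/drupal/sites/default/settings.php
  # ...and locked down again immediately afterwards
  chmod 644 /var/www/html/drupal/sites/default/settings.php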

Finally I installed IdeaTorrent, and reconfigured Drupal so that it recognised the IdeaTorrent modules.

However, lengthy error messages started getting displayed.  Various functions were missing, so the system would not work.  The functions in question were meant to be supplied by PostgreSQL.  At this stage, I paid more attention to the part in the IdeaTorrent installation instructions that said “PostgreSQL 8.3 (Important! Won’t work below 8.3)”.  On checking, I found out that the system provided by GoDaddy has PostgreSQL 8.2.9.

8.2.9 looks numerically close to 8.3, but clearly it’s not close enough.
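For anyone in the same position, the version is easy to check, either from the shell or from inside the database.  A sketch, assuming the standard psql client is installed:

  psql --version                              # the client version, e.g. "psql (PostgreSQL) 8.2.9"
  psql -U postgres -c "SELECT version();"     # the server version - the one that actually matters here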

Further investigation showed me that Linux has an impressive-looking system for assisting with upgrades of components.  I followed instructions that I found spread out over the Internet – including the quaintly titled article “An Almost Idiot’s Guide to PostgreSQL YUM”.

These instructions led me to download a small “.RPM” file onto the server, and then to invoke a command called “YUM”.  (Apparently YUM stands for “Yellowdog Updater, Modified”!)  It looked like it was working.
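The commands themselves are short – roughly the following, though the repository file name below is an assumption (it changes with each PostgreSQL release), so treat this as a sketch rather than a recipe:

  # fetch and register the repository definition for the 8.3 series (hypothetical file name)
  wget http://yum.pgsqlrpms.org/reporpms/8.3/pgdg-centos-8.3-6.noarch.rpm
  rpm -ivh pgdg-centos-8.3-6.noarch.rpm
  # with the new repository registered, YUM can then pull in the 8.3 packages
  yum install postgresql postgresql-server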

However, when I started up the system again, I saw that the version of PostgreSQL that was running was still 8.2.9.

Another 24 hours elapsed – with lots of red herrings along the way.  I eventually realised that some of the online links had misled me into picking the wrong .RPM file.  (The one I was trying to install was actually, despite appearances, an even older version.)  Finally I found the right file.  This time, when I ran the YUM magic, lots more things happened.

However, these changes broke the admin “control panel” interface which GoDaddy provided, in order to administer databases.  Without this interface, the remainder of the installation (as required to set up Drupal) would probably be impossible.

Not for the first time, I went back to square one (GoDaddy have a handy service whereby they will return the whole system to its initial “provisioned” state).  This time, I was more careful in how I did the upgrades.  This time, the control panel interface kept working fine.

However, this time, although the interface displayed the functions for administering the databases, every one of those functions told me:

Admin password is out of sync. Database management will be unavailable until the admin password is changed.

However, everything I try in order to enter (or redefine) the “admin password” fails.  My current guess is that the handling of admin passwords somehow changed between PostgreSQL v8.2 and PostgreSQL v8.3.

At the moment, I’m stuck again.  Perhaps I’ll figure out what password I ought to be typing in.  Or perhaps I’ll have the whole system reprovisioned yet again (for about the fifth time) and do the upgrades even more carefully this time.  (Any advice from readers of this blog will be gratefully received!)

I see my trials and tribulations of the last few days as an example of how it’s hard to alter individual components of a complex software system.  These components can have all kinds of dependencies on each other – explicit dependencies, implicit dependencies, data format dependencies, obscure dependencies, bug dependencies, and so on.

My takeaway isn’t to assign any blame to any particular system.  My takeaway is that:

  • Anyone planning to alter components in a complex software system should set aside plenty of time to debug integration issues that arise along the way;
  • Anyone embarking on such an endeavour should be ready to call in expert advice from people who have significant prior experience in that same task.

Indeed, integration is hard.  However, that’s a reason for all of us to design systems that are “integration friendly”.

Added later

Since writing this blogpost, I’ve now understood why the “control panel” phpPgAdmin application was failing to accept passwords, and the problem is solved.

It turns out that a clean installation of PostgreSQL specifies (via the file pg_hba.conf) that user access is checked via an “ident” (identification) system that does not use passwords.  General advice is that this should be changed to use passwords instead.  Changing the word “ident” to “md5” in the above-mentioned configuration file does the trick – and phpPgAdmin is now working for me again.
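For the record, the change is a one-word edit.  The lines below are a sketch of a typical pg_hba.conf of that era, not a copy of my exact file:

  # before: local connections authenticated via the ident system (no passwords)
  local   all   all                 ident sameuser
  # after: local connections authenticated via md5-hashed passwords
  local   all   all                 md5

After saving the file, the server needs to re-read its configuration (for example via “service postgresql reload” on a Red Hat-style system), and each database user needs a password set, for example with ALTER USER … WITH PASSWORD inside psql.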

16 December 2009

What’s in a name – pirate?

Filed under: brand, democracy, Intellectual property, openness, piracy — David Wood @ 7:24 pm

I’ve been taking a look at the website for the UK Pirate Party.

There’s quite a lot there which strikes a chord with me.  Here are some extracts:

The world is changing. The Pirate Party understands that the law needs to change to match the realities of life in the 21st century…

Reform copyright and patent law. We want to … reduce the excessive length of copyright protection… We want a patent system that doesn’t stifle innovation or make life saving drugs so expensive that patients die…

Ensure that everyone has real freedom of speech and real freedom to enjoy and participate in our shared culture…

The internet has turned our world into a global village.  Ideas can be shared at incredible speed, and at negligible cost.  The benefits are plain to see, but as a result, many vested interests are threatened.  The old guard works hard to preserve their power and their privilege, so we must work hard for our freedom.  The Pirate Party offers an alternative to the last century’s struggles between political left and political right.  We are open to anyone and everyone who wants to live in a fair and open society…

The Pirate Party UK offers a new way to tackle society’s problems, by releasing the potential of ideas, at the expense of corporate monopolies and the interests of a controlling state…

I ask myself: should I sign up to support this party – hoping to help it break the mould in UK politics?

I’m tempted.  But three things hold me back.

First, there are other items listed as priorities on the Pirate Party website, which seem much less important to me.  For example, I’m sympathetic to looking at the idea “to legalise non-commercial file sharing”, but that hardly seems a black-and-white “no-brainer” deserving lots of my attention.  It’s not a principle I would nail to the mast.

Second, I wince at the description on the website of “the corrupt MPs who hold our nation’s cultural treasures to ransom, ignore our democratic wishes and undermine our civil liberties”.  I think this paints altogether too negative a view of existing UK politicians.  I’d rather find ways to collaborate with these existing MPs than call them out and oppose them as “corrupt”.

Third, I’m thoroughly uneasy with the name “Pirate”.  This word has connotations which I think will prevent the party from “crossing the chasm” to gaining sufficient mainstream support.  Names are important.  If the party were called something like “The open party” rather than “The pirate party”, I suspect I (and many others) would be quicker to offer encouragement.

How markets fail – part one

Filed under: books, Economics, market failure — David Wood @ 1:45 am

I’m currently enjoying reading the new book by John Cassidy: “How markets fail: the logic of economic calamities”.

I was led to this book by the review of it in the Economist:

In “How Markets Fail”, Mr Cassidy, a British writer for the New Yorker, recounts the story of America’s housing boom and its devastating bust. It is more than just an account of the failures of regulators and the self-deception of bankers and homebuyers, although these are well covered. For Mr Cassidy, the deeper roots of the crisis lie in the enduring appeal of an idea: that society is always best served when individuals are left to pursue their self-interest in free markets. He calls this “Utopian economics”.

This approach turns much of the book into a very good history of economic thought…

Having set out the tenets of Utopian economics, the author then pokes holes in them. Individual self interest does not always benefit society, he argues, and draws on a deep pool of research (what he calls “reality-based economics”) to support his case…

I’m half-way through the book.  It’s a bit like a whodunit page-turner: each additional section introduces new twists and turns.  I can hardly wait to find out what happens next 😉

But in the meantime, in parallel, I’ve got a minor market failure of my own to explore.  I’ll be grateful for insight that any readers can provide.

As well as being a fan of books, I’m a fan of audio books.  I’ve been downloading audio books from Audible.com for at least four years.  They’ve got a good selection.  However, I’m often surprised to notice that various books are missing from their catalogue.  I think to myself: such-and-such a book is really popular: why don’t Audible provide it?

The market failure I mentioned is that Audible frequently do have these books in audio format, but whenever I find one on their site and click to buy it, the site for some reason displays a most irritating message:

“We are not authorized to sell this item to your geographic location”

It appears that the UI of Audible tries to hide such books from people, like myself, who are based in the UK.  (I’ve heard similar reports from people who are based in Australia.)  But sometimes there are glitches, and some of these books can be glimpsed.

For example, the front page of their website currently promotes an audio book that caught my attention immediately:

The Strangest Man: The Hidden Life of Paul Dirac, Mystic of the Atom

Paul Dirac was among the great scientific geniuses of the modern age. One of the discoverers of quantum mechanics, the most revolutionary theory of the past century, his contributions had a unique insight, eloquence, clarity, and mathematical power. His prediction of antimatter was one of the greatest triumphs in the history of physics.

One of Einstein’s most admired colleagues, Dirac was in 1933 the youngest theoretician ever to win the Nobel Prize in physics. Dirac’s personality is legendary…

Back in my days at Cambridge, I learned a lot about Dirac – both from studying mathematics, and from researching the philosophical implications of quantum mechanics.  Some of my lecturers had been taught, some decades earlier, by Dirac himself.  He’s a fascinating character.  I’d never come across a book about Dirac before.  So I jumped at the chance to download this audio book – until I hit the message

“We are not authorized to sell this item to your geographic location”

It doesn’t help me if I log out of the international website, Audible.com, and log into the UK-specific site Audible.co.uk instead.  I’ve learned from bitter experience that books which are “not authorized” for sale on one site likewise fail to show up on the other.  Nor can I find this audio book on any other site.

What’s going on here? There are at least some customers in the UK who are prepared to spend money to purchase these audio books.  What’s the rationale for a restriction?  Why can’t we willing customers find a market where our “demand” can be balanced by “supply” of these audio books?  (Is it that the rights holder is somehow reserving the opportunity to sell the audio book in the UK, in due course, at a higher price than Audible are presently prepared to charge?)

Of course, this particular case of apparent market failure pales in comparison to the failures reviewed in Cassidy’s book – calamitous outcomes such as environmental degradation, lack of development of much-needed medicines that would primarily benefit poorer parts of the human population, and the recent global financial crisis.  My reason for writing about this case is that it is so annoying when it happens!

7 December 2009

Bangalore and the future of AI

Filed under: AGI, Bangalore, Singularity — David Wood @ 3:15 pm

I’m in the middle of a visit to the emerging hi-tech centre of excellence, Bangalore.  Today, I heard suggestions, at the Forum Nokia Developer Conference happening here, that Bangalore could take on many of the roles of Silicon Valley, in the next phase of technology entrepreneurship and revolution.

I can’t let the opportunity of this visit pass by without reaching out to people in the vicinity who are willing to entertain and review more radical ideas about the future of technology.  Some local connections have helped me arrange an informal get-together in a coffee shop tomorrow evening (Tuesday 8th Dec), in a venue reasonably close to the Taj Residency hotel.

We’ve picked the topic “The future of AI and the possible technological singularity”.

I’ll prepare a few remarks to kick off the conversation, and we’ll see how it goes from there!

Ideas likely to be covered include:

  • “Narrow” AI versus “General” AI;
  • A brief history of progress in AI;
  • Factors governing a possible increase in the capability of general AI – hardware changes, algorithm changes, and more;
  • The possibility of a highly disruptive “intelligence explosion” (see the toy sketch after this list);
  • The possibility of research into what has been termed “friendly AI”;
  • Different definitions of the technological singularity;
  • The technological singularity in fiction – limitations of the Hollywood vision;
  • Fantasy, existential risk, or optimal outcome?
  • Risks, opportunities, and timescales?
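
As a conversation aid for the “intelligence explosion” item, here is a deliberately toy Python sketch contrasting steady compound progress with a recursive self-improvement loop, in which each gain in capability accelerates further gains.  Every number in it is an invented assumption for illustration; it makes no claim about real AI trajectories.

```python
# Toy model of recursive self-improvement vs steady progress.
# All parameters are arbitrary illustrative assumptions, not predictions.

def simulate(years=30, base_rate=0.05, feedback=0.0, c0=1.0):
    """Capability grows each year by base_rate, amplified by
    feedback * current capability (the self-improvement loop)."""
    c = c0
    trajectory = [c]
    for _ in range(years):
        c += c * (base_rate + feedback * c)
        trajectory.append(c)
    return trajectory

steady = simulate(feedback=0.0)      # ordinary compound progress
explosive = simulate(feedback=0.02)  # capability feeds back into its own growth rate

for year in (0, 10, 20, 30):
    print(f"year {year:2d}: steady {steady[year]:8.1f}   "
          f"self-improving {explosive[year]:10.1f}")
```

Even with these modest made-up parameters, the feedback run stays within the same order of magnitude as the steady one for roughly two decades, and then pulls away dramatically – which is one reason timescale estimates for any such transition are so contentious.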

If anyone wants to join this get-together, please drop me an email, text message, or Twitter DM, and I’ll confirm the venue.

6 December 2009

The art of community

Filed under: books, catalysts, collaboration, ecosystem management — David Wood @ 8:42 pm

A PDF version of the presentation I gave last Thursday to a meeting of the Software/Open Source SIG of the Cambridge Wireless Network, “Open ecosystems – Communities that build the future”, is now available for download from the resources page of the Cambridge Wireless website.

The overall contents of my presentation are introduced by the text from slide 2 of that deck, and slide 12 provides a summary of the second half of the presentation.

Someone who clearly shares my belief in the importance of community – and in the key management skills needed to get the best out of a community’s potential – is Jono Bacon, who works at Canonical as the Ubuntu Community Manager.  Jono’s recent book, “The art of community: building the new age of participation”, has been widely praised – deservedly so.

The whole book is available online for free download.

Over the course of 11 chapters spanning 360 pages, Jono provides a host of practical advice about how to best cultivate a community.  Although many of the examples he provides are rooted in the world of open source software (and, in particular, the community which supports the Ubuntu distribution of Linux), the principles generally apply far more widely – to all sorts of communities, particularly communities with a significant online presence and significant numbers of volunteers.  To quote from the preface:

The Art of Community is not specifically focused on computing communities, and the vast majority of its content is useful for anything from political groups to digital rights to knitting and beyond.

Within this wide range of possible communities, this book will be useful for a range of readers:

  • Professional community managers – If you work in the area of community management professionally
  • Volunteers and community leaders – If you want to build a strong and vibrant community for your volunteer project
  • Commercial organizations – If you want to work with, interact with, or build a community around your product or service
  • Open source developers – If you want to build a successful project, manage contributors, and build buzz
  • Marketeers – If you want to learn about viral marketing and building a following around a product or service
  • Activists – If you want to get people excited about your cause

Every chapter in this book is applicable to each of these roles. While technology communities provide many examples throughout the book, the purpose of these examples requires little technical knowledge.

I’ve just finished reading all 360 pages.  Each new chapter introduces important new principles and techniques.  I was reading the book for three reasons:

  1. To compare ideas about the best way to run parts of an open source software community (something that used to be part of my responsibilities at the Symbian Foundation);
  2. To get ideas about how to boost the emerging community of people who share my interest in the “Humanity Plus” ideas covered in some of my other blog postings;
  3. To consider the possible wider role of well-catalysed communities in addressing the bigger challenges and opportunities facing society at the present time.

The book succeeded, for me, on all three levels.  Parts that I particularly liked included:

  • The importance of establishing a compelling mission statement for a community (Chapter 2)
  • Tips on building simple, effective, and nonbureaucratic processes that enable your community to conduct tasks, work together, and share their successes (Chapter 4)
  • How to build excitement and buzz around your community – and some telling examples of how not to do this (Chapter 6)
  • The importance of open and transparent community governance principles – and some reasons for occasionally limiting openness (Chapter 8)
  • Guidance on how to identify, handle, and prevent irksome conflict (ahead of time, if possible), and on dealing with divisive personalities (Chapter 9)
  • Ideas on running events – where (if done right) the “community” feeling can deepen to something more akin to “family” (Chapter 10).

(This blogpost contains an extended table of contents for Jono’s book.  And see here for a short video of Jono describing his book.)

The very end of the book mentions an annual conference called “The community leadership summit”.  To quote from the event website:

Take the microphone and join experienced community leaders and organizers to discuss, debate and explore the many avenues of building strong community in an open unconference setting, complemented by additional structured presentations.

I’m attracted by the idea of participating in the 2010 version of that summit 🙂
