24 December 2009

Predictions for the decade ahead

Before highlighting some likely key trends for the decade ahead – the 2010’s – let’s pause a moment to review some of the most important developments of the last ten years.

  • Technologically, the 00’s were characterised by huge steps forward with social computing (“web 2.0”) and with mobile computing (smartphones and more);
  • Geopolitically, the biggest news has been the ascent of China to becoming the world’s #2 superpower;
  • Socioeconomically, the world is reaching a deeper realisation that current patterns of consumption cannot be sustained (without major changes), and that the foundations of free-market economics are more fragile than was previously widely thought to be the case;
  • Culturally and ideologically, the threat of militant Jihad, potentially linked to dreadful weaponry, has given the world plenty to think about.

Looking ahead, the 10’s will very probably see the following major developments:

  • Nanotechnology will progress in leaps and bounds, enabling increasingly systematic control, assembly, and reprogramming of matter at the molecular level;
  • In parallel, AI (artificial intelligence) will rapidly become smarter and more pervasive, and will be manifest in increasingly intelligent robots, electronic guides, search assistants, navigators, drivers, negotiators, translators, and so on.

We can say, therefore, that the 2010’s will be the decade of nanotechnology and AI.

We’ll see the following applications of nanotechnology and AI:

  • Energy harvesting, storage, and distribution (including via smart grids) will be revolutionised;
  • Reliance on oil will diminish, as it is replaced by greener energy sources, such as next-generation solar power;
  • Synthetic biology will become increasingly commonplace – newly designed living cells and organisms crafted to address human, social, and environmental needs;
  • Medicine will provide more and more new forms of treatment that are less invasive and more comprehensive than before, using compounds closely tailored to the specific biological needs of individual patients;
  • Software-as-a-service, provided via next-generation cloud computing, will become more and more powerful;
  • Experience of virtual worlds – for the purposes of commerce, education, entertainment, and self-realisation – will become extraordinarily rich and stimulating;
  • Individuals who can make wise use of these technological developments will end up significantly cognitively enhanced.

In the world of politics, we’ll see more leaders who combine toughness with openness and a collaborative spirit.  The awkward international institutions from the 00’s will either reform themselves, or will be superseded by newer, more informal, more robust and effective institutions that draw a lot of inspiration from emerging best practice in open source and social networking.

But perhaps the most important change is one I haven’t mentioned yet.  It’s a growing change of attitude towards the question of the role of technology in enabling fuller human potential.

Instead of people decrying “technical fixes” and “loss of nature”, we’ll increasingly hear widespread praise for what can be accomplished by thoughtful development and deployment of technology.  As technology is seen to be able to provide unprecedented levels of health, vitality, creativity, longevity, autonomy, and all-round experience, society will demand a reprioritisation of resource allocation.  Previously sacrosanct cultural norms will fall under intense scrutiny, and many age-old beliefs and practices will fade away.  Young and old alike will move to embrace these more positive and constructive attitudes towards technology, human progress, and a radical reconsideration of how human potential can be fulfilled.

By the way, there’s a name for this mental attitude.  It’s “transhumanism”, often abbreviated H+.

My conclusion, therefore, is that the 2010’s will be the decade of nanotechnology, AI, and H+.

As for the question of which countries (or regions) will play the role of superpowers in 2020: it’s too early to say.

Footnote: Of course, there are major possible risks from the deployment of nanotechnology and AI, as well as major possible benefits.  Discussion of how to realise the benefits without falling foul of the risks will be a major feature of public discourse in the decade ahead.

How markets fail – part two

Filed under: books, Economics, market failure, regulation — David Wood @ 2:46 am

Free markets have been a tremendous force for progress.  However, they need oversight and regulation.  Lack of appreciation of this point is the fundamental cause of the Great Crunch that the world financial systems recently experienced.  That’s the essential message of the important book by the New Yorker journalist John Cassidy, “How markets fail: the logic of economic calamities“.

I call this book “important” because it contains a sweeping and compelling survey of a notion Cassidy dubs “Utopian economics”, before providing layer after layer of decisive critique of that notion.  As such, the book provides a very useful (if occasionally drawn out) guide to the history of economic thinking, covering Adam Smith, Friedrich Hayek, Milton Friedman, John Maynard Keynes, Arthur Pigou, Hyman Minsky, and many, many others.

The key theme in the book is that markets do fail from time to time, potentially in disastrous ways, and that some element of government oversight and intervention is essential to avoid calamity.  This theme is hardly new, but many people resist it, and the book has the merit of marshalling the arguments more comprehensively than I have seen elsewhere.

As Cassidy describes it, “utopian economics” is the widespread view that the self-interest of individuals and agencies, allowed to express itself via a free market economy, will inevitably produce results that are good for the whole economy.  The book starts with eight chapters that sympathetically outline the history of thinking about utopian economics.  Along the way, he regularly points out instances when free market champions nevertheless described cases when government intervention and control was required.  For example, referring to Adam Smith, Cassidy writes:

Smith and his successors … believed that the government had a duty to protect the public from financial swindles and speculative panics, which were both common in 18th and 19th century Britain…

To prevent a recurrence of credit busts, Smith advocated preventing banks from issuing notes to speculative lenders.  “Such regulations may, no doubt, be considered as in some respects a violation of natural liberty”, he wrote.  “But these exertions of the natural liberty of a few individuals, which might endanger the security of the whole society, are, and ought to be, restrained by the laws of all governments…  The obligation of building party walls [between adjacent houses], in order to prevent the communication of a fire, is a violation of natural liberty, exactly of the same kind with the regulations of the banking trade which are here proposed.”

The book identifies long-time Federal Reserve chairman Alan Greenspan as one of the villains of the Great Crunch.  Near the beginning of the book, Cassidy quotes a reply given by Greenspan to the question “Were you wrong?” asked of him in October 2008 by the US House Committee on Oversight and Government Reform:

“I made a mistake in presuming that the self-interest of organizations, specifically banks and others, were such that they were best capable of protecting their own shareholders and their equity in the firms…”

Greenspan was far from alone in his belief in the self-correcting power of economies in which self-interest is allowed to flourish.  There were many reasons for people to hold that belief.  It appeared to be justified both theoretically and empirically.  As Greenspan remarked,

“I had been going for forty years, or more, with very considerable evidence that it was working exceptionally well.”

Cassidy devotes another eight chapters to reviewing the history of criticisms of utopian economics.  This part of the book is entitled “Reality-based economics“.  It is full of fascinating and enlightening material, covering topics such as:

  • game theory (“the prisoner’s dilemma”),
  • behavioural economics (pioneered by Daniel Kahneman and Amos Tversky) – including disaster myopia,
  • problems of spillovers and externalities (such as pollution) – which can only be fully addressed by centralised collective action,
  • drawbacks of hidden information and the failure of “price signalling”,
  • loss of competitiveness when monopoly conditions are approached,
  • flaws in banking risk management policies (which drastically under-estimated the consequences of larger deviations from “business as usual”),
  • problems with asymmetric bonus structures,
  • and the perverse psychology of investment bubbles.
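The first of these topics can be made concrete in a few lines.  Here is a minimal sketch in Python (my own illustration, not taken from the book) of the prisoner’s dilemma, showing how each player’s individually rational choice produces a collectively worse outcome, which is exactly the pattern of “rational irrationality” that runs through this part of the book:

```python
# The prisoner's dilemma: payoffs are years in prison (lower is better),
# indexed by (my_move, their_move). Numbers are the textbook illustration.
PAYOFF = {
    ("cooperate", "cooperate"): 1,   # both stay silent: light sentence each
    ("cooperate", "defect"):    10,  # I stay silent, they confess: I take the fall
    ("defect",    "cooperate"): 0,   # I confess, they stay silent: I go free
    ("defect",    "defect"):    5,   # both confess: heavy sentence each
}

def best_response(their_move):
    """Return the move that minimises my own sentence, given theirs."""
    return min(("cooperate", "defect"), key=lambda m: PAYOFF[(m, their_move)])

# Whatever the other prisoner does, defecting is individually better...
assert best_response("cooperate") == "defect"
assert best_response("defect") == "defect"

# ...yet mutual defection (5 + 5 years) is far worse for the pair than
# mutual cooperation (1 + 1 years): rational choices, irrational outcome.
total_defect = 2 * PAYOFF[("defect", "defect")]
total_cooperate = 2 * PAYOFF[("cooperate", "cooperate")]
print(total_defect, total_cooperate)  # 10 2
```

Defect-defect is the only stable outcome, even though both players can see that cooperate-cooperate would serve them better.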

In summary, Cassidy lists four “illusions” of utopian economics:

  1. The illusion of harmony: that free markets always generate good outcomes;
  2. The illusion of stability: that the free-market economy is sturdy;
  3. The illusion of predictability: that the distribution of returns can be foreseen;
  4. The illusion of Homo Economicus: that individuals are rational and act on perfect information.

The common theme of this section is that of “rational irrationality”: circumstances in which it is rational for people to choose courses of action that end up producing a bad outcome for society as a whole.  You can read more about “rational irrationality” in a recent online New Yorker article of the same name, written by Cassidy:

A number of explanations have been proposed for the great boom and bust, most of which focus on greed, overconfidence, and downright stupidity on the part of mortgage lenders, investment bankers, and Wall Street C.E.O.s. According to a common narrative, we have lived through a textbook instance of the madness of crowds. If this were all there was to it, we could rest more comfortably: greed can be controlled, with some difficulty, admittedly; overconfidence gets punctured; even stupid people can be educated. Unfortunately, the real causes of the crisis are much scarier and less amenable to reform: they have to do with the inner logic of an economy like ours. The root problem is what might be termed “rational irrationality”—behavior that, on the individual level, is perfectly reasonable but that, when aggregated in the marketplace, produces calamity.

Consider the [lending] freeze that started in August of 2007. Each bank was adopting a prudent course by turning away questionable borrowers and holding on to its capital. But the results were mutually ruinous: once credit stopped flowing, many financial firms—the banks included—were forced to sell off assets in order to raise cash. This round of selling caused stocks, bonds, and other assets to decline in value, which generated a new round of losses.

A similar feedback loop was at work during the boom stage of the cycle, when many mortgage companies extended home loans to low- and middle-income applicants who couldn’t afford to repay them. In hindsight, that looks like reckless lending. It didn’t at the time. In most cases, lenders had no intention of holding on to the mortgages they issued. After taking a generous fee for originating the loans, they planned to sell them to Wall Street banks, such as Merrill Lynch and Goldman Sachs, which were in the business of pooling mortgages and using the monthly payments they generated to issue mortgage bonds. When a borrower whose home loan has been “securitized” in this way defaults on his payments, it is the buyer of the mortgage bond who suffers a loss, not the issuer of the mortgage.

This was the climate that produced business successes like New Century Financial Corporation, of Orange County, which originated $51.6 billion in subprime mortgages in 2006, making it the second-largest subprime lender in the United States…
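The mutually ruinous fire-sale loop described in that excerpt can be sketched as a toy simulation (my own model, with purely illustrative numbers, not Cassidy’s): each bank prudently sells assets to raise cash, the combined selling depresses the price, and the lower price generates fresh losses that force further selling.

```python
# Toy model of the fire-sale feedback loop: a representative bank must
# raise a fixed amount of cash each round by selling an asset, but the
# selling itself pushes the asset's price down. All numbers illustrative.

def fire_sale_spiral(price, required_cash, price_impact, rounds):
    """Track the asset price over `rounds` of forced selling."""
    history = [price]
    for _ in range(rounds):
        units_sold = required_cash / price        # sell just enough to raise the cash
        price *= (1 - price_impact * units_sold)  # but the selling depresses the price
        history.append(price)
    return history

prices = fire_sale_spiral(price=100.0, required_cash=500.0,
                          price_impact=0.01, rounds=5)

# Each individually prudent sale leaves the asset worth less than before,
# so the next round of selling has to be bigger: a self-reinforcing spiral.
assert all(later < earlier for earlier, later in zip(prices, prices[1:]))
print([round(p, 1) for p in prices])
```

The behaviour that is perfectly reasonable for each bank in isolation, aggregated across the market, produces exactly the round after round of losses that the article describes.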

The book then provides a seven-chapter, blow-by-blow run-through of the events of the Great Crunch itself.  Much of this material is familiar from recent news coverage and from other books, but the context provided by the prior discussion of utopian economics and reality-based economics provides new insight into the individual twists and turns of the unfolding crisis.  It becomes clear that the roots of the crunch go back much further than the “subprime mortgage crisis”.

The more worrying conclusion is that many of the conditions responsible for the Great Crunch remain in place:

In the world of utopian economics, the latest crisis of capitalism is always a blip.

As memories of September 2008 fade, revisionism and disaster myopia will become increasingly common.  Many will say that the Great Crunch wasn’t so bad, downplaying the government intervention that prevented a much, much worse outcome.  Incentives for excessive risk-taking will revive, and so will the lobbying power of banks and other financial firms.  If these special interests succeed in blocking meaningful reform, we could well end up with the worst of all worlds.

As Cassidy explains:

It won’t be as easy to deal with the bouts of instability to which our financial system is prone. But the first step is simply to recognize that they aren’t aberrations; they are the inevitable result of individuals going about their normal business in a relatively unfettered marketplace. Our system of oversight fails to account for how sensible individual choices can add up to collective disaster. Rather than blaming the pedestrians for swarming the footway, governments need to reinforce the foundations of the structure, by installing more stabilizers. “Our system failed in basic fundamental ways,” Treasury Secretary Timothy Geithner acknowledged earlier this year. “To address this will require comprehensive reform. Not modest repairs at the margin, but new rules of the game.”

Despite this radical statement of intent, serious doubts remain over whether the Obama Administration’s proposed regulatory overhaul goes far enough in dealing with the problem of rational irrationality…

In his final chapter, addressing the question “What is to be done?“, Cassidy advocates a number of proposals, ranging from the specific to the over-arching:

  • Banks that create and distribute mortgage securities should be forced to keep some of them on their books (perhaps as much as a fifth) – to make them monitor more closely the types of loan they purchase;
  • Mortgage brokers and mortgage lenders should be regulated at the federal level;
  • The government should outlaw stated-income loans, and enforce the existing fraud laws for mortgage applicants, which make it a crime to misrepresent your personal finances;
  • Wall Street needs taming … the more systemic risk an institution poses, the more tightly it should be controlled;
  • The Federal Reserve should set rules for Wall Street compensation and bonuses that all firms would have to follow … the aim must be to prevent rationally irrational behaviour.  Unless some restrictions are placed on people’s actions, they will inevitably revert to it.

Footnote: For more by John Cassidy, see his online blog.

16 December 2009

How markets fail – part one

Filed under: books, Economics, market failure — David Wood @ 1:45 am

I’m currently enjoying reading the new book by John Cassidy: “How markets fail: the logic of economic calamities“.

I was led to this book by the review of it in the Economist:

In “How Markets Fail”, Mr Cassidy, a British writer for the New Yorker, recounts the story of America’s housing boom and its devastating bust. It is more than just an account of the failures of regulators and the self-deception of bankers and homebuyers, although these are well covered. For Mr Cassidy, the deeper roots of the crisis lie in the enduring appeal of an idea: that society is always best served when individuals are left to pursue their self-interest in free markets. He calls this “Utopian economics”.

This approach turns much of the book into a very good history of economic thought…

Having set out the tenets of Utopian economics, the author then pokes holes in them. Individual self interest does not always benefit society, he argues, and draws on a deep pool of research (what he calls “reality-based economics”) to support his case…

I’m half-way through the book.  It’s a bit like a whodunit page-turner: each additional section introduces new twists and turns.  I can hardly wait to find out what happens next 😉

But in the meantime, in parallel, I’ve got a minor market failure of my own to explore.  I’ll be grateful for insight that any readers can provide.

As well as being a fan of books, I’m a fan of audio books.  I’ve been downloading audio books from Audible.com for at least four years.  They’ve got a good selection.  However, I’m often surprised to notice that various books are missing from their catalogue.  I think to myself: such-and-such a book is really popular: why don’t Audible provide it?

The market failure I mentioned is that Audible frequently do have these books in audio format, but when I find one on their site and click to buy it, the site displays a most irritating message:

“We are not authorized to sell this item to your geographic location”

It appears that the UI of Audible tries to hide such books from people, like myself, who are based in the UK.  (I’ve heard similar reports from people who are based in Australia.)  But sometimes there are glitches, and some of these books can be glimpsed.

For example, the front page of their website currently promotes an audio book that caught my attention immediately:

The Strangest Man: The Hidden Life of Paul Dirac, Mystic of the Atom

Paul Dirac was among the great scientific geniuses of the modern age. One of the discoverers of quantum mechanics, the most revolutionary theory of the past century, his contributions had a unique insight, eloquence, clarity, and mathematical power. His prediction of antimatter was one of the greatest triumphs in the history of physics.

One of Einstein’s most admired colleagues, Dirac was in 1933 the youngest theoretician ever to win the Nobel Prize in physics. Dirac’s personality is legendary…

Back in my days at Cambridge, I learned a lot about Dirac – both from studying mathematics, and from researching the philosophical implications of quantum mechanics.  Some of my lecturers had been taught, some decades earlier, by Dirac himself.  He’s a fascinating character.  I’d never come across a book about Dirac before.  So I jumped at the chance to download this audio book – until I hit the message

“We are not authorized to sell this item to your geographic location”

It doesn’t help me if I log out of the international website, Audible.com, and log into the UK-specific site Audible.co.uk instead.  I’ve learned from bitter experience that books which are “not authorized” for sale from one site fail likewise to show up on the other one.  Nor can I find this audio book on any other site.

What’s going on here? There are at least some customers in the UK who are prepared to spend money to purchase these audio books.  What’s the rationale for a restriction?  Why can’t we willing customers find a market where our “demand” can be balanced by “supply” of these audio books?  (Is it that the owner of the book is somehow reserving the opportunity to sell the audio book, in the UK, in due course, at a higher price than Audible are presently prepared to charge?)

Of course, this particular case of apparent market failure pales in comparison to the failures reviewed in Cassidy’s book – calamitous outcomes such as environmental degradation, lack of development of much-needed medicines that would primarily benefit poorer parts of the human population, and the recent global financial crisis.  My reason for writing about this case is that it is so annoying when it happens!

19 November 2009

ELF09: energy, sustainability, and more

Filed under: Economics, Energy, green, solar energy — David Wood @ 3:12 am

On Tuesday I attended the ninth Business Week “European Leadership Forum”, also known by its Twitter hash tag #elf09.  Business Week are to be congratulated for bringing together a fascinating group of industry leaders.

Here are a few of the points from the course of the day that made me think.

The threat of a new economic crisis

Professor Urs Muller, Managing Director and Chief Economist at BAK Basel Economics, had some worrying thoughts about the state of the global economy:

The good news is that the economic crisis is over.  The bad news is that the conditions responsible for the crisis are still intact, and the next crisis is already brewing.

Like various other speakers and panellists, Professor Muller was concerned about the state of regulation of banking activities.  As we discussed afterwards: “Who would be a regulator?”

It’s hard to identify and agree which elements of banking need new regulation regimes, and which don’t.  However, action by one country alone (for example, by the UK) would fail, since it would merely drive key lines of business elsewhere.  Coordination is needed – but hard!

I asked, how much time do we have?  Do governments have around ten years to reach agreement and take action, or are things more urgent?  Professor Muller replied that if matters were not resolved during 2010, it might already be too late.  Unfortunately, the side effect of the current crisis appearing to be over is that government attention is liable to diminish.  Everyone is breathing a sigh of relief, prematurely.

This ominous discussion reminded me of remarks made by eminent economist and FT columnist John Kay a few days earlier, at a lunchtime meeting at the RSA, “Banking in the Wake of the Crisis: how will confidence be restored?”  That meeting addressed the questions:

  • Have banks and bankers really learned the lessons of the crisis?
  • Are we in danger of falling into a dangerous cycle once more?

John Kay gave the answers No and Yes.

On a more positive note, Professor Muller highlighted the FSB (Financial Stability Board) as a cross-border organisation with a strong potential to address banking system vulnerabilities and to develop and implement strong regulatory, supervisory and other policies in the interest of financial stability.  John Kay’s recommendations – in favour of what is called “Narrow banking” – are contained in a 95-page PDF “The Reform of Banking Regulation” available from his website.

In search of the European Bill Gates

Earlier in the day, INSEAD Professor Soumitra Dutta and serial technology entrepreneur Niklas Zennström led a discussion “INNOVATION – What is the next generation? The next wave?”

Questions posed included why there was no real equivalent, in Europe, to Bill Gates, and which field of technology is likely to prove the most important in the near-term future.

I liked the answer given by Professor Dutta:

The next big wave of high-tech innovation is improving the quality of life – including both improving the environment and improving healthcare.

However, these technologies should not be viewed as alternatives to ICT (Information and Communications Technology).  Instead, these technology areas will succeed by implementing the next wave of ICT.  But instead of just experiencing “the Internet of websites”, we will see “the Internet of things”.

Alternatives to dependency on growth

Running near the surface of much of the discussion during the day was the theme of growth and sustainability.

Opening keynote speaker Stephen Green, Group Chairman of HSBC Holdings Plc, put it as follows:

The biggest change arising from the economic crisis is that companies must stop focussing on short-term value maximisation, and should instead focus on sustainable value maximisation.

Later, from the floor, Professor Dutta posed the simple question,

Is growth good?

I didn’t hear a satisfactory answer.  I did hear the answer that “business needs growth”, but that just skirts the issue.

Interestingly, Mikhail Gorbachev addressed the same issue in his keynote address at the General Assembly conference of the Club of Rome on 26 October 2009, in Amsterdam.  Here’s an extract:

A low-carbon economy is only a part of this new economic model we need so badly today. The model that has been around for the past five decades should be replaced. Of course, it cannot be achieved overnight, but I think we can already discuss reference points and general contours of this new model.

It means, above all, the overcoming of the economy’s ‘addiction’ to super-profits and hyper-consumption, which is not possible unless societies reshape their values. It means shifting of the increasingly larger swaths of the economy to production of ‘social goods’, among which the sustainable environment takes a centre stage.

These social goods also include human health in the broad sense of the word, education, culture, equal opportunities, and social unity, including the elimination of the glaring gaps between the rich and the poor.

Society needs all this not only because ethical imperatives dictate it. The economic benefits to be brought by these “goods” are enormous. However, economists are yet to learn how to measure them. An intellectual breakthrough is needed here. A new model of economy can not be built without it.

Energy and sustainability

The #elf09 gathering split up during the afternoon into a series of six parallel discussions.  Along with around 40 other people, I took part in a roundtable discussion on “Energy and sustainability”.

The discussion was led by Mark Williams, Downstream Director of Royal Dutch Shell, and Sophia Tickell, Executive Director of SustainAbility.

Mark Williams made the following points (I apologise in advance for condensing a much richer set of messages):

  • Almost certainly, the total energy needs of the world will double by 2050;
  • It seems highly unlikely that this vast energy requirement can be met by non-fossil fuels;
  • We need to prepare for a scenario in which at least 70% of the world’s energy needs in 2050 will still be met by fossil fuels;
  • In other words, “we have to come to grips with carbon”;
  • Even as we continue to rely on fossil fuels, we have to “decarbonise” the system;
  • There’s no reasonable alternative to developing and deploying technology for widespread CCS (Carbon Capture and Storage);
  • It’s already possible to store CO2 underground, safely, “for geological amounts of time”;
  • It’s true that there is public concern over the prospect of leaks of stored CO2, and over failures in warning systems to detect leaks, but “governments will have to take the lead in public education”.
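The arithmetic behind the first three of those points is worth spelling out.  Here’s a quick back-of-the-envelope calculation (my own illustration, not Shell’s figures; the present-day fossil share of roughly 80% is an assumption): even if the fossil share falls to 70%, doubled total demand means absolute fossil-fuel use still rises sharply.

```python
# Back-of-the-envelope sketch of the scenario above. Units are normalised;
# today's fossil share (~80%) is an assumed, approximate figure.
demand_today = 1.0                  # normalised units of primary energy
fossil_share_today = 0.80           # assumed, approximate
demand_2050 = 2.0 * demand_today    # "total energy needs will double by 2050"
fossil_share_2050 = 0.70            # "at least 70% still met by fossil fuels"

fossil_today = demand_today * fossil_share_today   # 0.8 units
fossil_2050 = demand_2050 * fossil_share_2050      # 1.4 units
growth = fossil_2050 / fossil_today

# Even with a *falling* fossil share, absolute fossil-fuel use rises by
# about 75% in this scenario: hence "we have to come to grips with carbon".
print(round(growth, 2))  # 1.75
```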

Timescales to adopt new sources of energy

Mark Williams made the point that, so far, it has taken any new source of energy at least 25 years to achieve 1% of global energy delivery.  That point should be kept in mind, to avoid anyone becoming “too optimistic about new energy sources”.

In response, people around the table asked:

  • Would the equivalent of a war-time situation provide a different kind of reaction from both markets and governments?  Do we have to accept that we’ll have the same mindsets as before?

Mark answered:

  • Don’t underestimate “the tyranny of the installed base”;
  • Alternative energy sources have to face very significant issues with storage and transport: “electricity is not easily stored”.

I tried a different tack:

  • Consider the fact that, 25 years ago, there were virtually no mobile phones in use.  Over that timescale, enormous infrastructure has been put in place around the planet, and nowadays more than half of the world’s population use mobile phones.  Countless technical difficulties were solved en route;
  • Key to this build-out has been the fact that many companies were prepared to make huge financial investments, anticipating even larger financial paybacks as people use mobile technology;
  • If energy pricing is set properly (including full consideration for “negative externalities“), won’t companies find sufficient incentives to invest heavily in sustainable energy sources, and develop solutions – roughly similar to what happened for the mobile industry?
  • As a specific example, what about the prospects for gigantic harvesting of solar energy from a scheme such as Desertec (as described here)?

Mark answered:

  • The investment needed for new energy sources (at the scale required) dwarfs the investment even of the mobile telephony industry;
  • New energy sources have too much ground to catch up.  For example, every year, China installs as many additional coal-based energy generators as the entire existing UK installed base of such generators.

Around the table, it seemed generally agreed that we do need to prepare for a scenario in which fossil fuels remain in very substantial use over the decades ahead.

The role of green subsidies

Sophia Tickell raised the question of whether government subsidies could make a significant difference to the speed of transition to renewable energy sources.  South Korea is perhaps the leading example of where a government green stimulus package is having a significant effect.

Attractive beneficiaries for government subsidies (to recap earlier discussion) would presumably include products for electrical storage and CCS.

On the other hand, it’s possible for governments to pick losers as well as winners, with consequent waste of public funds.  Also, government subsidies can in some cases lead to technology failing to develop as efficiently and as innovatively as it ought to.  For this reason, it was suggested that “the environmental movement may have oversold the idea of a Green New Deal”.

Discussion continued:

  • Government should be putting the right framework in place, for market mechanisms to drive the selection and development of desirable products.  This includes identifying and allocating the costs of negative externalities, and establishing a proper “level playing field”;
  • When a desirable momentum is emerging in the marketplace, governments should be getting behind it.

I asked: is it already clear what this “desirable momentum” is that governments should be getting behind?  People around the table started listing options.  It quickly became a long list.  This provoked the following insightful comment from Juan Pablo Crespi, COO Europe of Alkol – to whom I’ll give the final word:

There are too many momentums – but not enough permanentums!

11 October 2008

Serious advice to developers in tough times

Filed under: Economics, FOWA, openness, regulation — David Wood @ 6:57 pm

As I mentioned in my previous article, the FOWA London event on “The Future of Web Apps” featured a great deal of passion and enthusiasm for technology and software development systems. However, as I watched the presentations on Day Two, I was repeatedly struck by a deeper level of seriousness.

For example, AMEE Director Gavin Starks urged the audience to consider how changes in their applications could help reduce CO2 emissions. AMEE has exceptionally large topics on its mind: the acronym stands for “Avoiding Mass Extinctions Engine“. Gavin sought to raise the aspiration level of developers: “If you really want to build an app that will change the world, how about building an app that will save the Earth?” But this talk was no pious homily: it contained several dozen ideas that could in principle act as possible starting points for new business ventures.

On a different kind of serious topic, Mahalo.com CEO Jason Calacanis elicited some gasps from the audience when he dared to suggest that, if startups are really serious about making a big mark in the business world, they should consider firing, not only their “average” employees, but also their “good” employees – under the rationale that “good is the enemy of the great“. The resulting audience Q&A could have continued the whole afternoon.

But the most topical presentation was the opening keynote by Sun Microsystems Distinguished Engineer Tim Bray. It started with a bang – with the words “I’m Scared” displayed in huge font on the screen.

With these words, Tim announced that he had, the previous afternoon, torn up the presentation he had been planning to give – a presentation entitled “What to be Frightened of in Building A Web Application”.

Tim explained that the fear he would now address was about global economic matters rather than about usage issues with the likes of XML, Rails, and Flash. Instead of these technology-focused matters, he would cover the subject “Getting through the tough times”.

Tim described how he had spent several days in London ahead of the conference, somewhat jet lagged, watching lots of TV coverage about the current economic crisis. As he said, the web has the advantage of allowing everyone to get straight to the sources – and these sources are frightening, when you take the time to look at them. Tim explicitly referenced http://acrossthecurve.com/?p=1830, which contains the following gloomy prognosis:

…more and more it seems likely that the resolution of this crisis will be an historic financial calamity. Each and every step which central banks and regulators have taken to resolve the crisis has been met with failure. In the beginning, the steps would produce some brief stability.

In the last several days, the US Congress (belatedly) passed a bailout bill, the Federal Reserve has guaranteed commercial paper and in unprecedented coordination central banks around the globe slash base lending rates. Listen to the markets respond.

The market scoffs as Libor rises, stocks plummet and IBM is forced to pay usurious rates to borrow. There is no stability and no hiatus from the pain. It continues unabated in spite of the best efforts of dedicated people to solve it.

We are in the midst of an unfolding debacle. It is happening about us. I am not sure how or when it ends, but the end, when it arrives, will radically alter the way we live for a long time.

Whoever wins the US election and takes office in January will need prayers and divine intervention.

As Tim put it: “We’ve been running on several times the amount of money that actually exists. Now we’re going to have to manage on nearer the amount of money that does exist.” And to make things even more colourful, he said that the next few days could be like the short period of time in New Orleans after hurricane Katrina had passed, but before the floods struck (caused by damage brought about by the winds). For the world’s economy, the hurricane may have passed, but the flood is still to come.

The rest of Tim’s talk was full of advice that sounded, to me, highly practical: what developers should do to increase their chances of survival through these tough times. (There’s a summary video here.) I paraphrase some highlights from my notes:

Double down and do a particularly good job. In these times, slack work could put your company out of business – or could cause your employer to decide your services are no longer necessary.

Large capital expenditures are a no-no. Find ways to work that don’t result in higher management being asked to sign large bills – they won’t.

Waterfalls are a no-no. No smart executive is going to commit to a lengthy project that will take longer than a year to generate any payback. Instead, get with the agile movement – pick out the two or three requirements in your project that you can deliver incrementally and which will result in payback in (say) 8-10 weeks.

Software licences are a no-no. Companies will no longer make large commitments to big licences for the likes of Oracle solutions. Open source is going to grow in prominence.

Contribute to open source projects. This is a great way to build professional credibility – to advertise your capabilities to potential new employers or business partners.

Get in the cloud. With cloud services, you only pay a small amount in the beginning, and you only pay larger amounts when traffic is flowing.

Stop believing in technology religions. The web is technologically heterogeneous. Be prepared to learn new skills, to adopt new programming languages, or to change the kinds of applications you develop.

Think about the basic needs of users. There will be less call for applications about fun things, or about partying and music. There will be more demand for applications that help people to save money – for example, the lowest gas bill, or the cheapest cell phone costs.

Think about telecomms. Users will give up their HDTVs, their SUVs, and their overseas holidays, but they won’t give up their cell phones. The iPhone and the Android are creating some great new opportunities. Developers of iPhone applications are earning themselves hundreds of thousands of dollars from applications that cost users only $1.99 per download. Developers in the audience should consider migrating some of their applications to mobile – or creating new applications for mobile.

The mention of telecomms quickened my pulse. On the one hand, I liked Tim’s emphasis on the likely continuing demand for high-value low-cost mobile solutions. On the other hand, I couldn’t help noticing there were references to iPhone and Android, but not to Symbian (or to any of the phone manufacturers who are using Symbian software).

Then I reflected that, similarly, namechecks were missing for RIM, Windows Mobile, and Palm. Tim’s next words interrupted this chain of thought and provided further explanation: “With the iPhone and Android, no longer are the idiotic moronic mobile network operators standing in the way with a fence of barbed wire between developers and the people who actually buy phones.”

This fierce dislike for network operator interference was consistent with a message underlying the whole event: developers should have the chance to show what they can do, using their talent and their raw effort, without being held up by organisational obstacles and value-chain choke-points. Developers dislike seemingly arbitrary regulation. That’s a message I take very seriously.

However, we can’t avoid all regulation. Indeed – to turn back from applications to economics – lack of regulation is arguably a principal cause of our current economic crisis.

The really hard thing is devising the right form of regulation – the right form of regulation for financial markets, and the right form of regulation for applications on potentially vulnerable mobile networks.

Both tasks are tough. But the solution in each case surely involves greater transparency.

The creation of the Symbian Foundation is intended to advance openness in two ways:

  1. Providing more access to the source code;
  2. Providing greater visibility of the decisions and processes that guide changes in both the software platform and the management of the associated ecosystem.

This openness won’t dissolve all regulation. But it should ensure that the regulations evolve, more quickly, to something that more fully benefits the whole industry.

29 September 2008

Market failure and mobile operating systems

Filed under: Economics, fragmentation, leadership, market failure — David Wood @ 9:57 pm

In seeking to talk about economics, and about market failure, I’m moving way outside my depth. There seem to be so many different viewpoints about economics, each appearing reasonably plausible to me (when I read them in isolation), yet all contradicting each other. It’s a tough subject! What one writer sees as a market failure, another writer sees instead as a failure of individual actors within that market – and so on.

However, one of my correspondents has made a series of thought-provoking comments about market failure in the particular field of mobile operating systems. I believe these comments are sufficiently unusual and intriguing to deserve consideration by a wider audience – despite my reticence to broach the subject of economics. The comments arose in the responses to an earlier posting of mine, “De-fragmenting the mobile operating system space“. To quote from these responses:

Currently, mobile developers withstand very high development costs due to a very fragmented mobile ecosystem, meanwhile mobile OSes enjoy a much lower developing cost than they would if they had to build compatible OSes between versions: therefore, a de-fragmentation process would move developing costs from mobile developers to mobile OS developers, that is, mobile OS companies/foundations would have to internalize those costs.

But developing an OS with full source or binary compatibility between versions is an order of magnitude more expensive than building one with broken compatibility, and it gets worse with time and versions. Moreover, building a de-fragmented mobile OS requires committing considerable resources (people, money, time) in the present, sunk costs that must have positive expected returns in the future (at least to cover developing costs and money opportunity costs).

Will foundations/consortiums (Symbian, LiMo, OHA/Android), given their non-profit nature, carry these investments in the present to obtain stable software platforms in the future? As Adam Smith wrote, displaying good will and hope is not enough: mobile foundations/consortiums/companies committing resources in the present must charge higher prices in their future and profit handsomely from their risky investments, otherwise the effort will stop…

To paraphrase:

  • Economic incentives on individual mobile operating systems will lead these operating systems to diverge from each other;
  • This divergence will mean that application developers (and providers of middleware, etc) will suffer greater difficulties, because of having to spread their efforts across larger numbers of diverse mobile operating systems;
  • As things stand, no one who is in a position to actually do something to reduce the divergence of mobile operating systems has a sufficient financial incentive to make that happen;
  • So we can actually expect things to get worse for application developers, rather than better.

(I’m over-simplifying what my correspondent actually says; see the original for the full argument.)

Tentatively, I have the following answers:

  • I believe that things will actually get better for application developers, rather than worse;
  • It’s my experience that major players in the mobile phone industry can, on occasion, take actions based on strategy rather than business case;
  • This requires strong self-discipline on the part of these companies, but it’s not unknown;
  • Action on strategic grounds becomes more likely, the more clearly the argument is made that the actions that make sense in the short-term are actually detrimental to longer term interests;
  • The other key factor in this decision is whether the various actors can have a high degree of confidence in at least the medium-term viability of the software system they’re being asked to collectively support.

So, in line with what I’ve argued here, what we need to do is to keep on pushing (in creative and diverse ways) the merits of the case in favour of de-fragmenting mobile operating systems – and to keep on highlighting the positive features of the mobile operating systems (such as Symbian OS) that are most likely to enable at least medium-term success for the whole industry.

Incidentally, one big contribution that the shareholders and customers of Symbian are making, towards that end, is agreeing to standardise on S60 as the UI framework for the future. They’ve taken that decision, even though both UIQ and MOAP(S) have much to commend them as alternative UI frameworks on top of Symbian OS. They’ve taken that decision for the greater common good. New phones based on the UIQ and MOAP(S) UI frameworks will continue to appear for a while, during a transitional period, but the Symbian Foundation platform software is standardising on the S60 framework. Elements of both UIQ and MOAP(S) will be available inside the Symbian Foundation platform, but the resulting system will be compatible with S60, not UIQ or MOAP(S). That’s a decision that will bring some pain, but the shareholders and customers have been able to support it because:

  • S60 is now (in contrast to the earlier days) sufficiently flexible and mature to support the kinds of user experience which previously were available only via UIQ or MOAP(S);
  • Indeed, S60 now has flexibility to support new types of UI experiences, whilst maintaining common underlying APIs;
  • Distribution of S60 will pass out of the hands of Nokia, into the hands of the independent Symbian Foundation.

I also believe that the disciplines of binary compatibility that have been built up in Symbian, over several years, are significantly reducing the difficulties faced by developers of add-ons and plug-ins for Symbian software. Because of this discipline, it now costs us less to maintain compatibility across different versions of our software. It’s true that some practical issues remain here, with surprising incompatibilities still occasionally arising in parts of the software stack on Symbian-powered phones other than Symbian OS itself. But progress is being made – and the leading nature of the Symbian platform will become clearer and clearer.

To finish, I’ll give my response to one more comment:

Samsung is gaining market share and producing S60 devices. Let’s suppose that Samsung starts snatching portions of market share from Nokia, do you really think anyone believes that Nokia would refrain from intentionally breaking compatibility to derail Samsung, if that would be necessary?

I can see that there might be some temptation towards such a behaviour, inside Nokia. But I don’t see that the outcome is inevitable. Nokia would have to weigh up various possible scenarios:

  • Symbian Foundation quality assurance tests will notice the compatibility breaks, and will refuse to give Symbian accreditation to this changed software;
  • The other licensees could create their own fork of the software (which would have official endorsement from the Symbian Foundation) and build up a lot of momentum behind it.

Instead, Nokia – like all users of Symbian Foundation software – should be inclined to seek differentiation by providing their own unique services on top of or alongside Symbian platform software rather than by illicitly modifying the platform software itself. That’s a far better way to deploy skilled software engineers.
