dw2

15 April 2010

Accelerating automation and the future of work

Filed under: AGI, Economics, futurist, Google, politics, regulation, robots — David Wood @ 2:45 am

London is full of pleasant surprises.

Yesterday evening, I travelled to The Book Club in Shoreditch, EC2A, and made my way to the social area downstairs.  “What’s your name?” asked the person at the door.  I gave my name, and in return received a stick-on badge saying

Hi, I’m David.

Talk to me about the future of humanity!

I was impressed.  How do they know I like to talk to people about the future of humanity?

Then I remembered that the whole event I was attending was under the aegis of a newly formed group calling itself “Future Human”.  It was their third meeting, over the course of just a few weeks – but the first I had heard about (and decided to attend).  Everyone’s badge had the same message.  About 120 people crammed into the downstairs room – making it standing room only (since there were only around 60 seats).  Apart from the shortage of seats, the event was well run, with good use of roaming mikes from the floor.

The event started with a quick-fire entertaining presentation by author and sci-fi expert Sam Jordison.  His opening question was blunt:

What can you do that a computer can’t do?

He then listed lots of occupations from the past which technology had rendered obsolete.  Since one of my grandfathers was the village blacksmith, I found a personal resonance with this point.  It will soon be the same for many existing professions, Sam said: computers are becoming better and better at all sorts of tasks which previously would have required creative human input.  Journalism is particularly under threat.  Likewise accountancy.  And so on, and so on.

In general terms, that’s a thesis I agree with.  For example, I anticipate a time before long when human drivers will be replaced by safer robot alternatives.

I quibble with the implication that, as existing jobs are automated, there will be no jobs left for humans to do.  Instead, I see that lots of new occupations will become important.  “Shape of Jobs to Come”, a report (PDF) by Fast Future Research, describes 20 jobs that people could be doing in the next 20 years:

  1. Body part maker
  2. Nano-medic
  3. Pharmer of genetically engineered crops and livestock
  4. Old age wellness manager/consultant
  5. Memory augmentation surgeon
  6. ‘New science’ ethicist
  7. Space pilots, tour guides and architects
  8. Vertical farmers
  9. Climate change reversal specialist
  10. Quarantine enforcer
  11. Weather modification police
  12. Virtual lawyer
  13. Avatar manager / devotees / virtual teachers
  14. Alternative vehicle developers
  15. Narrowcasters
  16. Waste data handler
  17. Virtual clutter organiser
  18. Time broker / Time bank trader
  19. Social ‘networking’ worker
  20. Personal branders

(See the original report for explanations of some of these unusual occupation names!)

In other words, as technology improves to remove existing occupations, new occupations will become significant – occupations that build in unpredictable ways on top of new technology.

But only up to a point.  In the larger picture, I agree with Sam’s point that even these new jobs will quickly come under the scope of rapidly improving automation.  The lifetime of occupations will shorten and shorten.  And people will typically spend fewer hours working each week (on paid tasks).

Is this a worry? Yes, if we assume that we need to work long hours to justify our existence, or to earn sufficient income to look after our families.  But I disagree with these assumptions.  Improved technology, wisely managed, should result not just in less labour for humans to do, but also in great material abundance – plenty of energy, food, and other resources for everyone.  We’ll become able – at last – to spend more of our time on activities that we deeply enjoy.

The panel discussion that followed touched on many of these points. The panellists – Peter Kirwan from Wired, Victor Henning from Mendeley, and Carsten Sorensen and Jannis Kallinikos from the London School of Economics – sounded lots of notes of optimism:

  • We shouldn’t create unnecessary distinctions between “human” and “machine”.  After all, humans are kinds of machines too (“meat machines”);
  • The best kind of intelligence combines human elements and machine elements – in what Google have called “hybrid intelligence”;
  • Rather than worrying about computers displacing humans, we can envisage computers augmenting humans;
  • In case computers become troublesome, we should be able to regulate them, or even to switch them off.

Again, in general terms, these are points I agree with.  However, I believe these tasks will be much harder to accomplish than the panel implied. To that extent, I believe that the panel were too optimistic.

After all, if we can barely regulate rapidly changing financial systems, we’ll surely find it even harder to regulate rapidly changing AI systems.  Before we’ve been able to work out if such-and-such an automated system is an improvement on its predecessors, that system may have caused too many rapid irreversible changes.

Worse, there could be a hard-to-estimate “critical mass” effect.  Rapidly accumulating intelligent automation is potentially akin to accumulating nuclear material until it unexpectedly reaches an irreversible critical mass.  The resulting “super cloud” system will presumably state very convincing arguments to us, for why such and such changes in regulations make great sense.  The result could be outstandingly good – but equally, it could be outstandingly bad.

Moreover, it’s likely to prove very hard to “switch off the Internet” (or “switch off Google”).  We’ll be so dependent on the Internet that we’ll be unable to disconnect it, even though we recognise there are bad consequences.

If all of this happens in slow motion, we would be OK.  We’d be able to review it and debug it in real time.  However, the lesson from the recent economic crisis is that these changes can take place almost too quickly for human governments to intervene.  That’s why we need to ensure, ahead of time, that we have a good understanding of what’s happening.  And that’s why there should be lots more discussions of the sort that took place at Future Human last night.

The final question from the floor raised a great point: why isn’t this whole subject receiving prominence in the current UK general election debates?  My answer: It’s down to those of us who do see the coming problems to ensure that the issues get escalated appropriately.

Footnote: Regular readers will not be surprised if I point out, at this stage, that many of these same topics will be covered in the Humanity+ UK2010 event happening in Conway Hall, Holborn, London, on Saturday 24 April.  The panellists at the Future Human event were good, but I believe that the H+UK speakers will be even better!

24 December 2009

How markets fail – part two

Filed under: books, Economics, market failure, regulation — David Wood @ 2:46 am

Free markets have been a tremendous force for progress.  However, they need oversight and regulation.  Lack of appreciation of this point is the fundamental cause of the Great Crunch that the world financial systems recently experienced.  That’s the essential message of the important book by the New Yorker journalist John Cassidy, “How markets fail: the logic of economic calamities”.

I call this book “important” because it contains a sweeping but compelling survey of a notion Cassidy dubs “Utopian economics”, before providing layer after layer of decisive critique of that notion.  As such, the book provides a very useful (if occasionally drawn out) guide to the history of economic thinking, covering Adam Smith, Friedrich Hayek, Milton Friedman, John Maynard Keynes, Arthur Pigou, Hyman Minsky, and many, many others.

The key theme in the book is that markets do fail from time to time, potentially in disastrous ways, and that some element of government oversight and intervention is both critical and necessary, to avoid calamity.  This theme is hardly new, but many people resist it, and the book has the merit of marshalling the arguments more comprehensively than I have seen elsewhere.

As Cassidy describes it, “utopian economics” is the widespread view that the self-interest of individuals and agencies, allowed to express itself via a free market economy, will inevitably produce results that are good for the whole economy.  The book starts with eight chapters that sympathetically outline the history of thinking about utopian economics.  Along the way, he regularly points out instances when free market champions nevertheless described cases when government intervention and control was required.  For example, referring to Adam Smith, Cassidy writes:

Smith and his successors … believed that the government had a duty to protect the public from financial swindles and speculative panics, which were both common in 18th and 19th century Britain…

To prevent a recurrence of credit busts, Smith advocated preventing banks from issuing notes to speculative lenders.  “Such regulations may, no doubt, be considered as in some respects a violation of natural liberty”, he wrote.  “But these exertions of the natural liberty of a few individuals, which might endanger the security of the whole society, are, and ought to be, restrained by the laws of all governments…  The obligation of building party walls [between adjacent houses], in order to prevent the communication of a fire, is a violation of natural liberty, exactly of the same kind with the regulations of the banking trade which are here proposed.”

The book identifies long-time Federal Reserve chairman Alan Greenspan as one of the villains of the Great Crunch.  Near the beginning of the book, Cassidy quotes a reply given by Greenspan to the question “Were you wrong” asked of him in October 2008 by the US House Committee on Oversight and Government Reform:

“I made a mistake in presuming that the self-interest of organizations, specifically banks and others, were such that they were best capable of protecting their own shareholders and their equity in the firms…”

Greenspan was far from alone in his belief in the self-correcting power of economies in which self-interest is allowed to flourish.  There were many reasons for people to hold that belief.  It appeared to be justified both theoretically and empirically.  As Greenspan remarked,

“I had been going for forty years, or more, with very considerable evidence that it was working exceptionally well.”

Cassidy devotes another eight chapters to reviewing the history of criticisms of utopian economics.  This part of the book is entitled “Reality-based economics”.  It is full of fascinating and enlightening material, covering topics such as:

  • game theory (“the prisoners dilemma”),
  • behavioural economics (pioneered by Daniel Kahneman and Amos Tversky) – including disaster myopia,
  • problems of spillovers and externalities (such as pollution) – which can only be fully addressed by centralised collective action,
  • drawbacks of hidden information and the failure of “price signalling”,
  • loss of competitiveness when monopoly conditions are approached,
  • flaws in banking risk management policies (which drastically under-estimated the consequences of larger deviations from “business as usual”),
  • problems with asymmetric bonus structures,
  • and the perverse psychology of investment bubbles.
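The first item on that list – game theory’s prisoner’s dilemma – is also the cleanest toy model of the book’s “rational irrationality” theme. The sketch below uses an illustrative payoff matrix (the numbers are my own, not Cassidy’s) to show that defecting is each player’s individually rational choice, even though mutual defection leaves both players worse off than mutual cooperation:

```python
# Toy prisoner's dilemma. Payoff values are illustrative only.
# PAYOFF[(a, b)] = (payoff to player A, payoff to player B);
# "C" = cooperate, "D" = defect.
PAYOFF = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

def best_response(opponent_move):
    """Return the move that maximises a player's own payoff,
    given a fixed move by the opponent."""
    return max(["C", "D"], key=lambda m: PAYOFF[(m, opponent_move)][0])

# Whatever the opponent does, defecting pays more for the individual...
assert best_response("C") == "D"
assert best_response("D") == "D"

# ...yet mutual defection (1, 1) is worse for both players than
# mutual cooperation (3, 3): individually rational, collectively poor.
print(PAYOFF[("D", "D")], "vs", PAYOFF[("C", "C")])
```

The same structure – each agent’s best response producing a jointly bad equilibrium – recurs in several of the other topics on the list, from risk-taking incentives to investment bubbles.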

In summary, Cassidy lists four “illusions” of utopian economics:

  1. The illusion of harmony: that free markets always generate good outcomes;
  2. The illusion of stability: that the free market economy is sturdy;
  3. The illusion of predictability: that the distribution of returns can be foreseen;
  4. The illusion of Homo Economicus: that individuals are rational and act on perfect information.

The common theme of this section is that of “rational irrationality”: circumstances in which it is rational for people to choose courses of action that end up producing a bad outcome for society as a whole.  You can read more about “rational irrationality” in a recent online New Yorker article of the same name, written by Cassidy:

A number of explanations have been proposed for the great boom and bust, most of which focus on greed, overconfidence, and downright stupidity on the part of mortgage lenders, investment bankers, and Wall Street C.E.O.s. According to a common narrative, we have lived through a textbook instance of the madness of crowds. If this were all there was to it, we could rest more comfortably: greed can be controlled, with some difficulty, admittedly; overconfidence gets punctured; even stupid people can be educated. Unfortunately, the real causes of the crisis are much scarier and less amenable to reform: they have to do with the inner logic of an economy like ours. The root problem is what might be termed “rational irrationality”—behavior that, on the individual level, is perfectly reasonable but that, when aggregated in the marketplace, produces calamity.

Consider the [lending] freeze that started in August of 2007. Each bank was adopting a prudent course by turning away questionable borrowers and holding on to its capital. But the results were mutually ruinous: once credit stopped flowing, many financial firms—the banks included—were forced to sell off assets in order to raise cash. This round of selling caused stocks, bonds, and other assets to decline in value, which generated a new round of losses.

A similar feedback loop was at work during the boom stage of the cycle, when many mortgage companies extended home loans to low- and middle-income applicants who couldn’t afford to repay them. In hindsight, that looks like reckless lending. It didn’t at the time. In most cases, lenders had no intention of holding on to the mortgages they issued. After taking a generous fee for originating the loans, they planned to sell them to Wall Street banks, such as Merrill Lynch and Goldman Sachs, which were in the business of pooling mortgages and using the monthly payments they generated to issue mortgage bonds. When a borrower whose home loan has been “securitized” in this way defaults on his payments, it is the buyer of the mortgage bond who suffers a loss, not the issuer of the mortgage.

This was the climate that produced business successes like New Century Financial Corporation, of Orange County, which originated $51.6 billion in subprime mortgages in 2006, making it the second-largest subprime lender in the United States…
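Cassidy’s credit-freeze example is essentially a fire-sale feedback loop, and it can be sketched in a few lines of code. In this toy model (all parameters are invented for illustration, not calibrated to any real market), each bank prudently sells assets to raise a fixed amount of cash; the aggregate selling depresses the price, so each subsequent round must sell more assets to raise the same cash:

```python
def fire_sale(n_banks=10, price=1.0, cash_target=20.0,
              price_impact=0.001, rounds=3):
    """Toy fire-sale loop: each of n_banks sells enough assets at the
    current price to raise cash_target - prudent in isolation - but
    the aggregate selling pushes the market price down each round.
    All parameters are illustrative, not calibrated."""
    prices = [price]
    for _ in range(rounds):
        # Units each bank must sell rise as the price falls.
        units_sold = n_banks * (cash_target / price)
        # Aggregate selling depresses the market price (floored at 0.01).
        price = max(0.01, price - price_impact * units_sold)
        prices.append(price)
    return prices

prices = fire_sale()
# The price falls further in every round: individually sensible
# deleveraging produces a collective downward spiral.
assert all(later < earlier for earlier, later in zip(prices, prices[1:]))
print(prices)
```

Nothing in the loop requires any bank to be greedy or stupid – which is exactly the “rational irrationality” point.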

The book then provides a seven-chapter blow-by-blow run-through of the events of the Great Crunch itself.  Much of this material is familiar from recent news coverage and from other books, but the context provided by the prior discussion of utopian economics and reality-based economics provides new insight into the individual twists and turns of the unfolding crisis.  It becomes clear that the roots of the crunch go back much further than the “subprime mortgage crisis”.

The more worrying conclusion is that many of the conditions responsible for the Great Crunch remain in place:

In the world of utopian economics, the latest crisis of capitalism is always a blip.

As memories of September 2008 fade, revisionism and disaster myopia will become increasingly common.  Many will say that the Great Crunch wasn’t so bad, downplaying the government intervention that prevented a much, much worse outcome.  Incentives for excessive risk-taking will revive, and so will the lobbying power of banks and other financial firms.  If these special interests succeed in blocking meaningful reform, we could well end up with the worst of all worlds.

As Cassidy explains:

It won’t be as easy to deal with the bouts of instability to which our financial system is prone. But the first step is simply to recognize that they aren’t aberrations; they are the inevitable result of individuals going about their normal business in a relatively unfettered marketplace. Our system of oversight fails to account for how sensible individual choices can add up to collective disaster. Rather than blaming the pedestrians for swarming the footway, governments need to reinforce the foundations of the structure, by installing more stabilizers. “Our system failed in basic fundamental ways,” Treasury Secretary Timothy Geithner acknowledged earlier this year. “To address this will require comprehensive reform. Not modest repairs at the margin, but new rules of the game.”

Despite this radical statement of intent, serious doubts remain over whether the Obama Administration’s proposed regulatory overhaul goes far enough in dealing with the problem of rational irrationality…

In his final chapter, addressing the question “What is to be done?”, Cassidy advocates a number of proposals, ranging from the specific to the over-arching:

  • Banks that create and distribute mortgage securities should be forced to keep some of them on their books (perhaps as much as a fifth) – to make them monitor more closely the types of loan they purchase;
  • Mortgage brokers and mortgage lenders should be regulated at the federal level;
  • The government should outlaw stated-income loans, and enforce the existing fraud laws for mortgage applicants, which make it a crime to misrepresent your personal finances;
  • Wall Street needs taming … the more systemic risk an institution poses, the more tightly it should be controlled;
  • The Federal Reserve should set rules for Wall Street compensation and bonuses that all firms would have to follow … the aim must be to prevent rationally irrational behaviour.  Unless some restrictions are placed on people’s actions, they will inevitably revert to it.

Footnote: For more by John Cassidy, see his online blog.

29 October 2008

A market for different degrees of openness

Filed under: openness, regulation, Wireless Influencers — David Wood @ 3:52 am

To encourage participants to speak candidly, the proceedings at the Rutberg “Wireless Influencers” conferences are held away from the prying eyes of journalists. A few interesting ideas popped up during the discussions at the 2008 event over the last two days – but because of the confidentiality rules, I’m not able to name the people who raised these ideas (so I can’t give credit where credit is due).

The common theme of these ideas is the clash of openness and regulation – and (in some cases) the attempt to find creative solutions to this clash.

The first example arose during a talk by a representative from a major operator. The talk described the runaway success one of their products was experiencing in a third world country. This product involves the use of mobile phones to transfer money. The speaker said that the main reason this product could not be deployed in more developed countries (to address use cases like simplifying the payment of money to a teenage babysitter, or transferring cash to your children) is the dead hand of financial regulations: banks aren’t keen to allow operators to take over some of the functions that have traditionally been restricted to banks, so operators are legally barred from deploying these applications.

I found this ironic. Normally operators are the companies that are criticised for setting up regulatory systems that have the effect of maintaining their control over various important business processes (and thereby preserving their profits). But in this case, it was an operator who was criticising another part of industry for self-interestedly sheltering behind regulations.

Later in the day, one of the streams at the event discussed whether operators could ever allow users to install whatever applications they want, on their phones. The analogy was made with the world of the PC: the providers of network services for PCs generally have no veto over the applications which users choose to install. On the other hand, in some enterprise situations, a corporate IS department may well wish to impose that kind of control. In other words, for PCs, there is a range of different degrees of openness, depending on the environment. So, could a similar range of different degrees of openness be set up for mobile phones?

The idea here is that several different networks could form. In some, the network operator would impose restrictions on the applications that can be installed on the phones. In others, the network operators would be more permissive. In the second kind of network, users would be told that it was their own responsibility to deal with any unintended consequences from applications they installed.

Ideally, a kind of market would be formed, for networks that had different degrees of openness. Then we could let normal market dynamics determine which sort of network would flourish.

Could such a market actually be formed? Could closed networks and open networks co-exist? It seems worth thinking about.

And here’s one more twist – from a keynote discussion on the second day of the event. Rather than a network operator (or some other central certification authority) deciding which applications are suitable for installation on users’ phones, how about using the power of community ratings to push bad applications way down the list of available applications?

That’s an intriguing Web 2.0 kind of idea. On a network operating with this principle, most users would only see apps that had already received positive reviews. Apps that had bad consequences would instead receive bad reviews – and would therefore disappear off the bottom of the list of apps displayed in response to search queries. “Just like on YouTube”.
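The community-rating mechanism can be sketched in a few lines: rank apps by average review score, and hide anything whose average falls below a floor, so badly-reviewed apps simply drop out of search results rather than being centrally vetoed. (The app names, ratings, and threshold below are all invented for illustration.)

```python
def rank_apps(apps, min_rating=2.0):
    """Rank apps by average community rating, hiding any app whose
    average falls below min_rating. No central authority decides:
    bad apps disappear because users rate them down.
    Names, ratings, and threshold are illustrative."""
    def average(ratings):
        return sum(ratings) / len(ratings) if ratings else 0.0
    visible = [a for a in apps if average(a["ratings"]) >= min_rating]
    return sorted(visible, key=lambda a: average(a["ratings"]), reverse=True)

catalogue = [
    {"name": "HandyMaps", "ratings": [5, 4, 5]},
    {"name": "BatteryDrainer", "ratings": [1, 1, 2]},  # misbehaving app
    {"name": "PocketNews", "ratings": [4, 3, 4]},
]

for app in rank_apps(catalogue):
    print(app["name"])
# BatteryDrainer (average 1.33) never appears in the results.
```

The open question, of course, is whether ratings accumulate fast enough to bury a harmful app before it does damage – reviews arrive only after the harm.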

11 October 2008

Serious advice to developers in tough times

Filed under: Economics, FOWA, openness, regulation — David Wood @ 6:57 pm

As I mentioned in my previous article, the FOWA London event on “The Future of Web Apps” featured a great deal of passion and enthusiasm for technology and software development systems. However, as I watched the presentations on Day Two, I was repeatedly struck by a deeper level of seriousness.

For example, AMEE Director Gavin Starks urged the audience to consider how changes in their applications could help reduce CO2 emissions. AMEE has exceptionally large topics on its mind: the acronym stands for “Avoiding Mass Extinctions Engine”. Gavin sought to raise the aspiration level of developers: “If you really want to build an app that will change the world, how about building an app that will save the Earth?” But this talk was no pious homily: it contained several dozen ideas that could in principle act as possible starting points for new business ventures.

On a different kind of serious topic, Mahalo.com CEO Jason Calacanis elicited some gasps from the audience when he dared to suggest that, if startups are really serious about making a big mark in the business world, they should consider firing, not only their “average” employees, but also their “good” employees – under the rationale that “good is the enemy of the great”. The resulting audience Q&A could have continued the whole afternoon.

But the most topical presentation was the opening keynote by Sun Microsystems Distinguished Engineer Tim Bray. It started with a bang – with the words “I’m Scared” displayed in huge font on the screen.

With these words, Tim announced that he had, the previous afternoon, torn up the presentation he was previously planning to give – a presentation entitled “What to be Frightened of in Building A Web Application”.

Tim explained that the fear he would now address in his talk was about global economic matters rather than about usage issues with the likes of XML, Rails, and Flash. Instead of these technology-focused matters, he would cover the subject “Getting through the tough times”.

Tim described how he had spent several days in London ahead of the conference, somewhat jet lagged, watching lots of TV coverage about the current economic crisis. As he said, the web has the advantage of allowing everyone to get straight to the sources – and these sources are frightening, when you take the time to look at them. Tim explicitly referenced http://acrossthecurve.com/?p=1830, which contains the following gloomy prognosis:

…more and more it seems likely that the resolution of this crisis will be an historic financial calamity. Each and every step which central banks and regulators have taken to resolve the crisis has been met with failure. In the beginning, the steps would produce some brief stability.

In the last several days, the US Congress (belatedly) passed a bailout bill, the Federal Reserve has guaranteed commercial paper and in unprecedented coordination central banks around the globe slash base lending rates. Listen to the markets respond.

The market scoffs as Libor rises, stocks plummet and IBM is forced to pay usurious rates to borrow. There is no stability and no hiatus from the pain. It continues unabated in spite of the best efforts of dedicated people to solve it.

We are in the midst of an unfolding debacle. It is happening about us. I am not sure how or when it ends, but the end, when it arrives, will radically alter the way we live for a long time.

Whoever wins the US election and takes office in January will need prayers and divine intervention.

As Tim put it: “We’ve been running on several times the amount of money that actually exists. Now we’re going to have to manage on nearer the amount of money that does exist.” And to make things even more colourful, he said that the next few days could be like the short period of time in New Orleans after hurricane Katrina had passed, but before the floods struck (caused by damage brought about by the winds). For the world’s economy, the hurricane may have passed, but the flood is still to come.

The rest of Tim’s talk was full of advice that struck me as highly practical: what developers should do to increase their chances of survival through these tough times. (There’s a summary video here.) I paraphrase some highlights from my notes:

Double down and do a particularly good job. In these times, slack work could put your company out of business – or could cause your employer to decide your services are no longer necessary.

Large capital expenditures are a no-no. Find ways to work that don’t result in higher management being asked to sign large bills – they won’t.

Waterfalls are a no-no. No smart executive is going to commit to a lengthy project that will take longer than a year to generate any payback. Instead, get with the agile movement – pick out the two or three requirements in your project that you can deliver incrementally and which will result in payback in (say) 8-10 weeks.

Software licences are a no-no. Companies will no longer make large commitments to big licences for the likes of Oracle solutions. Open source is going to grow in prominence.

Contribute to open source projects. This is a great way to build professional credibility – to advertise your capabilities to potential new employers or business partners.

Get in the cloud. With cloud services, you only pay a small amount in the beginning, and you only pay larger amounts when traffic is flowing.
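Tim’s pay-as-you-go point can be made concrete with a toy cost comparison (all figures below are invented for illustration): a big upfront server purchase versus a small per-request cloud fee. At early-stage traffic levels the cloud model is far cheaper, and costs only grow once traffic – and presumably revenue – grows with it.

```python
def capex_cost(months, server_price=5000.0, monthly_hosting=200.0):
    """Own-hardware model: large upfront spend plus a flat running cost.
    All figures are invented for illustration."""
    return server_price + monthly_hosting * months

def cloud_cost(months, requests_per_month, fee_per_1000=0.50):
    """Pay-as-you-go model: no upfront spend; cost scales with traffic."""
    return months * (requests_per_month / 1000.0) * fee_per_1000

# A quiet early-stage app: 100,000 requests/month for its first 6 months.
print(capex_cost(6))           # 6200.0
print(cloud_cost(6, 100_000))  # 300.0
```

The crossover only arrives at high traffic volumes – exactly the point at which, in Tim’s framing, the business can afford the bill.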

Stop believing in technology religions. The web is technologically heterogeneous. Be prepared to learn new skills, to adopt new programming languages, or to change the kinds of applications you develop.

Think about the basic needs of users. There will be less call for applications about fun things, or about partying and music. There will be more demand for applications that help people to save money – for example, the lowest gas bill, or the cheapest cell phone costs.

Think about telecomms. Users will give up their HDTVs, their SUVs, and their overseas holidays, but they won’t give up their cell phones. The iPhone and the Android are creating some great new opportunities. Developers of iPhone applications are earning themselves hundreds of thousands of dollars from applications that cost users only $1.99 per download. Developers in the audience should consider migrating some of their applications to mobile – or creating new applications for mobile.

The mention of telecomms quickened my pulse. On the one hand, I liked Tim’s emphasis on the likely continuing demand for high-value low-cost mobile solutions. On the other hand, I couldn’t help noticing there were references to iPhone and Android, but not to Symbian (or to any of the phone manufacturers who are using Symbian software).

Then I reflected that, similarly, namechecks were missing for RIM, Windows Mobile, and Palm. Tim’s next words interrupted this chain of thought and provided further explanation: With the iPhone and Android, no longer are the idiotic moronic mobile network operators standing in the way with a fence of barbed wire between developers and the people who actually buy phones.

This fierce dislike for network operator interference was consistent with a message underlying the whole event: developers should have the chance to show what they can do, using their talent and their raw effort, without being held up by organisational obstacles and value-chain choke-points. Developers dislike seemingly arbitrary regulation. That’s a message I take very seriously.

However, we can’t avoid all regulation. Indeed – to turn back from applications to economics – lack of regulation is arguably a principal cause of our current economic crisis.

The really hard thing is devising the right form of regulation – the right form of regulation for financial markets, and the right form of regulation for applications on potentially vulnerable mobile networks.

Both tasks are tough. But the solution in each case surely involves greater transparency.

The creation of the Symbian Foundation is intended to advance openness in two ways:

  1. Providing more access to the source code;
  2. Providing greater visibility of the decisions and processes that guide changes in both the software platform and the management of the associated ecosystem.

This openness won’t dissolve all regulation. But it should ensure that the regulations evolve, more quickly, to something that more fully benefits the whole industry.

Blog at WordPress.com.