dw2

26 March 2010

In praise of the public domain

Filed under: books, Intellectual property — David Wood @ 1:20 pm

James Boyle’s book “The public domain: enclosing the commons of the mind” is an extremely thoughtful, carefully written account of a major issue.

The issue is that a very powerful and useful resource is, largely unwittingly, being deeply damaged by people taking actions that on the surface seem to make sense.

The resource in question can be called “the public domain”.  It’s the general set of ideas, designs, and artforms which we can all build on, to create yet more ideas, designs, and artforms.  Everyone who writes a book or blog article, draws a picture, composes music, designs a product, or proposes a new scientific theory, takes all kinds of advantage of this public domain.  However, this domain is under increasing attack.

The attack comes as a result of concepts of “intellectual property” being applied too aggressively.  In this line of thinking:

  1. People generally need incentives to undertake arduous work to create new ideas, designs, and artforms;
  2. An important aspect of incentive is expected monetary reward from expected benefits from these ideas, designs, and artforms;
  3. Any technology or social practice that undercuts this potential monetary reward is, therefore, suspect;
  4. In particular, the growing ease of copying brought by computer tools and Internet distribution means that we need to tighten the laws governing copying;
  5. Otherwise, the incentive to create new ideas, designs, and artforms will be undermined;
  6. As a result, laws about copyrights and patents need to be extended.

Of course, Boyle is far from being the first person to criticise this general line of thinking.  However, what makes Boyle’s book stand out is the balance which he brings.

Boyle carefully explains the merits of the arguments in favour of intellectual property, as well as the merits of the arguments against extending laws about intellectual property.  He believes there is substantial benefit from patents and copyrights continuing to exist.  As he notes in the final chapter of his book:

If the answer were that intellectual property rights are bad, then forming good policy would be easy. But that is as silly and one-sided an idea as the maximalist one I have been criticizing here. Here are … examples:

1. Drug patents do help produce drugs. Jettisoning them is a bad idea—though experimenting with additional and alternative methods of encouraging medical innovation is a very good one.

2. I believe copyrights over literary works should be shorter, and that one should have to renew them after twenty-eight years—something that about 85 percent of authors and publishers will not do, if prior history is anything to go by. I think that would give ample incentives to write and distribute books, and give us a richer, more accessible culture and educational system to boot, a Library of Congress where you truly can “click to get the book” as my son asked me to do years ago now. But that does not mean that I wish to abolish copyright. On the contrary, I think it is an excellent system…

(The text of the entire book is available free online, under a Creative Commons licence.)

But Boyle also argues, persuasively, throughout his book, that there need to be limits to the application of ideas about intellectual property.  As summarised at the start of the ninth chapter:

It is a mistake to think of intellectual property in the same way we think of physical property…

Limitations and exceptions to those rights are as important as the rights themselves…

The public domain has a vital and tragically neglected role to play in innovation and culture…

Relentlessly expanding property rights will not automatically bring us increased innovation in science and culture…

The second enclosure movement is more troubling than the first…

It is unwise to extend copyright again and again, and to do so retrospectively, locking up most of twentieth-century culture in order to protect the tiny fragment of it that is still commercially available…

Technological improvements bring both benefits and costs to existing rights holders—both of which should be considered when setting policy…

We need a vigorous set of internal limitations and exceptions within copyright, or control over content will inevitably become control over the medium of transmission…

The Internet should make us think seriously about the power of nonproprietary and distributed production…

Perhaps the most powerful argument in the list above is the third one: we need a healthy public domain, for the good of all of us – so that new ideas, designs, and artforms can continue to be developed.  To make this argument more vivid, Boyle builds an intriguing analogy:

In a number of respects, the politics of intellectual property and the public domain is at the stage that the American environmental movement was at in the 1950s.

In 1950, there were people who cared strongly about issues we would now identify as “environmental”—supporters of the park system and birdwatchers, but also hunters and those who disdained chemical pesticides in growing their foods. In the world of intellectual property, we have start-up software engineers, libraries, appropriationist artists, parodists, biographers, and biotech researchers. In the 50s and 60s, we had flurries of outrage over particular crises—burning rivers, oil spills, dreadful smog. In the world of intellectual property, we have the kind of stories I have tried to tell here. Lacking, however, is a general framework, a perception of common interest in apparently disparate situations.

Crudely speaking, the environmental movement was deeply influenced by two basic analytical frameworks. The first was the idea of ecology: the fragile, complex, and unpredictable interconnections between living systems. The second was the idea of welfare economics—the ways in which markets can fail to make activities internalize their full costs. The combination of the two ideas yielded a powerful and disturbing conclusion. Markets would routinely fail to make activities internalize their own costs, particularly their own environmental costs. This failure would, routinely, disrupt or destroy fragile ecological systems, with unpredictable, ugly, dangerous, and possibly irreparable consequences. These two types of analysis pointed to a general interest in environmental protection and thus helped to build a large constituency which supported governmental efforts to that end. The duck hunter’s preservation of wetlands as a species habitat turns out to have wider functions in the prevention of erosion and the maintenance of water quality. The decision to burn coal rather than natural gas for power generation may have impacts on everything from forests to fisheries. The attempt to reduce greenhouse gases and mitigate the damage from global warming cuts across every aspect of the economy.

Of course, it would be silly to think that environmental policy was fueled only by ideas rather than more immediate desires. As William Ruckelshaus put it, “With air pollution there was, for example, a desire of the people living in Denver to see the mountains again. Similarly, the people living in Los Angeles had a desire to see one another.” Funnily enough, as with intellectual property, changes in communications technology also played a role. “In our living rooms in the middle sixties, black and white television went out and color television came in. We have only begun to understand some of the impacts of television on our lives, but certainly for the environmental movement it was a bonanza. A yellow outfall flowing into a blue river does not have anywhere near the impact on black and white television that it has on color television; neither does brown smog against a blue sky.” More importantly perhaps, the technologically fueled deluge of information, whether from weather satellites or computer models running on supercomputers, provided some of the evidence that—eventually—started to build a consensus around the seriousness of global warming.

Despite the importance of these other factors, the ideas I mentioned—ecology and welfare economics—were extremely important for the environmental movement. They helped to provide its agenda, its rhetoric, and the perception of common interest underneath its coalition politics. Even more interestingly, for my purposes, those ideas—which began as inaccessible scientific or economic concepts, far from popular discourse—were brought into the mainstream of American politics. This did not happen easily or automatically. Popularizing complicated ideas is hard work. There were popular books, television discussions, documentaries on Love Canal or the California kelp beds, op-ed pieces in newspapers, and pontificating experts on TV. Environmental groups both shocking and staid played their part, through the dramatic theater of a Greenpeace protest or the tweedy respectability of the Audubon Society. Where once the idea of “the Environment” (as opposed to “my lake,” say) was seen as a mere abstraction, something that couldn’t stand against the concrete benefits brought by a particular piece of development, it came to be an abstraction with both the force of law and of popular interest behind it.

To me, this suggests a strategy for the future of the politics of intellectual property, a way to save our eroding public domain.

In both areas, we seem to have the same recipe for failure in the structure of the decision-making process. Democratic decisions are made badly when they are primarily made by and for the benefit of a few stakeholders, whether industrialists or content providers. This effect is only intensified when the transaction costs of identifying and resisting the change are high. Think of the costs and benefits of acid rain-producing power generation or—less serious, but surely similar in form—the costs and benefits of retrospectively increasing copyright term limits on works for which the copyright had already expired, pulling them back out of the public domain…

How important are these issues?

We can all laugh at the famous xkcd stick figure cartoon lamenting “sometimes I just can’t get outraged over copyright law”.

But Boyle writes persuasively on this topic too:

Who can blame the stick figure? Certainly not I. Is it not silly to equate the protection of the environment with the protection of the public domain? After all, one is the struggle to save a planetary ecology and the other is just some silly argument about legal rules and culture and science. I would be the first to yield primacy to the environmental challenges we are facing. Mass extinction events are to be avoided, particularly if they involve you personally. Yet my willingness to minimize the importance of the rules that determine who owns science and culture goes only so far.

A better intellectual property system will not save the planet. On the other hand, one of the most promising sets of tools for building biofuels comes from synthetic biology. Ask some of the leading scientists in that field why they devoted their precious time to trying to work out a system that would offer the valuable incentives that patents provide while leaving a commons of “biobricks” open to all for future development. They did not worry about these rules naturally; they were forced to do so.

A better intellectual property system certainly will not end world hunger. Still it is interesting to read about the lengthy struggles to clear the multiple, overlapping patents on Golden Rice—a rice grain genetically engineered to cure vitamin deficiencies that nearly perished in a thicket of blurrily overlapping rights.

A better intellectual property system will not cure AIDS or rheumatoid arthritis or Huntington’s disease or malaria. Certainly not by itself. Patents have already played a positive role in contributing to treatments for the first two, though they are unlikely to help much on the latter two; the affected populations are too few or too poor. But overly broad, or vague, or confusing patents could (and I believe have) hurt all of those efforts—even those being pursued out of altruism. Those problems could be mitigated. Reforms that made possible legal and facilitated distribution of patented medicines in Africa might save millions of lives. They would cost drug companies little. Africa makes up 1.6 percent of their global market. Interesting alternative methods have even been suggested for encouraging investment in treatments for neglected diseases and diseases of the world’s poor. At the moment, we spend 90 percent of our research dollars on diseases that affect 10 percent of the global population. Perhaps this is the best we can do, but would it not be nice to have a vigorous public debate on the subject?…

As for myself, I’ve already listed the following as one of my proposed high priorities for society over the next decade:

  • Patent system reform – to address aspects of intellectual property law where innovation and collaboration are being hindered rather than helped

Boyle’s book is a great contribution to the cause of finding the best “sweet spot” balance between intellectual property and the public domain.  It deserves to be very widely read.

Footnote: Many thanks to Martin Budden for drawing my attention to this book.

18 March 2010

Animal spirits – a richer understanding of economics

Filed under: books, Economics, irrationality, recession — David Wood @ 3:41 pm

It’s no secret that some of the fundamental assumptions of economic theory are faulty.

Specifically, the primary model in economics is that individuals invariably take actions which make good economic sense.  The mythical “Homo economicus” (“Economic Man”) is motivated at all times:

  • To purchase goods and services at the lowest available cost;
  • To create goods and services that they can sell at the highest available price;
  • To minimise the amount of effort that they have to expend to create these goods and services.

Real world people, of course, deviate from this model in numerous ways.  Lots of other things motivate us, beyond purely economic concerns.

Indeed, we can arrange human decisions on a two-by-two matrix:

  • On one dimension, decisions vary between economic motivations and non-economic motivations;
  • On the other dimension, decisions vary between rational and irrational.

Theories of classical economics take their lead from just one of the resulting four quadrants – the quadrant of economic motivations pursued rationally.  But what impact do the other three quadrants have on overall economic questions, such as booms and busts, inflation, employment, savings, and inequality?

Many classical economists give the strong impression that these other three quadrants have limited impact – somehow their effects average out, or can be discounted.  More recently, the rise of behavioural economics has challenged this conclusion, by increasingly providing evidence and analysis of factors such as:

  • Irrational biases in human decision making;
  • Herd mentality;
  • Limits of information;
  • The motivational importance of factors other than economic ones.

The best account I’ve encountered of this whole topic is the book “Animal Spirits: How Human Psychology Drives the Economy, and Why It Matters for Global Capitalism”.

This book was authored last year by two eminent economists, George A. Akerlof and Robert J. Shiller.  Their phrase “Animal Spirits” is taken from Keynes – from a part of the thinking of Keynes that, they believe, has been too often neglected (even by people who describe themselves as followers of Keynes):

The markets are moved by animal spirits, and not by reason

(paraphrased from Keynes’ 1936 book “The General Theory of Employment, Interest and Money”)

Akerlof and Shiller provide five chapters that explain each of five important contributors to “animal spirits”:

  • Confidence and Its Multipliers
  • Fairness
  • Corruption and Bad Faith
  • Money Illusion
  • Stories.

These explanations interweave many accounts of economic episodes over the decades, adding to the plausibility of the claim that these factors matter a great deal.

Next, Akerlof and Shiller show how considerations of these “animal spirits” provide deeper insight into each of eight key questions of economic theory:

  • Why Do Economies Fall into Depression?
  • Why Do Central Bankers Have Power over the Economy (Insofar as They Do)?
  • Why Are There People Who Cannot Find a Job?
  • Why Is There a Trade-off between Inflation and Unemployment in the Long Run?
  • Why Is Saving for the Future So Arbitrary?
  • Why Are Financial Prices and Corporate Investments So Volatile?
  • Why Do Real Estate Markets Go through Cycles?
  • Why Is There Special Poverty among Minorities?

To my mind, the analysis is devastating: any serious discussion of economics needs to take account of these findings.

Footnote: Amazon.com contains a whole series of nasty and devious reviews of this book.  Don’t be misled by them!  The motivations of the people writing these reviews would be a worthy subject for an analysis in its own right.  There are other kinds of “animal spirits” afoot here.

27 February 2010

Achieving a 130-fold improvement in 40 years

Filed under: books, Economics, green, Kurzweil, RSA, solar energy, sustainability — David Wood @ 3:23 pm

One reason I like London so much is the quality of debate and discussion that takes place, at least three times most weeks, at the RSA.

The full name of this organisation is “the Royal Society for the encouragement of Arts, Manufactures and Commerce”.  It’s been holding meetings since 1754.  Early participants included Benjamin Franklin, Samuel Johnson, and Richard Arkwright.

Recently, there have been several RSA meetings addressing the need for significant reform of how the global economy operates.  Otherwise, these speakers imply, the future will be much bleaker than the present.

On Wednesday, Professor Tim Jackson of the University of Surrey led a debate on the question “Is Prosperity Without Growth Possible?”  Professor Jackson recently authored the book “Prosperity Without Growth: Economics for a Finite Planet”.  The book contains an extended version of his remarks at the debate.

I find myself in agreement with a great deal of what the book says:

  • Continuous economic growth is a shallow and, by itself, dangerous goal;
  • Beyond an initial level, greater wealth has only a weak correlation with greater prosperity;
  • Greater affluence can bring malaise – especially in countries with significant internal inequalities;
  • Consumers frequently find themselves spending money they don’t have, to buy new goods they don’t really need;
  • The recent economic crisis provides us with an important opportunity to reflect on the operation of economics;
  • “Business as usual” is not a sustainable answer;
  • There is an imperative to consider whether society can operate without its existing commitment to regular GDP growth.

What makes this book stand out is its recognition of the enormous practical problems in stopping growth.  Both growth and de-growth face significant perils.  As the start of chapter 12 of the book states:

Society is faced with a profound dilemma.  To resist growth is to risk economic and social collapse.  To pursue it relentlessly is to endanger the ecosystems on which we depend for long-term survival.

For the most part, this dilemma goes unrecognised in mainstream policy…  When reality begins to impinge on the collective consciousness, the best suggestion to hand is that we can somehow ‘decouple’ growth from its material impacts…

The sheer scale of this task is rarely acknowledged.  In a world of 9 billion people all aspiring to western lifestyles, the carbon intensity of every dollar of output must be at least 130 times lower in 2050 than it is today…

Never mind that no-one knows what such an economy looks like.  Never mind that decoupling isn’t happening on anything like that scale.  Never mind that all our institutions and incentive structures continually point in the wrong direction.  The dilemma, once recognised, looms so dangerously over our future that we are desperate to believe in miracles.  Technology will save us.  Capitalism is good at technology…

This delusional strategy has reached its limits.  Simplistic assumptions that capitalism’s propensity for efficiency will stabilise the climate and solve the problem of resource scarcity are almost literally bankrupt.  We now stand in urgent need of a clearer vision, braver policy-making, something more robust in the way of a strategy with which to confront the dilemma of growth.

The starting point must be to unravel the forces that keep us in damaging denial.  Nature and structure conspire together here.  The profit motive stimulates a continual search for newer, better or cheaper products and services.  Our own relentless search for novelty and social status locks us into an iron cage of consumerism.  Affluence itself has betrayed us.

Affluence breeds – and indeed relies on – the continual production and reproduction of consumer novelty.  But relentless novelty reinforces anxiety and weakens our ability to protect long-term social goals.  In doing so it ends up undermining our own well-being and the well-being of those around us.  Somewhere along the way, we lose the shared prosperity we sought in the first place.

None of this is inevitable.  We can’t change ecological limits.  We can’t alter human nature.  But we can and do create and recreate the social world. Its norms are our norms.  Its visions are our visions.  Its structures and institutions shape and are shaped by those norms and visions.  This is where transformation is needed…

As I said, I find myself in agreement with a great deal of what the book says.  The questions raised in the book deserve a wide hearing.  Society needs higher overarching goals than merely increasing our GDP.  Society needs to focus on new priorities, which take into account the finite nature of the resources available to us, and the risks of imminent additional ecological and economic disaster.

However, I confess to being one of the people who believe (with some caveats…) that “technology will save us”.  Let’s look again at this figure of a 130-fold decrease needed, between now and 2050.

The figure of 130 comes from a calculation in chapter 5 of the book.  I have no quibble with the figure.  It comes from the Paul Ehrlich equation:

I = P * A * T

where:

  • I is the impact on the environment resulting from consumption
  • P is the population
  • A is the consumption or income level per capita (affluence)
  • T is the technological intensity of economic output.

Jackson’s book considers various scenarios.  Scenario 4 assumes a global population of 9 billion by 2050, all enjoying a lifestyle equivalent to that of the average EU citizen, with that lifestyle itself having grown by a modest 2% per annum over the intervening 40 years.  To bring today’s carbon intensity of economic output down to the level the IPCC sees as required to avoid catastrophic climate change would then need a 130-fold reduction in T over the same period.
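To see how a number like this emerges from the identity, here is a minimal sketch in Python, rearranging I = P * A * T as T = I / (P * A).  The input values are rough assumptions of my own, chosen purely for illustration; it is Jackson’s own inputs in chapter 5 that yield the 130-fold figure.

    # Sketch of the Ehrlich identity I = P * A * T, rearranged as T = I / (P * A).
    # All inputs below are illustrative assumptions, not Jackson's data.
    P_now, P_2050 = 6.8e9, 9.0e9   # assumed global population, today and in 2050
    A_growth = 1.02 ** 40          # EU-level income growing at 2% p.a. for 40 years
    A_catchup = 5.0                # assumed factor: world-average income rising
                                   # to today's EU level
    I_target = 0.2                 # assumed: total emissions permitted to be only
                                   # 20% of today's level
    # Required improvement factor in T (carbon per unit of economic output):
    T_factor = (P_2050 / P_now) * A_growth * A_catchup / I_target
    print(f"T must improve roughly {T_factor:.0f}-fold")  # ~73-fold with these inputs

With tougher inputs for the emissions target and the income catch-up, the required factor climbs quickly; Jackson’s inputs produce 130.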

How feasible is an improvement factor of 130 in technology, over the next 40 years?  How good is the track record of technology at solving this kind of problem?

Some of the other speakers at the RSA event were hesitant to make any predictions for a 40 year time period.  They noted that history has a habit of making this kind of prediction irrelevant.  Jackson’s answer is that since we have little confidence that we can make a significant change in T, we should look for ways to reduce A.  Jackson is also worried that recent talk of a ‘Green New Deal’:

  • Is still couched in language of economic growth, rather than improvement in prosperity;
  • Has seen little translation into action, since first raised during 2008-9.

My own answer is that 130 represents just over 7 doublings (2 raised to the 7th power is 128), and that at least some parts of technology have no problem improving through seven doublings over 40 years.  Indeed, at the usual Moore’s Law doubling period of two years, for improvements in semiconductor density, this kind of improvement would require only 14 years, rather than 40.
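This arithmetic can be checked in a few lines of Python; the sketch below merely restates the numbers in the paragraph above.

    import math
    target = 130                    # required improvement factor in T
    doublings = math.log2(target)   # ~7.02: just over 7 doublings
    print(f"{target}-fold is {doublings:.2f} doublings (2**7 = {2**7})")
    print(f"At Moore's Law pace (2 years per doubling): {2 * doublings:.0f} years")
    print(f"Spread over 40 years: one doubling every {40 / doublings:.1f} years")

In other words, even a doubling period of five to six years – far slower than Moore’s Law – would deliver a 130-fold improvement within the 40-year window.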

To consider how Moore’s Law improvements could transform the energy business, radically reducing its carbon intensity, here are some remarks by futurist Ray Kurzweil, as reported by LiveScience Senior Editor Robin Lloyd:

Futurist and inventor Ray Kurzweil is part of distinguished panel of engineers that says solar power will scale up to produce all the energy needs of Earth’s people in 20 years.

There is 10,000 times more sunlight than we need to meet 100 percent of our energy needs, he says, and the technology needed for collecting and storing it is about to emerge as the field of solar energy is going to advance exponentially in accordance with Kurzweil’s Law of Accelerating Returns. That law yields a doubling of price performance in information technologies every year.

Kurzweil, author of “The Singularity Is Near” and “The Age of Intelligent Machines,” worked on the solar energy solution with Google Co-Founder Larry Page as part of a panel of experts convened by the National Academy of Engineering to address the 14 “grand challenges of the 21st century,” including making solar energy more economical. The panel’s findings were announced here last week at the annual meeting of the American Association for the Advancement of Science.

Solar and wind power currently supply about 1 percent of the world’s energy needs, Kurzweil said, but advances in technology are about to expand with the introduction of nano-engineered materials for solar panels, making them far more efficient, lighter and easier to install. Google has invested substantially in companies pioneering these approaches.

Regardless of any one technology, members of the panel are “confident that we are not that far away from a tipping point where energy from solar will be [economically] competitive with fossil fuels,” Kurzweil said, adding that it could happen within five years.

The reason why solar energy technologies will advance exponentially, Kurzweil said, is because it is an “information technology” (one for which we can measure the information content), and thereby subject to the Law of Accelerating Returns.

“We also see an exponential progression in the use of solar energy,” he said. “It is doubling now every two years. Doubling every two years means multiplying by 1,000 in 20 years. At that rate we’ll meet 100 percent of our energy needs in 20 years.”

Other technologies that will help are solar concentrators made of parabolic mirrors that focus very large areas of sunlight onto a small collector or a small efficient steam turbine. The energy can be stored using nano-engineered fuel cells, Kurzweil said.

“You could, for example, create hydrogen or hydrogen-based fuels from the energy produced by solar panels and then use that to create fuel for fuel cells”, he said. “There are already nano-engineered fuel cells, microscopic in size, that can be scaled up to store huge quantities of energy”, he said…

To be clear, I don’t see any of this as inevitable.  The economy as a whole could falter again, jeopardising “Kurzweil’s Law of Accelerating Returns”.  Less dramatically, Moore’s Law could run out of steam, or it might prove harder than expected to apply silicon improvements in systems for generating, storing, and transporting energy.  I therefore share Professor Jackson’s warning that capitalism, by itself, cannot be trusted to get the best out of technology.  That’s why this debate is particularly important.

1 February 2010

On the undue adulation for ‘You are not a gadget’

Filed under: books, collaboration, Open Source — David Wood @ 12:46 pm

Perhaps the most disturbing thing about Jaron Lanier’s new book “You are not a gadget: a manifesto” is the undue adulation it has received.

For example, here’s what eminent theoretical physicist Lee Smolin says about the book (on its back cover):

Jaron Lanier’s long-awaited book is fabulous – I couldn’t put it down and shouted out Yes! Yes! on many pages.

Smolin goes on:

Lanier is a rare voice of sanity in the debate about the relationship between computers and us human beings.  He convincingly shows us that the idea of digital computers having human-like intelligence is a fantasy.

However, when I read it, far from shouting out Yes! Yes! on many pages, the thoughts that repeatedly came to my mind were: No! No! What a misunderstanding! What a ridiculous straw man! How poor! How misleading!

The titles of reviews of Lanier’s book on Amazon.com show lots more adulation:

  • A brilliant work of Pragmatic “Techno-Philosophy” (a five-star review)
  • Thought provoking and worthy of your time (ditto)
  • One of the best books in a long while (ditto)
  • A tribute to humanity (ditto)

That last title indicates what is probably going on.  Many people feel uneasy that “humanity” is seemingly being stretched, trampled, lost, and reduced, by current changes in our society – including the migration of so much culture online, and the increasing ubiquity of silicon brains.  So they are ready to clutch at straws, with the hope of somehow reaffirming a more natural state of humanity.

But this is a bad straw to clutch at.

Interestingly, even one of the five star reviews has to remark that there are significant mistakes in Lanier’s account:

While my review remains positive, I want to point out one major problem in the book. The account of events on p. 125-126 is full of misinformation and errors. The LISP machine in retrospect was a horrible idea. It died because the RISC and MIPS CPU efforts on the west coast were a much better idea. Putting high-level software (LISP) into electronics was a bad idea.

Stallman’s dysfunctional relationship with Symbolics is badly misrepresented. Stallman’s licence was not the first or only free software licence…

My own list of the misinformation and errors in this book would occupy many pages.  Here’s just a snippet:

1. The iPhone and UNIX

Initially, I liked Lanier’s account of the problems caused by lock-in.  But then (page 12) he complains, incredibly, that some UI problems on the iPhone are due to the fact that the operating system on the iPhone has had to retain features from UNIX:

I have an iPhone in my pocket, and sure enough, the thing has what is essentially UNIX in it.  An unnerving element of this gadget is that it is haunted by a weird set of unpredictable user interface delays.  One’s mind waits for the response to the press of a virtual button, but it doesn’t come for a while.  An odd tension builds during that moment, and easy intuition is replaced by nervousness.  It is the ghost of UNIX, still refusing to accommodate the rhythms of my body and my mind, after all these years.

As someone who has been involved for more than 20 years with platforms that enable UI experiences, I can state categorically that delays in UI can be addressed at many levels.  It is absurd to suggest that a hangover from UNIX days means that all UIs on mobile devices (such as the iPhone) are bound to suffer unnerving delays.

2. Obsession with anonymous posters

Time and again Lanier laments that people are encouraged to post anonymously to the Internet.  In becoming anonymous, he argues, they are de-humanised.

My reaction:

  • It is useful that the opportunity for anonymous posting exists;
  • However, in the vast bulk of the discussions in which I participate, most people sign their names, and links are available to their profiles;
  • Rather than a sea of anonymous interactions, there’s a sea of individuals ready to become better known, each with their own fascinating quirks and strengths.

3. Lanier’s diatribe against auto-layout features in Microsoft Word

Lanier admits (page 27) that he is “all for the automation of petty tasks” by software.  But (like most of us) he’s had the experience where Microsoft Word makes a wrong decision about an automation it presumes we want to do:

You might have had the experience of having Microsoft Word suddenly determine, at the wrong moment, that you are creating an indented outline…

This type of design feature is nonsense, since you end up having to do more work than you would otherwise in order to manipulate the software’s expectations of you.

Most people would say this just shows that there are still bugs in the (often useful) auto-layout feature.  Not so Lanier.  Instead, incredibly, he imputes a sinister motivation onto the software’s designers:

The real [underlying] function of the feature isn’t to make life easier for people.  Instead, it promotes a new philosophy: that the computer is evolving into a life-form that can understand people better than people can understand themselves.

Lanier insists there’s a dichotomy: either a software designer is trying to make tasks easier for users, or the software designer has views that computers will, one day, be smarter than humans.  Why would the latter view (if held) mean the former cannot also be true? And why is “this type of design feature” nonsense?

4. Analysis of Alan Turing

Lanier’s analysis (and psycho-analysis) of AI pioneer Alan Turing is particularly cringe-worthy, and was the point where, for me, the book lost all credibility.

For example, Lanier tries to score points against Turing by commenting (page 31) that:

Turing’s 1950 paper on the test includes this extraordinary passage: “In attempting to construct such machines we should not be irreverently usurping His power of creating souls, any more than we are in the procreation of children: rather we are, in either case, instruments of His will providing mansions for the souls that He creates”.

However, referring to the context (Turing’s paper is available online here) indicates that Turing is, in the quoted passage, in the midst of seeking to engage with a number of different objections to his main hypothesis.  Each time, he seeks to enter into the mindset of people who might oppose his thinking.  This extract is from the section “The Theological Objection”.  Immediately after the section highlighted by Lanier, Turing’s paper goes on to comment:

However, this is mere speculation. I am not very impressed with theological arguments whatever they may be used to support. Such arguments have often been found unsatisfactory in the past. In the time of Galileo it was argued that the texts, “And the sun stood still . . . and hasted not to go down about a whole day” (Joshua x. 13) and “He laid the foundations of the earth, that it should not move at any time” (Psalm cv. 5) were an adequate refutation of the Copernican theory. With our present knowledge such an argument appears futile. When that knowledge was not available it made a quite different impression.

Given a choice between the analytic powers of Turing and those of Lanier, I would pick Turing very nearly 100% of the time.

5. Clay Shirky and the latent cognitive surplus

Lanier’s treatment of Clay Shirky’s ideas is equally deplorable – sleight of hand again distorts the original message.  It starts off fine, with Lanier quoting an April 2008 article by Shirky:

And this is the other thing about the size of the cognitive surplus we’re talking about. It’s so large that even a small change could have huge ramifications. Let’s say that everything stays 99 percent the same, that people watch 99 percent as much television as they used to, but 1 percent of that is carved out for producing and for sharing. The Internet-connected population watches roughly a trillion hours of TV a year. That’s about five times the size of the annual U.S. consumption. One per cent of that is 100 Wikipedia projects per year worth of participation.

I think that’s going to be a big deal. Don’t you?
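Shirky’s arithmetic here is easy to verify.  The sketch below uses his estimate, from the same talk, that the whole of Wikipedia represents roughly 100 million hours of human thought.

    tv_hours = 1e12         # Shirky: ~1 trillion hours of TV watched per year
    carved_out = 0.01       # the 1% hypothetically diverted to producing and sharing
    wikipedia_hours = 1e8   # Shirky's estimate: ~100 million hours of human
                            # thought in the whole of Wikipedia
    projects = tv_hours * carved_out / wikipedia_hours
    print(f"{projects:.0f} Wikipedia-sized projects per year")  # -> 100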

In Shirky’s view, there’s lots of time available for people to apply to creative tasks, if only they would spend less time watching sitcoms on TV.  Lanier pokes nauseating fun at this suggestion, but only (page 49) by means of changing the time available into “seconds of salvaged” time.  (Who mentioned seconds?  Surely Shirky is talking about people applying themselves for longer than seconds at a time.)  Lanier labours his point with ridiculous hyperbole:

How many seconds of salvaged erstwhile television time would need to be harnessed to replicate the achievements of, say, Albert Einstein?  It seems to me that even if we could network all the potential aliens in the galaxy – quadrillions of them, perhaps – and get each of them to contribute some seconds to a physics wiki, we would not replicate the achievements of even one mediocre physicist, much less a great one.

6. Friends and Facebook friends

Lanier really seems to believe (page 53) that people who use Facebook cannot distinguish between “Facebook friends” and “real world friends”.  He should talk more often to people who use Facebook, to see if they really are so “reduced” as he implies.

7. Lack of appreciation for security researchers

Lanier also rails (page 65) against people who investigate potential security vulnerabilities in software systems.

It seems he would prefer us all to live in ignorance about these potential vulnerabilities.

8. The Long Tail and individuals

Lanier cannot resist an ill-warranted attack on the notion of the long tail.  Describing a proposal of his own for how authors and artists could be rewarded for Internet usage of their material, Lanier makes the bizarre comment (page 101):

Note that this is a very different idea from the long tail, because it rewards individuals rather than cloud owners

Where did the assumption come from that writers who describe the Long Tail are only interested in rewarding “cloud owners” such as Amazon and Google?

9. All generations from Generation X onwards are somnolent

Lanier bemoans the blandness of the youth (page 128):

At the time that the web was born, in the early 1990s, a popular trope was that a new generation of teenagers, raised in the conservative Reagan years, had turned out exceptionally bland.  The members of “Generation X” were characterised as blank and inert.  The anthropologist Steve Barnett compared them to pattern exhaustion, a phenomenon in which a culture runs out of variations of traditional designs in their pottery and becomes less creative.

A common rationalisation in the fledgling world of digital culture back then was that we were entering a transitional lull before a creative storm – or were already in the eye of one.  But the sad truth is that we were not passing through a momentary lull before a storm.  We had instead entered a persistent somnolence, and I have come to believe that we will only escape it when we kill the hive.

My experience is at radical odds with this.  Through my encounters with year after year of graduate recruit intake at Symbian, I found many examples, each year, of youth full of passion, verve, and creativity.

The cloud which Lanier fears so much doesn’t stifle curiosity and creativity, but provides many means for people to develop a fuller human potential.

10. Open Source and creativity

Lanier complains that Open Source – and, more generally, Web 2.0 collaborative processes – has failed to produce anything of real value.  All it can do, he says (page 122 – and repeated numerous times elsewhere), is to imitate: Linux is a copy of UNIX and Wikipedia is a copy of Encyclopaedia Britannica.

But what about the UI creativity of Firefox (an open source web browser that introduced new features ahead of the Microsoft alternative)?

How about the creativity of many of the applications on mobile devices, such as the iPhone, that demonstrate mashups of information from diverse sources (including location-based information)?

Even to say that Wikipedia is derivative from Britannica misses the point, of course, that material in Wikipedia is updated so quickly.  Yes, there’s occasional unreliability, but people soon learn how to cross-check it.

It goes on…

For each point I’ve picked out above, there are many others I could have shared as well.

Lanier is speaking this evening (Monday 1st February) at London’s RSA.  The audience is usually respectful, but can ask searching questions.  This evening, if the lecture follows the same lines as the book, I expect to see more objections than usual.  However, I also expect there will be some in the audience who jump at the chance to defend humanity from the perceived incursions from computers and AI.

For a wider set of objections to Lanier’s ideas – generally expressed much more politely than my comments above – see this compendium from Edge.

My own bottom line view is that technology will significantly enhance human experience and creativity, rather than detract from it.

To be clear, I accept that there are good criticisms that can be made of the excesses of Web 2.0, open source, and so on.  For example, the second half of Nick Carr’s book “The Big Switch: Rewiring the World, from Edison to Google” is a good start.  (Andrew Orlowski produced an excellent review of Carr’s book, here.)  Lanier’s book is not a good contribution.

17 January 2010

Embracing engineering for the whole earth

Filed under: books, climate change, Genetic Engineering, geoengineering, green, Nuclear energy — David Wood @ 2:14 am

One thing I’m trying to do with my blog is to provide useful pointers, into the vast amount of material that’s available both online and offline, to the small fraction of that material which does the best job of summarising, extending, and challenging current thinking.

“Whole Earth Discipline: an ecopragmatist manifesto”, the recent book by veteran ecologist and environmentalist Stewart Brand, comprehensively fits that criterion.  It is so full of insight that virtually every page contains not just one but several blogworthy quotes, ideas, facts, putdowns, and/or refutations.  It’s that good.  I could write a book-length blogpost singing its praises.

Brand turned 70 while writing this book.  In the book, he indicates that he has changed his mind as he grew older.  The book serves as a landmark for various changes of mind for the environmental movement as a whole.  The argument is sustained, easy-to-read, detailed, and compelling.

The core argument is that the future well-being of the whole planet – human societies embedded in biological ecosystems – requires a thoroughgoing embrace of an engineering mindset.  Specifically, the environmental movement needs to recognise:

  • That the process of urbanisation – the growth of cities, even in apparently haphazard ways – provides good solutions to many worries about over-population;
  • That nuclear energy will play a large role in providing clean, safe, low-carbon energy;
  • That GE (genetic engineering) will play a large role in providing safe, healthy, nutritious food and medicine;
  • That the emerging field of synthetic biology can usefully and safely build upon what’s already being accomplished by GE;
  • That methods of geoengineering will almost certainly play a part in heading off the world’s pending climate change catastrophe.

The book has an objective and compassionate tone throughout.  At times it squarely accuses various environmentalists of severe mistakes – particularly in aspects of their opposition to GE and nuclear energy – mistakes that have had tragic consequences for developing societies around the world.  It’s hard to deny the charges.  I sincerely hope that the book will receive a wide readership, and will cause people to change their minds.

The book doesn’t just provide advocacy for some specific technologies.  More than that, it makes the case for changes in mindset:

  • It highlights major limitations to the old green mantra that “small is beautiful”;
  • It unpicks various romantic notions about the lifestyles and philosophies of native peoples (such as the American Indians);
  • It shows the deep weakness of the “precautionary principle”, and proposes its own alternative approach;
  • It emphasises how objections to people “playing God” are profoundly misguided.

Indeed, the book starts with the quote:

We are as gods and HAVE to get good at it.

It concludes with the following summary:

Ecological balance is too important for sentiment.  It requires science.

The health of the natural infrastructure is too compromised for passivity.  It requires engineering.

What we call natural and what we call human are inseparable.  We live one life.

And what is an engineer?  Brand states:

Romantics love problems; scientists discover and analyze problems; engineers solve problems.

As I read this book, I couldn’t help comparing it to “The constant economy” by Zac Goldsmith, which I read a few weeks ago.  The two books share many concerns about the unsustainable lifestyles presently being practiced around the world.  There are a few solutions in common, too.  But the wide distrust of technology shown by Goldsmith is amply parried by the material that Brand marshals.  And the full set of solutions proposed by Brand is much more credible than the set proposed by Goldsmith.  Goldsmith has been a major advisor to the UK Conservative Party on environmental matters.  If any UK party could convince me that they thoroughly understood, and intended to implement, the proposals in Brand’s book, I would be deeply impressed.

Note: an annotated reference companion to the book is available online, at www.sbnotes.com.  It bristles with useful links.  There’s also a 16 minute TED video, “Stewart Brand proclaims 4 environmental ‘heresies’”, which is well worth viewing.

Thanks to Marc Gunther, whose blogpost “Why Stewart Brand’s new book is a must-read” alerted me to this book.

By a fortunate coincidence, Brand will be speaking at the RSA in London on Tuesday.  I’m anticipating a good debate from the audience.  An audio feed from the meeting will be broadcast live.

13 January 2010

Top of the list: the biggest impact

Filed under: books, democracy, Humanity Plus, UKTA — David Wood @ 12:46 am

I recently published a list of the books that had made the biggest impact on me, personally, over the last ten years.  I left one book out of that list – the book that impacted me even more than any of the others.

The book in question was authored by Dr James J. Hughes, a sociologist and bioethicist who teaches Health Policy at Trinity College in Hartford, Connecticut.  In his spare time, James is the Executive Director of the IEET – the Institute for Ethics and Emerging Technologies.

The title of the book is a bit of a mouthful:

“Citizen Cyborg: Why Democratic Societies Must Respond to the Redesigned Human of the Future”

I came across this book in October 2005.  The ideas in the book started me down a long path of further exploration.

I count this book as deeply impactful for me because:

  1. It was the book that led to so many other things;
  2. When I look back at the book in 2010, I find several key ideas in it which I now take for granted (but I had forgotten where I learned them).

An indication of the ideas contained may be found from an online copy of the Introduction to the book, and from a related essay “Democratic Transhumanism 2.0”.

The book goes far beyond just highlighting the potential of new technologies – including genetic engineering, nanotechnology, cognitive science, and artificial intelligence – to significantly enhance human experience.  The book also contains a savvy account of the role of politics in supporting and enabling human change.

To quote from the Introduction:

This book argues that transhuman technologies – technologies that push the boundaries of humanness – can radically improve our quality of life, and that we have a fundamental right to use them to control our bodies and minds.  But to ensure these benefits we need to democratically regulate these technologies and make them equally available in free societies.  Becoming more than human can improve all our lives, but only new forms of transhuman citizenship and democracy will make us freer, more equal, and more united.

A lot of people are understandably frightened by the idea of a society in which unenhanced humans will need to coexist with humans who are smarter, faster, and more able, not to mention robots and enhanced animals…

The “bioLuddite” opposition to genetic engineering, nanotechnology, and artificial intelligence, slowly building and networking since the 1960s, picked up where the anti-industrialisation Luddites left off in the nineteenth century.  While Luddites believed that defending workers’ rights required a ban on the automation of work, the bioLuddites believe genetic engineering and human enhancement technologies cannot be used safely, and must be banned…

The emerging “biopolitical” polarisation between bioLuddites and transhumanists will define twenty-first century politics…

People will be happiest when they individually and collectively exercise rational control of the social and natural forces that affect their lives.  The promise of technological liberation, however, is best achieved in the context of a social democratic society committed to liberty, equality, and solidarity…

Boing Boing author Cory Doctorow makes some good points in his review of “Citizen Cyborg”:

I’ve just finished a review copy of James Hughes’s “Citizen Cyborg: Why Democratic Societies Must respond to the Redesigned Human of the Future.” I was skeptical when this one arrived, since I’ve read any number of utopian wanks on the future of humanity and the inevitable withering away of the state into utopian anarchism fueled by the triumph of superior technology over inferior laws.

But Hughes’s work is much subtler and more nuanced than that, and was genuinely surprising, engaging and engrossing…

Hughes’s remarkable achievement in “Citizen Cyborg” is the fusion of social democratic ideals of tempered, reasoned state intervention to promote equality of opportunity with the ideal of self-determination inherent in transhumanism. Transhumanism, Hughes convincingly argues, is the sequel to humanism, and to feminism, to the movements for racial and gender equality, for the fight for queer and transgender rights — if you support the right to determine what consenting adults can do with their bodies in the bedroom, why not in the operating theatre?

Much of this book is taken up with scathing rebuttal to the enemies of transhumanism — Christian lifestyle conservatives who’ve fought against abortion, stem-cell research and gay marriage; as well as deep ecologist/secular lefty intelligentsia who fear the commodification of human life. He dismisses the former as superstitious religious thugs who, a few generations back, would happily decry the “unnatural” sin of miscegenation; to the latter, he says, “You are willing to solve the problems of labor-automation with laws that ensure a fair shake for working people — why not afford the same chance to life-improving techno-medicine?”

The humanist transhuman is a political stance I’d never imagined, but having read “Citizen Cyborg,” it seems obvious and natural. Like a lot of basically lefty geeks, I’ve often felt like many of my ideals were at odds with both the traditional left and the largely right-wing libertarians. “Citizen Cyborg” squares the circle, suggesting a middle-path between them that stands foursquare for the improvement of the human condition through technology but is likewise not squeamish about advocating for rules, laws and systems that extend a fair opportunity to those less fortunate…

The transformation of politics Hughes envisions is from a two-dimensional classification to a three-dimensional classification.

The first two dimensions are “Economic politics” and “Cultural politics”, with a spectrum (in each case) from conservative to progressive.

The new dimension, which will become increasingly significant, is “Biopolitics”.  Hughes uses the label “bioLuddism” for the conservative end of this spectrum, and “Transhumanism” for the progressive end.

The resulting cube has eight vertices, which include both “Left bioLuddism” and “Right bioLuddism”, as well as both “Libertarian transhumanism” and “Democratic transhumanism”.
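For those who like the combinatorics spelled out, here is a tiny Python sketch enumerating the cube’s eight vertices from the three axes described above.  The axis and pole names follow the text; the example mapping in the comment is my own illustration, not Hughes’s notation.

    from itertools import product
    # The three axes of Hughes's classification.
    axes = {
        "economic":     ("conservative", "progressive"),
        "cultural":     ("conservative", "progressive"),
        "biopolitical": ("bioLuddite", "transhumanist"),
    }
    # The Cartesian product gives the cube's eight vertices; for example, the
    # progressive-economics, transhumanist corner corresponds to
    # "Democratic transhumanism".
    for vertex in product(*axes.values()):
        print(dict(zip(axes, vertex)))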

Interestingly, the ramp-up of political debate in the United Kingdom, ahead of the parliamentary election that will take place some time before summer, has served as a reminder that the “old” political divisions seem inadequate to deal with the challenges of the current day.  It’s harder to discern significant real differences between the major parties.  I still don’t have any strong views as to which party I should vote for.  My guess is that each of the major parties will contain a split of views regarding the importance of enhancement technologies.

I’ll give the final words to James Hughes – from the start of Chapter 7 in his book:

The most important disagreement between bioLuddites and transhumanists is over who we should grant citizenship, with all its rights and protections.  BioLuddites advocate “human-racism”, that citizenship and rights have something to do with simply having a human genome.  Transhumanists… believe citizenship should be based on “personhood”, having feelings and consciousness.  The struggle to replace human-racism with personhood can be found at the beginnings and ends of life, and at the imaginary lines between humans and animals, and between humans and posthumans.  Because they have not adopted the personhood view, the human-racists are disturbed by lives that straddle the imaginary human/non-human line.  But technological advances at each of these margins will force our society in the coming decades to complete the trajectory of 400 years of liberal democracy and choose “cyborg citizenship”.

10 January 2010

The most impactful books of the last decade

Filed under: books — David Wood @ 2:08 am

My love affair with books and book reviews grows out of my admiration for the collective knowledge, insight, and wisdom that human civilisation is accumulating.  Knowledge grows through the process of books being written, reviewed, criticised, and (where appropriate) treasured.

Over the last ten years, many books have struck me in various ways, causing me to revise key parts of my own worldview, or to gain valuable new ways of making sense of trends in business, technology, society, and culture.  In short, these books have made me wiser.

In this posting, to mark the start of a new decade, I list the books I remember as making the biggest impact on me personally, over the last ten years.  They’re all books that stand the test of time, with continued relevance.

I’m saving the single most impactful book for a later posting – it’s excluded from the list below.  I’ve arranged the remaining books below in alphabetical order by title.

All but two of the books are non-fiction.

Breaking Windows: How Bill Gates Fumbled the Future of Microsoft – by David Bank

This book is probably as relevant today – when people are pondering the seemingly declining influence of Microsoft – as it was in the days near the start of the last decade, when I first read it.  I remember a sense of wonder at how Microsoft placed so much emphasis on one principle – promoting the Windows APIs and brand presence – at the cost of significantly hurting other products which Microsoft could have developed.  That’s the sense in which, according to the author, the future of Microsoft was “fumbled”.

Microsoft’s relatively poor performance in the smartphone OS world can, arguably, be traced to decisions that are documented in this book.

Darwin’s Cathedral: Evolution, Religion, and the Nature of Society – by David Sloan Wilson

This book has sweeping scope, but makes its case very well.  The case is that religion has in general survived inasmuch as it helped groups of people to achieve greater cohesion and thereby acquire greater fitness compared to other groups of people.  This kind of religion has practical effect, independent of whether or not its belief system corresponds to factual reality.  (It can hardly be denied that, in most cases, the belief system does not correspond to factual reality.)

The book has some great examples – from the religions in hunter-gatherer societies, which contain a powerful emphasis on sharing out scarce resources completely equitably, through examples of religions in more complex societies.  The chapter on John Calvin was eye-opening (describing how his belief system brought stability and prosperity to Geneva) – as were the sections on the comparative evolutionary successes of Judaism and early Christianity.  But perhaps the section on the Balinese water-irrigation religion is the most fascinating of the lot.

Of course, there are some other theories for why religion exists (and is so widespread), and this book gives credit to these theories in appropriate places.  However, this pro-group selection explanation has never before been set out so carefully and credibly, and I think it’s no longer possible to deny that it plays a key role.

The discussion makes it crystal clear why many religious groups tend to treat outsiders so badly (despite treating insiders so well).  It also provides a fascinating perspective on the whole topic of “forgiveness”.  Finally, the central theme of “group selection” is given a convincing defence.

Ending Aging: The Rejuvenation Breakthroughs That Could Reverse Human Aging in Our Lifetime – by Aubrey de Grey and Michael Rae

Does it make technological and political sense that there are people alive today who could live as long as one thousand years?  Or are any such ideas just naive and irresponsible fantasy?

This book probably provides the most serious positive answer to these questions.  Parts of it come across as polemic, but the large majority consists of suggestive ideas.

There is no definite prescription in this book for how to “end aging”, but it contains many details of a program which, if given more research priority, could well come up with such a prescription, within a small number of decades.

Some of the technical suggestions can be criticised, but no doubt the program, if it gets properly underway, will generate additional ideas and workarounds.

As ideas go, this is about as big as it gets.

First, Break All the Rules: What the World’s Greatest Managers Do Differently – by Marcus Buckingham and Curt Coffman

Two things still stand out in my mind about this book, many years after I first read it:

  1. The sad but oh-so-often true remark that “people join companies but leave managers”, meaning that bad managers frequently cause good employees to want to leave the company;
  2. The famous list of the “G12” or “Gallup 12 questions” that can be asked of employees, on a regular basis, to give a reliable indication of the degree of engagement and enthusiasm of the employees.

The G12 questions don’t cover things like salary or stock options, but instead address “Do I know what’s expected of me at work” and “Do I have all the equipment and materials I need to do my work right”, etc.  In short, to be a great manager, you need to “break a few rules” of conventional wisdom, and focus instead on getting your employees to be able to give positive answers to more and more of these G12 questions.  And it’s all backed up by lots of Gallup research.  As such, this book is perhaps the single best guide on what managers should be doing, to improve the working environment of their employees.

Good to Great: Why Some Companies Make the Leap… and Others Don’t – by Jim Collins

This book distills a number of very important principles that are likely to make a big difference to the long-term growth of a company.  Each chapter is a gem.

For example, I still vividly remember the principles of:

  • A “level 5” CEO, who combines “personal humility and professional will”;
  • Get the right people on the bus (and the wrong people off the bus);
  • The culture of realistic optimism: “confront the brutal facts”;
  • Find the hedgehog principle for your company – simplicity within three circles.

With the passage of time since the book was written, some of the companies described in it as “great” have suffered reverses of fortune.  However, this doesn’t diminish the value of the advice.

Heat: How to Stop the Planet from Burning – by George Monbiot

This was the book which helped move me from “concerned about climate change” to “deeply concerned about climate change”.  It combines passionate urgency with an incisive critical evaluation of numerous options.

The author demonstrates lots of fury but also lots of serious ideas.  It’s hard to remain unmoved while reading this.

How Markets Fail: The Logic of Economic Calamities – by John Cassidy

Free markets have been a tremendous force for progress.  However, they need oversight and regulation.  Lack of appreciation of this point is the fundamental cause of the Great Crunch that the world financial systems recently experienced.  That’s the essential message of this important book.

I call this book “important” because it contains a sweeping but compelling survey of a notion Cassidy dubs “Utopian economics”, before providing layer after layer of decisive critique of that notion.  As such, the book provides a very useful guide to the history of economic thinking, covering Adam Smith, Friedrich Hayek, Milton Friedman, John Maynard Keynes, Arthur Pigou, Hyman Minsky, and many, many others.

The key theme in the book is that markets do fail from time to time, potentially in disastrous ways, and that some element of government oversight and intervention is both critical and necessary, to avoid calamity.  This theme is hardly new, but many people resist it, and the book has the merit of marshalling the arguments more comprehensively than I have seen elsewhere.  See my review here.

Influencer: The Power to Change Anything – by Kerry Patterson et al

This book starts by noting that we are, in effect, too often resigned to a state of helplessness, as covered by the “acceptance clause” of the so-called “serenity prayer” of Reinhold Niebuhr:

God grant me the serenity
To accept the things I cannot change;
Courage to change the things I can;
And wisdom to know the difference

What we lack, the book says, is the skillset to be able to change more things.  It’s not a matter of exhorting people to “try harder”.  Nor is it a matter of becoming better at talking to people, to convince them of the need to change.  Instead, we need a better framework for how influence can be successful.

Part of the framework is to take the time to learn about the “handful of high-leverage behaviors” that, if changed, would have the biggest impact.  This is a matter of focusing – leaving out many possibilities in order to target the behaviours with the greatest leverage.  Another part of the framework initially seems the opposite: it recommends that we prepare to use a large array of different influence methods (all with the same intended result).  These influence methods start by recognising the realities of human reasoning, and work with these realities, rather than seeking to drastically re-write them.

The framework describes six sources of influence, in a 2×3 matrix.  One set of three sources addresses motivation, and the other set of three addresses capability.  In each case, there are personal, social, and structural approaches (hence the 2×3).  The book has a separate chapter for each of these six sources.  Each chapter is full of good material.

As I worked through chapter after chapter, I kept thinking “Aha…” to myself.  The material is backed up by extensive academic research by change specialists such as Albert Bandura and Brian Wansink.  There are also numerous references to successful real-life influence programs, such as the eradication of guinea worm disease in sub-Saharan Africa, controlling AIDS in Thailand, and the work of Mimi Silbert of Delancey Street with “substance abusers, ex-convicts, homeless and others who have hit bottom”.

Leading Change – by John Kotter

This is the definitive account of why so many change initiatives in organisations go astray – and what we can do to avoid repeating the same mistakes.

As Kotter describes, the eight reasons why change initiatives fail are:

  1. Lack of a sufficient sense of urgency;
  2. Lack of an effective guiding coalition for the change (an aligned team with the ability to make things happen);
  3. Lack of a clear appealing vision of the outcome of the change (otherwise it may seem too vague, having too many unanswered questions);
  4. Lack of communication for buy-in, keeping the change in people’s mind (otherwise people will be distracted back to other issues);
  5. Lack of empowerment of the people who can implement the change (lack of skills, wrong organisational structure, wrong incentives, cumbersome bureaucracy);
  6. Lack of celebration of small early wins (failure to establish momentum);
  7. Lack of follow through (it may need wave after wave of change to stick);
  8. Lack of embedding the change at the cultural level (otherwise the next round of management changes can unravel the progress made).

See my review here for more details.

Leading Lean Software Development: Results Are not the Point – by Mary and Tom Poppendieck

The Poppendiecks have co-authored three pioneering books on Lean Software Development, describing the application of “lean” manufacturing thinking (pioneered at Toyota in Japan) to software development.  Each of the books is full of practical insight that often strikes the reader as counter-intuitive at first, before the bigger picture sinks in.  And the bigger picture is what’s important.  Here’s an example principle:

“The biggest cause of failure in software-intensive systems is not technical failure; it’s building the wrong thing” – Mary Poppendieck

Mary and Tom travel the world consulting and presenting on their ideas, and each new book benefits from two or three extra years of the ideas being reviewed, elaborated, and refined.  The one I’ve picked for this collection is the most recently published.

Made to Stick: Why Some Ideas Survive and Others Die – by Chip Heath and Dan Heath

This may be the best book on communications and presentations that I have ever read.

It’s full of compelling explanations about how to ensure that your messages are thoroughly memorable.  Messages should be (the initial letters spell out the authors’ “SUCCESs” acronym):

  • Simple,
  • Unexpected,
  • Concrete,
  • Credible,
  • Emotional,
  • Stories.

Radical Evolution: The Promise and Peril of Enhancing Our Minds, Our Bodies – and What It Means to Be Human – by Joel Garreau

This is probably one of the best books written about transhumanist themes.  It does a fine job of introducing numerous personalities and ideas.

The book is arranged in three parts:

  • “Heaven” – mainly concentrating on the potential upside of the radical application of technology to humans
  • “Hell” – analysing the potential downside of radical new technologies
  • “Prevail” – mapping out a plausible (and exciting) “middle of the road” scenario.

Scaling Software Agility: Best Practices for Large Enterprises – by Dean Leffingwell

There’s been lots of talk about how to make Agile practices work in larger software development projects.  This book is the best account I’ve seen of how to make this work in practice.

The material is very clear, covering:

  • A summary of Agile principles, viewed with the advantage of the passage of time since they were first introduced;
  • A description of what happens to these principles, when they need to be applied in larger projects;
  • A series of additional methods, which can be introduced to support the application of agile principles in larger projects.

Many of the development practices inside Symbian Software Ltd were influenced by this book.

Schrödinger’s Rabbits: The Many Worlds of Quantum – by Colin Bruce

This book covers the same subject that I was researching for my doctoral studies in the philosophy of science during the 1980s.  When I read the book a few years back, it reinforced my views that:

  • The question of the interpretation of quantum mechanics remains controversial and difficult, being beset with problems, more than 80 years after the subject was introduced;
  • The so-called “many worlds” interpretation of quantum mechanics has (despite strong first impressions to the contrary) the best potential to solve these problems;
  • Nevertheless, the “many worlds” interpretation still faces many issues and challenges of its own.

One day I may return to this subject 🙂

For more details, see here.

The Age of Spiritual Machines: When Computers Exceed Human Intelligence – by Ray Kurzweil

This is an intellectual romp, through the 21st century, decade by decade, looking at the likely evolution of the interactions between computers and humans. There are some mind-boggling ideas (in both senses of the phrase “mind-boggling”, since what may well happen to the human mind in the future is, well, mind-boggling).

Kurzweil wrote this book several years before his later book, “The singularity is near”.  Of the two, I much prefer “The age of spiritual machines”.

The End of Faith: Religion, Terror, and the Future of Reason – by Sam Harris

Over the last few years, I’ve read a string of books by the so-called “new atheists”, and learned from each of them.

The audio recording of “Letting go of God” by Julia Sweeney is probably the most touching and personal of them all.  It’s both funny and moving.  However, in terms of seriousness of purpose, I pick the Sam Harris book as being particularly significant.

It’s not just a book with reasons to be intellectually distrustful of religious faith.  It’s a book about why dreadful actions arising from religious faith (such as the “terror” mentioned in the title) are likely to continue happening – and can be expected to become even worse, in an age where “weapons of mass destruction” abound.  The book makes a strong case that faith, itself, is the most dangerous element of modern life.

The First Immortal: A Novel Of The Future – by James Halperin

The writing in this book is sometimes a bit laboured, but the central ideas are extremely well worked out.

It looks at cryonics and revival – very low temperature preservation of the human body until such time in the future as the diseases that were killing the body can be cured. It looks at these themes from numerous angles, through the different characters in the book, and their changing attitudes.

Some of the ideas and episodes are very vivid indeed, and remain clearly in my mind now, quite a few years after I read the book.

I understand that the author conducted thorough research into the technology of cryonics, in order to make the account scientifically credible. The effort has paid off – this is a plausible (though mind-jarring) account.  It made me take cryonics much more seriously.

The Future of Management – by Gary Hamel

This is perhaps the best book I’ve read on innovation – and the best book I’ve read on desirable management culture.

I’ll cast my vote any day for the kind of pro-innovation, pro-enablement management culture Hamel describes. It’s the approach that has great potential to motivate key employees.

It includes chapters on the remarkable management cultures at Whole Foods Market, W.L. Gore (makers of Gore-Tex etc), and a “small little upstart” called Google.

Here’s a quote from near the start of the book: “To thrive in an increasingly disruptive world, companies must become as strategically adaptable as they are operationally efficient”.

And here’s one from around 20% of the way in: “if you want to capture the economic high ground in the creative economy, you need employees who are more than acquiescent, attentive, and astute – they must also be zestful, zany, and zealous. So we must ask: what are the obstacles that stand in the way of achieving this state of organisational bliss?”

The rest of the book provides answers to this question.  It highlights the very large difference to a company’s success that can be made by the management culture that is in place (note: what matters is the management culture as it is actually practised, rather than what is espoused).

The Goal – by Eliyahu Goldratt

This is a novel written to illustrate the ideas in the “theory of constraints”.

As fiction, the story has its touching moments.  As an introduction to a set of powerful ideas on identifying and dealing with bottlenecks, it’s a huge wake-up call.  It shows that a large amount of effort in improving systems is actually wasted.
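
To make the bottleneck idea concrete, here’s a minimal sketch of my own (the stage names and rates are invented, not taken from the book): a production line’s throughput is capped by its slowest stage, so effort spent improving any other stage is wasted.

    # Toy model of the theory of constraints: a pipeline's throughput
    # is limited by its slowest stage (the bottleneck).
    # Stage names and rates are invented, purely for illustration.

    def throughput(stage_rates):
        """Units per hour the whole line can sustain."""
        return min(stage_rates.values())

    line = {"cutting": 120, "assembly": 45, "painting": 90, "packing": 200}
    print(throughput(line))   # 45 -- capped by assembly, the bottleneck

    # Doubling a non-bottleneck stage changes nothing:
    line["painting"] = 180
    print(throughput(line))   # still 45

    # Only improving the bottleneck itself raises overall throughput:
    line["assembly"] = 60
    print(throughput(line))   # 60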

The Hacker Ethic – by Pekka Himanen

This book provides an illuminating contrast between the so-called “Protestant work ethic” and the emerging “Hacker ethic” which is increasingly widely followed nowadays.

If the former is characterised by the seven values

  • Money, Work, Optimality, Flexibility, Stability, Determinacy, and Result accountability,

the latter is characterised by these seven:

  • Passion, Freedom, Social worth, Openness, Activity, Caring, Creativity

This book provided my first big push towards the methods of open source development.

The Happiness Hypothesis: Finding Modern Truth in Ancient Wisdom – by Jonathan Haidt

The stated purpose of the book is to consider “ten great ideas” about morality and ethics, drawn from Eastern and Western religious and philosophical traditions, and to review these ideas in the light of the latest scientific findings about the human condition. Initially, I was sceptical about how useful such an exercise might be. But the book quickly led me to set aside my scepticism. The result is greater than the sum of the ten individual reviews, since the different ideas overlap and reinforce.

Haidt declares himself to be both an atheist and a liberal, but with a lot of sympathy for what both theists and conservatives try to hold dear. In my view, he does a grand job of bridging these tough divides.

Haidt seems deeply familiar with a wide range of diverse traditional thinking systems, from both East and West. He also shows himself to be well versed in many modern (including very recent) works on psychology, sociology, and evolutionary theory. The synthesis is frequently remarkable. I found myself re-thinking lots of my own worldview.

See here for my fuller review of this book.

The Knowing-Doing Gap: How Smart Companies Turn Knowledge into Action – by Jeffrey Pfeffer and Robert Sutton

This book does a demolition job on the “smart talk” culture that often prevails in companies which fail to act on the knowledge they already possess.

The book addresses the question:

“Why is it that, at the end of so many books and seminars, leaders report being enlightened and wiser, but not much happens in their organizations?”

As I reported in a previous posting, my takeaway from the book was the following set of five characteristics of companies that can successfully bridge this vicious “Knowing Doing Gap”:

  1. They have leaders with a profound hands-on knowledge of the work domain;
  2. They have a bias for plain language and simple concepts;
  3. They encourage solutions rather than inaction, by framing questions asking “how”, not just “why”;
  4. They have strong mechanisms that close the loop – ensuring that actions are completed (rather than being forgotten, or excuses being accepted);
  5. They are not afraid to “learn by doing”, and thereby avoid analysis paralysis.

The Power of Full Engagement: Managing Energy, Not Time, Is the Key to High Performance and Personal Renewal – by Jim Loehr and Tony Schwartz

This book came as a surprise to me, but its message was (on reflection) undeniable.  Namely, rather than worrying so much about time management, it’s energy management that should attract our main attention.

The book contains some heart-stopping stories, which should resonate strongly with many “busy people”.

The Search: How Google and Its Rivals Rewrote the Rules of Business and Transformed Our Culture – by John Battelle

The book gave me not just a deeper respect for many aspects of Google, but also an appreciation of how online Search is going to have incredibly far-reaching implications.  Search is truly a large topic.

The book contains numerous fascinating anecdotes about the founders of Google, as well as other key Silicon Valley figures.  But the fullest value in the book is in the analysis it provides.

The Slow Pace of Fast Change: Bringing Innovations to Market in a Connected World – by Bhaskar Chakravorti

This is the book that introduced the whimsically-named but important notion of “demi Moore’s Law” – the idea that products which are parts of inter-connected networks often progress at a pace roughly half of what would be expected, based solely on hardware considerations – such as the rate of improvement described by [Gordon] Moore’s Law.

The reason for the discrepancy is that a product can only be accepted (and then embellished) once a series of related changes are made in associated products.  For example, improvements to mobile handsets were historically linked to improvements in mobile networks, and/or improvements in mobile applications.

However, as the book explains, the impact of these inter-connections isn’t always to slow change down.  Once the previous network ecosystem has been dismantled and a new one established in its place (which can take a long time), product development within the new ecosystem can often proceed even faster than would be predicted by Moore’s Law alone.
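
To see how quickly “half the pace” compounds, here’s a rough numerical sketch of my own (the doubling periods are invented round numbers, purely illustrative):

    # Rough arithmetic behind "demi Moore's Law": halving the pace of
    # improvement compounds dramatically over a decade.
    # The doubling periods below are invented, illustrative numbers.

    def capability(years, doubling_period_years):
        return 2 ** (years / doubling_period_years)

    for years in (2, 6, 10):
        moore = capability(years, 2.0)  # doubling every 2 years
        demi = capability(years, 4.0)   # half the pace: every 4 years
        print(f"after {years:2d} years: Moore x{moore:.1f}, demi-Moore x{demi:.1f}")

    # after  2 years: Moore x2.0, demi-Moore x1.4
    # after  6 years: Moore x8.0, demi-Moore x2.8
    # after 10 years: Moore x32.0, demi-Moore x5.7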

The Starfish and the Spider: The Unstoppable Power of Leaderless Organizations – by Ori Brafman and Rod Beckstrom

This book is a sober but enlightening account of the issues of centralisation (“spider”) vs. decentralisation (“starfish”), as well as suitable mixtures of the two.

The book also shows why there’s a great deal at stake behind this contrast: issues of commercial revenues, the rise and fall of businesses, and the rise and fall of change movements within society – where the change movements include such humdingers as Slave Emancipation, Sex Equality, Animal Liberation, and Al Qaeda.

There are many stories running through the book, chosen both from history and from contemporary events.  The stories are frequently picked up again from chapter to chapter, with key new insights being drawn out.  Some of the stories are familiar and others are not.  But the starfish/spider framework casts new light on them all.

Each chapter brought an important additional point to the analysis.  For example: factors allowing de-centralised organisations to flourish; how centralised organisations can go about combatting de-centralised opponents; issues about combining aspects of both approaches.  (The book argues that smart de-centralisation moves by both GE and Toyota are responsible for significant commercial successes in these companies.)

The book also spoke personally to me.  As it explains, starfish organisations depend upon so-called “catalyst” figures, who lack formal authority, and who are prepared to move into the background without clinging to power.  There’s a big difference between catalysts and CEOs.  Think “Mary Poppins” rather than “Maria from Sound of Music”.  That gave me a handy new way of thinking about my own role in organisations.  (I’m like Mary Poppins, rather than Maria!)

The Success of Open Source – by Steven Weber

This book provides a thorough and wide-ranging analysis of the circumstances in which open source software methods succeed.

It goes far beyond technical considerations, and also looks at motivational, economic, and governance issues.  It shows that the success of open source is no “accident” or “mystery”.

The Upside of Down: Catastrophe, Creativity, and the Renewal of Civilization – by Thomas Homer-Dixon

A sweeping and convincing summary of the very pressing amalgam of deep problems facing the future of civilisation.

The basic themes are that:

  • The major problems facing us are intertwined, and the interconnections make things worse;
  • Things are unlikely to become better before they first become worse (and shake us into action in the process).

The Wisdom of Crowds – by James Surowiecki

On many occasions, crowds are bad for rationality.  A kind of lowest-common-denominator outcome results.  This is the “madness of crowds”.

But on other occasions, a crowd can end up being, collectively, significantly smarter than even the smartest individuals inside that crowd.

What’s the difference between the two sets of occasions?

This book explains, and in so doing, provides the intellectual grounding for lots of contemporary ideas about the benefits of openness and community.
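
The statistical core of the “wise crowd” case can be shown in a few lines.  Here’s a minimal simulation of my own (the true value, noise level, and crowd size are all invented parameters):

    # Toy "wisdom of crowds" simulation: when individual errors are
    # independent, the crowd's mean estimate beats a typical individual.
    # All parameters below are invented, purely for illustration.
    import random

    random.seed(42)
    TRUE_VALUE = 1000    # e.g. jelly beans in a jar
    CROWD_SIZE = 500

    estimates = [random.gauss(TRUE_VALUE, 250) for _ in range(CROWD_SIZE)]

    crowd_guess = sum(estimates) / len(estimates)
    typical_error = sum(abs(e - TRUE_VALUE) for e in estimates) / len(estimates)

    print(f"crowd error:              {abs(crowd_guess - TRUE_VALUE):.1f}")
    print(f"typical individual error: {typical_error:.1f}")

    # The crowd's error shrinks roughly as sigma/sqrt(N) -- but only
    # while the errors stay independent.  Herding correlates the errors
    # and destroys the effect: that is the "madness of crowds" case.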

Will & Vision: How Latecomers Grow to Dominate Markets – by Gerard Tellis and Peter Golder

A lot of thinking about business strategy is dominated by the idea of “first-mover advantage”, which holds that the first company into a product category has a very significant advantage over challengers.

This book by Tellis and Golder contains a devastating refutation of that idea – both from an empirical (example-based) point of view and through theoretical analysis.

As such, it gives plenty of encouragement to potential new market entrants, and poses grounds for anxiety for current market leaders.

World on Fire: How Exporting Free Market Democracy Breeds Ethnic Hatred and Global Instability – by Amy Chua

This book is a shocking but compelling run-through of many examples from around the world where the application of free market ideas, in parallel with the introduction of democracy, often backfires: commercially successful minorities become the victims of fierce discrimination and violent reprisals.

As such, the book is a necessary antidote to any overly naive idea that successful methods from mature market democracies can simply be transplanted, overnight (as it were), to parts of the world with very different background cultures.

9 January 2010

Progress with AI

Filed under: AGI, books, m2020, Moore's Law, UKH+ — David Wood @ 9:47 am

Not everyone shares my view that AI is going to become a more and more important field during the coming decade.

I’ve received a wide mix of feedback in response to my recent postings here, and to my comments made in other discussion forums, about the growth of AI.

Below, I list some of the questions people have raised – along with my answers.

Note: my answers below are informed by (among other sources) the 2007 book “Beyond AI: creating the conscience of the machine“, by J Storrs Hall, that I’ve just finished reading.

Q1: Doesn’t significant progress with AI presuppose the indefinite continuation of Moore’s Law, which is suspect?

There are three parts to my answer.

First, Moore’s Law for exponential improvements in individual hardware capability seems likely to hold for at least another five years, and there are many ideas for new semiconductor innovations that would extend the trend considerably further.  There’s a good graph of improvements in supercomputer power stretching back to 1960 on Shane Legg’s website, along with associated discussion.

Dylan McGrath, writing in EE Times in June 2009, reported views from iSuppli Corp that “Equipment cost [will] hinder Moore’s Law in 2014“:

Moore’s Law will cease to drive semiconductor manufacturing after 2014, when the high cost of chip manufacturing equipment will make it economically unfeasible to do volume production of devices with feature sizes smaller than 18nm, according to iSuppli Corp.

While further advances in shrinking process geometries can be achieved after the 20- to 18nm nodes, the rising cost of chip making equipment will relegate Moore’s Law to the laboratory and alter the fundamental economics of the semiconductor industry, iSuppli predicted.

“At those nodes, the industry will start getting to the point where semiconductor manufacturing tools are too expensive to depreciate with volume production, i.e., their costs will be so high, that the value of their lifetime productivity can never justify it,” said Len Jelinek, director and chief analyst, semiconductor manufacturing, for iSuppli, in a statement.

In other words, it remains technologically possible that semiconductors can become exponentially denser even after 2014, but it is unclear that sufficient economic incentives will exist for these additional improvements.

As The Register reported the same story:

Basically, just because chip makers can keep adding cores, it doesn’t mean that the application software and the end user workloads that run on this iron will be able to take advantage of these cores (and their varied counts of processor threads) because of the difficulty of parallelising software.

iSuppli is not talking about these problems, at least not today. But what the analysts at the chip watcher are pondering is the cost of each successive chip-making technology and the desire of chip makers not to go broke just to prove Moore’s Law right.

“The usable limit for semiconductor process technology will be reached when chip process geometries shrink to be smaller than 20 nanometers (nm), to 18nm nodes,” explains Len Jelinek…

At that point, says Jelinek, Moore’s Law becomes academic, and chip makers are going to extend the time they keep their process technologies in the field so they can recoup their substantial investments in process research and semiconductor manufacturing equipment.

However, other analysts took a dim view of this pessimistic forecast, and maintained that Moore’s Law will be longer lived.  For example, In-Stat’s chief technology strategist, Jim McGregor, offered the following rebuttal:

…every new technology goes over some road-bumps, especially involving start-up costs, but these tend to drop rapidly once moved into regular production. “EUV [extreme ultraviolet] will likely be the next significant technology to go through this cycle,” McGregor told us.

McGregor did concede that the lifecycle of certain technologies is being extended by firms who are in some cases choosing not to migrate to every new process node, but he maintained new process tech is still the key driver of small design geometries, including memory density, logic density, power consumption, etc.

“Moore’s Law also improves the cost per device and per wafer,” added McGregor, who also noted that “the industry has and will continue to go through changes because of some of the cost issues.” These include the formation of process development alliances, like IBM’s alliances, the transition to foundry manufacturing, and design for manufacturing techniques like computational lithography.

“Many people have predicted the end of Moore’s Law and they have all been wrong,” sighed McGregor. The same apparently goes for those foolhardy enough to attempt to predict changes in the dynamics of the semiconductor industry.

“There have always been challenges to the semiconductor technology roadmap, but for every obstacle, the industry has developed a solution and that will continue as long as we are talking about the hundreds of billions of dollars in revenue that are generated every year,” he concluded.

In other words, it is likely that, given sufficient economic motivation, individual hardware performance will continue improving, at a significant rate (if, perhaps, not exponentially) throughout the coming decade.

Second, it remains an open question as to how much hardware would be needed, to host an Artificial (Machine) Intelligence (“AI”) that has either human-level or hyperhuman reasoning power.

Marvin Minsky, one of the doyens of AI research, has been quoted as believing that computers commonly available in universities and industry already have sufficient power to manifest human-level AI – if only we could work out how to program them in the right way.

J Storrs Hall provides an explanation:

Let me, somewhat presumptuously, attempt to explain Minsky’s intuition by an analogy: a bird is our natural example of the possibility of heavier-than-air flight. Birds are immensely complex: muscles, bones, feathers, nervous systems. But we can build working airplanes with tremendously fewer moving parts. Similarly, the brain can be greatly simplified, still leaving an engine capable of general conscious thought.

Personally, I’m a big fan of the view that the right algorithm can make a tremendous difference to a computational task.  As I noted in a 2008 blog post:

Arguably the biggest unknown in the technology involved in superhuman intelligence is software. Merely improving the hardware doesn’t necessarily mean that the software performance increases to match. As has been remarked, “software gets slower, more rapidly than hardware gets faster”. (This is sometimes called “Wirth’s Law”.) If your algorithms scale badly, fixing the hardware will just delay the point where your algorithms fail.

So it’s not just the hardware that matters – it’s how that hardware is organised. After all, the brains of Neanderthals were larger than those of humans, but are thought to have been wired up differently to ours. Brain size itself doesn’t necessarily imply intelligence.

But just because software is an unknown, it doesn’t mean that hardware-driven predictions of the onset of the singularity are bound to be over-optimistic. It’s also possible they could be over-pessimistic. It’s even possible that, with the right breakthroughs in software, superhuman intelligence could be supported by present-day hardware. AI researcher Eliezer Yudkowsky of the Singularity Institute reports the result of an interesting calculation made by Geordie Rose, the CTO of D-Wave Systems, concerning software versus hardware progress:

“Suppose you want to factor a 75-digit number. Would you rather have a 2007 supercomputer, IBM’s Blue Gene/L, running an algorithm from 1977, or a 1977 computer, the Apple II, running a 2007 algorithm? Geordie Rose calculated that Blue Gene/L with 1977’s algorithm would take ten years, and an Apple II with 2007’s algorithm would take three years…

“[For exploring new AI breakthroughs] I will say that on anything except a very easy AI problem, I would much rather have modern theory and an Apple II than a 1970’s theory and a Blue Gene.”
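
The arithmetic behind that contrast is easy to sketch.  Here’s a toy comparison of my own (the cost models and machine speeds below are invented, and are not the actual factoring algorithms Rose was comparing): a better asymptotic algorithm eventually beats a far faster machine.

    # Toy comparison of "faster hardware" vs "better algorithm".
    # Invented cost models and machine speeds, loosely echoing the
    # exponential-vs-polynomial gap; not the real factoring algorithms.

    FAST_MACHINE = 1e12  # steps/second ("2007 supercomputer", invented)
    SLOW_MACHINE = 1e6   # steps/second ("1977 micro", invented)

    def old_algorithm_steps(digits):   # exponential cost (invented model)
        return 2.0 ** (digits / 2)

    def new_algorithm_steps(digits):   # polynomial cost (invented model)
        return float(digits) ** 3

    for digits in (60, 100, 150):
        fast_old = old_algorithm_steps(digits) / FAST_MACHINE
        slow_new = new_algorithm_steps(digits) / SLOW_MACHINE
        print(f"{digits:3d} digits: fast machine + old algorithm {fast_old:16.3f}s"
              f" | slow machine + new algorithm {slow_new:8.3f}s")

    # Past a certain problem size, the better algorithm on feeble
    # hardware wins, and its lead then grows without limit.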

Here’s a related example.  When we think of powerful chess-playing computers, we sometimes think that massive hardware resources will be required, such as a supercomputer provides.  However, as long ago as 1985, Psion, the UK-based company I used to work for (though not at that time), produced a piece of software that many people judged, at the time (and subsequently), to play a very impressive quality of chess.  See here for some discussion and some reviews.  Taking things even further, this article from 1983 describes an implementation of chess, for the Sinclair ZX-81, in only 672 bytes – which is hard to believe!  (Thanks to Mark Jacobs for this link.)

Third, building on this point, progress in AI can be described as a combination of multiple factors:

  1. Individual hardware power
  2. Compound hardware power (when many different computers are linked together, as on a network)
  3. Software algorithms
  4. Number of developers and researchers who are applying themselves to the problem
  5. The ability to take advantage of previous results (“to stand on the shoulders of giants”).

Even if the pace slows for improvements in the hardware of individual computers, it’s still very feasible for improvements in AI to take place, on account of the other factors.

Q2: Hasn’t rapid progress with AI often been foretold before, but with disappointing outcomes each time?

It’s true that some of the initial forecasts of the early AI research community, from the 1950s, have turned out to be significantly over-optimistic.

For example, in his famous 1950 paper “Computing machinery and intelligence” – which set out the idea of the test later known as the “Turing test” – Alan Turing made the following prediction:

I believe that in about fifty years’ time it will be possible to programme computers… to make them play the imitation game so well that an average interrogator will not have more than 70 per cent chance of making the right identification [between a computer answering, or a human answering] after five minutes of questioning.

Since the publication of that paper, some sixty years have now passed, and computers are still far from being able to consistently provide an interface comparable (in richness, subtlety, and common sense) to that of a human.

For a markedly more optimistic prediction, consider the proposal for the 1956 Dartmouth Summer Research Conference on Artificial Intelligence, which is now seen, in retrospect, as the seminal event for AI as a field.  Attendees at the conference included Marvin Minsky, John McCarthy, Ray Solomonoff, and Claude Shannon.  The group came together with the following vision:

We propose that a 2 month, 10 man study of artificial intelligence be carried out during the summer of 1956 at Dartmouth College in Hanover, New Hampshire. The study is to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it. An attempt will be made to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves. We think that a significant advance can be made in one or more of these problems if a carefully selected group of scientists work on it together for a summer.

The question for us today is: what reason is there to expect rapid progress with AI in (say) the next ten years, given that similar expectations in the past failed – and, indeed, the whole field eventually fell into what is known as an “AI winter“?

J Storrs Hall has some good answers to this question.  They include the following:

First, AI researchers in the 1950s and 60s laboured under a grossly over-simplified view of the complexity of the human mind.  This can be seen, for example, from another quote from Turing’s 1950 paper:

Instead of trying to produce a programme to simulate the adult mind, why not rather try to produce one which simulates the child’s? If this were then subjected to an appropriate course of education one would obtain the adult brain. Presumably the child brain is something like a notebook as one buys it from the stationer’s. Rather little mechanism, and lots of blank sheets. (Mechanism and writing are from our point of view almost synonymous.) Our hope is that there is so little mechanism in the child brain that something like it can be easily programmed.

Progress in brain sciences in the intervening years has highlighted very significant innate structure in the child brain.  A child brain is far from being a blank notebook.

Second, early researchers were swept along on a wave of optimism from some apparent early successes.  For example, consider the “ELIZA” application that mimicked the responses of a certain school of psychotherapist, by following a series of simple pattern-matching rules.  Lay people who interacted with this program frequently reported positive experiences, and assumed that the computer really was understanding their issues.  Although the AI researchers knew better, at least some of them may have believed that this effect showed that more significant results were just around the corner.
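
For flavour, here’s a heavily simplified sketch in the spirit of ELIZA (the rules below are my own toy examples, not Weizenbaum’s actual DOCTOR script): match a keyword, then reflect fragments of the input back inside a canned template.

    # A heavily simplified ELIZA-style responder.  Toy rules of my own,
    # not Weizenbaum's actual script: keyword match, then reflection.
    import re

    RULES = [
        (re.compile(r"\bI need (.+)", re.IGNORECASE),
         "Why do you need {0}?"),
        (re.compile(r"\bI am (.+)", re.IGNORECASE),
         "How long have you been {0}?"),
        (re.compile(r"\b(mother|father|family)\b", re.IGNORECASE),
         "Tell me more about your family."),
    ]
    DEFAULT = "Please go on."

    def respond(utterance):
        for pattern, template in RULES:
            match = pattern.search(utterance)
            if match:
                return template.format(*match.groups())
        return DEFAULT

    print(respond("I am unhappy at work"))  # How long have you been unhappy at work?
    print(respond("I need a holiday"))      # Why do you need a holiday?
    print(respond("It rained today"))       # Please go on.

It is easy to see why lay users read far more understanding into such responses than the mechanism warrants.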

Third, the willingness of funding authorities to continue supporting general AI research became stretched, due to the delays in producing stronger results, and due to other options for how those research funds should be allocated.  For example, the Lighthill Report (produced in the UK in 1973 by Professor James Lighthill – whose lectures in Applied Mathematics at Cambridge I enjoyed many years later) gave a damning assessment:

The report criticized the utter failure of AI to achieve its “grandiose objectives.” It concluded that nothing being done in AI couldn’t be done in other sciences. It specifically mentioned the problem of “combinatorial explosion” or “intractability”, which implied that many of AI’s most successful algorithms would grind to a halt on real world problems and were only suitable for solving “toy” versions…

The report led to the dismantling of AI research in Britain. AI research continued in only a few top universities (Edinburgh, Essex and Sussex). This “created a bow-wave effect that led to funding cuts across Europe”

There were similar changes in funding climate in the US, with changes of opinion within DARPA.

Shortly afterwards, the growth of the PC and general IT market provided attractive alternative career targets for many of the bright researchers who might previously have considered devoting themselves to AI research.

To summarise, the field suffered an understandable backlash against its over-inflated early optimism and exaggerated hype.

Nevertheless, there are grounds for believing that considerable progress has taken place over the years.  The middle chapters of the book by J Storrs Hall provide the evidence.  The Wikipedia article on “AI winter” covers (much more briefly) some of the same material:

In the late ’90s and early 21st century, AI technology became widely used as elements of larger systems, but the field is rarely credited for these successes. Nick Bostrom explains “A lot of cutting edge AI has filtered into general applications, often without being called AI because once something becomes useful enough and common enough it’s not labeled AI anymore.” Rodney Brooks adds “there’s this stupid myth out there that AI has failed, but AI is around you every second of the day.”

Technologies developed by AI researchers have achieved commercial success in a number of domains, such as machine translation, data mining, industrial robotics, logistics, speech recognition, banking software, medical diagnosis and Google’s search engine…

Many of these domains represent aspects of “narrow” AI rather than “General” AI (sometimes called “AGI”).  However, they can all contribute to overall progress, with results in one field being available for use and recombination in other fields.  That’s an example of point 5 in my previous list of the different factors affecting progress in AI:

  1. Individual hardware power
  2. Compound hardware power (when many different computers are linked together, as on a network)
  3. Software algorithms
  4. Number of developers and researchers who are applying themselves to the problem
  5. The ability to take advantage of previous results (“to stand on the shoulders of giants”).

On that note, let’s turn to the fourth factor in that list.

Q3: Isn’t AI now seen as a relatively uninteresting field, with few incentives for people to enter it?

The question is: what’s going to cause bright researchers to devote sufficient time and energy to progressing AI – given that there are so many other interesting and rewarding fields of study?

Part of the answer is to point out that the potential number of people working in this field is, today, larger than ever before – simply due to the rapid increase in the number of IT-literate graduates around the world.  Globally, there are greater numbers of science and engineering graduates from universities (including in China and India) than ever before.

Second, here are some particularly pressing challenges and commercial opportunities, which make it likely that further research will actually take place on AI:

  • The “arms race” between spam-detection systems (including the parts of forms that essentially say, “prove you are a human, not a bot”) and ever-cleverer systems designed to evade that detection;
  • The need for games to provide ever more realistic “AI” features for the virtual characters in these games (games players and games writers unabashedly talk about the “AI” elements in these games);
  • The opportunity for social networking sites to provide increasingly realistic virtual companions for users to interact with (including immersive social networking sites like “Second Life”);
  • The constant need to improve the user experience of interacting with complex software; arguably the complex UI is the single biggest problem area, today, facing many mobile applications;
  • The constant need to improve the interface to large search databases, so that users can more quickly find material.

Since there is big money to be made from progressing solutions in each of these areas, we can assume that companies will be making some significant investments in the associated technology.

There’s also the prospect of a “tipping point” once some initial results demonstrate the breakthrough nature of some aspects of this field.  As J Storrs Hall puts it (in the “When” chapter of his book):

Once a baby [artificial] brain does advance far enough that it has clearly surpassed the bootstrap fallacy point… it might affect AI like the Wright brothers’ [1908] Paris demonstrations of their flying machines did a century ago.  After ignoring their successful first flight for years, the scientific community finally acknowledged it.  Aviation went from a screwball hobby to the rage of the age and kept that cachet for decades.  In particular, the amount of development took off enormously.  If we can expect a faint echo of that from AI, the early, primitive general learning systems will focus research considerably and will attract a lot of new resources.

Not only are there greater numbers of people potentially working on AI now, than ever before; they each have much more powerful hardware resources available to them.  Experiments with novel algorithms that previously would have tied up expensive and scarce supercomputers can nowadays be done on inexpensive hardware that is widely available.  (And once interesting results are demonstrated on low-powered hardware, there will be increased priority of access for variants of these same ideas to be run on today’s supercomputers.)

What’s more, the feedback mechanisms of general internet connectivity (sharing of results and ideas) and open source computing (sharing of algorithms and other source code) mean that each such researcher can draw upon greater resources than before, and participate in stronger collaborative projects.  For example, people can choose to participate in the “OpenCog” open source AI project.

Appendix: Further comments on the book “Beyond AI”

As well as making a case that progress in AI has been significant, another of the main themes of J Storrs Hall’s book “Beyond AI: Creating the conscience of the machine” is the question of whether hyperhuman AIs would be more moral than humans, as well as more intelligent.

The conclusion of his argument is, yes, these new brains will probably have a higher quality of ethical behaviour than humans have generally exhibited.  The final third of his book covers that topic, in a generally convincing way: he has a compelling analysis of topics such as free will, self-awareness, conscious introspection, and the role of ethical frameworks in avoiding the destructive aspects of free-riders.  However, critically, it all depends on how these great brains are set up with regard to core purpose, and there are no easy answers.

Roko Mijic will be addressing this same topic in the UKH+ meeting “The Friendly AI Problem: how can we ensure that superintelligent AI doesn’t terminate us?” that is being held on Saturday 23rd January.  (If you use Facebook, you can RSVP here to indicate whether you’re coming.  NB it’s entirely optional to RSVP.)

2 January 2010

Vital for futurists: hacking the earth

Filed under: books, climate change, futurist, geoengineering — David Wood @ 1:16 am

Here’s a tip for anyone seriously interested in the big issues that will dominate discussion in the next 5-10 years.  You should become familiar (if you’re not already) with the work of Jamais Cascio.  Jamais is someone who consistently has deep, interesting, and challenging things to say about the large changes that are likely to sweep over the planet in the decades ahead.

In 2003, Jamais co-founded WorldChanging.com, a website dedicated to finding and calling attention to models, tools and ideas for building a “bright green” future. In March, 2006, he started Open the Future.

One topic that Jamais has often addressed is geoengineering – sometimes also called “climate engineering”, “planetary engineering”, or “terraforming”.  Geoengineering covers a range of large-scale projects that could, conceivably, be deployed to head off the effects of runaway global warming.  Examples include launching large mirrors into space to reflect sunlight away from the earth, injecting sulphate particles into the stratosphere, brightening clouds or deserts to increase their reflectivity, and extracting greenhouse gases from the atmosphere.  It’s a thoroughly controversial topic.  But Jamais treads skilfully and thoughtfully through the controversies.

A collection of essays by Jamais on the topic of geoengineering is available in book format, under the title “Hacking the earth: understanding the consequences of geoengineering“.  It’s a slim volume, with just over 100 pages, but it packs lots of big thoughts.  While reading, I found myself nodding in agreement throughout the book.

At present, this book is only available from Lulu.com.  As Jamais says, the book is, for him:

an experiment in self-publishing…

… in recent weeks various friends have tried out – and given high marks to – web-based self-publishing outfits like Lulu.com… I thought I’d give this method a shot.

The material in the book is derived from articles published online at Open the Future and elsewhere.  Some of the big themes are as follows (the following bullet points are all excerpts from Jamais’ writing):

  • Feedback effects ranging from methane released from melting permafrost to carbon emissions from decaying remnants of forests devoured by pine beetles risk boosting greenhouse gases faster than natural compensation mechanisms can handle.  The accumulation of non-linear drivers can lead to “tipping point” events causing functionally irreversible changes to geophysical systems (such as massive sea-level increases).  Some of these can have feedback effects of their own, such as the elimination of ice caps reducing global albedo, thereby accelerating heating.
  • None of the bright green solutions — ultra-efficient buildings and vehicles, top-to-bottom urban redesigns, local foods, renewable energy systems, and the like — will do anything to reduce the anthropogenic greenhouse gases that have already been emitted. The best result we get is stabilizing at an already high greenhouse gas level. And because of ocean thermal inertia and other big, slow climate effects, the Earth will continue to warm for a couple of decades even after we stop all greenhouse gas emissions. Transforming our civilization into a bright green wonderland won’t be easy, and under even the most optimistic estimates will take at least a decade; by the time we finally stop putting out additional greenhouse gases, we could well have gone past a point where globally disastrous results are inevitable. In fact, given the complexity of climate feedback systems, we may already have passed such a tipping point, even if we stopped all emissions today.
  • Geoengineering, should it be tried, would not be a replacement for making the economic, social, and technological changes needed to eliminate anthropogenic greenhouse gases. It would only be a way of giving us more time to make those changes. It’s not an either-or situation; geo is a last-ditch prop for making sure that we can do what needs to be done.
  • We don’t know enough about how the various geoengineering proposals would play out to make a persuasive case for trying any of them.  There needs to be far more study before making any even moderate-scale experimental effort. This is not something to try today. The most important task for current geoengineering research is to identify the approaches that might look attractive at first, but have devastating results — we need to know what we should avoid even if desperate.
  • Like it or not, we’ve entered the era of intentional geoengineering. The people who believe that (re)terraforming is a bad idea need to be part of the discussion about specific proposals, not simply sources of blanket condemnations. We need their insights and intelligence. The best way to make that happen, the best way to make sure that any terraforming effort leads to a global benefit, not harm, is to open the process of studying and developing geotechnological tools.
  • Geoengineering presents more than just an environmental question. It also presents a geopolitical dilemma. With processes of this magnitude and degree of uncertainty, countries would inevitably argue over control, costs, and liability for mistakes. More troubling, however, is the possibility that states may decide to use geoengineering efforts and technologies as weapons. Two factors make this a danger we dismiss at our peril: the unequal impact of climate changes, and the ability of small states and even nonstate actors to attempt geoengineering.
  • It is possible that, should the international community refrain from geoengineering strategies, one or more smaller, non-hegemonic, actors could undertake geoengineering projects of their own. This could be out of a legitimate fear that prevention and mitigation strategies would be insufficient, out of a disagreement with the consensus over geoengineering safety or results, or—most troublingly—out of a desire to use geoengineering tools to achieve a relative increase in competitive power over adversaries.

I particularly liked Jamais’ suggestion of a “Reversibility Principle” as an alternative to the “Precautionary Principle” and “Proactionary Principle” that have previously been suggested as guidelines for deciding which actions to take, regarding the application of technology.

Geoengineering is, by its nature, a huge topic.  The “Technology Review” magazine contains a substantial analysis entitled “The Geoengineering Gambit” in its Jan-Feb 2010 edition. And the authors of Freakonomics, Stephen J Dubner and Steven Levitt, included a chapter on geoengineering in their follow-up book, “Superfreakonomics“.  As it happens, there seems to be wide consensus that the Freakonomics team were considerably too hasty in their analysis – see for example the Guardian article “Why Superfreakonomics’ authors are wrong on geo-engineering“.  But the fact that there were mistakes in that analysis doesn’t mean the topic itself should fade from view.

Far from it: I’m sure we’re going to be hearing more and more about geoengineering.  It deserves our attention!

31 December 2009

The constant economy

Filed under: books, Economics, green, leadership, market failure, vision — David Wood @ 2:54 pm

I’ve had mixed thoughts when reading Zac Goldsmith‘s “The constant economy: how to create a stable society” over the last few days.  It makes some useful contributions to an ultra-important debate.  However, the recommendations it makes frequently strike me as impractical.

Zac has been one of the advisors to the UK Conservative Party on environmental matters.  He is now the Conservative prospective parliamentary candidate for the Richmond Park constituency, which is adjacent to the one I live in.  It’s possible that his views on environmental matters will have a significant influence over the next UK government.

Some of the examples in the book made me think, “Gosh, I didn’t realise things were so bad; things can’t be left to go on like this“.  I had these thoughts when reading, for example, about the huge decline in fishing stocks worldwide, and about the enormous swathe of plastic waste in large parts of the Pacific Ocean.

Other parts, however, made me think, “Hang on, there’s another side to this story” – for example, for some of the incidents described in the chapter about the Precautionary Principle, and for the section about nuclear power.

This book is like a manifesto.  Mixed in with real-world anecdotes and analysis, each chapter contains a list of “Voter Demand Box” items.  For example, here’s the list from the chapter on “A zero waste economy”:

‘Take back’

People should have a legal ‘take back’ right enshrined in consumer law.  This would give everyone the right to take any packaging waste back to the shop it was bought from, and impose an obligation on retailers to recycle that waste once it was received.

Paying people to recycle

No more landfill

Using the right materials

Built to last

Government buying power

Incineration, a last resort

And from the chapter “An energy revolution”:

Find out the truth about oil

A cross-party taskforce should be established immediately to draw up a risk assessment.  It should not invite the traditional fuel industry to take part, as it would effectively be studying a risk scenario that says their maths is incorrect.  The taskforce should be required to publicly report its findings within a year.

At the same time, we should also expect our government to put pressure on the UN or the International Energy Agency to undertake a review of the world’s oil reserves.  If the economic models of every nation on earth are based on the assumption of everlasting oil supplies, it is reasonable that they should know how much oil actually exists.

Capture the heat

Reward the pioneers

Break the rules

Invest!

We urgently need a renewable energy fund to provide substantial grants for the research and development of radical new clean energy technologies.  From wave power to clean coal technology, potential solutions remain in the pipeline due to a lack of investment.  Government should provide that investment.  Diverting money that would otherwise be spent subsidizing fossil fuels or nuclear energy could provide billions of pounds for research, support and, crucially, for upgrading the national grid.

Stop paying the polluters

Whilst there are elements of good sense to all (or nearly all) of these recommendations, this set of items needs a lot more work:

  • The items are uncosted, and generally open-ended;
  • It’s often unclear how the recommendations differ from policies and processes that are already in place;
  • There’s no prioritisation (everything is equally important);
  • There’s no roadmap (everything is equally urgent).

Despite these weaknesses, this book still has merit as a good conversation starter.

The book’s introduction provides a higher-level picture.  Here’s the opening paragraph:

The world is in trouble.  As human numbers expand and the resource-hungry economy grows, the natural environment is suffering an unprecedented assault.  Forests are shrinking, species are disappearing, oceans are emptying, land is turning to desert.  The climate itself is being thrown out of balance.  In just a few generations, we have created the biggest threat to the natural world since humanity evolved.  Unless something radical is done now, the world in which our children grow up will be less beautiful, less bountiful, more polluted and more uncertain than ever before.

The top-level recommendations in the book are, in effect:

1.) The need for first-class political leadership on environmental issues

We need political leaders who can free themselves from the constraints of pressure groups, whose vision extends far beyond the next election, and who can motivate strong constructive action (rather than just words):

Politicians in Britain, as elsewhere, can see the rising tide of concern over green issues, and in many cases know what solutions are required.  The environment has never been so high on the political agenda…

Yet few politicians are prepared to take the action needed.  Nothing happens.  Time ticks by, the situation becomes more urgent – and government does nothing.  Why?

Politicians are terrified of acting because they believe that tackling the looming crisis will involve restricting the electorate’s choices.  They believe that saving the planet means destroying the economy, and that neither business nor voters will stand for it.  They fear the headlines of a hostile media.  They fear, ultimately, for their jobs.  It always seems easier to do nothing – and to let the situation drift and hope that someone else takes the risk…

2.) The need to adapt market economics to properly respect environmental costs

Our defining challenge is to marry the environment with the market.  In other words, we need to reform those elements of our economy that encourage us to damage, rather than nurture, the natural environment.

The great strength of the market is its unique ability to meet the economic needs of citizens.  Its weakness is that it is blind to the value of the environment…

Other than nature itself, the market is also the most powerful force for change that we have.  The challenge we face is to find ways to price the environment into our accounting system: to do business as if the earth mattered, and to make it matter not just as a moral choice but as a commercial imperative

Note: this is hardly a new message.  For example, Jonathon Porritt covered similar ground in his 2005 book (with a new edition in 2007), “Capitalism as if the world matters“.  However, Zac has a significantly simpler writing style, so his ideas may reach a wider audience; I confess I twice got bogged down in the early stages of Jonathon’s book, and set it aside without reading further.

3.) The need for better use of market-based instruments such as taxation

We need to change the boundaries within which the market functions, by using well-targeted regulation.

Taxation is the best mechanism for pricing pollution and the use of scarce resources.  If tax shifts emphasis from good things like employment to bad things like pollution, companies will necessarily begin designing waste and pollution out of the way they operate…

The other major tool in the policymakers’ kit is trading.  Carbon emissions trading is a good example of a market-based approach which attaches a value to carbon emissions and ensures that buyers and sellers are exposed to this price.  As long as the price is high enough to influence decisions, it can work…

Note: it’s clear that the existing carbon trading scheme has lots of problems (as Zac describes later in the book).  That’s a reason to push on quickly to a more effective replacement.
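To make that mechanism concrete, here’s a minimal sketch – my own, with invented numbers; nothing like it appears in the book – of the decision a carbon price is meant to tip.  A cost-minimising firm cuts a tonne of emissions only when doing so is cheaper than paying for the right to emit it:

```python
# A minimal sketch (my own invented numbers, not from the book) of how a
# carbon price steers a firm's behaviour.

def cheaper_to_abate(abatement_cost_per_tonne: float,
                     carbon_price_per_tonne: float) -> bool:
    """A cost-minimising firm abates a tonne of CO2 whenever doing so
    costs less than buying an allowance (or paying the tax) for it."""
    return abatement_cost_per_tonne < carbon_price_per_tonne

ABATEMENT_COST = 25  # hypothetical cost, in pounds, of cutting one tonne

for carbon_price in (5, 15, 40):  # hypothetical allowance prices, pounds/tonne
    choice = "abate" if cheaper_to_abate(ABATEMENT_COST, carbon_price) else "buy allowances"
    print(f"carbon price £{carbon_price}/tonne -> {choice}")
```

At £5 or £15 per tonne the firm simply buys allowances and carries on polluting; only at £40 does abatement win.  That’s the whole force of Zac’s caveat: the scheme works only if the price is high enough to bite.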

There’s also a latent worry over Zac’s confident recommendation:

It’s crucial that whatever money is raised on the back of taxing ‘bad’ activities is used to subsidise desirable activities.  For example, if a new tax is imposed on the dirtiest cars, it needs to be matched, pound for pound, by reductions in the price of the cleanest cars.

The complication is that once the higher taxation drives down usage of (in this example) the dirtiest cars, the government’s tax revenue falls too, and the “pound for pound” balance breaks – as the rough sketch below illustrates.  It’s another example of how the ideas in the book lack detailed financial planning.  Presumably Zac intends these details to be provided at a later stage.
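Here’s that sketch: a back-of-envelope calculation with entirely invented figures (mine, not Zac’s).  If the subsidy pledge is fixed at year-one revenue, and the tax does its job of shrinking sales of the dirtiest cars, the matched funding falls short almost immediately:

```python
# A rough back-of-envelope sketch (all figures invented, not Zac's) of why
# a 'pound for pound' pledge breaks: a successful tax shrinks its own base.

dirty_car_sales = 1_000_000   # hypothetical year-one sales of the dirtiest cars
TAX_PER_CAR = 500             # pounds of new tax per dirty car
pledge = dirty_car_sales * TAX_PER_CAR   # subsidy matched to year-one revenue

for year in range(1, 6):
    revenue = dirty_car_sales * TAX_PER_CAR
    print(f"year {year}: revenue £{revenue:,}, "
          f"shortfall against the pledge £{pledge - revenue:,}")
    dirty_car_sales = int(dirty_car_sales * 0.8)  # assume the tax cuts sales by 20% a year
```

The better the tax works, the bigger the gap.  ‘Pound for pound’ can only ever be a year-one promise, unless the rates are continually rejigged.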

4.) The need for a fresh approach to regulation

Direct controls force polluting industries to improve their performance, and can eliminate products or practices that are particularly hazardous…  Markets without regulation would not have delivered unleaded petrol, for instance, or catalytic converters.  Without regulations requiring smokeless fuel, London’s smogs would still be with us.

This approach, however, needs to be effective.  With some products and processes, the regulatory bar needs to be raised internationally to avoid companies chasing the lowest standards globally.  We also need a change in our regulatory approach, away from an obsessive policing of processes towards a focus on outcomes.  If the regulatory system is too prescriptive, there is no room for innovation, and no real prospect of higher environmental standards…

5.) The need to measure what matters

Almost every nation on earth uses gross domestic product (GDP) to measure its economic growth.  The trouble is, expressed as a monetary value, GDP simply measures economic transactions, indiscriminately.  It cannot tell the difference between useful transactions and damaging ones…

Chopping down a rainforest and turning it into toilet paper increases GDP.  If crime escalates, the resulting investments in prisons and private security will add to GDP and be measured as ‘growth’.  When the Exxon Valdez oil tanker ran aground and spilt its vast load of oil on the pristine Alaskan shoreline, US GDP actually soared as legal work, media coverage and clean-up costs were all added to the national accounts…

US Senator Robert Kennedy made a similar point.  “GDP does not allow for the health of our children, the quality of their education, or the joy of their play”, he said.  “It does not include the beauty of our poetry or the strength of our marriages, the intelligence of our public debate or the integrity of our public officials.  It measures neither our wit nor our courage, neither our wisdom nor our learning, neither our compassion nor our devotion to our country; it measures everything, in short, except that which makes life worthwhile.”

But the pursuit of economic growth, as measured by GDP, has been the overriding policy for decades, and the consequences have often been perverse…

A number of organisations have tried to assemble a new tool for measuring progress.  But the result is invariably a toolkit that is monstrous in its complexity and impractical for any government to use.  A neater approach would be for the government to establish a wholly independent Progress Commission, staffed by experts from a wide variety of fields: economists, environmentalists, statisticians, academics, etc…

Whichever indicators are selected, the results would be handed each year to Parliament and the media.  The government would be required to respond…

Note: again, the suggested practical follow-up seems weaker than the analysis of the problem itself.  The economy has been ultra-optimised to pursue growth in GDP.  That’s how businesses are set up.  That’s going to prove very difficult to change.  Attention to non-financial matters is very likely to be squeezed.
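To see the accounting point in miniature, here’s a toy calculation of my own – the figures are invented, and the adjustment shown is just one crude variant of the ‘genuine progress’ style of indicator, not anything proposed in the book.  GDP adds every transaction alike; an adjusted measure nets off ‘defensive’ spending such as clean-ups and security:

```python
# A toy calculation (my own invented figures) of GDP's indifference to what
# is being spent on.  The adjustment crudely mimics 'genuine progress' style
# indicators, which deduct defensive expenditure rather than count it as growth.

transactions = {                      # hypothetical national accounts, in £bn
    "consumer goods and services": 900,
    "education":                   100,
    "oil-spill clean-up":           40,
    "prisons and private security": 60,
}
DEFENSIVE = {"oil-spill clean-up", "prisons and private security"}

gdp = sum(transactions.values())      # counts the clean-up as 'growth'
adjusted = gdp - sum(v for k, v in transactions.items() if k in DEFENSIVE)

print(f"GDP: £{gdp}bn; defensive-spending-adjusted measure: £{adjusted}bn")
```

Of course, any real Progress Commission would argue long and hard over which items count as ‘defensive’ – which is exactly how such toolkits become monstrous in their complexity.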

However, it’s surely good to have the underlying problem highlighted once again.  Robert Kennedy’s stirring words ring as clearly today as when they were first spoken, in March 1968.

Let’s keep these words in mind, until we are confident that society is set up to pursue what matters, rather than simply to boost GDP.

Further reading: The book has its own website, with a blog attached.

