dw2

17 January 2010

Embracing engineering for the whole earth

Filed under: books, climate change, Genetic Engineering, geoengineering, green, Nuclear energy — David Wood @ 2:14 am

One thing I’m trying to do with my blog is to provide useful pointers into the vast amount of material that’s available both online and offline: to the small fraction of that material which does the best job of summarising, extending, and challenging current thinking.

“Whole Earth Discipline: an ecopragmatist manifesto“, the recent book by veteran ecologist and environmentalist Stewart Brand, comprehensively fits that criterion.  It is so full of insight that virtually every page contains not just one but several blogworthy quotes, ideas, facts, putdowns, and/or refutations.  It’s that good.  I could write a book-length blogpost singing its praises.

Brand turned 70 while writing this book.  In the book, he indicates that he has changed his mind as he grew older.  The book serves as a landmark for various changes of mind for the environmental movement as a whole.  The argument is sustained, easy-to-read, detailed, and compelling.

The core argument is that the future well-being of the whole planet – human societies embedded in biological ecosystems – requires a thoroughgoing embrace of an engineering mindset.  Specifically, the environmental movement needs to recognise:

  • That the process of urbanisation – the growth of cities, even in apparently haphazard ways – provides good solutions to many worries about over-population;
  • That nuclear energy will play a large role in providing clean, safe, low-carbon energy;
  • That GE (genetic engineering) will play a large role in providing safe, healthy, nutritious food and medicine;
  • That the emerging field of synthetic biology can usefully and safely build upon what’s already being accomplished by GE;
  • That methods of geoengineering will almost certainly play a part in heading off the world’s pending climate change catastrophe.

The book has an objective and compassionate tone throughout.  At times it squarely accuses various environmentalists of severe mistakes – particularly in aspects of their opposition to GE and nuclear energy – mistakes that have had tragic consequences for developing societies around the world.  It’s hard to deny the charges.  I sincerely hope that the book will receive a wide readership, and will cause people to change their minds.

The book doesn’t just provide advocacy for some specific technologies.  More than that, it makes the case for changes in mindset:

  • It highlights major limitations to the old green mantra that “small is beautiful”;
  • It unpicks various romantic notions about the lifestyles and philosophies of native peoples (such as the American Indians);
  • It shows the deep weakness of the “precautionary principle”, and proposes its own alternative approach;
  • It emphasises how objections to people “playing God” are profoundly misguided.

Indeed, the book starts with the quote:

We are as gods and HAVE to get good at it.

It concludes with the following summary:

Ecological balance is too important for sentiment.  It requires science.

The health of the natural infrastructure is too compromised for passivity.  It requires engineering.

What we call natural and what we call human are inseparable.  We live one life.

And what is an engineer?  Brand states:

Romantics love problems; scientists discover and analyze problems; engineers solve problems.

As I read this book, I couldn’t help comparing it to “The constant economy” by Zac Goldsmith, which I read a few weeks ago.  The two books share many concerns about the unsustainable lifestyles presently being practiced around the world.  There are a few solutions in common, too.  But the wide distrust of technology shown by Goldsmith is amply parried by the material that Brand marshals.  And the full set of solutions proposed by Brand is much more credible than the set proposed by Goldsmith.  Goldsmith has been a major advisor to the UK Conservative Party on environmental matters.  If any UK party could convince me that they thoroughly understand, and intend to implement, the proposals in Brand’s book, I would be deeply impressed.

Note: an annotated reference companion to the book is available online, at www.sbnotes.com.  It bristles with useful links.  There’s also a 16-minute TED video, “Stewart Brand proclaims 4 environmental ‘heresies’”, which is well worth viewing.

Thanks to Marc Gunther, whose blogpost “Why Stewart Brand’s new book is a must-read” alerted me to this book.

By a fortunate coincidence, Brand will be speaking at the RSA in London on Tuesday.  I’m anticipating a good debate from the audience.  An audio feed from the meeting will be broadcast live.

13 January 2010

AI: why, and when

Filed under: AGI, usability — David Wood @ 4:26 pm

Here’s a good question, raised by Paul Beardow:

One question that always rattles around in my mind is “why are we trying to recreate the human mind anyway?” We have billions of those already…

You can build something that appears to be human, but what is the point of that? Why chase a goal that doesn’t actually provide us with more than we have already?

Paul also says,

What I don’t want is AI in products so that they have their own personality, but a better understanding of my own wishes and desires in how that product should interact with me…

I personally also really don’t think that logic by itself can lead to a system that can evolve human-like imagination, feelings or personality, nor that the human mind can be reduced to being a machine. It has elementary parts, but the constant rebuilding and evolving of information doesn’t really follow any logical rules that can be programmed. The structure of the brain depends on what happens to us during the day and how we interpret it according to the situation. That defies logic most of the time and is constantly evolving and changing.

My answer: there are at least six reasons why people are pursuing the goal of human-like AI.

1. Financial savings in automated systems

We’re already used to encountering automated service systems when using the phone (eg to book a cinema ticket: “I think you’re calling about Kingston upon Thames – say Yes or No”) or when navigating a web form or other user interface.  These systems provoke a mixture of feelings in the people who use them.  I often become frustrated, thinking it would be faster to speak directly to a “real human being”.  But on other occasions the automation works surprisingly well.

Widening the applicability of such systems to more open-ended environments will require engineering much more human-style “common sense” into them.  The research to accomplish this may cost lots of money, but once it’s working, it could enable considerable cost savings in service provision, as real human beings can be replaced in a system by smart pieces of silicon.
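To make the “common sense” gap concrete, here’s a minimal sketch of how today’s automated phone systems often behave: crude keyword matching against a handful of hand-written intents.  The intent names and keyword lists below are invented for illustration; real systems use statistical models, but they fail in a similar way the moment a caller strays outside the anticipated phrases.

```python
# Hypothetical intents for a cinema-booking phone line; the intent names
# and keyword sets are invented for this illustration.
INTENTS = {
    "book_ticket": {"book", "ticket", "seat", "showing"},
    "opening_hours": {"open", "hours", "time", "close"},
    "refund": {"refund", "cancel", "money"},
}

def classify(utterance: str) -> str:
    """Pick the intent whose keyword set overlaps the utterance most."""
    words = set(utterance.lower().split())
    best_intent, best_score = "unknown", 0
    for intent, keywords in INTENTS.items():
        score = len(words & keywords)
        if score > best_score:
            best_intent, best_score = intent, score
    return best_intent

print(classify("I want to book a ticket for the 8pm showing"))  # book_ticket
print(classify("What time do you close today?"))                # opening_hours
```

Anything outside the keyword lists falls straight through to “unknown” — which is exactly where the frustration with these systems comes from, and exactly the gap that engineered common sense would need to fill.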

2. Improving game play

A related motivation is as follows: games designers want to program human-level intelligence into characters in games, so that these artificial entities manifest many of the characteristics of real human participants.

By the way: electronic games are big money!  As the description of tonight’s RSA meeting “Why games are the 21st century’s most serious business” puts it:

Why should we be taking video games more seriously?

  • In 2008 Nintendo overtook Google to become the world’s most profitable company per employee.
  • The South Korean government will invest $200 billion into its video games industry over the next 4 years.
  • The trading of virtual goods within games is a global industry worth over $10 billion a year.
  • Gaming boasts the world’s fastest-growing advertising market.

3. Improved user experience with complex applications

As well as reducing cost, human-level AI can in principle improve the experience of users while interacting with complex applications.

Rather than users thinking, “No you stupid machine, why don’t you realise what I’m trying to do…”, they will be pleasantly surprised: “Ah yes, that was in fact what I was trying to accomplish – how did you manage to figure that out?”

It’s as Paul says:

What I … want … in products [is]… a better understanding of my own wishes and desires in how that product should interact with me

These are products with (let’s say it) much more “intelligence” than at present.  They observe what is happening, and can infer motivation.  I call this AI.

4. A test of scientific models of the human mind

A different kind of motivation for studying human-level AI is to find ways of testing our understanding of the human mind.

For example, I think that creativity can be achieved by machines, following logical rules.  (The basic rules are: generate lots of ideas, by whatever means, and then choose the ideas which have interesting consequences.)  But it is good to test this.  So, computers can be programmed to mimic the possible thought patterns of great composers, and we can decide whether the output is sufficiently “creative”.
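The two rules above can be sketched directly in code.  The following toy “composer” is purely illustrative: the scoring heuristic is an invented stand-in for the much harder problem of judging which consequences are actually interesting.

```python
import random

random.seed(0)  # make the sketch reproducible

NOTES = ["C", "D", "E", "F", "G", "A", "B"]

def generate_melody(length=8):
    """Rule 1: generate ideas, by whatever means (here, at random)."""
    return [random.choice(NOTES) for _ in range(length)]

def interestingness(melody):
    """Rule 2: a toy stand-in for judging 'interesting consequences' --
    reward variety of notes, penalise immediate repetition."""
    variety = len(set(melody))
    repeats = sum(1 for a, b in zip(melody, melody[1:]) if a == b)
    return variety - 2 * repeats

# Generate lots of ideas...
candidates = [generate_melody() for _ in range(1000)]
# ...then choose the idea with the most interesting consequences.
best = max(candidates, key=interestingness)
print(best, interestingness(best))
```

The hard part, of course, is hidden inside `interestingness`: a serious system would need a far richer model of musical consequence, which is precisely what makes this a good test of our understanding.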

(There’s already quite a lot of research into this.  For one starting point, see the EE Times article “Composer harnesses artificial intelligence to create music“.)

Similarly, it will be fascinating to hear the views of human-level AIs about (for example) the “Top 5 Unsolved Brain Mysteries“.

5. To find answers to really tough, important questions

The next motivation concerns the desire to create AIs with considerably greater than human-level intelligence.  Assuming that human-level AI is a point en route to that next destination, it’s therefore an indirect motivation for creating human-level AI.

The motivation here is to ask superAIs for help with really tough, difficult questions, such as:

  • What are the causes – and the cures – for different diseases?
  • Are there safe geoengineering methods that will head off the threat of global warming, without nasty side effects?
  • What changes, if any, should be made to the systems of regulating the international economy, to prevent dreadful market failures?
  • What uses of nanotechnology can be recommended, to safely boost the creation of healthy food?
  • What is the resolution of the conflict between theories of gravity and theories of all the other elementary forces?

6. To find ways of extending human life and expanding human experience

If the above answers aren’t sufficient, here’s one more, which attracts at least some researchers to the topic.

If some theories of AI are true, it might be possible to copy human awareness and consciousness from residence in a biological brain into residence inside silicon (or some other new computing substrate).  If so, then it may open new options for continued human consciousness, without having to depend on the frailty of a decaying human body.

This may appear a very slender basis for hope for significantly longer human lifespan, but it can be argued that all the other bases for such hope are equally slender, if not even less plausible.

OK, that’s enough answers for “why”.  But what about the question “when”?

In closing, let me quickly respond to a comment by Martin Budden:

I’m not saying that I don’t believe that there will be advances in AI. On the contrary I believe, in the course of time, there will be real and significant advances in “general AI”. I just don’t believe that these advances will be made in the next decade.

What I’d like, at this point, is to be able to indicate some kind of provisional roadmap (also known as “work breakdown”) for when stepping stones of progress towards AGI might happen.

Without such a roadmap, it’s too difficult to decide when larger steps of progress are likely.  It’s just a matter of different people appearing to have different intuitions.

To be clear, discussions of Moore’s Law aren’t sufficient to answer this question.  Progress with the raw power of hardware is one thing, but what we need here is an estimate of progress with software.

Sadly, I’m not aware of any such breakdown.  If anyone knows one, please speak up!

Footnote: I guess the best place to find such a roadmap will be at the forthcoming “Third Conference on Artificial General Intelligence” being held in Lugano, Switzerland, on 5-8 March this year.

Top of the list: the biggest impact

Filed under: books, democracy, Humanity Plus, UKTA — David Wood @ 12:46 am

I recently published a list of the books that had made the biggest impact on me, personally, over the last ten years.  I left one book out of that list – the book that impacted me even more than any of the others.

The book in question was authored by Dr James J. Hughes, a sociologist and bioethicist who teaches Health Policy at Trinity College in Hartford, Connecticut.  In his spare time, James is the Executive Director of the IEET – the Institute for Ethics and Emerging Technologies.

The title of the book is a bit of a mouthful:

“Citizen Cyborg: Why Democratic Societies Must Respond to the Redesigned Human of the Future”

I came across this book in October 2005.  The ideas in the book started me down a long path of further exploration:

I count this book as deeply impactful for me because:

  1. It was the book that led to so many other things (as just listed);
  2. When I look back at the book in 2010, I find several key ideas in it which I now take for granted (but I had forgotten where I learned them).

An indication of the ideas the book contains may be found in an online copy of its Introduction, and in a related essay, “Democratic Transhumanism 2.0”.

The book goes far beyond just highlighting the potential of new technologies – including genetic engineering, nanotechnology, cognitive science, and artificial intelligence – to significantly enhance human experience.  The book also contains a savvy account of the role of politics in supporting and enabling human change.

To quote from the Introduction:

This book argues that transhuman technologies – technologies that push the boundaries of humanness – can radically improve our quality of life, and that we have a fundamental right to use them to control our bodies and minds.  But to ensure these benefits we need to democratically regulate these technologies and make them equally available in free societies.  Becoming more than human can improve all our lives, but only new forms of transhuman citizenship and democracy will make us free, more equal, and more united.

A lot of people are understandably frightened by the idea of a society in which unenhanced humans will need to coexist with humans who are smarter, faster, and more able, not to mention robots and enhanced animals…

The “bioLuddite” opposition to genetic engineering, nanotechnology, and artificial intelligence, slowly building and networking since the 1960s, picked up where the anti-industrialisation Luddites left off in the nineteenth century.  While Luddites believed that defending workers’ rights required a ban on the automation of work, the bioLuddites believe genetic engineering and human enhancement technologies cannot be used safely, and must be banned…

The emerging “biopolitical” polarisation between bioLuddites and transhumanists will define twenty-first century politics…

People will be happiest when they individually and collectively exercise rational control of the social and natural forces that affect their lives.  The promise of technological liberation, however, is best achieved in the context of a social democratic society committed to liberty, equality, and solidarity…

Boing Boing author Cory Doctorow makes some good points in his review of “Citizen Cyborg”:

I’ve just finished a review copy of James Hughes’s “Citizen Cyborg: Why Democratic Societies Must respond to the Redesigned Human of the Future.” I was skeptical when this one arrived, since I’ve read any number of utopian wanks on the future of humanity and the inevitable withering away of the state into utopian anarchism fueled by the triumph of superior technology over inferior laws.

But Hughes’s work is much subtler and more nuanced than that, and was genuinely surprising, engaging and engrossing…

Hughes’s remarkable achievement in “Citizen Cyborg” is the fusion of social democratic ideals of tempered, reasoned state intervention to promote equality of opportunity with the ideal of self-determination inherent in transhumanism. Transhumanism, Hughes convincingly argues, is the sequel to humanism, and to feminism, to the movements for racial and gender equality, for the fight for queer and transgender rights — if you support the right to determine what consenting adults can do with their bodies in the bedroom, why not in the operating theatre?

Much of this book is taken up with scathing rebuttal to the enemies of transhumanism — Christian lifestyle conservatives who’ve fought against abortion, stem-cell research and gay marriage; as well as deep ecologist/secular lefty intelligentsia who fear the commodification of human life. He dismisses the former as superstitious religious thugs who, a few generations back, would happily decry the “unnatural” sin of miscegenation; to the latter, he says, “You are willing to solve the problems of labor-automation with laws that ensure a fair shake for working people — why not afford the same chance to life-improving techno-medicine?”

The humanist transhuman is a political stance I’d never imagined, but having read “Citizen Cyborg,” it seems obvious and natural. Like a lot of basically lefty geeks, I’ve often felt like many of my ideals were at odds with both the traditional left and the largely right-wing libertarians. “Citizen Cyborg” squares the circle, suggest a middle-path between them that stands foursquare for the improvement of the human condition through technology but is likewise not squeamish about advocating for rules, laws and systems that extend a fair opportunity to those less fortunate…

The transformation of politics Hughes envisions is from a two-dimensional classification to a three-dimensional classification.

The first two dimensions are “Economic politics” and “Cultural politics”, with a spectrum (in each case) from conservative to progressive.

The new dimension, which will become increasingly significant, is “Biopolitics”.  Hughes uses the label “bioLuddism” for the conservative end of this spectrum, and “Transhumanism” for the progressive end.

The resulting cube has eight vertices, which include both “Left bioLuddism” and “Right bioLuddism”, as well as both “Libertarian transhumanism” and “Democratic transhumanism”.
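The cube can be enumerated mechanically.  The axis names below follow Hughes’s classification as described above; the pole labels are lightly simplified for the sketch.

```python
from itertools import product

# The three political axes described in the text, each with its
# conservative and progressive poles.
AXES = {
    "Economic": ("conservative", "progressive"),
    "Cultural": ("conservative", "progressive"),
    "Biopolitics": ("bioLuddism", "Transhumanism"),
}

# Enumerate the eight vertices of the resulting cube.
vertices = list(product(*AXES.values()))
for combo in vertices:
    print(dict(zip(AXES.keys(), combo)))

print(len(vertices))  # 8
```

“Left bioLuddism”, “Right bioLuddism”, “Libertarian transhumanism” and “Democratic transhumanism” each correspond to one of these eight corners, fixed by their positions on the economic and biopolitical axes.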

Interestingly, the ramp-up of political debate in the United Kingdom, ahead of the parliamentary election that will take place some time before summer, has served as a reminder that the “old” political divisions seem inadequate to deal with the challenges of the current day.  It’s harder to discern significant real differences between the major parties.  I still don’t have any strong views as to which party I should vote for.  My guess is that each of the major parties will contain a split of views regarding the importance of enhancement technologies.

I’ll give the final words to James Hughes – from the start of Chapter 7 in his book:

The most important disagreement between bioLuddites and transhumanists is over who we should grant citizenship, with all its rights and protections.  BioLuddites advocate “human-racism”, that citizenship and rights have something to do with simply having a human genome.  Transhumanists… believe citizenship should be based on “personhood”, having feelings and consciousness.  The struggle to replace human-racism with personhood can be found at the beginnings and ends of life, and at the imaginary lines between humans and animals, and between humans and posthumans.  Because they have not adopted the personhood view, the human-racists are disturbed by lives that straddle the imaginary human/non-human line.  But technological advances at each of these margins will force our society in the coming decades to complete the trajectory of 400 years of liberal democracy and choose “cyborg citizenship”.

11 January 2010

AI, buggy software, and the Singularity

Filed under: AGI, Singularity — David Wood @ 12:00 am

I recently looked at three questions about the feasibility of significant progress with AI.  I’d like to continue that investigation, by looking at four more questions.

Q4: Given that all software is buggy, won’t this prevent the creation of any viable human-level AI?

Some people with a long involvement in software aren’t convinced that we can write software of sufficient quality at the level of complexity required for AI at the human level (or beyond).  It seems to them that complex software is too unreliable.

It’s true that the software we use on a day-by-day basis – whether on a desktop computer, on a mobile phone, or via a web server – tends to manifest nasty bugs from time to time.  The more complex the system, the greater the likelihood of debilitating defects in the interactions between different subcomponents.

However, I don’t see this observation as ruling out the development of software that can manifest advanced AI.  That’s for two reasons:

First, different software projects vary in their required quality level.  Users of desktop software have become at least partially tolerant of defects in that software.  As users, we complain, but it’s not the end of the world, and we generally find workarounds.  As a result, manufacturers release software even though there are still bugs in it.  However, for mission-critical software, the quality bar is pushed a lot higher.  Yes, it’s harder to create high-reliability software; but it can be done.

There are research projects underway to bring significantly higher quality software to desktop systems too.  For example, here’s a description of a Microsoft Research project, which is (coincidentally) named “Singularity”:

Singularity is a research project focused on the construction of dependable systems through innovation in the areas of systems, languages, and tools. We are building a research operating system prototype (called Singularity), extending programming languages, and developing new techniques and tools for specifying and verifying program behavior.

Advances in languages, compilers, and tools open the possibility of significantly improving software. For example, Singularity uses type-safe languages and an abstract instruction set to enable what we call Software Isolated Processes (SIPs). SIPs provide the strong isolation guarantees of OS processes (isolated object space, separate GCs, separate runtimes) without the overhead of hardware-enforced protection domains. In the current Singularity prototype SIPs are extremely cheap; they run in ring 0 in the kernel’s address space.

Singularity uses these advances to build more reliable systems and applications. For example, because SIPs are so cheap to create and enforce, Singularity runs each program, device driver, or system extension in its own SIP. SIPs are not allowed to share memory or modify their own code. As a result, we can make strong reliability guarantees about the code running in a SIP. We can verify much broader properties about a SIP at compile or install time than can be done for code running in traditional OS processes. Broader application of static verification is critical to predicting system behavior and providing users with strong guarantees about reliability.

There would be a certain irony if techniques from the Microsoft Singularity project were used to create a high-reliability AI system that in turn was involved in the Technological Singularity.
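The isolation guarantee described in the quote can be loosely mimicked, by analogy only, with ordinary OS processes that share no memory and communicate purely by message passing.  (Singularity’s SIPs achieve isolation through language-level verification rather than hardware protection, so this Python sketch illustrates only the principle, not the mechanism.)

```python
from multiprocessing import Process, Queue

def worker(inbox: Queue, outbox: Queue):
    """A 'sealed' worker: its own address space, no shared state;
    all interaction happens through explicit messages."""
    while True:
        msg = inbox.get()
        if msg is None:  # shutdown signal
            break
        outbox.put(msg * 2)

if __name__ == "__main__":
    inbox, outbox = Queue(), Queue()
    p = Process(target=worker, args=(inbox, outbox))
    p.start()
    inbox.put(21)
    print(outbox.get())  # 42
    inbox.put(None)
    p.join()
```

A crash inside `worker` cannot corrupt the parent’s memory; the worst it can do is stop answering messages, which the parent can detect and recover from.  That containment of failure is the reliability property the Singularity project aims for, at far lower cost.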

Second, even if software has defects, that doesn’t (by itself) prevent it from being intelligent.  After all, the human brain itself has many defects – see my blogpost “The human mind as a flawed creation of nature“.  Sometimes we think much better after a good night’s rest!  The point is that the AI algorithms can include aspects of fault tolerance.
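The fault-tolerance point can be illustrated with a classic technique: run an unreliable component several times and take a majority vote over the answers.  The failure model below (an adder that occasionally returns garbage) is invented purely for the sketch.

```python
import random
from collections import Counter

random.seed(42)  # deterministic for the sketch

def flaky_add(a, b, error_rate=0.2):
    """A deliberately unreliable component: sometimes returns garbage."""
    if random.random() < error_rate:
        return a + b + random.randint(1, 10)  # a fault
    return a + b

def fault_tolerant_add(a, b, replicas=7):
    """Run the unreliable component several times; majority vote wins."""
    results = Counter(flaky_add(a, b) for _ in range(replicas))
    return results.most_common(1)[0][0]

print(fault_tolerant_add(2, 3))  # almost always 5, despite the faults
```

With a 20% fault rate, any single call is wrong one time in five; with seven voting replicas the correct answer dominates the tally almost always, because the faulty answers scatter across different wrong values.  Biological brains appear to exploit redundancy of broadly this kind.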

Q5: Given that we’re still far from understanding the human mind, aren’t we bound to be a long way from creating a viable human-level AI?

It’s often said that the human mind has deeply mysterious elements, such as consciousness, self-awareness, and free will.  Since there’s little consensus about these aspects of the human mind, it’s said to be unlikely that a computer emulation of these features will arrive any time soon.

However, I disagree that we have no understanding of these aspects of the human mind.  There’s a broad consensus, among philosophers and practitioners alike, that the main operation of the human mind is well explained by one or other variant of “physicalism”.  As the Wikipedia article on the Philosophy of Mind states:

Most modern philosophers of mind adopt either a reductive or non-reductive physicalist position, maintaining in their different ways that the mind is not something separate from the body. These approaches have been particularly influential in the sciences, especially in the fields of sociobiology, computer science, evolutionary psychology and the various neurosciences…

Reductive physicalists assert that all mental states and properties will eventually be explained by scientific accounts of physiological processes and states. Non-reductive physicalists argue that although the brain is all there is to the mind, the predicates and vocabulary used in mental descriptions and explanations are indispensable, and cannot be reduced to the language and lower-level explanations of physical science. Continued neuroscientific progress has helped to clarify some of these issues.

The book I mentioned previously, “Beyond AI” by J Storrs Hall, devotes several chapters to filling in aspects of this explanation.

It’s true that there’s still scope for head-scratching debates on what philosopher David Chalmers calls “the hard problem of consciousness”, which has various formulations:

  • “Why should physical processing give rise to a rich inner life at all?”
  • “How is it that some organisms are subjects of experience?”
  • “Why does awareness of sensory information exist at all?”
  • “Why is there a subjective component to experience?”…

However, none of these questions, by themselves, should prevent the construction of a software system that will be able to process questions posed in natural human language, and to give high-quality, humanly-understandable answers.  When that happens, the system will very probably seek to convince us that it has a similar inner conscious life to the one we have.  As J. Storrs Hall says, we’ll probably believe it.

Q6: Is progress with narrow fields of AI really relevant to the problem of general AI?

Martin Budden comments:

I don’t consider the advances in machine translation over the past decade an advance in AI, I more consider them the result of brute force analysis on huge quantities of text. I wouldn’t consider a car that could safely drive itself along a motorway an advance in AI, rather it would be the integration of a number of existing technologies. I don’t really consider the improvement of an algorithm that does a specific thing (search, navigate, play chess) an advance in AI, since generally such an improvement cannot be used outside its narrow field of application.

My own view is that these advances do help, in the spirit of “divide and conquer”.  I see the human mind as being made up of modules, rather than being some intractable whole.  Improving ability in, for example, translating text, or in speech recognition, will help set the scene for eventual general AI.

It’s true that some aspects of the human mind will prove harder to emulate than others – such as the ability to notice and form new concepts.  It may be the case that a theoretical breakthrough with this aspect will enable much faster overall progress, which will be able to leverage the work done on other modules.

Q7: With so many unknowns, isn’t all this speculation about AI futile?

It’s true that no one can predict, with any confidence, the date at which specific breakthrough advances in general AI are likely to happen.  The best that someone can achieve is a distribution of different dates with different probabilities.

However, I don’t accept any argument that “there’s been no fundamental breakthroughs in the last sixty years, so there can’t possibly be any fundamental breakthroughs in (say) the next ten years”.  That would be an invalid extrapolation.

That would be similar to the view expressed in 1903 by the distinguished astronomer and mathematician Simon Newcomb:

“Aerial flight is one of that class of problems with which man can never cope.”

Newcomb was no fool: he had good reasons for his scepticism.  As explained in the Wikipedia article about Newcomb:

In the October 22, 1903 issue of The Independent, Newcomb wrote that even if a man flew he could not stop. “Once he slackens his speed, down he begins to fall. Once he stops, he falls as a dead mass.” In addition, he had no concept of an airfoil. His “aeroplane” was an inclined “thin flat board.” He therefore concluded that it could never carry the weight of a man. Newcomb was specifically critical of the work of Samuel Pierpont Langley, who claimed that he could build a flying machine powered by a steam engine and whose initial efforts at flight were public failures…

Newcomb, apparently, was unaware of the Wright Brothers efforts whose [early] work was done in relative obscurity.

My point is that there does not seem to be any valid fundamental reason why the functioning of a human mind cannot be emulated via software; we may be just two or three good breakthroughs away from solving the remaining key challenges.  With the close attention of many commercial interests, and with the accumulation of fragments of understanding, the chances improve of some of these breakthroughs happening sooner rather than later.

10 January 2010

The most impactful books of the last decade

Filed under: books — David Wood @ 2:08 am

My love affair with books and book reviews grows out of my admiration for the collective knowledge, insight, and wisdom that human civilisation is accumulating.  Knowledge grows through the process of books being written, reviewed, criticised, and (where appropriate) treasured.

Over the last ten years, many books have struck me in various ways, causing me to revise key parts of my own worldview, or to gain valuable new ways of making sense of trends in business, technology, society, and culture.  In short, these books have made me wiser.

In this posting, to mark the start of a new decade, I list the books I remember as making the biggest impact on me personally, over the last ten years.  They’re all books that stand the test of time, with continued relevance.

I’m saving the single most impactful book for a later posting – it’s excluded from the list below.  I’ve arranged the remaining books below in alphabetical order by title.

All but two of the books are non-fiction.

Breaking Windows: How Bill Gates Fumbled the Future of Microsoft – by David Bank

This book is probably as relevant today – when people are pondering the seeming declining influence of Microsoft – as it was in the days near the start of the last decade, when I first read it.  I remember a sense of wonder at how Microsoft placed so much emphasis on one principle – emphasising the Windows APIs and brand presence – at the cost of significantly hurting other products which Microsoft could have developed.  That’s the sense in which, according to the author, the future of Microsoft was “fumbled”.

Microsoft’s relatively poor performance in the smartphone OS world can, arguably, be traced to decisions that are documented in this book.

Darwin’s Cathedral: Evolution, Religion, and the Nature of Society – by David Sloan Wilson

This book has sweeping scope, but makes its case very well.  The case is that religion has in general survived inasmuch as it helped groups of people to achieve greater cohesion and thereby acquire greater fitness compared to other groups of people.  This kind of religion has practical effect, independent of whether or not its belief system corresponds to factual reality.  (It can hardly be denied that, in most cases, the belief system does not correspond to factual reality.)

The book has some great examples – from the religions in hunter-gatherer societies, which contain a powerful emphasis on sharing out scarce resources completely equitably, through examples of religions in more complex societies.  The chapter on John Calvin was eye-opening (describing how his belief system brought stability and prosperity to Geneva) – as were the sections on the comparative evolutionary successes of Judaism and early Christianity.  But perhaps the section on the Balinese water-irrigation religion is the most fascinating of the lot.

Of course, there are some other theories for why religion exists (and is so widespread), and this book gives credit to these theories in appropriate places.  However, this pro-group selection explanation has never before been set out so carefully and credibly, and I think it’s no longer possible to deny that it plays a key role.

The discussion makes it crystal clear why many religious groups tend to treat outsiders so badly (despite treating insiders so well).  It also provides a fascinating perspective on the whole topic of “forgiveness”.  Finally, the central theme of “group selection” is given a convincing defence.

Ending Aging: The Rejuvenation Breakthroughs That Could Reverse Human Aging in Our Lifetime – by Aubrey de Grey and Michael Rae

Does it make technological and political sense that there are people alive today who could live as long as one thousand years?  Or are any such ideas just naive and irresponsible fantasy?

This book probably provides the most serious positive answer to these questions.  In places it comes across as a polemic, but the large majority of the book is a set of suggestive ideas.

There is no definite prescription in this book for how to “end aging”, but it contains many details of a program which, if given more research priority, could well come up with such a prescription, within a small number of decades.

Some of the technical suggestions can be criticised, but no doubt the program, if it gets properly underway, will generate additional ideas and workarounds.

As ideas go, this is about as big as it gets.

First, Break All the Rules: What the World’s Greatest Managers Do Differently – by Marcus Buckingham and Curt Coffman

Two things still stand out in my mind about this book, many years after I first read it:

  1. The sad but oh-so-often true remark that “people join companies but leave managers”, meaning that bad managers frequently cause good employees to want to leave the company;
  2. The famous list of the “G12” or “Gallup 12 questions” that can be asked of employees, on a regular basis, to give a reliable indication of the degree of engagement and enthusiasm of the employees.

The G12 questions don’t cover things like salary or stock options, but instead address “Do I know what’s expected of me at work?” and “Do I have all the equipment and materials I need to do my work right?”, etc.  In short, to be a great manager, you need to “break a few rules” of conventional wisdom, and focus instead on getting your employees to be able to give positive answers to more and more of these G12 questions.  And it’s all backed up by lots of Gallup research.  As such, this book is perhaps the single best guide on what managers should be doing to improve the working environment of their employees.

Good to Great: Why Some Companies Make the Leap… and Others Don’t – by Jim Collins

This book distills a number of very important principles that are likely to make a big difference to the long-term growth of a company.  Each chapter is a gem.

For example, I still vividly remember the principles of:

  • A “level 5” CEO, who combines “personal humility and professional will”;
  • Get the right people on the bus (and the wrong people off the bus);
  • The culture of realistic optimism: “confront the brutal facts”;
  • Find the hedgehog principle for your company – simplicity within three circles.

With the passage of time since the book was written, some of the companies described in it as “great” have suffered reverses of fortune.  However, this doesn’t diminish the value of the advice.

Heat: How to Stop the Planet from Burning – by George Monbiot

This was the book which helped move me from “concerned about climate change” to “deeply concerned about climate change”.  It combines passionate urgency with an incisive critical evaluation of numerous options.

The author demonstrates lots of fury but also lots of serious ideas.  It’s hard to remain unmoved while reading this.

How Markets Fail: The Logic of Economic Calamities – by John Cassidy

Free markets have been a tremendous force for progress.  However, they need oversight and regulation.  Lack of appreciation of this point is the fundamental cause of the Great Crunch that the world financial systems recently experienced.  That’s the essential message of this important book.

I call this book “important” because it contains a sweeping but compelling survey of a notion Cassidy dubs “Utopian economics”, before providing layer after layer of decisive critique of that notion.  As such, the book provides a very useful guide to the history of economic thinking, covering Adam Smith, Friedrich Hayek, Milton Friedman, John Maynard Keynes, Arthur Pigou, Hyman Minsky, and many, many others.

The key theme in the book is that markets do fail from time to time, potentially in disastrous ways, and that some element of government oversight and intervention is both critical and necessary, to avoid calamity.  This theme is hardly new, but many people resist it, and the book has the merit of marshalling the arguments more comprehensively than I have seen elsewhere.  See my review here.

Influencer: The Power to Change Anything – by Kerry Patterson et al

This book starts by noting that we are, in effect, too often resigned to a state of helplessness, as covered by the “acceptance clause” of the so-called “serenity prayer” of Reinhold Niebuhr:

God grant me the serenity
To accept the things I cannot change;
Courage to change the things I can;
And wisdom to know the difference

What we lack, the book says, is the skillset to be able to change more things.  It’s not a matter of exhorting people to “try harder”.  Nor is it a matter of becoming better at talking to people, to convince them of the need to change.  Instead, we need a better framework for how influence can be successful.

Part of the framework is to take the time to learn about the “handful of high-leverage behaviors” that, if changed, would have the biggest impact.  This is a matter of focusing – leaving out many possibilities in order to target behaviours with the greatest leverage.  Another part of the framework initially seems the opposite: it recommends that we prepare to use a large array of different influence methods (all with the same intended result).  These influence methods start by recognising the realities of human reasoning, and work with these realities, rather than seeking to drastically re-write them.

The framework describes six sources of influence, in a 2×3 matrix.  One set of three sources addresses motivation, and the other set of three addresses capability.  In each case, there are personal, social, and structural approaches (hence the 2×3).  The book has a separate chapter for each of these six sources.  Each chapter is full of good material.

As I worked through chapter after chapter, I kept thinking “Aha…” to myself.  The material is backed up by extensive academic research by change specialists such as Albert Bandura and Brian Wansink.  There are also numerous references to successful real-life influence programs, such as the eradication of guinea worm disease in sub-Saharan Africa, controlling AIDS in Thailand, and the work of Mimi Silbert of Delancey Street with “substance abusers, ex-convicts, homeless and others who have hit bottom”.

Leading Change – by John Kotter

This is the definitive account of why so many change initiatives in organisations go astray – and what we can do to avoid repeating the same mistakes.

As Kotter describes, the eight reasons why change initiatives fail are:

  1. Lack of a sufficient sense of urgency;
  2. Lack of an effective guiding coalition for the change (an aligned team with the ability to make things happen);
  3. Lack of a clear appealing vision of the outcome of the change (otherwise it may seem too vague, having too many unanswered questions);
  4. Lack of communication for buy-in, keeping the change in people’s mind (otherwise people will be distracted back to other issues);
  5. Lack of empowerment of the people who can implement the change (lack of skills, wrong organisational structure, wrong incentives, cumbersome bureaucracy);
  6. Lack of celebration of small early wins (failure to establish momentum);
  7. Lack of follow through (it may need wave after wave of change to stick);
  8. Lack of embedding the change at the cultural level (otherwise the next round of management changes can unravel the progress made).

See my review here for more details.

Leading Lean Software Development: Results Are not the Point – by Mary and Tom Poppendieck

The Poppendiecks have co-authored three pioneering books on Lean Software Development, describing the application of “lean” manufacturing thinking (pioneered at Toyota in Japan) to software development.  Each of the books has been full of practical insight, that often initially strikes the reader as counter-intuitive, before the bigger picture sinks in.  And the bigger picture is what’s important.  Here’s an example principle:

“The biggest cause of failure in software-intensive systems is not technical failure; it’s building the wrong thing” – Mary Poppendieck

Mary and Tom travel the world consulting and presenting on their ideas, and each new book benefits from two or three extra years of the ideas being reviewed, elaborated, and refined.  The one I’ve picked for this collection is the most recently published.

Made to Stick: Why Some Ideas Survive and Others Die – by Chip Heath and Dan Heath

This may be the best book on communications and presentations that I have ever read.

It’s full of compelling explanations about how to ensure that your messages are thoroughly memorable.  Messages should be:

  • Simple,
  • Unexpected,
  • Concrete,
  • Credible,
  • Emotional,
  • Stories.

Radical Evolution: The Promise and Peril of Enhancing Our Minds, Our Bodies – and What It Means to Be Human – by Joel Garreau

This is probably one of the best books written about transhumanist themes.  It does a fine job of introducing numerous personalities and ideas.

The book is arranged in three parts:

  • “Heaven” – mainly concentrating on the potential upside of the radical application of technology to humans
  • “Hell” – analysing the potential downside of radical new technologies
  • “Prevail” – mapping out a plausible (and exciting) “middle of the road” scenario.

Scaling Software Agility: Best Practices for Large Enterprises – by Dean Leffingwell

There’s been lots of talk about how to make Agile practices work in larger software development projects.  This book is the best account I’ve seen of how to make this work in practice.

The material is very clear, covering:

  • A summary of Agile principles, viewed with the benefit of the passage of time since they were first introduced;
  • A description of what happens to these principles, when they need to be applied in larger projects;
  • A series of additional methods, which can be introduced to support the application of agile principles in larger projects.

Many of the development practices inside Symbian Software Ltd were influenced by this book.

Schrödinger’s Rabbits: The Many Worlds of Quantum – by Colin Bruce

This book covers the same subject that I researched for my doctoral studies in the philosophy of science during the 1980s.  Reading it a few years back reinforced my views that:

  • The question of the interpretation of quantum mechanics remains controversial and difficult, being beset with problems, more than 80 years after the subject was introduced;
  • The so-called “many worlds” interpretation of quantum mechanics has (despite strong first impressions to the contrary) the best potential to solve these problems;
  • Nevertheless, the “many worlds” interpretation still faces many issues and challenges of its own.

One day I may return to this subject 🙂

For more details, see here.

The Age of Spiritual Machines: When Computers Exceed Human Intelligence – by Ray Kurzweil

This is an intellectual romp through the 21st century, decade by decade, looking at the likely evolution of the interactions between computers and humans.  There are some mind-boggling ideas (in both senses of the phrase “mind-boggling”, since what may well happen to the human mind in the future is, well, mind-boggling).

Kurzweil wrote this book several years before his later book, “The singularity is near”.  Of the two, I much prefer “The age of spiritual machines”.

The End of Faith: Religion, Terror, and the Future of Reason – by Sam Harris

Over the last few years, I’ve read a string of books by the so-called “new atheists”, and learned from each of them.

The audio recording of “Letting go of God” by Julia Sweeney is probably the most touching and personal of them all.  It’s both funny and moving.  However, in terms of seriousness of purpose, I pick the Sam Harris book as being particularly significant.

It’s not just a book with reasons to be intellectually distrustful of religious faith.  It’s a book about why dreadful actions arising from religious faith (such as the “terror” mentioned in the title) are likely to continue happening – and can be expected to become even worse, in an age where “weapons of mass destruction” abound.  The book makes a strong case that faith, itself, is the most dangerous element of modern life.

The First Immortal: A Novel Of The Future – by James Halperin

The writing in this book is sometimes a bit laboured, but the central ideas are extremely well worked out.

It looks at cryonics and revival – very low temperature preservation of the human body until such time in the future as the diseases that were killing the body can be cured. It looks at these themes from numerous angles, through the different characters in the book, and their changing attitudes.

Some of the ideas and episodes are very vivid indeed, and remain clearly in my mind now, quite a few years after I read the book.

I understand that the author conducted thorough research into the technology of cryonics, in order to make the account scientifically credible. The effort has paid off – this is a plausible (though mind-jarring) account.  It made me take cryonics much more seriously.

The Future of Management – by Gary Hamel

This is perhaps the best book I’ve read on innovation – and the best book I’ve read on desirable management culture.

I’ll cast my vote any day for the kind of pro-innovation, pro-enablement management culture Hamel describes.  It’s the approach that has great potential to motivate key employees.

It includes chapters on the remarkable management cultures at Whole Foods Market, W.L. Gore (makers of Gore-Tex etc), and a “small little upstart” called Google.

Here’s a quote from near the start of the book: “To thrive in an increasingly disruptive world, companies must become as strategically adaptable as they are operationally efficient”.

And here’s one from around 20% of the way in: “if you want to capture the economic high ground in the creative economy, you need employees who are more than acquiescent, attentive, and astute – they must also be zestful, zany, and zealous. So we must ask: what are the obstacles that stand in the way of achieving this state of organisational bliss?”

The rest of the book provides answers to this question.  It highlights the very large difference to a company’s success that can be made by the management culture that is in place (note: what matters is the management culture as it is actually practised, rather than what is espoused).

The Goal – by Eliyahu Goldratt

This is a novel written to illustrate the ideas in the “theory of constraints”.

As fiction, the story has its touching moments.  As an introduction to a set of powerful ideas on identifying and dealing with bottlenecks, it’s a huge wake-up call.  It shows that a large amount of effort in improving systems is actually wasted.

The Hacker Ethic – by Pekka Himanen

This book provides an illuminating contrast between the so-called “Protestant work ethic” and the emerging “Hacker ethic” which is increasingly widely followed nowadays.

If the former is characterised by the seven values

  • Money, Work, Optimality, Flexibility, Stability, Determinacy, and Result accountability,

the latter is characterised by these seven:

  • Passion, Freedom, Social worth, Openness, Activity, Caring, Creativity

This book provided my first big push towards the methods of open source development.

The Happiness Hypothesis: Finding Modern Truth in Ancient Wisdom – by Jonathan Haidt

The stated purpose of the book is to consider “ten great ideas” about morality and ethics, drawn from Eastern and Western religious and philosophical traditions, and to review these ideas in the light of the latest scientific findings about the human condition. Initially, I was sceptical about how useful such an exercise might be. But the book quickly led me to set aside my scepticism. The result is greater than the sum of the ten individual reviews, since the different ideas overlap and reinforce.

Haidt declares himself to be both an atheist and a liberal, but with a lot of sympathy for what both theists and conservatives try to hold dear. In my view, he does a grand job of bridging these tough divides.

Haidt seems deeply familiar with a wide range of traditional thinking systems, from both East and West.  He also shows himself to be well versed in many modern (including very recent) works on psychology, sociology, and evolutionary theory.  The synthesis is frequently remarkable.  I found myself re-thinking lots of my own worldview.

See here for my fuller review of this book.

The Knowing-Doing Gap: How Smart Companies Turn Knowledge into Action – by Jeffrey Pfeffer and Robert Sutton

This book does a demolition job on the “smart talk” culture that often prevails in companies which fail to act on the knowledge they already possess.

The book addresses the question:

“Why is it that, at the end of so many books and seminars, leaders report being enlightened and wiser, but not much happens in their organizations?”

As I reported in a previous posting, my takeaway from the book was the following set of five characteristics of companies that can successfully bridge this vicious “Knowing Doing Gap”:

  1. They have leaders with a profound hands-on knowledge of the work domain;
  2. They have a bias for plain language and simple concepts;
  3. They encourage solutions rather than inaction, by framing questions asking “how”, not just “why”;
  4. They have strong mechanisms that close the loop – ensuring that actions are completed (rather than being forgotten, or excuses being accepted);
  5. They are not afraid to “learn by doing”, and thereby avoid analysis paralysis.

The Power of Full Engagement: Managing Energy, Not Time, Is the Key to High Performance and Personal Renewal – by Jim Loehr and Tony Schwartz

This book came as a surprise to me, but its message was (on reflection) undeniable.  Namely: rather than worrying so much about time management, it’s energy management that deserves our main attention.

The book contains some heart-stopping stories, which should resonate strongly with many “busy people”.

The Search: How Google and Its Rivals Rewrote the Rules of Business and Transformed Our Culture – by John Battelle

The book gave me not just a deeper respect for many aspects of Google, but also an appreciation of how online search is going to have incredibly far-reaching implications.  Search is truly a large topic.

The book contains numerous fascinating anecdotes about the founders of Google, as well as other key Silicon Valley figures.  But the fullest value in the book is in the analysis it provides.

The Slow Pace of Fast Change: Bringing Innovations to Market in a Connected World – by Bhaskar Chakravorti

This is the book that introduced the whimsically-named but important notion of “demi Moore’s Law” – the idea that products which are parts of inter-connected networks often progress at a pace roughly half of what would be expected, based solely on hardware considerations – such as the rate of improvement described by [Gordon] Moore’s Law.

The reason for the discrepancy is that a product can only be accepted (and then embellished) once a series of related changes are made in associated products.  For example, improvements to mobile handsets were historically linked to improvements in mobile networks, and/or improvements in mobile applications.

However, as the book explains, the impact of these inter-connections isn’t always to slow change down.  Once the previous network ecosystem has been dismantled and a new one established in its place (which can take a long time), product development within the new ecosystem can often proceed even faster than would be predicted by Moore’s Law alone.
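The arithmetic behind the “demi Moore” idea is easy to sketch.  The toy calculation below (my own illustration with round numbers, not taken from the book) compares capability growth at the full Moore’s Law pace of doubling every two years with a halved pace:

```python
def capability_multiple(years, doubling_period=2.0):
    """Growth under a Moore's-Law-style exponential: 2 ** (years / doubling_period)."""
    return 2 ** (years / doubling_period)

for t in (2, 4, 8, 16):
    full = capability_multiple(t)        # stand-alone hardware: doubles every 2 years
    demi = capability_multiple(t, 4.0)   # networked product: half the pace of progress
    print(f"after {t:2d} years: full pace x{full:7.1f}, 'demi Moore' pace x{demi:5.1f}")
```

The gap compounds: after 16 years the full pace yields a 256-fold improvement, while the halved pace yields only 16-fold.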

The Starfish and the Spider: The Unstoppable Power of Leaderless Organizations – by Ori Brafman and Rod Beckstrom

This book is a sober but enlightening account of the issues of centralisation (“spider”) vs. decentralisation (“starfish”), as well as suitable mixtures of the two.

The book also shows why there’s a great deal at stake behind this contrast: issues of commercial revenues, the rise and fall of businesses, and the rise and fall of change movements within society – where the change movements include such humdingers as Slave Emancipation, Sex Equality, Animal Liberation, and Al Qaeda.

There are many stories running through the book, chosen both from history and from contemporary events.  The stories are frequently picked up again from chapter to chapter, with key new insights being drawn out.  Some of the stories are familiar and others are not.  But the starfish/spider framework casts new light on them all.

Each chapter brought an important additional point to the analysis.  For example: factors allowing de-centralised organisations to flourish; how centralised organisations can go about combatting de-centralised opponents; issues about combining aspects of both approaches.  (The book argues that smart de-centralisation moves by both GE and Toyota are responsible for significant commercial successes in these companies.)

The book also spoke personally to me.  As it explains, starfish organisations depend upon so-called “catalyst” figures, who lack formal authority, and who are prepared to move into the background without clinging to power.  There’s a big difference between catalysts and CEOs.  Think “Mary Poppins” rather than “Maria from The Sound of Music”.  That gave me a handy new way of thinking about my own role in organisations.  (I’m like Mary Poppins, rather than Maria!)

The Success of Open Source – by Steven Weber

This book provides a thorough and wide-ranging analysis of the circumstances in which open source software methods succeed.

It goes far beyond technical considerations, and also looks at motivational, economic, and governance issues.  It shows that the success of open source is no “accident” or “mystery”.

The Upside of Down: Catastrophe, Creativity, and the Renewal of Civilization – by Thomas Homer-Dixon

A sweeping and convincing summary of the very pressing amalgam of deep problems facing the future of civilisation.

The basic themes are that:

  • The major problems facing us are intertwined, and the interconnections make things worse;
  • Things are unlikely to become better before they first become worse (and shake us into action in the process).

The Wisdom of Crowds – by James Surowiecki

On many occasions, crowds are bad for rationality.  A kind of lowest-common-denominator outcome results.  This is the “madness of crowds”.

But on other occasions, a crowd can end up being, collectively, significantly smarter than even the smartest individuals inside that crowd.

What’s the difference between the two sets of occasions?

This book explains, and in so doing, provides the intellectual grounding for lots of contemporary ideas about the benefits of openness and community.
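The statistical core of the “smart crowd” case – that independent, diverse errors tend to cancel when aggregated – can be shown with a toy simulation (my own sketch; the jellybean-style numbers are invented purely for illustration):

```python
import random

def crowd_estimate(truth, n_people, individual_sd):
    """Each person guesses independently with noisy error; the crowd answers with the mean."""
    guesses = [truth + random.gauss(0, individual_sd) for _ in range(n_people)]
    return sum(guesses) / len(guesses)

random.seed(2010)
truth = 1000.0   # e.g. the number of jellybeans in a jar
trials = [abs(crowd_estimate(truth, n_people=400, individual_sd=250) - truth)
          for _ in range(200)]           # repeat the experiment to average out luck
avg_crowd_error = sum(trials) / len(trials)

print("typical individual error: ~250")
print(f"typical crowd error:      ~{avg_crowd_error:.0f}")
```

The crowd’s average error comes out more than an order of magnitude smaller than any typical individual’s – but only because the errors are independent.  Surowiecki’s point is that when independence and diversity break down, so does the crowd’s wisdom.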

Will & Vision: How Latecomers Grow to Dominate Markets – by Gerard Tellis and Peter Golder

A lot of thinking about business strategy is dominated by the idea of “first mover’s advantage”, which holds that the first company into a product category has a very significant advantage over challengers.

This book by Tellis and Golder contains a devastating refutation of that idea – both from an empirical (example-based) point of view and a theoretical analysis.

As such, it gives plenty of encouragement to potential new market entrants, and poses grounds for anxiety for current market leaders.

World on Fire: How Exporting Free Market Democracy Breeds Ethnic Hatred and Global Instability – by Amy Chua

This book is a shocking but compelling run-through of many examples from around the world where the application of free market ideas, in parallel with the introduction of democracy, often backfires: commercially successful minorities become the victims of fierce discrimination and violent reprisals.

As such, the book is a necessary antidote to any overly naive idea that successful methods from mature market democracies can simply be transplanted, overnight (as it were), to parts of the world with very different background cultures.

9 January 2010

Progress with AI

Filed under: AGI, books, m2020, Moore's Law, UKH+ — David Wood @ 9:47 am

Not everyone shares my view that AI is going to become a more and more important field during the coming decade.

I’ve received a wide mix of feedback in response to that view, and to my comments made in other discussion forums about the growth of AI.

Below, I list some of the questions people have raised – along with my answers.

Note: my answers below are informed by (among other sources) the 2007 book “Beyond AI: creating the conscience of the machine“, by J Storrs Hall, that I’ve just finished reading.

Q1: Doesn’t significant progress with AI presuppose the indefinite continuation of Moore’s Law, which is suspect?

There are three parts to my answer.

First, Moore’s Law for exponential improvements in individual hardware capability seems likely to hold for at least another five years, and there are many ideas for new semiconductor innovations that would extend the trend considerably further.  There’s a good graph of improvements in supercomputer power stretching back to 1960 on Shane Legg’s website, along with associated discussion.

Dylan McGrath, writing in EE Times in June 2009, reported views from iSuppli Corp that “Equipment cost [will] hinder Moore’s Law in 2014“:

Moore’s Law will cease to drive semiconductor manufacturing after 2014, when the high cost of chip manufacturing equipment will make it economically unfeasible to do volume production of devices with feature sizes smaller than 18nm, according to iSuppli Corp.

While further advances in shrinking process geometries can be achieved after the 20- to 18nm nodes, the rising cost of chip making equipment will relegate Moore’s Law to the laboratory and alter the fundamental economics of the semiconductor industry, iSuppli predicted.

“At those nodes, the industry will start getting to the point where semiconductor manufacturing tools are too expensive to depreciate with volume production, i.e., their costs will be so high, that the value of their lifetime productivity can never justify it,” said Len Jelinek, director and chief analyst, semiconductor manufacturing, iSuppli, in a statement.

In other words, it remains technologically possible that semiconductors can become exponentially denser even after 2014, but it is unclear whether sufficient economic incentives will exist for these additional improvements.
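A back-of-envelope calculation shows why the iSuppli timeline is at least plausible (the starting point of a 45nm node in 2009, the ~0.7x linear shrink per generation, and the two-year cadence are my own round numbers, not iSuppli’s):

```python
# Each process generation shrinks feature size by ~0.7x (roughly halving
# transistor area), with a new generation arriving about every two years.
node_nm = 45.0        # assumed leading-edge node around 2009
year = 2009
SHRINK = 0.7
YEARS_PER_GENERATION = 2

while node_nm >= 18.0:
    print(f"{year}: ~{node_nm:.0f}nm node")
    node_nm *= SHRINK
    year += YEARS_PER_GENERATION

print(f"first sub-18nm node (~{node_nm:.0f}nm): around {year}")
```

On these assumptions, the 20-to-18nm boundary is crossed around the middle of the decade – consistent with the “after 2014” claim, even before the economics are considered.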

As The Register reported the same story:

Basically, just because chip makers can keep adding cores, it doesn’t mean that the application software and the end user workloads that run on this iron will be able to take advantage of these cores (and their varied counts of processor threads) because of the difficulty of parallelising software.

iSuppli is not talking about these problems, at least not today. But what the analysts at the chip watcher are pondering is the cost of each successive chip-making technology and the desire of chip makers not to go broke just to prove Moore’s Law right.

“The usable limit for semiconductor process technology will be reached when chip process geometries shrink to be smaller than 20 nanometers (nm), to 18nm nodes,” explains Len Jelinek…

At that point, says Jelinek, Moore’s Law becomes academic, and chip makers are going to extend the time they keep their process technologies in the field so they can recoup their substantial investments in process research and semiconductor manufacturing equipment.

However, other analysts took a dim view of this pessimistic forecast, and maintained that Moore’s Law will be longer-lived.  For example, In-Stat’s chief technology strategist, Jim McGregor, offered the following rebuttal:

…every new technology goes over some road-bumps, especially involving start-up costs, but these tend to drop rapidly once moved into regular production. “EUV [extreme ultraviolet] will likely be the next significant technology to go through this cycle,” McGregor told us.

McGregor did concede that the lifecycle of certain technologies is being extended by firms who are in some cases choosing not to migrate to every new process node, but he maintained new process tech is still the key driver of small design geometries, including memory density, logic density, power consumption, etc.

“Moore’s Law also improves the cost per device and per wafer,” added McGregor, who also noted that “the industry has and will continue to go through changes because of some of the cost issues.” These include the formation of process development alliances, like IBM’s alliances, the transition to foundry manufacturing, and design for manufacturing techniques like computational lithography.

“Many people have predicted the end of Moore’s Law and they have all been wrong,” sighed McGregor. The same apparently goes for those foolhardy enough to attempt to predict changes in the dynamics of the semiconductor industry.

“There have always been challenges to the semiconductor technology roadmap, but for every obstacle, the industry has developed a solution and that will continue as long as we are talking about the hundreds of billion of dollars in revenue that are generated every year,” he concluded.

In other words, it is likely that, given sufficient economic motivation, individual hardware performance will continue improving, at a significant rate (if, perhaps, not exponentially) throughout the coming decade.

Second, it remains an open question as to how much hardware would be needed, to host an Artificial (Machine) Intelligence (“AI”) that has either human-level or hyperhuman reasoning power.

Marvin Minsky, one of the doyens of AI research, has been quoted as believing that computers commonly available in universities and industry already have sufficient power to manifest human-level AI – if only we could work out how to program them in the right way.

J. Storrs Hall provides an explanation:

Let me, somewhat presumptuously, attempt to explain Minsky’s intuition by an analogy: a bird is our natural example of the possibility of heavier-than-air flight. Birds are immensely complex: muscles, bones, feathers, nervous systems. But we can build working airplanes with tremendously fewer moving parts. Similarly, the brain can be greatly simplified, still leaving an engine capable of general conscious thought.

Personally, I’m a big fan of the view that the right algorithm can make a tremendous difference to a computational task.  As I noted in a 2008 blog post:

Arguably the biggest unknown in the technology involved in superhuman intelligence is software. Merely improving the hardware doesn’t necessarily mean that software performance increases to match. As has been remarked, “software gets slower, more rapidly than hardware gets faster”. (This is sometimes called “Wirth’s Law”.) If your algorithms scale badly, fixing the hardware will just delay the point where your algorithms fail.

So it’s not just the hardware that matters – it’s how that hardware is organised. After all, the brains of Neanderthals were larger than those of humans, but are thought to have been wired up differently to ours. Brain size itself doesn’t necessarily imply intelligence.

But just because software is an unknown, it doesn’t mean that hardware-driven predictions of the onset of the singularity are bound to be over-optimistic. It’s also possible they could be over-pessimistic. It’s even possible that, with the right breakthroughs in software, superhuman intelligence could be supported by present-day hardware. AI researcher Eliezer Yudkowsky of the Singularity Institute reports the result of an interesting calculation made by Geordie Rose, the CTO of D-Wave Systems, concerning software versus hardware progress:

“Suppose you want to factor a 75-digit number. Would you rather have a 2007 supercomputer, IBM’s Blue Gene/L, running an algorithm from 1977, or a 1977 computer, the Apple II, running a 2007 algorithm? Geordie Rose calculated that Blue Gene/L with 1977’s algorithm would take ten years, and an Apple II with 2007’s algorithm would take three years…

“[For exploring new AI breakthroughs] I will say that on anything except a very easy AI problem, I would much rather have modern theory and an Apple II than a 1970’s theory and a Blue Gene.”

Here’s a related example.  When we think of powerful chess-playing computers, we sometimes assume that massive hardware resources are required, such as a supercomputer provides.  However, as long ago as 1985, Psion, the UK-based company I used to work for (though not at that time), produced a chess program that many people regarded, at the time (and subsequently), as playing a very impressive quality of chess.  See here for some discussion and some reviews.  Taking things even further, this article from 1983 describes an implementation of chess, for the Sinclair ZX-81, in only 672 bytes – which is hard to believe!  (Thanks to Mark Jacobs for this link.)
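The algorithm-versus-hardware point can be illustrated in miniature with factoring itself.  The sketch below is my own toy illustration – Geordie Rose’s comparison involved serious 1977-era and 2007-era factoring algorithms, not these – contrasting brute-force trial division (roughly √n steps) with Pollard’s rho method from 1975 (typically around n¼ steps).  For large numbers, the smarter algorithm wins by a margin no hardware upgrade can match:

```python
import math
import random

def trial_division(n):
    """Factor n by testing every candidate divisor in turn: about sqrt(n) steps."""
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

def pollard_rho(n):
    """Find one non-trivial factor of composite n: typically about n**0.25 steps."""
    if n % 2 == 0:
        return 2
    while True:
        x = random.randrange(2, n)
        y, c, d = x, random.randrange(1, n), 1
        while d == 1:
            x = (x * x + c) % n          # tortoise: one step of x -> x^2 + c mod n
            y = (y * y + c) % n          # hare: two steps
            y = (y * y + c) % n
            d = math.gcd(abs(x - y), n)
        if d != n:                       # d == n means this cycle failed; retry
            return d

n = 8_051  # = 83 * 97
print(trial_division(n))        # [83, 97]
f = pollard_rho(n)
print(sorted([f, n // f]))      # [83, 97]
```

The asymptotic gap, not the constant factors, is what makes a thirty-year-old machine with a modern algorithm competitive with a supercomputer running an old one.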

Third, building on this point, progress in AI can be described as a combination of multiple factors:

  1. Individual hardware power
  2. Compound hardware power (when many different computers are linked together, as on a network)
  3. Software algorithms
  4. Number of developers and researchers who are applying themselves to the problem
  5. The ability to take advantage of previous results (“to stand on the shoulders of giants”).

Even if the pace slows for improvements in the hardware of individual computers, it’s still very feasible for improvements in AI to take place, on account of the other factors.

Q2: Hasn’t rapid progress with AI often been foretold before, but with disappointing outcomes each time?

It’s true that some of the initial forecasts of the early AI research community, from the 1950’s, have turned out to be significantly over-optimistic.

For example, in his famous 1950 paper “Computing machinery and intelligence” – which set out the idea of the test later known as the “Turing test” – Alan Turing made the following prediction:

I believe that in about fifty years’ time it will be possible, to programme computers… to make them play the imitation game so well that an average interrogator will not have more than 70 per cent chance of making the right identification [between a computer answering, or a human answering] after five minutes of questioning.

Since the publication of that paper, some sixty years have now passed, and computers are still far from being able to consistently provide an interface comparable (in richness, subtlety, and common sense) to that of a human.

For a markedly more optimistic prediction, consider the proposal for the 1956 Dartmouth Summer Research Conference on Artificial Intelligence which is now seen, in retrospect, as the seminal event for AI as a field.  Attendees at the conference included Marvin Minsky, John McCarthy, Ray Solomonoff, and Claude Shannon.  The group came together with the following vision:

We propose that a 2 month, 10 man study of artificial intelligence be carried out during the summer of 1956 at Dartmouth College in Hanover, New Hampshire. The study is to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it. An attempt will be made to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves. We think that a significant advance can be made in one or more of these problems if a carefully selected group of scientists work on it together for a summer.

The question for us today is: what reason is there to expect rapid progress with AI in (say) the next ten years, given that similar expectations in the past failed – and, indeed, the whole field eventually fell into what is known as an “AI winter“?

J Storrs Hall has some good answers to this question.  They include the following:

First, AI researchers in the 1950’s and 60’s laboured under a grossly over-simplified view of the complexity of the human mind.  This can be seen, for example, from another quote from Turing’s 1950 paper:

Instead of trying to produce a programme to simulate the adult mind, why not rather try to produce one which simulates the child’s? If this were then subjected to an appropriate course of education one would obtain the adult brain. Presumably the child brain is something like a notebook as one buys it from the stationer’s. Rather little mechanism, and lots of blank sheets. (Mechanism and writing are from our point of view almost synonymous.) Our hope is that there is so little mechanism in the child brain that something like it can be easily programmed.

Progress in brain sciences in the intervening years has highlighted very significant innate structure in the child brain.  A child brain is far from being a blank notebook.

Second, early researchers were swept along on a wave of optimism from some apparent early successes.  For example, consider the “ELIZA” application that mimicked the responses of a certain school of psychotherapist, by following a series of simple pattern-matching rules.  Lay people who interacted with this program frequently reported positive experiences, and assumed that the computer really was understanding their issues.  Although the AI researchers knew better, at least some of them may have believed that this effect showed that more significant results were just around the corner.
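ELIZA’s mechanism really was as simple as that description suggests.  Here is a minimal sketch in the same spirit – a handful of invented rules, far cruder than Weizenbaum’s actual 1966 script, but enough to show why naive users could be impressed by what is mere pattern-matching:

```python
import re

# A few hard-coded pattern/response rules, in the style of ELIZA's "script".
# (These particular rules are my own invention, for illustration only.)
RULES = [
    (re.compile(r"\bI am (.*)", re.IGNORECASE), "How long have you been {0}?"),
    (re.compile(r"\bI feel (.*)", re.IGNORECASE), "Why do you feel {0}?"),
    (re.compile(r"\bmy (\w+)", re.IGNORECASE), "Tell me more about your {0}."),
]
DEFAULT = "Please go on."

def respond(sentence):
    """Return the response from the first matching rule, else a stock phrase."""
    for pattern, template in RULES:
        match = pattern.search(sentence)
        if match:
            return template.format(*match.groups())
    return DEFAULT

print(respond("I feel anxious"))   # Why do you feel anxious?
print(respond("Nice weather"))     # Please go on.
```

There is no model of meaning anywhere in this loop – which is precisely why the gap between the apparent success and genuine understanding was so easy to misjudge.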

Third, the willingness of funding authorities to continue supporting general AI research became stretched, due to the delays in producing stronger results, and due to other options for how those research funds could be allocated.  For example, the Lighthill Report (produced in the UK in 1973 by Professor James Lighthill – whose lectures in Applied Mathematics at Cambridge I enjoyed many years later) gave a damning assessment:

The report criticized the utter failure of AI to achieve its “grandiose objectives.” It concluded that nothing being done in AI couldn’t be done in other sciences. It specifically mentioned the problem of “combinatorial explosion” or “intractability”, which implied that many of AI’s most successful algorithms would grind to a halt on real world problems and were only suitable for solving “toy” versions…

The report led to the dismantling of AI research in Britain. AI research continued in only a few top universities (Edinburgh, Essex and Sussex). This “created a bow-wave effect that led to funding cuts across Europe”

There were similar changes in funding climate in the US, with changes of opinion within DARPA.

Shortly afterwards, the growth of the PC and general IT market provided attractive alternative career targets for many of the bright researchers who might previously have considered devoting themselves to AI research.

To summarise, the field suffered an understandable backlash against its over-inflated early optimism and exaggerated hype.

Nevertheless, there are grounds for believing that considerable progress has taken place over the years.  The middle chapters of the book by J Storrs Hall provide the evidence.  The Wikipedia article on “AI winter” covers (much more briefly) some of the same material:

In the late ’90s and early 21st century, AI technology became widely used as elements of larger systems, but the field is rarely credited for these successes. Nick Bostrom explains “A lot of cutting edge AI has filtered into general applications, often without being called AI because once something becomes useful enough and common enough it’s not labeled AI anymore.” Rodney Brooks adds “there’s this stupid myth out there that AI has failed, but AI is around you every second of the day.”

Technologies developed by AI researchers have achieved commercial success in a number of domains, such as machine translation, data mining, industrial robotics, logistics, speech recognition, banking software, medical diagnosis and Google’s search engine…

Many of these domains represent aspects of “narrow” AI rather than “General” AI (sometimes called “AGI”).  However, they can all contribute to overall progress, with results in one field being available for use and recombination in other fields.  That’s an example of point 5 in my previous list of the different factors affecting progress in AI:

  1. Individual hardware power
  2. Compound hardware power (when many different computers are linked together, as on a network)
  3. Software algorithms
  4. Number of developers and researchers who are applying themselves to the problem
  5. The ability to take advantage of previous results (“to stand on the shoulders of giants”).

On that note, let’s turn to the fourth factor in that list.

Q3: Isn’t AI now seen as a relatively uninteresting field, with few incentives for people to enter it?

The question is: what’s going to cause bright researchers to devote sufficient time and energy to progressing AI – given that there are so many other interesting and rewarding fields of study?

Part of the answer is to point out that the potential number of people working in this field is, today, larger than ever before – simply due to the rapid increase in the number of IT-literate graduates around the world.  Globally, there are greater numbers of science and engineering graduates from universities (including in China and India) than ever before.

Second, here are some particularly pressing challenges and commercial opportunities, which make it likely that further research will actually take place on AI:

  • The “arms race” between systems that screen out bots and spam (including the parts of forms that essentially say, “prove you are a human, not a bot”) and ever-cleverer evasion systems;
  • The need for games to provide ever more realistic “AI” features for the virtual characters in these games (games players and games writers unabashedly talk about the “AI” elements in these games);
  • The opportunity for social networking sites to provide increasingly realistic virtual companions for users to interact with (including immersive social networking sites like “Second Life”);
  • The constant need to improve the user experience of interacting with complex software; arguably the complex UI is the single biggest problem area, today, facing many mobile applications;
  • The constant need to improve the interface to large search databases, so that users can more quickly find material.

Since there is big money to be made from progressing solutions in each of these areas, we can assume that companies will be making some significant investments in the associated technology.

There’s also the prospect of a “tipping point” once some initial results demonstrate the breakthrough nature of some aspects of this field.  As J Storrs Hall puts it (in the “When” chapter of his book):

Once a baby [artificial] brain does advance far enough that it has clearly surpassed the bootstrap fallacy point… it might affect AI like the Wright brothers’ [1908] Paris demonstrations of their flying machines did a century ago.  After ignoring their successful first flight for years, the scientific community finally acknowledged it.  Aviation went from a screwball hobby to the rage of the age and kept that cachet for decades.  In particular, the amount of development took off enormously.  If we can expect a faint echo of that from AI, the early, primitive general learning systems will focus research considerably and will attract a lot of new resources.

Not only are there greater numbers of people potentially working on AI now, than ever before; they each have much more powerful hardware resources available to them.  Experiments with novel algorithms that previously would have tied up expensive and scarce supercomputers can nowadays be done on inexpensive hardware that is widely available.  (And once interesting results are demonstrated on low-powered hardware, there will be increased priority of access for variants of these same ideas to be run on today’s supercomputers.)

What’s more, the feedback mechanisms of general internet connectivity (sharing of results and ideas) and open source computing (sharing of algorithms and other source code) mean that each such researcher can draw upon greater resources than before, and participate in stronger collaborative projects.  For example, people can choose to participate in the “OpenCog” open source AI project.

Appendix: Further comments on the book “Beyond AI”

As well as making a case that progress in AI has been significant, another of the main themes of J Storrs Hall’s book “Beyond AI: Creating the conscience of the machine” is the question of whether hyperhuman AIs would be more moral than humans as well as more intelligent.

The conclusion of his argument is, yes, these new brains will probably have a higher quality of ethical behaviour than humans have generally exhibited.  The final third of his book covers that topic, in a generally convincing way: he has a compelling analysis of topics such as free-will, self-awareness, conscious introspection, and the role of ethical frameworks to avoid destructive aspects of free-riders.  However, critically, it all depends on how these great brains are set up with regard to core purpose, and there are no easy answers.

Roko Mijic will be addressing this same topic in the UKH+ meeting “The Friendly AI Problem: how can we ensure that superintelligent AI doesn’t terminate us?” that is being held on Saturday 23rd January.  (If you use Facebook, you can RSVP here to indicate whether you’re coming.  NB it’s entirely optional to RSVP.)

7 January 2010

Mobiles manifesting AI

Filed under: AGI, Apple, futurist, intelligence, m2020, vision — David Wood @ 12:15 am

If you get lists from 37 different mobile industry analysts of “five game-changing mobile trends for the next decade“, how many overlaps will there be?  And will the most important ideas be found in the “bell” of the aggregated curve of predictions, or instead in the tails of the curve?

Of the 37 people who took part in the “m2020” exercise conducted by Rudy De Waele, I think I was the only person to mention either of the terms “AI” (Artificial Intelligence) or “PDA” (Personal Digital Assistant), as in the first of my five predictions for the 2010’s:

  • Mobiles manifesting AI – fulfilling, at last, the vision of “personal digital assistants”

However, there were some close matches:

  • Rich Wong predicted “Smart Agents 2.0 (thank you Patty Maes) become real; the ability to deduce/impute context from blend of usage and location data”;
  • Marshall Kirkpatrick predicted “Mobile content recommendation”;
  • Carlo Longino predicted “The mobile phone will evolve into an enabler device, carrying users’ digital identities, preferences and possessions around with them”;
  • Steve O’Hear predicted “People will share more and more personal information. Both explicit e.g. photo and video uploads or status updates, and implicit data. Location sharing via GPS (in the background) is one current example of implicit information that can be shared, but others include various sensory data captured automatically via the mobile phone e.g. weather, traffic and air quality conditions, health and fitness-related data, spending habits etc. Some of this information will be shared privately and one-to-one, some anonymously and in aggregate, and some increasingly made public or shared with a user’s wider social graph. Companies will provide incentives, both at the service level or financially, in exchange for users sharing various personal data”;
  • Robert Rice predicted “Artificial Life + Intelligent Agents (holographic personalities)”.

Of course, these predictions cover a spread of different ideas.  Here’s what I had in mind for mine:

  • Our mobile electronic companions will know more and more about us, and will be able to put that information to good use to assist us better;
  • For example, these companion devices will be able to make good recommendations (e.g. mobile content, or activities) for us, suggest corrections and improvements to what we are trying to do, and generally make us smarter all-round.

The idea is similar to what former CEO of Apple, John Sculley, often talked about, during his tenure with Apple.  From a history review article about the Newton PDA:

John Sculley, Apple’s CEO, had toyed with the idea of creating a Macintosh-killer in 1986. He commissioned two high budget video mockups of a product he called Knowledge Navigator. Knowledge Navigator was going to be a tablet the size of an opened magazine, and it would have very sophisticated artificial intelligence. The machine would anticipate your needs and act on them…

Sculley was enamored with Newton, especially Newton Intelligence, which allowed the software to anticipate the behavior of the user and act on those assumptions. For example, Newton would filter an AppleLink email, hyperlink all of the names to the address book, search the email for dates and times, and ask the user if it should schedule an event.

As we now know, the Apple Newton fell seriously short of expectations.  The performance of “intelligent assistance” became something of a joke.  However, there’s nothing wrong with the concept itself.  It just turned out to be a lot harder to implement than originally imagined.  The passage of time is bringing us closer to actually useful systems.

Many of the interfaces on desktop computers already show an intelligent understanding of what the user may be trying to accomplish:

  • Search bars frequently ask, “Did you mean to search for… instead of…?” when I misspell a search clue;
  • I’ve almost stopped browsing through my list of URL bookmarks; I just type a few characters into the URL bar and the web-browser lists websites it thinks I might be trying to find – including some from my bookmarks, some pages I visit often, and some pages I’ve visited recently;
  • It’s the same for finding a book on Amazon.com – the list of “incrementally matching books” can be very useful, even after only typing part of a book’s title;
  • And it’s the same using the Google search bar – the list of “suggested search phrases” contains, surprisingly often, something I want to click on;
  • The set of items shown in “context-sensitive menus” often seems a much smarter fit to my needs, nowadays, than it did when the concept was first introduced.
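The incremental matching in these interfaces can be surprisingly simple at its core.  Here is a toy sketch (with made-up history data, and a much cruder ranking than any real browser uses) of the URL-bar behaviour described above: candidate pages are matched against the typed fragment, and the most-visited matches are offered first:

```python
# A toy "smart URL bar": a page matches if any dot-separated part of its
# address starts with the typed fragment; ties are broken by visit count.
def suggest(fragment, history, limit=3):
    fragment = fragment.lower()
    matches = [
        (visits, url)
        for url, visits in history.items()
        if any(part.startswith(fragment) for part in url.lower().split("."))
    ]
    # Most-visited matches first
    return [url for visits, url in sorted(matches, reverse=True)[:limit]]

# Hypothetical browsing history: address -> number of visits
history = {"news.bbc.co.uk": 40, "dw2blog.com": 25, "slideshare.net": 5}
print(suggest("dw", history))   # ['dw2blog.com']
```

Real implementations add frecency weighting, fuzzy matching, and learning from which suggestions the user actually picks – which is where the genuinely “intelligent” behaviour starts to emerge.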

On mobile, search is frequently further improved by subsetting results depending on location.  As another example, typing a few characters into the home screen of the Nokia E72 smartphone results in a list of possible actions for people whose contact details match what’s been typed.

Improving the user experience with increasingly complex mobile devices, therefore, will depend not just on clearer graphical interfaces (though that will help too), but on powerful search engines that are able to draw upon contextual information about the user and his/her purpose.

Over time, it’s likely that our mobile devices will be constantly carrying out background processing of clues, making sense of visual and audio data from the environment – including processing the stream of nearby spoken conversation.  With the right algorithms, and with powerful hardware capabilities – and provided issues of security and privacy are handled in a satisfactory way – our devices will fulfill more and more of the vision of being a “personal digital assistant”.

That’s part of what I mean when I describe the 2010’s as “the decade of nanotechnology and AI”.

6 January 2010

Mobile trends for the next decade

Filed under: futurist, m2020, smartphones, vision — David Wood @ 5:04 pm

A few days ago, I received an interesting invitation from mobile strategist and innovator Rudy De Waele:

It’s the end of the decade and for many of us it has been a very actively ‘mobile’ decade; a lot of the efforts and projects of our peers have become real and successful during this decade.

As for the start of a new decade, I’ve had this idea of asking some of the people I met during the last decade  to write down their five game-changing mobile trends for the next decade.

The format is to list your 5 trends for the next decade, in words, a sentence or a paragraph, no links.

It was a great question – especially the requirement to stick to just five trends.  Here’s the set which, after some thought, I emailed back to Rudy:

  1. Mobiles manifesting AI – fulfilling, at last, the vision of “personal digital assistants”.
  2. Powerful, easily wearable head-mounted accessories: audio, visual, and more.
  3. Mobiles as gateways into vivid virtual reality – present-day AR is just the beginning.
  4. Mobiles monitoring personal health – the second brains of our personal networks.
  5. Mobiles as universal remote controls for life – a conductor’s baton as much as a viewing portal.

No fewer than 37 different people from throughout the mobile and IT industries contributed answers.  The entire set of answers is now available for viewing on Rudy’s m-trends.org blog and is also posted onto slideshare.net, from where you can download a PDF version.

Each of the 37 sets of answers has at least one item (usually more!) that’s a good conversation starter.  The ongoing “#m2020” dialog that Rudy has started is likely to cast a long shadow.

Some of the predictions are very encouraging – like the set from Katrin Verclas covering mobiles in social development, transformation of politics (for example, in Africa), mobile payments, mobile healthcare, and mobile environmental monitoring.  Other sets of predictions foresee difficulties and backlashes as well as progress.  Some of the destruction foreseen could be counted as “creative destruction”, as in the prediction by Alan Moore:

The communications revolution accelerates, destroying businesses that refuse to think the unthinkable.

The predictions include many “first order effects” (technologies or products that people already foresee and desire, and which are already under development), but also several interesting comments on what Tom Hume calls “second order effects“.  Tom comments:

No-one predicted the loosening of time and space that Mimi Ito has noted. Similarly, what happens to our social arrangements when every photo can be face-recognised, geolocated and individuals tracked? What happens to shops when every price can be compared? What happens to conversation when it’s all recorded, or any fact is a 5-second voice-search away from being checked?

The full effects of ever-wider usage of mobile technology are, indeed, hard to predict – especially when we bear in mind the following forecast from Carlos Domingo:

Ubiquity of mobile broadband will lead to an explosion of connected devices (à la Kindle, not just phones) and M2M services (machine to machine services, without a human behind the device). In 10 years, there will be more devices/machines connected to the mobile network than humans.

In similar vein, Nicolas Nova predicts:

Non-humans (objects, animals, places) will generate more data than humans.

Mobile handsets will very likely look quite different, at the end of the decade, than they do at the beginning.  As Marek Pawlowski forecasts:

Keyboard dimensions and screen size cease to be the primary limiting factors in handset design as new input and display technologies free designers to radically change the form factor of personal communication devices.

I’ll end by sharing one of the predictions from Jonathan MacDonald, which seems to me particularly compelling:

Convergence of physical, augmented and virtual reality: augmented and virtual reality will become an increasingly standard method for search, discovery, gaming, eyesight, healthcare, retail, entertainment and most other experiences in life. Location and other contextual functions will grow so our 2D mobile experiences become 3D and ‘real’. To such an extent that the prefixes ‘augmented’ and ‘virtual’ will eventually become redundant.

The items I’ve picked out above are just scratching the surface.  There’s much, much more to read and ponder in the entire slideshow – click over to Rudy’s blog to explore further!


2 January 2010

Vital for futurists: hacking the earth

Filed under: books, climate change, futurist, geoengineering — David Wood @ 1:16 am

Here’s a tip, for anyone seriously interested in the big issues that will dominate discussion in the next 5-10 years.  You should become familiar (if you’re not already) with the work of Jamais Cascio.  Jamais is someone who consistently has deep, interesting, and challenging things to say about the large changes that are likely to sweep over the planet in the decades ahead.

In 2003, Jamais co-founded WorldChanging.com, a website dedicated to finding and calling attention to models, tools and ideas for building a “bright green” future. In March, 2006, he started Open the Future.

One topic that Jamais has often addressed is geoengineering – sometimes also called “climate engineering”, “planetary engineering”, or “terraforming”.  Geoengineering covers a range of large-scale projects that could, conceivably, be deployed to head off the effects of runaway global warming.  Examples include launching large mirrors into space to reflect sunlight away from the earth, injecting sulphate particles into the stratosphere, brightening clouds or deserts to increase their reflectivity, and extracting greenhouse gases from the atmosphere.  It’s a thoroughly controversial topic.  But Jamais treads skilfully and thoughtfully through the controversies.

A collection of essays by Jamais on the topic of geoengineering is available in book format, under the title “Hacking the earth: understanding the consequences of geoengineering“.  It’s a slim volume, with just over 100 pages, but it packs lots of big thoughts.  While reading, I found myself nodding in agreement throughout the book.

At present, this book is only available from Lulu.com.  As Jamais says, the book is, for him:

an experiment in self-publishing…

… in recent weeks various friends have tried out – and given high marks to – web-based self-publishing outfits like Lulu.com… I thought I’d give this method a shot.

The material in the book is derived from articles published online at Open the Future and elsewhere.  Some of the big themes are as follows (the following bullet points are all excerpts from Jamais’ writing):

  • Feedback effects ranging from methane released from melting permafrost to carbon emissions from decaying remnants of forests devoured by pine beetles risk boosting greenhouse gases faster than natural compensation mechanisms can handle.  The accumulation of non-linear drivers can lead to “tipping point” events causing functionally irreversible changes to geophysical systems (such as massive sea-level increases).  Some of these can have feedback effects of their own, such as the elimination of ice caps reducing global albedo, thereby accelerating heating.
  • None of the bright green solutions — ultra-efficient buildings and vehicles, top-to-bottom urban redesigns, local foods, renewable energy systems, and the like — will do anything to reduce the anthropogenic greenhouse gases that have already been emitted. The best result we get is stabilizing at an already high greenhouse gas level. And because of ocean thermal inertia and other big, slow climate effects, the Earth will continue to warm for a couple of decades even after we stop all greenhouse gas emissions. Transforming our civilization into a bright green wonderland won’t be easy, and under even the most optimistic estimates will take at least a decade; by the time we finally stop putting out additional greenhouse gases, we could well have gone past a point where globally disastrous results are inevitable. In fact, given the complexity of climate feedback systems, we may already have passed such a tipping point, even if we stopped all emissions today.
  • Geoengineering, should it be tried, would not be a replacement for making the economic, social, and technological changes needed to eliminate anthropogenic greenhouse gases. It would only be a way of giving us more time to make those changes. It’s not an either-or situation; geo is a last-ditch prop for making sure that we can do what needs to be done.
  • We don’t know enough about how the various geoengineering proposals would play out to make a persuasive case for trying any of them.  There needs to be far more study before making any even moderate-scale experimental effort. This is not something to try today. The most important task for current geoengineering research is to identify the approaches that might look attractive at first, but have devastating results — we need to know what we should avoid even if desperate.
  • Like it or not, we’ve entered the era of intentional geoengineering. The people who believe that (re)terraforming is a bad idea need to be part of the discussion about specific proposals, not simply sources of blanket condemnations. We need their insights and intelligence. The best way to make that happen, the best way to make sure that any terraforming effort leads to a global benefit, not harm, is to open the process of studying and developing geotechnological tools.
  • Geoengineering presents more than just an environmental question. It also presents a geopolitical dilemma. With processes of this magnitude and degree of uncertainty, countries would inevitably argue over control, costs, and liability for mistakes. More troubling, however, is the possibility that states may decide to use geoengineering efforts and technologies as weapons. Two factors make this a danger we dismiss at our peril: the unequal impact of climate changes, and the ability of small states and even nonstate actors to attempt geoengineering.
  • It is possible that, should the international community refrain from geoengineering strategies, one or more smaller, non-hegemonic, actors could undertake geoengineering projects of their own. This could be out of a legitimate fear that prevention and mitigation strategies would be insufficient, out of a disagreement with the consensus over geoengineering safety or results, or—most troublingly—out of a desire to use geoengineering tools to achieve a relative increase in competitive power over adversaries.

I particularly liked Jamais' suggestion of a "Reversibility Principle" as an alternative to the "Precautionary Principle" and the "Proactionary Principle" that have previously been proposed as guidelines for deciding which technological interventions to pursue.

Geoengineering is, by its nature, a huge topic.  The "Technology Review" magazine contains a substantial analysis entitled "The Geoengineering Gambit" in its Jan-Feb 2010 edition. And the authors of Freakonomics, Stephen J. Dubner and Steven D. Levitt, included a chapter on geoengineering in their follow-up book, "Superfreakonomics".  As it happens, there seems to be wide consensus that the Freakonomics team were considerably too hasty in their analysis – see for example the Guardian article "Why Superfreakonomics' authors are wrong on geo-engineering".  But the fact that there were mistakes in that analysis doesn't mean the topic itself should fade from view.

Far from it: I’m sure we’re going to be hearing more and more about geoengineering.  It deserves our attention!

31 December 2009

The constant economy

Filed under: books, Economics, green, leadership, market failure, vision — David Wood @ 2:54 pm

I’ve had mixed thoughts when reading Zac Goldsmith‘s “The constant economy: how to create a stable society” over the last few days.  It makes some useful contributions to an ultra-important debate.  However, the recommendations it makes frequently strike me as impractical.

Zac has been one of the advisors to the UK Conservative Party on environmental matters.  He is now the Conservative prospective parliamentary candidate for the Richmond Park constituency, which is adjacent to the one I live in.  It's possible that his views on environmental matters will have a significant influence over the next UK government.

Some of the examples in the book made me think, “Gosh, I didn’t realise things were so bad; things can’t be left to go on like this“.  I had these thoughts when reading, for example, about the huge decline in fishing stocks worldwide, and about the enormous swathe of plastic waste in large parts of the Pacific Ocean.

Other parts, however, made me think, “Hang on, there’s another side to this story” – for example, for some of the incidents described in the chapter about the Precautionary Principle, and for the section about nuclear power.

This book is like a manifesto.  Mixed in with real-world anecdotes and analysis, each chapter contains a list of “Voter Demand Box” items.  For example, here’s the list from the chapter on “A zero waste economy”:

‘Take back’

People should have a legal ‘take back’ right enshrined in consumer law.  This would give everyone the right to take any packaging waste back to the shop it was bought from, and impose an obligation on retailers to recycle that waste once it was received.

Paying people to recycle

No more landfill

Using the right materials

Built to last

Government buying power

Incineration, a last resort

And from the chapter “An energy revolution”:

Find out the truth about oil

A cross-party taskforce should be established immediately to draw up a risk assessment.  It should not invite the traditional fuel industry to take part, as it would effectively be studying a risk scenario that says their maths is incorrect.  The taskforce should be required to publicly report its findings within a year.

At the same time, we should also expect our government to put pressure on the UN or International Energy Authority to undertake a review of the world’s oil reserves.  If the economic models of every nation on earth are based on the assumption of everlasting oil supplies, it is reasonable that they should know how much oil actually exists.

Capture the heat

Reward the pioneers

Break the rules

Invest!

We urgently need a renewable energy fund to provide substantial grants for the research and development of radical new clean energy technologies.  From wave power to clean coal technology, potential solutions remain in the pipeline due to a lack of investment.  Government should provide that investment.  Diverting money that would otherwise be spent subsidizing fossil fuels or nuclear energy could provide billions of pounds for research, support and, crucially, for upgrading the national grid.

Stop paying the polluters

Whilst there are elements of good sense to all (or nearly all) of these recommendations, this set of items needs a lot more work:

  • The items are uncosted, and generally open-ended;
  • It’s often unclear how the recommendations differ from policies and processes that are already in place;
  • There’s no prioritisation (everything is equally important);
  • There’s no roadmap (everything is equally urgent).

Despite this weakness, this book still has merit as a good conversation starter.

The book’s introduction provides a higher-level picture.  Here’s the opening paragraph:

The world is in trouble.  As human numbers expand and the resource-hungry economy grows, the natural environment is suffering an unprecedented assault.  Forests are shrinking, species are disappearing, oceans are emptying, land is turning to desert.  The climate itself is being thrown out of balance.  In just a few generations, we have created the biggest threat to the natural world since humanity evolved.  Unless something radical is done now, the world in which our children grow up will be less beautiful, less bountiful, more polluted and more uncertain than ever before.

The top-level recommendations in the book are, in effect:

1.) The need for first-class political leadership on environmental issues

We need political leaders who can free themselves from the constraints of pressure groups, whose vision extends far beyond the next election, and who can motivate strong constructive action (rather than just words):

Politicians in Britain, as elsewhere, can see the rising tide of concern over green issues, and in many cases know what solutions are required.  The environment has never been so high on the political agenda…

Yet few politicians are prepared to take the action needed.  Nothing happens.  Time ticks by, the situation becomes more urgent – and government does nothing.  Why?

Politicians are terrified of acting because they believe that tackling the looming crisis will involve restricting the electorate's choices.  They believe that saving the planet means destroying the economy, and that neither business nor voters will stand for it.  They fear the headlines of a hostile media.  They fear, ultimately, for their jobs.  It always seems easier to do nothing – and to let the situation drift and hope that someone else takes the risk…

2.) The need to adapt market economics to properly respect environmental costs

Our defining challenge is to marry the environment with the market.  In other words, we need to reform those elements of our economy that encourage us to damage, rather than nurture, the natural environment.

The great strength of the market is its unique ability to meet the economic needs of citizens.  Its weakness is that it is blind to the value of the environment…

Other than nature itself, the market is also the most powerful force for change that we have.  The challenge we face is to find ways to price the environment into our accounting system: to do business as if the earth mattered, and to make it matter not just as a moral choice but as a commercial imperative

Note: this is hardly a new message.  For one, Jonathon Porritt covered similar ground in his 2005 book (with a new edition in 2007), “Capitalism as if the world matters“.  However, Zac has a significantly simpler writing style, so his ideas may reach a wider audience – whereas I confess I twice got bogged down in the early stages of Jonathon’s book, and set it aside without reading further.

3.) The need for better use of market-based instruments such as taxation

We need to change the boundaries within which the market functions, by using well-targeted regulation.

Taxation is the best mechanism for pricing pollution and the use of scarce resources.  If tax shifts emphasis from good things like employment to bad things like pollution, companies will necessarily begin designing waste and pollution out of the way they operate…

The other major tool in the policymakers’ kit is trading.  Carbon emissions trading is a good example of a market-based approach which attaches a value to carbon emissions and ensures that buyers and sellers are exposed to this price.  As long as the price is high enough to influence decisions, it can work…

Note: it’s clear that the existing carbon trading scheme has lots of problems (as Zac describes, later in the book).  That’s a reason to push on quickly to a more effective replacement.

There’s also a latent worry over Zac’s confident recommendation:

It's crucial that wherever money is raised on the back of taxing 'bad' activities, it is used to subsidise desirable activities.  For example, if a new tax is imposed on the dirtiest cars, it needs to be matched, pound for pound, by reductions in the price of the cleanest cars.

The complication is that once the higher taxation drives down usage of (in this example) the dirtiest cars, the tax revenue collected by the government will shrink, and the "pound for pound" balance will break.  It's another example of how the ideas in the book lack detailed financial planning.  Presumably Zac intends these details to be provided at a later stage.
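To see why the balance breaks, here is a minimal toy model (all figures and the elasticity assumption are made up for illustration, not taken from the book): a per-car tax on dirty cars funds a per-car subsidy on clean cars, but the same tax suppresses dirty-car sales, so the revenue available for matching falls.

```python
def feebate_gap(dirty_sales, tax, elasticity, clean_sales, subsidy):
    """Return the shortfall (positive) or surplus (negative) between
    subsidy payouts and tax revenue in a 'pound for pound' scheme.

    dirty_sales: baseline yearly sales of the dirtiest cars
    tax:         levy per dirty car, in pounds
    elasticity:  fraction of dirty-car sales lost per 1000 pounds of tax
    clean_sales: yearly sales of the cleanest cars receiving the subsidy
    subsidy:     price reduction per clean car, in pounds
    """
    # The tax deters some buyers, so fewer dirty cars are sold and taxed.
    remaining = dirty_sales * max(0.0, 1 - elasticity * tax / 1000)
    revenue = remaining * tax
    payout = clean_sales * subsidy
    return payout - revenue

# Early on, with few buyers deterred, the tax comfortably funds the subsidy:
print(feebate_gap(100_000, 500, 0.1, 40_000, 1000))   # -7500000.0 (surplus)
# Once the tax succeeds in shifting buyers away from dirty cars,
# the same subsidy can no longer be funded pound for pound:
print(feebate_gap(100_000, 500, 1.2, 40_000, 1000))   # 20000000.0 (shortfall)
```

The point is simply that the scheme's success undermines its own funding: the more effective the tax is at changing behaviour, the less revenue there is to match against the subsidy.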

4.) We need a fresh approach to regulation

Direct controls force polluting industries to improve their performance, and can eliminate products or practices that are particularly hazardous…  Markets without regulation would not have delivered unleaded petrol, for instance, or catalytic converters.  Without regulations requiring smokeless fuel, London’s smogs would still be with us.

This approach, however, needs to be effective.  With some products and processes, the regulatory bar needs to be raised internationally to avoid companies chasing the lowest standards globally.  We also need a change in our regulatory approach, away from an obsessive policing of processes towards a focus on outcomes.  If the regulatory system is too prescriptive, there is no room for innovation, and no real prospect of higher environmental standards…

5.) We need to measure what matters

Almost every nation on earth uses gross domestic product (GDP) to measure its economic growth.  The trouble is, expressed as a monetary value, GDP simply measures economic transactions, indiscriminately.  It cannot tell the difference between useful transactions and damaging ones…

Chopping down a rainforest and turning it into toilet paper increases GDP.  If crime escalates, the resulting investments in prisons and private security will add to GDP and be measured as ‘growth’.  When the Exxon Valdez oil tanker ran aground and spilt its vast load of oil on the pristine Alaskan shoreline, US GDP actually soared as legal work, media coverage and clean-up costs were all added to the national accounts…

US Senator Robert Kennedy said something similar:  “GDP does not allow for the health of our children, the quality of their education, or the joy of their play”, he said.  “It does not include the beauty of our poetry or the strength of our marriages, the intelligence of our public debate or the integrity of our public officials.  It measures neither our wit nor our courage, neither our wisdom nor our learning, neither our compassion nor our devotion to our country; it measures everything, in short, except that which makes life worthwhile.”

But the pursuit of economic growth, as measured by GDP, has been the overriding policy for decades, with the effect that the consequences have often been perverse…

A number of organisations have tried to assemble a new tool for measuring progress.  But the result is invariably a toolkit that is monstrous in its complexity and too impractical for any government to use.  A neater approach would be for the government to establish a wholly independent Progress Commission, staffed by experts from a wide variety of fields: economists, environmentalists, statisticians, academics, etc…

Whichever indicators are selected, the results would be handed each year to Parliament and the media.  The government would be required to respond…

Note: again, the suggested practical follow-up seems weaker than the analysis of the problem itself.  The economy has been ultra-optimised to pursue growth in GDP.  That’s how businesses are set up.  That’s going to prove very difficult to change.  Attention to non-financial matters is very likely to be squeezed.

However, it's surely good to have the underlying problem highlighted once again.  Robert Kennedy's stirring words ring as clearly today as when they were first spoken, in March 1968.

Let’s keep these words in mind, until we are confident that society is set up to pursue what matters, rather than simply to boost GDP.

Further reading: The book has its own website, with a blog attached.
