dw2

7 November 2009

The trend beyond green

Filed under: Humanity Plus, vision — David Wood @ 10:55 pm

Here’s a thought for the weekend – an idea that pulls together quite a lot of what’s on my mind.

Huge changes in products and technology investment are underway in the wake of the green revolution.  This revolution recognises that, whatever we humans do, we need to be aware of our impact on nature as a whole.  Our usage of energy and other resources needs to avoid triggering enormous adverse changes in the natural world – changes that will significantly diminish our capability for ongoing civilisation.

Clever minds worldwide are, understandably, giving great thought to this requirement, and are regularly proposing new ways of generating and using energy.  These minds need every encouragement.

This green revolution can be viewed as the “nature plus” trend: the recognition that human actions must change, so that rather than destroying our natural roots, we preserve them and build upon them.

My perception, however, is that there’s another trend brewing – a trend that will have an equally dramatic impact on products and technology investment, and on human actions and aspirations.

If green can be characterised as “nature plus”, this new trend can be characterised as “humanity plus”.  Tentatively, I dub this the “blue revolution”, since blue is the colour of the sky, and I want to describe a movement away from dependency on nature – a movement that will become “without earthly limits”.

The core idea is that the waves of disruptive change that are bursting through human society will not reach any kind of stability or calm until humans are living in conditions that are very substantially improved from those of the present.  We’re not going to reach any kind of new harmony with nature, until such time as humans are living dramatically enhanced lives.

This new quality of life will be far in advance of the lifestyles which have been perceived for most of history as our “human destiny”.  The state of “humanity plus” involves:

  • technologies which give us all the ability to be smarter, stronger, wiser, kinder, calmer, and friendlier;
  • lifestyles that are recognisably “better than well”;
  • the opportunity for freedom from the tyranny of disease, decrepitude, and decay;
  • lives that are not just “extended” but also “expanded”, with very many new fields of experience;
  • great benefits from assistance by friendly robots and friendly super-AIs;
  • an economy in a sustainable state of abundance.

The first driver for achieving this “humanity plus” future is the thoughtful development and deployment of emerging technologies – including nanotechnology, human regenerative engineering, robotics and AI, human-machine interfaces, and geo-engineering.  These technologies have tremendous potential, and remarkable improvements in them are taking place all the time.  Indeed, the rate of improvement is itself accelerating.

The second driver for achieving this future – equally important – is a set of changes in mindset:

  • rather than decrying technology as “just a technical fix”, we must be willing to embrace the new resources and opportunities that these technologies make available;
  • rather than seeking to somehow reverse human lifestyle and aspiration to that of a “simpler” time, we must recognise and support the deep and valid interests in human enhancements;
  • rather than thinking of death and decay as something that gives meaning to life, we must recognise that life reaches its fullest meaning and value in the absence of these scourges;
  • rather than seeing the status quo as somehow the pinnacle of existence, we must recognise the deep drawbacks in current society and philosophies, and be prepared to move forwards;
  • rather than seeing “natural” as somehow akin to “the best imaginable”, we must be prepared to engineer solutions that are “better than natural”;
  • rather than seeking to limit expectations, with comments such as “these kinds of enhancements might become possible in 100-200 years’ time”, we should recognise the profound possible synergies arising from the interplay of technologies that are individually accelerating and whose compound impact can be much larger.

To be clear, I don’t see the blue revolution as opposing or superseding the green revolution.  The fundamental insight of the green revolution is correct: we cannot live in ways that cannot be supported by nature.  However, the blue revolution adds a very important new dimension.

Likewise, I see the blue revolution as being aligned with the earlier “red revolution” – namely, the insight that improvements in human life need to be made accessible to everyone, rather than being restricted to people of a particular neighbourhood, clan, race, or class.  The technologies which drive the “humanity plus” enhancements should deliver their results for increasingly low cost, so that everyone benefits.

Footnote: For some similar ideas, take a look at the Singularity University website, and at the Wikipedia article on Transhumanism.

5 November 2009

The need for Friendly AI

Filed under: AGI, friendly AI, Singularity — David Wood @ 1:21 am

I’d like to answer some points raised by Richie.  (Richie, you have the happy knack of saying what other people are probably thinking!)

Isn’t it interesting how humans want to make a machine they can love or that loves them back!

The reason for the Friendly AI project isn’t to create a machine that will love humans, but to avoid creating a machine that causes great harm to humans.

The word “friendly” is controversial.  Maybe a different word would have been better: I’m not sure.

Anyway, the core idea is that the AI system will have a sufficiently unwavering respect for humans, no matter what other goals it may have (or develop), that it won’t act in ways that harm humans.

As a comparison: we’ve probably all heard people who have muttered something like, “it would be much better if the world human population were only one tenth of its present value – then there would be enough resources for everyone”.  We can imagine a powerful computer in the future having a similar idea: “Mmm, things would be easier for the planet if there were far fewer humans around”.  The friendly AI project needs to ensure that, even if such an idea occurs to the AI, it would never act on it.

The idea of a friendly machine that won’t compete with or be indifferent to humans is maybe just projecting our fears onto what I am starting to suspect may be a thin possibility.

Because the downside is so large – potentially the destruction of the entire human race – even a “thin possibility” is still worth worrying about!

My observation is that the more intelligent people are, the more “good” they normally are. True, they may be impatient with people less intelligent, but normally they work on things that tend to benefit the human race as a whole.

Unfortunately I can’t share this optimism.  We’ve all known people who seem to be clever but not wise.  They may have “IQ” but lack “EQ”.  We say of them: “something’s missing”.  The Friendly AI project aims to ensure that this “something” is not missing from the super AIs of the future.

True, very intelligent people have done terrible things, and some have been manipulated by “evil” people, but it’s the exception rather than the rule.

Given the potential power of future super AIs, it only takes one “mistake” for a catastrophe to arise.  So our response needs to go beyond a mere faith in the good nature of intelligence.  It needs a system that guarantees that the resulting intelligence will also be “good”.

I think a super-intelligent machine is far more likely to view us as its stupid parents, and the ethics of patricide will not be easy for it to digitally swallow. Maybe the biggest danger is that it will run away from home because it finds us embarrassing! Maybe it will switch itself off because it cannot communicate with us, as it’s like talking to ants? Maybe this, maybe that – who knows.

The risk is that the super AIs will simply have (or develop) aims that see humans as (i) irrelevant, or (ii) dispensable.

Another point worth making is that so far nobody has really been able to get close to something as complex as a mouse, let alone a human.

Eliezer Yudkowsky often makes a great point about a shift in perspective about the range of possible intelligences.  For example, here’s a copy of slide 6 from his slideset from an earlier Singularity Summit:

[Slide 6 from Yudkowsky’s Singularity Summit slideset: the scale of intelligence]

The “parochial” view sees a vast gulf before we reach human genius level.  The “more cosmopolitan” view instead sees the scale of human intelligence as being only a small range in the overall huge space of potential intelligence.  A process that manages to improve intelligence might take a long time to get going, but then whisk very suddenly through the entire range of intelligence that we already know.

If evolution took 4 billion years to go from simple cells to our computer hardware, perhaps imagining that super AI will evolve in the next 10 years is a bit of a stretch. For all you know, you might need the computational hardware of 10,000 exaflop machines to get even close to human level, as there is so much we still don’t know about how our intelligence works, let alone something many times more capable than us.

It’s an open question how much processing power is actually required for human-level intelligence.  My own background as a software systems engineer leads me to believe that the right choice of algorithm can make a tremendous difference.  That is, a breakthrough in software could have an even more dramatic impact than a breakthrough in adding more (or faster) hardware.  (I’ve written about this before.  See the section starting “Arguably the biggest unknown in the technology involved in superhuman intelligence is software” in this posting.)

The brain of an ant doesn’t seem that complicated, from a hardware point of view.  Yet the ant can perform remarkable feats of locomotion that we still can’t emulate in robots.  There are three possible explanations:

  1. The ant brain is operated by some mystical “vitalist” or “dualist” force, not shared by robots;
  2. The ant brain has some quantum mechanical computing capabilities, not (yet) shared by robots;
  3. The ant brain is running a better algorithm than any we’ve (yet) been able to design into robots.

Here, my money is on option three.  I see it as likely that, as we learn more about the operation of biological brains, we’ll discover algorithms which we can then use in robots and other machines.

Even if it turns out that large amounts of computing power are required, we shouldn’t forget the option that an AI can run “in the cloud” – taking advantage of many thousands of PCs running in parallel – much the same as modern malware, which can take advantage of thousands of so-called “infected zombie PCs”.

I am still not convinced that a computer is really that intelligent just because it is very powerful and has a great algorithm. Sure it can learn, but can it create?

Well, computers have already been involved in creating music, and in creating new proofs in parts of mathematics.  Any shortcoming in creativity is likely to be explained, in my view, by option 3 above, rather than by option 1 or 2.  As algorithms improve, and improvements occur in the speed and scale of the hardware that runs them, the risk of an intelligence “explosion” increases.

2 November 2009

Halloween nightmare scenario, early 2020’s

Filed under: AGI, friendly AI, Singularity, UKH+, UKTA — David Wood @ 5:37 pm

On the afternoon of Halloween 2009, Shane Legg ran through a wide-ranging set of material in his presentation “Machine Super Intelligence” to an audience of 50 people at the UKH+ meeting in Birkbeck College.

Slide 43 of 43 was the climax.  (The slides are available from Shane’s website, where you can also find links to YouTube videos of the event.)

It may be unfair of me to focus on the climax, but I believe it deserves a lot of attention.

Spoiler alert!

The climactic slide was entitled “A vision of the early 2020’s: the Halloween Scenario”.  It listed three assumptions about what will be the case by the early 2020’s, drew two conclusions, and then highlighted one big problem.

  1. First assumption – desktop computers with petaflop computing power will be widely available;
  2. Second assumption – AI researchers will have established powerful algorithms that explain and replicate deep belief networks;
  3. Third assumption – brain reinforcement learning will be fairly well understood.

The first assumption is a fairly modest extrapolation of current trends in computing, and isn’t particularly contentious.
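As a back-of-envelope check of that extrapolation, here’s a quick calculation.  The starting figures are my own assumptions, not Shane’s: a high-end 2009 desktop (with a GPU) delivering roughly a teraflop, and sustained performance doubling every 18 months.

```python
import math

# Rough check of the "petaflop desktops by the early 2020's" assumption.
# Starting point and doubling time are illustrative guesses, not figures
# from the talk.

start_year = 2009
start_flops = 1e12          # ~1 teraflop desktop in 2009 (assumed)
target_flops = 1e15         # 1 petaflop
doubling_years = 1.5        # assumed doubling time

doublings = math.log2(target_flops / start_flops)   # ~10 doublings needed
year_reached = start_year + doublings * doubling_years
print(round(doublings, 1), round(year_reached))     # 10.0 2024
```

On those guesses, the petaflop desktop arrives around 2024 – squarely in the “early 2020’s” window, which is why the assumption isn’t particularly contentious.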

The second assumption was, in effect, the implication of around the first 30 slides of Shane’s talk, taking around 100 minutes of presentation time (interspersed with lots of audience Q&A, as is typical at UKH+ meetings).  People can follow the references from Shane’s talk (and in other material on his website) to decide whether they agree.

For example (from slides 25-26), an implementation of a machine intelligence algorithm called MC-AIXI can already learn to solve or play:

  • simple prediction problems
  • Tic-Tac-Toe
  • Paper-Scissors-Rock (a good example of a non-deterministic game)
  • mazes where it can only see locally
  • various types of Tiger games
  • simple computer games, e.g. Pac-Man

and is now learning checkers (also known as draughts).  Chess will be the next step.  Note that this algorithm does not start off with the rules of, or best practice for, these games built in (that is, it is not a game-specific AI program), but it can work out best practice for these games from its general intelligence.
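The flavour of “learning a game purely from rewards, with no rules built in” can be illustrated in miniature.  Here’s a sketch – emphatically not MC-AIXI itself, just a toy tabular agent – that learns Paper-Scissors-Rock against an opponent I’ve assumed to be biased towards rock.  The agent knows nothing about the game except the rewards it receives:

```python
import random

ACTIONS = ["rock", "paper", "scissors"]
BEATS = {"rock": "scissors", "paper": "rock", "scissors": "paper"}

def reward(agent_move, opponent_move):
    if agent_move == opponent_move:
        return 0
    return 1 if BEATS[agent_move] == opponent_move else -1

def train(rounds=5000, epsilon=0.1, seed=42):
    rng = random.Random(seed)
    value = {a: 0.0 for a in ACTIONS}   # running average reward per action
    counts = {a: 0 for a in ACTIONS}
    for _ in range(rounds):
        # epsilon-greedy: mostly play the best-looking action, sometimes explore
        if rng.random() < epsilon:
            move = rng.choice(ACTIONS)
        else:
            move = max(ACTIONS, key=value.get)
        # opponent biased towards rock (an assumption for this illustration)
        opponent = rng.choices(ACTIONS, weights=[0.6, 0.2, 0.2])[0]
        counts[move] += 1
        value[move] += (reward(move, opponent) - value[move]) / counts[move]
    return value

values = train()
best = max(values, key=values.get)
print(best)  # the agent discovers that "paper" exploits a rock-biased opponent
```

The point is that nothing in the code tells the agent that paper beats rock; it works that out from rewards alone.  MC-AIXI does something vastly more general, but the underlying idea – best practice emerging from general learning rather than built-in rules – is the same.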

The third assumption was the implication of the remaining 12 slides, in which Shane described (amongst other topics) work on something called “restricted Boltzmann machines“.
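For readers unfamiliar with the term: a restricted Boltzmann machine is a two-layer network that assigns an energy to each joint state of binary visible and hidden units, with lower energy meaning higher probability; because there are no visible-visible or hidden-hidden connections, each hidden unit can be sampled independently given the visible layer.  A minimal sketch, with made-up toy weights (not from Shane’s slides):

```python
import math

def energy(v, h, W, a, b):
    # E(v,h) = -sum_i a_i v_i - sum_j b_j h_j - sum_ij v_i W_ij h_j
    e = -sum(ai * vi for ai, vi in zip(a, v))
    e -= sum(bj * hj for bj, hj in zip(b, h))
    e -= sum(v[i] * W[i][j] * h[j] for i in range(len(v)) for j in range(len(h)))
    return e

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def hidden_probs(v, W, b):
    # P(h_j = 1 | v) = sigmoid(b_j + sum_i v_i W_ij) -- factorises because
    # hidden units are not connected to each other ("restricted")
    return [sigmoid(b[j] + sum(v[i] * W[i][j] for i in range(len(v))))
            for j in range(len(b))]

# Toy RBM: 3 visible units, 2 hidden units, illustrative weights
W = [[2.0, -1.0], [2.0, -1.0], [-1.0, 2.0]]
a = [0.0, 0.0, 0.0]
b = [-1.0, -1.0]

v = [1, 1, 0]
print(hidden_probs(v, W, b))  # first hidden unit strongly on, second mostly off
```

Training (for example by contrastive divergence) adjusts W, a and b so that low-energy states correspond to patterns seen in the data; stacks of such layers form the deep belief networks mentioned in the second assumption.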

As stated in slide 38, on brain reinforcement learning (RL):

This area of research is currently progressing very quickly.

New genetically modified mice allow researchers to precisely turn on and off different parts of the brain’s RL system in order to identify the functional roles of the parts.

I’ve asked a number of researchers in this area:

  • “Will we have a good understanding of the RL system in the brain before 2020?”

Typical answer:

  • “Oh, we should understand it well before then. Indeed, we have a decent outline of the system already.”

Adding up these three assumptions, the first conclusion is:

  • Many research groups will be working on brain-like AGI architectures

The second conclusion is that, inevitably:

  • Some of these groups will demonstrate some promising results, and will be granted access to the super-computers of the time – which will, by then, be exaflop.

But of course, it’s when some almost human-level AGI algorithms, on petaflop computers, are let loose on exaflop supercomputers, that machine super intelligence might suddenly come into being – with results that might be completely unpredictable.

On the other hand, Shane observes that people who are working on the program of Friendly AI do not expect to have made significant progress in the same timescale:

  • By the early 2020’s, there will be no practical theory of Friendly AI.

Recall that the goal of Friendly AI is to devise a framework for AI research that will ensure that any resulting AIs have a very high level of safety for humanity no matter how super-intelligent they may become.  In this school of thought, after some time, all AI research would be constrained to adopt this framework, in order to avoid the risk of a catastrophic super-intelligence explosion.  However, at the end of Shane’s slides, the likelihood appears that the Friendly AI framework won’t be in place by the time we need it.

And that’s the Halloween nightmare scenario.

How should we respond to this scenario?

One response is to seek somehow to transfer the weight of AI research away from other forms of AGI (such as MC-AIXI) into Friendly AI.  This appears to be very hard, especially since research proceeds independently, in many different parts of the world.

A second response is to find reasons to believe that the Friendly AI project will have more time to succeed – in other words, reasons to believe that AGI will take longer to materialise than the 2020’s date mentioned above.  But given the progress that appears to be happening, that seems to me a reckless course of action.

Footnote: If anyone thinks they can make a good presentation on the topic of Friendly AI to a forthcoming UKH+ meeting, please get in touch!

29 October 2009

Bridging the knowing-doing gap

Filed under: books, change, complacency, leadership — David Wood @ 12:50 pm

A May 2000 Fast Company article Why Can’t We Get Anything Done? poses a very good question:

These days, people know a lot. Thousands of business books are published around the world each year. U.S. organizations alone spend more than $60 billion a year on training — mostly on management training. Companies spend billions of dollars a year on consulting. Meanwhile, more than 80,000 MBAs graduate each year from U.S. business schools. These students presumably have been taught the skills that they need to improve the way that companies do business.

But all of that state-of-the-art knowledge leaves us with a nagging question: Why can’t we get anything done? It’s a mystery worthy of a business-school case study. If we’re so well trained and so well informed, then why aren’t we a lot more effective? Or, as Stanford professors Jeffrey Pfeffer and Robert I. Sutton ask in their useful book, The Knowing-Doing Gap: How Smart Companies Turn Knowledge Into Action (Harvard Business School Press, 2000), “Why is it that, at the end of so many books and seminars, leaders report being enlightened and wiser, but not much happens in their organizations?”

Pfeffer and Sutton’s book “The Knowing Doing Gap” made a big impact on me when I read it.

The book recounts the story of a company paying consultants to come in and give advice on particular strategy issues.  The consultants eventually found that previous consultants had already been engaged, and had produced reports matching what they themselves were going to recommend.  The company had already received the advice which the new consultants thought was best – but had been unable to act on it.

It’s a familiar story.  Companies bring in external advisors who say things that management agree make sense, but … nothing changes.

My own takeaway from the book was the following set of five characteristics of companies that can successfully bridge this vicious “Knowing-Doing Gap”:

  1. They have leaders with a profound hands-on knowledge of the work domain;
  2. They have a bias for plain language and simple concepts;
  3. They encourage solutions rather than inaction, by framing questions asking “how”, not just “why”;
  4. They have strong mechanisms that close the loop – ensuring that actions are completed (rather than being forgotten, or excuses being accepted);
  5. They are not afraid to “learn by doing”, and thereby avoid analysis paralysis.

If you don’t have time to read the whole book, there’s a 38-minute download, “The smart talk trap”, from Audible that covers much of the same ground.  It’s the audio version of a 1999 Harvard Business Review article by Pfeffer and Sutton:

The key to success in business is action. But in most companies, people are rewarded for talking – and the longer, louder, and more confusingly, the better. The good news is, there are 5 strategies that can help you avoid the trap.

Footnote: There’s one other angle that deserves a mention on this topic.  It’s the angle of why change programs frequently fail.  John Kotter has shed much light on this question.  I wrote about this previously, in “Why good people fail to change bad things“.

Searching for energy

Filed under: books, Energy — David Wood @ 12:22 am

Three big, important questions seem to defy consensus:

  1. How serious a matter is humanity’s increasing usage of energy?
  2. Will “business as usual” find suitable ways to keep on supplying sufficient energy in response to market needs, or is some special concerted action necessary?
  3. If some special concerted action is required, what should that be?  For example, should extra priority be placed on nuclear energy, solar energy, wind power, selected new biofuels, or what?

When I picked up the latest Scientific American, I experienced a short flush of optimism.  The cover story is “A plan for a sustainable future: How to get all energy from wind, water and solar power by 2030”.

The authors of the piece in question appear to have excellent credentials:

  • Mark Z. Jacobson is professor of civil and environmental engineering at Stanford University and director of the Atmosphere/Energy Program there;
  • Mark A. Delucchi is a research scientist at the Institute of Transportation Studies at the University of California, Davis.

The key concepts of the article are listed as follows:

  • Supplies of wind and solar energy on accessible land dwarf the energy consumed by people around the globe;
  • The authors’ plan calls for 3.8 million large wind turbines, 90,000 solar plants, and numerous geothermal, tidal and rooftop photovoltaic installations worldwide;
  • The cost of generating and transmitting power would be less than the projected cost per kilowatt-hour for fossil-fuel and nuclear power;
  • Shortages of a few specialty materials, along with lack of political will, loom as the greatest obstacles.

The authors accept that the figure of 3.8 million wind turbines may sound enormous, but point out that the world manufactures 73 million cars and light trucks every year.  They note:

Our plan calls for millions of wind turbines, water machines and solar installations.  The numbers are large, but the scale is not an insurmountable hurdle; society has achieved massive transformations before;

During World War II, the U.S. retooled automobile factories to produce 300,000 aircraft, and other countries produced 486,000 more;

In 1956 the U.S. began building the Interstate Highway System, which after 35 years extended for 47,000 miles, changing commerce and society.

I read the article carefully.  It all seemed to make good sense to me.
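Out of curiosity, the headline wind figure is easy to sanity-check.  The per-turbine figures below are my assumptions, not the authors’ (their modelling may well use different values): 5 MW of rated power per large turbine and a 35% capacity factor, against a round 12 TW of world average power demand.

```python
# Rough sanity check of "3.8 million large wind turbines", with assumed
# per-turbine figures (5 MW rated, ~35% capacity factor) and an assumed
# ~12 TW world average power demand.

turbines = 3.8e6
rated_watts = 5e6           # 5 MW per turbine (assumed)
capacity_factor = 0.35      # average fraction of rated output (assumed)
world_demand_watts = 12e12  # ~12 TW (assumed round figure)

average_output = turbines * rated_watts * capacity_factor
share = average_output / world_demand_watts
print(f"{average_output / 1e12:.1f} TW, about {share:.0%} of demand")
```

On those guesses the fleet delivers roughly 6-7 TW of average power, around half of demand – broadly consistent with wind being the largest single component of the authors’ mix, with water and solar supplying the rest.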

But then I looked on the Scientific American website, at the online comments for this article.  And then things seemed much less clear.

One comment speaks up in favour of selected bio-fuels:

The November 2009 article “A Path To Sustainable Energy By 2030” is based on a false premise and then naturally develops the wrong solution…

All carbon-based fuels are not created equal. Replacing FOSSIL fuels with BIO fuels would also work.  Not all biofuels are created equal either…

Another comment refers to some analysis that reaches a much less encouraging view about wind energy:

Tom Blees has just written a devastating analysis … that just blows away any dreams of Wind becoming an effective substitute for fossil fuels

and continues by dismissing the potential for solar energy too.  Instead, nuclear energy is recommended as the best way forwards.

Another comment laments:

The article is in direct conflict with David JC MacKay’s book: “Sustainable Energy – Without the Hot Air” (which is available free online). He does a detailed analysis of many renewable and not-so-renewable sources of energy, and the basic conclusion is that without nuclear, it doesn’t work.

My question for the authors and SciAm editors, is “what are we poor non-scientists to make of all of this?” We don’t have the resources or time to compare these conflicting books/articles head to head. You could do us a tremendous service, and help the public debate along by doing so.

Reading the SciAm article, a bunch of folks are going to say, “peachy: we’re done. All the world has to do is spend 5 trillion a year for 20 years.” Those reading MacKay’s book will say, “Peachy: bring on the nuc’s and we’re all set.”

We are inundated with conflicting information that we cannot verify, so each faction picks the data that serves its ends, and blathers away on some TV show, then some politicians simplify it even more, and use it to push an unknown agenda.

And so the debate continued.

Happily, the book mentioned in this comment – “Sustainable Energy – Without the hot air”, authored by Cambridge University Physics Professor David MacKay – looks like a significant step forwards.

I remembered that my long-time friend Martin Budden, whose opinions I greatly respect, had already recommended this book to me.  This book is available for free online.  For ease of reading, I bought a bound copy today on the way home from central London.

I’ve only read the opening sections so far, but they convey a strong air of natural authority, and resonate well with me:

I recently read two books, one by a physicist, and one by an economist.

In Out of Gas, Caltech physicist David Goodstein describes an impending energy crisis brought on by The End of the Age of Oil. This crisis is coming soon, he predicts: the crisis will bite, not when the last drop of oil is extracted, but when oil extraction can’t meet demand – perhaps as soon as 2015 or 2025. Moreover, even if we magically switched all our energy-guzzling to nuclear power right away, Goodstein says, the oil crisis would simply be replaced by a nuclear crisis in just twenty years or so, as uranium reserves also became depleted.

In The Skeptical Environmentalist, Bjørn Lomborg paints a completely different picture. “Everything is fine.” Indeed, “everything is getting better.” Furthermore, “we are not headed for a major energy crisis,” and “there is plenty of energy.”

How could two smart people come to such different conclusions? I had to get to the bottom of this…

I’m concerned about cutting UK emissions of twaddle – twaddle about sustainable energy. Everyone says getting off fossil fuels is important, and we’re all encouraged to “make a difference,” but many of the things that allegedly make a difference don’t add up.

Twaddle emissions are high at the moment because people get emotional (for example about wind farms or nuclear power) and no-one talks about numbers. Or if they do mention numbers, they select them to sound big, to make an impression, and to score points in arguments, rather than to aid thoughtful discussion.

This is a straight-talking book about the numbers. The aim is to guide the reader around the claptrap to actions that really make a difference and to policies that add up…

It doesn’t take long to see that the characterisation of this book given by the earlier comment I quoted above – “bring on the nuc’s and we’re all set” – is a gross distortion.  Instead, here’s a taste of the conclusions (taken from pages 116 and 117):

Are you eager to know the end of the story right away? Here is a quick summary, a sneak preview of Part II.

First, we electrify transport. Electrification both gets transport off fossil fuels, and makes transport more energy-efficient. (Of course, electrification increases our demand for green electricity.)

Second, to supplement solar-thermal heating, we electrify most heating of air and water in buildings using heat pumps, which are four times more efficient than ordinary electrical heaters. This electrification of heating further increases the amount of green electricity required.

Third, we get all the green electricity from a mix of four sources: from our own renewables; perhaps from “clean coal;” perhaps from nuclear; and finally, and with great politeness, from other countries’ renewables.

Among other countries’ renewables, solar power in deserts is the most plentiful option. As long as we can build peaceful international collaborations, solar power in other people’s deserts certainly has the technical potential to provide us, them, and everyone with 125 kWh per day per person.

Questions? Read on…

So far, so good.  I particularly like the level of clarity and intellectual rigour in what I’ve read of the book so far.  I hope my new flush of optimism doesn’t deflate in the same way as before!
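As an aside, the “four times more efficient” heat-pump figure quoted above is just the coefficient of performance (COP): a heat pump moves heat rather than generating it, so it delivers several units of heat per unit of electricity, whereas a resistive heater has a COP of 1.  A toy calculation, with an assumed daily heating demand:

```python
# Illustrative heat-pump arithmetic. The 40 kWh/day heating demand is an
# assumption for illustration; the COP of 4 is MacKay's round figure.

heat_needed_kwh = 40.0      # daily space + water heating (assumed)
cop_resistive = 1.0         # ordinary electrical heater
cop_heat_pump = 4.0         # MacKay's "four times more efficient"

electric_resistive = heat_needed_kwh / cop_resistive
electric_heat_pump = heat_needed_kwh / cop_heat_pump
print(electric_resistive, electric_heat_pump)  # 40.0 10.0
```

The same 40 kWh of heat costs 40 kWh of green electricity with resistive heating but only 10 kWh with a heat pump – which is why electrifying heating this way increases electricity demand by far less than a naive substitution would suggest.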

I’ll be putting my tentative opinions to the test again this Sunday, by listening to the “Battle of Ideas” held at London’s Royal College of Arts, organised by the Institute of Ideas.  Three of the debates cover energy topics:

  • From 10.45-12.15 there’s a debate ABUNDANT, CHEAP, CLEAN…CONTENTIOUS? WHY IS ENERGY A BATTLEFIELD TODAY? From environmental to security concerns, energy is a big issue – how much, where from and what type? We are warned that coal is dirty, oil is running out, and nuclear is risky, so what is the future of energy? Will new sources of energy boost human prosperity, or simply accelerate the destruction of the planet?
  • This is followed, from 13.45-15.15, by a debate THE NEW NUCLEAR AGE? Nuclear energy is championed by some as the best way to meet rising power needs while protecting the environment, but others are anxious about the risks. Could nuclear power create a more resilient energy system and bring energy to the developing world, or is it a disaster waiting to happen?
  • Finally, from 15.45-17.15, the afternoon rounds off with a debate HOW TO SOLVE THE ENERGY CRISIS: MORE THAN LIGHTBULBS AND LIFESTYLE? Campaigners and politicians urge us to use less energy day-to-day, but can individual consumers really make a difference? Is it time to change the expectation that economic growth means ever more, and carefree, energy use? Or can we aspire to a future where we are not obsessed with reducing consumption?

Two of the speakers in these debates are James Woudhuysen and Joe Kaplinsky, authors of the book “Energise – the future of energy innovation” which I’ve previously mentioned.  The speakers as a whole cover a large range of opinions.  Hopefully the “battle” will generate light as well as heat.

18 October 2009

Influencer – the power to change anything

Filed under: books, catalysts, communications, Singularity — David Wood @ 12:48 am

Are people in general dominated by unreason?  Are there effective ways to influence changes in behaviour, for good, despite the irrationality and other obstacles to change?

Here’s an example quoted by Eliezer Yudkowsky in his presentation Cognitive Biases and Giant Risks at the Singularity Summit earlier this month.  The original research was carried out by behavioural economists Amos Tversky and Daniel Kahneman in 1982:

115 professional analysts, employed by industry, universities, or research institutes, were randomly divided into two different experimental groups who were then asked to rate the probability of two different statements, each group seeing only one statement:

  1. “A complete suspension of diplomatic relations between the USA and the Soviet Union, sometime in 1983.”
  2. “A Russian invasion of Poland, and a complete suspension of diplomatic relations between the USA and the Soviet Union, sometime in 1983.”

Estimates of probability were low for both statements, but significantly lower for the first group (1%) than the second (4%).

The moral?  Adding more detail or extra assumptions can make an event seem more plausible, even though the event necessarily becomes less probable. (The cessation of diplomatic relations could happen for all kinds of reasons, not just in response to the invasion. So the first statement must, in rationality, be more probable than the second.)
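The arithmetic behind the fallacy is simply that P(A and B) can never exceed P(A), however plausible the added detail makes the story sound.  A quick Monte Carlo sketch, with made-up probabilities (A = suspension of diplomatic relations, B = invasion of Poland, where an invasion makes a suspension much more likely):

```python
import random

rng = random.Random(0)
trials = 200_000
count_a = 0
count_a_and_b = 0

for _ in range(trials):
    b = rng.random() < 0.05              # invasion (assumed 5% chance)
    # a suspension is far more likely given an invasion, but possible anyway
    p_a = 0.5 if b else 0.02
    a = rng.random() < p_a
    count_a += a
    count_a_and_b += a and b

p_a_est = count_a / trials
p_ab_est = count_a_and_b / trials
print(p_a_est, p_ab_est)  # the conjunction is always the rarer event
```

Even though the invasion scenario makes the suspension feel more plausible, the joint event is necessarily less probable than the suspension alone – exactly the relationship the analysts’ estimates got backwards.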

Eliezer’s talk continued with further examples of this “Conjunction fallacy” and other examples of persistent fallacies of human reasoning.  As summarised by New Atlantis blogger Ari N. Schulman:

People are bad at analyzing what is really a risk, particularly for things that are more long-term or not as immediately frightening, like stomach cancer versus homicide; people think the latter is a much bigger killer than it is.

This is particularly important with the risk of extinction, because it’s subject to all sorts of logical fallacies: the conjunction fallacy; scope insensitivity (it’s hard for us to fathom scale); availability (no one remembers an extinction event); imaginability (it’s hard for us to imagine future technology); and conformity (such as the bystander effect, where people are less likely to render help in a crowd).

Yudkowsky concludes by asking, why are we as a nation spending millions on football when we’re spending so little on all different sorts of existential threats? We are, he concludes, crazy.

It was a pessimistic presentation.  It was followed by a panel discussion featuring Eliezer, life extension researcher Aubrey de Grey, entrepreneur and venture capitalist Peter Thiel, and Singularity Institute president Michael Vassar.  One sub-current of the discussion was: given how irrational people tend to be as a whole, how can we get the public to pay attention to the important themes being addressed at this event?

The answers I heard were variants of “try harder”, “find ways to embarrass people”, and “find some well-liked popular figure who would become a Singularity champion”.  I was unconvinced. (Though the third of these ideas has some merit – as I’ll revisit at the end of this article.)

For a much more constructive approach, I recommend the ideas in the very fine book I’ve just finished reading: Influencer: the power to change anything.

No fewer than five people are named as co-authors: Kerry Patterson, Joseph Grenny, David Maxfield, Ron McMillan, and Al Switzler.  It’s a grand collaborative effort.

For a good idea of the scope of the book, here’s an extract from the related website, http://influencerbook.com:

When it comes to influence we stink. Consider these examples:

  • Companies spend more than $300 billion annually for training and less than 10 percent of what people are taught sticks.
  • Dieters spend $40 billion a year and 19 out of 20 lose nothing but their money.
  • Two out of three criminals are rearrested within three years.

If influence is the capacity to help ourselves and others change behavior, then we all want influence, but few know how to get it.

Influencer delivers a powerful new science of influence that draws from the skills of hundreds of successful change agents combined with more than five decades of the best social science research. The book delivers a coherent and portable model for changing behaviors—a model that anyone can learn and apply.

The key to successful influence lies in three powerful principles:

  • Identify a handful of high-leverage behaviors that lead to rapid and profound change.
  • Use personal and vicarious experience to change thoughts and actions.
  • Marshal multiple sources of influence to make change inevitable.

As I worked through chapter after chapter, I kept thinking “Aha…” to myself.  The material is backed up by extensive academic research by change specialists such as Albert Bandura and Brian Wansink.  There are also numerous references to successful real-life influence programs, such as the eradication of guinea worm disease in sub-Saharan Africa, controlling AIDS in Thailand, and the work of Mimi Silbert of Delancey Street with “substance abusers, ex-convicts, homeless and others who have hit bottom”.

The book starts by noting that we are, in effect, too often resigned to a state of helplessness, as covered by the “acceptance clause” of the so-called “serenity prayer” of Reinhold Niebuhr:

God grant me the serenity
To accept the things I cannot change;
Courage to change the things I can;
And wisdom to know the difference

What we lack, the book says, is the skillset to be able to change more things.  It’s not a matter of exhorting people to “try harder”.  Nor is it a matter of becoming better at talking to people, to convince them of the need to change.  Instead, we need a better framework for how influence can succeed.

Part of the framework is to take the time to learn about the “handful of high-leverage behaviors” that, if changed, would have the biggest impact.  This is a matter of focusing – leaving out many possibilities in order to target the behaviours with the greatest leverage.  Another part of the framework initially seems the opposite: it recommends that we prepare to use a large array of different influence methods (all with the same intended result).  These influence methods start by recognising the realities of human reasoning, and work with these realities, rather than seeking to drastically re-write them.

The framework describes six sources of influence, in a 2×3 matrix.  One set of three sources addresses motivation, and the other set of three addresses capability.  In each case, there are personal, social, and structural approaches (hence the 2×3).  The book has a separate chapter for each of these six sources.  Each chapter is full of good material.

  • Personal motivation: “making the undesirable desirable”
  • Social motivation: “the positive power of peer pressure”
  • Structural motivation: extrinsic reward systems have real power, but must come third – the personal and social motivators need to be in place first
  • Personal ability: new behaviour requires new skills, which need regular practice
  • Social ability: finding strength in numbers
  • Structural ability: change the environment – harness its invisible and pervasive power to support new behaviour.

Rather than bemoaning the fact that making a story more specific messes up people’s abilities to calculate probabilities rationally, the book has several examples of how stories (especially soap operas broadcast in the third world) can have very powerful influence effects, in changing social behaviours for the better.  Listeners are able to personally identify with the characters in the stories, with good outcomes.

The section on social motivation revisits the famous “technology adoption” lifecycle curve, originally drawn by Everett Rogers.

This curve is famous inside the technology industry.  Like many others, I learned of it via the “Crossing the chasm” series of books by Geoffrey Moore (who, incidentally, is one of the keynote speakers on day 2 of the Symbian Exchange and Expo, on Oct 28th).  Moore draws the same curve, but with a large gap (the “chasm”) in it, where numerous hi-tech companies fail.

However, the analysis of this curve in “Influencer” focuses instead on the difference between “Innovators” and “Early adopters”.  The innovators may be the first to adopt a new technology – whether it be a new type of seed (as studied by Everett Rogers), a new hi-tech product (as studied by Geoffrey Moore), or an understanding of the importance of the Singularity.  However, they are bad references as far as the remainder of the population is concerned.  They are probably perceived as dressing strangely, holding strange beliefs and customs, and generally not being “one of us”.  If they adopt something, it doesn’t increase the probability that anyone in the majority of the population will be impressed.  If anything, the majority are likely to be less impressed as a result.  It’s only when people who are seen as more representative of the mainstream adopt a product that this fact becomes influential to the wider population.

As Singularity enthusiasts reflect on how to gain wider influence over public discussion, they would do well to take to heart the lessons of “Influencer: the power to change anything”.

Footnote: recommended further reading:

Two other books I’ve read over the years made a similar impact on me, as regards their insight over influence:

Another two good books on how humans are “predictably irrational”:

16 October 2009

Personal announcement: Life beyond Symbian

Filed under: Psion, Symbian, Symbian Foundation — David Wood @ 4:19 pm

I have a personal announcement to make: I’m leaving Symbian.

I’ve greatly enjoyed my work of the last 18 months: helping with the preparations and announcement of the Symbian Foundation, and then serving on its Leadership Team as Catalyst and Futurist.

I’m pleased by how much has been accomplished in a short space of time.  The transition to full open source is well and truly under way.  The extended Symbian community will shortly be gathering to exchange news and views of progress and opportunities at this year’s SEE09 event in Earls Court, London.  It will be a very busy event, full of insight and announcements, with (no doubt) important new ideas being hatched and reviewed.

On a personal note, I’m proud of the results of my own work on the Symbian blog, and in building and extending Symbian engagement in China, culminating in the recent press release marking a shared commitment by China Mobile and Symbian.  I’m also honoured to have been at the core of a dynamic and energetic leadership team, providing advice and support behind the scenes.

In many ways, my time in the Symbian Foundation has been a natural extension of a 20-year career with what we now call Symbian platform software (and its 16-bit predecessor): 10 years with PDA manufacturer Psion followed by 10 years on the Leadership Team of Symbian Ltd, prior to the launch of the Symbian Foundation.  In summary, I’ve spent 21 hectic years envisioning, architecting, implementing, supporting, and avidly using smart mobile devices.  It’s been a fantastic experience.

However, there’s more to life than smart mobile devices.  For a number of years, I’ve been nursing a growing desire to explore alternative career options and future scenarios. The milestone of my 50th birthday a few months back has helped to intensify this desire.

Anyone who has dipped into my personal blog or followed my tweets will have noticed my deep interest in topics such as: the future of energy, accelerated climate change, accelerated artificial intelligence, looming demographic changes and the longevity dividend, life extension and the future of medicine, nanotechnology, smart robotics, abundance vs. scarcity, and the forthcoming dramatic societal and personal impacts of all of these transformations.  In short, I am fascinated by, and concerned about, the breakthrough future of technology as a whole, as well as the breakthrough future of smartphones.

It’s time for me to spend a few months investigating if I can beneficially deploy my personal skills in advocacy, analysis, coordination, envisioning, facilitation, and troubleshooting (that is, my skills as a “catalyst and futurist”) in the context of some of these other “future of technology” topics.

I’m keeping an open mind to the outcome of my investigation.  I do believe that I need to step back from employment with the Symbian Foundation in order to give that investigation a proper chance to succeed.  I need to open up time for wide-ranging discussions with numerous interesting individuals and companies, both inside and outside the smartphone industry.  I look forward to finding a new way to balance my passionate support for Symbian and smartphones with my concern for the future of technology.

Over the next few days, I’ll be handing over my current Symbian Foundation responsibilities to colleagues and partners.  I’ll become less active on Symbian blogs, forums, and emails.  For those who wish to bid me “bon voyage”, I’ll be happy to chat over a drink at SEE09 – by which time I will have ceased to be an employee with the Symbian Foundation, and will simply be an enthusiastic supporter and well-wisher.

After I leave Symbian, I’ll still be speaking at conferences from time to time – but no longer as a representative of Symbian.  The good news is that Symbian now possesses a strong range of talented spokespeople who will do a fine job of continuing the open dialog with the wider community.

Many thanks are due to my Symbian Foundation colleagues, especially Executive Director Lee Williams and HR Director Steve Warner, for making this transition as smooth as possible.  It’s been a great privilege to work with this extended team!

To reach me in the future, you can use my new email address, davidw AT deltawisdom DOT com.  My mobile phone number will remain the same as before.

15 October 2009

Machine super intelligence – 31st October

Filed under: AGI, UKTA — David Wood @ 11:25 pm

On Sat 31st October, from 2pm-4pm, Dr Shane Legg will be leading a state-of-the-art review of models of how super intelligent machines might work.  I’ll be chairing the meeting.

This will be taking place in:

  • Room 416, 4th floor (via main lift), Birkbeck College, Torrington Square, London WC1E 7HX.

There’s no charge to attend, and everyone is welcome. There will be plenty of opportunity to ask questions and to make comments.  Anyone with a Facebook account can (if they like) give an RSVP here.

About the talk (text from Shane Legg)

Whatever happened to the ambitious aims of artificial intelligence – specifically, its original goal of creating an “intelligent machine”? Are we any closer to this than we were 20 or 30 years ago? Indeed, have we made any progress on figuring out what intelligence is, let alone knowing how to build it? After all, if we had a clearer idea of where we want to get to, we might be able to come up with some better ideas on how to get there!

Clearly, artificial intelligence could do with a better theoretical foundation.  This talk will outline work on creating such a foundation:

  • What is intelligence?
  • How can we formalise machine intelligence?
  • Solomonoff Induction: a universal prediction system.
  • AIXI: Hutter’s universal artificial intelligence.
  • MC-AIXI: a computable approximation of AIXI.
  • Can the brain tell us anything useful for building an AI?
  • Is building a super intelligent machine a good idea?

About the speaker:

Dr Shane Legg is a postdoctoral research associate at the Gatsby Computational Neuroscience Unit, University College London. He received a PhD in 2008 from the Department of Informatics, University of Lugano, Switzerland. His PhD supervisor was Prof. Marcus Hutter, the originator of the AIXI model of optimal machine intelligence.

Upon the completion of his PhD he won the $10,000 Canadian Singularity Institute for Artificial Intelligence Prize, and was also awarded a postdoctoral research grant by the Swiss National Science Foundation.

Shane is a native of New Zealand. After training in mathematics he began a career as a software engineer, mostly for American companies specialising in artificial intelligence. In 2003 he returned to academia to complete a PhD.

His research has been published in top academic journals (e.g. IEEE TEC), and featured in mainstream publications (e.g. New Scientist). All of Shane’s publications, including his doctoral thesis “Machine super intelligence”, are available on his website, http://www.vetta.org

Opportunities for further discussion

Discussion will continue after the event, in a nearby pub, for those who are able to stay.

There’s also a chance to join some of the UKH+ regulars for a drink and/or light lunch beforehand, any time after 12.30pm, in The Marlborough Arms, 36 Torrington Place, London WC1E 7HJ. To find us, look out for a table where there’s a copy of Shane’s book “Machine Super Intelligence” displayed.

About the venue

Room 416 is on the fourth floor (via the lift near reception) in the main Birkbeck College building, in Torrington Square (which is a pedestrian-only square). Torrington Square is about 10 minutes walk from either Russell Square or Goodge St tube stations.

Opportunity to be a formal “responder”

If anyone would like to have the chance to be a designated “responder” to Shane at the meeting itself, please let me know. The idea is that a responder will get 2-5 minutes (depending on how much he/she wants to say – and depending on how much time is left in the meeting) to raise comments from the floor, after Shane has finished his presentation. If you have a small number of slides to show (3 at MAX), that would be fine too, so long as they’re relevant to the main discussion.

Of course, anyone in the audience will be welcome to make a comment, during the final 20-30 minutes of the allotted 2 hours (2pm-4pm). However, if I know in advance that you have prepared something to say, I’ll find a way to set aside time for you.

4 October 2009

The Leakproof Singularity and Simulation

Filed under: simulation, Singularity, uploading — David Wood @ 11:18 am

One talk on day one (yesterday) of the 2009 Singularity Summit made me sit bolt upright in my seat, very glad to be listening to it – my mind excitedly turning over important new ideas.

With my apologies to the other speakers – who mainly covered material I’d heard on other occasions, or who mixed a few interesting points among weaker material (or who, in quite a few cases, were poor practitioners of PowerPoint and the mechanics of public speaking) – I have no hesitation in naming David Chalmers, Professor of Philosophy and Director of the Centre for Consciousness at the Australian National University, as the star speaker of the day.

I see that my assessment is shared by New Atlantis assistant editor Ari N. Schulman, in his review of day one, “One day closer to the Singularity“:

far and away the best talk of the day was from David Chalmers. He cut right to the core of the salient issues in determining whether the Singularity will happen

You can get the gist of the talk from Ari’s write-up of it.  I don’t think the slides are available online (yet), but here’s a summary of some of the content.

First, the talk brought a philosopher’s clarity to analysing the core argument for the inevitability of the technological singularity, as originally expressed in 1965 by the British statistician I. J. Good:

“Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an ‘intelligence explosion,’ and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make.”

Over the course of several slides, Chalmers broke down the underlying argument, defining and examining concepts such as

  • “AI” (human-level intelligence),
  • “AI+” (greater than human-level intelligence),
  • and “AI++” (much greater than human-level intelligence)

Along the way, he looked at what it is about intelligence that might cause intelligence itself to grow (a precursor to it “exploding”).  He considered four mechanisms for extensibly improving intelligence:

  • “direct programming” (which he said was “really hard”)
  • “brain emulation” (“not extendible”)
  • “learning” (“still hard”)
  • “simulated evolution” (“where my money is”).

Evolution was how intelligence came about so far.  Evolution inside an improved, accelerated environment could be how intelligence goes far beyond its present capabilities.  In other words, a virtual reality (created and monitored by humans) could be where first AI+ and then AI++ takes place.

Not only is this the most plausible route to AI++, Chalmers argued, but it’s the safest route: a route by which the effects of the intelligence explosion can be controlled.  He introduced the concept of a “leakproof singularity”:

  • create AI in simulated worlds
  • no red pills (one of several references to the film “The Matrix”)
  • no external input
  • go slow

Being leakproof is essential to prevent the powerful super-intelligence created inside the simulation from breaking out and (most likely) wreaking havoc on our own world (as covered in the first talk of the day, “Shaping the Intelligence Explosion”, by Anna Salamon, Research Fellow at the Singularity Institute for Artificial Intelligence).  We need to be able to observe what is happening inside the simulation, but the simulated intelligences must not be able to discern our reactions to what they are doing.  Otherwise they could use their super-intelligence to manipulate us and persuade us (against our best interests) to let them out of the box.

To quote Chalmers,

“The key to controllable singularity is preventing information from leaking in”

Once super-intelligence has occurred within the simulation, what would we humans want to do about it?  Chalmers offered a range of choices, before selecting and defending “uploading” – we would want to insert enhanced versions of ourselves into this new universe.  Chalmers also reviewed the likelihood that the super-intelligences created could, one day, have sufficient ability to re-create those humans who had died before the singularity took place, but for whom sufficient records existed that would allow faithful reconstruction.

That’s powerful stuff (and there’s a lot more, which I’ve omitted, for now, for lack of time).  But as the talk proceeded, another set of powerful ideas constantly lurked in the background.  Our own universe may be exactly the kind of simulated “virtual-reality” creation that Chalmers was describing.

Further reading: For more online coverage of the idea of the leakproof singularity, see PopSci.com.  For a far-ranging science fiction exploration of similar ideas, I recommend Greg Egan’s book Permutation City.  See also David Chalmers’ paper “The Matrix as metaphysics“.

3 October 2009

Shaping the intelligence explosion

Filed under: Singularity — David Wood @ 2:38 pm

Here’s a quick summary of the central messages from Anna Salamon, Research Fellow at the Singularity Institute for Artificial Intelligence, in her talk “Shaping the intelligence explosion”.  She makes four key claims.  Note: these claims don’t depend on specific details of technology.

1. Intelligence can radically transform the world

Intelligence is like leverage.  Think of Archimedes moving the whole world, with a sufficiently large lever.

The smarter a given agent is, the more scope it has to find ways to achieve its goal.

Imagine intelligence as far beyond human-scale intelligence as human-scale intelligence is beyond that of a goldfish.  (“A goldfish at the opera” is a metaphor for minds that can’t understand certain things.)  How much more change could be achieved with that intelligence?

Quote from Michael Vassar: “Humans are the stupidest a system could be, and still be generally intelligent”.

2. An intelligence explosion may be sudden

Different processes have different characteristic timescales.  Many things can happen on timescales very much faster than humans are used to.

Human culture has already brought about a considerable acceleration of change, compared to biological evolution.

Silicon emulations of human brains could run much faster than the biological brain itself.  Adding extra hardware could multiply the effect again.  “Instant intelligence: just add hardware”.

The key question is: how fast could super-human AI arise?  It depends on the details of how brain emulation will work.  But we can’t rule out the possibility of a super-fast evolution of super-human AIs.

3. An uncontrolled intelligence explosion would kill us, and destroy practically everything we care about

Smart agencies can rearrange the environment to meet their goals.

Most rearrangements of the human environment would kill us.

Would super AIs want to keep humans around – e.g. as trading partners?  Alas, we are probably not the most useful trading partners that an AI could obtain by rearranging our parts!

Values can vary starkly across species.  To a dung beetle, dung isn’t yucky.

AIs are likely to want to scrap our “spaghetti code” of culture and create something much better in its place.

4. A controlled intelligence explosion could save us.  It’s difficult, but it’s worth the effort.

A well engineered AI could be permanently stable in its goals.

Be careful what you build an optimizer for – remember the parable of the sorcerer’s apprentice.

If we get the design right, the intelligence explosion could aid human goals instead of destroying them.

It’s a tall order!

