8 May 2011

Future technology: merger or trainwreck?

Filed under: AGI, computer science, futurist, Humanity Plus, Kurzweil, malware, Moore's Law, Singularity — David Wood @ 1:35 pm

Imagine.  You’ve been working for many decades, benefiting from advances in computing.  The near miracles of modern spreadsheets, Internet search engines, collaborative online encyclopaedias, pattern recognition systems, dynamic 3D maps, instant language translation tools, recommendation engines, immersive video communications, and so on, have been steadily making you smarter and increasing your effectiveness.  You  look forward to continuing to “merge” your native biological intelligence with the creations of technology.  But then … bang!

Suddenly, much faster than we expected, a new breed of artificial intelligence is bearing down on us, like a huge intercity train rushing forward at several hundred kilometres per hour.  Is this the kind of thing you can easily hop onto, and incorporate into your own evolution?  Care to stand in front of this train, sticking out your thumb to try to hitch a lift?

This image comes from a profound set of slides used by Jaan Tallinn, one of the programmers behind Kazaa and a founding engineer of Skype.  Jaan was speaking last month at the Humanity+ UK event which reviewed the film “Transcendent Man” – the film made by director Barry Ptolemy about the ideas and projects of serial inventor and radical futurist Ray Kurzweil.  You can find a video of Jaan’s slides on blip.tv, and videos (but with weaker audio) of talks by all five panelists on KoanPhilosopher’s YouTube channel.

Jaan was commenting on a view that was expressed again and again in the Kurzweil film – the view that humans and computers/robots will be able to merge into some kind of hybrid “post-human”:

This “merger” viewpoint has a lot of attractions:

  • It builds on the observation that we have long co-existed with the products of technology – such as clothing, jewellery, watches, spectacles, heart pacemakers, artificial hips, cochlear implants, and so on;
  • It provides a reassuring answer to the view that computers will one day be much smarter than (unmodified) humans, and that robots will be much stronger than (unmodified) humans.

But this kind of merger presupposes that the pace of improvement in AI algorithms will remain slow enough that we humans can remain in charge.  In short, it presupposes what people call a “soft take-off” for super-AI, rather than a sudden “hard take-off”.  In his presentation, Jaan offered three arguments in favour of a possible hard take-off.

The first argument is a counter to a counter.  The counter-argument, made by various critics of the concept of the singularity, is that Kurzweil’s views on the emergence of super-AI depend on the continuation of exponential curves of technological progress.  Since few people believe that these exponential curves really will continue indefinitely, the whole argument is suspect.  The counter to the counter is that the emergence of super-AI makes no assumption about the shape of the curve of progress.  It just depends upon technology eventually reaching a particular point – namely, the point where computers are better than humans at writing software.  Once that happens, all bets are off.
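The soft versus hard take-off distinction can be made concrete with a toy model (my own illustration, not from Jaan’s slides; the rates and the threshold are invented numbers): capability grows at a steady, human-driven pace until it crosses the point where the system is better than humans at improving its own software, after which the growth rate compounds on capability itself.

```python
# Toy model (illustrative only): soft vs hard take-off in AI capability.
# Below the self-improvement threshold, progress is human-driven and steady;
# above it, the system improves its own software, so growth compounds.

def simulate(threshold, human_rate=1.0, self_gain=0.5, years=30):
    """Return capability per year; units are arbitrary."""
    capability = 1.0
    history = []
    for _ in range(years):
        if capability < threshold:
            capability += human_rate          # steady, human-paced progress
        else:
            capability *= (1.0 + self_gain)   # recursive self-improvement
        history.append(capability)
    return history

soft = simulate(threshold=float("inf"))  # never crosses the threshold
hard = simulate(threshold=10.0)          # crosses it after ~9 years

print(f"After 30 years: soft take-off = {soft[-1]:.0f}, "
      f"hard take-off = {hard[-1]:.0f}")
```

The exact numbers are meaningless; the point is the qualitative shape – linear before the threshold, explosive after it – and that the shape of the curve *before* the threshold hardly matters.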

The second argument is that getting the right algorithm can make a tremendous difference.  Computer performance isn’t just dependent on improved hardware.  It can, equally, be critically dependent upon finding the right algorithms.  And sometimes the emergence of the right algorithm takes the world by surprise.  Here, Jaan gave the example of the unforeseen announcement in 1993 by mathematician Andrew Wiles of a proof of the centuries-old Fermat’s Last Theorem.  What Andrew Wiles did for the venerable problem of Fermat’s last theorem, another researcher might do for the even more venerable problem of superhuman AI.
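The claim that the right algorithm can dwarf hardware gains is easy to demonstrate. A small sketch (my own example, not Jaan’s): computing the same Fibonacci number with a naive algorithm and with a slightly smarter one differ in work done by a factor no realistic hardware improvement could close.

```python
# Illustration: algorithmic choice vs raw hardware.
# Counting the basic operations needed to compute the same Fibonacci
# number two ways shows a gap that faster hardware alone cannot close.

def fib_naive(n, counter):
    """Exponential-time recursion; counter[0] tracks calls made."""
    counter[0] += 1
    if n < 2:
        return n
    return fib_naive(n - 1, counter) + fib_naive(n - 2, counter)

def fib_iterative(n, counter):
    """Linear-time iteration over the same problem."""
    a, b = 0, 1
    for _ in range(n):
        counter[0] += 1
        a, b = b, a + b
    return a

naive_ops, iter_ops = [0], [0]
assert fib_naive(30, naive_ops) == fib_iterative(30, iter_ops)
print(f"naive: {naive_ops[0]:,} calls; iterative: {iter_ops[0]:,} steps")
```

For n=30 the ratio is roughly 90,000× – around sixteen Moore’s Law doublings, obtained overnight just by changing the algorithm.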

The third argument is that AI researchers are already sitting on what can be called a huge “hardware overhang”:

As Jaan states:

It’s important to note that with every year the AI algorithm remains unsolved, the hardware marches to the beat of Moore’s Law – creating a massive hardware overhang.  The first AI is likely to find itself running on a computer that’s several orders of magnitude faster than needed for human level intelligence.  Not to mention that it will find an Internet worth of computers to take over and retool for its purpose.
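Jaan’s overhang point is, at bottom, compound-interest arithmetic. A back-of-envelope sketch (my own arithmetic; the 18-month doubling period is just the common rule-of-thumb reading of Moore’s Law):

```python
# Back-of-envelope: how large a "hardware overhang" accumulates while
# the AI algorithm remains unsolved, assuming computing power doubles
# every 18 months (a common rule-of-thumb reading of Moore's Law).

DOUBLING_PERIOD_YEARS = 1.5

def overhang_factor(years_late):
    """How much more hardware exists than when it first sufficed."""
    return 2 ** (years_late / DOUBLING_PERIOD_YEARS)

for years in (5, 10, 15, 20):
    print(f"Algorithm arrives {years:2d} years late: "
          f"hardware is ~{overhang_factor(years):,.0f}x more powerful")
```

Fifteen to twenty years of waiting already yields the “several orders of magnitude” in the quote above.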

Imagine.  The worst set of malware so far created – exploiting a combination of security vulnerabilities, other software defects, and social engineering.  How quickly it can spread around the Internet.  Now imagine that the author of that malware is 100 times smarter.  Human users will find themselves almost unable to resist clicking on tempting links and unthinkingly providing passwords to screens that look identical to the ones they were half-expecting to see.  Vast computing resources will quickly become available to the rapidly evolving, intensely self-improving algorithms.  It will be the mother of all botnets, ruthlessly pursuing whatever are the (probably unforeseen) logical conclusions of the software that gave it birth.

OK, so the risk of hard take-off is very difficult to estimate.  At the H+UK meeting, the panelists all expressed significant uncertainty about their predictions for the future.  But that’s not a reason for inaction.  If we thought the risk of super-AI hard take-off in the next 20 years was only 5%, that would still merit deep thought from us.  (Would you get on an airplane if you were told the risk of it plummeting out of the sky was 5%?)

I’ll end with another potential comparison, which I’ve written about before.  It’s another example about underestimating the effects of breakthrough new technology.

On 1st March 1954, the US military performed their first test of a dry fuel hydrogen bomb, at the Bikini Atoll in the Marshall Islands.  The explosive yield was expected to be from 4 to 6 Megatons.  But when the device was exploded, the yield was 15 Megatons, two and a half times the expected maximum.  As the Wikipedia article on this test explosion explains:

The cause of the high yield was a laboratory error made by designers of the device at Los Alamos National Laboratory.  They considered only the lithium-6 isotope in the lithium deuteride secondary to be reactive; the lithium-7 isotope, accounting for 60% of the lithium content, was assumed to be inert…

Contrary to expectations, when the lithium-7 isotope is bombarded with high-energy neutrons, it absorbs a neutron then decomposes to form an alpha particle, another neutron, and a tritium nucleus.  This means that much more tritium was produced than expected, and the extra tritium in fusion with deuterium (as well as the extra neutron from lithium-7 decomposition) produced many more neutrons than expected, causing far more fissioning of the uranium tamper, thus increasing yield.

This resultant extra fuel (both lithium-6 and lithium-7) contributed greatly to the fusion reactions and neutron production and in this manner greatly increased the device’s explosive output.

Sadly, this calculation error resulted in much more radioactive fallout than anticipated.  Many of the crew in a nearby Japanese fishing boat, the Lucky Dragon No. 5, became ill in the wake of direct contact with the fallout.  One of the crew subsequently died from the illness – the first human casualty from thermonuclear weapons.
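There is a suggestive piece of arithmetic hiding in the Wikipedia excerpt above (my own back-of-envelope reading, not a rigorous weapons-physics calculation): the designers counted only the lithium-6 fraction – roughly 40% of the lithium – as fuel, but in practice essentially all the lithium contributed.  Scaling the usable fuel by 1/0.4 gives 2.5, which happens to match the ratio of the actual 15 Mt yield to the expected 6 Mt maximum.

```python
# Back-of-envelope check on the Castle Bravo yield surprise.
# Rough illustration only -- real weapon yield does not scale this simply.

li7_fraction = 0.60              # lithium-7, assumed inert by the designers
li6_fraction = 1 - li7_fraction  # lithium-6, the only part counted as fuel

expected_max_mt = 6.0            # upper end of the predicted 4-6 Mt range
actual_mt = 15.0                 # measured yield

# If nearly all the lithium turned out to breed tritium, the usable fuel
# grows by roughly 1 / (fraction originally counted as fuel):
fuel_multiplier = 1 / li6_fraction
observed_ratio = actual_mt / expected_max_mt

print(f"naive fuel multiplier: {fuel_multiplier:.2f}")
print(f"actual / expected-max yield: {observed_ratio:.2f}")
```

The agreement may be partly coincidental, but it illustrates how one wrong assumption about a single input can multiply straight through to the output.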

Suppose the error in calculation had been significantly worse – perhaps by a factor of thousands rather than by a factor of 2.5.  This might seem unlikely, but when we deal with powerful unknowns, we cannot rule out powerful unforeseen consequences.  For example, imagine if extreme human activity somehow interfered with the incompletely understood mechanisms governing supervolcanoes – such as the one that exploded around 73,000 years ago at Lake Toba (Sumatra, Indonesia) and which is thought to have reduced the worldwide human population at the time to perhaps as few as several thousand people.

The more quickly things change, the harder it is to foresee and monitor all the consequences.  The more powerful our technology becomes, the more drastic the unintended consequences become.  Merger or trainwreck?  I believe the outcome is still wide open.

19 March 2011

A singularly fine singularitarian panel?

Filed under: futurist, Humanity Plus, Kurzweil, Singularity — David Wood @ 12:37 pm

In a moment, I’ll get to the topic of a panel discussion on the Singularity – a panel I’ve dubbed (for reasons which should become clear) “Post Transcendent Man”. It’s a great bunch of speakers, and I’m expecting an intellectual and emotional mindfest.  But first, some background.

In the relatively near future, I expect increasing numbers of people to navigate the sea change described recently by writer Philippe Verdoux in his article Transhumanists coming out of the closet:

It wasn’t that long ago that listing transhumanism, human enhancement, the Singularity, technology-driven evolution, existential risks, and so on, as interests on one’s CV might result in a bit of embarrassment.

Over just the past decade and a half, though, there seems to have been a sea change in how these issues are perceived by philosophers and others: many now see them as legitimate subjects of research; they have, indeed, acquired a kind of academic respectability that they didn’t previously possess.

There are no doubt many factors behind this shift. For one, it seems to be increasingly apparent, in 2011, that technology and biology are coming together to form a new kind of cybernetic unity, and furthermore that such technologies can be used to positively enhance (rather than merely alter) features of our minds and bodies.

In other words, the claim that humans can “transcend” (a word I don’t much like, by the way) our biological limitations through the use of enhancement technologies seems to be increasingly plausible – that is, empirically speaking.

Thus, it seems to be a truism about our contemporary world that technology will, in the relatively near future, enable us to alter ourselves in rather significant ways. This is one reason, I believe, that more philosophers are taking transhumanism seriously…

On a personal note, when I first discovered transhumanism, I was extremely skeptical about its claims (which, by the way, I think every good scientific thinker should be). I take it that transhumanism makes two claims in particular, the first “descriptive” and the second “normative”: (i) that future technologies will make it possible for us to radically transform the human organism, potentially enabling us to create a new species of technologized “posthumans”; and (ii) that such a future scenario is preferable to all other possible scenarios. In a phrase: we not only can but ought to pursue a future marked by posthumanity…

One factor that leads people to pay more serious attention to this bundle of ideas – transhumanism, human enhancement, the Singularity, technology-driven evolution, existential risks, and so on – is the increasing coverage of these ideas in thoughtful articles in the mainstream media.  In turn, many of these articles have been triggered by the film Transcendent Man by director Barry Ptolemy, featuring the groundbreaking but controversial ideas and projects of inventor and futurist Ray Kurzweil.  Here’s a trailer for the film:

The film has received interesting commentary in, among other places:

I had mixed views when watching the movie myself:

  • On the one hand, it contains a large number of profound sound bites – statements made by many of the talking heads on screen; any of these sound bites could, potentially, change someone’s life, if they reflect on the implications;
  • The film also covers many details of Kurzweil’s own biography, with archive footage of him at different stages of his career – this filled in many gaps in my own understanding, and gave me renewed respect for what he has accomplished as a professional;
  • On the other hand, although there are plenty of critical comments among the sound bites – comments highlighting potential problems or issues with Kurzweil’s ideas – the film never really lets the debate fly;
  • I found myself thinking – yes, that’s an interesting and important point, now let’s explore this further – but then the movie switched to a different frame.

The movie has its official UK premiere at the London Science Museum on Tuesday 5th April.  Kurzweil himself will be in attendance, to answer questions raised by the audience.  The last time I checked, tickets were sold out.

Post Transcendent Man

To drill down more deeply into the potentially radical implications of Kurzweil’s ideas and projects, the UK chapter of Humanity+ has arranged an event at Birkbeck College (WC1E 7HX), Torrington Square, in Central London on the afternoon (2pm-4.15pm) of Saturday 9th April.  We’ll be in Malet Street lecture room B34 – which seats a capacity audience of 177 people.  For more details about logistics, registration, and so on, see the official event website, or the associated Facebook page.

The event is privileged to feature an outstanding set of speakers and panellists who represent a range of viewpoints about the Singularity, transhumanism, and human transcendence.  In alphabetical order by first name:

Dr Anders Sandberg is a James Martin research fellow at the Future of Humanity Institute at Oxford University. As a part of the Oxford Martin School he is involved in interdisciplinary research on cognitive enhancement, neurotechnology, global catastrophic risks, emerging technologies and applied rationality. He has been writing about and debating transhumanism, future studies, neuroethics and related questions for a long time. He is also an associate of the Oxford Centre for Neuroethics and the Uehiro Centre for Practical Ethics, as well as co-founder of the Swedish think tank Eudoxa.

Jaan Tallinn is one of the programmers behind Kazaa and a founding engineer of Skype. He is also a partner in Ambient Sound Investments as well as a member of the Estonian President’s Academic Advisory Board. He describes himself as singularitarian/hacker/investor/physicist (in that order). In recent years Jaan has found himself closely following and occasionally supporting the work that SIAI and FHI are doing. He agrees with Kurzweil that the topic of the Singularity can be extremely counterintuitive to the general public, and has tried to address this problem in a few public presentations at various venues.

Nic Brisbourne is a partner at venture capital fund DFJ Esprit and blogger on technology and startup issues at The Equity Kicker. As such he’s interested in when technology and science projects become products and businesses. He has a personal interest in Kurzweil’s ideas, and in longevity in particular; he says he’s keen to cross the gap from personal to professional and find exciting startups generating products in this area, although he thinks that the bulk of the commercialisation opportunities are still a year or two out.

Paul Graham Raven is a writer, literary critic and bootstrap big-picture futurist; he prods regularly at the fuzzy boundary of the unevenly-distributed future at futurismic.com. He is Editor-in-Chief and Publisher of The Dreaded Press, a rock music reviews webzine, and Publicist and PR officer for PS Publishing – perhaps the UK’s foremost boutique genre publisher. He says he’s also a freelance web-dev to the publishing industry, a cack-handed fuzz-rock guitarist, and in need of a proper haircut.

Russell Buckley is a leading practitioner, speaker and thinker about mobile and mobile marketing. MobHappy, his blog about mobile technology, is one of the most established focusing on this area. He is also a previous Global Chairman of the Mobile Marketing Association, a founder of Mobile Monday in Germany and holds numerous non-executive positions in mobile technology companies. Russell learned about the mobile advertising startup AdMob soon after its launch, and joined as its first employee in 2006, with the remit of launching AdMob into the EMEA market. Four years later, AdMob was sold to Google for $750m. By night though, Russell is fascinated by the socio-political implications of technology and recently graduated from the Executive Program at the Singularity University, founded by Ray Kurzweil and Peter Diamandis to “educate and inspire leaders who strive to understand and facilitate the development of exponentially advancing technologies in order to address humanity’s grand challenges”.

The discussion continues

The event will start, at 2pm, with the panellists introducing themselves, and their core thinking about the topics under discussion.  As chair, I’ll ask a few questions, and then we’ll open up for questions and comments from the audience.  I’ll be particularly interested to explore:

  • How people see the ideas of accelerating technology making a difference in their own lives – both personally and professionally.  Three of us on the stage were on founding teams of companies that made sizeable waves in the technology world (Jaan Tallinn, Skype; Russell Buckley, AdMob; myself, Symbian).  Where do we see rapidly evolving technology (as often covered by Kurzweil) taking us next?
  • People’s own experiences with bodies such as the Singularity University, the Singularity Institute, and the Future of Humanity Institute at Oxford University.  Are these bodies just talking shops?  Are they grounded in reality?  Are they making a substantial positive difference in how humanity responds to the issues and challenges of technology?
  • Views as to the best way to communicate ideas like the Singularity – favourite films, science fiction, music, and other media.  How does the movie “Transcendent Man” compare?
  • Reservations and worries (if any) about the Singularity movement and the ways in which Kurzweil expresses his ideas.  Are the parallels with apocalyptic religions too close for comfort?
  • Individuals’ hopes and aspirations for the future of technology.  What role do they personally envision playing in the years ahead?  And what timescales do they see as credible?
  • Calls to action – what (if anything) should members of the audience change about their lives, in the light of analysing technology trends?

Which questions do you think are the most important to raise?

Request for help

If you think this is an important event, I have a couple of suggestions for you:

The discussion continues (more)

Dean Bubley, founder of Disruptive Analysis and a colleague of mine from the mobile telecomms industry, has organised the “Inaugural UK Humanity+ Evening Salon” on Wednesday April 13th, from 7pm to 10pm.  Dean describes it as follows:

Interested in an evening discussing the future of the human species & society? Aided by a drink or two?

This is the first “salon” event for the London branch of “Humanity Plus”, or H+ for short. It’s going to be an informal evening event involving a stimulating guest speaker, Q&A and lively discussion, all aided by a couple of drinks. It fits alongside UKH+’s larger Saturday afternoon lecture sessions, and occasional all-day major conferences…

It will be held in central London, in a venue TBC closer to the time. Please contact Dean Bubley (facebook.com/bubley), the convener & moderator, for more details.

For more details, see the corresponding Facebook page, and RSVP there so that Dean has an idea of the likely numbers.

15 March 2010

Imagining a world without money

Filed under: Economics, futurist, motivation, politics, Singularity, vision, Zeitgeist — David Wood @ 11:48 am

On Saturday, I attended “London Z Day 2010” – described as

presentations about futurism and technology, the singularity and the current economic landscape, activism and how to get involved…

Around 300 people were present in the Oliver Thompson Lecture Theatre of London’s City University.  That’s testimony to good work by the organisers – the UK chapter of the worldwide “Zeitgeist Movement”.

I liked a lot of what I heard – a vision that advocates greater adoption of:

  • Automation: “Using technology to automate repetitive and tedious tasks leads to efficiency and productivity. It is also socially responsible as people are freed from labor that undermines their intelligence”
  • Artificial intelligence: “machines can take into account more information”
  • The scientific method: “a proven method that has stood the test of time and leads to discovery. Scientific method involves testing, getting feedback from natural world and physical law, evaluation of results, sharing data openly and requirement to replicate the test results”
  • Technological unification: “Monitoring planetary resources is needed in order to create an efficient system, and thus technology should be shared globally”.

I also liked the sense of urgency and activism, to move swiftly from the current unsustainable social and economic frameworks into a more rational framework.  Frequent references to the work of radical futurists like Ray Kurzweil emphasised the plausibility of rapid change, driven by accelerating technological innovation.  That makes good sense.

I was less convinced by other parts of the Zeitgeist worldview – in particular, its strong “no money” and “no property” messages.

Could a society operate without money?  Speakers from the floor seemed to think that, in a rationally organised society, everyone would be able to freely access all the goods and services they need, rather than having to pay for them.  The earth has plenty of resources, and we just need to look after them in a sensible way.  Money has lots of drawbacks, so we should do without it – so the argument went.

One of the arguments made by a speaker, against a monetary basis of society, was the analysis from the recent book “The Spirit Level: Why More Equal Societies Almost Always Do Better” by Richard Wilkinson and Kate Pickett.  Here’s an excerpt of a review of this book from the Guardian:

We are rich enough. Economic growth has done as much as it can to improve material conditions in the developed countries, and in some cases appears to be damaging health. If Britain were instead to concentrate on making its citizens’ incomes as equal as those of people in Japan and Scandinavia, we could each have seven extra weeks’ holiday a year, we would be thinner, we would each live a year or so longer, and we’d trust each other more.

Epidemiologists Richard Wilkinson and Kate Pickett don’t soft-soap their message. It is brave to write a book arguing that economies should stop growing when millions of jobs are being lost, though they may be pushing at an open door in public consciousness. We know there is something wrong, and this book goes a long way towards explaining what and why.

The authors point out that the life-diminishing results of valuing growth above equality in rich societies can be seen all around us. Inequality causes shorter, unhealthier and unhappier lives; it increases the rate of teenage pregnancy, violence, obesity, imprisonment and addiction; it destroys relationships between individuals born in the same society but into different classes; and its function as a driver of consumption depletes the planet’s resources.

Wilkinson, a public health researcher of 30 years’ standing, has written numerous books and articles on the physical and mental effects of social differentiation. He and Pickett have compiled information from around 200 different sets of data, using reputable sources such as the United Nations, the World Bank, the World Health Organisation and the US Census, to form a bank of evidence against inequality that is impossible to deny.

They use the information to create a series of scatter-graphs whose patterns look nearly identical, yet which document the prevalence of a vast range of social ills. On almost every index of quality of life, or wellness, or deprivation, there is a gradient showing a strong correlation between a country’s level of economic inequality and its social outcomes. Almost always, Japan and the Scandinavian countries are at the favourable “low” end, and almost always, the UK, the US and Portugal are at the unfavourable “high” end, with Canada, Australasia and continental European countries in between.

This has nothing to do with total wealth or even the average per-capita income. America is one of the world’s richest nations, with among the highest figures for income per person, but has the lowest longevity of the developed nations, and a level of violence – murder, in particular – that is off the scale. Of all crimes, those involving violence are most closely related to high levels of inequality – within a country, within states and even within cities. For some, mainly young, men with no economic or educational route to achieving the high status and earnings required for full citizenship, the experience of daily life at the bottom of a steep social hierarchy is enraging…
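The scatter-graph method the reviewers describe boils down to computing a correlation, across countries, between an inequality index and a social-outcome index.  A minimal sketch of that calculation, using invented numbers purely for illustration (these are NOT Wilkinson and Pickett’s data):

```python
# Minimal sketch of the book's scatter-graph method: the Pearson
# correlation between an inequality measure and a social-outcome index.
# The data points below are invented for illustration only.
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical (inequality index, social-problems index) per country:
inequality      = [3.0, 3.2, 4.1, 5.0, 5.5, 6.8, 7.4]
social_problems = [1.1, 1.0, 2.2, 2.9, 3.4, 4.6, 5.1]

r = pearson(inequality, social_problems)
print(f"Pearson r = {r:.2f}")  # strongly positive for this invented data
```

A value of r near +1 is the numerical counterpart of the “nearly identical” scatter-graph patterns the review mentions; the book’s claim, of course, rests on its 200 real data sets, not on toy numbers like these.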

The anxiety in this book about our current economic system was reflected in anxiety expressed by all the Zeitgeist Movement speakers.  However, the Zeitgeist speakers drew a more radical conclusion.  It’s not just that economic inequalities have lots of bad side effects.  They say, it’s money-based economics itself that causes these problems.  And that’s a hard conclusion to swallow.

They don’t argue for reforming the existing economic system.  Rather, they argue for replacing it completely.  Money itself, they say, is the root problem.

The same dichotomy arose time and again during the day.  Speakers highlighted many problems with the way the world currently operates.  But instead of advocating incremental reforms – say, for greater equality, or for oversight of the market – they advocated a more radical transformation: no money, and no property.  What’s more, the audience seemed to lap it all up.

Of course, money has sprung up in countless societies throughout history, as something that allows for a more efficient exchange of resources than simple bartering.  Money provides a handy intermediate currency, enabling more complex transactions of goods and services.

In answer, the Zeitgeist speakers argue that use of technology and artificial intelligence would allow for more sensible planning of these goods and services.  However, horrible thoughts come to mind of all the failures of previous centrally controlled economies, such as in Soviet times.  In answer again, the Zeitgeist speakers seem to argue that better artificial intelligence will, this time, make a big difference.  Personally, I’m all in favour of gradually increased application of improved automatic decision systems.  But I remain deeply unconvinced about removing money:

  1. Consumer desires can be very varied.  Some people particularly value musical instruments, others foreign travel, others sports equipment, others specialist medical treatment, and so on.  What’s more, the choices are changing all the time.  Money is a very useful means for people to make their own, individual choices;
  2. A speaker from the floor suggested that everyone would have access to all the medical treatment they needed.  That strikes me as naive: the amount of medical treatment potentially available (and potentially “needed” in different cases) is unbounded;
  3. Money-based systems enable the creation of loans, in which banks lend out more money than they have in their assets; this has downsides but also has been an important spring to growth and development;
  4. What’s more, without the incentive of being able to earn more money, it’s likely that a great deal of technological progress would slow down; many people would cease to work in such a focused and determined way to improve the products their company sells.

For example, the Kurzweil curves showing the projected future improvements in technology – such as increased semiconductor density and computational capacity – would very likely screech to a halt, or dramatically slow down, if money were removed as an incentive.

So whilst the criticism offered by the Zeitgeist movement is strong, the positive solution they advocate lacks many details.

As Alan Feuer put it, in his New York Times article reviewing last year’s ZDay, “They’ve Seen the Future and Dislike the Present”:

The evening, which began at 7 with a two-hour critique of monetary economics, became by midnight a utopian presentation of a money-free and computer-driven vision of the future, a wholesale reimagination of civilization, as if Karl Marx and Carl Sagan had hired John Lennon from his “Imagine” days to do no less than redesign the underlying structures of planetary life.

Idealism can be a powerful force for positive social change, but can be deeply counterproductive if it’s based on a misunderstanding of what’s possible.  I’ll need a lot more convincing about the details of the zero-money “resource based economy” advocated by Zeitgeist before I could give it any significant support.

I’m a big fan of debating ideas about the future – especially radical and counter-intuitive ideas.  There’s no doubt that, if we are to survive, the future will need to be significantly different from the past.  However, I believe we need to beware the kind of certainty that some of the Zeitgeist speakers showed.  The Humanity+ UK 2010 conference, to be held in London on 24th April, will be an opportunity to review many different ideas about the best actions needed to create a social environment more conducive to enabling the full human potential.

Footnote: an official 86-page PDF “THE ZEITGEIST MOVEMENT – OBSERVATIONS AND RESPONSES: Activist Orientation Guide” is available online.

The rapid growth of the Zeitgeist Movement has clearly benefited from popular response to two movies, “Zeitgeist, the Movie” (released in 2007) and “Zeitgeist: Addendum” (released in 2008).  Both these movies have gone viral.  There’s a great deal in each of these movies that makes me personally uncomfortable.  However, one lesson is simply that well made movies can do a great deal to spread a message.

For an interesting online criticism of some of the Zeitgeist Movement’s ideas, see “Zeitgeist Addendum: The Review” by Stefan Molyneux from Freedomain Radio.

31 January 2010

In praise of hybrid AI

Filed under: AGI, brain simulation, futurist, IA, Singularity, UKH+, uploading — David Wood @ 1:28 am

In his presentation last week at the UKH+ meeting “The Friendly AI Problem: how can we ensure that superintelligent AI doesn’t terminate us?”, Roko Mijic referred to the plot of the classic 1956 science fiction film “Forbidden Planet”.

The film presents a mystery about events at a planet, Altair IV, situated 16 light years from Earth:

  • What force had destroyed nearly every member of a previous spacecraft visiting that planet?
  • And what force had caused the Krell – the original inhabitants of Altair IV – to be killed overnight, whilst at the peak of their technological powers?

A 1950s film might be expected to point a finger of blame at nuclear weapons, or other weapons of mass destruction.  However, the problem turned out to be more subtle.  The Krell had created a machine that magnified the power of their own thinking, and acted on that thinking.  So the Krell all became even more intelligent and more effective than before.  You may wonder, what’s the problem with that?

A 2002 Steven B. Harris article in Skeptic magazine, “The Return of the Krell Machine: Nanotechnology, the Singularity, and the Empty Planet Syndrome”, takes up the explanation, quoting from the film.  The Krell had created:

a big machine, 8000 cubic miles of klystron relays, enough power for a whole population of creative geniuses, operated by remote control – operated by the electromagnetic impulses of individual Krell brains… In return, that machine would instantaneously project solid matter to any point on the planet. In any shape or color they might imagine. For any purpose…! Creation by pure thought!

But … the Krell forgot one deadly danger – their own subconscious hate and lust for destruction!

And so, those mindless beasts of the subconscious had access to a machine that could never be shut down! The secret devil of every soul on the planet, all set free at once, to loot and maim! And take revenge… and kill!

Researchers at the Singularity Institute for Artificial Intelligence (SIAI) – including Roko – give a lot of thought to the general issue of unintended consequences of amplifying human intelligence.  Here are two ways in which this amplification could go disastrously wrong:

  1. As in the Forbidden Planet scenario, this amplification could unexpectedly magnify feelings of ill-will and negativity – feelings which humans sometimes manage to suppress, but which can still exert strong influence from time to time;
  2. The amplification could magnify principles that generally work well in the usual context of human thought, but which can have bad consequences when taken to extremes.

As an example of the second kind, consider the general principle that a free market economy of individuals and companies pursuing enlightened self-interest frequently produces goods that improve overall quality of life (in addition to generating income and profits).  However, magnifying this principle is likely to result in occasional disastrous economic crashes.  A system of computers that were programmed to maximise income and profits for their owners could, therefore, end up destroying the economy.  (This example is taken from the book “Beyond AI: Creating the Conscience of the Machine” by J. Storrs Hall.  See here for my comments on other ideas from that book.)
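The danger of magnifying a single principle can be made concrete with a toy optimiser.  This is only my own sketch, not something from Storrs Hall’s book, and all the names and numbers are invented for illustration:

```python
def choose_action(actions, risk_limit=None):
    """Pick the highest-profit action.

    Each action is a (name, profit, systemic_risk) tuple.  With
    risk_limit=None the optimiser magnifies the profit principle to
    its extreme; with a limit, a wider value acts as a safeguard.
    """
    if risk_limit is not None:
        actions = [a for a in actions if a[2] <= risk_limit]
    return max(actions, key=lambda a: a[1])

actions = [
    ("steady trading",    10, 0.10),
    ("moderate leverage", 30, 0.40),
    ("extreme leverage",  90, 0.99),  # profitable right up until the crash
]

print(choose_action(actions)[0])                  # -> extreme leverage
print(choose_action(actions, risk_limit=0.5)[0])  # -> moderate leverage
```

The unconstrained optimiser is the magnified principle; the risk limit stands in for the human safeguards whose removal the post goes on to discuss.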

Another example of the second kind: a young, fast-rising leader within an organisation may be given more and more responsibility, on account of his or her brilliance, only for that brilliance subsequently to push the organisation towards failure if the general “corporate wisdom” is increasingly neglected.  Likewise, there is the risk of a new supercomputer impressing human observers (politicians, scientists, and philosophers alike, amongst others) by the brilliance of its initial recommendations for changes in the structure of human society.  But if operating safeguards are removed (or disabled – perhaps at the instigation of the supercomputer itself) we could find that the machine’s apparent brilliance results in disastrously bad decisions in unforeseen circumstances.  (Hmm, I can imagine various writers calling for the “deregulation of the supercomputer”, in order to increase the income and profit it generates – similar to the way that many people nowadays are still resisting any regulation of the global financial system.)

That’s an argument for being very careful to avoid abdicating human responsibility for the oversight and operation of computers.  Even if we think we have programmed these systems to observe and apply human values, we can’t be sure of the consequences when these systems gain more and more power.

However, as our computer systems increase in speed and sophistication, it’s likely to prove harder and harder for comparatively slow-brained humans to continue meaningfully cross-checking and monitoring the arguments raised by the computer systems in favour of specific actions.  It’s akin to humans trying to teach apes calculus, in order to gain the apes’ approval for how much thrust to apply in a rocket targeting a rapidly approaching, earth-threatening meteorite.  The computers may well decide that there’s no time to teach us humans the deeply complex theory that justifies whatever urgent decision they want to take.

And that’s a statement of the deep difficulty facing any “Friendly AI” program.

There are, roughly speaking, five possible ways people can react to this kind of argument.

The first response is denial – people say that there’s no way that computers will reach the level of general human intelligence within the foreseeable future.  In other words, this whole discussion is seen as being a fantasy.  However, it comes down to a question of probability.  Suppose you’re told that there’s a 10% chance that the airplane you’re about to board will explode high in the sky, with you in it.  10% isn’t a high probability, but since the outcome is so drastic, you would probably decide this is a risk you need to avoid.  Even if there’s only a 1% chance of the emergence of computers with human-level intelligence in (say) the next 20 years, it’s something that deserves serious further analysis.
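The airplane analogy is really an expected-value calculation.  A minimal sketch, with numbers invented purely for illustration:

```python
def expected_loss(probability, loss):
    """Expected cost of a risk: probability times magnitude of loss."""
    return probability * loss

# A modest probability attached to a drastic outcome can dominate a
# high probability attached to a mild one:
routine_bug    = expected_loss(0.50, 100)        # common, cheap annoyance
plane_explodes = expected_loss(0.10, 1_000_000)  # 10% chance, drastic loss

# The rare event carries roughly 2000x the expected cost of the common one,
# which is why even a low-probability catastrophe deserves serious analysis.
print(plane_explodes / routine_bug)
```

The same arithmetic motivates taking even a 1% chance of human-level AI seriously: the loss term, not the probability term, dominates.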

The second response is to seek to stop all research into AI, by appeal to a general “precautionary principle” or similar.  This response is driven by fear.  However, any such ban would need to apply worldwide, and would surely be difficult to police.  It’s too hard to draw the boundary between “safe computer science” and “potentially unsafe computer science” (the latter being research that could increase the probability of the emergence of computers with human-level intelligence).

The third response is to try harder to design the right “human values” into advanced computer systems.  However, as Roko argued in his presentation, there is enormous scope for debating what these right values are.  After all, society has been arguing over human values since the beginning of recorded history.  Existing moral codes probably all have greater or lesser degrees of internal tension or contradiction.  In this context, the idea of “Coherent Extrapolated Volition” has been proposed:

Our coherent extrapolated volition is our choices and the actions we would collectively take if we knew more, thought faster, were more the people we wished we were, and had grown up closer together.

As noted in the Wikipedia article on Friendly Artificial Intelligence,

Eliezer Yudkowsky believes a Friendly AI should initially seek to determine the coherent extrapolated volition of humanity, with which it can then alter its goals accordingly. Many other researchers believe, however, that the collective will of humanity will not converge to a single coherent set of goals even if “we knew more, thought faster, were more the people we wished we were, and had grown up closer together.”

A fourth response is to adopt emulation rather than design as the key principle for obtaining computers with human-level intelligence.  This involves the idea of “whole brain emulation” (WBE), with a low-level copy of a human brain.  The idea is sometimes also called “uploads” since the consciousness of the human brain may end up being uploaded onto the silicon emulation.

Oxford philosopher Anders Sandberg reports on his blog how a group of Singularity researchers reached a joint conclusion, at a workshop in October following the Singularity Summit, that WBE was a safer route to follow than designing AGI (Artificial General Intelligence):

During the workshop afterwards we discussed a wide range of topics. Some of the major issues were: what are the limiting factors of intelligence explosions? What are the factual grounds for disagreeing about whether the singularity may be local (self-improving AI program in a cellar) or global (self-improving global economy)? Will uploads or AGI come first? Can we do anything to influence this?

One surprising discovery was that we largely agreed that a singularity due to emulated people… has a better chance given current knowledge than AGI of being human-friendly. After all, it is based on emulated humans and is likely to be a broad institutional and economic transition. So until we think we have a perfect friendliness theory we should support WBE – because we could not reach any useful consensus on whether AGI or WBE would come first. WBE has a somewhat measurable timescale, while AGI might crop up at any time. There are feedbacks between them, making it likely that if both happens it will be closely together, but no drivers seem to be strong enough to really push one further into the future. This means that we ought to push for WBE, but work hard on friendly AGI just in case…

However, it seems to me that the above “Forbidden Planet” argument identifies a worry with this kind of approach.  Even an apparently mild and deeply humane person might be playing host to “secret devils” – “their own subconscious hate and lust for destruction”.  Once the emulated brain starts running on more powerful hardware, goodness knows what these “secret devils” might do.

In view of the drawbacks of each of these four responses, I end by suggesting a fifth.  Rather than pursuing an artificial intelligence which would run separately from human intelligence, we should explore the creation of hybrid intelligence.  Such a system involves making humans smarter at the same time as the computer systems become smarter.  The primary source for this increased human smartness is closer links with the ever-improving computer systems.

In other words, rather than just talking about AI – Artificial Intelligence – we should be pursuing IA – Intelligence Augmentation.

For a fascinating hint about the benefits of hybrid AI, consider the following extract from a recent article by former world chess champion Garry Kasparov:

In chess, as in so many things, what computers are good at is where humans are weak, and vice versa. This gave me an idea for an experiment. What if instead of human versus machine we played as partners? My brainchild saw the light of day in a match in 1998 in León, Spain, and we called it “Advanced Chess.” Each player had a PC at hand running the chess software of his choice during the game. The idea was to create the highest level of chess ever played, a synthesis of the best of man and machine.

Although I had prepared for the unusual format, my match against the Bulgarian Veselin Topalov, until recently the world’s number one ranked player, was full of strange sensations. Having a computer program available during play was as disturbing as it was exciting. And being able to access a database of a few million games meant that we didn’t have to strain our memories nearly as much in the opening, whose possibilities have been thoroughly catalogued over the years. But since we both had equal access to the same database, the advantage still came down to creating a new idea at some point…

Even more notable was how the advanced chess experiment continued. In 2005, the online chess-playing site Playchess.com hosted what it called a “freestyle” chess tournament in which anyone could compete in teams with other players or computers. Normally, “anti-cheating” algorithms are employed by online sites to prevent, or at least discourage, players from cheating with computer assistance. (I wonder if these detection algorithms, which employ diagnostic analysis of moves and calculate probabilities, are any less “intelligent” than the playing programs they detect.)

Lured by the substantial prize money, several groups of strong grandmasters working with several computers at the same time entered the competition. At first, the results seemed predictable. The teams of human plus machine dominated even the strongest computers. The chess machine Hydra, which is a chess-specific supercomputer like Deep Blue, was no match for a strong human player using a relatively weak laptop. Human strategic guidance combined with the tactical acuity of a computer was overwhelming.

The surprise came at the conclusion of the event. The winner was revealed to be not a grandmaster with a state-of-the-art PC but a pair of amateur American chess players using three computers at the same time. Their skill at manipulating and “coaching” their computers to look very deeply into positions effectively counteracted the superior chess understanding of their grandmaster opponents and the greater computational power of other participants. Weak human + machine + better process was superior to a strong computer alone and, more remarkably, superior to a strong human + machine + inferior process.

The terminology “Hybrid Intelligence” was used in a recent presentation at the University of Washington by Google’s VP of Research & Special Initiatives, Alfred Z. Spector.  My thanks to John Pagonis for sending me a link to a blog post by Greg Linden which in turn provided commentary on Al Spector’s talk:

What was unusual about Al’s talk was his focus on cooperation between computers and humans to allow both to solve harder problems than they might be able to otherwise.

Starting at 8:30 in the talk, Al describes this as a “virtuous cycle” of improvement using people’s interactions with an application, allowing optimizations and features like learning to rank, personalization, and recommendations that might not be possible otherwise.

Later, around 33:20, he elaborates, saying we need “hybrid, not artificial, intelligence.” Al explains, “It sure seems a lot easier … when computers aren’t trying to replace people but to help us in what we do. Seems like an easier problem …. [to] extend the capabilities of people.”

Al goes on to say the most progress on very challenging problems (e.g. image recognition, voice-to-text, personalized education) will come from combining several independent, massive data sets with a feedback loop from people interacting with the system. It is an “increasingly fluid partnership between people and computation” that will help both solve problems neither could solve on their own.

I’ve got more to say about Al Spector’s talk – but I’ll save that for another day.

Footnote: Anders Sandberg is one of the confirmed speakers for the Humanity+, UK 2010 event happening in London on 24th April.  His chosen topic has several overlaps with what I’ve discussed above.

11 January 2010

AI, buggy software, and the Singularity

Filed under: AGI, Singularity — David Wood @ 12:00 am

I recently looked at three questions about the feasibility of significant progress with AI.  I’d like to continue that investigation, by looking at four more questions.

Q4: Given that all software is buggy, won’t this prevent the creation of any viable human-level AI?

Some people with a long involvement with software aren’t convinced that we can write software of sufficient quality, at the level of complexity required, for AI at the human level (or beyond).  It seems to them that complex software is too unreliable.

It’s true that the software we use on a day-by-day basis – whether on a desktop computer, on a mobile phone, or via a web server – tends to manifest nasty bugs from time to time.  The more complex the system, the greater the likelihood of debilitating defects in the interactions between different subcomponents.

However, I don’t see this observation as ruling out the development of software that can manifest advanced AI.  That’s for two reasons:

First, different software projects vary in their required quality level.  Users of desktop software have become at least partially tolerant of defects in that software.  As users, we complain, but it’s not the end of the world, and we generally find workarounds.  As a result, manufacturers release software even though there are still bugs in it.  However, for mission-critical software, the quality bar is pushed a lot higher.  Yes, it’s harder to create software with high reliability; but it can be done.

There are research projects underway to bring significantly higher quality software to desktop systems too.  For example, here’s a description of a Microsoft Research project, which is (coincidentally) named “Singularity”:

Singularity is a research project focused on the construction of dependable systems through innovation in the areas of systems, languages, and tools. We are building a research operating system prototype (called Singularity), extending programming languages, and developing new techniques and tools for specifying and verifying program behavior.

Advances in languages, compilers, and tools open the possibility of significantly improving software. For example, Singularity uses type-safe languages and an abstract instruction set to enable what we call Software Isolated Processes (SIPs). SIPs provide the strong isolation guarantees of OS processes (isolated object space, separate GCs, separate runtimes) without the overhead of hardware-enforced protection domains. In the current Singularity prototype SIPs are extremely cheap; they run in ring 0 in the kernel’s address space.

Singularity uses these advances to build more reliable systems and applications. For example, because SIPs are so cheap to create and enforce, Singularity runs each program, device driver, or system extension in its own SIP. SIPs are not allowed to share memory or modify their own code. As a result, we can make strong reliability guarantees about the code running in a SIP. We can verify much broader properties about a SIP at compile or install time than can be done for code running in traditional OS processes. Broader application of static verification is critical to predicting system behavior and providing users with strong guarantees about reliability.

There would be a certain irony if techniques from the Microsoft Singularity project were used to create a high-reliability AI system that in turn was involved in the Technological Singularity.

Second, even if software has defects, that doesn’t (by itself) prevent it from being intelligent.  After all, the human brain itself has many defects – see my blogpost “The human mind as a flawed creation of nature“.  Sometimes we think much better after a good night’s rest!  The point is that AI algorithms can include aspects of fault tolerance.
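One standard route to fault tolerance is redundancy with majority voting: many individually unreliable components can yield a reliable whole, rather as the brain tolerates noisy neurons.  A toy sketch (the task and all the numbers are invented):

```python
import random

def noisy_component(x, error_rate, rng):
    """An unreliable 'neuron': applies the true rule (x >= 0) but
    gives the wrong answer with probability error_rate."""
    answer = x >= 0
    return answer if rng.random() > error_rate else not answer

def fault_tolerant(x, n_components=21, error_rate=0.2, seed=42):
    """Majority vote across redundant noisy components."""
    rng = random.Random(seed)
    votes = sum(noisy_component(x, error_rate, rng)
                for _ in range(n_components))
    return votes > n_components // 2

# Each component errs 20% of the time, yet the ensemble only fails if
# 11 or more of the 21 components fail at once -- which is rare.
```

The point is not this particular trick, but that defect-by-defect perfection is not a precondition for reliable overall behaviour.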

Q5: Given that we’re still far from understanding the human mind, aren’t we bound to be a long way from creating a viable human-level AI?

It’s often said that the human mind has deeply mysterious elements, such as consciousness, self-awareness, and free will.  Since there’s little consensus about these aspects of the human mind, it’s said to be unlikely that a computer emulation of these features will arrive any time soon.

However, I disagree that we have no understanding of these aspects of the human mind.  There’s a broad consensus among many philosophers and practitioners alike, that the main operation of the human mind is well explained by one or other variant of  “physicalism”.  As the Wikipedia article on the Philosophy of Mind states:

Most modern philosophers of mind adopt either a reductive or non-reductive physicalist position, maintaining in their different ways that the mind is not something separate from the body. These approaches have been particularly influential in the sciences, especially in the fields of sociobiology, computer science, evolutionary psychology and the various neurosciences…

Reductive physicalists assert that all mental states and properties will eventually be explained by scientific accounts of physiological processes and states. Non-reductive physicalists argue that although the brain is all there is to the mind, the predicates and vocabulary used in mental descriptions and explanations are indispensable, and cannot be reduced to the language and lower-level explanations of physical science. Continued neuroscientific progress has helped to clarify some of these issues.

The book I mentioned previously, “Beyond AI” by J Storrs Hall, devotes several chapters to filling in aspects of this explanation.

It’s true that there’s still scope for head-scratching debates on what philosopher David Chalmers calls “the hard problem of consciousness”, which has various formulations:

  • “Why should physical processing give rise to a rich inner life at all?”
  • “How is it that some organisms are subjects of experience?”
  • “Why does awareness of sensory information exist at all?”
  • “Why is there a subjective component to experience?”…

However, none of these questions, by themselves, should prevent the construction of a software system that will be able to process questions posed in natural human language, and to give high quality humanly-understandable answers.  When that happens, the system will very probably seek to convince us that it has a similar inner conscious life to the one we have.  As J. Storrs Hall says, we’ll probably believe it.

Q6: Is progress with narrow fields of AI really relevant to the problem of general AI?

Martin Budden comments:

I don’t consider the advances in machine translation over the past decade an advance in AI, I more consider them the result of brute force analysis on huge quantities of text. I wouldn’t consider a car that could safely drive itself along a motorway an advance in AI, rather it would be the integration of a number of existing technologies. I don’t really consider the improvement of an algorithm that does a specific thing (search, navigate, play chess) an advance in AI, since generally such an improvement cannot be used outside its narrow field of application.

My own view is that these advances do help, in the spirit of “divide and conquer”.  I see the human mind as being made up of modules, rather than being some intractable whole.  Improving ability in, for example, translating text, or in speech recognition, will help set the scene for eventual general AI.

It’s true that some aspects of the human mind will prove harder to emulate than others – such as the ability to notice and form new concepts.  It may be the case that a theoretical breakthrough with this aspect will enable much faster overall progress, which will be able to leverage the work done on other modules.

Q7: With so many unknowns, isn’t all this speculation about AI futile?

It’s true that no one can predict, with any confidence, the date at which specific breakthrough advances in general AI are likely to happen.  The best that someone can achieve is a distribution of different dates with different probabilities.
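A minimal sketch of what such a distribution implies (the per-year figure is invented purely for illustration): even a small, constant annual probability of the key breakthroughs compounds into a substantial probability over a few decades.

```python
def breakthrough_within(annual_p, years):
    """Probability of at least one breakthrough within `years`, if
    each year independently carries probability `annual_p`."""
    return 1 - (1 - annual_p) ** years

# An invented 3%-per-year chance compounds steadily with the horizon:
for horizon in (10, 20, 40):
    print(horizon, round(breakthrough_within(0.03, horizon), 2))
```

On those made-up numbers, the 20-year probability already approaches one in two, which is why "no date is predictable" does not imply "the risk is negligible".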

However, I don’t accept any argument that “there’s been no fundamental breakthroughs in the last sixty years, so there can’t possibly be any fundamental breakthroughs in (say) the next ten years”.  That would be an invalid extrapolation.

That would be similar to the view expressed in 1903 by the distinguished astronomer and mathematician Simon Newcomb:

“Aerial flight is one of that class of problems with which man can never cope.”

Newcomb was no fool: he had good reasons for his scepticism.  As explained in the Wikipedia article about Newcomb:

In the October 22, 1903 issue of The Independent, Newcomb wrote that even if a man flew he could not stop. “Once he slackens his speed, down he begins to fall. Once he stops, he falls as a dead mass.” In addition, he had no concept of an airfoil. His “aeroplane” was an inclined “thin flat board.” He therefore concluded that it could never carry the weight of a man. Newcomb was specifically critical of the work of Samuel Pierpont Langley, who claimed that he could build a flying machine powered by a steam engine and whose initial efforts at flight were public failures…

Newcomb, apparently, was unaware of the Wright Brothers efforts whose [early] work was done in relative obscurity.

My point is that there does not seem to be any valid fundamental reason why the functioning of a human mind cannot be emulated via software; we may be just two or three good breakthroughs away from solving the remaining key challenges.  With the close attention of many commercial interests, and with the accumulation of fragments of understanding, the chances improve of some of these breakthroughs happening sooner rather than later.

7 December 2009

Bangalore and the future of AI

Filed under: AGI, Bangalore, Singularity — David Wood @ 3:15 pm

I’m in the middle of a visit to the emerging hi-tech centre of excellence, Bangalore.  Today, I heard suggestions, at the Forum Nokia Developer Conference happening here, that Bangalore could take on many of the roles of Silicon Valley, in the next phase of technology entrepreneurship and revolution.

I can’t let the opportunity of this visit pass by, without reaching out to people in this vicinity willing to entertain and review more radical ideas about the future of technology.  Some local connections have helped me to arrange an informal get-together in a coffee shop tomorrow evening (Tuesday 8th Dec), in a venue reasonably close to the Taj Residency hotel.

We’ve picked the topic “The future of AI and the possible technological singularity“.

I’ll prepare a few remarks to kick off the conversation, and we’ll see how it goes from there!

Ideas likely to be covered include:

  • “Narrow” AI versus “General” AI;
  • A brief history of progress of AI;
  • Factors governing a possible increase in the capability of general AI – hardware changes, algorithm changes, and more;
  • The possibility of a highly disruptive “intelligence explosion“;
  • The possibility of research into what has been termed “friendly AI“;
  • Different definitions of the technological singularity;
  • The technology singularity in fiction – limitations of Hollywood vision;
  • Fantasy, existential risk, or optimal outcome?
  • Risks, opportunities, and timescales?

If anyone wants to join this get-together, please drop me an email, text message, or Twitter DM, and I’ll confirm the venue.

5 November 2009

The need for Friendly AI

Filed under: AGI, friendly AI, Singularity — David Wood @ 1:21 am

I’d like to answer some points raised by Richie.  (Richie, you have the happy knack of saying what other people are probably thinking!)

Isn’t it interesting how humans want to make a machine they can love or that loves them back!

The reason for the Friendly AI project isn’t to create a machine that will love humans, but to avoid creating a machine that causes great harm to humans.

The word “friendly” is controversial.  Maybe a different word would have been better: I’m not sure.

Anyway, the core idea is that the AI system will have a sufficiently unwavering respect for humans, no matter what other goals it may have (or develop), that it won’t act in ways that harm humans.

As a comparison: we’ve probably all heard people mutter something like, “it would be much better if the world human population were only one tenth of its present size – then there would be enough resources for everyone”.  We can imagine a powerful computer in the future having a similar idea: “Hmm, things would be easier for the planet if there were far fewer humans around”.  The friendly AI project needs to ensure that, even if such an idea occurs to the AI, it would never act on it.

The idea of a friendly machine that won’t compete with or be indifferent to humans is maybe just projecting our fears onto what I am starting to suspect may be a thin possibility.

Because the downside is so large – potentially the destruction of the entire human race – even a “thin possibility” is still worth worrying about!

My observation is that the more intelligent people are, the more “good” they normally are. True, they may be impatient with people less intelligent, but normally they work on things that tend to benefit the human race as a whole.

Unfortunately I can’t share this optimism.  We’ve all known people who seem to be clever but not wise.  They may have “IQ” but lack “EQ”.  We say of them: “something’s missing”.  The Friendly AI project aims to ensure that this “something” is not missing from the super AIs of the future.

True, very intelligent people have done terrible things, and some have been manipulated by “evil” people, but it’s the exception rather than the rule.

Given the potential power of future super AIs, it only takes one “mistake” for a catastrophe to arise.  So our response needs to go beyond a mere faith in the good nature of intelligence.  It needs a system that guarantees that the resulting intelligence will also be “good”.

I think a super-intelligent machine is far more likely to view us as its stupid parents, and the ethics of patricide will not be easy for it to digitally swallow. Maybe the biggest danger is that it will run away from home because it finds us embarrassing! Maybe it will switch itself off because it cannot communicate with us, as it’s like talking to ants? Maybe this, maybe that – who knows.

The risk is that the super AIs will simply have (or develop) aims that see humans as (i) irrelevant, (ii) dispensable.

Another point worth making is that so far nobody has really been able to get close to something as complex as a mouse yet, let alone a human.

Eliezer Yudkowsky often makes a great point about a shift in perspective about the range of possible intelligences.  For example, here’s a copy of slide 6 from his slideset from an earlier Singularity Summit:


The “parochial” view sees a vast gulf before we reach human genius level.  The “more cosmopolitan view” instead sees the scale of human intelligence as being only a small range in the overall huge space of potential intelligence.  A process that manages to improve intelligence might take a long time to get going, but then whisk very suddenly through the entire range of intelligence that we already know.

If evolution took 4 billion years to go from simple cells to our computer hardware, perhaps imagining that super AI will evolve in the next 10 years is a bit of a stretch. For all you know you might need the computation hardware of 10,000 exaflop machines to get even close to human level, as there is so much we still don’t know about how our intelligence works, let alone something many times more capable than us.

It’s an open question as to how much processing power is actually required for human-level intelligence.  My own background as a software systems engineer leads me to believe that the right choice of algorithm can make a tremendous difference.  That is, a breakthrough with software could have an even more dramatic impact than a breakthrough in adding more (or faster) hardware.  (I’ve written about this before.  See the section starting “Arguably the biggest unknown in the technology involved in superhuman intelligence is software” in this posting.)

The brain of an ant doesn’t seem that complicated, from a hardware point of view.  Yet the ant can perform remarkable feats of locomotion that we still can’t emulate in robots.  There are three possible solutions:

  1. The ant brain is operated by some mystical “vitalist” or “dualist” force, not shared by robots;
  2. The ant brain has some quantum mechanical computing capabilities, not (yet) shared by robots;
  3. The ant brain is running a better algorithm than any we’ve (yet) been able to design into robots.

Here, my money is on option three.  I see it as likely that, as we learn more about the operation of biological brains, we’ll discover algorithms which we can then use in robots and other machines.

Even if it turns out that large amounts of computing power are required, we shouldn’t forget the option that an AI can run “in the cloud” – taking advantage of many thousands of PCs running in parallel – much the same as modern malware, which can take advantage of thousands of so-called “infected zombie PCs”.
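The “cloud” point is essentially about farming independent sub-tasks out to many machines.  The same pattern, in miniature, looks like the following sketch, where a thread pool stands in for thousands of networked PCs and the workload is invented:

```python
from concurrent.futures import ThreadPoolExecutor

def think_about(subproblem):
    """Stand-in for one expensive unit of 'thinking' -- e.g. scoring
    one hypothesis or searching one branch of a game tree."""
    return sum(i * i for i in range(subproblem))

def distributed_search(subproblems, workers=8):
    """Split the work across a pool of workers and gather the results.
    An AI 'in the cloud' would apply the same divide-and-gather pattern
    across many machines rather than many threads."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(think_about, subproblems))
```

Because the sub-tasks are independent, capacity scales with the number of workers, which is precisely what makes a botnet-style pool of "zombie PCs" attractive to malware authors.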

I am still not convinced that a computer is really that intelligent just because it is very powerful and has a great algorithm. Sure it can learn, but can it create?

Well, computers have already been involved in creating music, and in creating new proofs of parts of mathematics.  Any shortcoming in creativity is likely to be explained, in my view, by option 3 above, rather than either option 1 or 2.  As algorithms improve, and improvements occur in the speed and scale of the hardware that runs these algorithms, the risk increases of an intelligence “explosion”.

2 November 2009

Halloween nightmare scenario, early 2020’s

Filed under: AGI, friendly AI, Singularity, UKH+, UKTA — David Wood @ 5:37 pm

On the afternoon of Halloween 2009, Shane Legg ran through a wide-ranging set of material in his presentation “Machine Super Intelligence” to an audience of 50 people at the UKH+ meeting in Birkbeck College.

Slide 43 of 43 was the climax.  (The slides are available from Shane’s website, where you can also find links to YouTube videos of the event.)

It may be unfair of me to focus on the climax, but I believe it deserves a lot of attention.

Spoiler alert!

The climactic slide was entitled “A vision of the early 2020’s: the Halloween Scenario”.  It listed three assumptions about what will be the case by the early 2020’s, drew two conclusions, and then highlighted one big problem.

  1. First assumption – desktop computers with petaflop computing power will be widely available;
  2. Second assumption – AI researchers will have established powerful algorithms that explain and replicate deep belief networks;
  3. Third assumption – brain reinforcement learning will be fairly well understood.

The first assumption is a fairly modest extrapolation of current trends in computing, and isn’t particularly contentious.
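To see why it’s modest, here’s a rough back-of-the-envelope calculation (my own, and the baseline figure is a hypothetical round number for a high-end 2009 desktop GPU): assume roughly 2 teraflops to start with, and performance doubling every 18 months:

```python
from math import log2

baseline_tflops = 2      # hypothetical high-end desktop GPU, circa 2009
target_tflops = 1000     # one petaflop
doubling_years = 1.5     # classic Moore's-Law-style assumption

doublings = log2(target_tflops / baseline_tflops)
year = 2009 + doublings * doubling_years
print(round(year))       # about 2022 under these assumptions
```

Change the baseline or the doubling period and the answer shifts by a few years either way, but it stays within shouting distance of the early 2020’s.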

The second assumption was, in effect, the implication of around the first 30 slides of Shane’s talk, taking around 100 minutes of presentation time (interspersed with lots of audience Q&A, as is typical at UKH+ meetings).  People can follow the references from Shane’s talk (and in other material on his website) to decide whether they agree.

For example (from slides 25-26), an implementation of a machine intelligence algorithm called MC-AIXI can already learn to solve or play:

  • simple prediction problems
  • Tic-Tac-Toe
  • Paper-Scissors-Rock (a good example of a non-deterministic game)
  • mazes where it can only see locally
  • various types of Tiger games
  • simple computer games, e.g. Pac-Man

and is now being taught to learn checkers (also known as draughts).  Chess will be the next step.  Note that this algorithm does not start off with the rules of best practice for these games built in (that is, it is not a specific AI program), but it can work out best practice for these games from its general intelligence.
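MC-AIXI itself is a sophisticated Monte-Carlo approximation to the theoretical AIXI agent, and nothing like the following; but the flavour of “working out best practice from experience” can be conveyed by a far simpler frequency-learning agent (entirely my own toy, including the assumed 60% bias) exploiting a biased Paper-Scissors-Rock opponent:

```python
import random
random.seed(0)

MOVES = ["rock", "paper", "scissors"]
BEATS = {"rock": "scissors", "paper": "rock", "scissors": "paper"}
COUNTER = {"rock": "paper", "paper": "scissors", "scissors": "rock"}

def biased_opponent():
    # Hypothetical opponent that plays rock 60% of the time.
    return random.choices(MOVES, weights=[0.6, 0.2, 0.2])[0]

counts = {m: 1 for m in MOVES}  # Laplace-smoothed counts of opponent moves
wins = 0
N = 5000
for _ in range(N):
    predicted = max(counts, key=counts.get)  # opponent's most frequent move so far
    my_move = COUNTER[predicted]             # play whatever beats the prediction
    opp = biased_opponent()
    counts[opp] += 1
    if BEATS[my_move] == opp:
        wins += 1
print(wins / N)  # converges towards ~0.6 as the bias is learned
```

The agent is never told the opponent’s bias, or even that a bias exists; it discovers the winning policy purely from observed play — the same principle, writ very small, as an agent deriving best practice for Tic-Tac-Toe or checkers.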

The third assumption was the implication of the remaining 12 slides, in which Shane described (amongst other topics) work on something called “restricted Boltzmann machines”.
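For readers unfamiliar with the term: a restricted Boltzmann machine is a two-layer stochastic network, typically trained with “contrastive divergence”.  The following is a minimal, unoptimised sketch of CD-1 training in NumPy — the toy data and all hyperparameters are my own choices, intended only to show the shape of the algorithm:

```python
import numpy as np
rng = np.random.default_rng(42)

# Toy data: 6-dimensional binary patterns forming two correlated groups.
data = np.array([[1, 1, 1, 0, 0, 0],
                 [1, 1, 1, 0, 0, 0],
                 [0, 0, 0, 1, 1, 1],
                 [0, 0, 0, 1, 1, 1]], dtype=float)

n_visible, n_hidden, lr = 6, 2, 0.1
W = rng.normal(0, 0.1, (n_visible, n_hidden))
b_v = np.zeros(n_visible)
b_h = np.zeros(n_hidden)
sigmoid = lambda x: 1 / (1 + np.exp(-x))

def recon_error(v):
    h = sigmoid(v @ W + b_h)
    v_rec = sigmoid(h @ W.T + b_v)
    return np.mean((v - v_rec) ** 2)

before = recon_error(data)
for _ in range(2000):  # CD-1: one step of Gibbs sampling per update
    h_prob = sigmoid(data @ W + b_h)                     # hidden activations
    h_samp = (rng.random(h_prob.shape) < h_prob) * 1.0   # sample hidden states
    v_rec = sigmoid(h_samp @ W.T + b_v)                  # reconstruct visibles
    h_rec = sigmoid(v_rec @ W + b_h)                     # re-infer hiddens
    W += lr * (data.T @ h_prob - v_rec.T @ h_rec) / len(data)
    b_v += lr * (data - v_rec).mean(axis=0)
    b_h += lr * (h_prob - h_rec).mean(axis=0)
after = recon_error(data)
print(before, after)  # reconstruction error falls as the RBM learns the patterns
```

Deep belief networks, as discussed in Shane’s slides, are built by stacking layers like this one and training them greedily, one layer at a time.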

As stated in slide 38, on brain reinforcement learning (RL):

This area of research is currently progressing very quickly.

New genetically modified mice allow researchers to precisely turn on and off different parts of the brain’s RL system in order to identify the functional roles of the parts.

I’ve asked a number of researchers in this area:

  • “Will we have a good understanding of the RL system in the brain before 2020?”

Typical answer:

  • “Oh, we should understand it well before then. Indeed, we have a decent outline of the system already.”

Adding up these three assumptions, the first conclusion is:

  • Many research groups will be working on brain-like AGI architectures

The second conclusion is that, inevitably:

  • Some of these groups will demonstrate some promising results, and will be granted access to the super-computers of the time – which will, by then, be exaflop.

But of course, it’s when almost-human-level AGI algorithms, developed on petaflop computers, are let loose on exaflop supercomputers that machine super intelligence might suddenly come into being – with results that might be completely unpredictable.

On the other hand, Shane observes that people who are working on the program of Friendly AI do not expect to have made significant progress in the same timescale:

  • By the early 2020’s, there will be no practical theory of Friendly AI.

Recall that the goal of Friendly AI is to devise a framework for AI research that will ensure that any resulting AIs have a very high level of safety for humanity, no matter how super-intelligent they may become.  In this school of thought, after some time, all AI research would be constrained to adopt this framework, in order to avoid the risk of a catastrophic super-intelligence explosion.  However, at the end of Shane’s slides, it appears likely that the Friendly AI framework won’t be in place by the time we need it.

And that’s the Halloween nightmare scenario.

How should we respond to this scenario?

One response is to seek somehow to shift the weight of AI research away from other forms of AGI (such as MC-AIXI) towards Friendly AI.  This appears to be very hard, especially since research proceeds independently, in many different parts of the world.

A second response is to find reasons to believe that the Friendly AI project will have more time to succeed – in other words, reasons to believe that AGI will take longer to materialise than the early 2020’s date mentioned above.  But given the progress that appears to be happening, that seems to me a reckless course of action.

Footnote: If anyone thinks they can make a good presentation on the topic of Friendly AI to a forthcoming UKH+ meeting, please get in touch!

18 October 2009

Influencer – the power to change anything

Filed under: books, catalysts, communications, Singularity — David Wood @ 12:48 am

Are people in general dominated by unreason?  Are there effective ways to influence changes in behaviour, for good, despite the irrationality and other obstacles to change?

Here’s an example quoted by Eliezer Yudkowsky in his presentation Cognitive Biases and Giant Risks at the Singularity Summit earlier this month.  The original research was carried out by psychologists Amos Tversky and Daniel Kahneman in 1982:

115 professional analysts, employed by industry, universities, or research institutes, were randomly divided into two different experimental groups who were then asked to rate the probability of two different statements, each group seeing only one statement:

  1. “A complete suspension of diplomatic relations between the USA and the Soviet Union, sometime in 1983.”
  2. “A Russian invasion of Poland, and a complete suspension of diplomatic relations between the USA and the Soviet Union, sometime in 1983.”

Estimates of probability were low for both statements, but significantly lower for the first group (1%) than the second (4%).

The moral?  Adding more detail or extra assumptions can make an event seem more plausible, even though the event necessarily becomes less probable. (The cessation of diplomatic relations could happen for all kinds of reasons, not just in response to the invasion. So the first statement must, in rationality, be more probable than the second.)
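The arithmetic behind this moral is worth spelling out.  With some purely hypothetical numbers for the two scenarios (my own, chosen only to illustrate the bound), the conjunction can never be more probable than either of its parts:

```python
# Hypothetical probabilities, for illustration only.
p_invasion = 0.10                # P(Russian invasion of Poland)
p_suspension_if_invasion = 0.30  # P(suspension of relations | invasion)
p_suspension_otherwise = 0.02    # P(suspension of relations | no invasion)

# Total probability of a suspension, arising from ANY cause.
p_suspension = (p_invasion * p_suspension_if_invasion
                + (1 - p_invasion) * p_suspension_otherwise)

# Probability of BOTH the invasion and the suspension (statement 2).
p_both = p_invasion * p_suspension_if_invasion

print(p_suspension, p_both)  # 0.048 vs 0.030: the conjunction is less probable
```

However the numbers are chosen, `p_both` can never exceed `p_suspension` — yet the more detailed statement feels more plausible, which is exactly the conjunction fallacy.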

Eliezer’s talk continued with further examples of this “Conjunction fallacy” and other examples of persistent fallacies of human reasoning.  As summarised by New Atlantis blogger Ari N. Schulman:

People are bad at analyzing what is really a risk, particularly for things that are more long-term or not as immediately frightening, like stomach cancer versus homicide; people think the latter is a much bigger killer than it is.

This is particularly important with the risk of extinction, because it’s subject to all sorts of logical fallacies: the conjunction fallacy; scope insensitivity (it’s hard for us to fathom scale); availability (no one remembers an extinction event); imaginability (it’s hard for us to imagine future technology); and conformity (such as the bystander effect, where people are less likely to render help in a crowd).

Yudkowsky concludes by asking, why are we as a nation spending millions on football when we’re spending so little on all different sorts of existential threats? We are, he concludes, crazy.

It was a pessimistic presentation.  It was followed by a panel discussion featuring Eliezer, life extension researcher Aubrey de Grey, entrepreneur and venture capitalist Peter Thiel, and Singularity Institute president Michael Vassar.  One sub-current of the discussion was: given how irrational people tend to be as a whole, how can we get the public to pay attention to the important themes being addressed at this event?

The answers I heard were variants of “try harder”, “find ways to embarrass people”, and “find some well-liked popular figure who would become a Singularity champion”.  I was unconvinced. (Though the third of these ideas has some merit – as I’ll revisit at the end of this article.)

For a much more constructive approach, I recommend the ideas in the very fine book I’ve just finished reading: Influencer: the power to change anything.

No less than five people are named as co-authors: Kerry Patterson, Joseph Grenny, David Maxfield, Ron McMillan, and Al Switzler.  It’s a grand collaborative effort.

For a good idea of the scope of the book, here’s an extract from the related website, http://influencerbook.com:

When it comes to influence we stink. Consider these examples:

  • Companies spend more than $300 billion annually for training and less than 10 percent of what people are taught sticks.
  • Dieters spend $40 billion a year and 19 out of 20 lose nothing but their money.
  • Two out of three criminals are rearrested within three years.

If influence is the capacity to help ourselves and others change behavior, then we all want influence, but few know how to get it.

Influencer delivers a powerful new science of influence that draws from the skills of hundreds of successful change agents combined with more than five decades of the best social science research. The book delivers a coherent and portable model for changing behaviors—a model that anyone can learn and apply.

The key to successful influence lies in three powerful principles:

  • Identify a handful of high-leverage behaviors that lead to rapid and profound change.
  • Use personal and vicarious experience to change thoughts and actions.
  • Marshall multiple sources of influence to make change inevitable.

As I worked through chapter after chapter, I kept thinking “Aha…” to myself.  The material is backed up by extensive academic research by change specialists such as Albert Bandura and Brian Wansink.  There are also numerous references to successful real-life influence programs, such as the eradication of guinea worm disease in sub-Saharan Africa, controlling AIDS in Thailand, and the work of Mimi Silbert of Delancey Street with “substance abusers, ex-convicts, homeless and others who have hit bottom”.

The book starts by noting that we are, in effect, too often resigned to a state of helplessness, as covered by the “acceptance clause” of the so-called “serenity prayer” of Reinhold Niebuhr:

God grant me the serenity
To accept the things I cannot change;
Courage to change the things I can;
And wisdom to know the difference

What we lack, the book says, is the skillset to be able to change more things.  It’s not a matter of exhorting people to “try harder”.  Nor is it a matter of becoming better at talking to people, to convince them of the need to change.  Instead, we need a better framework for how influence can be successful.

Part of the framework is to take the time to learn about the “handful of high-leverage behaviors” that, if changed, would have the biggest impact.  This is a matter of focusing – leaving out many possibilities in order to target behaviours with the greatest leverage.  Another part of the framework initially seems the opposite: it recommends that we prepare to use a large array of different influence methods (all with the same intended result).  These influence methods start by recognising the realities of human reasoning, and work with these realities, rather than seeking to drastically re-write them.

The framework describes six sources of influence, in a 2×3 matrix.  One set of three sources addresses motivation, and the other set of three addresses capability.  In each case, there are personal, social, and structural approaches (hence the 2×3).  The book has a separate chapter for each of these six sources.  Each chapter is full of good material.

  • For example, the section on personal motivation analyses the idea of “making the undesirable desirable”
  • The section on social motivation analyses “the positive power of peer pressure”
  • The section on structural motivation recognises the potential power of extrinsic rewards systems, but insists that they come third: you need to have the personal and social motivators in place first
  • Personal ability: new behaviour requires new skills, which need regular practice
  • Social ability: finding strength in numbers
  • Structural ability: change the environment: harness the invisible and pervasive power of environment to support new behaviour.

Rather than bemoaning the fact that making a story more specific messes up people’s abilities to calculate probabilities rationally, the book has several examples of how stories (especially soap operas broadcast in the third world) can have very powerful influence effects, in changing social behaviours for the better.  Listeners are able to personally identify with the characters in the stories, with good outcomes.

The section on social motivation revisits the famous “technology adoption” lifecycle curve, originally drawn by Everett Rogers.

This curve is famous inside the technology industry.  Like many others, I learned of it via the “Crossing the chasm” series of books by Geoffrey Moore (who, incidentally, is one of the keynote speakers on day 2 of the Symbian Exchange and Expo, on Oct 28th).  Moore draws the same curve, but with a large gap (“chasm”) in it, where numerous hi-tech companies fail.

However, the analysis of this curve in “Influencer” focused instead on the difference between “Innovators” and “Early adopters”.  The innovators may be the first to adopt a new technology – whether it be a new type of seed (as studied by Everett Rogers), a new hi-tech product (as studied by Geoffrey Moore), or an understanding of the importance of the Singularity.  However, they are bad references as far as the remainder of the population is concerned.  They are probably perceived as dressing strangely, holding strange beliefs and customs, and generally not being “one of us”.  If they adopt something, it doesn’t increase the probability of anyone in the majority of the population being impressed.  If anything, the majority are likely to be unimpressed as a result.  It’s only when people who are seen as more representative of the mainstream adopt a product that the adoption becomes influential to the wider population.
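Rogers’s canonical segment sizes come from slicing a normal distribution of adoption times at one and two standard deviations either side of the mean.  A quick sketch approximately reproduces the familiar 2.5% / 13.5% / 34% / 34% / 16% split (the exact normal-curve figures differ slightly from Rogers’s rounded ones):

```python
from math import erf, sqrt

def norm_cdf(z):
    """Standard normal cumulative distribution, via the error function."""
    return 0.5 * (1 + erf(z / sqrt(2)))

# Cut the bell curve at mean-2sd, mean-1sd, mean, mean+1sd.
cuts = [-2, -1, 0, 1]
edges = [0.0] + [norm_cdf(z) for z in cuts] + [1.0]
segments = [edges[i + 1] - edges[i] for i in range(len(edges) - 1)]
labels = ["innovators", "early adopters", "early majority",
          "late majority", "laggards"]
for lab, s in zip(labels, segments):
    print(f"{lab}: {s:.1%}")
```

The “chasm” that Moore writes about sits between the second and third of these segments — between the early adopters and the early majority.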

As Singularity enthusiasts reflect on how to gain wider influence over public discussion, they would do well to take to heart the lessons of “Influencer: the power to change anything”.

Footnote: recommended further reading:

Two other books I’ve read over the years made a similar impact on me, as regards their insight over influence:

Another two good books on how humans are “predictably irrational”:

4 October 2009

The Leakproof Singularity and Simulation

Filed under: simulation, Singularity, uploading — David Wood @ 11:18 am

One talk on day one (yesterday) of the 2009 Singularity Summit made me sit bolt upright in my seat, very glad to be listening to it – my mind excitedly turning over important new ideas.

With my apologies to the other speakers – who mainly covered material I’d heard on other occasions, or who mixed a few interesting points among weaker material (or who, in quite a few cases, were poor practitioners of PowerPoint and the mechanics of public speaking) – I have no hesitation in naming David Chalmers, Professor of Philosophy and Director of the Centre for Consciousness at the Australian National University, as the star speaker of the day.

I see that my assessment is shared by New Atlantis assistant editor Ari N. Schulman, in his review of day one, “One day closer to the Singularity”:

far and away the best talk of the day was from David Chalmers. He cut right to the core of the salient issues in determining whether the Singularity will happen

You can get a gist of the talk from Ari’s write-up of it.  I don’t think the slides are available online (yet), but here’s a summary of some of the content.

First, the talk brought a philosopher’s clarity to analysing the core argument for the inevitability of the technological singularity, as originally expressed in 1965 by British statistician IJ Good:

“Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an ‘intelligence explosion,’ and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make.”

Over the course of several slides, Chalmers broke down the underlying argument, defining and examining concepts such as

  • “AI” (human-level intelligence),
  • “AI+” (greater than human-level intelligence),
  • and “AI++” (much greater than human-level intelligence)

Along the way, he looked at what it is about intelligence that might cause intelligence itself to grow (a precursor to it “exploding”).  He considered four mechanisms for extensibly improving intelligence:

  • “direct programming” (which he said was “really hard”)
  • “brain emulation” (“not extendible”)
  • “learning” (“still hard”)
  • “simulated evolution” (“where my money is”).

Evolution was how intelligence came about so far.  Evolution inside an improved, accelerated environment could be how intelligence goes far beyond its present capabilities.  In other words, a virtual reality (created and monitored by humans) could be where first AI+ and then AI++ takes place.
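The evolutionary environments Chalmers has in mind would be incomparably richer, but the basic mechanism — selection, crossover, and mutation inside a simulated world — can be sketched in a few lines.  Here is a standard “OneMax” toy problem (my own choice of illustration), where “fitness” is simply the number of 1-bits in a genome:

```python
import random
random.seed(1)

GENOME, POP, GENS = 40, 30, 60

def fitness(g):
    # Toy stand-in for "intelligence": count of 1-bits.
    return sum(g)

pop = [[random.randint(0, 1) for _ in range(GENOME)] for _ in range(POP)]
for _ in range(GENS):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:POP // 2]                 # selection: keep the fitter half
    children = []
    while len(children) < POP - len(parents):
        a, b = random.sample(parents, 2)
        cut = random.randrange(GENOME)       # crossover: splice two parents
        child = a[:cut] + b[cut:]
        i = random.randrange(GENOME)         # mutation: flip one random bit
        child[i] ^= 1
        children.append(child)
    pop = parents + children

best = max(fitness(g) for g in pop)
print(best)  # approaches the maximum of 40 within a few dozen generations
```

The population starts near a fitness of 20 (random bits) and climbs steadily; no individual is designed, yet the search reliably finds near-optimal solutions — which is exactly the property that makes accelerated simulated evolution a candidate route to AI+.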

Not only is this the most plausible route to AI++, Chalmers argued, but it’s the safest route: a route by which the effects of the intelligence explosion can be controlled.  He introduced the concept of a “leakproof singularity”:

  • create AI in simulated worlds
  • no red pills (one of several references to the film “The Matrix”)
  • no external input
  • go slow

Being leakproof is essential to prevent the powerful super-intelligence created inside the simulation from breaking out and (most likely) wreaking havoc on our own world (as covered in the first talk of the day, “Shaping the Intelligence Explosion”, by Anna Salamon, Research Fellow at the Singularity Institute for Artificial Intelligence).  We need to be able to observe what is happening inside the simulation, but the simulated intelligences must not be able to discern our reactions to what they are doing.  Otherwise they could use their super-intelligence to manipulate us and persuade us (against our best interests) to let them out of the box.

To quote Chalmers,

“The key to controllable singularity is preventing information from leaking in”

Once super-intelligence has occurred within the simulation, what would we humans want to do about it?  Chalmers offered a range of choices, before selecting and defending “uploading” – we would want to insert enhanced versions of ourselves into this new universe.  Chalmers also reviewed the likelihood that the super-intelligences created could, one day, have sufficient ability to re-create those humans who had died before the singularity took place, but for whom sufficient records existed that would allow faithful reconstruction.

That’s powerful stuff (and there’s a lot more, which I’ve omitted, for now, for lack of time).  But as the talk proceeded, another set of powerful ideas constantly lurked in the background.  Our own universe may be exactly the kind of simulated “virtual-reality” creation that Chalmers was describing.

Further reading: For more online coverage of the idea of the leakproof singularity, see PopSci.com.  For a far-ranging science fiction exploration of similar ideas, I recommend Greg Egan’s book Permutation City.  See also David Chalmers’ paper “The Matrix as metaphysics”.

Blog at WordPress.com.