dw2

25 January 2010

Registration is open: Humanity+, UK 2010

Filed under: UKH+ — David Wood @ 2:44 pm

The Humanity+, UK 2010 conference, to be held in central London on Saturday 24th April, will be tackling some big questions:

  • How will accelerating technological change affect human mental and physical capabilities?
  • What opportunities and problems will these changes bring?
  • How will these changes impact the environment?
  • How will society, political leaders and institutions react?

In short, Humanity+, UK 2010 is a conference about the future of humanity and the future of technology – and about the forthcoming radical impact of technology on our lives.

Along with some friends and colleagues, I’ve been helping to arrange a first-class line up of speakers for this event.  The agenda is available online:

9:15 – 9:45 REGISTRATION & NETWORKING
9:45 – 10:00 The Humanity+ agenda
David Wood
10:00 – 10:40 Singularity Skepticism: Exposing Exponential Errors
Max More
10:40 – 11:10 Making humans smarter via cognitive enhancers
Anders Sandberg
11:10 – 11:40 The impact of living technology on the future of humanity
Rachel Armstrong
11:40 – 12:00 Panel Discussion: The Future of Humanity – Part 1
12:00 – 13:00 LUNCH & POSTER SESSION
13:00 – 13:40 Human regenerative engineering – theory and practice
Aubrey de Grey
13:40 – 14:10 The Abolitionist Project: Can biotechnology abolish suffering throughout the living world?
David Pearce
14:10 – 14:40 Augmented perception and Transhumanist Art
Amon Twyman
14:40 – 15:00 AFTERNOON BREAK
15:00 – 15:30 DIY Enhancement
Natasha Vita-More
15:30 – 16:00 1: The Singularity University; 2: The Internet of Things
David Orban
16:00 – 16:40 Reducing Existential Risks
Nick Bostrom
16:40 – 17:00 Panel Discussion: The Future of Humanity – Part 2

The speakers include many of the pioneering thinkers of the modern transhumanist movement – also known as “Humanity+”.  It’s a welcome outcome that they will all be in London on the same day.

The conference website is now open for registration.

One reason to register early is that we are planning to organise a dinner in the evening, where it will be possible to continue the discussions from the day, in the company of several of the speakers.  Attendance at the dinner will, necessarily, be limited.  We’ll offer tickets to the people who have already registered, in the order in which they registered 🙂

If you’re not yet sure whether this conference will be of interest to you, then keep an eye on the conference blog in the weeks ahead, where we’ll be discussing more aspects of what the speakers are likely to cover in their talks.

If any member of the press would like to book interview time with one or more of the speakers, please get in touch.

9 January 2010

Progress with AI

Filed under: AGI, books, m2020, Moore's Law, UKH+ — David Wood @ 9:47 am

Not everyone shares my view that AI is going to become a more and more important field during the coming decade.

I’ve received a wide mix of feedback in response to:

  • and my comments made in other discussion forums about the growth of AI.

Below, I list some of the questions people have raised – along with my answers.

Note: my answers below are informed by (among other sources) the 2007 book “Beyond AI: creating the conscience of the machine”, by J Storrs Hall, which I’ve just finished reading.

Q1: Doesn’t significant progress with AI presuppose the indefinite continuation of Moore’s Law, which is suspect?

There are three parts to my answer.

First, Moore’s Law for exponential improvements in individual hardware capability seems likely to hold for at least another five years, and there are many ideas for new semiconductor innovations that would extend the trend considerably further.  There’s a good graph of improvements in supercomputer power stretching back to 1960 on Shane Legg’s website, along with associated discussion.
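
As a rough illustration of what continued exponential improvement would mean in practice, here’s a minimal back-of-envelope sketch.  The two-year doubling period and the notional 2010 baseline are my own illustrative assumptions, not figures taken from the sources quoted below:

```python
# Back-of-envelope projection of hardware capability under Moore's Law.
# Assumptions (mine, for illustration): capability doubles every 2 years,
# starting from a notional 1 teraflop machine in 2010.

def projected_capability(baseline: float, years: float,
                         doubling_period_years: float = 2.0) -> float:
    """Capability after `years` of exponential growth."""
    return baseline * 2 ** (years / doubling_period_years)

if __name__ == "__main__":
    baseline_flops = 1e12
    for years in (5, 10, 15):
        print(f"After {years:2d} years: "
              f"~{projected_capability(baseline_flops, years):.1e} FLOPS")
    # After  5 years: ~5.7e+12 FLOPS
    # After 10 years: ~3.2e+13 FLOPS
    # After 15 years: ~1.8e+14 FLOPS
```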

Dylan McGrath, writing in EE Times in June 2009, reported views from iSuppli Corp that “Equipment cost [will] hinder Moore’s Law in 2014“:

Moore’s Law will cease to drive semiconductor manufacturing after 2014, when the high cost of chip manufacturing equipment will make it economically unfeasible to do volume production of devices with feature sizes smaller than 18nm, according to iSuppli Corp.

While further advances in shrinking process geometries can be achieved after the 20- to 18nm nodes, the rising cost of chip making equipment will relegate Moore’s Law to the laboratory and alter the fundamental economics of the semiconductor industry, iSuppli predicted.

“At those nodes, the industry will start getting to the point where semiconductor manufacturing tools are too expensive to depreciate with volume production, i.e., their costs will be so high, that the value of their lifetime productivity can never justify it,” said Len Jelinek, director and chief analyst, semiconductor manufacturing, for iSuppli, in a statement.

In other words, it remains technologically possible that semiconductors can become exponentially denser even after 2014, but it is unclear that sufficient economic incentives will exist for these additional improvements.

As The Register reported the same story:

Basically, just because chip makers can keep adding cores, it doesn’t mean that the application software and the end user workloads that run on this iron will be able to take advantage of these cores (and their varied counts of processor threads) because of the difficulty of parallelising software.

iSuppli is not talking about these problems, at least not today. But what the analysts at the chip watcher are pondering is the cost of each successive chip-making technology and the desire of chip makers not to go broke just to prove Moore’s Law right.

“The usable limit for semiconductor process technology will be reached when chip process geometries shrink to be smaller than 20 nanometers (nm), to 18nm nodes,” explains Len Jelinek…

At that point, says Jelinek, Moore’s Law becomes academic, and chip makers are going to extend the time they keep their process technologies in the field so they can recoup their substantial investments in process research and semiconductor manufacturing equipment.

However, other analysts took a dim view of this pessimistic forecast, maintaining that Moore’s Law will be longer lived.  For example, In-Stat’s chief technology strategist, Jim McGregor, offered the following rebuttal:

…every new technology goes over some road-bumps, especially involving start-up costs, but these tend to drop rapidly once moved into regular production. “EUV [extreme ultraviolet] will likely be the next significant technology to go through this cycle,” McGregor told us.

McGregor did concede that the lifecycle of certain technologies is being extended by firms who are in some cases choosing not to migrate to every new process node, but he maintained new process tech is still the key driver of small design geometries, including memory density, logic density, power consumption, etc.

“Moore’s Law also improves the cost per device and per wafer,” added McGregor, who also noted that “the industry has and will continue to go through changes because of some of the cost issues.” These include the formation of process development alliances, like IBM’s alliances, the transition to foundry manufacturing, and design for manufacturing techniques like computational lithography.

“Many people have predicted the end of Moore’s Law and they have all been wrong,” sighed McGregor. The same apparently goes for those foolhardy enough to attempt to predict changes in the dynamics of the semiconductor industry.

“There have always been challenges to the semiconductor technology roadmap, but for every obstacle, the industry has developed a solution and that will continue as long as we are talking about the hundreds of billion of dollars in revenue that are generated every year,” he concluded.

In other words, it is likely that, given sufficient economic motivation, individual hardware performance will continue improving, at a significant rate (if, perhaps, not exponentially) throughout the coming decade.

Second, it remains an open question how much hardware would be needed to host an Artificial (Machine) Intelligence (“AI”) with human-level or hyperhuman reasoning power.

Marvin Minsky, one of the doyens of AI research, has been quoted as believing that computers commonly available in universities and industry already have sufficient power to manifest human-level AI – if only we could work out how to program them in the right way.

J. Storrs Hall provides an explanation:

Let me, somewhat presumptuously, attempt to explain Minsky’s intuition by an analogy: a bird is our natural example of the possibility of heavier-than-air flight. Birds are immensely complex: muscles, bones, feathers, nervous systems. But we can build working airplanes with tremendously fewer moving parts. Similarly, the brain can be greatly simplified, still leaving an engine capable of general conscious thought.

Personally, I’m a big fan of the view that the right algorithm can make a tremendous difference to a computational task.  As I noted in a 2008 blog post:

Arguably the biggest unknown in the technology involved in superhuman intelligence is software. Merely improving the hardware doesn’t necessarily mean that the software performance increases to match. As has been remarked, “software gets slower, more rapidly than hardware gets faster”. (This is sometimes called “Wirth’s Law”.) If your algorithms scale badly, fixing the hardware will just delay the point where your algorithms fail.

So it’s not just the hardware that matters – it’s how that hardware is organised. After all, the brains of Neanderthals were larger than those of humans, but are thought to have been wired up differently to ours. Brain size itself doesn’t necessarily imply intelligence.

But just because software is an unknown, it doesn’t mean that hardware-driven predictions of the onset of the singularity are bound to be over-optimistic. It’s also possible they could be over-pessimistic. It’s even possible that, with the right breakthroughs in software, superhuman intelligence could be supported by present-day hardware. AI researcher Eliezer Yudkowsky of the Singularity Institute reports the result of an interesting calculation made by Geordie Rose, the CTO of D-Wave Systems, concerning software versus hardware progress:

“Suppose you want to factor a 75-digit number. Would you rather have a 2007 supercomputer, IBM’s Blue Gene/L, running an algorithm from 1977, or a 1977 computer, the Apple II, running a 2007 algorithm? Geordie Rose calculated that Blue Gene/L with 1977’s algorithm would take ten years, and an Apple II with 2007’s algorithm would take three years…

“[For exploring new AI breakthroughs] I will say that on anything except a very easy AI problem, I would much rather have modern theory and an Apple II than a 1970’s theory and a Blue Gene.”
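
The point about algorithms versus hardware can be made with a toy calculation.  In the sketch below, the machine speeds, the exponential-time cost 2^n, and the polynomial-time cost n^3 are all illustrative assumptions of mine – they are not the Blue Gene / Apple II figures from Geordie Rose’s calculation:

```python
# Toy comparison: a very fast machine running an exponential-time algorithm
# versus a very slow machine running a polynomial-time algorithm.
# All constants here are illustrative assumptions, not the actual
# Blue Gene / Apple II figures from Geordie Rose's calculation.

FAST_MACHINE_OPS_PER_SEC = 1e12   # "supercomputer"
SLOW_MACHINE_OPS_PER_SEC = 1e6    # "hobbyist machine", a million times slower

def brute_force_ops(n: int) -> float:
    return 2.0 ** n               # exponential-time algorithm

def clever_ops(n: int) -> float:
    return float(n) ** 3          # polynomial-time algorithm

for n in (40, 60, 80):
    t_fast = brute_force_ops(n) / FAST_MACHINE_OPS_PER_SEC
    t_slow = clever_ops(n) / SLOW_MACHINE_OPS_PER_SEC
    print(f"n={n}: brute force on the fast machine takes {t_fast:9.2e} s; "
          f"the clever algorithm on the slow machine takes {t_slow:9.2e} s")

# At n=80 the exponential algorithm needs ~1.2e12 seconds (tens of thousands
# of years) even on the fast machine, while the polynomial algorithm finishes
# in about half a second on the machine that is a million times slower.
```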

Here’s a related example.  When we think of powerful chess-playing computers, we sometimes think that massive hardware resources will be required, such as a supercomputer provides.  However, as long ago as 1985, Psion, the UK-based company I used to work for (though not at that time), produced a piece of software that played what many people considered, at the time (and subsequently), to be a very impressive quality of chess.  See here for some discussion and some reviews.  Taking things even further, this article from 1983 describes an implementation of chess, for the Sinclair ZX-81, in only 672 bytes – which is hard to believe!  (Thanks to Mark Jacobs for this link.)

Third, building on this point, progress in AI can be described as a combination of multiple factors:

  1. Individual hardware power
  2. Compound hardware power (when many different computers are linked together, as on a network)
  3. Software algorithms
  4. Number of developers and researchers who are applying themselves to the problem
  5. The ability to take advantage of previous results (“to stand on the shoulders of giants”).

Even if the pace slows for improvements in the hardware of individual computers, it’s still very feasible for improvements in AI to take place, on account of the other factors.

Q2: Hasn’t rapid progress with AI often been foretold before, but with disappointing outcomes each time?

It’s true that some of the initial forecasts of the early AI research community, from the 1950’s, have turned out to be significantly over-optimistic.

For example, in his famous 1950 paper “Computing machinery and intelligence” – which set out the idea of the test later known as the “Turing test” – Alan Turing made the following prediction:

I believe that in about fifty years’ time it will be possible to programme computers… to make them play the imitation game so well that an average interrogator will not have more than 70 per cent chance of making the right identification [between a computer answering, or a human answering] after five minutes of questioning.

Since the publication of that paper, some sixty years have now passed, and computers are still far from being able to consistently provide an interface comparable (in richness, subtlety, and common sense) to that of a human.

For a markedly more optimistic prediction, consider the proposal for the 1956 Dartmouth Summer Research Conference on Artificial Intelligence, which is now seen, in retrospect, as the seminal event for AI as a field.  Attendees at the conference included Marvin Minsky, John McCarthy, Ray Solomonoff, and Claude Shannon.  The group came together with the following vision:

We propose that a 2 month, 10 man study of artificial intelligence be carried out during the summer of 1956 at Dartmouth College in Hanover, New Hampshire. The study is to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it. An attempt will be made to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves. We think that a significant advance can be made in one or more of these problems if a carefully selected group of scientists work on it together for a summer.

The question for us today is: what reason is there to expect rapid progress with AI in (say) the next ten years, given that similar expectations in the past failed – and, indeed, the whole field eventually fell into what is known as an “AI winter“?

J Storrs Hall has some good answers to this question.  They include the following:

First, AI researchers in the 1950’s and 60’s laboured under a grossly over-simplified view of the complexity of the human mind.  This can be seen, for example, from another quote from Turing’s 1950 paper:

Instead of trying to produce a programme to simulate the adult mind, why not rather try to produce one which simulates the child’s? If this were then subjected to an appropriate course of education one would obtain the adult brain. Presumably the child brain is something like a notebook as one buys it from the stationer’s. Rather little mechanism, and lots of blank sheets. (Mechanism and writing are from our point of view almost synonymous.) Our hope is that there is so little mechanism in the child brain that something like it can be easily programmed.

Progress in brain sciences in the intervening years has highlighted very significant innate structure in the child brain.  A child brain is far from being a blank notebook.

Second, early researchers were swept along on a wave of optimism from some apparent early successes.  For example, consider the “ELIZA” application that mimicked the responses of a psychotherapist of a certain school, by following a series of simple pattern-matching rules.  Lay people who interacted with this program frequently reported positive experiences, and assumed that the computer really understood their issues.  Although the AI researchers knew better, at least some of them may have believed that this effect showed that more significant results were just around the corner.
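
To give a flavour of just how shallow ELIZA’s “understanding” was, here is a minimal ELIZA-style responder.  The patterns and canned replies are my own, far simpler than Weizenbaum’s original script:

```python
import re
import random

# A tiny ELIZA-style responder: a handful of regex rules, no understanding at
# all.  Patterns and canned replies are illustrative, not Weizenbaum's script.
RULES = [
    (re.compile(r"\bi need (.*)", re.I),
     ["Why do you need {0}?", "Would it really help you to get {0}?"]),
    (re.compile(r"\bi am (.*)", re.I),
     ["How long have you been {0}?", "Why do you think you are {0}?"]),
    (re.compile(r"\bmy (mother|father|family)\b", re.I),
     ["Tell me more about your {0}."]),
]
FALLBACKS = ["Please go on.", "How does that make you feel?"]

def respond(sentence: str) -> str:
    for pattern, replies in RULES:
        match = pattern.search(sentence)
        if match:
            return random.choice(replies).format(*match.groups())
    return random.choice(FALLBACKS)

print(respond("I am feeling anxious about work"))
# e.g. "Why do you think you are feeling anxious about work?"
```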

Third, the willingness of funding authorities to continue supporting general AI research became stretched, due to the delays in producing stronger results, and due to other options for how those research funds could be allocated.  For example, the Lighthill Report (produced in the UK in 1973 by Professor James Lighthill – whose lectures in Applied Mathematics at Cambridge I enjoyed many years later) gave a damning assessment:

The report criticized the utter failure of AI to achieve its “grandiose objectives.” It concluded that nothing being done in AI couldn’t be done in other sciences. It specifically mentioned the problem of “combinatorial explosion” or “intractability”, which implied that many of AI’s most successful algorithms would grind to a halt on real world problems and were only suitable for solving “toy” versions…

The report led to the dismantling of AI research in Britain. AI research continued in only a few top universities (Edinburgh, Essex and Sussex). This “created a bow-wave effect that led to funding cuts across Europe”

There were similar changes in funding climate in the US, with changes of opinion within DARPA.

Shortly afterwards, the growth of the PC and general IT market provided attractive alternative career targets for many of the bright researchers who might previously have considered devoting themselves to AI research.

To summarise, the field suffered an understandable backlash against its over-inflated early optimism and exaggerated hype.

Nevertheless, there are grounds for believing that considerable progress has taken place over the years.  The middle chapters of the book by J Storrs Hall provide the evidence.  The Wikipedia article on “AI winter” covers (much more briefly) some of the same material:

In the late ’90s and early 21st century, AI technology became widely used as elements of larger systems, but the field is rarely credited for these successes. Nick Bostrom explains “A lot of cutting edge AI has filtered into general applications, often without being called AI because once something becomes useful enough and common enough it’s not labeled AI anymore.” Rodney Brooks adds “there’s this stupid myth out there that AI has failed, but AI is around you every second of the day.”

Technologies developed by AI researchers have achieved commercial success in a number of domains, such as machine translation, data mining, industrial robotics, logistics, speech recognition, banking software, medical diagnosis and Google’s search engine…

Many of these domains represent aspects of “narrow” AI rather than “general” AI (sometimes called “AGI”).  However, they can all contribute to overall progress, with results in one field being available for use and recombination in other fields.  That’s an example of point 5 in my previous list of the different factors affecting progress in AI:

  1. Individual hardware power
  2. Compound hardware power (when many different computers are linked together, as on a network)
  3. Software algorithms
  4. Number of developers and researchers who are applying themselves to the problem
  5. The ability to take advantage of previous results (“to stand on the shoulders of giants”).

On that note, let’s turn to the fourth factor in that list.

Q3: Isn’t AI now seen as a relatively uninteresting field, with few incentives for people to enter it?

The question is: what’s going to cause bright researchers to devote sufficient time and energy to progressing AI – given that there are so many other interesting and rewarding fields of study?

Part of the answer is to point out that the potential number of people working in this field is larger today than ever before, simply due to the rapid increase in the number of IT-literate graduates around the world.  Globally, there are greater numbers of science and engineering graduates than ever before, including from universities in China and India.

Second, here are some particularly pressing challenges and commercial opportunities, which make it likely that further research on AI will actually take place:

  • The “arms race” between spam-prevention and bot-detection systems (the parts of web forms that essentially say, “prove you are a human, not a bot”) and ever-cleverer systems for evading them;
  • The need for games to provide ever more realistic “AI” features for the virtual characters in these games (games players and games writers unabashedly talk about the “AI” elements in these games);
  • The opportunity for social networking sites to provide increasingly realistic virtual companions for users to interact with (including immersive social networking sites like “Second Life”);
  • The constant need to improve the user experience of interacting with complex software; arguably the complex UI is the single biggest problem area, today, facing many mobile applications;
  • The constant need to improve the interface to large search databases, so that users can more quickly find material.

Since there is big money to be made from progressing solutions in each of these areas, we can assume that companies will be making some significant investments in the associated technology.

There’s also the prospect of a “tipping point” once some initial results demonstrate the breakthrough nature of some aspects of this field.  As J Storrs Hall puts it (in the “When” chapter of his book):

Once a baby [artificial] brain does advance far enough that it has clearly surpassed the bootstrap fallacy point… it might affect AI like the Wright brothers’ [1908] Paris demonstrations of their flying machines did a century ago.  After ignoring their successful first flight for years, the scientific community finally acknowledged it.  Aviation went from a screwball hobby to the rage of the age and kept that cachet for decades.  In particular, the amount of development took off enormously.  If we can expect a faint echo of that from AI, the early, primitive general learning systems will focus research considerably and will attract a lot of new resources.

Not only are there greater numbers of people potentially working on AI now, than ever before; they each have much more powerful hardware resources available to them.  Experiments with novel algorithms that previously would have tied up expensive and scarce supercomputers can nowadays be done on inexpensive hardware that is widely available.  (And once interesting results are demonstrated on low-powered hardware, there will be increased priority of access for variants of these same ideas to be run on today’s supercomputers.)

What’s more, the feedback mechanisms of general internet connectivity (sharing of results and ideas) and open source computing (sharing of algorithms and other source code) mean that each such researcher can draw upon greater resources than before, and participate in stronger collaborative projects.  For example, people can choose to participate in the “OpenCog” open source AI project.

Appendix: Further comments on the book “Beyond AI”

As well as making a case that progress in AI has been significant, another main theme of J Storrs Hall’s book “Beyond AI: Creating the conscience of the machine” is the question of whether hyperhuman AIs would be more moral than humans, as well as more intelligent.

The conclusion of his argument is, yes, these new brains will probably have a higher quality of ethical behaviour than humans have generally exhibited.  The final third of his book covers that topic, in a generally convincing way: he has a compelling analysis of topics such as free will, self-awareness, conscious introspection, and the role of ethical frameworks in avoiding the destructive behaviour of free-riders.  However, critically, it all depends on how these great brains are set up with regard to core purpose, and there are no easy answers.

Roko Mijic will be addressing this same topic in the UKH+ meeting “The Friendly AI Problem: how can we ensure that superintelligent AI doesn’t terminate us?”, which is being held on Saturday 23rd January.  (If you use Facebook, you can RSVP here to indicate whether you’re coming.  NB it’s entirely optional to RSVP.)

29 November 2009

The single biggest problem

Filed under: green, solar energy, UKH+, vision — David Wood @ 2:35 pm

Petra Söderling, my good friend and former colleague on the Symbian Foundation launch team, raises some important questions in a blogpost yesterday, Transhumans H+.  Petra remarked on the fact that I had included the text “UKH+ meetings secretary” on my new business card.  A TV program she watched recently had reminded her of the topic of transhumanism (often abbreviated to H+ or h+) – prompting her blogpost:

…I haven’t changed my mind, David. I still think this is not pressingly important or urgent. In my view, the single biggest problem we have at hand is that people are breeding like rabbits, and the planet cannot feed us all. Us rich westerners consume so much natural resources that just supporting our lifestyle would be a burden. But, we are not only idiots in our own consumption manners, we are idiots in showing the rest of the world that this is the preferred lifestyle. Our example leads to billions of people in developing and underdeveloped countries pursuing our way of living. This is done by unprecedented exploitation of resources everywhere.

We’re in a process of eating our home planet away, and helping the richest of us to live healthier and longer is no solution. What’s the point of living 150 years if you’re breathing manufactured air, all migrated to north and south poles from desert lands, and eating tomatos that are clone of a clone of a clone of a clone of a clone? As rich and clever as we are, I think we should solve first things first…

The mention of “first things first” and “single biggest problem” is music to my ears.  I’m currently engaged on a personal research program to try to clarify what, for me, should be the “first things” that deserve my own personal focus.  Having devoted the last 21 years of my work life to mobile software, particularly for smartphones, I’m now looking to determine where I should apply my skills and resources for the next phase of my professional life.

I completely agree with Petra that the current “western consumer lifestyle” is not sustainable.  As more and more people throughout the developing world adopt similar lifestyles, consuming more and more resources, the impact on our planet is becoming colossal.  It’s a very high priority to address this lack of sustainability.

But is the number of people on the planet – our population – the most important leverage point, to address this lack of sustainability?  There are at least four factors to consider:

  1. World population
  2. The resource consumption of the average person on the planet
  3. The output of processes for creating resources
  4. Side-effects of processes for creating resources.

Briefly, we are in big trouble if (1) × (2) exceeds (3), and/or if the side-effects (4) are problematic in their own right.

My view is that the biggest leverage will come from addressing factors (3.) and (4.), rather than (1.) and (2.).

For example, huge amounts of energy from the sun are hitting the earth the whole time.  To quote from chapter 25 of David MacKay’s first-class book “Sustainable energy without the hot air“,

…the correct statement about power from the Sahara is that today’s [global energy] consumption could be provided by a 1000 km by 1000 km square in the desert, completely filled with concentrating solar power. That’s four times the area of the UK. And if we are interested in living in an equitable world, we should presumably aim to supply more than today’s consumption. To supply every person in the world with an average European’s power consumption (125 kWh/d), the area required would be two 1000 km by 1000 km squares in the desert…
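
It’s straightforward to check MacKay’s arithmetic with a rough sketch.  The figure of about 15 W per square metre for concentrating solar power in deserts, and the world population of roughly 6.8 billion, are working assumptions here (I believe the former is close to the figure MacKay himself uses; treat both numbers as approximate):

```python
# Rough check of MacKay's Sahara numbers ("Sustainable energy - without the
# hot air", chapter 25).  The power density of ~15 W per square metre for
# concentrating solar power in deserts, and the world population figure,
# are treated as working assumptions for this sketch.

CSP_POWER_DENSITY_W_PER_M2 = 15.0          # concentrating solar power, desert
SQUARE_SIDE_M = 1_000_000.0                # 1000 km
WORLD_POPULATION = 6.8e9                   # circa 2010
EUROPEAN_CONSUMPTION_KWH_PER_DAY = 125.0   # per person, MacKay's figure

area_m2 = SQUARE_SIDE_M ** 2
power_from_one_square_w = CSP_POWER_DENSITY_W_PER_M2 * area_m2
print(f"One 1000 km x 1000 km square: {power_from_one_square_w / 1e12:.0f} TW")
# -> about 15 TW, roughly today's total world energy consumption.

per_person_w = EUROPEAN_CONSUMPTION_KWH_PER_DAY * 1000 / 24   # kWh/day -> W
world_demand_w = per_person_w * WORLD_POPULATION
print(f"Everyone at 125 kWh/day: {world_demand_w / 1e12:.0f} TW, "
      f"about {world_demand_w / power_from_one_square_w:.1f} such squares")
# -> roughly 35 TW, i.e. between two and three such squares - broadly
#    consistent with MacKay's "two 1000 km by 1000 km squares".
```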

In parallel with thoughtfully investigating this kind of massive-scale solar energy harvesting, it also makes sense to thoughtfully investigate massive-scale CO2 removal from the atmosphere (the topic of a blogpost I plan to write shortly) as well as other geo-engineering initiatives.  In line with the transhumanist philosophy I espouse, I’m keen to

support and encourage the thoughtful development and application of technology to significantly enhance human mental and physical capabilities – with profound possible consequences on both personal and global scales

There are, of course, large challenges facing attempts to create massive-scale solar energy harvesting and massive-scale CO2 removal from the atmosphere.  These challenges span technology, politics, economics, and, dare I say it, philosophy.

In a previous posting, The trend beyond green, I’ve spelt out some desired changes in mindset that I see as required, on a global scale:

  • rather than decrying technology as “just a technical fix”, we must be willing to embrace the new resources and opportunities that these technologies make available;
  • rather than seeking to somehow reverse human lifestyle and aspiration to that of a “simpler” time, we must recognise and support the deep and valid interests in human enhancements;
  • rather than thinking of death and decay as something that gives meaning to life, we must recognise that life reaches its fullest meaning and value in the absence of these scourges;
  • rather than seeing the status quo as somehow the pinnacle of existence, we must recognise the deep drawbacks in current society and philosophies, and be prepared to move forwards;
  • rather than seeing “natural” as somehow akin to “the best imaginable”, we must be prepared to engineer solutions that are “better than natural”;
  • rather than seeking to limit expectations, with comments such as “these kinds of enhancements might become possible in 100-200 years’ time”, we should recognise the profound possible synergies arising from the interplay of technologies that are individually accelerating and whose compound impact can be much larger.

Helping to accelerate these changes in mindset is one of the big challenges I’d like to adopt, in the next phase of my professional life.

Whatever course society adopts, to address our sustainability crisis, there will need to be some very substantial changes.  People embrace change much more willingly, if they see upside as well as downside in the change.  The H+ vision of the future I see is one of abundance (generated by the super-technology of the near future) along with societal harmony (peaceful coexistence) and ample opportunities for new growth and exploration.

To return in closing to the question raised earlier: what is the “single biggest problem” that most deserves our collective attention?  Is it population growth and demographics, global warming, shortage of energy, the critical instability of the world economic order, the potential for a new global pandemic, nuclear terrorism, or some other global existential risk?

In a way, the answer is “none of the above”.  Rather, the single biggest problem is that, globally, we are unable to collaborate sufficiently deeply and productively to develop and deploy solutions to the above issues.  This is a second-level problem.  The economic, political, and philosophical structures we have inherited from the past have very many positive aspects, but many drawbacks as well – drawbacks that are becoming ever more pressing as we see accelerating change in technology, resource usage, and communications.

16 November 2009

Essays on unlimited lifespans

Filed under: aging, UKH+ — David Wood @ 1:27 am

In a couple of weekends’ time, on Saturday 28th November, I’ll be chairing a UKH+ meeting,

  • Successes and challenges en route to unlimited human lifespans: Q&A on the Immortality Institute

The main speaker at the event will be Shannon Vyff, Chair of the strikingly-named “Immortality Institute” – which describes its purpose on its website as “advocacy and research for unlimited lifespans”.  I’ve briefly met Shannon a couple of times at conferences, and found her to be articulate and well-informed.  Earlier this year, I read and enjoyed the book Shannon wrote primarily for teenage readers, “21st century kids: a trip from the future to you” (see here for my review).

To prepare myself for the meeting on 28th November, I’ve started reading another book: “The scientific conquest of death: essays on infinite lifespans“.  This book is published by the Immortality Institute and consists of a series of essays by 19 different authors (including a chapter by Shannon).

Here’s an extract from the introduction to the book:

The mission of the Immortality Institute is to conquer the blight of involuntary death. Some would consider this goal as scientifically impossible. Some would regard it as hubris…

Is it possible that scientists – or at least humankind – will “conquer the blight of involuntary death?” If so, to what extent will we succeed? What is in fact possible today, and what do the experts predict for the future? Is such a thing as ‘immortality’ feasible? Moreover, is it desirable? What would it mean from a political, social, ethical and religious perspective?  This book will help to explore these questions…

How would this book be special? After careful consideration, the answer seemed clear: This should be the first truly multidisciplinary approach to the topic. We would discuss not only biological theories of aging, but also biomedical strategies to counter it. Moreover, we would consider alternative approaches such as medical nanotechnology, digitalization of personhood, and cryobiological preservation. But this would only be part of the whole.

We also wanted to tackle some of the questions that are usually left unanswered in the last chapter of scientific books: If we accept that radical life extension is a real scientific possibility, then where does that leave us? Would it create overpopulation, stagnation and perpetual boredom? How would it change our society, our culture, our values and our spirituality? If science allows us to vastly extend our life span, should we do so?

I plan to write another blogpost once I’m further through the book.

In the meantime, I’d like to share a comment I made a few months back on the online letter pages of The Times.  I was writing in response to a leader article “Live For Ever: The promise of more and more life will bring us all problems“, and in particular, to answer a question posed to me by another correspondent.  Here’s my reply:

To answer your question, what do I personally see as the benefits of extending healthy human lifespan?

In short, life is good. Healthy, vibrant life is particularly good. While I have so many things I still look forward to doing, I don’t want my life to end.

For example, I’d like to be able to share in the wonder and excitement of the scientific, engineering, artistic, and cultural improvements all throughout the present century – especially the development of “friendly super AI”. I’d like to have the time to explore many more places in the world, read many more books, learn much more mathematics, play golf on all the fine courses people talk about, and develop and deepen relations with wonderful people all over the world. I’d like to see and help my grandchildren to grow up, and their grandchildren to grow up.

Extending healthy lifespan will also have the benefit that the living wisdom and creativity of our elders will continue to be available to guide the rest of us through challenges and growth, rather than being extinguished.

In summary, I want to be alive and to actively participate when humankind moves to a higher level of consciousness, opportunity, accomplishment, wisdom, and civilisation – when we can (at last) systematically address the dreadful flaws that have been holding so many people back from their true potential.

I believe that most people have similar aspirations, but they learn to suppress them, out of a view that they are impractical. But science and engineering are on the point of making these aspirations practical, and we need new thinking to guide us through this grand, newly feasible opportunity.

I expect to revisit these topics during the meeting on 28th November.  I’m looking to gather a series of key questions that will highlight the core issues.

2 November 2009

Halloween nightmare scenario, early 2020’s

Filed under: AGI, friendly AI, Singularity, UKH+, UKTA — David Wood @ 5:37 pm

On the afternoon of Halloween 2009, Shane Legg ran through a wide-ranging set of material in his presentation “Machine Super Intelligence” to an audience of 50 people at the UKH+ meeting in Birkbeck College.

Slide 43 of 43 was the climax.  (The slides are available from Shane’s website, where you can also find links to YouTube videos of the event.)

It may be unfair of me to focus on the climax, but I believe it deserves a lot of attention.

Spoiler alert!

The climactic slide was entitled “A vision of the early 2020’s: the Halloween Scenario“.  It listed three assumptions about what will be the case by the early 2020’s, drew two conclusions, and then highlighted one big problem.

  1. First assumption – desktop computers with petaflop computing power will be widely available;
  2. Second assumption – AI researchers will have established powerful algorithms that explain and replicate deep belief networks;
  3. Third assumption – brain reinforcement learning will be fairly well understood.

The first assumption is a fairly modest extrapolation of current trends in computing, and isn’t particularly contentious.

The second assumption was, in effect, the implication of the first 30 or so slides of Shane’s talk, taking around 100 minutes of presentation time (interspersed with lots of audience Q&A, as is typical at UKH+ meetings).  People can follow the references from Shane’s talk (and in other material on his website) to decide whether they agree.

For example (from slides 25-26), an implementation of a machine intelligence algorithm called MC-AIXI can already learn to solve or play:

  • simple prediction problems
  • Tic-Tac-Toe
  • Paper-Scissors-Rock (a good example of a non-deterministic game)
  • mazes where it can only see locally
  • various types of Tiger games
  • simple computer games, e.g. Pac-Man

and is now being taught to learn checkers (also known as draughts).  Chess will be the next step.  Note that this algorithm does not start off with the rules or best-practice strategies for these games built in (that is, it is not a game-specific AI program): it works out how to play these games well from its general intelligence.
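
MC-AIXI itself – a Monte-Carlo approximation of Hutter’s AIXI model – is far beyond a blog-sized snippet, but the overall shape of a general learner that is given only observations and rewards, never the rules of the game, can be sketched roughly as follows.  This is a simplified, generic reinforcement-learning illustration of my own, not the MC-AIXI implementation Shane described:

```python
import random
from collections import defaultdict

# A heavily simplified "general" learner: it is told nothing about the rules
# of the environment, only what it observes and what reward it receives.
# This is an illustrative tabular reinforcement learner, not MC-AIXI.

class GeneralAgent:
    def __init__(self, actions, alpha=0.1, gamma=0.95, epsilon=0.1):
        self.actions = actions
        self.q = defaultdict(float)   # value estimates for (observation, action)
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def act(self, observation):
        if random.random() < self.epsilon:   # occasional exploration
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.q[(observation, a)])

    def learn(self, obs, action, reward, next_obs):
        best_next = max(self.q[(next_obs, a)] for a in self.actions)
        target = reward + self.gamma * best_next
        self.q[(obs, action)] += self.alpha * (target - self.q[(obs, action)])

# The same agent can be dropped, unchanged, into any environment that exposes
# a reset()/step() interface - noughts and crosses, Pac-Man, a maze, etc.:
#
#   obs = env.reset()
#   while not env.done:
#       action = agent.act(obs)
#       next_obs, reward = env.step(action)
#       agent.learn(obs, action, reward, next_obs)
#       obs = next_obs
```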

The third assumption was the implication of the remaining 12 slides, in which Shane described (amongst other topics) work on something called “restricted Boltzmann machines“.
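
For readers who haven’t met the term, a restricted Boltzmann machine is a two-layer stochastic neural network that learns to model the statistics of its inputs, typically trained with Hinton’s contrastive divergence procedure.  The bare-bones sketch below is a generic CD-1 trainer; the layer sizes, learning rate, and toy data are placeholders of my own, unrelated to the specific models Shane discussed:

```python
import numpy as np

# Bare-bones restricted Boltzmann machine trained with one step of
# contrastive divergence (CD-1).  Layer sizes, learning rate and the toy
# binary data are placeholders.

rng = np.random.default_rng(0)
n_visible, n_hidden, lr = 6, 3, 0.1
W = 0.01 * rng.standard_normal((n_visible, n_hidden))
b_visible = np.zeros(n_visible)
b_hidden = np.zeros(n_hidden)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_update(v0):
    # Positive phase: sample hidden units given a data vector.
    p_h0 = sigmoid(v0 @ W + b_hidden)
    h0 = (rng.random(p_h0.shape) < p_h0).astype(float)
    # Negative phase: one reconstruction step back down and up again.
    p_v1 = sigmoid(h0 @ W.T + b_visible)
    p_h1 = sigmoid(p_v1 @ W + b_hidden)
    # Gradient estimate: data statistics minus reconstruction statistics.
    return np.outer(v0, p_h0) - np.outer(p_v1, p_h1), v0 - p_v1, p_h0 - p_h1

data = (rng.random((20, n_visible)) < 0.5).astype(float)   # toy binary data
for epoch in range(100):
    for v in data:
        dW, db_v, db_h = cd1_update(v)
        W += lr * dW
        b_visible += lr * db_v
        b_hidden += lr * db_h
```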

As stated in slide 38, on brain reinforcement learning (RL):

This area of research is currently progressing very quickly.

New genetically modified mice allow researchers to precisely turn on and off different parts of the brain’s RL system in order to identify the functional roles of the parts.

I’ve asked a number of researchers in this area:

  • “Will we have a good understanding of the RL system in the brain before 2020?”

Typical answer:

  • “Oh, we should understand it well before then. Indeed, we have a decent outline of the system already.”

Adding up these three assumptions, the first conclusion is:

  • Many research groups will be working on brain-like AGI architectures

The second conclusion is that, inevitably:

  • Some of these groups will demonstrate some promising results, and will be granted access to the super-computers of the time – which will, by then, be exaflop.

But of course, it is when almost-human-level AGI algorithms, developed on petaflop computers, are let loose on exaflop supercomputers that machine super intelligence might suddenly come into being – with results that might be completely unpredictable.

On the other hand, Shane observes that people who are working on the program of Friendly AI do not expect to have made significant progress in the same timescale:

  • By the early 2020’s, there will be no practical theory of Friendly AI.

Recall that the goal of Friendly AI is to devise a framework for AI research that will ensure that any resulting AIs have a very high level of safety for humanity, no matter how super-intelligent they may become.  In this school of thought, after some time, all AI research would be constrained to adopt this framework, in order to avoid the risk of a catastrophic super-intelligence explosion.  However, by the end of Shane’s slides, it appears likely that the Friendly AI framework won’t be in place by the time we need it.

And that’s the Halloween nightmare scenario.

How should we respond to this scenario?

One response is to seek to somehow shift the weight of AI research away from other forms of AGI (such as MC-AIXI) and into Friendly AI.  This appears to be very hard, especially since research proceeds independently, in many different parts of the world.

A second response is to find reasons to believe that the Friendly AI project will have more time to succeed – in other words, reasons to believe that AGI will take longer to materialise than the early 2020’s mentioned above.  But given the progress that appears to be happening, that seems to me a reckless course of action.

Footnote: If anyone thinks they can make a good presentation on the topic of Friendly AI to a forthcoming UKH+ meeting, please get in touch!
