15 April 2010

Accelerating automation and the future of work

Filed under: AGI, Economics, futurist, Google, politics, regulation, robots — David Wood @ 2:45 am

London is full of pleasant surprises.

Yesterday evening, I travelled to The Book Club in Shoreditch, EC2A, and made my way to the social area downstairs.  “What’s your name?” asked the person at the door.  I gave my name, and in return received a stick-on badge saying

Hi, I’m David.

Talk to me about the future of humanity!

I was impressed.  How do they know I like to talk to people about the future of humanity?

Then I remembered that the whole event I was attending was under the aegis of a newly formed group calling itself “Future Human“.  It was their third meeting, over the course of just a few weeks – but the first I had heard about (and decided to attend).  Everyone’s badge had the same message.  About 120 people crammed into the downstairs room – making it standing room only (since there were only around 60 seats).  Apart from the shortage of seats, the event was well run, with good use of roaming mikes from the floor.

The event started with a quick-fire entertaining presentation by author and sci-fi expert Sam Jordison.  His opening question was blunt:

What can you do that a computer can’t do?

He then listed lots of occupations from the past which technology had rendered obsolete.  Since one of my grandfathers was the village blacksmith, I found a personal resonance with this point.  It will soon be the same for many existing professions, Sam said: computers are becoming better and better at all sorts of tasks which previously would have required creative human input.  Journalism is particularly under threat.  Likewise accountancy.  And so on, and so on.

In general terms, that’s a thesis I agree with.  For example, I anticipate a time before long when human drivers will be replaced by safer robot alternatives.

I quibble with the implication that, as existing jobs are automated, there will be no jobs left for humans to do.  Instead, I see that lots of new occupations will become important.  “Shape of Jobs to Come”, a report (PDF) by Fast Future Research, describes 20 jobs that people could be doing in the next 20 years:

  1. Body part maker
  2. Nano-medic
  3. Pharmer of genetically engineered crops and livestock
  4. Old age wellness manager/consultant
  5. Memory augmentation surgeon
  6. ‘New science’ ethicist
  7. Space pilots, tour guides and architects
  8. Vertical farmers
  9. Climate change reversal specialist
  10. Quarantine enforcer
  11. Weather modification police
  12. Virtual lawyer
  13. Avatar manager / devotees / virtual teachers
  14. Alternative vehicle developers
  15. Narrowcasters
  16. Waste data handler
  17. Virtual clutter organiser
  18. Time broker / Time bank trader
  19. Social ‘networking’ worker
  20. Personal branders

(See the original report for explanations of some of these unusual occupation names!)

In other words, as technology improves to remove existing occupations, new occupations will become significant – occupations that build in unpredictable ways on top of new technology.

But only up to a point.  In the larger picture, I agree with Sam’s point that even these new jobs will quickly come under the scope of rapidly improving automation.  The lifetime of occupations will shorten and shorten.  And people will typically spend fewer hours working each week (on paid tasks).

Is this a worry? Yes, if we assume that we need to work long hours, to justify our existence, or to earn sufficient income to look after our families.  But I disagree with these assumptions. Improved technology, wisely managed, should be able to result, not just in less labour left over for humans to do, but also in great material abundance – plenty of energy, food, and other resources for everyone.  We’ll become able – at last – to spend more of our time on activities that we deeply enjoy.

The panel discussion that followed touched on many of these points. The panellists – Peter Kirwan from Wired, Victor Henning from Mendeley, and Carsten Sorensen and Jannis Kallinikos from the London School of Economics – sounded lots of notes of optimism:

  • We shouldn’t create unnecessary distinctions between “human” and “machine”.  After all, humans are kinds of machines too (“meat machines“);
  • The best kind of intelligence combines human elements and machine elements – in what Google have called “hybrid intelligence“;
  • Rather than worrying about computers displacing humans, we can envisage computers augmenting humans;
  • In case computers become troublesome, we should be able to regulate them, or even to switch them off.

Again, in general terms, these are points I agree with.  However, I believe these tasks will be much harder to accomplish than the panel implied. To that extent, I believe that the panel were too optimistic.

After all, if we can barely regulate rapidly changing financial systems, we’ll surely find it even harder to regulate rapidly changing AI systems.  Before we’ve been able to work out if such-and-such an automated system is an improvement on its predecessors, that system may have caused too many rapid irreversible changes.

Worse, there could be a hard-to-estimate “critical mass” effect.  Rapidly accumulating intelligent automation is potentially akin to accumulating nuclear material until it unexpectedly reaches an irreversible critical mass.  The resulting “super cloud” system will presumably present very convincing arguments to us for why such-and-such changes in regulations make great sense.  The result could be outstandingly good – but equally, it could be outstandingly bad.

Moreover, it’s likely to prove very hard to “switch off the Internet” (or “switch off Google”).  We’ll be so dependent on the Internet that we’ll be unable to disconnect it, even though we recognise there are bad consequences.

If all of this happened in slow motion, we would be OK.  We’d be able to review it and debug it in real time.  However, the lesson from the recent economic crisis is that these changes can take place too quickly for human governments to intervene.  That’s why we need to ensure, ahead of time, that we have a good understanding of what’s happening.  And that’s why there should be lots more discussions of the sort that took place at Future Human last night.

The final question from the floor raised a great point: why isn’t this whole subject receiving prominence in the current UK general election debates?  My answer: It’s down to those of us who do see the coming problems to ensure that the issues get escalated appropriately.

Footnote: Regular readers will not be surprised if I point out, at this stage, that many of these same topics will be covered in the Humanity+ UK2010 event happening in Conway Hall, Holborn, London, on Saturday 24 April.  The panellists at the Future Human event were good, but I believe that the H+UK speakers will be even better!

8 April 2010

Video: The case for Artificial General Intelligence

Filed under: AGI, flight, Humanity Plus, Moore's Law, presentation, YouTube — David Wood @ 11:19 am

Here’s another short (<10 minute) video from me, building on one of the topics I’ve listed in the Humanity+ Agenda: the case for artificial general intelligence (AGI).

The discipline of having to fit a set of thoughts into a ten minute video is a good one!

Further reading: I’ve covered some of the same topics, in more depth, in previous blogposts, including:

For anyone who prefers to read the material as text, I append an approximate transcript.

My name is David Wood.  I’m going to cover some reasons for paying more attention to Artificial General Intelligence (AGI) – also known as super-human machine intelligence.  This field deserves significantly more analysis, resourcing, and funding over the coming decade.

Machines with super-human levels of general intelligence will include hardware and software, as part of a network of connected intelligence.  Their task will be to analyse huge amounts of data, review hypotheses about this data, discern patterns, propose new hypotheses, propose experiments which will provide valuable new data, and in this way, recommend actions to solve problems or take advantage of opportunities.

If that sounds too general, I’ll have some specific examples in a moment, but the point is to create a reasoning system that is, indeed, applicable to a wide range of problems.  That’s why it’s called Artificial General Intelligence.

In this way, these machines will provide a powerful supplement to existing human reasoning.

Here are some of the deep human problems that could benefit from the assistance of enormous silicon super-brains:

  • What uses of nanotechnology can be recommended, to safely boost the creation of healthy food?
  • What are the causes of different diseases – and how can we cure them?
  • Can we predict earthquakes – and even prevent them?
  • Are there safe geo-engineering methods that will head off the threat of global warming, without nasty side effects?
  • What changes, if any, should be made to the systems of regulating the international economy, to prevent dreadful market failures?
  • Which existential risks – risks that could drastically impact human civilisation – deserve the most attention?

You get the idea.  I’m sure you could add some of your own favourite questions to this list.

Some people may say that this is an unrealistic vision.  So, in answer, let me spell out the factors I see as enabling this kind of super-intelligence within the next few decades.  First is the accelerating pace of improvements in computer hardware.

This chart is from University of London researcher Shane Legg.  On a log-axis, it shows the exponentially increasing power of super-computers, all the way from 1960 to the present day and beyond.  It shows FLOPS – the number of floating point operations per second that a computer can do.  It goes all the way from kiloflops through megaflops, gigaflops, teraflops, petaflops, and is pointing towards exaflops.  If this trend continues, we’ll soon have supercomputers with at least as much computational power as a human brain.  Perhaps within less than 20 years.
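
To make the arithmetic behind that projection explicit, here is a minimal sketch in Python.  All three inputs are assumptions chosen for illustration – a 2010 starting point of roughly 1.76 petaflops (the Jaguar supercomputer), a doubling time of about 1.5 years, and 10^18 FLOPS as one of the higher published estimates for the human brain – so the output is a ballpark, not a forecast.

```python
import math

# All three inputs are assumptions for illustration, not claims:
flops_2010 = 1.76e15      # roughly the fastest TOP500 machine around 2009/2010 (Jaguar)
brain_flops = 1e18        # one of the higher published estimates for a human brain
doubling_years = 1.5      # assumed doubling time for top supercomputer performance

doublings_needed = math.log2(brain_flops / flops_2010)
years_needed = doubling_years * doublings_needed
print(f"~{doublings_needed:.1f} doublings, i.e. ~{years_needed:.0f} years, "
      f"if the historical trend continues")   # about 9 doublings, roughly 14 years
```

Swap in a lower brain estimate or a slower doubling time and the answer moves by a few years either way – which is the sense in which “less than 20 years” is plausible rather than proven.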

But will this trend continue?  Of course, there are often slowdowns in technological progress.  Skyscraper heights and the speeds of passenger aircraft are two examples.  The slowdown is sometimes due to intrinsic technical difficulties, but more often it reflects a lack of sufficient customer or public interest in even bigger or faster products.  After all, the technical skills that took mankind to the moon in 1969 could have taken us to Mars long before now, if there had been sufficient continuing public interest.

Specifically, in the case of Moore’s Law for exponentially increasing hardware power, industry experts from companies like Intel state that they can foresee at least 10 more years’ continuation of this trend, and there are plenty of ideas for innovative techniques to extend it even further.  It comes down to two things:

  • Is there sufficient public motivation in continuing this work?
  • And can some associated system integration issues be solved?

Mention of system issues brings me back to the list of factors enabling major progress with super-intelligence.  Next is improvement in software.  There’s lots of scope here.  There’s also additional power from networking ever larger numbers of computers together.  Another factor is the ever-increasing number of people with engineering skills, around the world, who are able to contribute to this area.  We have more and more graduates in relevant topics all the time.  Provided they can work together constructively, the rate of progress should increase.  We can also learn more about the structure of intelligence by analysing biological brains at ever finer levels of detail – by scanning and model-building.  Last, but not least, we have the question of motivation.

As an example of the difference that a big surge in motivation can make, consider the example of progress with another grand, historical engineering challenge – powered flight.

This example comes from Artificial Intelligence researcher J. Storrs Hall in his book “Beyond AI”.  People who had ideas about powered flight were, for centuries, regarded as cranks and madmen – a bit like people who, in our present day, have ideas about superhuman machine intelligence.  Finally, after many false starts, the Wright brothers made the necessary engineering breakthroughs at the start of the last century.  But even after they first flew, the field of aircraft engineering remained a sleepy backwater for five more years, while the Wright brothers kept quiet about their work and secured patent protection.  They did some sensational public demos in 1908, in Paris and in America.  Overnight, aviation went from a screwball hobby to the rage of the age and kept that status for decades.  Huge public interest drove remarkable developments.  It will be the same with demonstrated breakthroughs with artificial general intelligence.

Indeed, the motivation for studying artificial intelligence is growing all the time.  In addition to the deep human problems I mentioned earlier, we have a range of commercially-significant motivations that will drive business interest in this area.  This includes ongoing improvements in search, language translation, intelligent user interfaces, games design, and spam detection systems – where there’s already a rapid “arms race” between writers of ever more intelligent “bots” and people who seek to detect and neutralise these bots.

AGI is also commercially important to reduce costs from support call systems, and to make robots more appealing in a wide variety of contexts.  Some people will be motivated to study AGI for more philosophical reasons, such as to research ideas about minds and consciousness, to explore the possibility of uploading human consciousness into computer systems, and for the sheer joy of creating new life forms.  Last, there’s also the powerful driver that if you think a competitor may be near to a breakthrough in this area, you’re more likely to redouble your efforts.  That adds up to a lot of motivation.

To put this on a diagram:

  • We have increasing awareness of human-level reasons for developing AGI.
  • We also have maturing sub-components for AGI, including improved algorithms, improved models of the mind, and improved hardware.
  • With the Internet and open collaboration, we have an improved support infrastructure for AGI research.
  • Then, as mentioned before, we have powerful commercial motivations.
  • Adding everything up, we should see more and more people working in this space.
  • And it should see rapid progress in the coming decade.

An increased focus on Artificial General Intelligence is part of what I’m calling the Humanity+ Agenda.  This is a set of 20 inter-linked priority areas for the next decade, spread over five themes: Health+, Education+, Technology+, Society+, and Humanity+.  Progress in the various areas should reinforce and support progress in other areas.

I’ve listed Artificial General Intelligence as part of the project to substantially improve our ability to reason and learn: Education+.  One factor that strongly feeds into AGI is improvements in ICT – including ongoing improvements in both hardware and software.  If you’re not sure what to study or which field to work in, ICT should be high on your list of fields to consider.  You can also consider the broader topic of helping to publicise information about accelerating technology – so that more and more people become aware of the associated opportunities, risks, context, and options.  To be clear, there are risks as well as opportunities in all these areas.  Artificial General Intelligence could have huge downsides as well as huge upsides, if not managed wisely.  But that’s a topic for another day.

In the meantime, I eagerly look forward to working with AGIs to help address all of the top priorities listed as part of the Humanity+ Agenda.

31 January 2010

In praise of hybrid AI

Filed under: AGI, brain simulation, futurist, IA, Singularity, UKH+, uploading — David Wood @ 1:28 am

In his presentation last week at the UKH+ meeting “The Friendly AI Problem: how can we ensure that superintelligent AI doesn’t terminate us?“, Roko Mijic referred to the plot of the classic 1956 science fiction film “Forbidden Planet“.

The film presents a mystery about events at a planet, Altair IV, situated 16 light years from Earth:

  • What force had destroyed nearly every member of a previous spacecraft visiting that planet?
  • And what force had caused the Krell – the original inhabitants of Altair IV – to be killed overnight, whilst at the peak of their technological powers?

A 1950s film might be expected to point a finger of blame at nuclear weapons, or other weapons of mass destruction.  However, the problem turned out to be more subtle.  The Krell had created a machine that magnified the power of their own thinking, and acted on that thinking.  So the Krell all became even more intelligent and more effective than before.  You may wonder, what’s the problem with that?

A 2002 Steven B. Harris article in the Skeptic magazine, “The return of the Krell Machine: Nanotechnology, the Singularity, and the Empty Planet Syndrome“, takes up the explanation, quoting from the film.  The Krell had created:

a big machine, 8000 cubic miles of klystron relays, enough power for a whole population of creative geniuses, operated by remote control – operated by the electromagnetic impulses of individual Krell brains… In return, that machine would instantaneously project solid matter to any point on the planet. In any shape or color they might imagine. For any purpose…! Creation by pure thought!

But … the Krell forgot one deadly danger – their own subconscious hate and lust for destruction!

And so, those mindless beasts of the subconscious had access to a machine that could never be shut down! The secret devil of every soul on the planet, all set free at once, to loot and maim! And take revenge… and kill!

Researchers at the Singularity Institute for Artificial Intelligence (SIAI) – including Roko – give a lot of thought to the general issue of unintended consequences of amplifying human intelligence.  Here are two ways in which this amplification could go disastrously wrong:

  1. As in the Forbidden Planet scenario, this amplification could unexpectedly magnify feelings of ill-will and negativity – feelings which humans sometimes manage to suppress, but which can still exert strong influence from time to time;
  2. The amplification could magnify principles that generally work well in the usual context of human thought, but which can have bad consequences when taken to extremes.

As an example of the second kind, consider the general principle that a free market economy of individuals and companies who pursue an enlightened self-interest frequently produces goods that improve overall quality of life (in addition to generating income and profits).  However, magnifying this principle is likely to result in occasional disastrous economic crashes.  A system of computers that was programmed to maximise income and profits for its owners could, therefore, end up destroying the economy.  (This example is taken from the book “Beyond AI: Creating the Conscience of the Machine” by J. Storrs Hall.  See here for my comments on other ideas from that book.)

Another example of the second kind: a young, fast-rising leader within an organisation may be given more and more responsibility, on account of his or her brilliance, only for that brilliance to subsequently push the organisation towards failure if the general “corporate wisdom” is increasingly neglected.  Likewise, there is the risk of a new  supercomputer impressing human observers (politicians, scientists, and philosophers alike, amongst others) by the brilliance of its initial recommendations for changes in the structure of human society.  But if operating safeguards are removed (or disabled – perhaps at the instigation of the supercomputer itself) we could find that the machine’s apparent brilliance results in disastrously bad decisions in unforeseen circumstances.  (Hmm, I can imagine various writers calling for the “deregulation of the supercomputer”, in order to increase the income and profit it generates – similar to the way that many people nowadays are still resisting any regulation of the global financial system.)

That’s an argument for being very careful to avoid abdicating human responsibility for the oversight and operation of computers.  Even if we think we have programmed these systems to observe and apply human values, we can’t be sure of the consequences when these systems gain more and more power.

However, as our computer systems increase their speed and sophistication, it’s likely to prove harder and harder for comparatively slow-brained humans to be able to continue meaningfully cross-checking and monitoring the arguments raised by the computer systems in favour of specific actions.  It’s akin to humans trying to teach apes calculus, in order to gain approval from apes for how much thrust to apply in a rocket missile system targeting a rapidly approaching earth-threatening meteorite.  The computers may well decide that there’s no time to try to teach us humans the deeply complex theory that justifies whatever urgent decision they want to take.

And that’s a statement of the deep difficulty facing any “Friendly AI” program.

There are, roughly speaking, five possible ways people can react to this kind of argument.

The first response is denial – people say that there’s no way that computers will reach the level of general human intelligence within the foreseeable future.  In other words, this whole discussion is seen as being a fantasy.  However, it comes down to a question of probability.  Suppose you’re told that there’s a 10% chance that the airplane you’re about to board will explode high in the sky, with you in it.  10% isn’t a high probability, but since the outcome is so drastic, you would probably decide this is a risk you need to avoid.  Even if there’s only a 1% chance of the emergence of computers with human-level intelligence in (say) the next 20 years, it’s something that deserves serious further analysis.
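
The shape of that reasoning can be made concrete with a toy expected-value comparison.  Every number below is invented purely to illustrate the structure of the argument – none of them quantifies the actual risk:

```python
# Illustrative numbers only: the point is the structure of the reasoning,
# not the particular values.
p_event = 0.01                  # assumed 1% chance of human-level AI within 20 years
loss_if_unprepared = 1_000_000  # arbitrary units of harm (or foregone benefit)
loss_if_prepared = 10_000       # much smaller loss if the risk was analysed ahead of time
cost_of_analysis = 500          # the cost of doing that serious analysis now

expected_loss_ignore = p_event * loss_if_unprepared
expected_loss_analyse = p_event * loss_if_prepared + cost_of_analysis
print(expected_loss_ignore, expected_loss_analyse)   # 10000.0 vs 600.0
```

A small probability multiplied by a drastic outcome can still dominate the comparison, which is why even the 1% case deserves serious further analysis.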

The second response is to seek to stop all research into AI, by appeal to a general “precautionary principle” or similar.  This response is driven by fear.  However, any such ban would need to apply worldwide, and would surely be difficult to police.  It’s too hard to draw the boundary between “safe computer science” and “potentially unsafe computer science” (the latter being research that could increase the probability of the emergence of computers with human-level intelligence).

The third response is to try harder to design the right “human values” into advanced computer systems.  However, as Roko argued in his presentation, there is enormous scope for debating what these right values are.  After all, society has been arguing over human values since the beginning of recorded history.  Existing moral codes probably all have greater or lesser degrees of internal tension or contradiction.  In this context, the idea of “Coherent Extrapolated Volition” has been proposed:

Our coherent extrapolated volition is our choices and the actions we would collectively take if we knew more, thought faster, were more the people we wished we were, and had grown up closer together.

As noted in the Wikipedia article on Friendly Artificial Intelligence,

Eliezer Yudkowsky believes a Friendly AI should initially seek to determine the coherent extrapolated volition of humanity, with which it can then alter its goals accordingly. Many other researchers believe, however, that the collective will of humanity will not converge to a single coherent set of goals even if “we knew more, thought faster, were more the people we wished we were, and had grown up closer together.”

A fourth response is to adopt emulation rather than design as the key principle for obtaining computers with human-level intelligence.  This involves the idea of “whole brain emulation” (WBE) – creating a low-level copy of a human brain.  The idea is sometimes also called “uploads”, since the consciousness of the human brain may end up being uploaded onto the silicon emulation.

Oxford philosopher Anders Sandberg reports on his blog how a group of Singularity researchers reached a joint conclusion, at a workshop in October following the Singularity Summit, that WBE was a safer route to follow than designing AGI (Artificial General Intelligence):

During the workshop afterwards we discussed a wide range of topics. Some of the major issues were: what are the limiting factors of intelligence explosions? What are the factual grounds for disagreeing about whether the singularity may be local (self-improving AI program in a cellar) or global (self-improving global economy)? Will uploads or AGI come first? Can we do anything to influence this?

One surprising discovery was that we largely agreed that a singularity due to emulated people… has a better chance given current knowledge than AGI of being human-friendly. After all, it is based on emulated humans and is likely to be a broad institutional and economic transition. So until we think we have a perfect friendliness theory we should support WBE – because we could not reach any useful consensus on whether AGI or WBE would come first. WBE has a somewhat measurable timescale, while AGI might crop up at any time. There are feedbacks between them, making it likely that if both happens it will be closely together, but no drivers seem to be strong enough to really push one further into the future. This means that we ought to push for WBE, but work hard on friendly AGI just in case…

However, it seems to me that the above “Forbidden Planet” argument identifies a worry with this kind of approach.  Even an apparently mild and deeply humane person might be playing host to “secret devils” – “their own subconscious hate and lust for destruction”.  Once the emulated brain starts running on more powerful hardware, goodness knows what these “secret devils” might do.

In view of the drawbacks of each of these four responses, I end by suggesting a fifth.  Rather than pursuing an artificial intelligence which would run separately from human intelligence, we should explore the creation of hybrid intelligence.  Such a system involves making humans smarter at the same time as the computer systems become smarter.  The primary source for this increased human smartness is closer links with the ever-improving computer systems.

In other words, rather than just talking about AI – Artificial Intelligence – we should be pursuing IA – Intelligence Augmentation.

For a fascinating hint about the benefits of hybrid AI, consider the following extract from a recent article by former world chess champion Garry Kasparov:

In chess, as in so many things, what computers are good at is where humans are weak, and vice versa. This gave me an idea for an experiment. What if instead of human versus machine we played as partners? My brainchild saw the light of day in a match in 1998 in León, Spain, and we called it “Advanced Chess.” Each player had a PC at hand running the chess software of his choice during the game. The idea was to create the highest level of chess ever played, a synthesis of the best of man and machine.

Although I had prepared for the unusual format, my match against the Bulgarian Veselin Topalov, until recently the world’s number one ranked player, was full of strange sensations. Having a computer program available during play was as disturbing as it was exciting. And being able to access a database of a few million games meant that we didn’t have to strain our memories nearly as much in the opening, whose possibilities have been thoroughly catalogued over the years. But since we both had equal access to the same database, the advantage still came down to creating a new idea at some point…

Even more notable was how the advanced chess experiment continued. In 2005, the online chess-playing site Playchess.com hosted what it called a “freestyle” chess tournament in which anyone could compete in teams with other players or computers. Normally, “anti-cheating” algorithms are employed by online sites to prevent, or at least discourage, players from cheating with computer assistance. (I wonder if these detection algorithms, which employ diagnostic analysis of moves and calculate probabilities, are any less “intelligent” than the playing programs they detect.)

Lured by the substantial prize money, several groups of strong grandmasters working with several computers at the same time entered the competition. At first, the results seemed predictable. The teams of human plus machine dominated even the strongest computers. The chess machine Hydra, which is a chess-specific supercomputer like Deep Blue, was no match for a strong human player using a relatively weak laptop. Human strategic guidance combined with the tactical acuity of a computer was overwhelming.

The surprise came at the conclusion of the event. The winner was revealed to be not a grandmaster with a state-of-the-art PC but a pair of amateur American chess players using three computers at the same time. Their skill at manipulating and “coaching” their computers to look very deeply into positions effectively counteracted the superior chess understanding of their grandmaster opponents and the greater computational power of other participants. Weak human + machine + better process was superior to a strong computer alone and, more remarkably, superior to a strong human + machine + inferior process.

The terminology “Hybrid Intelligence” was used in a recent presentation at the University of Washington by Google’s VP of Research & Special Initiatives, Alfred Z. Spector.  My thanks to John Pagonis for sending me a link to a blog post by Greg Linden which in turn provided commentary on Al Spector’s talk:

What was unusual about Al’s talk was his focus on cooperation between computers and humans to allow both to solve harder problems than they might be able to otherwise.

Starting at 8:30 in the talk, Al describes this as a “virtuous cycle” of improvement using people’s interactions with an application, allowing optimizations and features like learning to rank, personalization, and recommendations that might not be possible otherwise.

Later, around 33:20, he elaborates, saying we need “hybrid, not artificial, intelligence.” Al explains, “It sure seems a lot easier … when computers aren’t trying to replace people but to help us in what we do. Seems like an easier problem …. [to] extend the capabilities of people.”

Al goes on to say the most progress on very challenging problems (e.g. image recognition, voice-to-text, personalized education) will come from combining several independent, massive data sets with a feedback loop from people interacting with the system. It is an “increasingly fluid partnership between people and computation” that will help both solve problems neither could solve on their own.

I’ve got more to say about Al Spector’s talk – but I’ll save that for another day.

Footnote: Anders Sandberg is one of the confirmed speakers for the Humanity+, UK 2010 event happening in London on 24th April.  His chosen topic has several overlaps with what I’ve discussed above:

13 January 2010

AI: why, and when

Filed under: AGI, usability — David Wood @ 4:26 pm

Here’s a good question, raised by Paul Beardow:

One question that always rattles around in my mind is “why are we trying to recreate the human mind anyway?” We have billions of those already…

You can build something that appears to be human, but what is the point of that? Why chase a goal that doesn’t actually provide us with more than we have already?

Paul also says,

What I don’t want is AI in products so that they have their own personality, but a better understanding of my own wishes and desires in how that product should interact with me…

I personally also really don’t think that logic by itself can lead to a system that can evolve human-like imagination, feelings or personality, nor that the human mind can be reduced to being a machine. It has elementary parts, but the constant rebuilding and evolving of information doesn’t really follow any logical rules that can be programmed. The structure of the brain depends on what happens to us during the day and how we interpret it according to the situation. That defies logic most of the time and is constantly evolving and changing.

My answer: there are at least six reasons why people are pursuing the goal of human-like AI.

1. Financial savings in automated systems

We’re already used to encountering automated service systems when using the phone (eg to book a cinema ticket: “I think you’re calling about Kingston upon Thames – say Yes or No”) or when navigating a web form or other user interface.  These systems provoke a mixture of feelings in the people who use them.  I often become frustrated, thinking it would be faster to speak directly to a “real human being”.  But on other occasions the automation works surprisingly well.

Widening the applicability of such systems, into more open-ended environments, will require engineering much more human-style “common sense” into these automated systems.  The research to accomplish this may cost lots of money, but once it’s working, it could enable considerable cost savings in service provision, as real human beings can be replaced in a system by smart pieces of silicon.

2. Improving game play

A related motivation is as follows: games designers want to program in human-level intelligence into characters in games, so that these artificial entities manifest many of the characteristics of real human participants.

By the way: electronic games are big money!  As the description of tonight’s RSA meeting “Why games are the 21st century’s most serious business” puts it:

Why should we be taking video games more seriously?

  • In 2008 Nintendo overtook Google to become the world’s most profitable company per employee.
  • The South Korean government will invest $200 billion into its video games industry over the next 4 years.
  • The trading of virtual goods within games is a global industry worth over $10 billion a year.
  • Gaming boasts the world’s fastest-growing advertising market.

3. Improved user experience with complex applications

As well as reducing cost, human-level AI can in principle improve the experience of users while interacting with complex applications.

Rather than users thinking, “No you stupid machine, why don’t you realise what I’m trying to do…”, they will be pleasantly surprised: “Ah yes, that was in fact what I was trying to accomplish – how did you manage to figure that out?”

It’s as Paul says:

What I … want … in products [is]… a better understanding of my own wishes and desires in how that product should interact with me

These are products with (let’s say it) much more “intelligence” than at present.  They observe what is happening, and can infer motivation.  I call this AI.

4. A test of scientific models of the human mind

A different kind of motivation for studying human-level AI is to find ways of testing our understanding of the human mind.

For example, I think that creativity can be achieved by machines, following logical rules.  (The basic rules are: generate lots of ideas, by whatever means, and then choose the ideas which have interesting consequences.)  But it is good to test this.  So, computers can be programmed to mimic the possible thought patterns of great composers, and we can decide whether the output is sufficiently “creative”.

(There’s already quite a lot of research into this.  For one starting point, see the EE Times article “Composer harnesses artificial intelligence to create music“.)
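
As a toy illustration of that generate-and-choose recipe, here is a minimal sketch that “composes” by producing many random candidate melodies and keeping the one that a simple scoring rule rates as most interesting.  The melody representation and the scoring heuristic are invented for the example; real systems in this area use far richer generators and evaluators.

```python
import random

NOTES = list(range(60, 72))   # MIDI pitches C4..B4

def generate(length=8):
    # Step 1: generate lots of ideas, by whatever means (here: purely at random)
    return [random.choice(NOTES) for _ in range(length)]

def interest(melody):
    # Step 2: choose the ideas with interesting consequences -- here a toy
    # heuristic that rewards pitch variety but penalises large leaps
    variety = len(set(melody))
    leap_penalty = sum(abs(a - b) for a, b in zip(melody, melody[1:]))
    return variety - 0.3 * leap_penalty

candidates = [generate() for _ in range(5000)]
best = max(candidates, key=interest)
print("best of 5000 random melodies:", best, "score:", round(interest(best), 2))
```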

Similarly, it will be fascinating to hear the views of human-level AIs about (for example) the “Top 5 Unsolved Brain Mysteries“.

5. To find answers to really tough, important questions

The next motivation concerns the desire to create AIs with considerably greater than human-level AI.  Assuming that human-level AI is a point en route to that next destination, it’s therefore an indirect motivation for creating human-level AI.

The motivation here is to ask superAIs for help with really tough, difficult questions, such as:

  • What are the causes – and the cures – for different diseases?
  • Are there safe geoengineering methods that will head off the threat of global warming, without nasty side effects?
  • What changes, if any, should be made to the systems of regulating the international economy, to prevent dreadful market failures?
  • What uses of nanotechnology can be recommended, to safely boost the creation of healthy food?
  • What is the resolution of the conflict between theories of gravity and theories of all the other elementary forces?

6. To find ways of extending human life and expanding human experience

If the above answers aren’t sufficient, here’s one more, which attracts at least some researchers to the topic.

If some theories of AI are true, it might be possible to copy human awareness and consciousness from residence in a biological brain into residence inside silicon (or some other new computing substrate).  If so, then it may open new options for continued human consciousness, without having to depend on the frailty of a decaying human body.

This may appear a very slender basis for hope for significantly longer human lifespan, but it can be argued that all the other bases for such hope are equally slender, if not even less plausible.

OK, that’s enough answers for “why”.  But what about the question “when”?

In closing, let me quickly respond to a comment by Martin Budden:

I’m not saying that I don’t believe that there will be advances in AI. On the contrary I believe, in the course of time, there will be real and significant advances in “general AI”. I just don’t believe that these advances will be made in the next decade.

What I’d like, at this point, is to be able to indicate some kind of provisional roadmap (also known as “work breakdown”) for when stepping stones of progress towards AGI might happen.

Without such a roadmap, it’s too difficult to decide when larger steps of progress are likely.  It’s just a matter of different people appearing to have different intuitions.

To be clear, discussions of Moore’s Law aren’t sufficient to answer this question.  Progress with the raw power of hardware is one thing, but what we need here is an estimate of progress with software.

Sadly, I’m not aware of any such breakdown.  If anyone knows one, please speak up!

Footnote: I guess the best place to find such a roadmap will be at the forthcoming “Third Conference on Artificial General Intelligence” being held in Lugano, Switzerland, on 5-8 March this year.

11 January 2010

AI, buggy software, and the Singularity

Filed under: AGI, Singularity — David Wood @ 12:00 am

I recently looked at three questions about the feasibility of significant progress with AI.  I’d like to continue that investigation, by looking at four more questions.

Q4: Given that all software is buggy, won’t this prevent the creation of any viable human-level AI?

Some people with a long involvement in software aren’t convinced that we can write software of sufficient quality, at the level of complexity required, for AI at the human level (or beyond).  It seems to them that complex software is too unreliable.

It’s true that the software we use on a day-by-day basis – whether on a desktop computer, on a mobile phone, or via a web server – tends to manifest nasty bugs from time to time.  The more complex the system, the greater the likelihood of debilitating defects in the interactions between different subcomponents.

However, I don’t see this observation as ruling out the development of software that can manifest advanced AI.  That’s for two reasons:

First, different software projects vary in their required quality level.  Users of desktop software have become at least partially tolerant of defects in that software.  As users, we complain, but it’s not the end of the world, and we generally find workarounds.  As a result, manufacturers release software even though there are still bugs in it.  However, for mission-critical software, the quality level is pushed a lot higher.  Yes, it’s harder to create software with high reliability; but it can be done.

There are research projects underway to bring significantly higher quality software to desktop systems too.  For example, here’s a description of a Microsoft Research project, which is (coincidentally) named “Singularity”:

Singularity is a research project focused on the construction of dependable systems through innovation in the areas of systems, languages, and tools. We are building a research operating system prototype (called Singularity), extending programming languages, and developing new techniques and tools for specifying and verifying program behavior.

Advances in languages, compilers, and tools open the possibility of significantly improving software. For example, Singularity uses type-safe languages and an abstract instruction set to enable what we call Software Isolated Processes (SIPs). SIPs provide the strong isolation guarantees of OS processes (isolated object space, separate GCs, separate runtimes) without the overhead of hardware-enforced protection domains. In the current Singularity prototype SIPs are extremely cheap; they run in ring 0 in the kernel’s address space.

Singularity uses these advances to build more reliable systems and applications. For example, because SIPs are so cheap to create and enforce, Singularity runs each program, device driver, or system extension in its own SIP. SIPs are not allowed to share memory or modify their own code. As a result, we can make strong reliability guarantees about the code running in a SIP. We can verify much broader properties about a SIP at compile or install time than can be done for code running in traditional OS processes. Broader application of static verification is critical to predicting system behavior and providing users with strong guarantees about reliability.

There would be a certain irony if techniques from the Microsoft Singularity project were used to create a high-reliability AI system that in turn was involved in the Technological Singularity.

Second, even if software has defects, that doesn’t (by itself) prevent it from being intelligent.  After all, the human brain itself has many defects – see my blogpost “The human mind as a flawed creation of nature”.  Sometimes we think much better after a good night’s rest!  The point is that the AI algorithms can include aspects of fault tolerance.
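
As a small sketch of what “fault tolerance in the algorithms” can mean in practice, consider running an unreliable subcomponent several times and taking the majority answer.  The flaky component below is a stand-in invented for the example; the pattern, not the particular function, is the point.

```python
import random
from collections import Counter

def flaky_double(x):
    # Stand-in for a buggy subcomponent: usually correct, occasionally wrong
    if random.random() < 0.1:                 # 10% chance the defect bites
        return 2 * x + random.randint(1, 5)   # a wrong answer
    return 2 * x

def fault_tolerant_double(x, runs=5):
    # Redundancy plus majority voting masks occasional defects
    answers = [flaky_double(x) for _ in range(runs)]
    return Counter(answers).most_common(1)[0][0]

print(fault_tolerant_double(21))   # almost always 42, despite the 10% per-call defect rate
```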

Q5: Given that we’re still far from understanding the human mind, aren’t we bound to be a long way from creating a viable human-level AI?

It’s often said that the human mind has deeply mysterious elements, such as consciousness, self-awareness, and free will.  Since there’s little consensus about these aspects of the human mind, it’s said to be unlikely that a computer emulation of these features will arrive any time soon.

However, I disagree that we have no understanding of these aspects of the human mind.  There’s a broad consensus among many philosophers and practitioners alike that the main operation of the human mind is well explained by one or other variant of “physicalism”.  As the Wikipedia article on the Philosophy of Mind states:

Most modern philosophers of mind adopt either a reductive or non-reductive physicalist position, maintaining in their different ways that the mind is not something separate from the body. These approaches have been particularly influential in the sciences, especially in the fields of sociobiology, computer science, evolutionary psychology and the various neurosciences…

Reductive physicalists assert that all mental states and properties will eventually be explained by scientific accounts of physiological processes and states. Non-reductive physicalists argue that although the brain is all there is to the mind, the predicates and vocabulary used in mental descriptions and explanations are indispensable, and cannot be reduced to the language and lower-level explanations of physical science. Continued neuroscientific progress has helped to clarify some of these issues.

The book I mentioned previously, “Beyond AI” by J Storrs Hall, devotes several chapters to filling in aspects of this explanation.

It’s true that there’s still scope for head-scratching debates on what philosopher David Chalmers calls “the hard problem of consciousness”, which has various formulations:

  • “Why should physical processing give rise to a rich inner life at all?”
  • “How is it that some organisms are subjects of experience?”
  • “Why does awareness of sensory information exist at all?”
  • “Why is there a subjective component to experience?”…

However, none of these questions, by themselves, should prevent the construction of a software system that will be able to process questions posed in natural human language, and to give high-quality, humanly-understandable answers.  When that happens, the system will very probably seek to convince us that it has a similar inner conscious life to the one we have.  As J. Storrs Hall says, we’ll probably believe it.

Q6: Is progress with narrow fields of AI really relevant to the problem of general AI?

Martin Budden comments:

I don’t consider the advances in machine translation over the past decade an advance in AI, I more consider them the result of brute force analysis on huge quantities of text. I wouldn’t consider a car that could safely drive itself along a motorway an advance in AI, rather it would be the integration of a number of existing technologies. I don’t really consider the improvement of an algorithm that does a specific thing (search, navigate, play chess) an advance in AI, since generally such an improvement cannot be used outside its narrow field of application.

My own view is that these advances do help, in the spirit of “divide and conquer”.  I see the human mind as being made up of modules, rather than being some intractable whole.  Improving ability in, for example, translating text, or in speech recognition, will help set the scene for eventual general AI.

It’s true that some aspects of the human mind will prove harder to emulate than others – such as the ability to notice and form new concepts.  It may be the case that a theoretical breakthrough with this aspect will enable much faster overall progress, which will be able to leverage the work done on other modules.

Q7: With so many unknowns, isn’t all this speculation about AI futile?

It’s true that no one can predict, with any confidence, the date at which specific breakthrough advances in general AI are likely to happen.  The best that someone can achieve is a distribution of different dates with different probabilities.
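
To make “a distribution of different dates with different probabilities” concrete, here is a toy sketch that turns an assumed constant per-year chance of the key breakthroughs into a cumulative probability over a decade.  The 3%-per-year figure, and the independence assumption, are invented purely for illustration:

```python
# Assumed, purely illustrative: a constant 3% chance per year of the key
# breakthroughs arriving, with each year treated as independent.
per_year_chance = 0.03
p_not_yet = 1.0
for year in range(2011, 2021):
    p_not_yet *= (1 - per_year_chance)
    print(year, f"cumulative probability so far: {1 - p_not_yet:.0%}")
# By 2020 the cumulative figure is about 26% -- small yearly chances add up.
```

Different intuitions amount to different per-year numbers; the value of a roadmap would be to tie those numbers to identifiable stepping stones rather than to gut feel.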

However, I don’t accept any argument that “there’s been no fundamental breakthroughs in the last sixty years, so there can’t possibly be any fundamental breakthroughs in (say) the next ten years”.  That would be an invalid extrapolation.

That would be similar to the view expressed in 1903 by the distinguished astronomer and mathematician Simon Newcomb:

“Aerial flight is one of that class of problems with which man can never cope.”

Newcomb was no fool: he had good reasons for his scepticism.  As explained in the Wikipedia article about Newcomb:

In the October 22, 1903 issue of The Independent, Newcomb wrote that even if a man flew he could not stop. “Once he slackens his speed, down he begins to fall. Once he stops, he falls as a dead mass.” In addition, he had no concept of an airfoil. His “aeroplane” was an inclined “thin flat board.” He therefore concluded that it could never carry the weight of a man. Newcomb was specifically critical of the work of Samuel Pierpont Langley, who claimed that he could build a flying machine powered by a steam engine and whose initial efforts at flight were public failures…

Newcomb, apparently, was unaware of the Wright Brothers efforts whose [early] work was done in relative obscurity.

My point is that there does not seem to be any valid fundamental reason why the functioning of a human mind cannot be emulated via software; we may be just two or three good breakthroughs away from solving the remaining key challenges.  With the close attention of many commercial interests, and with the accumulation of fragments of understanding, the chances improve of some of these breakthroughs happening sooner rather than later.

9 January 2010

Progress with AI

Filed under: AGI, books, m2020, Moore's Law, UKH+ — David Wood @ 9:47 am

Not everyone shares my view that AI is going to become a more and more important field during the coming decade.

I’ve received a wide mix of feedback in response to my recent blogposts, and to comments I’ve made in other discussion forums, about the growth of AI.

Below, I list some of the questions people have raised – along with my answers.

Note: my answers below are informed by (among other sources) the 2007 book “Beyond AI: creating the conscience of the machine“, by J Storrs Hall, that I’ve just finished reading.

Q1: Doesn’t significant progress with AI presuppose the indefinite continuation of Moore’s Law, which is suspect?

There are three parts to my answer.

First, Moore’s Law for exponential improvements in individual hardware capability seems likely to hold for at least another five years, and there are many ideas for new semiconductor innovations that would extend the trend considerably further.  There’s a good graph of improvements in supercomputer power stretching back to 1960 on Shane Legg’s website, along with associated discussion.

Dylan McGrath, writing in EE Times in June 2009, reported views from iSuppli Corp that “Equipment cost [will] hinder Moore’s Law in 2014“:

Moore’s Law will cease to drive semiconductor manufacturing after 2014, when the high cost of chip manufacturing equipment will make it economically unfeasible to do volume production of devices with feature sizes smaller than 18nm, according to iSuppli Corp.

While further advances in shrinking process geometries can be achieved after the 20- to 18nm nodes, the rising cost of chip making equipment will relegate Moore’s Law to the laboratory and alter the fundamental economics of the semiconductor industry, iSuppli predicted.

“At those nodes, the industry will start getting to the point where semiconductor manufacturing tools are too expensive to depreciate with volume production, i.e., their costs will be so high, that the value of their lifetime productivity can never justify it,” said Len Jelinek, director and chief analyst, semiconductor manufacturing, at iSuppli, in a statement.

In other words, it remains technologically possible that semiconductors can become exponentially denser even after 2014, but it is unclear that sufficient economic incentives will exist for these additional improvements.

As The Register reported the same story:

Basically, just because chip makers can keep adding cores, it doesn’t mean that the application software and the end user workloads that run on this iron will be able to take advantage of these cores (and their varied counts of processor threads) because of the difficulty of parallelising software.

iSuppli is not talking about these problems, at least not today. But what the analysts at the chip watcher are pondering is the cost of each successive chip-making technology and the desire of chip makers not to go broke just to prove Moore’s Law right.

“The usable limit for semiconductor process technology will be reached when chip process geometries shrink to be smaller than 20 nanometers (nm), to 18nm nodes,” explains Len Jelinek…

At that point, says Jelinek, Moore’s Law becomes academic, and chip makers are going to extend the time they keep their process technologies in the field so they can recoup their substantial investments in process research and semiconductor manufacturing equipment.

However, other analysts took a dim view of this pessimistic forecast, and maintained that Moore’s Law will be longer lived.  For example, In-Stat’s chief technology strategist, Jim McGregor, offered the following rebuttal:

…every new technology goes over some road-bumps, especially involving start-up costs, but these tend to drop rapidly once moved into regular production. “EUV [extreme ultraviolet] will likely be the next significant technology to go through this cycle,” McGregor told us.

McGregor did concede that the lifecycle of certain technologies is being extended by firms who are in some cases choosing not to migrate to every new process node, but he maintained new process tech is still the key driver of small design geometries, including memory density, logic density, power consumption, etc.

“Moore’s Law also improves the cost per device and per wafer,” added McGregor, who also noted that “the industry has and will continue to go through changes because of some of the cost issues.” These include the formation of process development alliances, like IBM’s alliances, the transition to foundry manufacturing, and design for manufacturing techniques like computational lithography.

“Many people have predicted the end of Moore’s Law and they have all been wrong,” sighed McGregor. The same apparently goes for those foolhardy enough to attempt to predict changes in the dynamics of the semiconductor industry.

“There have always been challenges to the semiconductor technology roadmap, but for every obstacle, the industry has developed a solution and that will continue as long as we are talking about the hundreds of billion of dollars in revenue that are generated every year,” he concluded.

In other words, it is likely that, given sufficient economic motivation, individual hardware performance will continue improving, at a significant rate (if, perhaps, not exponentially) throughout the coming decade.

Second, it remains an open question as to how much hardware would be needed, to host an Artificial (Machine) Intelligence (“AI”) that has either human-level or hyperhuman reasoning power.

Marvin Minsky, one of the doyens of AI research, has been quoted as believing that computers commonly available in universities and industry already have sufficient power to manifest human-level AI – if only we could work out how to program them in the right way.

J. Storrs Hall provides an explanation:

Let me, somewhat presumptuously, attempt to explain Minsky’s intuition by an analogy: a bird is our natural example of the possibility of heavier-than-air flight. Birds are immensely complex: muscles, bones, feathers, nervous systems. But we can build working airplanes with tremendously fewer moving parts. Similarly, the brain can be greatly simplified, still leaving an engine capable of general conscious thought.

Personally, I’m a big fan of the view that the right algorithm can make a tremendous difference to a computational task.  As I noted in a 2008 blog post:

Arguably the biggest unknown in the technology involved in superhuman intelligence is software. Merely improving the hardware doesn’t necessarily mean that the software performance increases to match. As has been remarked, “software gets slower, more rapidly than hardware gets faster”. (This is sometimes called “Wirth’s Law”.) If your algorithms scale badly, fixing the hardware will just delay the point where your algorithms fail.

So it’s not just the hardware that matters – it’s how that hardware is organised. After all, the brains of Neanderthals were larger than those of humans, but are thought to have been wired up differently to ours. Brain size itself doesn’t necessarily imply intelligence.

But just because software is an unknown, it doesn’t mean that hardware-driven predictions of the onset of the singularity are bound to be over-optimistic. It’s also possible they could be over-pessimistic. It’s even possible that, with the right breakthroughs in software, superhuman intelligence could be supported by present-day hardware. AI researcher Eliezer Yudkowsky of the Singularity Institute reports the result of an interesting calculation made by Geordie Rose, the CTO of D-Wave Systems, concerning software versus hardware progress:

“Suppose you want to factor a 75-digit number. Would you rather have a 2007 supercomputer, IBM’s Blue Gene/L, running an algorithm from 1977, or a 1977 computer, the Apple II, running a 2007 algorithm? Geordie Rose calculated that Blue Gene/L with 1977’s algorithm would take ten years, and an Apple II with 2007’s algorithm would take three years…

“[For exploring new AI breakthroughs] I will say that on anything except a very easy AI problem, I would much rather have modern theory and an Apple II than a 1970’s theory and a Blue Gene.”
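To make the shape of that calculation concrete, here’s a minimal sketch in Python.  The machine speeds and cost formulas are illustrative assumptions of mine – not the real Apple II or Blue Gene/L figures, and not the real 1977 or 2007 factoring algorithms – but they show how a better asymptotic complexity can outweigh even a colossal hardware speed-up:

```python
import math

# Illustrative assumptions only: notional machine speeds and notional cost curves,
# chosen to show the shape of the argument rather than to reproduce Geordie Rose's figures.
OPS_PER_SEC_OLD_MACHINE = 1e6    # a notional late-1970s computer
OPS_PER_SEC_NEW_MACHINE = 1e14   # a notional 2007 supercomputer, 10^8 times faster

def cost_old_algorithm(n):
    """Exponential-style operation count in the problem size n (e.g. bit-length)."""
    return 2.0 ** (n / 2)

def cost_new_algorithm(n):
    """Sub-exponential-style operation count: grows far more slowly for large n."""
    return 2.0 ** (2 * math.sqrt(n))

def runtime_seconds(operations, ops_per_sec):
    return operations / ops_per_sec

n = 250  # roughly the bit-length of a 75-digit number

print("Old algorithm on new hardware: %.1e seconds" %
      runtime_seconds(cost_old_algorithm(n), OPS_PER_SEC_NEW_MACHINE))
print("New algorithm on old hardware: %.1e seconds" %
      runtime_seconds(cost_new_algorithm(n), OPS_PER_SEC_OLD_MACHINE))
```

With these made-up curves, the old algorithm on the fast machine still needs around 10^23 seconds, while the new algorithm on the slow machine finishes in under an hour – a hardware advantage of 10^8 is simply no match for a change in the exponent.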

Here’s a related example.  When we think of powerful chess-playing computers, we sometimes assume that massive hardware resources will be required, such as a supercomputer provides.  However, as long ago as 1985, Psion, the UK-based company I used to work for (though not at that time), produced a chess program that many people – at the time, and subsequently – regarded as playing to a very impressive standard.  See here for some discussion and some reviews.  Taking things even further, this article from 1983 describes an implementation of chess, for the Sinclair ZX-81, in only 672 bytes – which is hard to believe!  (Thanks to Mark Jacobs for this link.)

Third, building on this point, progress in AI can be described as a combination of multiple factors:

  1. Individual hardware power
  2. Compound hardware power (when many different computers are linked together, as on a network)
  3. Software algorithms
  4. Number of developers and researchers who are applying themselves to the problem
  5. The ability to take advantage of previous results (“to stand on the shoulders of giants”).

Even if the pace slows for improvements in the hardware of individual computers, it’s still very feasible for improvements in AI to take place, on account of the other factors.

Q2: Hasn’t rapid progress with AI often been foretold before, but with disappointing outcomes each time?

It’s true that some of the initial forecasts of the early AI research community, from the 1950’s, have turned out to be significantly over-optimistic.

For example, in his famous 1950 paper “Computing machinery and intelligence” – which set out the idea of the test later known as the “Turing test” – Alan Turing made the following prediction:

I believe that in about fifty years’ time it will be possible to programme computers… to make them play the imitation game so well that an average interrogator will not have more than 70 per cent chance of making the right identification [between a computer answering, or a human answering] after five minutes of questioning.

Since the publication of that paper, some sixty years have now passed, and computers are still far from being able to consistently provide an interface comparable (in richness, subtlety, and common sense) to that of a human.

For a markedly more optimistic prediction, consider the proposal for the 1956 Dartmouth Summer Research Project on Artificial Intelligence, which is now seen, in retrospect, as the seminal event for AI as a field.  Attendees at the conference included Marvin Minsky, John McCarthy, Ray Solomonoff, and Claude Shannon.  The group came together with the following vision:

We propose that a 2 month, 10 man study of artificial intelligence be carried out during the summer of 1956 at Dartmouth College in Hanover, New Hampshire. The study is to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it. An attempt will be made to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves. We think that a significant advance can be made in one or more of these problems if a carefully selected group of scientists work on it together for a summer.

The question for us today is: what reason is there to expect rapid progress with AI in (say) the next ten years, given that similar expectations in the past failed – and, indeed, the whole field eventually fell into what is known as an “AI winter“?

J Storrs Hall has some good answers to this question.  They include the following:

First, AI researchers in the 1950’s and 60’s laboured under a grossly over-simplified view of the complexity of the human mind.  This can be seen, for example, from another quote from Turing’s 1950 paper:

Instead of trying to produce a programme to simulate the adult mind, why not rather try to produce one which simulates the child’s? If this were then subjected to an appropriate course of education one would obtain the adult brain. Presumably the child brain is something like a notebook as one buys it from the stationer’s. Rather little mechanism, and lots of blank sheets. (Mechanism and writing are from our point of view almost synonymous.) Our hope is that there is so little mechanism in the child brain that something like it can be easily programmed.

Progress in brain sciences in the intervening years has highlighted very significant innate structure in the child brain.  A child brain is far from being a blank notebook.

Second, early researchers were swept along on a wave of optimism from some apparent early successes.  For example, consider the “ELIZA” application that mimicked the responses of a Rogerian psychotherapist by following a series of simple pattern-matching rules.  Lay people who interacted with this program frequently reported positive experiences, and assumed that the computer really was understanding their issues.  Although the AI researchers knew better, at least some of them may have believed that this effect showed that more significant results were just around the corner.
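For anyone who hasn’t seen quite how little machinery was involved, here’s a minimal sketch of an ELIZA-style responder.  The rules below are simplified stand-ins of my own, not Weizenbaum’s original script, but they work on the same pattern-matching principle:

```python
import random
import re

# A handful of pattern -> response-template rules, tried in order, with generic
# fallbacks when nothing matches. The original ELIZA "DOCTOR" script was richer,
# but it was built from rules of essentially this kind.
RULES = [
    (r"i need (.*)",   ["Why do you need {0}?", "Would it really help you to get {0}?"]),
    (r"i am (.*)",     ["How long have you been {0}?", "Why do you think you are {0}?"]),
    (r"my (mother|father)(.*)", ["Tell me more about your {0}."]),
    (r"(.*)\?",        ["Why do you ask that?", "What do you think?"]),
]
FALLBACKS = ["Please go on.", "I see.", "How does that make you feel?"]

def respond(user_input):
    text = user_input.lower().strip(".! ")
    for pattern, templates in RULES:
        match = re.match(pattern, text)
        if match:
            return random.choice(templates).format(*match.groups())
    return random.choice(FALLBACKS)

print(respond("I need a holiday"))   # e.g. "Why do you need a holiday?"
```

Lay users reading responses like these could easily project understanding onto the program; the researchers, looking at the rule table, knew how shallow it really was.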

Third, the willingness of funding authorities to continue supporting general AI research became stretched, due to the delays in producing stronger results, and due to other options for how those research funds could be allocated.  For example, the Lighthill Report (produced in the UK in 1973 by Professor James Lighthill – whose lectures in Applied Mathematics at Cambridge I enjoyed many years later) gave a damning assessment:

The report criticized the utter failure of AI to achieve its “grandiose objectives.” It concluded that nothing being done in AI couldn’t be done in other sciences. It specifically mentioned the problem of “combinatorial explosion” or “intractability”, which implied that many of AI’s most successful algorithms would grind to a halt on real world problems and were only suitable for solving “toy” versions…

The report led to the dismantling of AI research in Britain. AI research continued in only a few top universities (Edinburgh, Essex and Sussex). This “created a bow-wave effect that led to funding cuts across Europe”

There was a similar shift in the funding climate in the US, with changes of opinion within DARPA.

Shortly afterwards, the growth of the PC and general IT market provided attractive alternative career targets for many of the bright researchers who might previously have considered devoting themselves to AI research.

To summarise, the field suffered an understandable backlash against its over-inflated early optimism and exaggerated hype.

Nevertheless, there are grounds for believing that considerable progress has taken place over the years.  The middle chapters of the book by J Storrs Hall provide the evidence.  The Wikipedia article on “AI winter” covers (much more briefly) some of the same material:

In the late ’90s and early 21st century, AI technology became widely used as elements of larger systems, but the field is rarely credited for these successes. Nick Bostrom explains “A lot of cutting edge AI has filtered into general applications, often without being called AI because once something becomes useful enough and common enough it’s not labeled AI anymore.” Rodney Brooks adds “there’s this stupid myth out there that AI has failed, but AI is around you every second of the day.”

Technologies developed by AI researchers have achieved commercial success in a number of domains, such as machine translation, data mining, industrial robotics, logistics, speech recognition, banking software, medical diagnosis and Google’s search engine…

Many of these domains represent aspects of “narrow” AI rather than “General” AI (sometimes called “AGI”).  However, they can all contribute to overall progress, with results in one field being available for use and recombination in other fields.  That’s an example of point 5 in my previous list of the different factors affecting progress in AI:

  1. Individual hardware power
  2. Compound hardware power (when many different computers are linked together, as on a network)
  3. Software algorithms
  4. Number of developers and researchers who are applying themselves to the problem
  5. The ability to take advantage of previous results (“to stand on the shoulders of giants”).

On that note, let’s turn to the fourth factor in that list.

Q3: Isn’t AI now seen as a relatively uninteresting field, with few incentives for people to enter it?

The question is: what’s going to cause bright researchers to devote sufficient time and energy to progressing AI – given that there are so many other interesting and rewarding fields of study?

Part of the answer is to point out that the potential number of people working in this field is larger today than ever before – simply due to the rapid increase in the number of IT-literate graduates around the world.  Globally, there are greater numbers of science and engineering graduates from universities (including those in China and India) than ever before.

Second, here are some particularly pressing challenges and commercial opportunities, which make it likely that significant further research on AI will actually take place:

  • The “arms race” between systems that try to distinguish humans from bots – spam filters, and the CAPTCHA challenges on web forms that essentially say “prove you are a human, not a bot” – and the ever-cleverer automated systems designed to evade them;
  • The need for games to provide ever more realistic “AI” features for the virtual characters in these games (games players and games writers unabashedly talk about the “AI” elements in these games);
  • The opportunity for social networking sites to provide increasingly realistic virtual companions for users to interact with (including immersive social networking sites like “Second Life”);
  • The constant need to improve the user experience of interacting with complex software; arguably the complex UI is the single biggest problem area, today, facing many mobile applications;
  • The constant need to improve the interface to large search databases, so that users can more quickly find material.

Since there is big money to be made from progressing solutions in each of these areas, we can assume that companies will be making some significant investments in the associated technology.

There’s also the prospect of a “tipping point” once some initial results demonstrate the breakthrough nature of some aspects of this field.  As J Storrs Hall puts it (in the “When” chapter of his book):

Once a baby [artificial] brain does advance far enough that it has clearly surpassed the bootstrap fallacy point… it might affect AI like the Wright brothers’ [1908] Paris demonstrations of their flying machines did a century ago.  After ignoring their successful first flight for years, the scientific community finally acknowledged it.  Aviation went from a screwball hobby to the rage of the age and kept that cachet for decades.  In particular, the amount of development took off enormously.  If we can expect a faint echo of that from AI, the early, primitive general learning systems will focus research considerably and will attract a lot of new resources.

Not only are there greater numbers of people potentially working on AI now than ever before; they each have much more powerful hardware resources available to them.  Experiments with novel algorithms that previously would have tied up expensive and scarce supercomputers can nowadays be done on inexpensive hardware that is widely available.  (And once interesting results are demonstrated on low-powered hardware, there will be increased priority of access for variants of these same ideas to be run on today’s supercomputers.)

What’s more, the feedback mechanisms of general internet connectivity (sharing of results and ideas) and open source computing (sharing of algorithms and other source code) mean that each such researcher can draw upon greater resources than before, and participate in stronger collaborative projects.  For example, people can choose to participate in the “OpenCog” open source AI project.

Appendix: Further comments on the book “Beyond AI”

As well as making a case that progress in AI has been significant, another of the main themes of J Storrs Hall’s book “Beyond AI: Creating the conscience of the machine” is the question of whether hyperhuman AIs would be more moral than humans as well as more intelligent.

The conclusion of his argument is, yes, these new brains will probably have a higher quality of ethical behaviour than humans have generally exhibited.  The final third of his book covers that topic, in a generally convincing way: he has a compelling analysis of topics such as free will, self-awareness, conscious introspection, and the role of ethical frameworks in avoiding the destructive behaviour of free-riders.  However, critically, it all depends on how these great brains are set up with regard to core purpose, and there are no easy answers.

Roko Mijic will be addressing this same topic in the UKH+ meeting “The Friendly AI Problem: how can we ensure that superintelligent AI doesn’t terminate us?” that is being held on Saturday 23rd January.  (If you use Facebook, you can RSVP here to indicate whether you’re coming.  NB it’s entirely optional to RSVP.)

7 January 2010

Mobiles manifesting AI

Filed under: AGI, Apple, futurist, intelligence, m2020, vision — David Wood @ 12:15 am

If you get lists from 37 different mobile industry analysts of “five game-changing mobile trends for the next decade“, how many overlaps will there be?  And will the most important ideas be found in the “bell” of the aggregated curve of predictions, or instead in the tails of the curve?

Of the 37 people who took part in the “m2020” exercise conducted by Rudy De Waele, I think I was the only person to mention either of the terms “AI” (Artificial Intelligence) or “PDA” (Personal Digital Assistant), as in the first of my five predictions for the 2010’s:

  • Mobiles manifesting AI – fulfilling, at last, the vision of “personal digital assistants”

However, there were some close matches:

  • Rich Wong predicted “Smart Agents 2.0 (thank you Patty Maes) become real; the ability to deduce/impute context from blend of usage and location data”;
  • Marshall Kirkpatrick predicted “Mobile content recommendation”;
  • Carlo Longino predicted “The mobile phone will evolve into an enabler device, carrying users’ digital identities, preferences and possessions around with them”;
  • Steve O’Hear predicted “People will share more and more personal information. Both explicit e.g. photo and video uploads or status updates, and implicit data. Location sharing via GPS (in the background) is one current example of implicit information that can be shared, but others include various sensory data captured automatically via the mobile phone e.g. weather, traffic and air quality conditions, health and fitness-related data, spending habits etc. Some of this information will be shared privately and one-to-one, some anonymously and in aggregate, and some increasingly made public or shared with a user’s wider social graph. Companies will provide incentives, both at the service level or financially, in exchange for users sharing various personal data”;
  • Robert Rice predicted “Artificial Life + Intelligent Agents (holographic personalities)”.

Of course, these predictions cover a spread of different ideas.  Here’s what I had in mind for mine:

  • Our mobile electronic companions will know more and more about us, and will be able to put that information to good use to assist us better;
  • For example, these companion devices will be able to make good recommendations (e.g. mobile content, or activities) for us, suggest corrections and improvements to what we are trying to do, and generally make us smarter all-round (a minimal sketch of this kind of recommendation scoring follows below).
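As a minimal sketch of the kind of mechanism I have in mind – using a made-up notion of “interest tags”, where a real system would draw on far richer signals – the device could score candidate content against what its owner has already engaged with:

```python
from collections import Counter

def interest_profile(usage_history):
    """Build a weighted tag profile from items the user has already engaged with."""
    profile = Counter()
    for item in usage_history:
        for tag in item["tags"]:
            profile[tag] += 1
    return profile

def recommend(candidates, profile, limit=3):
    """Rank candidate items by how well their tags overlap the user's profile."""
    scored = [
        (sum(profile.get(tag, 0) for tag in item["tags"]), item["title"])
        for item in candidates
    ]
    return [title for score, title in sorted(scored, reverse=True) if score > 0][:limit]

# Hypothetical usage history and candidate content, purely for illustration
history = [
    {"title": "Podcast on nanotechnology", "tags": ["nanotech", "science"]},
    {"title": "Article on AI winters",     "tags": ["ai", "history"]},
]
candidates = [
    {"title": "Talk on molecular assembly", "tags": ["nanotech"]},
    {"title": "Celebrity gossip roundup",   "tags": ["celebrity"]},
    {"title": "Essay on machine learning",  "tags": ["ai", "science"]},
]
print(recommend(candidates, interest_profile(history)))
```

The intelligence here is modest, but the principle – putting accumulated personal data to work on the user’s behalf – is the heart of the “personal digital assistant” vision.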

The idea is similar to what John Sculley often talked about during his tenure as CEO of Apple.  From a history review article about the Newton PDA:

John Sculley, Apple’s CEO, had toyed with the idea of creating a Macintosh-killer in 1986. He commissioned two high budget video mockups of a product he called Knowledge Navigator. Knowledge Navigator was going to be a tablet the size of an opened magazine, and it would have very sophisticated artificial intelligence. The machine would anticipate your needs and act on them…

Sculley was enamored with Newton, especially Newton Intelligence, which allowed the software to anticipate the behavior of the user and act on those assumptions. For example, Newton would filter an AppleLink email, hyperlink all of the names to the address book, search the email for dates and times, and ask the user if it should schedule an event.

As we now know, the Apple Newton fell seriously short of expectations.  The performance of “intelligent assistance” became something of a joke.  However, there’s nothing wrong with the concept itself.  It just turned out to be a lot harder to implement than originally imagined.  The passage of time is bringing us closer to actually useful systems.

Many of the interfaces on desktop computers already show an intelligent understanding of what the user may be trying to accomplish:

  • Search bars frequently ask, “Did you mean to search for… instead of…?” when I misspell a search term;
  • I’ve almost stopped browsing through my list of URL bookmarks; I just type a few characters into the URL bar and the web-browser lists websites it thinks I might be trying to find – including some from my bookmarks, some pages I visit often, and some pages I’ve visited recently (see the ranking sketch after this list);
  • It’s the same for finding a book on Amazon.com – the list of “incrementally matching books” can be very useful, even after only typing part of a book’s title;
  • And it’s the same using the Google search bar – the list of “suggested search phrases” contains, surprisingly often, something I want to click on;
  • The set of items shown in “context-sensitive menus” often seems a much smarter fit to my needs, nowadays, than it did when the concept was first introduced.
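None of these features requires deep intelligence: mostly they combine simple prefix or substring matching with a few behavioural signals.  Here’s a minimal sketch of the kind of ranking a URL bar might perform – the weights and the example entries are illustrative assumptions of mine, not any browser’s actual formula:

```python
import time
from dataclasses import dataclass

@dataclass
class HistoryEntry:
    url: str
    visit_count: int
    last_visit: float      # seconds since the epoch
    bookmarked: bool

def score(entry, typed, now):
    """Blend a match signal with frequency, recency and bookmark signals."""
    if typed not in entry.url:
        return 0.0
    match_bonus = 2.0 if entry.url.startswith(typed) else 1.0
    frequency = min(entry.visit_count, 50) / 50.0              # cap the effect of heavy use
    recency = 1.0 / (1.0 + (now - entry.last_visit) / 86400)   # decays over days
    bookmark_bonus = 0.5 if entry.bookmarked else 0.0
    return match_bonus + frequency + recency + bookmark_bonus

def suggest(history, typed, limit=5):
    now = time.time()
    scored = [(score(e, typed, now), e.url) for e in history]
    return [url for s, url in sorted(scored, reverse=True) if s > 0][:limit]

# Hypothetical browsing history, purely for illustration
history = [
    HistoryEntry("news.example.com", 40, time.time() - 3600, True),
    HistoryEntry("newsletter-archive.example.org", 2, time.time() - 90 * 86400, False),
    HistoryEntry("docs.example.net", 25, time.time() - 3 * 86400, False),
]
print(suggest(history, "new"))   # the frequently visited, bookmarked site ranks first
```

Tuning those few weights is where much of the perceived “smartness” of such a feature comes from.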

On mobile, search is frequently further improved by subsetting results depending on location.  As another example, typing a few characters into the home screen of the Nokia E72 smartphone results in a list of possible actions for people whose contact details match what’s been typed.
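To show what “subsetting by location” amounts to in practice, here’s a minimal sketch – the five-kilometre radius and the tiny hand-made result list are purely illustrative assumptions:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometres."""
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def nearby_results(results, user_lat, user_lon, radius_km=5.0):
    """Keep only results within radius_km of the user, nearest first."""
    with_distance = [
        (haversine_km(user_lat, user_lon, r["lat"], r["lon"]), r["name"])
        for r in results
    ]
    return [name for dist, name in sorted(with_distance) if dist <= radius_km]

# Hypothetical search results for the query "coffee", tagged with coordinates
results = [
    {"name": "Cafe A (Shoreditch)", "lat": 51.5265, "lon": -0.0825},
    {"name": "Cafe B (Bangalore)",  "lat": 12.9716, "lon": 77.5946},
]
print(nearby_results(results, user_lat=51.5237, user_lon=-0.0782))
```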

Improving the user experience with increasingly complex mobile devices, therefore, will depend not just on clearer graphical interfaces (though those will help too), but also on powerful search engines that are able to draw upon contextual information about the user and his/her purpose.

Over time, it’s likely that our mobile devices will be constantly carrying out background processing of clues, making sense of visual and audio data from the environment – including processing the stream of nearby spoken conversation.  With the right algorithms, and with powerful hardware capabilities – and provided issues of security and privacy are handled in a satisfactory way – our devices will fulfill more and more of the vision of being a “personal digital assistant”.

That’s part of what I mean when I describe the 2010’s as “the decade of nanotechnology and AI”.

28 December 2009

Ten emerging technology trends to watch in the 2010’s

Filed under: AGI, nanotechnology, vision — David Wood @ 12:38 pm

On his “2020 science” blog, Andrew Maynard of the Woodrow Wilson International Center for Scholars has published an excellent article “Ten emerging technology trends to watch over the next decade” that’s well worth reading.

To whet appetites, here’s his list of the ten emerging technologies:

  1. Geoengineering
  2. Smart grids
  3. Radical materials
  4. Synthetic biology
  5. Personal genomics
  6. Bio-interfaces
  7. Data interfaces
  8. Solar power
  9. Nootropics
  10. Cosmeceuticals

For the details, head over to the original article.

I see Andrew’s article as a more thorough listing of what I tried to cover in my own recent article, Predictions for the decade ahead, where I wrote:

We can say, therefore, that the 2010’s will be the decade of nanotechnology and AI.

Neither the word “nanotechnology” nor the word “AI” appears in Andrew’s list.  Here’s what he has to say about nanotechnology:

Nanotech has been a dominant emerging technology over the past ten years.  But in many ways, it’s a fake.  Advances in the science of understanding and manipulating matter at the nanoscale are indisputable, as are the early technology outcomes of this science.  But nanotechnology is really just a convenient shorthand for a whole raft of emerging technologies that span semiconductors to sunscreens, and often share nothing more than an engineered structure that is somewhere between 1 – 100 nanometers in scale.  So rather than focus on nanotech, I decided to look at specific technologies which I think will make a significant impact over the next decade.  Perhaps not surprisingly though, many of them depend in some way on working with matter at nanometer scales.

I think we are both right 🙂

Regarding AI, Andrew’s comments under the heading “Data interfaces” cover some of what I had in mind:

The amount of information available through the internet has exploded over the past decade.  Advances in data storage, transmission and processing have transformed the internet from a geek’s paradise to a supporting pillar of 21st century society.  But while the last ten years have been about access to information, I suspect that the next ten will be dominated by how to make sense of it all.  Without the means to find what we want in this vast sea of information, we are quite literally drowning in data.  And useful as search engines like Google are, they still struggle to separate the meaningful from the meaningless.  As a result, my sense is that over the next decade we will see some significant changes in how we interact with the internet.  We’re already seeing the beginnings of this in websites like Wolfram Alpha that “computes” answers to queries rather than simply returning search hits,  or Microsoft’s Bing, which helps take some of the guesswork out of searches.  Then we have ideas like The Sixth Sense project at the MIT Media Lab, which uses an interactive interface to tap into context-relevant web information.  As devices like phones, cameras, projectors, TV’s, computers, cars, shopping trolleys, you name it, become increasingly integrated and connected, be prepared to see rapid and radical changes in how we interface with and make sense of the web.

It looks like there’s lots of other useful material on the same blog.  I particularly like its subtitle “Providing a clear perspective on developing science and technology responsibly”.

Hat tip to @vangeest for the pointer!

24 December 2009

Predictions for the decade ahead

Before highlighting some likely key trends for the decade ahead – the 2010’s – let’s pause a moment to review some of the most important developments of the last ten years.

  • Technologically, the 00’s were characterised by huge steps forward with social computing (“web 2.0”) and with mobile computing (smartphones and more);
  • Geopolitically, the biggest news has been the ascent of China to becoming the world’s #2 superpower;
  • Socioeconomically, the world is reaching a deeper realisation that current patterns of consumption cannot be sustained (without major changes), and that the foundations of free-market economics are more fragile than previously widely thought;
  • Culturally and ideologically, the threat of militant Jihad, potentially linked to dreadful weaponry, has given the world plenty to think about.

Looking ahead, the 10’s will very probably see the following major developments:

  • Nanotechnology will progress in leaps and bounds, enabling increasingly systematic control, assembly, and reprogramming of matter at the molecular level;
  • In parallel, AI (artificial intelligence) will rapidly become smarter and more pervasive, and will be manifest in increasingly intelligent robots, electronic guides, search assistants, navigators, drivers, negotiators, translators, and so on.

We can say, therefore, that the 2010’s will be the decade of nanotechnology and AI.

We’ll see the following applications of nanotechnology and AI:

  • Energy harvesting, storage, and distribution (including via smart grids) will be revolutionised;
  • Reliance on existing means of oil production will diminish, being replaced by greener energy sources, such as next-generation solar power;
  • Synthetic biology will become increasingly commonplace – newly designed living cells and organisms that have been crafted to address human, social, and environmental need;
  • Medicine will provide more and more new forms of treatment that are less invasive and more comprehensive than before, using compounds closely tailored to the specific biological needs of individual patients;
  • Software-as-a-service, provided via next-generation cloud computing, will become more and more powerful;
  • Experience of virtual worlds – for the purposes of commerce, education, entertainment, and self-realisation – will become extraordinarily rich and stimulating;
  • Individuals who can make wise use of these technological developments will end up significantly cognitively enhanced.

In the world of politics, we’ll see more leaders who combine toughness with openness and a collaborative spirit.  The awkward international institutions from the 00’s will either reform themselves, or will be superseded by newer, more informal, more robust and effective institutions that draw a lot of inspiration from emerging best practice in open source and social networking.

But perhaps the most important change is one I haven’t mentioned yet.  It’s a growing change of attitude towards the question of the role of technology in enabling fuller human potential.

Instead of people decrying “technical fixes” and “loss of nature”, we’ll increasingly hear widespread praise for what can be accomplished by thoughtful development and deployment of technology.  As technology is seen to be able to provide unprecedented levels of health, vitality, creativity, longevity, autonomy, and all-round experience, society will demand a reprioritisation of resource allocation.  Previously sacrosanct cultural norms will fall under intense scrutiny, and many age-old beliefs and practices will fade away.  Young and old alike will move to embrace these more positive and constructive attitudes towards technology, human progress, and a radical reconsideration of how human potential can be fulfilled.

By the way, there’s a name for this mental attitude.  It’s “transhumanism”, often abbreviated H+.

My conclusion, therefore, is that the 2010’s will be the decade of nanotechnology, AI, and H+.

As for the question of which countries (or regions) will play the role of superpowers in 2020: it’s too early to say.

Footnote: Of course, there are major possible risks from the deployment of nanotechnology and AI, as well as major possible benefits.  Discussion of how to realise the benefits without falling foul of the risks will be a major feature of public discourse in the decade ahead.

7 December 2009

Bangalore and the future of AI

Filed under: AGI, Bangalore, Singularity — David Wood @ 3:15 pm

I’m in the middle of a visit to the emerging hi-tech centre of excellence, Bangalore.  Today, I heard suggestions, at the Forum Nokia Developer Conference happening here, that Bangalore could take on many of the roles of Silicon Valley, in the next phase of technology entrepreneurship and revolution.

I can’t let the opportunity of this visit pass by, without reaching out to people in this vicinity willing to entertain and review more radical ideas about the future of technology.  Some local connections have helped me to arrange an informal get-together in a coffee shop tomorrow evening (Tuesday 8th Dec), in a venue reasonably close to the Taj Residency hotel.

We’ve picked the topic “The future of AI and the possible technological singularity“.

I’ll prepare a few remarks to kick off the conversation, and we’ll see how it goes from there!

Ideas likely to be covered include:

  • “Narrow” AI versus “General” AI;
  • A brief history of progress of AI;
  • Factors governing a possible increase in the capability of general AI – hardware changes, algorithm changes, and more;
  • The possibility of a highly disruptive “intelligence explosion“;
  • The possibility of research into what has been termed “friendly AI“;
  • Different definitions of the technological singularity;
  • The technology singularity in fiction – limitations of Hollywood vision;
  • Fantasy, existential risk, or optimal outcome?
  • Risks, opportunities, and timescales?

If anyone wants to join this get-together, please drop me an email, text message, or Twitter DM, and I’ll confirm the venue.
