dw2

7 January 2010

Mobiles manifesting AI

Filed under: AGI, Apple, futurist, intelligence, m2020, vision — David Wood @ 12:15 am

If you get lists from 37 different mobile industry analysts of “five game-changing mobile trends for the next decade”, how many overlaps will there be?  And will the most important ideas be found in the “bell” of the aggregated curve of predictions, or instead in the tails of the curve?

Of the 37 people who took part in the “m2020” exercise conducted by Rudy De Waele, I think I was the only person to mention either of the terms “AI” (Artificial Intelligence) or “PDA” (Personal Digital Assistant), as in the first of my five predictions for the 2010’s:

  • Mobiles manifesting AI – fulfilling, at last, the vision of “personal digital assistants”

However, there were some close matches:

  • Rich Wong predicted “Smart Agents 2.0 (thank you Patty Maes) become real; the ability to deduce/impute context from blend of usage and location data”;
  • Marshall Kirkpatrick predicted “Mobile content recommendation”;
  • Carlo Longino predicted “The mobile phone will evolve into an enabler device, carrying users’ digital identities, preferences and possessions around with them”;
  • Steve O’Hear predicted “People will share more and more personal information. Both explicit e.g. photo and video uploads or status updates, and implicit data. Location sharing via GPS (in the background) is one current example of implicit information that can be shared, but others include various sensory data captured automatically via the mobile phone e.g. weather, traffic and air quality conditions, health and fitness-related data, spending habits etc. Some of this information will be shared privately and one-to-one, some anonymously and in aggregate, and some increasingly made public or shared with a user’s wider social graph. Companies will provide incentives, both at the service level or financially, in exchange for users sharing various personal data”;
  • Robert Rice predicted “Artificial Life + Intelligent Agents (holographic personalities)”.

Of course, these predictions cover a spread of different ideas.  Here’s what I had in mind for mine:

  • Our mobile electronic companions will know more and more about us, and will be able to put that information to good use to assist us better;
  • For example, these companion devices will be able to make good recommendations (e.g. mobile content, or activities) for us, suggest corrections and improvements to what we are trying to do, and generally make us smarter all-round.

The idea is similar to what John Sculley often talked about during his tenure as CEO of Apple.  From a history review article about the Newton PDA:

John Sculley, Apple’s CEO, had toyed with the idea of creating a Macintosh-killer in 1986. He commissioned two high budget video mockups of a product he called Knowledge Navigator. Knowledge Navigator was going to be a tablet the size of an opened magazine, and it would have very sophisticated artificial intelligence. The machine would anticipate your needs and act on them…

Sculley was enamored with Newton, especially Newton Intelligence, which allowed the software to anticipate the behavior of the user and act on those assumptions. For example, Newton would filter an AppleLink email, hyperlink all of the names to the address book, search the email for dates and times, and ask the user if it should schedule an event.

As we now know, the Apple Newton fell seriously short of expectations.  The performance of its “intelligent assistance” became something of a joke.  However, there’s nothing wrong with the concept itself.  It just turned out to be a lot harder to implement than originally imagined.  The passage of time is bringing us closer to genuinely useful systems.

Many of the interfaces on desktop computers already show an intelligent understanding of what the user may be trying to accomplish:

  • Search bars frequently ask, “Did you mean to search for… instead of…?” when I misspell a search term;
  • I’ve almost stopped browsing through my list of URL bookmarks; I just type a few characters into the URL bar and the web-browser lists websites it thinks I might be trying to find – including some from my bookmarks, some pages I visit often, and some pages I’ve visited recently;
  • It’s the same for finding a book on Amazon.com – the list of “incrementally matching books” can be very useful, even after only typing part of a book’s title;
  • And it’s the same using the Google search bar – the list of “suggested search phrases” contains, surprisingly often, something I want to click on;
  • The set of items shown in “context-sensitive menus” often seems a much smarter fit to my needs, nowadays, than it did when the concept was first introduced.
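
As a rough illustration of the first of these, a “did you mean” suggestion can be sketched as simple string similarity against a vocabulary of known terms.  (The vocabulary and similarity threshold below are invented for the example; real search engines use far more sophisticated techniques, including query logs and frequency data.)

```python
import difflib

# Toy vocabulary standing in for a search engine's index of known terms
KNOWN_TERMS = ["newton", "navigator", "knowledge", "assistant", "symbian"]

def did_you_mean(query):
    """Suggest the closest known term for a possibly misspelled query,
    or None if nothing is similar enough."""
    matches = difflib.get_close_matches(query.lower(), KNOWN_TERMS,
                                        n=1, cutoff=0.7)
    return matches[0] if matches else None

print(did_you_mean("asistant"))   # suggests the closest vocabulary entry
print(did_you_mean("zzzzzz"))     # no suggestion for gibberish
```

The point of the sketch is that even a crude similarity measure captures the user-facing behaviour; the intelligence in real systems lies in choosing the vocabulary and ranking well.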

On mobile, search is frequently further improved by narrowing results based on location.  As another example, typing a few characters into the home screen of the Nokia E72 smartphone brings up a list of possible actions for people whose contact details match what’s been typed.
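
That kind of incremental contact lookup can be sketched as prefix matching across the parts of each name.  (The contact list and the matching rule here are illustrative guesses at how such a feature behaves, not Nokia’s actual implementation.)

```python
# Hypothetical contact list for the example
CONTACTS = ["Alice Jones", "Bob Smith", "Carol Jensen", "David Wood"]

def match_contacts(typed):
    """Return contacts where any part of the name starts with the
    characters typed so far (case-insensitive)."""
    typed = typed.lower()
    return [c for c in CONTACTS
            if any(part.lower().startswith(typed) for part in c.split())]

print(match_contacts("j"))    # surnames beginning with "j"
print(match_contacts("wo"))   # narrows further as more is typed
```

Each extra character narrows the candidate list, so the device can offer actions (call, text, email) for the few remaining matches without the user ever opening an address book.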

Improving the user experience with increasingly complex mobile devices, therefore, will depend not just on clearer graphical interfaces (though that will help too), but on powerful search engines that are able to draw upon contextual information about the user and his/her purpose.

Over time, it’s likely that our mobile devices will be constantly carrying out background processing of clues, making sense of visual and audio data from the environment – including processing the stream of nearby spoken conversation.  With the right algorithms, and with powerful hardware capabilities – and provided issues of security and privacy are handled in a satisfactory way – our devices will fulfill more and more of the vision of being a “personal digital assistant”.

That’s part of what I mean when I describe the 2010’s as “the decade of nanotechnology and AI”.

29 August 2009

The human mind as a flawed creation of nature

Filed under: books, evolution, happiness, intelligence, unconscious — David Wood @ 11:38 am

I’m sharing these thoughts after finishing reading Kluge – the haphazard construction of the human mind by NYU Professor of Psychology, Gary Marcus.

I bought this book after seeing it on the recommended reading list for the forthcoming 2009 Singularity Summit.  The quote from Bertrand Russell at the top of chapter 1 gave me warm feelings towards the book as soon as I started reading:

It has been said that man is a rational animal.  All my life I have been searching for evidence which could support this.

A few days later, I’ve finished the book, still with warm feelings.

(Alas, although I’ve started at least 20 books this year, I can only remember two others that I finished – reviewed here and here.  In part, I blame the hard challenges of my work life this year for putting unusual stress and strain on my reading habits.  In part, I blame the ease-of-distraction of Twitter, for cutting into time that I would previously have spent on reading.  Anyway, it’s a sign of how readable Kluge is, that I’ve made it all the way to the end so quickly.)

I first knew the word “Kluge” as “Kludge”, a term my software engineering colleagues in Psion often used.  This book explores the history of the term, as well as its different spellings.  The definition given is as follows:

Kluge – noun, pronounced klooj (engineering): a solution that is clumsy or inelegant yet surprisingly effective.

Despite their surface effectiveness, kluges have many limitations in practice.  Engineers who have sufficient time prefer to avoid kluges, and instead to design solutions that work well under a wider range of circumstances.

The basic claim of the book is that many aspects of the human mind operate in clumsy and suboptimal ways – ways which betray the haphazard and often flawed evolutionary history of the mind.  Many of the case studies quoted are familiar to me from previous reading (eg from Jonathan Haidt’s The Happiness Hypothesis and Timothy Wilson’s Strangers to Ourselves), but Gary Marcus fits the case studies together into a different framework.

The framework is, to me, both convincing and illuminating.  It provides a battery of evidence relevant to what might be called “The Nature Delusion” – the pervasive yet often unspoken belief that things crafted by nature are inevitably optimal and incapable of serious improvement.

A good flavour of the book is conveyed by some extracts from near the end:

In this book, we’ve discussed several bugs in our cognitive makeup: confirmation bias, mental contamination, anchoring, framing, inadequate self-control, the ruminative cycle, the focussing illusion, motivated reasoning, and false memory, not to mention absent-mindedness, an ambiguous linguistic system, and vulnerability to mental disorders.  Our memory, contextually driven as it is, is ill suited to many of the demands of modern life, and our self-control mechanisms are almost hopelessly split.  Our ancestral mechanisms were shaped in a different world, and our more modern deliberative mechanisms can’t shake the influence of that past.  In every domain we have considered, from memory to belief, choice, language, and pleasure, we have seen that a mind built largely through the progressive overlay of technologies is far from perfect.  None of these aspects of human psychology would be expected from an intelligent designer; instead, the only reasonable way to interpret them is as relics, leftovers of evolution.

In a sense, the argument I have presented here is part of a long tradition.  Stephen Jay Gould‘s notion of remnants of history, a key inspiration of this book, goes back to Darwin, who started his legendary work The Descent of Man with a list of a dozen “useless, or nearly useless” features – body hair, wisdom teeth, the vestigial tail bone known as the coccyx.  Such quirks of nature were essential to Darwin’s argument.

Yet imperfections of the mind have rarely been discussed in the context of evolution…

Scientifically, every kluge contains a clue to our past; wherever there is a cumbersome solution, there is insight into how nature layered our brain together; it is no exaggeration to say that the history of evolution is a history of overlaid technologies, and kluges help expose the seams.

Every kluge also underscores what is fundamentally wrong-headed about creationism: the presumption that we are the product of an all-seeing entity.  Creationists may hold on to the bitter end, but imperfection (unlike perfection) beggars the imagination.  It’s one thing to imagine an all-knowing engineer designing a perfect eyeball, another to imagine that engineer slacking off and building a half-baked spine.

There’s a practical side too: investigations into human idiosyncrasies can provide a great deal of useful insight into the human condition.  As they say at Alcoholics Anonymous, recognition is the first step.  The more we can understand our clumsy nature, the more we can do something about it.

The final chapter of the book is entitled “True Wisdom”.  In that chapter, the author provides a list of practical suggestions for dealing with our mental imperfections.

Some of these suggestions entail changes in our education processes.  For example, I was intrigued by the description of Harry Stottlemeier’s Discovery – a book intended to help teach children skills in critical thinking:

The eponymous Harry is asked to write an essay called “The most interesting thing in the world”.  Harry, a boy after my own heart, chooses to write his on thinking.  “To me, the most interesting thing in the whole world is thinking…”

Kids of ages 10-12 who were exposed to a version of this curriculum for 16 months, for just an hour a week, showed significant gains in verbal intelligence, nonverbal intelligence, self-confidence, and independence.

The core of the final chapter is a list of 13 pieces of individual-level advice, for how we can all “do better as thinkers”, despite the kluges in our design.  Each suggestion is founded (the author says) on careful empirical research:

  1. Whenever possible, consider alternative hypotheses
  2. Reframe the question
  3. Always remember that correlation does not entail causation
  4. Never forget the size of your sample
  5. Anticipate your own impulsivity and pre-commit
  6. Don’t just set goals.  Make contingency plans
  7. Whenever possible, don’t make important decisions when you are tired or have other things on your mind
  8. Always weigh benefits against costs
  9. Imagine that your decisions may be spot-checked
  10. Distance yourself
  11. Beware the vivid, the personal, and the anecdotal
  12. Pick your spots
  13. Remind yourself frequently of the need to be rational.

You’ll need to read the book itself for further details (often thought-provoking) about each of these suggestions.

A different kind of suggestion is that we can augment our own mental processes, imperfect though they are, with electronic mental processes that are much more reliable.  The book touches on that idea in places too, mentioning the author’s reliance on the memory powers of his Palm Pilot and the contacts application on a mobile phone.  I think there’s lots more to come, along similar lines.

26 April 2009

Immersed in deception

Filed under: deception, intelligence, spam — David Wood @ 2:43 pm

Over the last few weeks, I’ve received a lot of flattery and what looks like friendly advice.

Here’s an example:

Ah! This is the sort of thing I have been looking for. I’m doing some research for an article. You should add buttons to the bottom of your posts to digg, stumble, etc your content. I think this is great and want to share it, but as it stands, I’m a lazy lazy person. Just kidding!

And here’s another:

I’ve just found your blog and I really like it. This is the first time I’ve written a comment. I’m not sure what to say, but please keep up the good work!

I found these compliments while checking the comments posted in reply to my own postings – either here, on my personal blog, or on the Symbian corporate blog.

At first, I felt pleased. Then I realised I was being deceived. These comments were being placed on my blogs, simply to tempt unwary readers to click on the links in them. These links lead to sites promoting bargain basement laptops, products made from the Acai “super berry”, and numerous other wild and wacky stuff (much of it not suitable for work). Now that I’m aware of these “link bait” comments, I notice them all over the web. They’re presumably being generated automatically.

The Symbian corporate blog is hosted by WordPress and relies on a service from Akismet to sort incoming comments into “pending” and “spam”. On the whole, it does a remarkably good job. But sometimes (not too surprisingly) it gets things wrong:

  • There are false positives – genuine messages that are classified onto the spam list
  • There are false negatives – deceptive messages that are classified onto the pending queue.
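
A toy keyword-scoring filter shows how both kinds of error arise.  (The keyword list and threshold are invented for the example; Akismet’s actual techniques are far more sophisticated than this.)

```python
# Words our toy filter treats as spam signals
SPAM_WORDS = {"laptop", "bargain", "berry", "click"}

def looks_like_spam(comment):
    """Flag a comment as spam if it contains two or more signal words."""
    words = set(comment.lower().split())
    return len(words & SPAM_WORDS) >= 2

# A genuine comment that happens to mention two signal words gets
# flagged: a false positive.
print(looks_like_spam("My laptop was a real bargain"))

# Flattering link-bait with no signal words sails through: a false
# negative.
print(looks_like_spam("I've just found your blog and I really like it"))
```

However clever the scoring becomes, the spammers can always probe it and adapt their wording, which is exactly what makes this an arms race rather than a solved problem.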

The task of sorting comments becomes even harder when “linkbacks” are taken into account. By default, WordPress lists “pingbacks” and “trackbacks”, when other blogs reference one of your articles. I haven’t yet made up my mind how useful this is. But I do know that it’s another avenue for deceptive postings to get their links onto your webpage. Some of these other postings re-use text from the original posting, chopping it up to give the appearance that a human being is providing intelligent analysis of your ideas. But again, it’s now my view that these postings are being generated algorithmically, just in order to receive and harvest incoming clicks.

Companies like Akismet are clearly involved in some kind of escalating arms race. As they learn the tricks employed by one generation of spam-creating program, another generation finds ways to mask the intent more skilfully.

I guess it’s like the way human intelligence is often thought to have emerged. According to widespread opinion, early humans existing in large groups found it beneficial to be able to:

  • Deceive each other about their true intentions;
  • Pretend to be supportive of the ends of the group, but to free-ride on the support of others when they could get away with it;
  • See through the deceptive intentions of others;
  • Keep track of what person A thinks about what person B thinks about person C…

This kind of evolutionary arms race was, according to this theory, one of the causes of mushrooming human brain power.

For example, to quote from Mario Heilmann’s online paper Social evolution and social influence: selfishness, deception, self-deception:

This paper endeavors to point out that the selfish interests of individuals caused deception and countermeasures against deception to become driving forces behind social influence strategies. The expensive and wasteful nature of negotiation and impression management is a necessary and unavoidable consequence of this arms race between deception and detection.

Natural selection created genetic dispositions to deceive, and to constantly and unconsciously suspect deception attempts. In a competitive, selfish, and war-prone world, these techniques, proven in billions of years in evolution, still are optimal. Therefore they are reinforced by cultural selection and learning. Conscious awareness of deception and countermeasures is not required, often even counterproductive. This is so because conscious deception is easier to detect and carries harsher sanctions.

Humans not only deceive, but also deceive themselves and others about the fact that they deceive, into believing that they do not deceive. This double deception makes the system so watertight, that it tends to evade detection even by psychologists.

Deception may be widespread in human society, but the associated increase in brainpower has had lots of more positive side-effects. I wonder if the same will result from the rapid arms race in electronic deception and counter-deception mechanisms – and whether this will be one means for genuine electronic intelligence to emerge.
