If you collect lists of “five game-changing mobile trends for the next decade” from 37 different mobile industry analysts, how many overlaps will there be? And will the most important ideas be found in the “bell” of the aggregated curve of predictions, or instead in its tails?
Of the 37 people who took part in the “m2020” exercise conducted by Rudy De Waele, I think I was the only one to mention either of the terms “AI” (Artificial Intelligence) or “PDA” (Personal Digital Assistant), as in the first of my five predictions for the 2010s:
- Mobiles manifesting AI – fulfilling, at last, the vision of “personal digital assistants”
However, there were some close matches:
- Rich Wong predicted “Smart Agents 2.0 (thank you Pattie Maes) become real; the ability to deduce/impute context from blend of usage and location data”;
- Marshall Kirkpatrick predicted “Mobile content recommendation”;
- Carlo Longino predicted “The mobile phone will evolve into an enabler device, carrying users’ digital identities, preferences and possessions around with them”;
- Steve O’Hear predicted “People will share more and more personal information. Both explicit e.g. photo and video uploads or status updates, and implicit data. Location sharing via GPS (in the background) is one current example of implicit information that can be shared, but others include various sensory data captured automatically via the mobile phone e.g. weather, traffic and air quality conditions, health and fitness-related data, spending habits etc. Some of this information will be shared privately and one-to-one, some anonymously and in aggregate, and some increasingly made public or shared with a user’s wider social graph. Companies will provide incentives, both at the service level or financially, in exchange for users sharing various personal data”;
- Robert Rice predicted “Artificial Life + Intelligent Agents (holographic personalities)”.
Of course, these predictions cover a spread of different ideas. Here’s what I had in mind for mine:
- Our mobile electronic companions will know more and more about us, and will be able to put that information to good use to assist us better;
- For example, these companion devices will be able to make good recommendations (e.g. mobile content, or activities) for us, suggest corrections and improvements to what we are trying to do, and generally make us smarter all-round.
The idea is similar to one that John Sculley often talked about during his tenure as Apple’s CEO. From a history review article about the Newton PDA:
John Sculley, Apple’s CEO, had toyed with the idea of creating a Macintosh-killer in 1986. He commissioned two high budget video mockups of a product he called Knowledge Navigator. Knowledge Navigator was going to be a tablet the size of an opened magazine, and it would have very sophisticated artificial intelligence. The machine would anticipate your needs and act on them…
Sculley was enamored with Newton, especially Newton Intelligence, which allowed the software to anticipate the behavior of the user and act on those assumptions. For example, Newton would filter an AppleLink email, hyperlink all of the names to the address book, search the email for dates and times, and ask the user if it should schedule an event.
As we now know, the Apple Newton fell seriously short of expectations. Its “intelligent assistance” became something of a joke. However, there’s nothing wrong with the concept itself; it just turned out to be a lot harder to implement than originally imagined. The passage of time is bringing us closer to genuinely useful systems.
Many of the interfaces on desktop computers already show an intelligent understanding of what the user may be trying to accomplish (a code sketch of the underlying idea follows this list):
- Search bars frequently ask, “Did you mean to search for… instead of…?” when I misspell a search term;
- I’ve almost stopped browsing through my list of URL bookmarks; I just type a few characters into the URL bar and the web browser lists websites it thinks I might be trying to find – including some from my bookmarks, some pages I visit often, and some pages I’ve visited recently;
- It’s the same for finding a book on Amazon.com – the list of “incrementally matching books” can be very useful, even after typing only part of a book’s title;
- And it’s the same using the Google search bar – the list of “suggested search phrases” contains, surprisingly often, something I want to click on;
- The set of items shown in “context-sensitive menus” nowadays often seems a much smarter fit to my needs than it did when the concept was first introduced.
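To make the pattern behind these examples concrete, here is a minimal Python sketch of how a URL-bar suggestion list might blend bookmarks, visit frequency, and recency into a single ranking. The data model, the weights, and the scoring formula are illustrative assumptions on my part, not any actual browser’s algorithm:

```python
from dataclasses import dataclass
import time

# A minimal sketch of the kind of ranking behind URL-bar suggestions.
# The fields, weights, and scoring below are illustrative assumptions,
# not any real browser's implementation.

@dataclass
class Page:
    url: str
    title: str
    bookmarked: bool
    visit_count: int
    last_visit: float  # seconds since the epoch

def score(page: Page, now: float) -> float:
    """Blend frequency, recency, and bookmark status into one score."""
    days_since_visit = (now - page.last_visit) / 86400
    recency = 1.0 / (1.0 + days_since_visit)   # decays as the visit ages
    bookmark_bonus = 50 if page.bookmarked else 0
    return page.visit_count * recency + bookmark_bonus

def suggest(pages: list[Page], typed: str, limit: int = 5) -> list[Page]:
    """Return the best-scoring pages whose URL or title matches the input."""
    typed = typed.lower()
    matches = [p for p in pages
               if typed in p.url.lower() or typed in p.title.lower()]
    now = time.time()
    return sorted(matches, key=lambda p: score(p, now), reverse=True)[:limit]
```

The interesting design point is that no single signal dominates: a rarely visited bookmark can still outrank a frequently visited page whose last visit was long ago, which matches the mixed list of bookmarks, frequent pages, and recent pages described above.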
On mobile, search is frequently further improved by narrowing results according to the user’s location. As another example, typing a few characters into the home screen of the Nokia E72 smartphone brings up a list of possible actions for the people whose contact details match what has been typed.
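As a toy illustration of that home-screen behaviour, the sketch below matches a typed prefix against contact names and offers one action per communication channel. The contact fields and the set of actions are invented for the example; the E72’s actual implementation may differ, so this is only the general idea:

```python
# A toy sketch of home-screen contact matching, loosely modeled on the
# behaviour described above. Contacts and actions are invented examples.

CONTACTS = [
    {"name": "Alice Novak", "phone": "+44 7700 900001", "email": "alice@example.com"},
    {"name": "Albert Ng",   "phone": "+44 7700 900002", "email": "albert@example.com"},
    {"name": "Bob Smith",   "phone": "+44 7700 900003", "email": "bob@example.com"},
]

def match_contacts(typed: str):
    """Match the typed prefix against the start of any word in a name."""
    typed = typed.lower()
    for contact in CONTACTS:
        if any(word.startswith(typed) for word in contact["name"].lower().split()):
            yield contact

def actions_for(typed: str):
    """Offer one action per channel for each matching contact."""
    for contact in match_contacts(typed):
        yield f"Call {contact['name']} ({contact['phone']})"
        yield f"Text {contact['name']} ({contact['phone']})"
        yield f"Email {contact['name']} ({contact['email']})"

for action in actions_for("al"):   # matches Alice and Albert, not Bob
    print(action)
```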
Improving the user experience on increasingly complex mobile devices, therefore, will depend not just on clearer graphical interfaces (though those will help too), but on powerful search engines that can draw upon contextual information about the user and his or her purpose.
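One way such an engine could use context is to re-weight a plain text-relevance score by the user’s situation – for instance, by distance from the user. The sketch below is a hypothetical illustration: the proximity formula and the 10 km scale are arbitrary assumptions, chosen only to show the shape of the idea.

```python
import math

# An illustrative sketch of "contextual" ranking: the same text match is
# re-weighted by how close the result is to the user. The scoring and the
# 10 km scale are assumptions for this example, not a real engine's logic.

def distance_km(a: tuple[float, float], b: tuple[float, float]) -> float:
    """Approximate great-circle distance via the haversine formula."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 6371 * 2 * math.asin(math.sqrt(h))

def contextual_score(text_score: float, result_loc, user_loc) -> float:
    """Down-weight results farther from the user; weight is 0.5 at 10 km."""
    proximity = 1.0 / (1.0 + distance_km(result_loc, user_loc) / 10)
    return text_score * proximity

# Two equally relevant results: the nearby one should rank higher.
user = (51.5074, -0.1278)                        # central London
near = contextual_score(1.0, (51.52, -0.13), user)    # ~1.5 km away
far = contextual_score(1.0, (51.85, -0.30), user)     # ~40 km away
assert near > far
```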
Over time, it’s likely that our mobile devices will constantly carry out background processing of clues, making sense of visual and audio data from the environment – including the stream of nearby spoken conversation. With the right algorithms, powerful hardware, and – provided issues of security and privacy are handled in a satisfactory way – our devices will fulfill more and more of the vision of being a “personal digital assistant”.
That’s part of what I mean when I describe the 2010s as “the decade of nanotechnology and AI”.