One fundamental step to unlocking the full transformational potential of smart mobile technology is to significantly improve the usability of multi-function devices. As additional features have been added to mobile phones, each new feature has tended to detract from the overall ease of use of the device:
- It’s harder for users to locate the exact function that they wish to use at any given time;
- It’s harder for users to understand the full set of functions that are available for them to use.
This has led to feelings of frustration and disenchantment. Devices are full of powerful functionality that is under-used and under-appreciated.
Recognising this problem, companies throughout the mobile industry are exploring approaches to improving the usability of multi-function devices.
One common idea is to try to arrange all the functionality into a clear logical hierarchy. But as the number of available functions grows and grows, the result is something that is harder and harder to use, no matter how thoughtfully the functions are arranged.
A second common idea is to allow users to select the applications that they personally use the most often, and to put shortcuts to these applications onto the homescreen (start screen) of the phone. That’s a step forwards, but there are drawbacks with this as well:
- The functionality that users want to access is more fine-grained than simply picking an application. Instead, a user will often have a specific task in mind, such as “phone Mum” or “email Susie” or “check what movies are showing this evening”;
- The functionality that users want to access the most often varies depending on the context the user is in – for example, the time of day, or the user’s location;
- The UI for creating these shortcuts can be time-consuming or intimidating.
In this context, I’ve recently been looking at some technology developed by the startup company Intuitive User Interfaces. The founders of Intuitive previously held key roles with the company ART (Advanced Recognition Technologies) which was subsequently acquired by Nuance Communications.
Intuitive highlight the following vision:
Imagine a phone that knows what you need, when you need it, one touch away.
Briefly, the technology works as follows:
- An underlying engine observes which tasks the user performs frequently, and in which circumstances;
- These tasks are made available to the user via a simple top-level one-touch selection screen;
- The set of tasks on this screen varies depending on the user’s context (a simple illustrative sketch follows below).
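To make the approach concrete, here is a deliberately minimal sketch of a frequency-by-context suggester, written in Python purely for illustration. The context representation (a time-of-day and location pair), the class name, and the six-slot screen size are all my own assumptions; Intuitive’s actual engine is, by their own description, considerably more sophisticated.

```python
# Illustrative sketch only - not Intuitive's code. The engine observes which
# tasks the user performs in which contexts, then surfaces the most frequent
# tasks for the current context on a one-touch selection screen.
from collections import Counter, defaultdict

class TaskSuggester:
    def __init__(self, top_n=6):
        self.top_n = top_n                  # number of one-touch slots on screen
        self.counts = defaultdict(Counter)  # context -> Counter of task frequencies

    def observe(self, task, context):
        """Record that the user performed `task` while in `context`."""
        self.counts[context][task] += 1

    def suggest(self, context):
        """Return the tasks used most often in this context, best first."""
        return [task for task, _ in self.counts[context].most_common(self.top_n)]

# Hypothetical usage, with context crudely modelled as (time-of-day, location):
engine = TaskSuggester()
engine.observe("phone Mum", ("evening", "home"))
engine.observe("phone Mum", ("evening", "home"))
engine.observe("check movie listings", ("evening", "home"))
print(engine.suggest(("evening", "home")))  # ['phone Mum', 'check movie listings']
```

Even this toy version captures the key shift: the one-touch screen is organised around tasks and context, not around applications.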
Intuitive will be showing their system, running on an Android phone, at Mobile World Congress in Barcelona next week. Ports to other platforms are in the works.
Of course, software that tries to anticipate a user’s actions has sometimes proved annoying rather than helpful. Microsoft’s “paperclip” Office Assistant became particularly notorious:
- It was included in versions of Microsoft Office from 1997 to 2003 – with the intention of providing advice to users when it deduced that they were trying to carry out a particular task;
- It was widely criticised for being intrusive and unhelpful;
- It was excluded from later versions;
- Smithsonian magazine in 2007 called this paperclip agent “one of the worst software design blunders in the annals of computing”.
Whether the context-dependent suggestions provided to the user come across as helpful or annoying depends above all on the quality of the underlying engine. Intuitive describe the engine in their product as “using sophisticated machine learning algorithms” in order to create “a statistically driven model”. Users’ reactions to suggestions also depend on the UI of the suggestion system.
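I don’t know the internals of Intuitive’s engine, but to illustrate what a “statistically driven model” could look like in its simplest possible form, here is a hedged sketch: a naive-Bayes-style ranker that weighs a task’s overall popularity against how well the current context features match, with add-one smoothing so that rarely seen contexts don’t produce erratic suggestions. Every name and detail below is my own invention, not Intuitive’s.

```python
# Illustrative only: one plausible shape for a statistically driven task ranker.
# Tasks are scored by a prior (how often the task is used at all) combined with
# per-feature likelihoods (how strongly each context feature predicts the task),
# using add-one smoothing to avoid over-reacting to sparse data.
import math
from collections import Counter, defaultdict

class StatisticalRanker:
    def __init__(self):
        self.task_counts = Counter()                # task -> total uses
        self.feature_counts = defaultdict(Counter)  # task -> context feature -> uses

    def observe(self, task, features):
        """`features` is a set of strings such as {'time=morning', 'loc=office'}."""
        self.task_counts[task] += 1
        for f in features:
            self.feature_counts[task][f] += 1

    def score(self, task, features):
        total = sum(self.task_counts.values())
        log_p = math.log(self.task_counts[task] / total)  # prior
        for f in features:
            # add-one smoothing: unseen (task, feature) pairs don't zero the score
            p = (self.feature_counts[task][f] + 1) / (self.task_counts[task] + 2)
            log_p += math.log(p)
        return log_p

    def rank(self, features, top_n=6):
        return sorted(self.task_counts,
                      key=lambda t: self.score(t, features),
                      reverse=True)[:top_n]
```

The smoothing is where the helpful-versus-annoying trade-off shows up in miniature: a model that over-reacts to a handful of observations will surprise the user in exactly the wrong way.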
Personally, I’m sufficiently interested in this technology to have joined Intuitive’s Advisory Board. If anyone would like to explore this technology further, in meetings at Barcelona, please get in touch!
For other news about Intuitive User Interfaces, please see their website.
Interesting idea – though I wonder if this will ever really work. The idea of the “virtual assistant” – whether performed by a real human being (in a suitably low-cost location), or via technology, or a mix of the two (Nuance + Spinvox) – has appealed for many years. Such assistants have tended to fail, as your paperclip example shows.
I wonder if the “missing link” here is the need for a rich _emotional_ connection with the user. What I want when I’m in a good mood, and how I ask for it, is important to making me feel happy with the interface. One of the successes that Apple has with its hardware and software designs is creating an emotional connection with its users – something that functional phone user interface designs haven’t achieved.
Maybe devices need “emotion sensors” to really get this right.
Comment by Matt Millar — 9 February 2010 @ 10:04 am
I’m sure you’re right. The degree to which we’re prepared to cut people (or interfaces) some slack depends on how positively we feel about them.
How a phone succeeds in “creating an emotional connection” is another blog topic in its own right. In part, attractive graphics helps (and I believe the Intuitive UI solution scores well there). In part, appearing to correctly anticipate user needs helps too (or, as we used to say at Psion, it “delights” the user by out-performing the user’s expectation).
Hmm, that could make a big difference – just like a good PA will modify his/her interaction style depending on the mood of the manager.
Emotion sensors fall, I guess, into at least two categories:
1.) Those where the user explicitly tells the device “I’m happy” or “I’m fearful” or “I’m excited”, etc;
2.) Those where the device picks up tacit clues about the user’s emotion – via smart face-recognition software, etc.
Comment by David Wood — 9 February 2010 @ 1:54 pm