
9 April 2012

Six weeks without Symbian

Filed under: Accenture, Android, Apple, applications, Psion, Samsung, smartphones, Symbian, UIQ — David Wood @ 10:58 am

It’s only six weeks, but in some ways, it feels like six months. That’s how much time has passed since I’ve used a Symbian phone.

These six weeks separate me from nearly thirteen years of reliance on a long series of different Symbian phones. It was mid-1999 when prototype Ericsson R380 smartphones became stable enough for me to start using as my regular mobile phone. Since then, I’ve been carrying Symbian-powered smartphones with me at all times. That’s thirteen years of close interaction with various Symbian-powered devices from Nokia, Ericsson (subsequently Sony Ericsson), and Samsung – interspersed with shorter periods of using Symbian-powered devices from Panasonic, Siemens, Fujitsu, Sendo, Motorola, and LG.

On occasion over these years, I experimented with devices running other operating systems, but my current Symbian device was never far away, and remained my primary personal communication device. These non-Symbian devices always left me feeling underwhelmed – too much functionality was missing, or was served up in what seemed sub-optimal ways, compared to what I had learned to expect.

But ahead of this year’s Mobile World Congress in Barcelona, held 27th Feb to 1st Mar, I found three reasons to gain a greater degree of first-hand experience with Android:

  1. I would be meeting representatives of various companies who were conducting significant development projects using Android, and I wished to speak from “practical knowledge” rather than simply from “book knowledge”
  2. Some of my colleagues from Accenture had developed apps for Android devices that I wanted to be able to demonstrate with confidence, based on my own recurring experience of these apps
  3. One particular Android device – the Samsung Galaxy Note – seemed to me to have the potential to define a disruptive new category of mobile usage, midway between normal smartphones and tablets, with its radically large (5.3″) screen, contained in a device still light enough and small enough to be easily portable in my shirt-top pocket.

I was initially wary about text entry on the Galaxy Note. My previous encounters with Android devices had always left me frustrated when trying to enter data, without the benefits of a QWERTY keyboard (as on my long-favourite Nokia E6 range of devices), or fluid handwriting recognition (as on the Sony Ericsson P800/P900/P910).

But in the course of a single day, three separate people independently recommended that I look at the SwiftKey text entry add-on for Android. SwiftKey takes advantage of both context and personal history to predict what the user is likely to be typing into a given window on the device. See this BBC News interview and video for a good flavour of what SwiftKey provides. I installed it and have been using it non-stop ever since.

With each passing day, I continue to enjoy using the Galaxy Note, and to benefit from the wide ecosystem of companies who create applications for Android.

Here’s some of what I really like about the device:

  • The huge screen adds to the pleasure of browsing maps (including “street view”), web pages, and other graphic, video, or textual content
  • Time and again, there are Android apps available that tailor the mobile user experience more closely than web-browsing alone can achieve – see some examples on the adjacent screenshot
  • These apps are easy to find, easy to install, and (in general) easy to use
  • Integration with Google services (Mail, Maps, etc) is impressive
  • I’ve grown to appreciate the notification system, the ubiquitous “back” button, and the easy configurability of the device.

On the other hand, I’m still finding lots of niggles, in comparison with devices I’ve used previously:

  • It’s hard to be sure, but it seems likely to me that I get a working network connection on the device less often than on previous (e.g. Nokia) devices. This means, for example, that calls go through to my voice mail more often than before, even though my phone appears (to my eyes) to be working. I’m finding that I reboot this device more often than previous devices, to re-establish a working network connection
  • I frequently press the “back” button by accident, losing my current context, for example when turning the phone from portrait to landscape; in those moments, I often silently bemoan the lack of a “forward” button
  • The device is not quite capable of one-handed use – that’s probably an inevitable consequence of having such a large screen
  • Although integration with Google services is excellent, integration with Outlook leaves much to be desired – particularly interaction with email notifications of calendar invites. For example, I haven’t found a way of accepting a “this meeting has been cancelled” notification (in a way that removes the entry from my calendar), nor of sending a short note explaining my reason for declining a given meeting invite, along with the decline notification, etc
  • I haven’t gone a single day without needing to recharge the device part-way through. This no doubt reflects my heavy use of the device. It may also reflect my continuing use of the standard Android web browser, whereas on Symbian devices I always quickly switched to using the Opera browser, with its much reduced data transfer protocols (and swifter screen refreshes)
  • Downloaded apps don’t always work as expected – perhaps reflecting the diversity of Android devices, something that developers often remark about, as a cause of extra difficulty in their work.

Perhaps what’s most interesting to me is that I keep on enjoying using the device despite all these niggles. I reason to myself that no device is perfect, and that several of the issues I’ve experienced are problems of success rather than problems of failure. And I continue to take pleasure in interacting with the device.

This form factor will surely become more and more significant. Up till now, Android has made little market headway with larger tablets, as reported recently by PC World:

Corporations planning tablet purchases next quarter overwhelmingly voted for Apple’s iPad, a research firm said Tuesday [13th March]

Of the 1,000 business IT buyers surveyed last month by ChangeWave Research who said they would purchase tablets for their firms in the coming quarter, 84% named the iPad as an intended selection.

That number was more than ten times the nearest competitor and was a record for Apple.

However, Samsung’s success with the “phablet” form factor (5 million units sold in less than two months) has the potential to redraw the market landscape again. Just as the iPad has impacted people’s use of laptops (something I see every day in my own household), the Galaxy Note and other phablets have the potential to impact people’s use of iPads – and perhaps lots more besides.

Footnote 1: The Galaxy Note is designed for use with an “S Pen” stylus, as well as by finger. I’ve still to explore the benefits of this stylus.

Footnote 2: Although I no longer carry a Symbian smartphone with me, I’m still utterly reliant on my Psion Series 5mx PDA, which runs the EPOC Release 5 precursor to Symbian OS. I use it all the time as my primary Agenda, To-do list, and repository of numerous personal word documents and spreadsheets. It also wakens me up every morning.

Footnote 3: If I put on some rosy-eyed glasses, I can see the Samsung Galaxy Note as the fulfilment of the design vision behind the original “UIQ” device family reference design (DFRD) from the early days at Symbian. UIQ was initially targeted (1997-1999, when it was still called “Quartz”) at devices having broadly the same size as today’s Galaxy Note. The idea received lots of ridicule – “who’s going to buy a device as big as that?” – so UIQ morphed into “slim UIQ” that instead targeted devices like the Sony Ericsson P800 mentioned above. Like many a great design vision, UIQ can perhaps be described as “years ahead of its time”.

3 January 2011

Some memorable alarm bugs I have known

Filed under: Apple, Psion, usability — David Wood @ 12:24 pm

Here’s how the BBC website broke the news:

iPhone alarms hit by New Year glitch

A glitch on Apple’s iPhone has stopped its built-in alarm clock going off, leaving many people oversleeping on the first two days of the New Year.

Angry bloggers and tweeters complained that they had been late for work, and were risking missing planes and trains.

My first reaction was incredulity.  How could a first-class software engineering company like Apple get such basic functionality wrong?

I remember being carefully instructed, during my early days as a young software engineer with PDA pioneer Psion, that alarms were paramount.  Whatever else your mobile device might be doing at the time – however busy or full or distracted it might be – alarms had to go off when they became due.  Users were depending on them!

For example, if, when the time came, the battery was too low to power the audio clip that a user had selected for an alarm, Psion’s EPOC operating system would default to a rasping sound that could be played at a lower voltage, but which was still loud enough that the user would notice.

Further, the startup sequence of a Psion device would take care to pre-allocate sufficient resources for an alarm notifier – both in the alarm server, and in the window server that would display the alarm.  There must be no risk of running out of memory and, therefore, not being able to operate the alarm.

However, as I thought more, I remembered various alarm bugs in Psion devices.

Note: I’ve probably remembered some of the following details wrong – but I think the main gist of the stories is correct.

Insisting on sounding ALL the alarms

The first was from before I started at Psion, but was a legend that was often discussed. It applied to the alarm functionality in the Psion Organiser II.

On that device, all alarms were held in a queue, and for each alarm, there was a record of whether it had been sounded.  When the device powered up, one of the first things it would do was to check that queue for the first alarm that had not been sounded.  If it was overdue, it should be sounded immediately.  Once that alarm was acknowledged by the user, the same process would be repeated – find the next alarm that had not been sounded…

But the snag in this system became clear when the user manually advanced the time on the device (for example, on changing timezone, or, more dramatically, restoring the correct time after a system restart).  If a user had set a number of alarms, the device would insist on playing them all, one by one.  The user had no escape!
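The snag is easy to reproduce in a sketch of that queue logic (the names and data structure here are hypothetical – the point is the behaviour, not the original implementation): once the clock jumps forward, every unsounded alarm becomes overdue at once, and the loop replays them all.

```python
from dataclasses import dataclass

@dataclass
class Alarm:
    due: int            # due time, in minutes since some epoch
    text: str
    sounded: bool = False

def next_overdue(queue, now):
    """Return the earliest not-yet-sounded alarm that is overdue, or None."""
    for alarm in sorted(queue, key=lambda a: a.due):
        if not alarm.sounded and alarm.due <= now:
            return alarm
    return None

# Three alarms set while the clock read 100.
queue = [Alarm(110, "meeting"), Alarm(120, "call"), Alarm(130, "train")]

# The user manually advances the clock (new timezone, or a system restart):
# suddenly every alarm is overdue, and the device insists on sounding each one.
now = 500
replayed = []
while (alarm := next_overdue(queue, now)) is not None:
    replayed.append(alarm.text)    # alarm sounds; the user must acknowledge it
    alarm.sounded = True
```

An obvious fix is to mark alarms as already sounded (or discard them) whenever the user manually advances the clock past their due times.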

Buffer overflow (part one)

The next story on my list came to a head on a date something like the 13th of September 1989.  The date is significant – it was the first Wednesday (the day with the longest name) with a two-digit day-in-month in September (the month with the longest name).  You can probably guess how this story ends.

At that time, Psion engineers were creating the MC400 laptop – a device that was in many ways ahead of its time.  (You can see some screenshots here – though none of these shots feature the MC Alarms application.  My contribution to that software, by the way, included the Text Processor application, as well as significant parts of the UI framework.)

On the day in question, several of the prototype MC400 devices stopped working.  They’d all been working fine over the previous month or so.  Eventually we spotted the pattern – they all had alarms due, but the text for the date overflowed the pre-allocated memory storage that had been set aside to compose that text as it was displayed on the screen.  Whoops.
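The arithmetic is easy to check (the exact rendering on the MC400 may have differed – for instance “13th” rather than “13” – so treat this as a sketch): “Wednesday” is the longest day name, “September” the longest month name, and the 13th was the first two-digit Wednesday that September.

```python
import calendar

# Consider dates rendered as "Wednesday 13 September" and find the longest
# possible string -- the size any fixed buffer for that text must allow for.
day_names = list(calendar.day_name)           # Monday .. Sunday
month_names = list(calendar.month_name)[1:]   # January .. December

longest = max(
    (f"{day} {dom} {month}"
     for day in day_names
     for month in month_names
     for dom in (9, 13)),                     # one- vs two-digit day-in-month
    key=len,
)
# 9 ("Wednesday") + 1 + 2 ("13") + 1 + 9 ("September") = 22 characters
```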

“The kind of bug that other operating systems can only dream about”

Some time around 1991 I made a rash statement, which entered into Psion’s in-house listing of ill-guarded comments: “This is the kind of bug that other operating systems can only dream about”.  It was another alarms bug – this time in the Psion Series 3 software system.

It arose when the user had an Agenda file on a memory card (which were known, at the time, as SSDs – Solid State Disks), but had temporarily removed the card.  When the time came to sound an alarm from the Agenda, the alarm server requested the Agenda application to tell it when the next Agenda alarm would be due.  This required the Agenda application to read data from the memory card.  Because the file was already marked as “open”, the File Server in the operating system tried to display a low-level message on the screen – similar to the “Retry, Abort, or Cancel” message that users of MS-DOS might remember.  This required action from the Window Server, but the Window Server was temporarily locked, waiting for a reply from the Alarm Server.  The Alarm Server was in turn locked, waiting for the File Server – which, alas, was waiting (as previously mentioned) for the Window Server.  Deadlock.

Well, that’s as much as I can recall at the moment, but I do remember it being said at the time that the deadlock chain actually involved five interconnecting servers, so I may have forgotten some of the subtleties.  Either way, the result was that the entire device would freeze.  The only sign of life was that the operating system would still emit keyclicks when the user pressed keys – but the Window Server was unable to process these keys.

In practice, this bug would tend to strike unsuspecting users who had opened an SSD door at the time the alarm happened to be due – even the SSD door on the other side of the device (an SSD could be inserted on each side).  The hardware was unable to read from one SSD, even if it was still in place, if the other door happened to be open.  As you can imagine, this defect took some considerable time to track down.
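The three-server deadlock can be pictured as a cycle in a “wait-for” graph, which is how deadlock detectors model the problem in general. A minimal sketch, using the server names from the story:

```python
# Each server is blocked waiting for a reply from the next one along.
waits_for = {
    "WindowServer": "AlarmServer",   # locked, awaiting the Alarm Server's reply
    "AlarmServer": "FileServer",     # awaiting Agenda data from the memory card
    "FileServer": "WindowServer",    # trying to display "Retry, Abort, or Cancel"
}

def find_cycle(graph, start):
    """Walk the wait-for chain from `start`; return the cycle if one is reached."""
    seen = []
    node = start
    while node in graph:
        if node in seen:
            return seen[seen.index(node):]
        seen.append(node)
        node = graph[node]
    return None                      # the chain ended: no deadlock from here
```

Lock-ordering disciplines prevent such cycles from forming; in a message-passing design, the equivalent rule of thumb is that a server should never wait synchronously on anything that may, in turn, be waiting on it.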

“Death city Arizona”

At roughly the same time, an even worse alarms-related bug was uncovered.  In this case, the only way out was a cold reset, which lost all data in internal memory.  The recipe to obtain the bug went roughly as follows:

  • Supplement the built-in data of cities and countries, by defining a new city, which would be your home town
  • Observe that the operating system created a file “World.wld” somewhere on internal memory, containing the details of all the cities whose details you had added or edited
  • Find a way to delete that file
  • Restart the device.

In those days of limited memory, every extra server was viewed as an overhead to be avoided if possible.  For this reason, the Alarm Server and the World Server coexisted inside a single process, sharing as many resources as possible.  The Alarm Server managed the queue of alarms, from all different applications, and the World Server looked after access to the set of information about cities and countries.  For fast access during system startup, the World Server stored some information about the current home city.  But if the full information about the home city couldn’t be retrieved (because, for example, the user had deleted the World.wld file), the server went into a tailspin, and crashed.  The lower level operating system, noticing that a critical resource had terminated, helpfully restarted it – with identical conclusions.  Result: the lower priority applications and servers never had a chance to start up.  The user was left staring at a blank screen.

Buffer overflow (part two)

The software that composed the text to appear on the screen, when an alarm sounded, used the EPOC equivalent of “print with formatting”.  For example, a “%d” in the text would be replaced by a numerical value, depending on other parameters passed to the function.  Here, the ‘%’ character has a special meaning.

But what if the text supplied by the user itself contains a ‘%’ character?  For example, the alarm text might be “Revision should be 50% complete by today”.  Well, in at least some circumstances, the software went looking for another parameter passed to it, where none existed.  As you can imagine, all sorts of unintended consequences could result – including memory overflows.

Alarms not sounding!

Thankfully, the bugs above were all caught by in-house testing, before the device in question was released to customers.  We had a strong culture of fierce internal testing.  The last one, however, did make it into the outside world.  It impacted users who had the temerity to do the following:

  • Enter a new alarm in their Agenda
  • Switch the device off, before it had sufficient time to complete all its processing of which alarm would be the next to sound.

This problem hit users who accumulated a lot of data in their Agenda files.  In such cases, the operating system could take a non-negligible amount of time to reliably figure out what the next alarm would be.  So the user had a chance to power down the device before it had completed this calculation.  Given the EPOC focus on keeping the device in a low-power state as much as possible, the “Off” instruction was heeded quickly – too quickly in this case.  If the device had nothing else to do before that alarm was due, and if the user didn’t switch on the device for some other reason in the meantime, it wouldn’t get the chance to work out that it should be sounding that alarm.

Final thoughts re iPhone alarms

Psion put a great deal of thought into alarms:

  • How to implement them efficiently
  • How to ensure that users never missed alarms
  • How to provide the user with a great alarm experience.

For example, when an alarm becomes due on a Psion device, the sound starts quietly, and gradually gets louder.  If the user fails to acknowledge the alarm, the entire sequence repeats, after about one minute, then after about three minutes, and so on.  When the user does acknowledge the alarm, they have the option to stop it, silence it, or snooze it.  Pressing the snooze button adds another five minutes to the time before the alarm will sound again.  Pressing it three times, therefore, adds 15 minutes, and so on.  (And as a touch of grace: if you press the snooze button enough times, it emits a short click, and resets the time delay to five minutes – useful for sleepyheads who are too tired to take a proper look at the device, but who have enough of a desire to monitor the length of the snooze!)
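That snooze arithmetic can be sketched in a few lines. The five-minute step comes from the description above; the threshold at which the counter clicks and resets is my assumption, since the post doesn’t say how many presses trigger it:

```python
SNOOZE_STEP = 5     # minutes added per press of the snooze button
SNOOZE_CAP = 60     # hypothetical threshold at which the delay resets

def press_snooze(current_delay):
    """Return the new snooze delay after one button press."""
    new_delay = current_delay + SNOOZE_STEP
    if new_delay > SNOOZE_CAP:
        return SNOOZE_STEP           # short click: back to five minutes
    return new_delay

# Three presses add 15 minutes in total.
delay = 0
for _ in range(3):
    delay = press_snooze(delay)
```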

So it’s surprising to me that Apple, with its famous focus on user experience, seems to have given comparatively little thought to the alarms on that device.  When my wife started using an iPhone in the middle of last year, she found much in it to enchant her – but the alarms were far from delightful.  It seems that the default alarms sound only once, with a rather pathetic little noise which is easy to miss.  And when we looked, we couldn’t find options to change this behaviour.  I guess the iPhone team has other things on its mind!

28 January 2010

The iPad: more for less?

Filed under: Apple, complacency, iPhone, strategy — David Wood @ 12:36 pm

There are plenty of reasons to be critical about the Apple iPad.  If they feel inclined, Apple’s competitors and detractors can lick their lips.

For example, an article in Gizmodo enumerates “8 Things That Suck About the iPad”:

  1. Big, Ugly Bezel
  2. No Multitasking
  3. No Cameras
  4. Touch Keyboard
  5. No HDMI Out
  6. The Name “iPad”
  7. No Flash
  8. Adapters, Adapters, Adapters (“if you want to plug anything into this, such as a digital camera, you need all sorts of ugly adapters. You need an adapter for USB for god’s sake”)
  9. It’s Not Widescreen
  10. Doesn’t Support T-Mobile 3G (“it uses microSIMs that literally no one else uses”)
  11. A Closed App Ecosystem.

(The last three items on the list were added after the article was first published.)

In similar vein, Robert Scoble reported the view of his 16-year-old son: “iFail”:

  1. It isn’t compelling enough for a high school student who already has a Macintosh notebook and an iPhone.
  2. It is missing features that a high school student would like, like handwriting recognition to take notes, a camera to take pictures of the board in class (and girls), and the ability to print out documents for class.
  3. He hasn’t seen his textbooks on it yet, so the use case of replacing heavy textbooks hasn’t shown up yet.
  4. The gaming features aren’t compelling enough for him to give up either the Xbox or the iPhone. The iPhone wins because it fits in his pocket. The Xbox wins because of Xbox live so he can play against his friends (not to mention engaging HD quality and wide variety of titles).
  5. He doesn’t like the file limitations. His friends send him videos that he can’t play in iTunes and the iPad doesn’t support Flash.
  6. It isn’t game changing like the iPhone was.

However, let’s remember that the iPhone initially received a similar swathe of criticisms.  It, too, omitted lots of features that everyone took for granted would need to be part of a successful smartphone: multi-tasking, 3G, MMS, copy-and-paste…

The iPad shouldn’t be judged against existing markets.  Rather than participating in a “red ocean” that’s already swarming with active competitors, it has the chance to define and participate in an empty “blue ocean”.

  • Here, I’m using the language of W. Chan Kim and Renée Mauborgne of INSEAD.
  • Blue ocean products avoid matching existing products feature-for-feature.
  • They miss out some items completely, but, instead, deliver big time on some other points.

It’s similar to how Palm made the first commercially successful pen-based handheld computer.  In comparison to predecessors – like the Casio Zoomer, the General Magic “Magic Cap”, and (ironically) the Apple Newton – the Palm Pilot delivered much less functionality.  But what it did deliver was a delight to use.  (I made a similar point in an earlier blog posting, reviewing the growth of the iPhone market share: “Market share is no comfort”.)

This is the “less is more” philosophy.  It’s a good philosophy!

Around the world, hundreds of millions of people are saying to themselves: the iPad is not for them.  But a different, large, group of potential users are likely to be interested.

It’s early days, but it looks as if the iPad will support excellent browsing of many kinds of content – content that previously would be read in physical books, newspapers, and magazines.  That’s a big market.

What’s more, reports suggest that the iPad packs tremendous speed.  For example, John Gruber reports the following on Daring Fireball:

…the iPad is using a new CPU designed and made by Apple itself: the Apple A4. This is a huge deal. I got about 20 blessed minutes of time using the iPad demo units Apple had at the event today, and if I had to sum up the device with one word, that word would be “fast”.

It is fast, fast, fast…

I expected the screen size to be the biggest differentiating factor in how the iPad feels compared to an iPhone, but I think the speed difference is just as big a factor. Web pages render so fast it was hard to believe. After using the iPhone so much for two and a half years, I’ve become accustomed to web pages rendering (relative to the Mac) slowly. On the iPad, they seem to render nearly instantly. (802.11n Wi-Fi helps too.)

The Maps app is crazy fast. Apps launch fast. Scrolling is fast. The Photos app is fast.

…everyone I spoke to in the press room was raving first and foremost about the speed. None of us could shut up about it. It feels impossibly fast.

Speed, for the iPad, might be the special extra blast of usability that the new touch interface was for the iPhone.

7 January 2010

Mobiles manifesting AI

Filed under: AGI, Apple, futurist, intelligence, m2020, vision — David Wood @ 12:15 am

If you get lists from 37 different mobile industry analysts of “five game-changing mobile trends for the next decade”, how many overlaps will there be?  And will the most important ideas be found in the “bell” of the aggregated curve of predictions, or instead in the tails of the curve?

Of the 37 people who took part in the “m2020” exercise conducted by Rudy De Waele, I think I was the only person to mention either of the terms “AI” (Artificial Intelligence) or “PDA” (Personal Digital Assistant), as in the first of my five predictions for the 2010’s:

  • Mobiles manifesting AI – fulfilling, at last, the vision of “personal digital assistants”

However, there were some close matches:

  • Rich Wong predicted “Smart Agents 2.0 (thank you Patty Maes) become real; the ability to deduce/impute context from blend of usage and location data”;
  • Marshall Kirkpatrick predicted “Mobile content recommendation”;
  • Carlo Longino predicted “The mobile phone will evolve into an enabler device, carrying users’ digital identities, preferences and possessions around with them”;
  • Steve O’Hear predicted “People will share more and more personal information. Both explicit e.g. photo and video uploads or status updates, and implicit data. Location sharing via GPS (in the background) is one current example of implicit information that can be shared, but others include various sensory data captured automatically via the mobile phone e.g. weather, traffic and air quality conditions, health and fitness-related data, spending habits etc. Some of this information will be shared privately and one-to-one, some anonymously and in aggregate, and some increasingly made public or shared with a user’s wider social graph. Companies will provide incentives, both at the service level or financially, in exchange for users sharing various personal data”;
  • Robert Rice predicted “Artificial Life + Intelligent Agents (holographic personalities)”.

Of course, these predictions cover a spread of different ideas.  Here’s what I had in mind for mine:

  • Our mobile electronic companions will know more and more about us, and will be able to put that information to good use to assist us better;
  • For example, these companion devices will be able to make good recommendations (e.g. mobile content, or activities) for us, suggest corrections and improvements to what we are trying to do, and generally make us smarter all-round.

The idea is similar to what John Sculley, former CEO of Apple, often talked about during his tenure there.  From a history review article about the Newton PDA:

John Sculley, Apple’s CEO, had toyed with the idea of creating a Macintosh-killer in 1986. He commissioned two high budget video mockups of a product he called Knowledge Navigator. Knowledge Navigator was going to be a tablet the size of an opened magazine, and it would have very sophisticated artificial intelligence. The machine would anticipate your needs and act on them…

Sculley was enamored with Newton, especially Newton Intelligence, which allowed the software to anticipate the behavior of the user and act on those assumptions. For example, Newton would filter an AppleLink email, hyperlink all of the names to the address book, search the email for dates and times, and ask the user if it should schedule an event.

As we now know, the Apple Newton fell seriously short of expectations.  The performance of its “intelligent assistance” became something of a joke.  However, there’s nothing wrong with the concept itself.  It just turned out to be a lot harder to implement than originally imagined.  The passage of time is bringing us closer to actually useful systems.

Many of the interfaces on desktop computers already show an intelligent understanding of what the user may be trying to accomplish:

  • Search bars frequently ask, “Did you mean to search for… instead of…?” when I misspell a search term;
  • I’ve almost stopped browsing through my list of URL bookmarks; I just type a few characters into the URL bar and the web-browser lists websites it thinks I might be trying to find – including some from my bookmarks, some pages I visit often, and some pages I’ve visited recently;
  • It’s the same for finding a book on Amazon.com – the list of “incrementally matching books” can be very useful, even after only typing part of a book’s title;
  • And it’s the same using the Google search bar – the list of “suggested search phrases” contains, surprisingly often, something I want to click on;
  • The set of items shown in “context-sensitive menus” often seems a much smarter fit to my needs, nowadays, than it did when the concept was first introduced.
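The incremental matching in these examples boils down to filtering several candidate sources as the user types, and ranking the results – bookmarks before plain history, say. A minimal sketch (the function, ranking, and data are all hypothetical):

```python
def suggest(typed, bookmarks, history, limit=5):
    """Return up to `limit` URLs matching the typed fragment, bookmarks first."""
    typed = typed.lower()
    hits = [url for url in bookmarks if typed in url.lower()]
    hits += [url for url in history if typed in url.lower() and url not in hits]
    return hits[:limit]

bookmarks = ["https://news.bbc.co.uk", "https://www.amazon.com"]
history = ["https://www.bbc.co.uk/weather", "https://blog.example.org"]
# suggest("bbc", bookmarks, history) ranks the bookmark ahead of the history page
```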

On mobile, search is frequently further improved by subsetting results depending on location.  As another example, typing a few characters into the home screen of the Nokia E72 smartphone results in a list of possible actions for people whose contact details match what’s been typed.

Improving the user experience with increasingly complex mobile devices, therefore, will depend not just on clearer graphical interfaces (though that will help too), but on powerful search engines that are able to draw upon contextual information about the user and his/her purpose.

Over time, it’s likely that our mobile devices will be constantly carrying out background processing of clues, making sense of visual and audio data from the environment – including processing the stream of nearby spoken conversation.  With the right algorithms, and with powerful hardware capabilities – and provided issues of security and privacy are handled in a satisfactory way – our devices will fulfill more and more of the vision of being a “personal digital assistant”.

That’s part of what I mean when I describe the 2010’s as “the decade of nanotechnology and AI”.

5 October 2008

iWoz inspires iMAX

Filed under: Apple, books, collaboration, innovation, Psion — David Wood @ 8:57 am

Last Wednesday, Apple co-founder Steve Wozniak addressed a gathering of several hundred business people in London’s large-format IMAX cinema, as part of a series of events organised by the London Business Forum. The theme was “Apple Innovation”. Since the IMAX is just 15 minutes walk from Symbian’s HQ, this opportunity was too good for me to miss. I hoped Wozniak’s account of Apple culture might shed some new light on the all-conquering iPhone. I was not disappointed.

Wozniak spoke for more than an hour, without slides, running through a smorgasbord of anecdotes from his own life history. It was riveting and inspiring. Later I realised that most of the material had already been published in Wozniak’s 2006 book “iWoz: Computer geek to cult icon: How I invented the personal computer, co-founded Apple, and had fun doing it”, which was given out at the event.

I warmed to Wozniak early on in his talk, when he described one of his early experiments in software – writing a program to solve the “Knight’s tour” around a chessboard. I remembered writing a program to try to solve the same problem while at roughly the same age – and had a similar result. In my case, programs were sent off from school to the local Aberdeen University, where clerical staff typed them in and submitted them on behalf of children in neighbouring schools. This program was returned several days later with the comment that there was no output – operators had terminated it.

A few weeks later, there was a short residential course at the university for sixth form students, which I attended. I modified my program to tackle a 5×5 board instead, and was happy to see the computer quickly spitting out numerous solutions. I changed the board size to 6×6 and waited … and waited … and after around 10 minutes, a solution was printed out. Wozniak’s experience was several years before mine. As he describes it, the computer he was using could do one million calculations a second – which sounded like a huge number. So the lack of any output from his program was a big disappointment – until he calculated that it would actually take the computer about 10^25 years to finish this particular calculation!

More than half the “iWoz” book covers Wozniak’s life pre-Apple. It’s by turns heart-warming and (when describing Wozniak’s pranks and phreaking) gob-smacking.

The episode about HP turning down the idea of the Apple I computer was particularly thought-provoking. Wozniak was working at HP before Apple was founded, and being loyal to his company (which he firmly admired for being led by engineers who in turn deeply respected other engineers) he offered them the chance to implement the ideas he had devised outside work time for what would become, in effect, the world’s first useful personal computer. Although his managers at HP showed considerable interest, they were not able to set aside their standard, well-honed processes in order to start work on what would have been a new kind of project. Wozniak says that HP turned him down five times, before he eventually resigned from the company to invest his energy full-time into Apple. It seems like a classic example of the Innovator’s Dilemma – in which even great companies can fail “by doing everything right”: their “successes and capabilities can actually become obstacles in the face of changing markets and technologies”.

Via numerous anecdotes, Wozniak describes a set of characteristics which are likely to lead to product success:

  • Technical brilliance coupled with patience and persistence. (Wozniak tells a fascinating account of how he and one helper – Randy Wigginton, at the time still at senior high school – created a brand new floppy disk drive controller in just two weeks, without any prior knowledge of disk drives);
  • A drive for simplicity of design (such as using a smaller number of parts, or a shorter algorithm) and superb efficiency of performance;
  • Users should feel an emotional attachment to the product: “Products should be obviously the best”;
  • Humanism: “The human has to be more important than the technology”.

There are shades of the iPhone experience in all these pieces of advice – even though the book iWoz was written before the iPhone was created.

There are even stronger shades of the iPhone experience in the following extracts from the book:

The Apple II was easy to program, in both BASIC (100 commands per second) and machine language (1M commands per second)… Within months dozens of companies started up and they were putting games on cassette tape for the Apple II; these were all start-up companies, but thanks to our design and documentation, we made it easy to develop stuff that worked on our platform…

… the computer magazines had tons of Apple II product ads for software and hardware. Suddenly the Apple II name was everywhere. We didn’t have to buy an advertisement or do anything ourselves to get the name out. We were just out there, thanks to this industry of software programs and hardware devices that sprang up around the Apple II. We became the hot fad of the age, and all the magazines (even in the mainstream press) started writing great things about us. Everywhere you looked. I mean, we couldn’t buy that kind of publicity. We didn’t have to.

In this way, the Apple II quickly reached sales figures far higher than anyone had dared to predict. One other factor played a vital role:

VisiCalc was so powerful it could only run on the Apple II: only our computer had enough RAM to run it.

But sales bandwagons can lose their momentum. The iPhone bandwagon will falter to the extent that other smartphones turn out to be more successful at running really valuable applications (such as applications that can run in the background, in ways that aren’t possible on the iPhone).

Apple also lost some of its momentum with the less reliable Apple III, the product that followed the Apple II. Wozniak has no doubts about the root cause of the Apple III’s failure: “it was developed by committee, by the marketing dept”. This leads on to the disappointing advice that Wozniak gives in the final chapter of his book: “Work alone”!

Here, I part company with Wozniak. I’ve explained before my view that “design by committee” can achieve, with the right setup, some outstanding results. That was my experience inside Psion. However, I do agree that the process needs to involve some first-class product managers, who have a powerful and authentic vision for the product.
