dw2

22 February 2013

Controversies over singularitarian utopianism

I shouldn’t have been surprised at the controversy that arose.

The cause was an hour-long lecture with 55 slides that ranged far and wide over disruptive near-future scenarios, covering both the upside and the downside. The basic format of the lecture was: first the good news, then the bad news. As stated on the opening slide,

Some illustrations of the enormous potential first, then some examples of how adding a high level of ambient stupidity might mean we might make a mess of it.

The speaker was Ian Pearson, described on his company website as “futurologist, conference speaker, regular media guest, strategist and writer”. The website continues, boldly,

Anyone can predict stuff, but only a few get it right…

Ian Pearson has been a full time futurologist since 1991, with a proven track record of over 85% accuracy at the 10 year horizon.

Ian was speaking, on my invitation, at the London Futurists last Saturday. His chosen topic was audacious in scope:

A Singularitarian Utopia Or A New Dark Age?

We’re all familiar with the idea of the singularity, the end-result of rapid acceleration of technology development caused by positive feedback. This will add greatly to human capability, not just via gadgets but also through direct body and mind enhancement, and we’ll mess a lot with other organisms and AIs too. So we’ll have superhumans and super AIs as part of our society.

But this new technology won’t bring a utopia. We all know that some powerful people, governments, companies and terrorists will also add lots of bad things to the mix. The same technology that lets you enhance your senses or expand your mind also allows greatly increased surveillance and control, eventually to the extremes of direct indoctrination and zombification. Taking the forces that already exist, of tribalism, political correctness, secrecy for them and exposure for us, and so on, it’s clear that the far future will be a weird mixture of fantastic capability, spoiled by abuse…

There were around 200 people in the audience, listening as Ian progressed through a series of increasingly mind-stretching technology opportunities. Judging by the comments posted online afterwards, some of the audience deeply appreciated what they heard:

Thank you for a terrific two hours, I have gone away full of ideas; I found the talk extremely interesting indeed…

I really enjoyed this provocative presentation…

Provocative and stimulating…

Very interesting. Thank you for organizing it!…

Amazing and fascinating!…

But not everyone was satisfied. Here’s an extract from one negative comment:

After the first half (a trippy sub-SciFi brainstorm session) my only question was, “What Are You On?”…

Another audience member wrote his own blogpost about the meeting:

A Singularitanian Utopia or a wasted afternoon?

…it was a warmed-over mish-mash of technological cornucopianism, seasoned with Daily Mail-style reactionary harrumphing about ‘political correctness gone mad’.

These are just the starters of negative feedback; I’ll get to others shortly. As I review what was said in the meeting, and look at the spirited ongoing exchange of comments online, some thoughts come to my mind:

  • Big ideas almost inevitably provoke big reactions; this talk had a lot of particularly big ideas
  • In some cases, the negative reactions to the talk arise from misunderstandings, due in part to so much material being covered in the presentation
  • In other cases, I see the criticisms as reactions to the seeming over-confidence of the speaker (“…a proven track record of over 85% accuracy”)
  • In yet other cases, I share the negative reactions the talk generated; my own view of the near-future landscape significantly differs from the one presented on stage
  • In nearly all cases, it’s worth taking the time to progress the discussion further
  • After all, if we get our forecasts of the future wrong, and fail to make adequate preparations for the disruptions ahead, it could make a huge difference to our collective well-being.

So let’s look again at some of the adverse reactions. My aim is to raise them in a way that people who didn’t attend the talk should be able to follow the analysis.

(1) Is imminent transformation of much of human life a realistic scenario? Or are these ideas just science fiction?

The main driver for belief in the possible imminent transformation of human life, enabled by rapidly changing technology, is the observation of progress towards “NBIC” convergence.

Significant improvements are taking place, almost daily, in our capabilities to understand and control atoms (Nano-tech), genes and other areas of life-sciences (Bio-tech), bits (Info-comms-tech), and neurons and other areas of mind (Cogno-tech). Importantly, improvements in these different fields are interacting with each other.

As Ian Pearson described the interactions:

  • Nanotech gives us tiny devices
  • Tiny sensors help neuroscience figure out how the mind works
  • Insights from neuroscience feed into machine intelligence
  • Improving machine intelligence accelerates R&D in every field
  • Biotech and IT advances make body and machine connectable

Will all the individual possible applications of NBIC convergence described by Ian happen in precisely the way he illustrated? Very probably not. The future’s not as predictable as that. But something similar could well happen:

  • Cheaper forms of energy
  • Tissue-cultured meat
  • Space exploration
  • Further miniaturisation of personal computing (wearable computing, and even “active skin”)
  • Smart glasses
  • Augmented reality displays
  • Gel computing
  • IQ and sensory enhancement
  • Dream linking
  • Human-machine convergence
  • Digital immortality: “the under 40s might live forever… but which body would you choose?”

(2) Is a focus on smart cosmetic technology an indulgent distraction from pressing environmental issues?

Here’s one of the comments raised online after the talk:

Unfortunately any respect due was undermined by his contempt for the massive environmental challenges we face.

Trivial contact lens / jewellery technology can hang itself, if our countryside is choked by yoghurt factory fumes.

The reference to jewellery took issue with remarks in the talk such as the following:

Miniaturisation will bring everyday IT down to jewellery size…

Decoration; Social status; Digital bubble; Tribal signalling…

In contrast, the talk positioned greater use of technology as the solution to environmental issues, rather than as something to exacerbate these issues. Smaller (jewellery-sized) devices, created with a greater attention to recyclability, will diminish the environmental footprint. Ian claimed that:

  • We can produce more of everything than people need
  • Improved global land management could feed up to 20 billion people
  • Clean water will be plentiful
  • We will also need less and waste less
  • Long term pollution will decline.

Nevertheless, he acknowledged that there are some short-term problems, ahead of the time when accelerating NBIC convergence can be expected to provide more comprehensive solutions:

  • Energy shortage is a short to mid term problem
  • Real problems are short term.

Where there’s room for real debate is the extent of these shorter-term problems. Discussion on the threats from global warming brought these disagreements into sharp focus.

(3) How should singularitarians regard the threat from global warming?

Towards the end of his talk, Ian showed a pair of scales, weighing up the wins and losses of NBIC technologies and a potential singularity.

The “wins” column included health, growth, wealth, fun, and empowerment.

The “losses” column included control, surveillance, oppression, directionless, and terrorism.

One of the first questions from the floor, during the Q&A period in the meeting, asked why the risk of environmental destruction was not on the list of possible future scenarios. This criticism was echoed by online comments:

The complacency about CO2 going into the atmosphere was scary…

If we risk heading towards an environmental abyss let’s do something about what we do know – fossil fuel burning.

During his talk, I picked up on one of Ian’s comments about not being particularly concerned about the risks of global warming. I asked, what about the risks of adverse positive feedback cycles, such as increasing temperatures triggering the release of vast ancient stores of methane gas from frozen tundra, accelerating the warming cycle further? That could lead to temperature increases that are much more rapid than presently contemplated, along with lots of savage disturbance (storms, droughts, etc).

Ian countered that it was a possibility, but he had the following reservations:

  • He thought these positive feedback loops would only kick into action when baseline temperature rose by around 2 degrees
  • In the meantime, global average temperatures have stopped rising, over the last eleven years
  • He estimates he spends a couple of hours every day, keeping an eye on all sides of the global warming debate
  • There are lots of exaggerations and poor science on both sides of the debate
  • Other factors such as the influence of solar cycles deserve more research.

Here’s my own reaction to these claims:

  • The view that global average temperatures have stopped rising is, among serious scientists, very much a minority position; see e.g. this rebuttal on Carbon Brief
  • Even if there’s only a small probability of a runaway spurt of accelerated global warming in the next 10-15 years, we need to treat that risk very seriously – in the same way that, for example, we would be loath to take a transatlantic flight if we were told there was a 5% chance of the airplane disintegrating mid-flight.

Nevertheless, I did not want the entire meeting to divert into a debate about global warming – “that deserves a full meeting in its own right”, I commented, before moving on to the next question. In retrospect, perhaps that was a mistake, since it may have caused some members of the audience to mentally disengage from the meeting.

(4) Are there distinct right-wing and left-wing approaches to the singularity?

Here’s another comment that was raised online after the talk:

I found the second half of the talk to be very disappointing and very right-wing.

And another:

Someone who lists ‘race equality’ as part of the trend towards ignorance has shown very clearly what wing he is on…

In the second half of his talk, Ian outlined changes in norms of beliefs and values. He talked about the growth of “religion substitutes” via a “random walk of values”:

  • Religious texts used to act as a fixed reference for ethical values
  • Secular society has no fixed reference point so values oscillate quickly.
  • 20 years can yield 180 degree shift
  • e.g. euthanasia, sexuality, abortion, animal rights, genetic modification, nuclear energy, family, policing, teaching, authority…
  • Pressure to conform reinforces relativism at the expense of intellectual rigour

A complicating factor here, Ian stated, was that

People have a strong need to feel they are ‘good’. Some of today’s ideological subscriptions are essentially secular substitutes for religion, and demand same suspension of free thinking and logical reasoning.

A few slides later, he listed examples of “the rise of nonsense beliefs”:

e.g. new age, alternative medicine, alternative science, 21st century piety, political correctness

He also commented that “99% are only well-informed on trivia”, such as fashion, celebrity, TV culture, sport, games, and chat virtual environments.

This analysis culminated with a slide that strongly resonated with me personally: a curve of “anti-knowledge” accelerating and overtaking a curve of “knowledge”:

In pursuit of social compliance, we are told to believe things that are known to be false.

With clever enough spin, people accept them and become worse than ignorant.

So there’s a kind of race between “knowledge” and “anti-knowledge”.

One reason this resonated with me is that it seemed like a different angle on one of my own favourite metaphors for the challenges of the next 15-30 years – the metaphor of a dramatic race:

  • One runner in the race is “increasing rationality, innovation, and collaboration”; if this runner wins, the race ends in a positive singularity
  • The other runner in the race is “increasing complexity, rapidly diminishing resources”; if this runner wins, the race ends in a negative singularity.

In the light of Ian’s analysis, I can see that the second runner is aided by the increase of anti-knowledge: over-attachment to magical, simplistic, ultimately misleading worldviews.

However, it’s one thing to agree that “anti-knowledge” is a significant factor in determining the future; it’s another thing to agree which sets of ideas count as knowledge, and which as anti-knowledge! One of Ian’s slides included the following list of “religion substitutes”:

Animal rights, political correctness, pacifism, vegetarianism, fitness, warmism, environmentalism, anti-capitalism

It’s no wonder that many of the audience felt offended. Why list “warmism” (a belief in human-caused global warming), but not “denialism” (denial of human-caused global warming)? Why list “anti-capitalism” but not “free market fundamentalism”? Why list “pacifism” but not “militarism”?

One online comment made a shrewd observation:

Ian raised my curiosity about ‘false beliefs’ (or nonsense beliefs as Ian calls them) as I ‘believe’ we all inhabit different belief systems – so what is true for one person may be false for another… at that exact moment in time.

And things can change. Once upon a time, it was a nonsense belief that the world was round.

There may be 15% of truth in some nonsense beliefs…or possibly even 85% truth. Taking ‘alternative medicine’ as an example of one of Ian’s nonsense beliefs – what if two of the many reasons it was considered nonsense were that (1) it is outside the world (the system) of science and technology and (2) it cannot be controlled by the pharmaceutical companies (perhaps our high priests of today)?

(5) The role of corporations and politicians in the approach to the singularity

One place where the right-wing / left-wing division becomes more acute is the question of whether anything special needs to be done to control the behaviour of corporations (businesses).

One of Ian’s strong positive recommendations, at the end of his presentation, was that scientists and engineers should become more actively involved in educating the general public about issues of technology. Shortly afterward, the question came from the floor: what about actions to educate or control corporations? Ian replied that he had very little to recommend to corporations, over and above his recommendations to the individuals within these corporations.

My own view is different. From my life inside industry, I’ve seen numerous cases of good people who are significantly constrained in their actions by the company systems and metrics in which they find themselves enmeshed.

Indeed, just as people should be alarmed about the prospects of super-AIs gaining too much power, over and above the humans who created them, we should also be alarmed about the powers that super-corporations are accumulating, over and above the powers and intentions of their employees.

The argument to leave corporations alone finds its roots in ideologies of freedom: government regulation of corporations often has undesirable side-effects. Nevertheless, that’s just an argument for being smarter and more effective in how the regulation works – not an argument to abstain from regulation altogether.

The question of the appropriate forms of collaborative governance remains one of the really hard issues facing anyone concerned about the future. Leaving corporations to find their own best solutions is, in my view, very unlikely to be the optimum approach.

In terms of how “laissez-faire” we should be, in the face of potential apocalypse down the road, I agree with the assessment near the end of Jeremy Green’s blogpost:

Pearson’s closing assertion that in the end our politicians will always wake up and pull us back from the brink of any disaster is belied by many examples of civilisations that did not pull back and went right over the edge to destruction.

Endnote:

After the presentation in Birkbeck College ended, around 40-50 of the audience regrouped in a nearby pub, to continue the discussion. The discussion is also continuing, at a different tempo, in the online pages of the London Futurists meetup. Ian Pearson deserves hearty congratulation for stirring up what has turned out to be an enlightening discussion – even though there’s heat in the comments as well as light!

Evidently, the discussion is far from complete…

30 December 2011

Factors slowing the adoption of tablet computers in hospital

Filed under: Connected Health, mHealth, security, tablets, usability — David Wood @ 12:35 pm

Tablet computers seem particularly well suited to usage by staff inside hospitals.  They’re convenient and ergonomic.  They put huge amounts of relevant information right in the hands of clinicians, as they move around wards.  Their screens allow display of complex medical graphics, which can be manipulated in real time.  Their connectivity means that anything entered into the device can (in contrast to notes made on old-world paper pads) easily be backed up, stored, and subsequently searched.

Here’s one example, taken from an account by Robert McMillan in his fascinating Wired Enterprise article “Apple’s Secret Plan to Steal Your Doctor’s Heart”:

Elliot Fishman, a professor of radiology at Johns Hopkins… is one of a growing number of doctors who look at the iPad as an indispensable assistant to his medical practice. He studies 50 to 100 CT scans per day on his tablet. Recently, he checked up on 20 patients in his Baltimore hospital while he was traveling in Las Vegas. “What this iPad does is really extend my ability to be able to consult remotely anytime, anywhere,” he says. “Anytime I’m not at the hospital, I’m looking at the iPad.”

For some doctors at Johns Hopkins, the iPad can save an hour to an hour and a half per day — time that would otherwise be spent on collecting paper printouts of medical images, or heading to computer workstations to look them up online. Many doctors say that bringing an iPad to the bedside lets them administer a far more intimate and interactive level of care than they’d previously thought possible. Even doctors who are using an iPad for the first time often become attached, Fishman says. “Their biggest fear is what if we took it away.”

However, a thoughtful review by Jenny Gold, writing in Kaiser Health News, points out that there are many factors slowing down the adoption of tablets in hospital:

iPads have been available since April 2010, but less than one percent of hospitals have fully functional tablet systems, according to Jonathan Mack, director of clinical research and development at the West Wireless Health Institute, a San Diego-based nonprofit focused on lowering the cost of health care through new technology…

UC San Diego Health System’s experience with iPads illustrates both the promise and the challenge of using tablet technology at hospitals. Doctors there have been using the iPad since it first came out, but a year and a half later, only 50 to 70 –less than 10 percent of physicians– are using them…

Here’s a list of the factors Gold notes:

  1. The most popular systems for electronic medical records (EMRs) don’t yet make apps that allow doctors to use EMRs on a tablet the way they would on a desktop or laptop. To use a mobile device effectively requires a complete redesign of the way information is presented.  For example, the EMR system used at UC San Diego is restricted to a read-only app for the iPad, meaning it can’t be used for entering all new information.  (To get around the problem, doctors can log on through another program called Citrix. But because the product is built on a Windows platform and meant for a desktop, it can be clunky on an iPad and difficult to navigate.)
  2. Spotty wireless coverage at the hospital means doctors are logged off frequently as they move about the hospital, cutting off their connection to the EMR
  3. The iPad doesn’t fit in the pocket of a standard white lab coat. Clinicians can carry it around in a messenger bag, but it’s not convenient
  4. There are also worries about the relative newness of the technology, and whether adequate vetting has taken place over patient privacy or data security.  For example, as my former Symbian colleague Tony Naggs asks, what happens if tablets are lost or stolen?
  5. Some clinicians complain that tablet computers are difficult to type on, especially if they have “fat fingers”.

Let’s take another look at each of these factors.

1. Mobile access to EMRs

Yes, there are significant issues involved:

  • The vast number of different EMRs in use.  Black Book Rankings regularly provide a comparative evaluation of different EMRs, including a survey released on 3 November 2011 that covered 422 different systems
  • Slower computing performance on tablets, whose power inevitably lags behind desktops and laptops
  • Smaller display and lack of mouse means the UI needs to be rethought.

However, as part of an important convergence of skillsets, expert mobile software developers are learning more and more about the requirements of medical systems.  So it’s only a matter of time before mobile access to EMRs improves – including write access as well as read access.

Note this will typically require changes on both the handset and the EMR backend, to support the full needs of mobile access.

2. Intermittent wireless coverage

In parallel with improvements on software, network improvements are advancing.  Next generation WiFi networks are able to sustain connections more reliably, even in the complex topography of hospitals.

Note that the costs of a possible WiFi network upgrade need to be borne in mind when hospitals are considering rolling out tablet computer solutions.

3. Sizes of devices

Tablets with different screen sizes are bound to become more widely deployed.  Sticking with a small number of screen sizes (for example, just two, as is the case with iOS) has definite advantages from a programmer’s point of view, since fewer different screen configurations need to be tested.  But the increasing imperative to supply devices that are intermediate in size between smartphone and iPad means that at least some developers will become smarter in supporting a wider range of screen sizes.

4. Device security

Enterprise software already has a range of solutions available to manage a suite of mobile devices.  This includes mechanisms such as remote lockdown and remote wipe, in case any device becomes lost or stolen.

With sufficient forethought, these systems can even be applied in cases when visiting physicians want to bring their own, personal handheld computer with them to work in a particular hospital.  Access to the EMR of that hospital would be gated by the device first agreeing to install some device management software which monitors the device for subsequent inappropriate usage.
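
As a rough sketch of that gating model — with field names invented for illustration, not taken from any particular mobile device management product — the hospital’s EMR gateway might check something like the following before granting a session:

```cpp
// Illustrative only: a toy model of gating EMR access on the status
// reported by the device management software; field names are invented.
struct DeviceStatus {
    bool managementAgentInstalled;  // hospital's management agent present?
    bool remoteWipeEnabled;         // can the device be wiped if lost?
    bool passcodeSet;               // basic local protection in place?
    bool reportedLostOrStolen;      // flagged by the owner or an admin?
};

// A visiting physician's personal tablet only gets an EMR session once
// it is enrolled, wipeable, locked, and not reported missing.
bool mayAccessEmr(const DeviceStatus& d) {
    return d.managementAgentInstalled
        && d.remoteWipeEnabled
        && d.passcodeSet
        && !d.reportedLostOrStolen;
}
```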

5. New user interaction modes

Out of all the disincentives to wider usage of tablet computers in hospitals, the usability issue may be the most significant.

Usability paradigms that make sense for devices with dedicated keyboards probably aren’t optimal when part of the screen has to double as a makeshift keyboard.  This can cause the kind of frustration voiced by Dr. Joshua Lee, chief medical information officer at UC San Diego (as reported by Jenny Gold):

Dr Lee occasionally carries his iPad in the hospital but says it usually isn’t worth it.  The iPad is difficult to type on, he complains, and his “fat fingers” struggle to navigate the screen. He finds the desktop or laptop computers in the hospital far more convenient. “Are you ever more than four feet away from a computer in the hospital? Nope,” he says. “So how is the tablet useful?”

But that four feet gap (and it’s probably frequently larger than that) can make all the difference to the spontaneity of an interaction.  In any case, there are many drawbacks to using a standard PC interface in a busy clinical setting.  Robert McMillan explains:

Canada’s Ottawa Hospital uses close to 3,000 iPads, and they’re popping up everywhere — in the lab coats of attending physicians, residents, and pharmacists. For hospital CIO Dale Potter, the iPad gave him a way out of a doomed “computer physician order entry” project that was being rolled out hospital-wide when he started working there in 2009.

It sounds complicated, but computerized physician order entry really means something simple: replacing the clipboards at the foot of patient’s beds with a computer, so that doctors can order tests, prescribe drugs, and check medical records using a computer rather than pen and paper. In theory, it’s a great idea, but in practice, many of these projects have failed, in part because of the clunky and impersonal PC interfaces: Who really wants to sit down and start clicking and clacking on a PC, moving a mouse while visiting a patient?

Wise use of user experience design skills is likely to result in some very different interaction styles, in such settings, in the not-too-distant future.

Aside: if even orang utans find ways to enjoy interacting with iPads, there are surely ways to design UIs that suit busy, clumsy-fingered medical staff.

6. Process transformation

That leads to one further thought.  The biggest gains from tablet computers in hospitals probably won’t come from merely enabling clinicians to follow the same processes as before, only faster and more reliably (important though these improvements are).  More likely, the handy availability of tablets will enable clinicians to devise brand new processes – processes that were previously unthinkable.

As with all process change, there will be cultural mindset issues to address, in addition to ensuring the technology is fit for purpose.  No doubt there will be some initial resistance to new ways of doing things.  But in time, with the benefit of positive change management, good new habits will catch on.

29 December 2011

From hospital care to home care – the promise of Connected Health

Filed under: challenge, Connected Health, converged medicine, healthcare, mHealth, usability — David Wood @ 12:01 pm

  • At least one in four hospital patients would be better off being treated by NHS staff at home

That claim is reported on today’s BBC news website.  The article addresses an issue that is important from several viewpoints: social, financial, and personal:

NHS Confederation: Hospital-based care ‘must change’

The NHS in England must end the “hospital-or-bust” attitude to medical care, says the body representing health service trusts.

At least one in four patients would be better off being treated by NHS staff at home, figures suggest.

2012 will be a key year for the NHS as it tries to make £20bn in efficiency savings by 2015, according to the head of the NHS Confederation, Mike Farrar.

Ministers say modernising the NHS will safeguard its future.

Mr Farrar said: “Hospitals play a vital role but we do rely on them for some services which could be provided elsewhere.

“We should be concentrating on reducing hospital stays where this is right for patients, shifting resources into community services, raising standards of general practice, and promoting early intervention and self-care.

“There is a value-for-money argument for doing this, but it is not just about money and the public need to be told that – this is about building an NHS for the future.”

Mr Farrar said the required changes included treating frail people in their homes, and minimising hospital stays wherever possible.

Politicians and NHS leaders must show the public how these changes could improve care, rather than focusing on fears over the closure of hospital services, he added.

“Many of our hospitals know that the patients that they are treating in their beds on any given day could be treated better – with better outcomes for them and their families – if they were treated outside of hospitals in community or primary care,” he told BBC Radio 4’s Today programme.

Mr Farrar told Today that people had become used to “the hospital being a place of default” and that primary and community healthcare services had sometimes been under-funded.

But he said even where clinicians knew that better care could be provided outside of hospitals, and politicians accepted this privately, the public debate had not helped individuals understand that…

Some of the replies posted online are sceptical:

As a medical doctor based in hospitals, I believe this will not work logistically. Patients are sent to hospitals as they don’t get the specialist care in the community as the skills/services are inadequate/not in place. Patient attitudes must change as many come to a+e against GP advice as they don’t have confidence in community care…

As long as the selfish British public can’t be bothered looking after their own relatives and see hospitals as convenient granny-dumping centres, there is absolutely no way this would work.

There can not be a perfect solution. Not every family can care for a sick person full time, often due to them working. Hospital care may not be a perfect, yet in some cases it does free relatives to be able to work.  Outsourcing care too has a major downside, my wife has done that for years. 15 mins twice a day, can hardly be called acceptable if you apply some form of dignity to the patient.

I saw too many patients I nursed(often elderly or with pre-existing health conditions) kept in hospital too long because no one to care for them at home/wider community. This wasn’t great for them but also blocked an acute bed for someone else. In recent years the pendulum’s swung too far the other way: too many patients discharged without adequate support…

In summary: care in the community would be better in many, many cases, but it’s demanding and challenging:

  • There are social challenges: relatives struggle to put their own lives and careers on hold, to act as caregivers.
  • There are financial challenges: funding for medicine is often preferentially directed to large, centralised hospitals.
  • There are skills challenges: observation of complicated chronic health conditions is more easily carried out in the proximity of specialists.

However, the movement “from hospital care to home care” continues to gather steam – for good reason.  This was a major theme of the mHealth Summit I attended earlier this month in Washington DC.  I was particularly struck by a vision articulated by Rick Cnossen, director of worldwide health information technology at Intel:

In the next 10 years 50% of health care could be provided through the “brickless clinic,” be it the home, community, workplace or even car

As reported in the summary article by Kate Ackerman, “mHealth: Closing the Gap Between Promise and Adoption”:

Cnossen said the technology — such as mobile tools, telehealth, personal health records and social networking — already exists to make this possible. He said, “We have the technology. … It’s time to move out on it.”

Fellow speaker Hamadoun Toure, secretary general of the International Telecommunication Union, took up the same theme:

Mobile phones will increase personal access to health information, mHealth and broadband technology will improve data collection and disease surveillance, patient monitoring will improve and become more prevalent, and remote consulting and diagnosis will be enhanced, thanks to low-cost devices.

“In the near future, more people will access the Internet through mobile devices than through fixed devices,” Toure said. “We are witnessing the fastest change in human history, and I believe (we have) a great opportunity for social development.”

Connected health technology enables better remote monitoring of personal medical data, earlier warnings of potential relapses, remote diagnostics, quicker access to technical information, better compliance with prescription regimes, and much, much more.

But Kate Ackerman raises the question,

So if the technology already exists and leaders from both the public and private sectors see the need, why has progress in mobile health been slow?

It’s an important question.  Intel’s Rick Cnossen gives his answer, as follows:

“The challenge is not a technology problem, it’s a business and a workflow problem.”

Cnossen said, “At the end of the day, mHealth is not about smartphones, gadgets or even apps. It’s about holistically driving transformation,” adding, “mHealth is about distributing care beyond clinics and hospitals and enabling new information-rich relationships between patients, clinicians and caregivers to drive better decisions and behaviors…”

He said health care clinicians can be resistant to change, adding, “We need to introduce technology into the way to do their business, not the other way around.”

Cnossen also said that payment reform is essential for “mHealth to survive and thrive.” He said, “We should not be fighting for reimbursement codes for each health device and app. That is ultimately a losing proposition. Instead, we must fight for payment reform to pay for value over volume, regardless of whether the care was provided in a bricks and mortar facility or was it at the home or virtually through electronic means.”

Personally, I would put the emphasis differently:

The challenge is not just a technology problem, it’s also a business and a workflow problem

Moreover, as the technology keeps on improving, it can often diminish the arguments that are raised against its adoption.  Improvements in quality, reliability, miniaturisation, and performance all make a difference.  Improvements in usability may make the biggest difference of all, as people find the experience in using the new technology to be increasingly reassuring.

I’ll finish by noting an excerpt from the keynote at the same conference by Kathleen Sebelius, Secretary, U.S. Department of Health and Human Services:

This is an incredible time to be having this conversation. When we talk about mobile health, we are talking about taking the biggest technology breakthrough of our time and using it to take on one of the greatest … challenges of our time. And while we have a way to go, we can already imagine a remarkable future in which control over your health is always within hand’s reach…

This future is not here yet, but it is within sight. And I look forward to working with you to achieve it.

3 January 2011

Some memorable alarm bugs I have known

Filed under: Apple, Psion, usability — David Wood @ 12:24 pm

Here’s how the BBC website broke the news:

iPhone alarms hit by New Year glitch

A glitch on Apple’s iPhone has stopped its built-in alarm clock going off, leaving many people oversleeping on the first two days of the New Year.

Angry bloggers and tweeters complained that they had been late for work, and were risking missing planes and trains.

My first reaction was incredulity.  How could a first-class software engineering company like Apple get such basic functionality wrong?

I remember being carefully instructed, during my early days as a young software engineer with PDA pioneer Psion, that alarms were paramount.  Whatever else your mobile device might be doing at the time – however busy or full or distracted it might be – alarms had to go off when they became due.  Users were depending on them!

For example, even if, when the time came, the battery was too low to power the audio clip that a user had selected for an alarm, Psion’s EPOC operating system would default to a rasping sound that could be played at a lower voltage, but which was still loud enough for the user to notice.

Further, the startup sequence of a Psion device would take care to pre-allocate sufficient resources for an alarm notifier – both in the alarm server, and in the window server that would display the alarm.  There must be no risk of running out of memory and, therefore, not being able to operate the alarm.

However, as I thought more, I remembered various alarm bugs in Psion devices.

Note: I’ve probably remembered some of the following details wrong – but I think the main gist of the stories is correct.

Insisting on sounding ALL the alarms

The first was from before I started at Psion, but was a legend that was often discussed. It applied to the alarm functionality in the Psion Organiser II.

On that device, all alarms were held in a queue, and for each alarm, there was a record of whether it had been sounded.  When the device powered up, one of the first things it would do was to check that queue for the first alarm that had not been sounded.  If it was overdue, it should be sounded immediately.  Once that alarm was acknowledged by the user, the same process should be repeated – find the next alarm that had not been sounded…

But the snag in this system became clear when the user manually advanced the time on the device (for example, on changing timezone, or, more dramatically, restoring the correct time after a system restart).  If a user had set a number of alarms, the device would insist on playing them all, one by one.  The user had no escape!
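
To make the flaw concrete, here is a minimal C++ sketch of that power-up logic — the names and structure are my own illustration, not the original Organiser II code:

```cpp
#include <deque>
#include <iostream>
#include <string>

// Illustrative only: a simplified model of the Organiser II alarm
// queue; names and structure are invented for this sketch.
struct Alarm {
    long        dueTime;   // seconds since some epoch
    bool        sounded;   // has this alarm already been played?
    std::string text;
};

// Run at power-up, and again after the user changes the system time.
void soundOverdueAlarms(std::deque<Alarm>& alarms, long now) {
    for (Alarm& alarm : alarms) {
        if (!alarm.sounded && alarm.dueTime <= now) {
            // On the real device, each alarm had to be acknowledged by
            // the user before the loop moved on to the next one.
            std::cout << "ALARM: " << alarm.text << "\n";
            alarm.sounded = true;
        }
    }
    // The snag: advance the clock by a week (new timezone, or restoring
    // the time after a restart) and every alarm in that week is now
    // "overdue and unsounded" - so this loop replays them all, one by
    // one, and the user has no escape.
}
```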

Buffer overflow (part one)

The next story on my list came to a head on a date something like the 13th of September 1989.  The date is significant – it was the first Wednesday (the day with the longest name) with a two-digit day-in-month in September (the month with the longest name).  You can probably guess how this story ends.

At that time, Psion engineers were creating the MC400 laptop – a device that was in many ways ahead of its time.  (You can see some screenshots here – though none of these shots feature the MC Alarms application.  My contribution to that software, by the way, included the Text Processor application, as well as significant parts of the UI framework.)

On the day in question, several of the prototype MC400 devices stopped working.  They’d all been working fine over the previous month or so.  Eventually we spotted the pattern – they all had alarms due, but the text for the date overflowed the pre-allocated memory storage that had been set aside to compose that text as it was displayed on the screen.  Woops.
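
In spirit, this was the classic fixed-size-buffer mistake. Here is a hedged C++ reconstruction — the buffer size and function names are invented for illustration, not taken from the MC400 source:

```cpp
#include <cstdio>

// Illustrative only: a fixed-size buffer sized for "typical" dates.
// The 24-byte figure is invented; the real MC400 code differed.
char gDateText[24];

void composeAlarmDate(const char* dayName, int dayInMonth,
                      const char* monthName, int year) {
    // No bounds check: "Wednesday 13 September 1989" needs 27
    // characters plus the terminator, overflowing the buffer above.
    std::sprintf(gDateText, "%s %d %s %d",
                 dayName, dayInMonth, monthName, year);
}

// A safer version bounds the write and truncates instead of overflowing.
void composeAlarmDateSafe(const char* dayName, int dayInMonth,
                          const char* monthName, int year) {
    std::snprintf(gDateText, sizeof gDateText, "%s %d %s %d",
                  dayName, dayInMonth, monthName, year);
}
```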

“The kind of bug that other operating systems can only dream about”

Some time around 1991 I made a rash statement, which entered into Psion’s in-house listing of ill-guarded comments: “This is the kind of bug that other operating systems can only dream about”.  It was another alarms bug – this time in the Psion Series 3 software system.

It arose when the user had an Agenda file on a memory card (which were known, at the time, as SSDs – Solid State Disks), but had temporarily removed the card.  When the time came to sound an alarm from the Agenda, the alarm server requested the Agenda application to tell it when the next Agenda alarm would be due.  This required the Agenda application to read data from the memory card.  Because the file was already marked as “open”, the File Server in the operating system tried to display a low-level message on the screen – similar to the “Retry, Abort, or Cancel” message that users of MS-DOS might remember.  This required action from the Window Server, but the Window Server was temporarily locked, waiting for a reply from the Alarm Server.  The Alarm Server was in turn locked, waiting for the File Server – which, alas, was waiting (as previously mentioned) for the Window Server.  Deadlock.

Well, that’s as much as I can recall at the moment, but I do remember it being said at the time that the deadlock chain actually involved five interconnecting servers, so I may have forgotten some of the subtleties.  Either way, the result was that the entire device would freeze.  The only sign of life was that the operating system would still emit keyclicks when the user pressed keys – but the Window Server was unable to process these keys.

In practice, this bug would tend to strike unsuspecting users who had opened an SSD door at the time the alarm happened to be due – even the SSD door on the other side of the device (an SSD could be inserted on each side).  The hardware was unable to read from one SSD, even if it was still in place, if the other door happened to be open.  As you can imagine, this defect took some considerable time to track down.
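
For readers who haven’t met this failure mode before, here is a deliberately simplified, single-process C++ analogy of the circular wait — the real EPOC case involved message-passing between separate servers (and, as noted, probably five of them), not mutexes in one program:

```cpp
#include <chrono>
#include <mutex>
#include <thread>

// Illustrative only: each "server" holds its own lock, then blocks
// waiting for the next one in the cycle, so nobody ever proceeds.
std::mutex windowServer, alarmServer, fileServer;

void serve(std::mutex& held, std::mutex& wanted) {
    std::lock_guard<std::mutex> busy(held);          // handling a request
    std::this_thread::sleep_for(std::chrono::milliseconds(100));
    std::lock_guard<std::mutex> waiting(wanted);     // blocks forever
}

int main() {
    // Window Server waits on the Alarm Server; the Alarm Server waits
    // on the File Server; the File Server waits on the Window Server
    // (to display its "retry/abort" message). Circular wait: deadlock.
    std::thread t1(serve, std::ref(windowServer), std::ref(alarmServer));
    std::thread t2(serve, std::ref(alarmServer),  std::ref(fileServer));
    std::thread t3(serve, std::ref(fileServer),   std::ref(windowServer));
    t1.join(); t2.join(); t3.join();                 // never returns
}
```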

“Death city Arizona”

At roughly the same time, an even worse alarms-related bug was uncovered.  In this case, the only way out was a cold reset, that lost all data on internal memory.  The recipe to obtain the bug went roughly as follows:

  • Supplement the built-in data of cities and countries, by defining a new city, which would be your home town
  • Observe that the operating system created a file “World.wld” somewhere on internal memory, containing the details of all the cities whose details you had added or edited
  • Find a way to delete that file
  • Restart the device.

In those days of limited memory, every extra server was viewed as an overhead to be avoided if possible.  For this reason, the Alarm Server and the World Server coexisted inside a single process, sharing as many resources as possible.  The Alarm Server managed the queue of alarms, from all different applications, and the World Server looked after access to the set of information about cities and countries.  For fast access during system startup, the World Server stored some information about the current home city.  But if the full information about the home city couldn’t be retrieved (because, for example, the user had deleted the World.wld file), the server went into a tailspin, and crashed.  The lower level operating system, noticing that a critical resource had terminated, helpfully restarted it – with identical conclusions.  Result: the lower priority applications and servers never had a chance to start up.  The user was left staring at a blank screen.

Buffer overflow (part two)

The software that composed the text to appear on the screen, when an alarm sounded, used the EPOC equivalent of “print with formatting”.  For example, a “%d” in the text would be replaced by a numerical value, depending on other parameters passed to the function.  Here, the ‘%’ character has a special meaning.

But what if the text supplied by the user itself contains a ‘%’ character?  For example, the alarm text might be “Revision should be 50% complete by today”.  Well, in at least some circumstances, the software went looking for another parameter passed to it, where none existed.  As you can imagine, all sorts of unintended consequences could result – including memory overflows.
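
This is the classic format-string bug, and it is still easy to write today. Here is a minimal C++ sketch of the same class of mistake — illustrative only, not the original EPOC code:

```cpp
#include <cstdio>

void showAlarm(const char* userText) {
    // Bug: the user's text is treated as the *format string*, so the
    // '%' in "Revision should be 50% complete by today" is parsed as
    // the start of a conversion, and printf reaches for an argument
    // that was never passed - undefined behaviour, from garbage output
    // up to memory corruption.
    std::printf(userText);

    // Fix: always pass user-supplied text as data, never as a format.
    std::printf("%s", userText);
}
```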

Alarms not sounding!

Thankfully, the bugs above were all caught by in-house testing, before the device in question was released to customers.  We had a strong culture of fierce internal testing.  The last one, however, did make it into the outside world.  It impacted users who had the temerity to do the following:

  • Enter a new alarm in their Agenda
  • Switch the device off, before it had sufficient time to complete all its processing of which alarm would be the next to sound.

This problem hit users who accumulated a lot of data in their Agenda files.  In such cases, the operating system could take a non-negligible amount of time to reliably figure out what the next alarm would be.  So the user had a chance to power down the device before it had completed this calculation.  Given the EPOC focus on keeping the device in a low-power state as much as possible, the “Off” instruction was heeded quickly – too quickly in this case.  If the device had nothing else to do before that alarm was due, and if the user didn’t switch on the device for some other reason in the meantime, it wouldn’t get the chance to work out that it should be sounding that alarm.

Final thoughts re iPhone alarms

Psion put a great deal of thought into alarms:

  • How to implement them efficiently
  • How to ensure that users never missed alarms
  • How to provide the user with a great alarm experience.

For example, when an alarm becomes due on a Psion device, the sound starts quietly, and gradually gets louder.  If the user fails to acknowledge the alarm, the entire sequence repeats, after about one minute, then after about three minutes, and so on.  When the user does acknowledge the alarm, they have the option to stop it, silence it, or snooze it.  Pressing the snooze button adds another five minutes to the time before the alarm will sound again.  Pressing it three times, therefore, adds 15 minutes, and so on.  (And as a touch of grace: if you press the snooze button enough times, it emits a short click, and resets the time delay to five minutes – useful for sleepyheads who are too tired to take a proper look at the device, but who have enough of a desire to monitor the length of the snooze!)
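
That policy is easy to express in code. Here is a small C++ reconstruction based purely on the description above — the repeat progression beyond three minutes, and the point at which the snooze delay wraps back to five minutes, are my assumptions rather than figures from Psion source:

```cpp
// Illustrative reconstruction of the repeat-and-snooze policy described
// above; the repeat progression beyond three minutes, and the point at
// which the snooze delay wraps around, are my own assumptions.
constexpr int kSnoozeStepMinutes = 5;    // each snooze press adds five minutes
constexpr int kSnoozeWrapMinutes = 60;   // assumed wrap-around threshold

struct PendingAlarm {
    int repeatCount   = 0;   // unacknowledged repeats so far
    int snoozeMinutes = 0;   // accumulated snooze delay
};

// Unacknowledged alarms repeat after roughly one minute, then three
// minutes, with the gap growing each time in this sketch.
int minutesUntilNextRepeat(const PendingAlarm& a) {
    return 1 + 2 * a.repeatCount;        // 1, 3, 5, ... minutes
}

// Each press adds five minutes; press often enough and the delay clicks
// back round to a single five-minute step (the "touch of grace").
void pressSnooze(PendingAlarm& a) {
    a.snoozeMinutes += kSnoozeStepMinutes;
    if (a.snoozeMinutes > kSnoozeWrapMinutes) {
        a.snoozeMinutes = kSnoozeStepMinutes;
    }
}
```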

So it’s surprising to me that Apple, with its famous focus on user experience, seems to have given comparatively little thought to the alarms on that device.  When my wife started using an iPhone in the middle of last year, she found much in it to enchant her – but the alarms were far from delightful.  It seems that the default alarms sound only once, with a rather pathetic little noise which is easy to miss.  And when we looked, we couldn’t find options to change this behaviour.  I guess the iPhone team has other things on its mind!

13 September 2010

Accelerating Nokia’s renewal

Filed under: leadership, Nokia, openness, software management, time to market, urgency, usability — David Wood @ 8:29 pm

“The time is right to accelerate the company’s renewal” – Jorma Ollila, Chairman of the Nokia Board of Directors, 10 Sept 2010

I’ve been a keen Nokia watcher since late 1996, when a group of senior managers from Nokia visited Psion’s offices in Sentinel House, near Edgware Road station in London.  These managers were looking for a mobile operating system to power new generations of devices that would in time come to be called smartphones.

From my observations, I fairly soon realised that Nokia had world-class operating practice.  At the time, they were “one of the big three” – along with Motorola and Ericsson.  These three companies had roughly the same mobile phone market share – sometimes with one doing a little better, sometimes with another doing a little better.  But the practices I was able to watch at close quarters, over more than a decade, drove Nokia’s position ever higher.  People stopped talking about “the big three” and recognised Nokia as being in a league of its own.

In recent times, of course, this market dominance has taken a major hit.  Unit sales volumes remain high, but the proportion of mobile industry profits won by Nokia has declined significantly, in the face of new competition.  It’s no surprise that Nokia’s Chairman, Jorma Ollila, has declared the need to accelerate the company’s renewal.

Following the dramatic appointment earlier this week of a new CEO, Stephen Elop, I’ve already been asked on many occasions what advice I would offer the new CEO.  Here’s what I would say:

1. Ensure faster software execution – by improving software process quality

Delays in Nokia’s releases – both platform releases and product releases – mean that market windows are missed.  Nokia’s lengthy release lifecycles compare poorly to what more nimble competitors are achieving.

Paradoxically, the way to achieve faster release cycles is not to focus on faster release cycles.  The best way to ensure customer satisfaction and predictable delivery is, counter-intuitively, to focus more on software quality, interim customer feedback, agile project management, self-motivated teams, and general principles of excellence in software development, than on schedule management itself.

It’s in line with what software process expert Steve McConnell says,

  • IBM discovered 20 years ago that projects that focused on attaining the shortest schedules had high frequencies of cost and schedule overruns;
  • Projects that focused on achieving high quality had the best schedules and the highest productivities.

The experience of Symbian Software Ltd over many years bears out the same conclusion. The more we in Symbian Ltd focused on achieving high quality, the better we became with both schedule management and internal developer productivity.

Aside: see this previous blogpost for the argument that

In a company whose culture puts a strong emphasis upon fulfilling commitments and never missing deadlines, the agreed schedules are often built from estimations up to twice as long as the individually most likely outcome, and even so, they often miss even these extended deadlines…

2. Recognise that quality trumps quantity

Large product development teams risk falling foul of Brooks’s Law: Adding manpower to a late software project makes it later.  In other words, too many cooks spoil the broth.  Each new person, or each new team, introduces new relationships that need to be navigated and managed.  More and more effort ends up in communications and bureaucracy, rather than in “real work”.

Large product development teams can also suffer from a diminution of individual quality.  This is summed up in the saying,

A-grade people hire A-grade people to work for them, but B-grade people hire C-grade people to work for them.

Related to this, in large organisations, is the Peter Principle:

In a hierarchy every employee tends to rise to their level of incompetence.

Former Nokia executive Juhani Risku recently gave a lengthy interview to The Register.  Andrew Orlowski noted the following:

One phrase repeatedly came up in our conversation: The Peter Principle. This is the rule by which people are promoted to their own level of incompetence. Many, but not all of Nokia’s executives have attained this goal, claims Risku.

One thing that does seem to be true is that Nokia’s product development teams are larger than comparable teams in other companies.  Nokia’s new CEO needs to ensure that the organisation is simplified and made more effective.  However, in the process, he should seek to retain the true world-class performers and teams in the company he is inheriting.  This will require wise discrimination – and an inspired choice of trusted advisors.

3. Identify and enable people with powerful product vision

A mediocre product delivered quickly is better than a mediocre product delivered late.  But even better is when the development process results in a product with great user appeal.

The principle of “less is more” applies here.  A product that delivers 50% of the functionality, superbly implemented, is likely to outsell a product that has 100% of the functionality but a whole cluster of usability issues.  (And the former product will almost certainly generate better public reaction.)

That’s why a relentless focus on product design is needed.  Companies like RIM and Apple have powerful product designers who are able to articulate and then boldly defend their conceptions for appealing new products – all the way through to these products reaching the market.  Although these great designers are sensitive to feedback from users, they don’t allow their core product vision to be diluted by numerous “nice ideas” that complicate the execution of the core tasks.

Nokia’s new CEO needs to identify individuals (from either inside or outside the existing organisation) who can carry out this task for Nokia’s new products.  Then he needs to enable these individuals to succeed.

For a compelling account of how Jeff Hawkins acted with this kind of single-minded focus on a “simply great product” at Palm, I recommend the book “Piloting Palm: The Inside Story of Palm, Handspring and the Birth of the Billion Dollar Handheld Industry” by Andrea Butter and David Pogue.

4. Build the absorptive capacity that will allow Nokia to benefit from openness

Nokia has often talked about Open Innovation, and has made strong bets in favour of open source.  However, it appears that it has gained comparatively little from these bets so far.

In order to benefit more fully from contributions from external developers, Nokia needs to build additional absorptive capacity into its engineering teams and processes.  Otherwise, there’s little point in continuing down the route of “openness”.  However, with the absorptive capacity in place, the underlying platforms used by Nokia should start accelerating their development – benefiting the entire community (including Nokia).

For more on some of the skills needed, see my article Open Source: necessary but not sufficient.

5. Avoid rash decisions – first, find out what is really happening

I would advise Nokia’s new CEO to urgently bring in expert software process consultants, to conduct an audit of both the strengths and the weaknesses of Nokia’s practices in software development.

To determine which teams really are performing well, and which are performing poorly, it’s not sufficient to rely on any general principle or hearsay.  Instead, I recommend the Lean principle of Genba, Genbutsu, Genjitsu:

Genba means the actual place
Genbutsu means the real thing, the actual thing
Genjitsu means the actual situation

Or, colloquially translated:

Go and see
Get the facts
Grasp the situation

6. Address the Knowing-Doing Gap

The advice I offer above is far from being alien to Nokia.  I am sure there are scores of senior managers inside Nokia who already know and appreciate the above principles.  The deeper problem is one of a “knowing doing gap”.

I’ve written on this topic before.  For now, I’ll just state the conclusion:

The following set of five characteristics distinguish companies that can successfully bridge the knowing-doing gap:

  1. They have leaders with a profound hands-on knowledge of the work domain;
  2. They have a bias for plain language and simple concepts;
  3. They encourage solutions rather than inaction, by framing questions asking “how”, not just “why”;
  4. They have strong mechanisms that close the loop – ensuring that actions are completed (rather than being forgotten, or excuses being accepted);
  5. They are not afraid to “learn by doing”, and thereby avoid analysis paralysis.

Happily for Nokia, Stephen Elop’s background seems to indicate that he will score well on these criteria.

9 February 2010

Improving mobile phone usability

Filed under: Barcelona, usability — David Wood @ 9:42 am

One fundamental step to unlocking the full transformational potential of smart mobile technology is to significantly improve the usability of multi-function devices.  As additional features have been added into mobile phones, there’s been a natural tendency for each new feature to detract from the overall ease of use of the device:

  • It’s harder for users to locate the exact function that they wish to use at any given time;
  • It’s harder for users to understand the full set of functions that are available for them to use.

This has led to feelings of frustration and disenchantment.  Devices are full of powerful functionality that is under-used and under-appreciated.

Recognising this problem, companies throughout the mobile industry are exploring approaches to improving the usability of multi-function devices.

One common idea is to try to arrange all the functionality into a clear logical hierarchy.  But as the number of available functions grows and grows, the result is something that is harder and harder to use, no matter how thoughtfully the functions are arranged.

A second common idea is to allow users to select the applications that they personally use the most often, and to put shortcuts to these applications onto the homescreen (start screen) of the phone.  That’s a step forwards, but there are drawbacks with this as well:

  1. The functionality that users want to access is more fine-grained than simply picking an application.  Instead, a user will often have a specific task in mind, such as “phone Mum” or “email Susie” or “check what movies are showing this evening”;
  2. The functionality that users want to access the most often varies depending on the context the user is in – for example, the time of day, or the user’s location;
  3. The UI for creating these shortcuts can be time-consuming or intimidating.

In this context, I’ve recently been looking at some technology developed by the startup company Intuitive User Interfaces.  The founders of Intuitive previously held key roles with the company ART (Advanced Recognition Technologies) which was subsequently acquired by Nuance Communications.

Intuitive highlight the following vision:

Imagine a phone that knows what you need, when you need it, one touch away.

Briefly, the technology works as follows:

  1. An underlying engine observes which tasks the user performs frequently, and in which circumstances;
  2. These tasks are made available to the user via a simple top-level one-touch selection screen;
  3. The set of tasks on this screen varies depending on the user’s context.
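
As a purely illustrative sketch (my own, not Intuitive’s implementation – every name and data value below is invented), the heart of such an engine could be as simple as counting how often each task is performed in each context, and surfacing the most frequent ones:

    from collections import Counter, defaultdict

    class TaskSuggestionEngine:
        """Toy context-aware task ranker: count how often each task is
        performed in each context, and suggest the most frequent ones."""

        def __init__(self, top_n=4):
            self.top_n = top_n
            self.counts = defaultdict(Counter)   # context -> task -> frequency

        def observe(self, context, task):
            """Record that the user performed 'task' while in 'context'."""
            self.counts[context][task] += 1

        def suggest(self, context):
            """Return the top-N tasks for this context, for a one-touch screen."""
            return [task for task, _ in self.counts[context].most_common(self.top_n)]

    engine = TaskSuggestionEngine()
    engine.observe(("weekday", "morning"), "email Susie")
    engine.observe(("weekday", "morning"), "check news")
    engine.observe(("weekday", "morning"), "email Susie")
    engine.observe(("weekday", "evening"), "phone Mum")
    print(engine.suggest(("weekday", "morning")))   # ['email Susie', 'check news']

A real engine would of course need richer context features (time, location, calendar and so on) and a way of generalising to contexts it hasn’t seen before – that is where the real machine learning comes in.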

Intuitive will be showing their system, running on an Android phone, at the Mobile World Congress at Barcelona next week.  Ports to other platforms are in the works.

Of course, software that tries to anticipate a user’s actions has sometimes proved annoying rather than helpful.  Microsoft’s “paperclip” Office Assistant became particularly notorious:

  • It was included in versions of Microsoft Office from 1997 to 2003 – with the intention of providing advice to users when it deduced that they were trying to carry out a particular task;
  • It was widely criticised for being intrusive and unhelpful;
  • It was excluded from later versions;
  • Smithsonian magazine in 2007 called this paperclip agent “one of the worst software design blunders in the annals of computing”.

It’s down to the quality of the underlying engine whether the context-dependent suggestions provided to the user are seen as helpful or annoying.  Intuitive describe the engine in their product as “using sophisticated machine learning algorithms” in order to create “a statistically driven model”.  Users’ reactions to suggestions also depend on the UI of the suggestion system.

Personally, I’m sufficiently interested in this technology to have joined Intuitive’s Advisory Board.  If anyone would like to explore this technology further, in meetings at Barcelona, please get in touch!

For other news about Intuitive User Interfaces, please see their website.

13 January 2010

AI: why, and when

Filed under: AGI, usability — David Wood @ 4:26 pm

Here’s a good question, raised by Paul Beardow:

One question that always rattles around in my mind is “why are we trying to recreate the human mind anyway?” We have billions of those already…

You can build something that appears to be human, but what is the point of that? Why chase a goal that doesn’t actually provide us with more than we have already?

Paul also says,

What I don’t want is AI in products so that they have their own personality, but a better understanding of my own wishes and desires in how that product should interact with me…

I personally also really don’t think that logic by itself can lead to a system that can evolve human-like imagination, feelings or personality, nor that the human mind can be reduced to being a machine. It has elementary parts, but the constant rebuilding and evolving of information doesn’t really follow any logical rules that can be programmed. The structure of the brain depends on what happens to us during the day and how we interpret it according to the situation. That defies logic most of the time and is constantly evolving and changing.

My answer: there are at least six reasons why people are pursuing the goal of human-like AI.

1. Financial savings in automated systems

We’re already used to encountering automated service systems when using the phone (eg to book a cinema ticket: “I think you’re calling about Kingston upon Thames – say Yes or No”) or when navigating a web form or other user interface.  These systems provoke a mixture of feelings in the people who use them.  I often become frustrated, thinking it would be faster to speak directly to a “real human being”.  But on other occasions the automation works surprisingly well.

Widening the applicability of such systems to more open-ended environments will require engineering much more human-style “common sense” into them.  The research to accomplish this may cost a lot of money, but once it’s working, it could enable considerable cost savings in service provision, as real human beings can be replaced in parts of the system by smart pieces of silicon.
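
The basic shape of these systems can be captured in a few lines.  Here’s a minimal sketch (entirely my own – the recogniser, the confidence threshold and the prompts are all invented for illustration): guess the caller’s intent, confirm it, and hand over to a human agent whenever confidence is low:

    def recognise_intent(utterance):
        # Stand-in for real speech and intent recognition; returns (intent, confidence).
        if "kingston" in utterance.lower():
            return "cinema tickets in Kingston upon Thames", 0.82
        return "unknown", 0.20

    def handle_call(utterance, confirm, transfer_to_agent):
        """Toy automated-service flow: confirm a guessed intent, or fall back to a human."""
        intent, confidence = recognise_intent(utterance)
        if confidence < 0.6:                        # arbitrary threshold, for illustration
            return transfer_to_agent(utterance)
        if confirm(f"I think you're calling about {intent} - say Yes or No"):
            return f"Proceeding with: {intent}"
        return transfer_to_agent(utterance)

    # Minimal demo with canned caller responses
    print(handle_call("Two tickets in Kingston please",
                      confirm=lambda prompt: True,
                      transfer_to_agent=lambda u: "Transferring you to an agent"))

The “common sense” challenge is precisely that the recognise_intent step, trivial here, has to cope with open-ended and ambiguous requests before the human fallback can be dropped.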

2. Improving game play

A related motivation is as follows: games designers want to program human-level intelligence into the characters in their games, so that these artificial entities manifest many of the characteristics of real human participants.

By the way: electronic games are big money!  As the description of tonight’s RSA meeting “Why games are the 21st century’s most serious business” puts it:

Why should we be taking video games more seriously?

  • In 2008 Nintendo overtook Google to become the world’s most profitable company per employee.
  • The South Korean government will invest $200 billion into its video games industry over the next 4 years.
  • The trading of virtual goods within games is a global industry worth over $10 billion a year.
  • Gaming boasts the world’s fastest-growing advertising market.

3. Improved user experience with complex applications

As well as reducing cost, human-level AI can in principle improve the experience of users while interacting with complex applications.

Rather than users thinking, “No you stupid machine, why don’t you realise what I’m trying to do…”, they will be pleasantly surprised: “Ah yes, that was in fact what I was trying to accomplish – how did you manage to figure that out?”

It’s as Paul says:

What I … want … in products [is]… a better understanding of my own wishes and desires in how that product should interact with me

These are products with (let’s say it) much more “intelligence” than at present.  They observe what is happening, and can infer motivation.  I call this AI.

4. A test of scientific models of the human mind

A different kind of motivation for studying human-level AI is to find ways of testing our understanding of the human mind.

For example, I think that creativity can be achieved by machines, following logical rules.  (The basic rules are: generate lots of ideas, by whatever means, and then choose the ideas which have interesting consequences.)  But it is good to test this.  So, computers can be programmed to mimic the possible thought patterns of great composers, and we can decide whether the output is sufficiently “creative”.
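
Here’s a toy illustration of that generate-and-test recipe (entirely my own sketch, with an arbitrary scoring rule – no claim that real composition systems work exactly this way):

    import random

    NOTES = ["C", "D", "E", "F", "G", "A", "B"]

    def generate_melody(length=8):
        """Step 1: generate ideas by whatever means - here, purely at random."""
        return [random.choice(NOTES) for _ in range(length)]

    def interestingness(melody):
        """Step 2: score ideas by their consequences - here, a crude proxy that
        rewards variety but penalises large jumps between consecutive notes."""
        variety = len(set(melody))
        jumps = sum(abs(NOTES.index(a) - NOTES.index(b)) > 4
                    for a, b in zip(melody, melody[1:]))
        return variety - 2 * jumps

    def compose(candidates=1000):
        """Generate many candidates and keep the one that scores best."""
        return max((generate_melody() for _ in range(candidates)), key=interestingness)

    print(compose())

The hard part, of course, is choosing a good “interestingness” measure; the generation step is the easy bit.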

(There’s already quite a lot of research into this.  For one starting point, see the EE Times article “Composer harnesses artificial intelligence to create music”.)

Similarly, it will be fascinating to hear the views of human-level AIs about (for example) the “Top 5 Unsolved Brain Mysteries”.

5. To find answers to really tough, important questions

The next motivation concerns the desire to create AIs with considerably greater than human-level intelligence.  Assuming that human-level AI is a point en route to that next destination, it’s therefore an indirect motivation for creating human-level AI.

The motivation here is to ask superAIs for help with really tough, important questions, such as:

  • What are the causes – and the cures – for different diseases?
  • Are there safe geoengineering methods that will head off the threat of global warming, without nasty side effects?
  • What changes, if any, should be made to the systems of regulating the international economy, to prevent dreadful market failures?
  • What uses of nanotechnology can be recommended, to safely boost the creation of healthy food?
  • What is the resolution of the conflict between theories of gravity and theories of all the other elementary forces?

6. To find ways of extending human life and expanding human experience

If the above answers aren’t sufficient, here’s one more, which attracts at least some researchers to the topic.

If some theories of AI are true, it might be possible to copy human awareness and consciousness from residence in a biological brain into residence inside silicon (or some other new computing substrate).  If so, then it may open new options for continued human consciousness, without having to depend on the frailty of a decaying human body.

This may appear a very slender basis for hope of a significantly longer human lifespan, but it can be argued that all the other bases for such hope are equally slender, if not even less plausible.

OK, that’s enough answers for “why”.  But what about the question “when”?

In closing, let me quickly respond to a comment by Martin Budden:

I’m not saying that I don’t believe that there will be advances in AI. On the contrary I believe, in the course of time, there will be real and significant advances in “general AI”. I just don’t believe that these advances will be made in the next decade.

What I’d like, at this point, is to be able to indicate some kind of provisional roadmap (also known as a “work breakdown”) for when the stepping stones of progress towards AGI might happen.

Without such a roadmap, it’s too difficult to decide when larger steps of progress are likely.  It’s just a matter of different people appearing to have different intuitions.

To be clear, discussions of Moore’s Law aren’t sufficient to answer this question.  Progress with the raw power of hardware is one thing, but what we need here is an estimate of progress with software.
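
To put rough numbers on the hardware side (using the often-quoted doubling periods of 18 to 24 months, which are approximations rather than exact figures):

    # Rough compounding of hardware capability over a decade under Moore's Law
    for months in (18, 24):
        factor = 2 ** (120 / months)
        print(f"{months}-month doubling: roughly {factor:.0f}x over ten years")
    # Prints roughly 102x and 32x respectively; there is no comparable
    # empirical law for forecasting progress in AI software.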

Sadly, I’m not aware of any such breakdown.  If anyone knows one, please speak up!

Footnote: I guess the best place to find such a roadmap will be at the forthcoming “Third Conference on Artificial General Intelligence” being held in Lugano, Switzerland, on 5-8 March this year.

3 September 2008

Restrictions on the suitability of open source?

Filed under: Open Source, security, usability — David Wood @ 8:56 am

Are there restrictions on the suitability of open source methods? Are there kinds of software for which closed source development methods are inherently preferable and inherently more likely to succeed?

These questions are part of a recent discussion triggered by the posting “Different ways and paradigms” from Nokia’s Ari Jaaksi, which looked for reasons why various open source software development methods might be applicable to some kinds of project, but not to others. As Ari asks,

“Why would somebody choose a specific box [set of methods] for their products?”

One respondent suggested that software with high security and high quality criteria should be developed using closed source methods rather than using open source.

Another stated that,

I firmly believe ‘closed’ source is best route for targeting consumers and gaining mass appeal/ acceptance.

That brings me back to the question I started with. Are there features of product development – perhaps involving security and robustness, or perhaps involving the kinds of usability that are important to mainstream consumers – to which open source methods aren’t suited?

Before answering that, I have a quick aside. I don’t believe that open source is ever a kind of magic dust that can transform a failing project into a successful project. Adopting open source, by itself, is never a guarantee of success. As Karl Fogel says in the very first sentence of Chapter 1 in his very fine book “Producing open source software: how to run a successful free software project”,

“Most free projects fail.”

Instead, you need to have other project fundamentals right, before open source is likely to work for you. (And as an aside to an aside, I believe that several of the current attempts to create mobile phone software systems using open source methods will fail.)

But the situation I’m talking about is when other project fundamentals are right. In that case, my question becomes:

Are there types of software for which an open source approach will be at odds with the other software disciplines and skills (eg security, robustness, usability…) that are required for success in that arena?

In one way, the answer is trivial. The example of Firefox resolves the debate (at least for some parameters). Firefox shows that open source methods can produce software that scores well on security, robustness, and usability.

But might Firefox be a kind of unusual exception – or (as one of the anonymous respondents to Ari Jaaksi’s blog put it) “an outlier?” Alternatively – as I myself believe – is Firefox an example of a new trend, rather than an irrelevant outlier to a more persistent trend?

Regarding usability, it’s undeniable that open source software methods grew up in environments in which developers didn’t put a high priority on ease-of-use by consumers. These developers were generally writing software for techies and other developers. So lots of open source software has indeed scored relatively poorly, historically, on usability.

But history needn’t determine the future. I’m impressed by the analysis in the fine paper “Usability and Open Source Software” by David M. Nichols and Michael B. Twidale. Here’s the abstract:

Open source communities have successfully developed many pieces of software although most computer users only use proprietary applications. The usability of open source software is often regarded as one reason for this limited distribution. In this paper we review the existing evidence of the usability of open source software and discuss how the characteristics of open-source development influence usability. We describe how existing human-computer interaction techniques can be used to leverage distributed networked communities, of developers and users, to address issues of usability.

Another very interesting paper, in a similar vein, is “Why Free Software has poor usability, and how to improve it” by Matthew Paul Thomas. This paper lists no fewer than 15 features of open source culture which tend to adversely impact the usability of software created by that culture:

  1. Weak incentives for usability
  2. Few good designers
  3. Design suggestions often aren’t invited or welcomed
  4. Usability is hard to measure
  5. Coding before design
  6. Too many cooks
  7. Chasing tail-lights
  8. Scratching their own itch
  9. Leaving little things broken
  10. Placating people with options
  11. Fifteen pixels of fame
  12. Design is high-bandwidth, the Net is low-bandwidth
  13. Release early, release often, get stuck
  14. Mediocrity through modularity
  15. Gated development communities.

As Thomas says, “That’s a long list of problems, but I think they’re all solvable”. I agree. The solutions he gives in his article are good starting points (and are already being adopted in some projects). In any case, many of the same problems impact closed-source development too.

In short, once usability issues are sufficiently understood by a group of developers (whether they are adopting open source or closed source methods), there’s no inherent reason why the software they create has to embody poor usability.

So much for usability. How about security? Here the situation may be a little more complex. The online book chapter “Is Open Source Good for Security?” by David Wheeler is one good starting point. Here’s the final sentence in that chapter:

…the effect on security of open source software is still a major debate in the security community, though a large number of prominent experts believe that it has great potential to be more secure

The complication is that, if you start out with software that is closed source, and then make it open source, you might get the worst of both worlds. Incidentally, that’s one reason why the source code in the Symbian Platform isn’t being open-sourced in its entirety, overnight, on the formation (subject to regulatory approval) of the Symbian Foundation. It will take some time (and the exercise of a lot of deep skill) before we can be sure we’re going to get the best of both worlds, rather than the worst of both worlds.
