dw2

30 June 2015

Securing software updates

Software frequently goes wrong. That’s a fact of life whose importance is growing – becoming, so to speak, a larger fact of life. That’s for three reasons:

  1. Complex software is spreading more widely into items where, previously, it was present (if at all) only in simpler form. This includes clothing (“wearable computing”), healthcare accessories, “connected home” consumer goods, automobiles (“connected vehicles”), and numerous “Internet of Things” sensors and actuators. More software means a greater likelihood of software error – and a greater likelihood of being hacked (compromised).
  2. Software in these items is increasingly networked together, so that defects in one piece of software can have effects that ricochet unexpectedly. For example, a hacked thermostat can end up reporting industrial secrets to eavesdroppers on the other side of the planet.
  3. By design, modern-day software is frequently open – meaning that its functionality can be configured and extended by other pieces of software that plug into it. Openness provides the possibility for positive innovation, in the way that apps enhance smartphones, or new themes enhance a webpage design. But that same openness enables negative innovation, in which plug-ins subvert the core product. This type of problem arises due to flaws in the set of permissions that expose software functionality from one module to another.

All three of these factors – the intrinsic defects in software, defects in its network connectivity, and defects in permission systems – can be exploited by writers of malware. Worryingly, there’s a mushrooming cybercrime industry that creates, modifies, and deploys increasingly sophisticated malware. There can be rich pickings in this industry. The denizens of Cybercrime Inc. can turn the principles of software and automation to their advantage, resulting in mass-scale deployment of their latest schemes for deception, intrusion, subterfuge, and extortion.

I recently raised these issues in my article “Eating the world: the growing importance of software security”. In that article, I predicted an imminent sea-change in the attitude which users tend to display towards the possibility of software security vulnerabilities. The attitude will change from complacency into purposeful alarm. Companies which are slow to respond to this change in attitude will find their products discarded by users – regardless of how many “cool” features they contain. Security is going to trump functionality, in a way it hasn’t done previously.

One company that has long been aware of this trend is Redbend (which was acquired by HARMAN in summer 2015). They’ve been thinking hard for more than a dozen years about the dynamics of OTA (over the air, i.e. wireless) software updates. Software updates are as much of a fact of life as software bugs – in fact, more so. Updates deliver fixes to bugs in previous versions; they also roll out new functionality. A good architecture for efficient, robust, secure software updates is, therefore, a key differentiator:

  • The efficiency of an update means that it happens quickly, with minimal data costs, and minimal time inconvenience to users
  • The robustness of an update means that, even if the update were to be interrupted partway through, the device will remain in a usable state
  • The security of an update means that it will reliably deliver software that is valid and authentic, rather than some “Trojan horse” malware masquerading as bona-fide.
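These three properties can be made concrete with a small sketch. The code below is a toy illustration – the file layout and function names are my own invention, not Redbend's design – of the core trick behind robust updates: stage the new image in an inactive slot, verify its digest before use, then switch over with a single atomic rename, so an interruption at any point leaves the device bootable from the old image.

```python
import hashlib
import json
import os

# Toy "A/B slot" update: hypothetical layout, for illustration only.

def apply_update(root, new_image: bytes, expected_sha256: str):
    # Security: refuse any image whose digest doesn't match the value
    # obtained over an authenticated channel (e.g. a signed manifest).
    if hashlib.sha256(new_image).hexdigest() != expected_sha256:
        raise ValueError("update rejected: digest mismatch")

    # Robustness: stage the image in the inactive slot first.
    inactive = os.path.join(root, "slot_b.img")
    with open(inactive, "wb") as f:
        f.write(new_image)
        f.flush()
        os.fsync(f.fileno())  # ensure the bytes reach storage

    # Atomically switch the "active slot" pointer. os.replace is atomic
    # on POSIX, so an interruption leaves the old pointer intact.
    pointer = os.path.join(root, "active_slot")
    tmp = pointer + ".tmp"
    with open(tmp, "w") as f:
        json.dump({"active": "slot_b.img"}, f)
        f.flush()
        os.fsync(f.fileno())
    os.replace(tmp, pointer)
```

In a real device the expected digest would arrive in a cryptographically signed manifest, and the "slots" would be flash partitions rather than files, but the staged-then-switch pattern is the same.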

According to my email archives, my first meeting with representatives of Redbend was as long ago as December 2002. At that time, I was Executive VP at Symbian with responsibility for Partnering. Since Redbend was one of the new “Platinum Partners” of Symbian, I took the time to learn more about their capabilities.

One person I met in these initial meetings was Gil Cordova, at that time Director of Strategic Marketing at Redbend. Gil wrote to me afterwards, confirming our common view as to what lay ahead in the future:

Redbend deals with an enabling technology and solution for OTA updating of mobile devices.

Our solution enables device manufacturers and operators to update any part of the device software including OS, middleware systems and applications.

The solution is based on our patented technology for creating delta-updates which minimize the update package size ensuring it can be cost-effectively sent and stored on the device with little bandwidth and memory consumption. In addition we enable the update to occur within the device memory constraints ensuring no cost-prohibitive memory needs to be added…

OTA updates can help answer the needs of remote software repair and fixing to the device software, as well as streamline logistics when deploying devices…
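The delta-update approach Gil describes can be illustrated with a deliberately naive sketch – a toy block diff of my own devising, not Redbend's patented algorithm. Only the blocks that differ between the old and new firmware images travel over the air, which is why the update package can be a small fraction of the full image:

```python
# Toy block-level delta update; illustration only.
BLOCK = 4  # an unrealistically tiny block size, to keep the example visible

def make_delta(old: bytes, new: bytes):
    """Record only the blocks of `new` that differ from `old`."""
    delta = {"length": len(new), "blocks": {}}
    for i in range(0, len(new), BLOCK):
        chunk = new[i:i + BLOCK]
        if old[i:i + BLOCK] != chunk:      # ship only changed blocks
            delta["blocks"][i] = chunk
    return delta

def apply_delta(old: bytes, delta) -> bytes:
    """Rebuild the new image on-device from the old image plus the delta."""
    image = bytearray(old[:delta["length"]].ljust(delta["length"], b"\0"))
    for offset, chunk in delta["blocks"].items():
        image[offset:offset + len(chunk)] = chunk
    return bytes(image)
```

Production delta generators (bsdiff-style tools and their successors) are far cleverer – they cope with code that shifts position between versions – but the payoff Gil describes is the same: less bandwidth and less on-device memory consumed.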

At that time, some dozen years ago, the idea that mobile phones would have more and more software in them was still relatively new – and was far from being widely accepted as a good thing. But Redbend and Symbian foresaw the consequences, as in the final paragraph of Gil’s email to me:

All the above points to the fact that if software is a new paradigm in the industry then OTA updating is a very crucial and strategic issue that must be taken into account.

OTA has, indeed, been an important issue since that time. But it’s my view that the full significance is only now becoming apparent. As security is poised to “eat the world”, efficient and reliable OTA capabilities will grow yet further in importance. It will be something that more and more companies will need to include at the heart of their own product offerings. The world will insist on it.

A few days ago, I took a closer look at recent news from HARMAN connected services – in particular at its architecture for cybersecurity. I saw a great deal that I liked:

Secure Car

  • Domain isolation – to provide a strict separation between different subsystems (e.g. parts of the overall software system on a car), with the subsystems potentially running different operating systems
  • Type-1 hypervisor – to isolate different subsystems from hardware resources, except when such access is explicitly designed
  • Driver virtualization – to allow additional peripherals (such as Wi-Fi, cameras, Bluetooth, and GPS) to be added quickly into an existing device with the same secure architecture
  • Software update systems – to enable separate remote software management for the head (dashboard) unit, telematics (black-box) unit, and numerous ECUs (electronic control units) – with a 100% success record in deploying updates on more than one million vehicles
  • State of the art FIPS (Federal Information Processing Standard) encryption – applied to the entirety of the update process
  • Intrusion Detection and Prevention systems – to identify and report any malicious or erroneous network activity, and to handle the risks arising before the car or any of its components suffers any ill-effect.
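As a small illustration of what applying cryptography "to the entirety of the update process" can mean in practice, the sketch below – hypothetical names and message layout, using FIPS-approved primitives from the Python standard library – tags each stage of an update pipeline with an HMAC-SHA-256 value bound to both the payload and the stage, so that tampering, or replaying a valid payload at the wrong step, is detected:

```python
import hashlib
import hmac

# Toy per-stage authentication of an update pipeline; illustration only.

def tag(key: bytes, stage: str, payload: bytes) -> str:
    # Bind the tag to the pipeline stage as well as the bytes, so a
    # valid payload can't be replayed at a different step.
    return hmac.new(key, stage.encode() + b"\0" + payload,
                    hashlib.sha256).hexdigest()

def verify(key: bytes, stage: str, payload: bytes, received_tag: str) -> bool:
    # compare_digest avoids leaking information through timing.
    return hmac.compare_digest(tag(key, stage, payload), received_tag)
```

A real deployment would use per-device keys and signed manifests rather than a single shared secret, but the principle – every artefact in the pipeline carries its own verifiable tag – is the one the architecture above describes.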

I know from my own background in designing software systems that this kind of all-points-considered security cannot be tacked onto an existing system. Provision for it needs to be designed in from the beginning. That’s where Redbend’s long heritage in this space shows its value.

The full benefit of taking an architectural approach to secure software updates – as opposed to trying to fashion security on top of fundamentally insecure components – is that the same architecture is capable of re-use in different domains. It’s therefore no surprise that Redbend software management solutions are available, not only for connected cars, but also for wearable computers, connected homes, and machine-to-machine (M2M) devices.

Of course, despite all these precautions, I expect the security arms race to continue. Software will continue to have bugs, and the cybercrime industry will continue to find ingenious ways to exploit these bugs. The weakest part of any security system, indeed, is frequently the humans involved, who can fall victim to social engineering. In turn, providers of security software are seeking to improve the usability of their systems, to reduce both the likelihood and the impact of human operator error.

This race probably has many laps to run, with new surprises ahead on each lap. To keep ahead, we need allies and partners who constantly look ahead, straining to discern the forthcoming new battlegrounds, and to prepare new defences in sufficient time. But we also need to avail ourselves of the best present tools, so that our businesses have the best chance of avoiding being eaten in the meantime. Figuring out which security tools really are best in class is fast becoming a vital core competency for people in ever-growing numbers of industries.

Footnote: I was inspired to write this post after discussions with some industry colleagues involved in HARMAN’s Engineering a Connected Life program. The views and opinions expressed in this post are my own and don’t necessarily represent HARMAN’s positions, strategies or opinions.

11 June 2015

Eating the world – the growing importance of software security

Security is eating the world

In August 2011, Marc Andreessen famously remarked that “software is eating the world”. Writing in the Wall Street Journal, Andreessen set out his view that society was “in the middle of a dramatic and broad technological and economic shift in which software companies are poised to take over large swathes of the economy”.

With his background as pioneering web software architect at Netscape, and with a string of successful investments under his belt at venture capital firm Andreessen Horowitz, Andreessen was well placed to comment on the potency of software. As he observed,

More and more major businesses and industries are being run on software and delivered as online services—from movies to agriculture to national defence. Many of the winners are Silicon Valley-style entrepreneurial technology companies that are invading and overturning established industry structures.

He then made the following prediction:

Over the next 10 years, I expect many more industries to be disrupted by software, with new world-beating Silicon Valley companies doing the disruption in more cases than not.

Industries to be impacted in this way, Andreessen suggested, would include entertainment, communications, recruitment, automotive, retail, energy, agriculture, finance, healthcare, education, and defence.

In the four years since the phrase was coined, “software is eating the world” has shown every sign of being a profound truth. In more and more sectors of industry, companies that lack deep expertise in software have found themselves increasingly by-passed by competitors. Software skills are no longer a “nice-to have” optional extra. They’re core to numerous aspects of product development.

But it’s time to propose a variant to the original phrase. A new set of deep skills is going to prove indispensable for ever larger numbers of industries. This time, the skills are in security. Before long, security will be eating the world. Companies whose software systems fall short on security will be driven out of business.

Dancing pigs

My claim about the growing importance of security may appear to fly in opposition to a general principle of user behaviour. This principle was described by renowned security writer Bruce Schneier in his 2000 book “Secrets and Lies”:

If J. Random Websurfer clicks on a button that promises dancing pigs on his computer monitor, and instead gets a hortatory message describing the potential dangers of the applet — he’s going to choose dancing pigs over computer security any day. If the computer prompts him with a warning screen like: “The applet DANCING PIGS could contain malicious code that might do permanent damage to your computer, steal your life’s savings, and impair your ability to have children,” he’ll click OK without even reading it. Thirty seconds later he won’t even remember that the warning screen even existed.

In other words, despite whatever users may say about the importance of security when directly asked (“yes, of course I take security seriously”), in practice they put a higher priority on watching animated graphics (of dancing pigs, cute kittens, celebrity wardrobe malfunctions, or whatever), and readily accept security risks in pursuit of that goal.

A review paper (PDF) published in 2009 by Cormac Herley of Microsoft Research shared findings that supported this view. Herley reports that, for example, users still typically choose the weakest passwords they can get away with, rather than making greater efforts to keep their passwords unguessable. Users also frequently ignore the advice against re-using the same passwords on different sites (so that, if there’s a security problem with any one of these sites, the user’s data on all other sites becomes vulnerable too).
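A toy example shows why the re-use habit Herley documents is so dangerous (the leaked “database” and the wordlist here are invented): once any one site leaks an unsalted hash database, a tiny dictionary sweep recovers the weak password – and re-use means the same credential now opens accounts everywhere else.

```python
import hashlib

# Toy dictionary attack against a leaked, unsalted password database.

def crack(leaked_hashes, wordlist):
    recovered = {}
    for guess in wordlist:
        h = hashlib.sha256(guess.encode()).hexdigest()
        for user, stored in leaked_hashes.items():
            if stored == h:
                # With password re-use, this guess now works on every
                # other site where the same user registered.
                recovered[user] = guess
    return recovered
```

Salting and deliberately slow hashes (bcrypt, PBKDF2) make the sweep costlier for the site that leaked, but only unique per-site passwords stop a recovered credential from travelling.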

Herley comments:

There are several ways of viewing this. A traditional view is that users are hopelessly lazy: in the face of dire descriptions of the threat landscape and repeated warnings, they do the minimum possible…

But by the end of his review, he offers a more sympathetic assessment:

“Given a choice between dancing pigs and security, users will pick dancing pigs every time.” While amusing, this is unfair: users are never offered security, either on its own or as an alternative to anything else. They are offered long, complex and growing sets of advice, mandates, policy updates and tips… We have shown that much of this advice does nothing to make users more secure, and some of it is harmful in its own right. Security is not something users are offered and turn down. What they are offered and do turn down is crushingly complex security advice that promises little and delivers less.

Herley’s paper concludes:

How can we help users avoid harm? This begins with a clear understanding of the actual harms they face, and a realistic understanding of their constraints. Without these we are proceeding blindly.

Exponential change

What are the “actual harms” that users face, as a result of insecure software systems or poor personal security habits?

We live in a time of rapid technology change. As software eats the world, it leaves more and more aspects of the world vulnerable to problems in the software – and vulnerable to problems in how that software is used, deployed, and updated.

As a result, the potential harm to users from poor security is constantly increasing. Users are vulnerable in new ways that they had never considered before.

Hacking embedded medical devices

For example, consider one possible unexpected side-effect of being fitted with one of the marvels of modern technology, an implantable heart pacemaker. Security researcher Barnaby Jack of IOActive gave a devastating demo at the Breakpoint conference in October 2012 of how easy it was for an outsider to interfere with the system whereby a pacemaker can be wirelessly recalibrated. The result is summed up in this Computerworld headline, “Pacemaker hack can deliver deadly 830-volt jolt”:

The flaw lies with the programming of the wireless transmitters used to give instructions to pacemakers and implantable cardioverter-defibrillators (ICDs), which detect irregular heart contractions and deliver an electric shock to avert a heart attack.

A successful attack using the flaw “could definitely result in fatalities,” said Jack…

In a video demonstration, Jack showed how he could remotely cause a pacemaker to suddenly deliver an 830-volt shock, which could be heard with a crisp audible pop.

Hacking vehicle control systems

Consider also the predicament that many car owners in Austin, Texas experienced, as a result of the actions of a disgruntled former employee of used car retail firm Texas Auto Center. As Wired reported,

More than 100 drivers in Austin, Texas found their cars disabled or the horns honking out of control, after an intruder ran amok in a web-based vehicle-immobilization system normally used to get the attention of consumers delinquent in their auto payments.

Police with Austin’s High Tech Crime Unit on Wednesday arrested 20-year-old Omar Ramos-Lopez, a former Texas Auto Center employee who was laid off last month, and allegedly sought revenge by bricking the cars sold from the dealership’s four Austin-area lots.

Texas Auto Center had included some innovative new technology in the cars they sold:

The dealership used a system called Webtech Plus as an alternative to repossessing vehicles that haven’t been paid for. Operated by Cleveland-based Pay Technologies, the system lets car dealers install a small black box under vehicle dashboards that responds to commands issued through a central website, and relayed over a wireless pager network. The dealer can disable a car’s ignition system, or trigger the horn to begin honking, as a reminder that a payment is due.

The beauty of the system is that it allows a greater number of customers to purchase cars, even when their credit history looks poor. Rather than extensive up-front tests of the credit-worthiness of a potential purchaser, the system takes advantage of the ability to immobilise a car if repayments should cease. However, as Wired reports,

Texas Auto Center began fielding complaints from baffled customers the last week in February, many of whom wound up missing work, calling tow trucks or disconnecting their batteries to stop the honking. The troubles stopped five days later, when Texas Auto Center reset the Webtech Plus passwords for all its employee accounts… Then police obtained access logs from Pay Technologies, and traced the saboteur’s IP address to Ramos-Lopez’s AT&T internet service, according to a police affidavit filed in the case.

Omar Ramos-Lopez had lost his position at Texas Auto Center the previous month. Following good security practice, his own account on the Webtech Plus system had been disabled. However, it seems he gained access by using an account assigned to a different employee.

At first, the intruder targeted vehicles by searching on the names of specific customers. Then he discovered he could pull up a database of all 1,100 Auto Center customers whose cars were equipped with the device. He started going down the list in alphabetical order, vandalizing the records, disabling the cars and setting off the horns.

His manager ruefully remarked, “Omar was pretty good with computers”.

Hacking thermostats and lightbulbs

Finally, consider a surprise side-effect of attaching a new thermostat to a building. Modern thermostats exchange data with increasingly sophisticated systems that control heating, ventilation, and air conditioning. In turn, these systems can connect into corporate networks, which contain email archives and other confidential documents.

The US Chamber of Commerce, headquartered in Washington, discovered in 2011 that a thermostat in a townhouse it used was surreptitiously communicating with an Internet address somewhere in China. All the careful precautions of the Chamber’s IT department to guard against such data seepage – including supervision of the computers and memory sticks used by employees – were undone by this unexpected security vulnerability in what seemed to be an ordinary household object. Information that leaked from the Chamber potentially included sensitive information about US policy for trade with China, as well as other key IP (Intellectual Property).

It’s not only thermostats that have much greater network connectivity these days. Toasters, washing machines, and even energy-efficient lightbulbs contain surprising amounts of software, as part of the implementation of the vision of “smart homes”. And in each case, it opens the potential for various forms of espionage and/or extortion. Former CIA Director David Petraeus openly rejoiced in that possibility, in remarks noted in a March 2012 Wired article “We’ll spy on you through your dishwasher”:

Items of interest will be located, identified, monitored, and remotely controlled through technologies such as RFID, sensor networks, tiny embedded servers, and energy harvesters — all connected to the next-generation internet using abundant, low-cost, and high-power computing…

Transformational is an overused word, but I do believe it properly applies to these technologies, particularly to their effect on clandestine tradecraft.

To summarise: smart healthcare, smart cars, and smart homes, all bring new vulnerabilities as well as new benefits. The same is true for other fields of exponentially improving technology, such as 3D printing, unmanned aerial vehicles (“drones”), smart toys, and household robots.

The rise of robots

Sadly, malfunctioning robots have already been involved in a number of tragic fatalities. In October 2007, an Oerlikon MK5 anti-aircraft system was part of the equipment used by 5,000 South African troops in a large-scale military training exercise. On that morning, the controlling software suffered what a subsequent enquiry would call a “glitch”. Writing in the Daily Mail, Gavin Knight recounted what happened:

The MK5 anti-aircraft system, with two huge 35mm cannons, is essentially a vast robotic weapon, controlled by a computer.

While it’s one thing when your laptop freezes up, it’s quite another when it is controlling an auto-loading magazine containing 500 high-explosive rounds…

“There was nowhere to hide,” one witness stated in a report. “The rogue gun began firing wildly, spraying high explosive shells at a rate of 550 a minute, swinging around through 360 degrees like a high-pressure hose.”

By the time the robot has emptied its magazine, nine soldiers lie dead. Another 14 are seriously injured.

Deaths due to accidents involving robots have also occurred throughout the United States. A New York Times article in June 2014 gives the figure of “at least 33 workplace deaths and injuries in the United States in the last 30 years.” For example, in a car factory in December 2001,

An employee was cleaning at the end of his shift and entered a robot’s unlocked cage. The robot grabbed his neck and pinned the employee under a wheel rim. He was asphyxiated.

And in an aluminium factory in February 1996,

Three workers were watching a robot pour molten aluminium when the pouring unexpectedly stopped. One of them left to flip a switch to start the pouring again. The other two were still standing near the pouring operation, and when the robot restarted, its 150-pound ladle pinned one of them against the wall. He was killed.

To be clear, in none of these cases is there any suggestion of foul play. But to the extent that robots can be remotely controlled, the possibility arises for industrial vandalism.

Indeed, one of the most infamous cases of industrial vandalism (if that is the right description in this case) is the way in which the Stuxnet computer worm targeted the operation of fast-spinning centrifuges inside the Iranian programme to enrich uranium. Stuxnet took advantage of at least four so-called “zero-day security vulnerabilities” in Microsoft Windows software – vulnerabilities that Microsoft did not know about, and for which no patches were available. When the worm found itself installed on computers with particular programmable logic controllers (PLCs), it initiated a complex set of monitoring and alteration of the performance of the equipment attached to the PLC. The end result was that the centrifuges tore themselves apart, reportedly setting back the Iranian nuclear programme by a number of years.

Chillingly, what Stuxnet did to centrifuges, variant software could do to other industrial infrastructure – including energy and communication grids.

Therefore, whereas there is much to celebrate about the growing connectivity of “the Internet of Things”, there is also much to fear about it.

The scariest book

Many of the examples I’ve briefly covered above – the hacking of embedded medical devices, vehicle control systems, and thermostats and lightbulbs – as well as the upsides and downsides of “the rise of robots” – are covered in greater detail in a book I recently finished reading. The book is “Future Crimes”, by former LAPD police officer Marc Goodman. Goodman has spent the last twenty years working on cyber security risks with organisations such as Interpol, NATO, and the United Nations.

The full title of Goodman’s book is worth savouring: “Future Crimes: Everything is connected, everything is vulnerable, and what we can do about it.” Nikola Danaylov, host of the Singularity 1on1 podcast, recently described Future Crimes as “the scariest book I have ever read in my life”. That’s a sentiment I fully understand. The book has a panoply of “Oh my god” moments.

What the book covers is not only the exponentially growing set of vulnerabilities that our exponentially connected technology brings in its wake, but also the large set of people who may well be motivated to exploit these vulnerabilities. This includes home and overseas government departments, industrial competitors, disgruntled former employees, angry former friends and spouses, ideology-fuelled terrorists, suicidal depressives, and a large subset of big business known as “Crime Inc”. Criminals have regularly been among the very first to adopt new technology – and it will be the same with the exploitation of new generations of security vulnerabilities.

There’s much in Future Crimes that is genuinely frightening. It’s not alone in the valuable task of raising public awareness of increasing security vulnerabilities. I also recommend Kim Zetter’s fine investigative work “Countdown To Zero Day: Stuxnet and the launch of the world’s first digital weapon”. Some of the same examples appear in both books, providing added perspective. In both cases the message is clear – cybersecurity threats are likely to mushroom.

On the positive front, technology can devise countermeasures as well as malware. There has long been an arms race between software virus writers and software antivirus writers. This arms race is now expanding into many new areas.

If the race is lost, it means that security will eat the world in a bad way: the horror stories that are told throughout both Future Crimes and Countdown To Zero Day will magnify in both number and scope. In that future scenario, people will look back fondly on the present day as a kind of innocent paradise, in which computers and computer-based systems generally worked reliably (despite occasional glitches). Safe, clean computer technology might become as rare as bottled oxygen in an environment where smog and pollution dominate – something that is only available in small quantities, to the rich and powerful.

If the race is won, there will still be losers. I’m not just referring to Crime Inc, and other would-be exploiters of security vulnerabilities, whose ambitions will be thwarted. I’m referring to all the companies whose software will fall short of the security standards of the new market leaders. These are companies who pay lip service to the importance of robust, secure software, but whose products in practice disappoint customers. By that time, indeed, customers will long have moved on from preferring dancing pigs to good security. The prevalence of bad news stories – in their daily social media traffic – will transform their appreciation of the steps they need to take to remain as safe as possible. Their priorities will have changed. They’ll be eagerly scouring reports as to which companies have world-class software security, and which companies, on the other hand, have products that should be avoided. Companies in the former camp will eat those in the latter camp.

Complications with software updates

As I mentioned above, there can be security vulnerabilities, not only intrinsic in a given piece of software, but also in how that software is used, deployed, and updated. I’ll finish this article by digging more deeply into the question of software updates. These updates have a particularly important role in the arms race between security vulnerabilities and security improvements.

Software updates are a key part of modern technological life. These updates deliver new functionality to users – such as a new version of a favourite app, or an improved user interface for an operating system. They also deliver security fixes, along with other bug fixes. In principle, as soon as possible after a major security vulnerability has been identified and analysed, the vendor will make available a fix to that programming error.

However, updates are something that many users dislike. On the one hand, they like receiving improved functionality. But they fear on the other hand that:

  • The upgrade will be time-consuming, locking them out of their computer systems at a time when they need to press on with urgent work
  • The upgrade will itself introduce new bugs, and break familiar patterns of how they use the software
  • Some of their applications will stop working, or will work in strange ways, after the upgrade.

The principle of “once bitten, twice shy” applies here. One bad experience with a software upgrade – such as favourite add-on applications getting lost in the process – may prejudice users against accepting any new upgrades.

My own laptop recently popped up an invitation for me to reserve a free upgrade from its current operating system – Windows 7 – to the forthcoming Windows 10. I confess that I have yet to click the “yes, please reserve this upgrade” button. I fear, indeed, that some of the legacy software on my laptop (including apps that are more than ten years old, and whose vendors no longer exist) will become dysfunctional.

The Android operating system for smartphones faces a similar problem. New versions of the operating system, which include fixes to known security problems, often fail to reach users of Android phones. In some cases, this is because the phones are running a reconfigured version of Android, which includes modifications introduced by a phone manufacturer and/or network operator. Any update has to wait until similar reconfigurations have been applied to the new version of the operating system – and that can take a long time, due to reluctance on the part of the phone manufacturer or network operator. In other cases, it’s simply because users decline to accept an Android upgrade when it is offered to them. Once bitten, twice shy.

Accordingly, there’s competitive advantage available, to any company that makes software upgrades as smooth and reliable as possible. This will become even more significant, as users grow in their awareness of the need to have security vulnerabilities in their computer systems fixed speedily.

But there’s a very awkward problem lurking around the upgrade process. Computer systems can sometimes be tricked into installing malicious software, whilst thinking it is a positive upgrade. In other words, the upgrade process can itself be hacked. For example, at the Black Hat conference in July 2009, IOActive security researcher Mike Davis demonstrated a nasty vulnerability in the software update mechanism in the smart electricity meters that were to be installed in homes throughout the Pacific Northwest of the United States.

For a riveting behind-the-scenes account of this particular research, see the book Countdown To Zero Day. In brief, Davis found a way to persuade a smart meter that it was being offered a software upgrade by a neighbouring, trusted smart meter, whereas it was in fact receiving software from an external source. This subterfuge was accomplished by extracting the same network encryption key that was hard-wired into every smart meter in the collection, and then presenting that encryption key as apparent (but bogus) evidence that the communication could be trusted. Once the meter had installed the upgrade, the new software could disable the meter from responding to any further upgrades. It could also switch off any electricity supply to the home. As a result, the electricity supplier would be obliged to send engineers to visit every single house that had been affected by the malware. In the simulated demo shown by Davis, this was as many as 20,000 separate houses within just a 24-hour period.
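The flaw can be modelled in a few lines (the names here are invented, and the real meters’ protocol was more elaborate): when every device authenticates its peers with the same hard-wired symmetric key, extracting the key from any single meter lets an attacker mint update offers that all the other meters will accept as coming from a trusted neighbour.

```python
import hashlib
import hmac

# Toy model of the shared-key design flaw; illustration only.

SHARED_KEY = b"hard-wired-in-every-meter"  # the single point of failure

def sign_update(key: bytes, firmware: bytes) -> str:
    """Whoever holds the key can produce a 'trusted neighbour' tag."""
    return hmac.new(key, firmware, hashlib.sha256).hexdigest()

def meter_accepts(firmware: bytes, mac: str) -> bool:
    # Every meter checks against the same baked-in key, so a key
    # extracted from one device forges valid tags for the whole fleet.
    return hmac.compare_digest(sign_update(SHARED_KEY, firmware), mac)
```

Per-device keys – or public-key signatures, where each meter holds only a verification key – would have confined the damage from a single compromised device.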

Uncharitably, we might think to ourselves that an electricity supplier is probably the kind of company to make mistakes with its software upgrade mechanism. As Mike Davis put it, “the guys that built this meter had a short-term view of how it would work”. We would expect, in contrast, that a company whose core business was software (and which had been one of the world’s leading software companies for several decades) would have no such glitches in its system for software upgrades.

Unexpectedly, one of the exploits utilised by the Stuxnet team was a weakness in part of the Microsoft Update system – a part that had remained unchanged for many years. The exploit was actually used by a piece of malware known as Flame, which shared many characteristics with Stuxnet. Mikko Hyppönen, Chief Research Officer of Finnish antivirus firm F-Secure, reported the shocking news as follows in a corporate blogpost tellingly entitled “Microsoft Update and The Nightmare Scenario”:

About 900 million Windows computers get their updates from Microsoft Update. In addition to the DNS root servers, this update system has always been considered one of the weak points of the net. Antivirus people have nightmares about a variant of malware spoofing the update mechanism and replicating via it.

Turns out, it looks like this has now been done. And not by just any malware, but by Flame…

Flame has a module which appears to attempt to do a man-in-the-middle attack on the Microsoft Update or Windows Server Update Services system. If successful, the attack drops a file called WUSETUPV.EXE to the target computer.

This file is signed by Microsoft with a certificate that is chained up to Microsoft root.

Except it isn’t signed really by Microsoft.

Turns out the attackers figured out a way to misuse a mechanism that Microsoft uses to create Terminal Services activation licenses for enterprise customers. Surprisingly, these keys could be used to also sign binaries…

Having a Microsoft code signing certificate is the Holy Grail of malware writers. This has now happened.

Hyppönen’s article ends with some “good news in the bad news” which nevertheless sounds a strong alarm about similar things going wrong (with worse consequences) in the future:

I guess the good news is that this wasn’t done by cyber criminals interested in financial benefit. They could have infected millions of computers. Instead, this technique has been used in targeted attacks, most likely launched by a Western intelligence agency.
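Part of what went wrong here is a classic certificate-purpose confusion: certificates minted for Terminal Services licence activation were honoured for code signing. In X.509 terms, that is precisely what the Extended Key Usage (EKU) extension is supposed to prevent. Here is a minimal sketch of the check a hardened update client would enforce – the licensing OID below is illustrative, not the exact Microsoft value:

```python
# id-kp-codeSigning, the standard EKU OID for code signing (RFC 5280):
CODE_SIGNING_EKU = "1.3.6.1.5.5.7.3.3"

def may_sign_updates(cert_ekus: list) -> bool:
    """Accept a certificate for update binaries only if it explicitly
    carries the code-signing EKU; a licensing-only certificate fails,
    however validly it chains to a trusted root."""
    return CODE_SIGNING_EKU in cert_ekus

# A certificate minted only for licence activation must be rejected:
licensing_only = ["1.3.6.1.4.1.311.10.5.1"]  # illustrative licensing OID
assert not may_sign_updates(licensing_only)
assert may_sign_updates([CODE_SIGNING_EKU])
```

A chain that reaches a trusted root is necessary but not sufficient: the verifier also has to check that each certificate is being used for the purpose it was issued for.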

How not to be eaten

Despite the threats that I’ve covered above, I’m optimistic that software security and software updates can be significantly improved in the months and years ahead: there’s plenty of scope for raising the quality of software security.

One reason for this optimism is that I know that smart people have been thinking hard about these topics for many years. Good solutions already exist, ready for wider deployment as the market becomes more receptive to them.

But it will take more than technology to win this arms race. It will take political resolve. For too long, software companies have been able to ship software that has woefully substandard security. For too long, companies have prioritised dancing pigs over rock-hard security. They’ve written into their software licences that they accept no liability for problems arising from bugs in their software. They’ve followed, sometimes passionately, and sometimes half-heartedly, the motto from Facebook’s Mark Zuckerberg that software developers should “move fast and break things”.

That kind of behaviour may have been appropriate in the infancy of software. No longer.

Move fast and break things

21 March 2013

The burning need for better supra-national governance

International organisations have a bad reputation these days. The United Nations is widely seen as ineffective. There’s a retreat towards “localism”: within Britain, the EU is unpopular; within Scotland, Britain is unpopular. And any talk of “giving up sovereignty” is deeply unpopular.

However, lack of effective international organisations and supra-national governance is arguably the root cause of many of the biggest crises facing humanity in the early 21st century.

That was the thesis which Ian Goldin, Oxford University Professor of Globalisation and Development, very ably shared yesterday evening in the Hong Kong Theatre at the London School of Economics. He was quietly spoken, but his points hit home strongly. I was persuaded.

The lecture was entitled Divided Nations: Why global governance is failing and what we can do about it. It coincided with the launch of a book with the same name. For more details of the book, see this blogpost on the website of the Oxford Martin School, where Ian Goldin holds the role of Director.

It’s my perception that many technology enthusiasts, futurists, and singularitarians have a blind spot when it comes to the topic of the dysfunction of current international organisations. They tend to assume that technological improvements will automatically resolve the crises and risks facing society. Governments and regulators should ideally leave things well alone – so the plea goes.

My own view is that smarter coordination and regulation is definitely needed – even though it will be hard to set that up. Professor Goldin’s lecture amply reinforced that view.

On the train home from the lecture, I downloaded the book onto my Kindle. I recommend anyone who is serious about the future of humanity to read it. Drawing upon the assembled insights and wisdom of the remarkable set of scholars at the Oxford Martin School, in addition to his own extensive experience in the international scene, Professor Goldin has crystallised state-of-the-art knowledge regarding the pressing urgency, and options, for better supra-national governance.

In the remainder of this blogpost, I share some of the stream-of-consciousness notes that I typed while listening to the lecture. Hopefully this will give a flavour of the hugely important topics covered. I apologise in advance for any errors introduced in transcription. Please see the book itself for an authoritative voice. See also the live tweet stream for the meeting, with the hash-tag #LSEGoldin.

What keeps Oxford Martin scholars awake at night

The fear that no one is listening. The international governance system is in total gridlock. There are failures on several levels:

  • Failure of governments to lift themselves to a higher level, instead of being pre-occupied by local, parochial interests
  • Failure of electorates to demand more from their governments
  • Failure of governments to give clearer direction to the international institutions.

Progress with international connectivity

80 countries became democratic in the 1990s. Only one country in the world today remains disconnected – North Korea.

Over the last few decades, the total global population has increased, but the numbers in absolute poverty have decreased. This has never happened before in history.

So there are many good aspects to the increase in the economy and inter-connectivity.

However, economists failed to think sufficiently far ahead.

What economists should have thought about: the global commons

What was rational for the individuals and for national governments was not rational for the whole world.

Similar problems exist in several other fields: antibiotic resistance, global warming, the markets. He’ll get to these shortly.

The tragedy of the commons is that, when everyone does what is rational for them, everyone nevertheless ends up suffering. The common resource is not managed.

The pursuit of profits is a good thing – it has worked much better than central planning. But the result is irrationality in aggregate.

The market alone cannot provide a response to resource allocation. Individual governments cannot provide a solution either. A globally coordinated approach is needed.

Example of several countries drawing water from the rivers feeding the Aral Sea – which is now largely arid.

That’s what happens when nations do the right thing for themselves.

The special case of Finance

Finance is by far the most sophisticated of the resource management systems:

  • The best graduates go into the Treasury, the Federal Reserve, etc
  • They are best endowed – the elite organisation
  • These people know each other – they play golf together.

If even the financial bodies can’t understand their own system, this has bleak implications for other systems.

The growth of the financial markets had two underbellies:

  1. Growing inequality
  2. Growing potential for systemic risk

The growing inequality has actually led to lobbying that exacerbates inequality even more.

The result was a “Race to the bottom”, with governments being persuaded to get out of the regulation of things that actually did need to be regulated.

Speaking after the crisis, Hank Paulson, US Treasury Secretary and former CEO of Goldman Sachs, in effect said “we just did not understand what was happening” – even with all the high-calibre people and advice available to him. That’s a shocking indictment.

The need for regulation

Globalisation requires regulation, not just at the individual national level, but at an international level.

Global organisations are weaker now than in the 1990s.

Nations are becoming more parochial – the examples of UK (thinking of leaving EU) and Scotland (thinking of leaving UK) are mirrored elsewhere too.

Yes, integration brings issues that are hard to control, but the response to withdraw from integration is terribly misguided.

We cannot put back the walls. Trying to withdraw into local politics is dreadfully misguided.

Five examples

His book has five examples as illustrations of his general theme (and that’s without talking in this book about poverty, or nuclear threats):

  1. Finance
  2. Pandemics
  3. Migration
  4. Climate change
  5. Cyber-security

Many of these problems arise from the success of globalisation – the extraordinary rise in incomes worldwide in the last 25 years.

Pandemics require supra-national attention, because of increased connectivity:

  • The rapid spread of swine flu was correlated tightly with aircraft travel.
  • It would take just two days for a new infectious disease to travel all the way round the world.

The idea that you can isolate yourself from the world is a myth. There’s little point having a quarantine regime in place in Oxford if a disease is allowed to flourish in London. The same applies between countries, too.

Technology developments exacerbate the problem. DNA analysis is a good thing, but the capacity to synthesise diseases has terrible consequences:

  • There’s a growing power for even a very small number of individuals to cause global chaos, e.g. via pathogens
  • Think of something like Waco, Texas – people who are fanatical Armageddonists – but with greater technical skills.

Cyber-security issues arise from the incredible growth in network connectivity. Jonathan Zittrain talks about “The end of the Internet”:

  • The Internet is not governed by governments
  • Problems prosecuting people, even when we know who they are and where they are (but in a different jurisdiction)
  • Individuals and small groups could destabilise the whole Internet.

Migration is another “orphan issue”. No international organisation has the authority to deal with it:

  • Control over immigration is, in effect, an anarchic, bullying system
  • We have very bad data on migration (even in the UK).

The existing global institutions

The global institutions that we have were a response to post-WW2 threats.

For a while, these institutions did well. The World Bank = Bank for reconstruction. It did lead a lot of reconstruction.

But over time, we became complacent. The institutions became out-dated and lost their vitality.

The recent financial crisis shows that the tables have been turned round: incredible scene of the EU taking its begging bowl to China.

The tragedy is that the lessons well-known inside the existing institutions have not been learned. There are lessons about the required sequencing of reforms, etc. But with the loss of vitality of these institutions, the knowledge is being lost.

The EU has very little bandwidth for managing global affairs. Same as US. Same as Japan. They’re all preoccupied by local issues.

The influence of the old G7 is in decline. The new powers are not yet ready to take over the responsibility: China, Russia, India, Indonesia, Brazil, South Africa…

  • The new powers don’t actually want this responsibility (different reasons for different countries)
  • China, the most important of the new powers, has other priorities – managing their own poverty issues at home.

The result is that no radical reform of the international institutions happens:

  • No organisations are killed off
  • No new ones created
  • No new operating principles are agreed.

Therefore the institutions remain ineffective. Look at the lack of meaningful progress towards solving the problems of climate change.

He has been on two Bretton Woods reform commissions, along with “lots of wonderfully smart, well-meaning people”. Four prime ministers were involved, including Gordon Brown. Kofi Annan received the report with good intentions. But no actual reform of UN took place. Governments actually want these institutions to remain weak. They don’t want to give up their power.

It’s similar to the way that the UK is unwilling to give up power to Brussels.

Sleep-walking

The financial crisis shows what happens when global systems aren’t managed:

  • Downwards spiral
  • Very hard to pull it out afterwards.

We are sleep-walking into global crises. The financial crisis is just a foretaste of what is to come. However, this need not be the case.

A positive note

He’ll finish the lecture by trying to be cheerful.

Action on global issues requires collective action by both citizens and leaders who are not afraid to relinquish power.

The good news:

  • Citizens are more connected than ever before
  • Ideologies that have divided people in the past are reducing in power
  • We can take advantage of the amplification of damage to reputation that can happen on the Internet
  • People can be rapidly mobilised to overturn bad legislation.

Encouraging example of SOPA debate in US about aspects of control of the Internet:

  • 80 million people went online to show their views, in just two days
  • Senate changed their intent within six hours.

Some good examples where international coordination works

  • International plane travel coordination (air traffic control) is an example that works very well – it’s a robust system
  • Another good example: the international postal system.

What distinguishes the successes from the failures:

  • In the Air Traffic Control case, no one has a different interest
  • But in other cases, there are lots of vested interests – neutering the effectiveness of e.g. the international response to the Syrian crisis
  • Another troubling failure example is what happened in Iraq – it was a travesty of what the international system wanted and needed.

Government leaders are afraid that electorates aren’t ready to take a truly international perspective. To be internationalist in political circles is increasingly unfashionable. So we need to change public opinion first.

Like-minded citizens need to cooperate, building a growing circle of legitimacy. Don’t wait for the global system to play catch-up.

In the meantime, true political leaders should find some incremental steps, and should avoid the excuse of global inaction.

Sadly, political leaders are often tied up addressing short-term crises, but these short-term crises are due to no-one satisfactorily addressing the longer-term issues. With inaction on the international issues, the short-term crises will actually get worse.

Avoiding the perfect storm

The scenario we face for the next 15-20 years is “perfect storm with no captain”.

He calls for a “Manhattan project” for supra-national governance. His book is a contribution to initiating such a project.

He supports the subsidiarity principle: decisions should be taken at the most local level possible. Due to hyper-globalisation, there are fewer and fewer things that it makes sense to control at the national level.

Loss of national sovereignty is inevitable. We can have better sovereignty at the global level – and we can influence how that works.

The calibre of leaders

Example of leader who consistently took a global perspective: Nelson Mandela. “Unfortunately we don’t have many Mandelas around.”

Do leaders owe their power bases with electorates to being parochial? The prevailing wisdom is that national leaders have to shy away from taking a global perspective. But the electorate actually has more wisdom. They know the financial crisis wasn’t just due to bankers in Canary Wharf having overly large bonuses. They know the problems are globally systemic in nature, and need global approaches to fix them.


29 April 2012

My brief skirmish with Android malware

Filed under: Android, deception, malware, security — David Wood @ 2:19 pm

The smartphone security issue is going to run and run. There’s an escalating arms race, between would-be breakers of security and would-be defenders. The race involves both technology engineering and social engineering.

There is a lot at stake:

  • The numbers of users of smartphones continues to rise
  • The amount of sensitive data carried by a typical user on their smartphone (or accessible via credentials on their smartphone) continues to rise
  • Users increasingly become accustomed to the idea of downloading and installing applications on their mobile devices
  • Larger numbers of people turn their minds to crafting ways to persuade users to install apps against their better interests – apps that surreptitiously siphon off data and/or payments

In that context, I offer the following cautionary tale.

This afternoon, I unexpectedly ran into an example of this security arms race. I was minding my own business, doing what lots of people are doing in the UK these days – checking the weather forecast.

My Samsung Galaxy Note, which runs Android, came with an AccuWeather widget pre-installed on the default homescreen:

Clicking on the widget brings up a larger screen, with more content:

Clicking the ‘More’ button opens a web browser, positioned at a subpage of m.accuweather.com. I browsed a few screens of different weather information, and then noticed an inviting message near the bottom of the screen:

  • Turbo Battery Boost – Android System Update

I was curious, and decided to see where that link would lead. At first glance, it appeared to take me into the Android Market:

The reviews looked positive. Nearly two million downloads, with an average rating around 4.5 stars. As someone who finds the need to recharge my Android’s battery midway through every day, I could see the attraction of the application.

As I was weighing up what to do next, another alert popped up on the screen:

By this stage, I was fairly sure that something fishy was going on. I felt sure that, if there really was a breakthrough in battery management software for Android, I would have heard about it via other means. But by now I was intrigued, so I decided to play along for a while, to see how the story unfolded.

Clicking ‘Next’ immediately started downloading the app:

which was immediately followed by more advice on what I should do next, including the instruction to configure Android to accept updates from outside the Android Market:

Sure enough, the notifications area now contained a downloaded APK file, temptingly labelled “tap to start”:

A risk-averse person would probably have stopped at that point, fearful of what damage the suspicious-looking APK might wreak on my phone. But I had enough confidence in the Android installation gateway to risk one more click:

That’s a heck of a lot of permissions, but it’s nothing unusual. Many of the other apps I’ve installed recently have requested what seemed like a similar range of permissions. The difference in this case was that I had little trust in the origin of this latest application.

Even though the initial ad had been served up on the website of a reputable company, AccuWeather, and implied some kind of endorsement from AccuWeather for this application, I doubted that any such careful endorsement had taken place. Probably the ads shown on the AccuWeather webpage reach it via some indirect broker.

Anyway, I typed “Android BatteryUpgrade” into a Google search bar, and quickly found various horror stories.

For example, from a PCWorld article by Tom Spring, “Sleazy Ads on Android Devices Push Bogus ‘Battery Upgrade’ Warnings”:

Sketchy ads promote battery-saver apps for Android, but security experts say the programs are really designed to steal your data–or your money

Scareware has gone mobile: Users of Android devices are starting to see sleazy ads warning that they need to upgrade their device’s battery. The supposed battery-saver apps that those ads prod you to download, however, could endanger your privacy or siphon money from your wallet–and generally they’ll do nothing to improve your gadget’s battery life…

“These ads cross a line,” says Andrew Brandt, director of threat research for Solera Networks. It’s one thing to market a worthless battery app, he says, but another to scare or trick people into installing a program they don’t need.

The ads are similar to scareware marketing tactics that have appeared on PCs: Such ads pop up on desktops or laptops, warning that your computer is infected and advising you to download a program to fix the problem. In many cases those rogue system utilities and antivirus products are merely disguises for software that spies on users.

Why use battery ads as a ploy? They tap into a common anxiety, Brandt says. Phone users aren’t yet concerned about viruses on their phones, but they are worried about their battery being sucked dry.

Brandt says that one Android battery app, called both Battery Doctor and Battery Upgrade, is particularly problematic: Not only does it not upgrade a battery or extend a charge, but when it’s installed and unlocked, it harvests the phone’s address book, the phone number, the user’s name and email address, and the phone’s unique identifying IMEI number. With a phone user’s name, IMEI, and wireless account information, an attacker could clone the phone and intercept calls and SMS messages, or siphon money from a user by initiating premium calls and SMS services. Once the battery app is installed the program sends the phone ads that appear in the drop down status bar of the phone at all times – whether the app is running or not. Lastly it periodically transmits changes to the user’s private information and phone-hardware details to its servers…

Now on the one hand, Android deserves praise for pointing out to the user (me, in this case) that the application was requesting lots of powerful capabilities. On the other hand, it’s likely that at least some users would just think, “click, click, yes I really do want to install this, click, click”, having been desensitised to the issue by having installed lots of other apps in seemingly similar ways in the past.

Buyer beware. Especially if the cost is zero – and if the origin of the application cannot be trusted.

Footnote: Now that I’m paying more attention, I can see lots of other “sleazy” (yes, that’s probably the right word) advertisements on AccuWeather’s mobile webpages.

30 December 2011

Factors slowing the adoption of tablet computers in hospital

Filed under: Connected Health, mHealth, security, tablets, usability — David Wood @ 12:35 pm

Tablet computers seem particularly well suited to usage by staff inside hospitals.  They’re convenient and ergonomic.  They put huge amounts of relevant information right in the hands of clinicians, as they move around wards.  Their screens allow display of complex medical graphics, which can be manipulated in real time.  Their connectivity means that anything entered into the device can (in contrast to notes made on old-world paper pads) easily be backed up, stored, and subsequently searched.

Here’s one example, taken from an account by Robert McMillan in his fascinating Wired Enterprise article “Apple’s Secret Plan to Steal Your Doctor’s Heart”:

Elliot Fishman, a professor of radiology at Johns Hopkins… is one of a growing number of doctors who look at the iPad as an indispensable assistant to his medical practice. He studies 50 to 100 CT scans per day on his tablet. Recently, he checked up on 20 patients in his Baltimore hospital while he was traveling in Las Vegas. “What this iPad does is really extend my ability to be able to consult remotely anytime, anywhere,” he says. “Anytime I’m not at the hospital, I’m looking at the iPad.”

For some doctors at Johns Hopkins, the iPad can save an hour to an hour and a half per day — time that would otherwise be spent on collecting paper printouts of medical images, or heading to computer workstations to look them up online. Many doctors say that bringing an iPad to the bedside lets them administer a far more intimate and interactive level of care than they’d previously thought possible. Even doctors who are using an iPad for the first time often become attached, Fishman says. “Their biggest fear is what if we took it away.”

However, a thoughtful review by Jenny Gold, writing in Kaiser Health News, points out that there are many factors slowing down the adoption of tablets in hospital:

iPads have been available since April 2010, but less than one percent of hospitals have fully functional tablet systems, according to Jonathan Mack, director of clinical research and development at the West Wireless Health Institute, a San Diego-based nonprofit focused on lowering the cost of health care through new technology…

UC San Diego Health System’s experience with iPads illustrates both the promise and the challenge of using tablet technology at hospitals. Doctors there have been using the iPad since it first came out, but a year and a half later, only 50 to 70 – less than 10 percent of physicians – are using them…

Here’s a list of the factors Gold notes:

  1. The most popular systems for electronic medical records (EMRs) don’t yet make apps that allow doctors to use EMRs on a tablet the way they would on a desktop or laptop. To use a mobile device effectively requires a complete redesign of the way information is presented.  For example, the EMR system used at UC San Diego is restricted to a read-only app for the iPad, meaning it can’t be used for entering new information.  (To get around the problem, doctors can log on through another program called Citrix. But because the product is built on a Windows platform and meant for a desktop, it can be clunky on an iPad and difficult to navigate.)
  2. Spotty wireless coverage at the hospital means doctors are logged off frequently as they move about the hospital, cutting off their connection to the EMR
  3. The iPad doesn’t fit in the pocket of a standard white lab coat. Clinicians can carry it around in a messenger bag, but it’s not convenient
  4. There are also worries about the relative newness of the technology, and whether adequate vetting has taken place over patient privacy or data security.  For example, as my former Symbian colleague Tony Naggs asks, what happens if tablets are lost or stolen?
  5. Some clinicians complain that tablet computers are difficult to type on, especially if they have “fat fingers”.

Let’s take another look at each of these factors.

1. Mobile access to EMRs

Yes, there are significant issues involved:

  • The vast number of different EMRs in use.  Black Book Rankings regularly provide a comparative evaluation of different EMRs, including a survey released on 3 November 2011 that covered 422 different systems
  • Slower computing performance on tablets, whose power inevitably lags behind that of desktops and laptops
  • A smaller display and the lack of a mouse mean the UI needs to be rethought.

However, as part of an important convergence of skillsets, expert mobile software developers are learning more and more about the requirements of medical systems.  So it’s only a matter of time before mobile access to EMRs improves – including write access as well as read access.

Note this will typically require changes on both the handset and the EMR backend, to support the full needs of mobile access.

2. Intermittent wireless coverage

In parallel with improvements on software, network improvements are advancing.  Next generation WiFi networks are able to sustain connections more reliably, even in the complex topography of hospitals.

Note that the costs of a possible WiFi network upgrade need to be borne in mind when hospitals are considering rolling out tablet computer solutions.

3. Sizes of devices

Tablets with different screen sizes are bound to become more widely deployed.  Sticking with a small number of screen sizes (for example, just two, as is the case with iOS) has definite advantages from a programmer’s point of view, since fewer screen configurations need to be tested.  But the increasing imperative to supply devices that are intermediate in size between smartphone and iPad means that at least some developers will become smarter in supporting a wider range of screen sizes.

4. Device security

Enterprise software already has a range of solutions available to manage a suite of mobile devices.  This includes mechanisms such as remote lockdown and remote wipe, in case any device becomes lost or stolen.

With sufficient forethought, these systems can even be applied in cases when visiting physicians want to bring their own, personal handheld computers with them to work in a particular hospital.  Access to the EMR of that hospital would be gated on the device first installing some device management software, which monitors the device for subsequent inappropriate usage.
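The gating arrangement just described can be sketched as a simple enrolment check. The names below are hypothetical – real mobile device management (MDM) products expose this through their own APIs – but the policy shape is the same: no enrolment, no EMR access.

```python
# Hypothetical sketch of EMR access gated on MDM enrolment. A device
# is enrolled only once its owner agrees to install the hospital's
# monitoring / remote-wipe software.

enrolled_devices = set()

def enroll(device_id: str) -> None:
    """Record that the device has accepted the management software."""
    enrolled_devices.add(device_id)

def emr_access_allowed(device_id: str) -> bool:
    """Only enrolled devices may reach the hospital EMR."""
    return device_id in enrolled_devices

assert not emr_access_allowed("visiting-physician-tablet")
enroll("visiting-physician-tablet")
assert emr_access_allowed("visiting-physician-tablet")
```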

5. New user interaction modes

Out of all the disincentives to wider usage of tablet computers in hospitals, the usability issue may be the most significant.

Usability paradigms that make sense for devices with dedicated keyboards probably aren’t optimal when part of the screen has to double as a makeshift keyboard.  This can cause the kind of frustration voiced by Dr. Joshua Lee, chief medical information officer at UC San Diego (as reported by Jenny Gold):

Dr Lee occasionally carries his iPad in the hospital but says it usually isn’t worth it.  The iPad is difficult to type on, he complains, and his “fat fingers” struggle to navigate the screen. He finds the desktop or laptop computers in the hospital far more convenient. “Are you ever more than four feet away from a computer in the hospital? Nope,” he says. “So how is the tablet useful?”

But that four feet gap (and it’s probably frequently larger than that) can make all the difference to the spontaneity of an interaction.  In any case, there are many drawbacks to using a standard PC interface in a busy clinical setting.  Robert McMillan explains:

Canada’s Ottawa Hospital uses close to 3,000 iPads, and they’re popping up everywhere — in the lab coats of attending physicians, residents, and pharmacists. For hospital CIO Dale Potter, the iPad gave him a way out of a doomed “computer physician order entry” project that was being rolled out hospital-wide when he started working there in 2009.

It sounds complicated, but computerized physician order entry really means something simple: replacing the clipboards at the foot of patient’s beds with a computer, so that doctors can order tests, prescribe drugs, and check medical records using a computer rather than pen and paper. In theory, it’s a great idea, but in practice, many of these projects have failed, in part because of the clunky and impersonal PC interfaces: Who really wants to sit down and start clicking and clacking on a PC, moving a mouse while visiting a patient?

Wise application of user experience design skills is likely to result in some very different interaction styles in such settings in the not-too-distant future.

Aside: if even orangutans find ways to enjoy interacting with iPads, there are surely ways to design UIs that suit busy, clumsy-fingered medical staff.

6. Process transformation

That leads to one further thought.  The biggest gains from tablet computers in hospitals probably won’t come from merely enabling clinicians to follow the same processes as before, only faster and more reliably (important though these improvements are).  More likely, the handy availability of tablets will enable clinicians to devise brand new processes – processes that were previously unthinkable.

As with all process change, there will be cultural mindset issues to address, in addition to ensuring the technology is fit for purpose.  No doubt there will be some initial resistance to new ways of doing things.  But in time, with the benefit of positive change management, good new habits will catch on.

17 September 2008

Google says OHA operators must agree to user choice on apps

Filed under: OHA, OSiM, security — David Wood @ 7:56 am

Mike Jennings, Android Developer Advocate for Google, faced a range of questions about security from attendees at the OSiM (Open Source in Mobile) conference here in Berlin this morning.

He confirmed, several times, that for Android phones:

  • “Users don’t need anyone’s permission to install apps”
  • “Developers don’t need anyone’s permission to deploy apps”.

This vision is all the more attractive, given the further point that

  • “All apps can integrate deeply with the system”.

The model, as Mike Jennings explained, is that each app needs to tell users what capabilities it will use – for example, making a phone call, or accessing the address book – and the user decides whether to permit the application.
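To make that concrete, here is a minimal sketch of how an Android app declares the capabilities it wants in its manifest; the user is shown this list and decides whether to go ahead. (The package name and application label are hypothetical examples; the two permission names are real Android permissions.)

```xml
<!-- AndroidManifest.xml (illustrative sketch) -->
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
    package="com.example.dialer">

    <!-- Every capability the app wants must be declared up front;
         the user sees this list before permitting the app. -->
    <uses-permission android:name="android.permission.CALL_PHONE" />
    <uses-permission android:name="android.permission.READ_CONTACTS" />

    <application android:label="Example Dialer" />
</manifest>
```

An app that tries to use a capability it hasn’t declared is simply refused at runtime – the declaration, not the code, is what the user is trusting.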

Questions from the audience tried to drill into that point: won’t network operators seek additional control, to protect their network, to prevent malware, or to avoid revenue bypass?

The answer is, apparently, that all operators who sign up to the OHA (Open Handset Alliance) need to agree to allow the degree of openness described above.

According to this report from TechRadar, similar questions arose in a session in London yesterday morning:

When quizzed about operators by a keen developer who branded them ‘bastards’ for hating VoIP apps and the like, Jennings replied “there’s been a lot of technological advances with Android, but there’s a lot of political advances that have taken place for [some] carriers to go with our vision of being more open,” adding that carriers were now seeing that more development was needed.

I suspect we haven’t heard the last of this. It seems implausible to me that operators will be comfortable in trusting users to this extent – including those who may be inebriated while in the pub, or who fall into an over-trusting “yes, yes, yes” rut while installing apps.

3 September 2008

Restrictions on the suitability of open source?

Filed under: Open Source, security, usability — David Wood @ 8:56 am

Are there restrictions on the suitability of open source methods? Are there kinds of software for which closed source development methods are inherently preferable and inherently more likely to succeed?

These questions are part of a recent discussion triggered by the posting “Different ways and paradigms” by Nokia’s Ari Jaaksi, which looked for reasons why various open source software development methods might be applicable to some kinds of project but not to others. As Ari asks,

“Why would somebody choose a specific box [set of methods] for their products?”

One respondent suggested that software with high security and high quality criteria should be developed using closed source methods rather than using open source.

Another stated that,

I firmly believe ‘closed’ source is best route for targeting consumers and gaining mass appeal/ acceptance.

That brings me back to the question I started with. Are there features of product development – perhaps involving security and robustness, or perhaps involving the kinds of usability that are important to mainstream consumers – to which open source methods aren’t suited?

Before answering that, I have a quick aside. I don’t believe that open source is ever a kind of magic dust that can transform a failing project into a successful project. Adopting open source, by itself, is never a guarantee of success. As Karl Fogel says in the very first sentence of Chapter 1 in his very fine book “Producing open source software: how to run a successful free software project”,

“Most free projects fail.”

Instead, you need to have other project fundamentals right, before open source is likely to work for you. (And as an aside to an aside, I believe that several of the current attempts to create mobile phone software systems using open source methods will fail.)

But the situation I’m talking about is when other project fundamentals are right. In that case, my question becomes:

Are there types of software for which an open source approach will be at odds with the other software disciplines and skills (eg security, robustness, usability…) that are required for success in that arena?

In one way, the answer is trivial. The example of Firefox resolves the debate (at least for some parameters). Firefox shows that open source methods can produce software that scores well on security, robustness, and usability.

But might Firefox be a kind of unusual exception – or (as one of the anonymous respondents to Ari Jaaksi’s blog put it) “an outlier”? Alternatively – as I myself believe – is Firefox an example of a new trend, rather than an irrelevant outlier to a more persistent trend?

Regarding usability, it’s undeniable that open source software methods grew up in environments in which developers didn’t put a high priority on ease-of-use by consumers. These developers were generally writing software for techies and other developers. So lots of open source software has indeed scored relatively poorly, historically, on usability.

But history needn’t determine the future. I’m impressed by the analysis in the fine paper “Usability and Open Source Software” by David M. Nichols and Michael B. Twidale. Here’s the abstract:

Open source communities have successfully developed many pieces of software although most computer users only use proprietary applications. The usability of open source software is often regarded as one reason for this limited distribution. In this paper we review the existing evidence of the usability of open source software and discuss how the characteristics of open-source development influence usability. We describe how existing human-computer interaction techniques can be used to leverage distributed networked communities, of developers and users, to address issues of usability.

Another very interesting paper, in a similar vein, is “Why Free Software has poor usability, and how to improve it” by Matthew Paul Thomas. This paper lists no fewer than 15 features of open source culture which tend to adversely impact the usability of software created by that culture:

  1. Weak incentives for usability
  2. Few good designers
  3. Design suggestions often aren’t invited or welcomed
  4. Usability is hard to measure
  5. Coding before design
  6. Too many cooks
  7. Chasing tail-lights
  8. Scratching their own itch
  9. Leaving little things broken
  10. Placating people with options
  11. Fifteen pixels of fame
  12. Design is high-bandwidth, the Net is low-bandwidth
  13. Release early, release often, get stuck
  14. Mediocrity through modularity
  15. Gated development communities.

As Paul says, “That’s a long list of problems, but I think they’re all solvable”. I agree. The solutions Paul gives in his article are good starting points (and are already being adopted in some projects). In any case, many of the same problems impact closed-source development too.

In short, once usability issues are sufficiently understood by a group of developers (whether they are adopting open source or closed source methods), there’s no inherent reason why the software they create has to embody poor usability.

So much for usability. How about security? Here the situation may be a little more complex. The online book chapter “Is Open Source Good for Security?” by David Wheeler is one good starting point. Here’s the final sentence in that chapter:

…the effect on security of open source software is still a major debate in the security community, though a large number of prominent experts believe that it has great potential to be more secure

The complication is that, if you start out with software that is closed source, and then make it open source, you might get the worst of both worlds. Incidentally, that’s one reason why the source code in the Symbian Platform isn’t being open-sourced in its entirety, overnight, on the formation (subject to regulatory approval) of the Symbian Foundation. It will take some time (and the exercise of a lot of deep skill), before we can be sure we’re going to get the best of both worlds, rather than the worst of both worlds.

8 July 2008

Taming the security risks of going open source

Filed under: descriptors, Open Source, security — David Wood @ 5:05 pm

The Wireless Informatics Forum asks (here and here),

Will an open source model expose Symbian’s security flaws?

I wonder what security implications are being presented to Symbian? In the computing world there’s plenty of debate about the impact of opening up previously proprietary code. The primary concern being that an open source model exposes code not only to benevolent practitioners but also to malevolent attackers…

With much of the mobile industry steering towards m-commerce initiatives, potential security risks must be considered…

How much of the legacy Symbian code will be scrapped and built from scratch according to open source best practice?

First, I agree with the cardinal importance of security, and share the interest in providing rock solid enablers for m-commerce initiatives.

But I’m reasonably optimistic that the Symbian codebase is broadly in a good state, and won’t need significant re-writes. That’s for three reasons:

  1. Security is something that gets emphasised all the time to Symbian OS developers. The whole descriptor system for handling text buffers was motivated, in part, by a desire to avoid buffer overrun errors – see my May 2006 article “The keystone of security”.
  2. Also, every now and then, Symbian engineers have carried out intense projects to review the codebase, searching high and low for lurking defects.
  3. Finally, Symbian OS code has been available for people from many companies to look at for many years – these are people with CustKit or DevKit licenses. So we’ve already had at least some of the benefits of an open source mode of operation.

On the other hand, there’s going to be an awful lot of code in the overall Symbian Foundation Platform – maybe 30+ million LOC. And that code comes from many different sources, and was written under different cultures and with different processes. For that reason, we’ve said it could be up to two years before the entire codebase is released as Open Source. (As my colleague John Forsyth explains, in the section entitled “Why not open source on day 1?”, there are other reasons for wanting to take time over this whole process.) Of course we’d like to go faster, but we don’t at this stage want to over-promise.

So to answer the question, I expect the lion’s share of the Symbian codebase to stay in place during the migration, no doubt with some tweaks made here and there. Time will tell how much of the peripheral pieces of code need to be re-written.
