dw2

30 June 2015

Securing software updates

Software frequently goes wrong. That’s a fact of life whose importance is growing – becoming, so to speak, a larger fact of life. That’s for three reasons:

  1. Complex software is spreading more widely into items where, previously, it was present (if at all) only in simpler form. This includes clothing (“wearable computing”), healthcare accessories, “connected home” consumer goods, automobiles (“connected vehicles”), and numerous “Internet of Things” sensors and actuators. More software means a greater likelihood of software error – and a greater likelihood of being hacked (compromised).
  2. Software in these items is increasingly networked together, so that defects in one piece of software can have effects that ricochet unexpectedly. For example, a hacked thermostat can end up reporting industrial secrets to eavesdroppers on the other side of the planet.
  3. By design, modern-day software is frequently open – meaning that its functionality can be configured and extended by other pieces of software that plug into it. Openness provides the possibility for positive innovation, in the way that apps enhance smartphones, or new themes enhance a webpage design. But that same openness enables negative innovation, in which plug-ins subvert the core product. This type of problem arises due to flaws in the set of permissions that expose software functionality from one module to another.

All three of these factors – the intrinsic defects in software, defects in its network connectivity, and defects in permission systems – can be exploited by writers of malware. Worryingly, there’s a mushrooming cybercrime industry that creates, modifies, and deploys increasingly sophisticated malware. There can be rich pickings in this industry. The denizens of Cybercrime Inc. can turn the principles of software and automation to their advantage, resulting in mass-scale deployment of their latest schemes for deception, intrusion, subterfuge, and extortion.

I recently raised these issues in my article “Eating the world: the growing importance of software security”. In that article, I predicted an imminent sea-change in the attitude which users tend to display towards the possibility of software security vulnerabilities. The attitude will change from complacency into purposeful alarm. Companies which are slow to respond to this change in attitude will find their products discarded by users – regardless of how many “cool” features they contain. Security is going to trump functionality, in a way it hasn’t done previously.

One company that has long been aware of this trend is Redbend (which was acquired by HARMAN in summer 2015). They’ve been thinking hard for more than a dozen years about the dynamics of OTA (over the air, i.e. wireless) software updates. Software updates are as much of a fact of life as software bugs – in fact, more so. Updates deliver fixes to bugs in previous versions; they also roll out new functionality. A good architecture for efficient, robust, secure software updates is, therefore, a key differentiator:

  • The efficiency of an update means that it happens quickly, with minimal data costs, and minimal time inconvenience to users
  • The robustness of an update means that, even if the update were to be interrupted partway through, the device will remain in a usable state
  • The security of an update means that it will reliably deliver software that is valid and authentic, rather than some “Trojan horse” malware masquerading as bona fide (a minimal sketch of these checks follows this list).
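
To make these properties more concrete, here is a minimal sketch of a staged, verified update step, written in Python purely for illustration. The two-file layout, the function names, and the bare digest check are my assumptions for the example, not Redbend’s implementation:

    import hashlib
    import os

    def digest_matches(path: str, expected_sha256: str) -> bool:
        """Check the staged update against the digest published by the vendor."""
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                h.update(chunk)
        return h.hexdigest() == expected_sha256

    def apply_update(staged_path: str, active_path: str, expected_sha256: str) -> None:
        # Security: refuse anything that fails the integrity check.
        if not digest_matches(staged_path, expected_sha256):
            raise ValueError("update rejected: digest mismatch")
        # Robustness: os.replace is atomic on a single filesystem, so an
        # interruption leaves either the old image or the new one in place,
        # never a half-written file.
        os.replace(staged_path, active_path)

A real OTA system would also verify a public-key signature over the package (so that the digest itself can be trusted) and would write to a fallback slot, but the same two properties, authenticity and atomicity, do the essential work.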

According to my email archives, my first meeting with representatives of Redbend was as long ago as December 2002. At that time, I was Executive VP at Symbian with responsibility for Partnering. Since Redbend was one of the new “Platinum Partners” of Symbian, I took the time to learn more about their capabilities.

One person I met in these initial meetings was Gil Cordova, at that time Director of Strategic Marketing at Redbend. Gil wrote to me afterwards, confirming our common view as to what lay ahead in the future:

Redbend deals with an enabling technology and solution for OTA updating of mobile devices.

Our solution enables device manufacturers and operators to update any part of the device software including OS, middleware systems and applications.

The solution is based on our patented technology for creating delta-updates which minimize the update package size ensuring it can be cost-effectively sent and stored on the device with little bandwidth and memory consumption. In addition we enable the update to occur within the device memory constraints ensuring no cost-prohibitive memory needs to be added…

OTA updates can help answer the needs of remote software repair and fixing to the device software, as well as streamline logistics when deploying devices…
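
The delta-update technique Gil describes can be illustrated with a toy. The sketch below is emphatically not Redbend’s patented algorithm; it is a naive byte-level diff, assuming equal-length images for simplicity, that shows why shipping only the changed ranges can be far cheaper than shipping the whole image:

    def make_delta(old: bytes, new: bytes) -> list:
        """Record (offset, replacement) pairs for the byte ranges that differ."""
        assert len(old) == len(new), "toy example: equal-length images only"
        delta, i = [], 0
        while i < len(old):
            if old[i] != new[i]:
                start = i
                while i < len(old) and old[i] != new[i]:
                    i += 1
                delta.append((start, new[start:i]))
            else:
                i += 1
        return delta

    def apply_delta(old: bytes, delta: list) -> bytes:
        """Reconstruct the new image on the device from the old image plus the delta."""
        image = bytearray(old)
        for offset, replacement in delta:
            image[offset:offset + len(replacement)] = replacement
        return bytes(image)

    old = b"firmware v1.0 build 001"
    new = b"firmware v1.1 build 002"
    delta = make_delta(old, new)           # just [(12, b"1"), (22, b"2")]
    assert apply_delta(old, delta) == new  # two bytes sent instead of twenty-three

Production delta engines additionally handle insertions, deletions, and compressed firmware layouts; that is where the hard engineering, and the patents, live.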

At that time, some dozen years ago, the idea that mobile phones would have more and more software in them was still relatively new – and was far from being widely accepted as a good thing. But Redbend and Symbian foresaw the consequences, as in the final paragraph of Gil’s email to me:

All the above points to the fact that if software is a new paradigm in the industry then OTA updating is a very crucial and strategic issue that must be taken into account.

OTA has, indeed, been an important issue since that time. But it’s my view that the full significance is only now becoming apparent. As security is poised to “eat the world”, efficient and reliable OTA capabilities will grow yet further in importance. It will be something that more and more companies will need to include at the heart of their own product offerings. The world will insist on it.

A few days ago, I took a closer look at recent news from HARMAN connected services – in particular at its architecture for cybersecurity. I saw a great deal that I liked:

Secure Car

  • Domain isolation – to provide a strict separation between different subsystems (e.g. parts of the overall software system on a car), with the subsystems potentially running different operating systems
  • Type-1 hypervisor – to isolate different subsystems from hardware resources, except where such access is explicitly designed in
  • Driver virtualization – to allow additional peripherals (such as Wi-Fi, cameras, Bluetooth, and GPS) to be added quickly into an existing device with the same secure architecture
  • Software update systems – to enable separate remote software management for the head (dashboard) unit, telematics (black-box) unit, and numerous ECUs (electronic control units) – with a 100% success record in deploying updates on more than one million vehicles
  • State of the art FIPS (Federal Information Processing Standard) encryption – applied to the entirety of the update process (see the sketch after this list)
  • Intrusion Detection and Prevention systems – to identify and report any malicious or erroneous network activity, and to handle the risks arising before the car or any of its components suffers any ill-effect.
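
As a sketch of what signature checking in such an update pipeline can look like, the snippet below verifies a package against a vendor signature using RSA with SHA-256, both of which appear on the FIPS-approved algorithm lists. The pyca/cryptography library and all of the names here are my illustrative assumptions, not HARMAN’s actual stack:

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import padding

    def update_is_authentic(package: bytes, signature: bytes, vendor_pubkey_pem: bytes) -> bool:
        """Accept the update only if the vendor's signature over it verifies."""
        public_key = serialization.load_pem_public_key(vendor_pubkey_pem)
        try:
            public_key.verify(signature, package, padding.PKCS1v15(), hashes.SHA256())
            return True
        except InvalidSignature:
            return False

A “Trojan horse” package, however convincingly it masquerades, fails this check unless the attacker also holds the vendor’s private signing key.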

I know from my own background in designing software systems that this kind of all-points-considered security cannot be tacked onto an existing system. Provision for it needs to be designed in from the beginning. That’s where Redbend’s long heritage in this space shows its value.

The full benefit of taking an architectural approach to secure software updates – as opposed to trying to fashion security on top of fundamentally insecure components – is that the same architecture is capable of re-use in different domains. It’s therefore no surprise that Redbend software management solutions are available, not only for connected cars, but also for wearable computers, connected homes, and machine-to-machine (M2M) devices.

Of course, despite all these precautions, I expect the security arms race to continue. Software will continue to have bugs, and the cybercrime industry will continue to find ingenious ways to exploit these bugs. The weakest part of any security system, indeed, is frequently the humans involved, who can fall victim to social engineering. In turn, providers of security software are seeking to improve the usability of their systems, to reduce both the likelihood and the impact of human operator error.

This race probably has many laps to run, with new surprises ahead on each lap. To keep ahead, we need allies and partners who constantly look ahead, straining to discern the forthcoming new battlegrounds, and to prepare new defences in sufficient time. But we also need to avail ourselves of the best present tools, so that our businesses have the best chance of avoiding being eaten in the meantime. Figuring out which security tools really are best in class is fast becoming a vital core competency for people in ever-growing numbers of industries.

Footnote: I was inspired to write this post after discussions with some industry colleagues involved in HARMAN’s Engineering a Connected Life program. The views and opinions expressed in this post are my own and don’t necessarily represent HARMAN’s positions, strategies or opinions.

18 May 2013

Breakthroughs with M2M: moving beyond the false starts

Filed under: collaboration, Connectivity, Internet of Things, leadership, M2M, standards — David Wood @ 10:06 am

Forecasts of machine-to-machine wireless connectivity envision 50 billion, or even one trillion, wirelessly connected devices, at various times over the next 5-10 years. However, these forecasts date back several years, and there’s a perception in some quarters that all is not well in the M2M world.

These were the words that I used to set the scene for a round-table panel discussion at the beginning of this month, at the Harvey Nash offices in high-rise Heron Tower in the City of London. Participants included senior managers from Accenture Mobility, Atholl Consulting, Beecham Research, Eseye, Interskan, Machina Research, Neul, Oracle, Samsung, Telefonica Digital, U-Blox, Vodafone, and Wyless – all attending in a personal capacity. I had the privilege to chair the discussion.

My goal for the discussion was that participants would leave the meeting with clearer ideas and insights about:

  • Obstacles hindering wider adoption of M2M connectivity
  • Potential solutions to these obstacles.

The gathering was organised by Ian Gale, Senior Telecoms Consultant of Harvey Nash. The idea for the event arose in part from reflections from a previous industry round-table that I had also chaired, organised by Cambridge Wireless and Accenture. My online notes on that meeting – about the possible future of the Mobile World Congress (MWC) – included the following thoughts about M2M:

MWC showed a lot of promise for machine-to-machine (M2M) communications and for connected devices (devices that contain communications functionality but which are not phones). But more remains to be done, for this promise to reach its potential.

The GSMA Connected City gathered together a large number of individual demos, but the demos were mainly separated from each other, without there being a clear overall architecture incorporating them all.

Connected car was perhaps the field showing the greatest progress, but even there, practical questions remain – for example, should the car rely on its own connectivity, or instead rely on connectivity of smartphones brought into the car?

For MWC to retain its relevance, it needs to bring M2M and connected devices further to the forefront…

The opening statements from around the table at Harvey Nash expressed similar views about M2M not yet living up to its expected potential. Several of the participants had written reports and/or proposals about machine-to-machine connectivity as much as 10-12 years earlier. It was now time, one panellist suggested, to “move beyond the false starts”.

Not one, but many opportunities

An emerging theme in the discussion was that it distorts perceptions to talk about a single, unified M2M opportunity. Headline figures for envisioned near-future numbers of “connected devices” add to the confusion, since:

  • Devices can actually connect in many different ways
  • The typical data flow can vary widely, between different industries, and different settings
  • Differences in data flow mean that the applicable standards and regulations also vary widely
  • The appropriate business models vary widely too.

Focusing on specific industry opportunities is more likely to bring tangible results than a general broad-brush approach to the entire potential space of however many billion devices might become wirelessly connected in the next 3-5 years. One panellist remarked:

Let’s not try to boil the ocean.

And as another participant put it:

A desire for big volume numbers is understandable, but isn’t helpful.

Instead, it would be more helpful to identify different metrics for different M2M opportunities. For example, these metrics would in some cases track credible cost-savings, if various M2M solutions were to be put in place.

Compelling use-cases

To progress the discussion, I asked panellists for their suggestions on compelling use-cases for M2M connectivity. Two of the most interesting answers also happened to be potentially problematic answers:

  • There are many opportunities in healthcare, if people’s physiological and medical data can be automatically communicated to monitoring software; savings include freeing up hospital beds, if patients can be reliably monitored in their own homes, as well as proactively detecting early warning signs of impending health issues
  • There are also many opportunities in automotive, with electronic systems inside modern cars generating huge amounts of data about performance, which can be monitored to identify latent problems, and to improve the algorithms that run inside on-board processors.

However, the fields of healthcare and automotive are, understandably, both heavily regulated. As appropriate for life-and-death issues, these industries are risk-averse, so progress is slow. These fields are keener to adopt technology systems that have already been well-proven, rather than carrying out bleeding-edge experimentation on their own. Happily, there are other fields which have a lighter regulatory touch:

  • Several electronics companies have plans to wirelessly connect all their consumer devices – such as cameras, TVs, printers, fridges, and dishwashers – so that users can be alerted when preventive maintenance should be scheduled, or when applicable software upgrades are available; a related example is that a printer could automatically order a new ink cartridge when ink levels are running low
  • Dustbins can be equipped with sensors that notify collection companies when they are full enough to warrant a visit to empty them, avoiding unnecessary travel costs (a toy sketch of this pattern follows the list)
  • Sensors attached to roadway lighting systems can detect approaching vehicles and pedestrians, and can limit the amount of time lights are switched on to the time when there is a person or vehicle in the vicinity
  • Gas pipeline companies can install numerous sensors to monitor flow and any potential leakage
  • Tracking devices can be added to items of equipment to prevent them becoming lost inside busy buildings (such as hospitals).
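
The dustbin example, incidentally, illustrates a pattern that recurs across M2M: report by exception, where a constrained device spends bandwidth and battery only when there is something worth saying. A toy simulation, with the threshold, the sensor, and the uplink all stubbed out as assumptions:

    import random
    import time

    FULL_THRESHOLD = 0.8  # assumed policy: request collection at 80% full

    def read_fill_level() -> float:
        """Stand-in for a real ultrasonic or weight sensor, returning 0.0-1.0."""
        return random.random()

    def notify_collection_company(bin_id: str, level: float) -> None:
        """Stand-in for the real uplink (GPRS, SMS, or a white-space radio)."""
        print(f"bin {bin_id}: {level:.0%} full - please schedule a collection")

    def monitor(bin_id: str, samples: int = 10) -> None:
        for _ in range(samples):
            level = read_fill_level()
            if level >= FULL_THRESHOLD:  # stay silent unless the bin needs a visit
                notify_collection_company(bin_id, level)
            time.sleep(0.1)  # a real device would sleep for hours between readings

    monitor("bin-42")

The same stay-silent-until-needed shape applies to the road-lighting and pipeline examples above.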

Obstacles

It was time to ask the first big question:

What are the obstacles that stand in the way of the realisation of the grander M2M visions?

That question prompted a raft of interesting observations from panellists. Several of the points raised can be illustrated by a comparison with the task of selling smartphones into organisations for use by employees:

  • These devices only add business value if several different parts of the “value chain” are in good working order – not only the device itself, but also the mobile network, the business-specific applications, and connectivity for the mobile devices into the back-end data systems used by business processes in the company
  • All the different parts of the value chain need to be able to make money out of their role in this new transaction
  • To avoid being locked into products from only one supplier, the organisation will wish to see evidence of interoperability with products from different suppliers – in other words, a certain degree of standardisation is needed.

At the same time, there are issues with hardware and network performance:

  • Devices might need to be able to operate with minimal maintenance for several years, and with long-lived batteries
  • Systems need to be immune from tampering or hacking.

Companies and organisations generally need assurance, before making the investments required to adopt M2M technology, that:

  • They have a clear idea of likely ongoing costs – they don’t want to be surprised by the need for additional expenditure, system upgrades, process transformation, repeated re-training of employees, etc
  • They have a clear idea of at least minimal financial benefits arising to them.

Especially in an uncertain financial climate, companies are reluctant to invest money now on the promise of potential savings being realised at some future date. This results in long, slow sales cycles, in which several layers of management need to be convinced that an investment proposal makes sense. For these reasons, panellists listed the following set of obstacles facing M2M adoption:

  • The end-to-end technology story is often too complicated – resulting in what one panellist called “a disconnected value chain”
  • Lack of clarity over business model; price points often seem unattractive
  • Shortage of unambiguous examples of “quick wins” that can drum up more confidence in solutions
  • Lack of agreed standards – made worse by the fact that standardisation processes seem to move so slowly
  • Conflicts of interest among the different kinds of company involved in the extended value chain
  • Apprehension about potential breaches of security or privacy
  • The existing standards are often unsuitable for M2M use cases, having been developed, instead, for voice calls and video connectivity.

Solutions

My next question turned the discussion to a more positive direction:

Based on your understanding of the obstacles, what initiatives would you recommend, over the next 18-24 months, to accelerate the development of one or more M2M solutions?

In light of the earlier observation that M2M brings “not one, but many opportunities”, it’s no surprise that panellists had divergent views on how to proceed and how to prioritise the opportunities. But there were some common thoughts:

  1. We should expect it to take a long time for complete solutions to be established, but we should be able to plan step-by-step improvements
  2. Better “evangelisation” is needed – perhaps a new term to replace “M2M”
  3. There is merit in pooling information and examples that can help people who are writing business cases for adopting M2M solutions in their organisations
  4. There is particular merit in simplifying the M2M value chain and in accelerating the definition and adoption of fit-for-purpose standards
  5. Formal standardisation review processes are obliged to seek to accommodate the conflicting needs of large numbers of different perspectives, but de facto standards can sometimes be established, a lot more quickly, by mechanisms that are more pragmatic and more focused.

To expand on some of these points:

  • One way to see incremental improvements is by finding new business models that work with existing M2M technologies. Another approach is to change the technology, but without disrupting the existing value chains. The more changes that are attempted at the same time, the harder it is to execute everything successfully
  • Rather than expecting large enterprises to lead changes, a lesson can be learned from what has happened with smartphones over the last few years via “consumer-led IT”: new devices appealed to individuals as consumers, and were then taken into the workforce to be inserted into business processes. M2M solutions could likewise progress to the point where enterprises are forced to take them more seriously if consumers adopt them first for non-work purposes
  • One key to consumer and developer experimentation is to make it easier for small groups of people to create their own M2M solutions. For example, an expansion in the reach of Embedded Java could enable wider experimentation. The Arduino open-source electronics prototyping platform can play a role here too, as can the Raspberry Pi (a minimal sketch follows the quotation below)
  • Weightless.org is an emerging standard in which several of the panellists expressed considerable interest. To quote from the Weightless website:

White space spectrum provides the scope to realise tens of billions of connected devices worldwide overcoming the traditional problems associated with current wireless standards – capacity, cost, power consumption and coverage. The forecasted demand for this connectivity simply cannot be accommodated through existing technologies and this is stifling the potential offered by the machine to machine (M2M) market. In order to reach this potential a new standard is required – and that standard is called Weightless.
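
Returning to the experimentation point above: part of what makes boards like the Raspberry Pi attractive for M2M prototyping is how little code a first sensor node requires. Here is a sketch that assumes a Raspberry Pi, the RPi.GPIO library, and a motion sensor wired to BCM pin 17 (all assumptions for illustration):

    import time

    import RPi.GPIO as GPIO  # ships with Raspberry Pi OS; runs only on a Pi

    SENSOR_PIN = 17  # assumed wiring: PIR motion sensor output on BCM pin 17

    GPIO.setmode(GPIO.BCM)
    GPIO.setup(SENSOR_PIN, GPIO.IN)
    try:
        while True:
            if GPIO.input(SENSOR_PIN):
                print("motion detected")  # a real node would publish this upstream
            time.sleep(0.5)
    except KeyboardInterrupt:
        pass
    finally:
        GPIO.cleanup()

From there it is a short step to publishing the readings upstream, and so to a small but complete M2M solution.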

Grounds for optimism

As the discussion continued, panellists took the opportunity to highlight areas where they, individually, saw prospects for more rapid progress with M2M solutions:

  • The financial transactions industry is one in which margins are still high; these margins should mean that there is greater possibility for creative experimentation with the adoption of new M2M business models, in areas such as reliable automated authentication for mobile payments
  • The unsustainability of current transport systems, and pressures for greater adoption of new cars with hybrid or purely electric power systems, both provide opportunities to include M2M technology in so-called “intelligent systems”
  • Rapid progress in the adoption of so-called “smart city” technology by cities such as Singapore might provide showcase examples to spur adoption elsewhere in the world, and in new industry areas
  • Progress by Weightless.org, which addresses particular M2M use cases, might also serve as a catalyst and inspiration for faster progress in other standards processes.

Some take-aways

To wind up the formal part of our discussion, I asked panellists if they could share any new thoughts that had occurred to them in the course of the preceding 120 minutes of round-table discussion. Here’s some of what I heard:

  • It’s like the early days of the Internet, in which no-one had a really good idea of what would happen next, but where there are clearly plenty of big opportunities ahead
  • There is no “one correct answer”
  • Systems like Arduino will allow young developers to flex their muscles and, no doubt, make lots of mistakes; but a combination of youthful vigour and industry experience (such as represented by the many “grey hairs” around the table) provide good reason for hope
  • We need a better message to evangelise with; “50 billion connected devices” isn’t sufficient
  • Progress will result from people carefully assessing the opportunities and then being bold
  • Progress in this space will involve some “David” entities taking the courage to square up to some of the “Goliaths” who currently have vested interests in the existing technology systems
  • Speeding up time-to-market will require companies to take charge of the entire value chain
  • Enabling consumerisation is key
  • We have a powerful obligation to make the whole solution stack simpler; that was already clear before today, but the discussion has amply reinforced this conclusion.

Next steps

A number of forthcoming open industry events are continuing the public discussion of M2M opportunities.

M2M World

With thanks to…

I’d like to close by expressing my thanks to the hosts of the event, Harvey Nash, and to the panellists who took the time to attend the meeting and freely share their views:

18 March 2013

The future of the Mobile World Congress

Filed under: Accenture, Cambridge, Connectivity, innovation, Internet of Things, M2M, MWC — David Wood @ 3:37 am

How should the Mobile World Congress evolve? What does the future hold for this event?

MWC (the Mobile World Congress) currently has good claims to be the world’s leading show for the mobile industry. From 25-28 February, 72,000 attendees from over 200 countries made their way around eight huge halls where over 1,700 companies were showcasing their products or services. The Barcelona exhibition halls were heaving and jostling.

Tony Poulos, Market Strategist for TM Forum, caught much of the mood of the event in his review article, “Billions in big business as Barcelona beats blues”. Here’s an excerpt:

In one place for four days each year you can see, meet and hear almost every key player in the GSM mobile world. And there lies its secret. The glitz, the ritzy exhibits, the partially clad promo girls, the gimmicks, the giveaways are all inconsequential when you get down to the business of doing business. No longer do people turn up at events like MWC just to attend the conference sessions, walk the stands or attend the parties, they all come here to network in person and do business.

For suppliers, all their customers and prospects are in one place for one week. No need to send sales teams around the globe to meet with them, they come to you. And not just the managers and directors, there are more telco C-levels in Barcelona for MWC than are left behind in the office. For suppliers and operators alike, if you are not seen at MWC you are either out of business or out of a job.

Forget virtual social networking, this is good old-fashioned, physical networking at its best. Most meetings are arranged ahead of time and stands are changing slowly from gaudy temples pulling in passers-by to sophisticated business environments complete with comfortable meeting rooms, lounges, bars, espresso machines and delicacies including Swiss chocolates, Portuguese egg tarts, French pastries and wines from every corner of the globe…

But at least some of the 72,000 MWC attendees found the experience underwhelming. Kevin Coleman, CEO of Alliantus, offered a damning assessment at the end of the show:

I am wondering if I am the boy who shouts – “but the emperor is wearing no clothes” – or the masked magician about to reveal the secrets of the magic trick.

Here it is. “Most of you at Mobile World Congress have wasted your money.”

Yes, I have just returned from the MWC where I have seen this insanity with my own eyes…

That’s quite a discrepancy in opinion. Billions in business, or Insanity?

Or to rephrase the question in terms suggested by my Accenture colleague Rhian Pamphilon, Fiesta or Siesta?

To explore that question, Accenture sponsored a Cambridge Wireless event on Tuesday last week at the Møller Centre at Churchill College in Cambridge. The idea was to bring together a panel of mobile industry experts who would be prepared to share forthright but informed opinions on the highlights and lowlights of this year’s MWC.

Panellists

The event was entitled “Mobile World Congress: Fiesta or Siesta?!”. The panellists who kindly agreed to take part were:

  • Paul Ceely, Head of Network Strategy at EE
  • Raj Gawera, VP Marketing at Samsung Cambridge Mobile Solutions
  • Dr Tony Milbourn, VP Strategy at u-blox AG
  • Geoff Stead, Senior Director, Mobile Learning at Qualcomm
  • Professor William Webb, CTO at Neul
  • Dr. Richard Windsor, Founder of Radio Free Mobile.

The meeting was structured around three questions:

  1. The announcements at MWC that people judged to be the most significant – the news stories with the greatest implications
  2. The announcements at MWC that people judged to be the most underwhelming – the news stories with the least real content
  3. The announcements people might have expected at MWC but which failed to materialise – speaking volumes by their silence.

In short, what were the candidates for what we termed the Fiesta, the Siesta, and the Niesta of the event? Which trends should be picked out as the most exciting, the most snooze-worthy, and as sleeping giants liable to burst forth into new spurts of activity? And along the way, what future could we discern, not just for individual mobile trends, but for the MWC event itself?

I had the pleasure to chair the discussion. All panellists were speaking on their own behalf, rather than necessarily representing the corporate viewpoints of their companies. That helped to encourage a candid exchange of views. The meeting also found time to hear suggestions from the audience – which numbered around 100 members of the extended Cambridge Wireless community. Finally, there was a lively networking period, in which many of the audience good-humouredly button-holed me with additional views.

We were far from reaching any unanimous conclusion. Items that were picked as “Fiesta” by one panellist sometimes featured instead on the “Siesta” list of another. But I list below some key perceptions that commanded reasonable assent on the evening.

Machine to machine, connected devices, and wearable computers

MWC showed a lot of promise for machine-to-machine (M2M) communications and for connected devices (devices that contain communications functionality but which are not phones). But more remains to be done, for this promise to reach its potential.

The GSMA Connected City gathered together a large number of individual demos, but the demos were mainly separated from each other, without there being a clear overall architecture incorporating them all.

Connected car was perhaps the field showing the greatest progress, but even there, practical questions remain – for example, should the car rely on its own connectivity, or instead rely on connectivity of smartphones brought into the car?

For MWC to retain its relevance, it needs to bring M2M and connected devices further to the forefront.

Quite likely, wearable computers will feature more prominently by this time next year – whether via head-mounted displays (such as Google Glass) or via the smart watches allegedly under development at several leading companies.

NFC – Near Field Communications

No one spoke up with any special excitement about NFC. Words used about it were “boring” and “complicated”.

Handset evolution

The trend towards larger screen sizes was evident. This seems to be driven by the industry as much as by users, since larger screens encourage greater amounts of data usage.

On the other hand, flexible screens, which have long been anticipated, and which might prompt significant innovation in device form factors, showed little presence at the show. This is an area to watch closely.

Perhaps the most innovative device on show was the dual-display YotaPhone – with a standard LCD on one side, and an eInk display on the other. As can be seen in this video from Ben Wood of CCS Insight, the eInk display remains active even if the device is switched off or runs out of battery.

Two other devices received special mention:

  • The Nokia Lumia 520, because of its low price point
  • The Lenovo K900, because of what it showed about the capability of Intel’s mobile architecture.

Mobile operating systems

Panellists had dim views on some of the Android devices they saw. Some of these devices showed very little differentiation from each other. Indeed, some “formerly innovative” handset manufacturers seem to have lost their direction altogether.

Views were mixed on the likely impact of Mozilla’s Firefox OS. Is the user experience going to be sufficiently compelling for phones based on this OS to gain significant market traction? It seems too early to tell.

Panellists were more open to the idea that the marketplace could tolerate a considerable number of different mobile operating systems. Gone are the days when CEOs of network operators would call for the industry to agree on just three platforms. The vast numbers of smartphones expected over the next few years (with one billion likely to be sold in 2013) mean there is room for quite a few second-tier platforms behind the market leaders iOS and Android.

Semiconductor suppliers

If the mobile operating system market has two strong leaders, the choice of leading semiconductor supplier is even more limited. One company stands far out from the crowd: Qualcomm. In neither case is the rest of the industry happy with the small number of leading choices available.

For this reason, the recently introduced Tegra 4i processor from Nvidia was seen as potentially highly significant. This incorporates an LTE modem.

Centre of gravity of innovation

In past years, Europe could hold its head high as being at the vanguard of mobile innovation. Recent years have seen more innovation from America, e.g. from Silicon Valley. MWC this year also saw a lot of innovation from the Far East – especially Korea and China. Some audience members suggested they would be more interested in attending an MWC located in the Far East than in Barcelona.

Could the decline in Europe’s position be linked to regulatory framework issues? It had been striking to listen to the pleas during keynotes from CEOs of European network operators, requesting more understanding from governments and regulators. Perhaps some consolidation needs to take place, to address the fragmentation among different network operators. This view was supported by the observation that much of the attempted differentiation between operators – for example, in the vertical industry solutions they offer – fails to achieve any meaningful distinction.

State of maturity of the industry

In one way, the lack of tremendous excitement at MWC this year indicates the status of the mobile industry as being relatively mature. This is in line with the observation that there were “a lot of suits” at the event. Arguably, the industry is ripe for another round of major disruption – similar to that triggered by Apple’s introduction of the iPhone.

Unsurprisingly, given the setting of the Fiesta or Siesta meeting, many in the audience hold the view that “the next big mobile innovation” could well involve companies with strong footholds in Cambridge.


Footnote: Everything will be connected

Some of the same themes from the Fiesta or Siesta discussion will doubtless re-appear in “The 5th Future of Wireless International Conference” being run by Cambridge Wireless at the same venue, the Møller Centre, on 1st and 2nd of July this year. Registration is already open. To quote from the event website:

Everything Will Be Connected (Did you really say 50 billion devices?)

Staggeringly, just 30 years since the launch of digital cellular, over 6 billion people now have a mobile phone. Yet we may be on the threshold of a far bigger global shift in humanity’s use and application of wireless and communications. It’s now possible to connect large numbers of physical objects to the Internet and Cloud and give each of them an online digital representation. What really happens when every ‘thing’ is connected to the Cloud and by implication to everything else; when computers know where everything is and can enhance our perception and understanding of our surroundings? How will we interact with this augmented physical world in the future, and what impact will this have on services, infrastructure and devices? More profoundly, how might this change our society, business and personal lives?

In 2013, The Future of Wireless International Conference explores strategic questions about this “Internet of Things”. How transformational could it be and how do we distinguish reality from hyperbole? What about the societal, business and technical challenges involved in moving to a future world where everyday objects are connected and autonomous? What are the benefits and pitfalls – will this be utopia or dystopia? What is the likely impact on your business and what new opportunities will this create? Is your business strategy correct, are you too early, or do you risk being too late? Will this change your business, your life? – almost certainly. Come to hear informed analysis, gain insight, and establish new business connections at this un-missable event.

The agenda for this conference is already well-developed – with a large number of highlights all the way through. I’ll restrict myself to mentioning just two of them. The opening session is described as an executive briefing “What is the Internet Of Things and Why Should I Care?”, and features a keynote “A Vision of the Connected World” by Prof Christopher M. Bishop, FREng, FRSE, Distinguished Scientist, Microsoft Research. The closing session is a debate on the motion “This house believes that mobile network operators will not be winners in the Internet of Things”, between

25 March 2012

Smartphone technology, super-convergence, and the great inflection of medicine

Filed under: books, Connected Health, converged medicine, healthcare, Internet of Things, medicine — David Wood @ 10:07 pm

“You are positioned to reboot the future of medicine…”

That’s the rallying cry that rings out from Eric Topol’s marvellous recent book “The Creative Destruction of Medicine”.  The word “Destruction” is meant in the sense elaborated by the Austrian economist Joseph Schumpeter.  To quote from Investopedia:

Creative destruction occurs when something new kills something older. A great example of this is personal computers. The industry, led by Microsoft and Intel, destroyed many mainframe computer companies, but in doing so, entrepreneurs created one of the most important inventions of the century.

Topol believes that a similar transformation is underway in medicine.  His book describes at some length what he calls a “super-convergence” of different technological transformations:

  • Genomics, which increasingly indicates connections between individuals’ DNA sequences and their physiological responses to specific drugs and environmental conditions
  • Numerous small sensors – wearable (within clothing) or embeddable (within the body) – that can continuously gather key physiological data, such as blood glucose level, heart rhythm, and blood pressure, and transmit that data wirelessly
  • Improvements to imaging and scanning, that provide clearer information as to what is happening throughout the body (including the brain)
  • Enormous computing power that can manipulate vast amounts of data and spot patterns in it
  • Near ubiquitous smartphones, which can aggregate data from sensors, host all kinds of applications related to health and wellness, and provide early warnings on the need for closer attention (a toy sketch of such an early-warning check follows this list)
  • 3D manufacturing and synthetic biology, that can create compounds of growing use in medical investigation and bodily repair
  • The adoption of electronic medical records, that allow healthcare professionals to be much more aware of medical history of their patients, reducing the number of problems arising from unexpected interactions between different treatments
  • The emergence of next generation social networks binding together patients with shared interest in particular diseases, allowing crowd-sourcing of new insight about medical conditions
  • Enhanced communications facilities, that enable medical professionals to provide advice and even conduct operations from far-distant locations
  • Improved, free medical training facilities, such as the short videos provided by the Khan Academy.
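
As a toy illustration of the early-warning idea in the smartphone bullet above: a phone aggregating heart-rate samples from a wearable sensor could flag readings that drift outside a personal baseline. The threshold and the data here are illustrative assumptions, not medical advice and not an algorithm from Topol’s book:

    from statistics import mean, stdev

    def early_warning(baseline_samples: list, new_reading: float, k: float = 3.0) -> bool:
        """Flag a reading more than k standard deviations from this person's baseline."""
        baseline = mean(baseline_samples)
        spread = stdev(baseline_samples)
        return abs(new_reading - baseline) > k * spread

    resting_hr = [62, 64, 61, 63, 65, 62, 63, 64, 61, 62]  # hypothetical week of readings
    print(early_warning(resting_hr, 66))  # False: within normal variation
    print(early_warning(resting_hr, 95))  # True: worth a closer look

Crude as it is, this per-person baseline is exactly the shift from “population medicine” to “individual medicine” discussed below.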

Topol has an impressive track record as a leading medical practitioner, and gives every sign that he knows what he is talking about.  Importantly, he maintains a critical, skeptical perspective.  He gives plenty of examples of where technology has gone wrong in medicine, as well as where it has done well.  His account of the application of accelerating technology to medicine is far from utopian.  There are two sorts of problematic factors: technology factors (including the complexity of the underlying science), and non-technology factors.

First, the technology factors.  The ways that individuals react to different medical treatments vary considerably: a drug that saves one life can have terrible side effects in other patients.  What’s more, diseases that were formerly conceived as single entities now appear to be multiple in nature.  However, the move from “population medicine” to “individual medicine”, enabled by advances in genomics and by powerful data analysis, offers a great deal of hope.  For one example of note, see the Wall Street Journal article, “Major Shift in War on Cancer: Drug Studies Focus on Genes of Individual Patients”.  The core principle is that of ever improving digital analysis of data describing individual people – something that Topol calls “digital high definition of humans” leading to “hyperpersonalisation of healthcare… fulfilling the dream of true prevention of diseases”.

But the non-technology factors are just as significant.  These concern not the complexity of the underlying science, but the structure of the medical industry.  Topol has harsh words here, describing the medical establishment as “ultra-conservative”, “ossified”, and “sclerotic” – existing in a “cocoon” which has tended to isolate it from the advances in information technology that have transformed so many other industries.  Topol calls for “an end of the medical priesthood… the end of an era of ‘doctor knows best’”.  Associations of medical professionals who seek to block patients from seeing their own medical data (e.g. a detailed analysis of their personal DNA) are akin, Topol says, to the medieval priests who fought against the introduction of printing and who tried to prevent church congregations from reading the Bible in their own hands.

Given such criticisms, it’s perhaps surprising to read the wide range of positive endorsements at the start of the book, from eminent leaders of the medical industry.  These include:

  • The global president of R&D for Sanofi
  • The professor of genetics from Harvard Medical School
  • The chairman and CEO of Medtronic
  • The professor and vice-chair of surgery from NY Presbyterian/Columbia University
  • The chief medical officer from Philips Healthcare
  • The executive vice president and chief of medical affairs from United Health Group
  • The president of the Salk Institute for Biological Studies

and many others.  And for a growing list of reviews of the book, including from many people deeply embedded in the medical industry, see this compendium on the 33 Charts blog.  What’s happening here is that Topol is drawing attention to structural issues inside the medical profession, which many other people recognise too.  This includes risk aversion, long training cycles that place little emphasis on information technology, funding models that emphasise treatment rather than prevention, tests that are unnecessary and dangerous, and lengthy regulatory processes.

If the problem is structural, within the medical industry, then the fix lies in the hands of patients.  As per the quote I started with,

“You are positioned to reboot the future of medicine…”

Here’s the longer version of that quote:

With the personal montage of your DNA, your cell phone, your social network – aggregated with your lifelong health information and physiological and anatomic data – you are positioned to reboot the future of medicine.

Topol advocates that patients take advantage of the tremendous computational power that is put into their hands by smartphones, running healthcare applications, connected to wireless sensors, and plumbed into increasingly knowledgeable social networks that have a focus on medical matters – sites such as PatientsLikeMe, CureTogether, and many others.

There’s an important precedent.  This is the way business professionals are taking their own favourite smartphones and/or tablet computers into their workplaces, and are demanding that they can access enterprise systems with these devices.  This trend – “bring your own device” (“BYOD”) – is itself a subset of something known as “the consumerisation of enterprise technology”.  People buy particular smartphones and tablets on account of their compelling ease of use, stunning graphics, accessible multimedia, and rich suite of value-add applications covering all sorts of functionality.  They enjoy using these devices – and expect to be able to use them for work purposes too, instead of what they perceive as clunky and sluggish devices provided via official business channels.  IT departments in businesses all around the globe are having to scramble to respond.  Once upon a time, they would have laid down the law, “the only devices allowed to be used for business are ones we approve and we provide”.  But since the people bringing in their own personal devices are often among the most senior officials in the company, this response is no longer acceptable.

Just as people are bringing their favourite smartphones from their home life into their business life, they should increasingly be willing to bring them into the context of their medical treatment – especially when these devices can be coupled to data sensors, wellness applications, and healthcare social networks.  Just as we use our mobile devices to check our email, or the sports news, we’ll be using these devices to check our latest physiological data and health status.  This behaviour, in turn, will be driven by increasing awareness of what’s available.  And Topol is on a mission to increase that awareness.  Hence his frequent speaking engagements, including his keynote session at the December 2011 mHealth Summit in Washington DC, when I first became aware of him.  (You can find a video of this presentation here.)  And hence his authorship of this book, to boost public understanding of the impending inflection point in medicine.  The more we all understand what’s available and what’s possible, the more we’ll all get involved in this seismic patient-led transformation.

Footnote: Topol’s book is generally easy to read, but contains quite a lot of medical detail in places.  Another book which covers similar ground, in a way that may be more accessible to people whose background is in mobile technology rather than medicine, is “The Decision Tree: How to make better choices and take control of your health”, by executive editor of Wired magazine, Thomas Goetz.  Both Topol and Goetz write well, but Goetz has a particular fluency, and tells lots of fascinating stories.  To give you a flavour of the style, you can read chapter one free online.  Both books emphasise the importance of allowing patients access to their own healthcare data, the emergence of smart online networks that generate new insight about medical issues, and the tremendous potential for smartphone technology to transform healthcare.  I say “Amen” to all that.

9 October 2010

On smartphones, superphones, and subphones

What comes next after smartphones?

There’s big league money in smartphones.  In 2009, around 173 million smartphones were sold worldwide.  IDC predicts this figure will jump to nearly 270 million in 2010.  According to Informa, that represents about 27% of the total mobile phone unit sales in 2010.  But as Informa also point out, it represents around 55% of total market value (because of their high average selling price), and a whopping 64% of the mobile phone market’s profits.

As well as big money from sales of smartphones themselves, there’s big money in sales of applications for smartphones.  A recent report from Research2Guidance evaluates the global smartphone application market as being worth $2.2 (£1.4) billion during the first half of 2010, already surpassing the total value of $1.7 (£1.1) billion for all 12 months of 2009.

  • What’s next? If there’s so much money in the rapidly evolving smartphone market, where will the underlying wave of associated technological and commercial innovation strike next?  Answer that question correctly, and you might have a chance to benefit big time.

Three answers deserve attention.

1. More smartphones

The first answer is that the smartphone market is poised to become larger and larger.  The current spurt of growth is going to continue.  More and more people are going to be using smartphones, and more and more people will be downloading and using more and more applications.  This growth will be driven by:

  • Decreasing costs of smartphone devices
  • Improved network connectivity
  • An ever-wider range of different applications, tailored to individual needs of individual mobile consumers
  • Improved quality of applications, networks, and devices – driven by fierce competition
  • Burgeoning word-of-mouth recommendations, as people tell each other about compelling mobile services that they come across.

Perhaps one day soon, more than 50% of all mobile phones will be built using smartphone technology.

2. Superphones

The second answer is that smartphones are going to become smarter and more capable.  The improvements will be so striking that the phrase “smartphone” won’t do them justice.  Google used a new term, “superphone”, when it introduced the Nexus One device:

Nexus One is an exemplar of what’s possible on mobile devices through Android — when cool apps meet a fast, bright and connected computer that fits in your pocket. The Nexus One belongs in the emerging class of devices which we call “superphones”. It’s the first in what we expect to be a series of products which we will bring to market with our operator and hardware partners and sell through our online store.

Blogger Stasys Bielinis of UnwiredView takes up the analysis in his recent thought-provoking article, “Nokia’s doing OK in smartphones. It’s superphones where Apple and Google Android are winning”:

Smartphones and superphones share some common characteristics – always on connectivity, ability to make phone calls and send SMS/MMS, access the internet and install third party software apps.  But the ways these devices are used are very different – as different as iPads/tablets are different from laptops/netbooks.

The main function of a smartphone – is a mobile phone.  You use it primarily to do voice calls and send/receive short text messages via SMS/MMS.  Yes, your smartphone can do a lot more things – take pictures, browse the Web, play music, stream audio/video from the net, make use of various third-party apps.  But you use those additional functions only when you really need it, or there’s no better option than a device in your pocket, or when there’s some particularly interesting mobile service/app that requires your attention – e.g. Facebook, Twitter, Foursquare, or other status updaters.   But they are secondary functions for your smartphone. And, due to the design limitations – small displays, crammed keypads/keyboards, button navigation, etc – using those additional “smart” capabilities is a chore…

Superphones, on the other hand, are not phones anymore. They are truly small mobile computers in your pocket, with phone/texting as just another app among many. The user experience – big displays, (multi) touch, high quality browsers, etc – is optimized to transfer big screen PC interaction models to the limitations of mobile device that can fit in your pocket. While the overall experience doing various things on your superphone is a bit worse than doing those same things on your laptop, it’s not much worse, and is actually good enough for the extensive use on the go…

There’s scope to quibble with the details of this distinction.  But there’s merit in the claim that the newer smartphones – whatever we call them – typically manifest a lot more of the capabilities of the computing technology that’s embedded into them.  The result is:

  • More powerful applications
  • Delivering more useful functionality.

3. Subphones

The first answer, above, is that smartphones are going to become significantly more numerous.  The second answer is that smartphones are going to become significantly more powerful.  I believe both these answers.  These answers are both easy to understand.  But there’s a third answer, which is just as true as the first two – and perhaps even more significant.

Smartphone technology is going to become more and more widely used inside numerous types of devices that don’t look like smartphones.

These devices aren’t just larger than smartphones (like superphones).  They are different from smartphones, in all kinds of ways.

If the motto “smartphones for all” drove a great deal of the development of the mobile industry during the decade 2000-2010, a new motto will become increasingly important in the coming decade: “Smartphone technology everywhere”.  This describes a new wave of embedded software:

  • Traditional embedded software is when computing technology is used inside devices that do not look like computers;
  • The new wave of embedded software is when smartphone technology is used inside devices that do not look like smartphones.

For want of a better term, we can call these devices “subphones”: the underlying phone functionality is submerged (or embedded).

Smartphone technology everywhere

The phrase “smartphone technology” is shorthand for technology (both hardware and software) whose improvement was driven by the booming commercial opportunities of smartphones.  Market pressures led to decreased prices, improved quality, and new functionality.  Here are some examples:

  • Wireless communications chips – and the associated software
  • Software that can roam transparently over different kinds of wireless network
  • Large-scale data storage and information management – both on a device, and on the cloud
  • Appealing UIs on small, attractive, hi-res graphics displays
  • Streaming mobile multimedia
  • Device personalisation and customisation
  • Downloadable and installable applications, that add real value to the base device
  • Access to the Internet while mobile, in ways that make sense on small devices
  • High performance on comparatively low-powered hardware with long battery life
  • Numerous sensors, including location, direction, motion, and vision.

The resulting improvements allow these individual components to be re-purposed for different “subphone” devices, such as:

  • Tablets and slates
  • Connected consumer electronics (such as cameras and personal navigation devices)
  • Smart clothing – sometimes called “wearable computers” – or a “personal area network”
  • Smart cars – including advanced in-vehicle infotainment
  • Smart robots – with benefits in both industrial automation and for toys
  • Smart meters and smart homes
  • Smart digital signs, that alter their display depending on who is looking at them
  • Mobile medical equipment – including ever smaller, ever smarter “micro-bots”.

By some estimates, the number of such subphones will reach into the hundreds of billions (and even beyond) within just a few short years.  As IBM have forecast,

Soon there will be 1 trillion connected devices in the world. A smarter planet will require a smarter communications infrastructure. When things communicate, systems connect. And when systems connect, the world gets smarter.

This will be an era where M2M (machine to machine) wireless communications far exceed communications directly involving humans.  We’ll be living, not just in a sea of smart devices, but inside an “Internet of Things”.

Barriers to benefits

Smartphone technologies bring many opportunities – but these opportunities are, themselves, embedded in a network of risks and issues.  Many great mobile phone companies failed to survive the transition to smartphones.  In turn, some great smartphone companies are struggling to survive the transition to superphones.  It’s the same with subphones – they’re harder than they look.  They’re going to need new mindsets to fully capitalise on them.

To make successful products via disruptive new combinations of technology typically requires more than raw technological expertise.  A broad range of other expertise is needed too:

  • Business model innovation – to attract new companies to play new roles (often as “complementors”) in a novel setup
  • Ecosystem management – to motivate disparate developers to work together constructively
  • System integration and optimisation – so that the component technologies join together into a stable, robust, useable whole
  • User experience design – to attract and retain users to new usage patterns
  • Product differentiation – to devise and deploy product variants into nearby niches
  • Agility – to respond rapidly to user feedback and marketplace learnings.

The advance of software renders some problems simpler than before.  Next generation tools automate a great deal of what was previously complex and daunting.  However, as software is joined together in novel ways with technologies from different fields, unexpected new problems spring up, often at new boundaries.  For example, the different kinds of subphones are likely to have unexpected interactions with each other, resulting in rough edges with social and business aspects as much as technological ones.

So whilst there are many fascinating opportunities in the world beyond smartphones, these opportunities deserve to be approached with care.  Choose your partners and supporters wisely, as you contemplate these opportunities!

Footnote 1: For some vivid graphics illustrating the point that companies who excel in one era of mobile technology (eg traditional mobile phones) sometimes fail to retain their profit leadership position in a subsequent era (eg superphones), see this analysis by Asymco.

Footnote 2: On the “superphone” terminology:

It wasn’t Google that invented the term “superphone”.  Nokia’s N95 was the first phone to be widely called a superphone – from around 2006.  See eg here and here.

In my own past life, I toyed from time to time with the phrase “super smart phone” – eg in my keynote address to the 2008 Mobile 2.0 event in San Francisco.

Footnote 3: I look forward to discussing some of these topics (and much more besides) with industry colleagues, both old and new, at a couple of forthcoming conferences which I’ll be attending:

  • SEE10 – the Symbian Expo and Exchange – in Amsterdam, Nov 9-10
  • MeeGo Conference – in Dublin, Nov 13-15.

In each case, I’ll be part of the Accenture Embedded Software Services presence.

21 April 2010

Designing the Internet of Things

Filed under: Internet of Things, mashup* event, Mobile Monday — David Wood @ 2:10 pm

  • Computers; smartphones; … smart things.
  • The Internet; the mobile Internet; … the Internet of Things.

These two epic trends tell aspects of the same grand story.  First, computing power is becoming more widespread, more affordable, more compact, and more miniature.  Second, networked intelligence is becoming more widespread, more affordable, more effective, and more informed.

As a result, we can look forward to a time, in just a few years, where each of us owns more than a dozen different devices that communicate with each other, wirelessly and transparently.  That will take the number of wireless modems in use in the world to upwards of 50 billion.
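
As a quick back-of-envelope check in Python (the per-person figures below are my own illustrative assumptions, not a forecast):

    # If a sizeable fraction of the world's population each ends up owning
    # "more than a dozen" communicating devices, the modem count does indeed
    # pass 50 billion.  The 4-billion-person figure is an assumption chosen
    # purely for illustration.
    people_with_devices = 4e9
    devices_per_person = 13          # "more than a dozen"
    modems = people_with_devices * devices_per_person
    print(f"{modems:.1e} wireless modems")   # ~5.2e+10 -- upwards of 50 billion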

Going further, there are forecasts of no fewer than one trillion wirelessly connected “things”, where, this time, the connection will often involve simpler technology such as RFID.  As reported recently in Wireless Week:

There will be 1 trillion devices connected to the Internet by 2013, said Cisco Chief Technology Officer Padmasree Warrior during her Wednesday keynote address at CTIA.

Warrior argued the boom in connected devices, applications and mobile broadband would change not only the wireless industry but society in general.

“The Internet is no longer just an information superhighway, it’s a platform,” Warrior said, citing the increased adoption of M2M technologies and the exponential growth of apps…

To prove her point, Warrior moved through a series of technology demonstrations with a Cisco colleague that detailed what it would be like to interact with next-generation mobile technology.

The pair showed off augmented reality in a subway system; location-based advertising and mobile coupons; and a mobile telepresence app.

“The next big revolution that will happen is the Internet of things,” Warrior said…

Evidently, the ever lower costs and increased quality of computing and connectivity are opening all kinds of new opportunities.  It’s easy to speculate on possibilities:

  • Distributed arrays of sensors that can more reliably – and more quickly – highlight the changing concentrations of volcanic ash;
  • Luggage tags that know (and can report) where your luggage is;
  • Air conditioning units and heating units that can coordinate to act in concert, rather than independently;
  • A handheld toothbrush that can let you know if you’re not putting enough effort into cleaning the inner sides of your lower right molars;
  • Smart sticking plasters that detect microscopic changes in skin condition or blood flow;
  • A monitor that can detect if you are too distracted (or too dozy) to drive safely.  (Even better, put the driving intelligence into the car itself, rather than rely on human drivers.)
  • Surveillance cameras that can analyse what they are filming, being alert for security abnormalities;
  • Audio recording devices that can understand what they are hearing;
  • Smart glasses that can interpret what you’re looking at;
  • Smart digital signs that change their display depending on who’s looking at them;
  • And all of these devices connected together…

Does this sound good to you? We can debate some of the points, but overall, it’s clear this grand technology trend has great power to improve health, education, transport, the environment, and human experience generally.

So why isn’t it happening faster? Why is it that, as an industry colleague said to me recently, this whole field is in a state of chaos?

It’s chaotic, because we don’t know what’s happening next, nor how fast it’s happening.  The use cases developers identify as important at the start of a project often turn out to be less significant than ones that turn up, unexpected, part way through the project.  (That’s not necessarily a problem.  It is, of course, a large opportunity.)

In short, although lots of the underlying technology is mature, the emerging industry of the “Internet of Things” is still far from mature.  The roadmaps of product development remain tentative and sketchy.  Speedy progress will depend on:

  • A few underlying technology issues – such as network interoperability, smart distribution of tasks across multiple processors, power management, power harvesting, and security;
  • Some pressing business model issues – since not all existing players are excited by the prospects of cost-savings which would in turn reduce their profits from existing products;
  • Some ecosystem management issues – to solve “chicken and egg” scenarios where multiple parts of a compound solution all need to be in place, before the full benefits can be realised;
  • Some project development agility issues – to avoid wasted investment in cases where project goals change part way through, due to the uncertain territory being navigated;
  • Some significant design issues – to ensure that the resulting products can be widely accepted and embraced by “normal people” (as opposed to just early-adopter technology enthusiasts).

These are some of the themes that I will seek to explore as one of the speakers at the mashup* event in London on Tuesday 4th May, entitled “Internet of things: Rise of the machines”.  In addition to myself, a number of other speakers have been announced.

Many thanks to the mashup* team for organising this get-together!  I hope to see you there.

Footnote: The ReadWriteWeb publish a useful series of articles about the Internet of Things.  At time of writing, the most recent article in this series is “Internet of Things Can Make Us Human Again”, which highlights the ideas and work of David Orban, Founder of WideTag.  Here’s a brief extract:

Orban’s dream is that thousands of years of human subservience to machines will end because we will teach our machines how to not only take care of themselves, but how to take care of us as well…

These new machine networks will be so redundant and reliable that we will be freed from most of our machine-operating duties. We will get to be human again…

Extending these ideas, David recently spoke on the theme “Free to be human” at Mobile Monday Amsterdam.  His talk is a great introduction to many Internet of Things ideas.

By good fortune, David will be in London this weekend, since he’s one of the speakers at the Humanity+ UK2010 event.  He’ll be addressing the Internet of Things once again, along with some thoughts on the progress of the Singularity University.  In my experience, he’s a fascinating person to talk with.

25 January 2010

Towards 50 billion connected mobile devices?

Filed under: Connectivity, Internet of Things, M2M — David Wood @ 2:33 am

Some time around December 2008, the number of mobile phone connections worldwide reached – according to an estimate by Informa Telecoms & Media – the staggering total of 4 billion.

The growth of mobile phone usage has been meteoric.  Quoting Wireless Intelligence as its source, an article in Gizmag tracks the rise:

  • The first commercial citywide cellular network was launched in Japan by NTT in 1979
  • The milestone of 1 billion mobile phone connections was reached in 2002
  • The 2 billion mobile phone connections milestone was reached in 2005
  • The 3 billion mobile phone connections milestone was reached in 2007
  • The 4 billion mobile phone connections milestone was reached in February 2009.
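
Those milestones imply a remarkably steady growth rate.  Here’s a quick back-of-envelope calculation (mine, using only the milestone dates quoted above):

    # Implied annual growth rate of mobile connections: 1 billion in 2002,
    # 4 billion in early 2009 -- roughly a seven-year span.
    years = 2009 - 2002              # elapsed years between the two milestones
    growth_factor = 4 / 1            # connections quadrupled over that span
    cagr = growth_factor ** (1 / years) - 1
    print(f"Implied compound annual growth rate: {cagr:.1%}")   # ~21.9%

In other words, connections grew at better than 20% per year for most of a decade.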

How much further can this trend continue?

One line of reasoning says that this growth spurt is bound to slow down, since there are only 6.8 billion people alive on the planet.

However, another line of reasoning points out that:

  • People often have more than one mobile phone connection
  • Mobile phone connections can be assigned to items of equipment (to “machines”) rather than directly to humans.

In this second line of reasoning, there’s no particular reason to expect any imminent cap on the number of mobile phone connections.  For example, senior representatives from Ericsson have on several occasions talked of the possibility of 50 billion connected devices by 2020.

In this line of thinking, the following types of machinery would all benefit from having a wireless network connection:

  • Cars;
  • Energy meters (such as electricity meters);
  • Units used in HVAC (Heat, Ventilating, and Air Conditioning);
  • Mobile point-of-sales terminals;
  • Vending machines;
  • Security alarms;
  • Data storage devices (including electronic book readers);
  • Devices used in navigation;
  • Devices used in healthcare.

These devices are, in general, not mobile phones in any traditional sense.  They are not used for voice communication.  However, they have requirements to share data about their state – including allowing remote access to assess how well they are operating.  The phrase “embedded connectivity” is used to describe them.  Another phrase in common use is “M2M” – meaning “machine to machine”.  As interest in tracking energy usage and resource usage grows, so will the requirement for remote access to meters and monitors of all sorts.  An article in Social Machinery reports:

M2M devices will account for a significant share of new mobile network connections on developed markets in the coming years. The main reason is the high penetration of mobile subscriptions and the proliferation of devices. Today, Europeans have more than 14 devices at home waiting to be connected according to Ericsson consumer research.

If by 2020 there are some 3.5 billion people worldwide with as many devices waiting to be connected as today’s average European, we quickly reach a figure of 50 billion devices with embedded connectivity.
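
The arithmetic itself is easy to verify:

    # Ericsson's sum, as quoted above: 3.5 billion people, each with as many
    # devices waiting to be connected as today's average European (14, per
    # the Ericsson consumer research cited in the excerpt).
    people = 3.5e9
    devices_per_person = 14
    print(f"{people * devices_per_person:.2e} devices")   # 4.90e+10 -- roughly 50 billion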

Or do we?

This Tuesday and Wednesday, I’ll be taking part in the Informa “Embedded Connectivity” conference in London.  Informa have assembled a very interesting mix of speakers and panellists, covering all aspects of the emerging embedded connectivity industry.  As well as listening carefully to the presentations and (hopefully) asking some pertinent questions from the floor, I’ll be:

  • Chairing the Day One panel “Building the Business Models for a Connected Future”;
  • Speaking on the Day Two panel “The Future of Connectivity”.

One thing I’ll be keen to do is to understand the context of various predictions about the size of this market.  Indeed, although the figure of 50 billion connected devices is already astonishingly large, it’s by no means the largest figure that has been bandied around:

For example, Amdocs recently spoke in a press release about:

A not-too-distant future, when more than one trillion devices will be connected to the network, an industry phenomenon the company calls “Tera-play.”

(Here, “Tera” is one thousand times “Giga” – in other words, shorthand for a trillion.)

Similarly, IBM regularly reference a prediction by IDC:

By 2011, IDC estimates, there will be one trillion Internet-connected devices, up from 500 million in 2006.

IBM repeat this figure in their advance publicity for next month’s Mobile World Congress in Barcelona:

Soon there will be 1 trillion connected devices in the world. A smarter planet will require a smarter communications infrastructure. When things communicate, systems connect. And when systems connect, the world gets smarter. Together let’s build smarter communications.

How do we make sense of these radically different predictions (1 trillion vs. 50 billion)?

Is it just a matter of time?

More likely, it’s a matter of different kinds of connectivity.  In all, there are probably at least five levels of connectivity.

Level 1 is the rich and varied connectivity of a regular mobile phone, driven by a human user.  This can have attractive levels of ARPU (average revenue per user) for network operators.

Level 2 involves many of the devices that I mentioned earlier.  These devices will contain a cellular modem, over which data transfer takes place.  This connectivity falls under the label “machine to machine” rather than directly involving a human.  It’s generally thought that the ARPU for M2M will be less than the ARPU for smartphones.  For example, a device might transfer only a few kilobytes of data every week or so.  Network operators will be interested in these devices because of their sheer numbers, rather than because of high per-device revenues.

I’ll skip Level 3 for now, and come back to it afterwards.

Level 4 is where we reach the figure of one trillion connected devices.  However, these devices do not contain a cellular modem.  Nor, in most cases, do they initiate complex data transfers.  Instead, they contain an RFID (Radio Frequency IDentification) tag.  These tags are significantly cheaper than cellular modems.  They can be used to identify animals, items of luggage, retail goods, and so on.  Other sensors keep track of whether items with particular RFID tags are passing nearby.  The local data flow between sensor and RFID tag will not involve any cellular network.

Level 5 takes the idea of “everything connected” one stage further, to the so-called semantic internet, in which clumps of data carry (either explicitly or implicitly) accompanying metadata that identifies and describes the content of that data.  This is an important idea, but there’s no implication here of wireless connectivity.  I include this level in the discussion because the oft-used phrase “The Internet of Things” sometimes applies to Level 4 connectivity, and sometimes to Level 5 connectivity.

So where does the idea of 50 billion connected devices fit in?

An ABIresearch report, “Cellular M2M Connectivity Service Providers”, which is available (excerpted) from the website of Jasper Wireless, makes a good point:

The cost of cellular M2M solutions can be an inhibitor for some applications. Mainstream wireless modules range from approximately $25 to $90. These cost points make them difficult to integrate into some end devices, such as utility meters. A key reason for integration of ZigBee and other SRW (Short-Range Wireless) and PLC (Powerline Carrier) technologies into utility meters for AMI (Advanced Metering Infrastructure) applications is that many utilities do not feel a financially sound business case can be made for the integration of a cellular connection into every meter. Rather, a single meter, or concentrator, receives a cellular connection and is, in turn, connected to a group of local meters through less-expensive SRW or PLC connections.

In other words, there may be many devices whose individual wireless connectivity (Level 3 connectivity):

  • Is more complicated than an individual RFID tag (Level 4), but
  • Is simpler (and less expensive) than cellular modems (Level 2).
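
To see why utilities find the concentrator model attractive, here’s a rough cost comparison.  The cellular module price is the low end of the $25–$90 range quoted in the ABIresearch excerpt; the $5 short-range wireless module price is my own assumption, for illustration only:

    # Illustrative cost comparison for connecting 100 utility meters.
    n_meters = 100
    cellular_module = 25.0    # per-meter cellular modem (low end of quoted range)
    srw_module = 5.0          # assumed cost of a ZigBee/SRW module (illustrative)

    # Option A: every meter gets its own cellular modem.
    all_cellular = n_meters * cellular_module

    # Option B: one concentrator carries the cellular modem; all the meters
    # talk to it over inexpensive short-range wireless links.
    concentrator = cellular_module + n_meters * srw_module

    print(f"All-cellular:  ${all_cellular:,.0f}")    # $2,500
    print(f"Concentrator:  ${concentrator:,.0f}")    # $525

Even with generous assumptions for the short-range modules, the concentrator option wins by a wide margin on hardware cost alone.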

As time passes, the falling cost of wireless modules will make it increasingly likely that solution designers deploy them more widely.  At the same time, however, the simpler hardware options mentioned above will also decrease in cost.

It’s for these reasons that I’m inclined to think that the number of cellular modems in 2020 will be less than the above ballpark figure of 50 billion.  But I’m ready to change my mind!
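
To pull the five levels together in one place, here’s a small summary sketch in Python.  The one-line characterisations are my own paraphrase of the discussion above, not an industry-standard classification:

    from dataclasses import dataclass

    @dataclass
    class ConnectivityLevel:
        level: int
        description: str
        link: str    # how the device reaches the wider network

    LEVELS = [
        ConnectivityLevel(1, "human-driven mobile phone",
                          "cellular modem (voice plus rich data)"),
        ConnectivityLevel(2, "M2M device (meter, vending machine, alarm, ...)",
                          "cellular modem (small, infrequent data transfers)"),
        ConnectivityLevel(3, "device behind a local concentrator",
                          "short-range wireless or powerline link to a cellular gateway"),
        ConnectivityLevel(4, "RFID-tagged item (luggage, retail goods, ...)",
                          "passive RFID tag read by a nearby sensor"),
        ConnectivityLevel(5, "semantically described data",
                          "metadata travels with the data; no wireless link implied"),
    ]

    for lvl in LEVELS:
        print(f"Level {lvl.level}: {lvl.description} -- {lvl.link}")

(Level 5 is included for completeness, even though – as noted above – it carries no implication of wireless connectivity.)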

Footnote: A useful additional prediction data point has just been issued by Juniper Research:

The number of Mobile Connected M2M and Embedded Devices will rise to almost 412 million globally by 2014 with several distinct markets accounting for the increase in their number.

The markets include: Utility metering, Mobile Connected Buildings, Consumer & Commercial Telematics and Retail & Banking Connections. These areas will all show substantial growth in both device numbers and in the service revenues they represent, while Healthcare monitoring applications will begin to reach the commercial rollout stage from 2012.

“The most widespread category will be connections related to smart metering, driven partly by government initiatives to reduce carbon emissions,” says Anthony Cox, Senior Analyst at Juniper Research. Other areas, such as the healthcare sector, will ultimately see more potential in achieving service revenues, he says.
