dw2

1 May 2010

Costs of complexity: in healthcare, and in the mobile industry

Filed under: books, business model, disruption, healthcare, innovation, modularity, simplicity — David Wood @ 11:56 am

While indeed there are economies of scale, there are countervailing costs of complexity – the more product families produced in a plant, the higher the overhead burden rates.

That sentence comes from page 92 of “The Innovator’s Prescription: A disruptive solution for health care”, co-authored by Clayton Christensen, Jerome Grossman, and Jason Hwang.  Like all the books authored (or co-authored) by Christensen, the book is full of implications for fields outside the particular industry being discussed.

In the case of this book, the subject matter is critically important in its own right: how can we find ways to allow technological breakthroughs to reduce the spiralling costs of healthcare?

In the book, the authors brilliantly extend and apply Christensen’s well-known ideas on disruptive change to the field of healthcare.  But the book should be recommended reading for anyone interested in either strategy or operational effectiveness in any hi-tech industry.  (It’s also recommended reading for anyone interested in the future of medicine – which probably includes all of us, since most of us can anticipate spending increasing amounts of time in hospitals or doctors’ surgeries as we become older.)

I’m still less than halfway through reading this book, but the section I’ve just read seems to speak loudly to issues in the mobile industry, as well as to the healthcare industry.

It describes a manufacturing plant that was struggling with overhead costs.  At this plant, $6.20 was spent on overhead expenses for every dollar spent on direct labour:

These overhead costs included not just utilities and depreciation, but the costs of scheduling, expediting, quality control, repair and rework, scrap, maintenance, materials handling, accounting, computer systems, and so on.  Overhead comprised all costs that were not directly spent in making products.

The quality of products made at that plant was also causing concern:

About 15 percent of all overhead costs were created by the need to repair and rework products that failed in the field, or had been discovered by inspectors as faulty before shipment.

However, it didn’t appear to the manager that any money was being wasted:

The plant hadn’t been painted inside or out in 20 years.  The landscaping was now overrun by weeds.  The receptionist in the bare-bones lobby had been replaced long ago with a paper directory and a phone.  The manager had no secretarial assistance, and her gray World War II vintage steel desk was dented by a kick from some frustrated predecessor.

Nevertheless, this particular plant had a considerably higher overhead burden rate than the company’s other plants.  What was the difference?

The difference was in the complexity.  This particular plant was set up to cope with large numbers of different product designs, whereas the other plants (which had been created later) had been able to optimise for particular design families.

The original plant essentially had the value proposition:

We’ll make any product that anyone designs

In contrast, the newer plants had the following kind of value proposition:

If you need a product that can be made through one of these two sequences of operations and activities, we’ll do it for you at the lowest possible cost and the highest possible quality.

Further analysis, across a number of different plants, yielded the following results:

Each time the scale of a plant doubled, holding the degree of pathway complexity constant, the overhead rate could be expected to fall by 15 percent.  So, for example, a plant that made two families and generated $40 million in sales would be expected to have an overhead burden ratio of about 2.85, while the burden rate for a plant making two families with $80 million in sales would be 15% lower (2.85 x 0.85 = 2.42).  But every time the number of families produced in a plant of a given scale doubled, the overhead burden rate soared 27 percent.  So if a two-pathway, $40 million plant accepted products that required two additional pathways, but that did not increase its sales volume, its overhead burden rate would increase by 2.85 x 1.27, to 3.62…
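
To make that arithmetic concrete, here is a minimal sketch of the relationship the passage describes.  The baseline figures (a burden rate of 2.85 for a two-family, $40 million plant) come from the quote above; the function itself is my own extrapolation of the two doubling rules, not a formula given in the book:

import math

def burden_rate(sales_millions, families,
                base_rate=2.85, base_sales=40.0, base_families=2):
    # Each doubling of scale cuts the overhead burden rate by 15 percent;
    # each doubling of product families raises it by 27 percent.
    scale_doublings = math.log2(sales_millions / base_sales)
    family_doublings = math.log2(families / base_families)
    return base_rate * (0.85 ** scale_doublings) * (1.27 ** family_doublings)

print(round(burden_rate(80, 2), 2))  # scale doubled: 2.42, matching the book
print(round(burden_rate(40, 4), 2))  # families doubled: 3.62, matching the book

Note how the complexity penalty outweighs the scale benefit: since 0.85 x 1.27 is greater than 1, doubling both sales and product families still leaves the burden rate higher than before.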

This is just one aspect of a long and fascinating analysis.  Modern-day general-purpose hospitals support huge numbers of different patient care pathways, so high overhead rates are inevitable.  The solution is to allow the formation of separate specialist units, where practitioners can then focus on iteratively optimising particular lines of healthcare.  We can already see this in firms that specialise in laser eye surgery, in hernia treatment, and so on.  Without these new units separating and removing some of the complexity of the original unit, it becomes harder and harder for innovation to take place.  Innovation becomes stifled under conflicting business models.  (I’m simplifying the argument here: please take a look at the book for the full picture.)

In short: reducing overhead costs isn’t just a matter of “eliminating obvious inefficiencies, spending less time on paperwork, etc”.  It often requires initially painful structural changes, in which overly complex multi-function units are simplified by the removal and separation of business lines and product pathways.  Only with the new, simplified set-up – often involving new companies, and sometimes involving “creative destruction” – can disruptive innovations flourish.

Rising organisational complexity impacts the mobile industry too.  I’ve written about this before.  For example, in May last year I wrote an article “Platform strategy failure modes“:

The first failure mode is when a device manufacturer fails to have a strategy towards mobile software platforms.  In this case, the adage holds true that a failure to strategise is a strategy to fail.  A device manufacturer that simply “follows the wind” – picking platform P1 for device D1 because customer C1 expressed a preference for P1, picking platform P2 for device D2 because customer C2 expressed a preference for P2, etc – is going to find that the effort of interacting successfully with all these different platforms far exceeds their expectations.  Mobile software platforms require substantial investment from manufacturers, before the manufacturer can reap commercial rewards from these platforms.  (Getting a device ready to demo is one thing.  That can be relatively easy.  Getting a device approved to ship onto real networks – a device that is sufficiently differentiated to stand out from a crowd of lookalike devices – can take a lot longer.)

The second failure mode is similar to the first one.  It’s when a device manufacturer spreads itself too thinly across multiple platforms.  In the previous case, the manufacturer ended up working with multiple platforms, without consciously planning that outcome.  In this case, the manufacturer knows what they are doing.  They reason to themselves as follows:

  • We are a highly competent company;
  • We can manage to work with (say) three significant mobile software platforms;
  • Other companies couldn’t cope with this diversification, but we are different.

But the outcome is the same as the previous case, even though different thinking gets the manufacturer into that predicament.  The root failure is, again, a failure to appreciate the scale and complexity of mobile software platforms.  These platforms can deliver tremendous value, but require significant ongoing skill and investment to yield that kind of result.

The third failure mode is when a manufacturer seeks re-use across several different mobile software platforms.  The idea is that components (whether at the application or system level) are developed in a platform-agnostic way, so they can fit into each platform equally well.

To be clear, this is a fine goal.  Done right, it can pay big dividends.  But my observation is that this strategy is hard to get right.  The strategy typically involves some kind of additional “platform independent layer” that isolates the software in the component from the particular programming interfaces of the underlying platform.  However, this additional layer often introduces its own complications…
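
To show the pattern in miniature, here is a hypothetical sketch of such a layer (the class and method names are invented for illustration; no real manufacturer’s design is implied):

from abc import ABC, abstractmethod

class PlatformLayer(ABC):
    # The "platform independent layer": components are written against this
    # interface rather than against any one platform's native APIs.
    @abstractmethod
    def send_message(self, recipient: str, text: str) -> None: ...

    @abstractmethod
    def battery_level(self) -> int: ...

class PlatformAAdapter(PlatformLayer):
    # One adapter per supported platform.  Each adapter must track its
    # platform's native APIs as they evolve - the hidden ongoing cost
    # of the extra layer.
    def send_message(self, recipient: str, text: str) -> None:
        print(f"[platform A] sending to {recipient}: {text}")

    def battery_level(self) -> int:
        return 80  # would query the platform's power APIs

def reusable_component(platform: PlatformLayer) -> None:
    # Platform-agnostic code: runs unchanged on any adapter.
    if platform.battery_level() > 20:
        platform.send_message("+441234567890", "hello")

reusable_component(PlatformAAdapter())

The sketch also hints at where the complications come from: every new platform means another adapter, and every mismatch between platforms (threading models, event loops, security capabilities) has to be papered over inside the layer.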

Seeking clever economies of scale is commendable.  But there often comes a time when growing scale is bedevilled by growing complexity.  As I mentioned at the beginning of this article:

While indeed there are economies of scale, there are countervailing costs of complexity – the more product families produced in a plant, the higher the overhead burden rates.

Even more than a drive to scale, companies in the mobile space need a drive towards simplicity. That means organisational simplicity as well as product simplicity.

As I stated in my article “Simplicity, simplicity, simplicity“:

The inherent complexity of present-day smartphones risks all kinds of bad outcomes:

  • Smartphone device creation projects may become time-consuming and delay-prone, and the smartphones themselves may compromise on quality in order to try to hit a fast-receding market window;
  • Smartphone application development may become difficult, as developers need to juggle different programming interfaces and optimisation methods;
  • Smartphone users may fail to find the functionality they believe is contained (somewhere!) within their handset, and having found that functionality, they may struggle to learn how to use it.

In short, smartphone system complexity risks impacting manufacturability, developability, and usability.  The number one issue for the mobile industry, arguably, is to constantly find better ways to tame this complexity.

The companies that are successfully addressing the complexity issue seem, on the whole, to be the ones on the rise in the mobile space.

Footnote: It’s a big claim, but it may well be true that, of all the books on the subject of innovation in the last 20 years, Clayton Christensen’s writings are the most consistently important.  The subtitle of his first book, “The Innovator’s Dilemma”, is a reminder why: “When new technologies cause great firms to fail”.

15 December 2008

Accelerating out of molasses

Filed under: disruption, modularity, Nokia, time to market — David Wood @ 4:00 pm

Michael Mace has posted a characteristically thoughtful article on his Mobile Opportunity blog:

Every time I think about Nokia and Symbian, I can’t help picturing a man knee-deep in molasses, running as fast as he can. He’s working up a sweat, thrashing and stumbling forward, and proudly points out that for someone knee-deep in molasses he’s making really good time…

The posting is entitled “Nokia: Running in molasses“. It arose from Mike reflecting on some of what he heard at the recent Symbian Partner Event (SPE) in San Francisco. The posting is well worth reading. I appreciate the issues that Mike raises. These issues are significant. But as you might expect, I have a somewhat different perspective on some of them.

Large software doesn’t mean that software development has to go slow

Charles Davies, Symbian CTO, pointed out to us that Symbian OS has about 450,000 source files. That’s right, half a million files. They’re organized into 85 “packages”…

There are economies of scale as well as diseconomies of scale. The point of the careful division of the Symbian Platform software into packages is to give each of the resulting packages greater autonomy – and, therefore, the ability to progress more quickly.

There’s one subtle point here. Many of the packages include teams from both Symbian and S60. This applies to cases where the separation of functionality between the two formerly distinct companies resulted in sub-optimal development. Now that Nokia’s acquisition of Symbian is complete, these boundaries can be intelligently re-designed.

Disruption, size, and organisational design

This brings me to a comment on the ideas of Clayton Christensen. Here’s another extract from Mike Mace’s article:

If the folks at Nokia really think they are well positioned to crush Apple, they need to go re-read The Innovator’s Dilemma. Being big is not a benefit in a rapidly-changing market with emerging segments.

Agreed, being big is no guarantee of being able to respond well to changing market conditions. That’s why I’m personally a big fan of Agile. Agile can help established companies (whether large or small) to launch and embrace disruptions. As Scott Anthony, one of Christensen’s co-authors, has recently commented in his article “Can Established Companies Disrupt?“:

The data suggests that it is increasingly common for an established company to launch disruptive innovations. More and more incumbents are learning how to embrace disruptive principles such as:

  • Put the customer, and their important, unsatisfied job-to-be-done at the center of the innovation equation
  • Embrace the power of simplicity, convenience, and affordability
  • Create organizational space for disruptive growth businesses
  • Consider innovation levers beyond features and functions
  • Become world class at testing, iterating and adjusting

As I said, being big can have its advantages as well as its disadvantages, so long as individual parts of the company have sufficient autonomy. The hard part is knowing when to seek closer ties, and when to seek looser ties. One of Christensen’s later books had some very interesting advice on that score. I can’t remember for sure whether that book was “The Innovator’s Solution” or “Seeing What’s Next“. The advice was that where performance remains a critical differentiator, you should look for a tight coupling. Where performance is already “good enough”, you should seek a loose coupling – with open APIs and a choice of alternative solutions.

As soon as I read these words, some time around 2003-2004, I had a gut reaction that, one day, the relevant teams in Symbian software engineering and S60 software engineering ought to be combined. It took a long time for that insight to be fulfilled. But now that it’s happening, there’s plenty of good reason to expect the resulting combined company to start accelerating its development.
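
Translating that tight/loose advice into a toy software sketch (entirely my own illustration, not an example from Christensen’s books): a component whose performance is already “good enough” can sit behind an open API with interchangeable implementations, while a component that still differentiates on performance is designed as one integrated unit with no open seam.

from typing import Protocol

class MessageStore(Protocol):
    # Loose coupling: storage performance is assumed "good enough", so it
    # hides behind an open API and alternative suppliers can be slotted in.
    def save(self, message: str) -> None: ...

class InMemoryStore:
    def __init__(self) -> None:
        self.messages: list[str] = []

    def save(self, message: str) -> None:
        self.messages.append(message)

class RadioStack:
    # Tight coupling: the radio path is still a performance differentiator,
    # so encoding and scheduling are designed together, with no seam
    # offered for third-party substitution.
    def transmit(self, payload: bytes) -> None:
        frame = bytes(b ^ 0x5A for b in payload)  # integrated, optimised path
        self._schedule(frame)

    def _schedule(self, frame: bytes) -> None:
        pass  # internal detail, deliberately not an open API

def send(store: MessageStore, radio: RadioStack, text: str) -> None:
    store.save(text)               # any "good enough" store will do
    radio.transmit(text.encode())  # one integrated, performance-tuned path

send(InMemoryStore(), RadioStack(), "hello")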

Development in parallel with change

Back to Mike Mace, commenting on the SPE presentation by Charles Davies:

Davies talked about the substantial challenges involved in open sourcing a code base that large. He said it will take up to another two years before all of the code is released under the Eclipse license. In the meantime, a majority of the code on launch day of the foundation will be in a more restrictive license that requires registration and a payment of $1,500 for access. There’s also a small amount of third party copyrighted code within Symbian, and the foundation is trying to either get the rights to that code, or figure a way to make it available in binary format.

Those are all typical problems when a project is moving to open source, and the upshot of them is that Symbian won’t be able to get the full benefits of its move to open source until quite a while after the foundation is launched. What slows the process down is the amount of code that Symbian and Nokia have to move. I believe that Symbian OS is probably the largest software project ever taken from closed to open source. If you’ve ever dealt with moving code to open source, you’ll know how staggeringly complex the legal reviews are. What Nokia and Symbian are doing is heroic, scary, and incredibly tedious. It’s like, well, running in molasses.

I have four comments on this:

  1. Even though the full transition to open source may take up to two years from the initial announcement of the foundation (that is, until mid-2010), there are plenty of other things happening in the meantime – with a series of interim releases that progressively convert more of the software from the community-source Symbian Foundation Licence to the open-source Eclipse Public Licence;
  2. There will be new technologies and new UI features in these interim releases;
  3. The interim releases should already achieve at least some of the considerable benefits of both open source and community source; the first packages which will become available under the EPL are being chosen so that independent developers can do useful things with some of them (including contributing back working code enhancements);
  4. The legal reviews may initially seem daunting, but with the help of modern code-scanning tools and with the advantage of “practice makes perfect”, the process is likely to speed up considerably along the way.

Cool stuff in the lab

Mike ends the main part of his article as follows:

Nokia still has a lot of time to get it right. But do they really understand what needs to change? I can’t tell, because all I usually get from them is monologues on how big their business is and how much cool stuff they have in the lab.

I accept that analysts must inevitably hedge their bets regarding the extent of future success of the main mobile operating systems, until a period of proving over the next 12-24 months has shown what these operating systems can actually accomplish. I eagerly look forward to the day when more of the Symbian and Nokia roadmap of stunning new technology, new services, and new user experience attains greater visibility. When that happens, analysts are likely to come down off the hedge.

My own expectation is that the moves to integrate Symbian and Nokia, and to create the Symbian Foundation, will see a substantial speed up of innovation over that time period. But I’m not taking this for granted. After all, I’m well aware of the original subtitle of “The Innovator’s Dilemma”: “When new technologies cause great firms to fail“.
