dw2

1 January 2012

Planning for optimal ‘flow’ in an uncertain world

Filed under: Agile, books, critical chain, flow, lean, predictability — David Wood @ 1:44 pm

In a world with enormous uncertainty, what is the best planning methodology?

I’ve long been sceptical about elaborate planning – hence my enthusiasm for what are often called ‘agile’ and ‘lean’ development processes.  Indeed, I devoted a significant chunk of my book “Symbian for software leaders – principles of successful smartphone development projects” to comparing and contrasting the “plan is king” approach with an agile approach.

But the passage of time brings deeper insight.  Key thinkers in this field now refer to “second generation lean product development”.  Perhaps paramount among these thinkers is Donald Reinertsen, the veteran analyst of best practice in new product development.  I’ve been influenced by his ideas more than once in my career already:

  • In the early 1990s, while I was a software engineering manager at Psion, my boss at the time recommended I read Reinertsen’s “Developing Products in Half the Time”. It was great advice!
  • In the early 2000s, while I was EVP at Symbian, I remember enjoying insights from Reinertsen’s “Managing the Design Factory”.

I was recently pleased to discover Reinertsen has put pen to paper again.  The result is “The Principles of Product Development Flow: Second Generation Lean Product Development”.

The following Amazon.com review of the book, by Maurice Hagar, persuaded me to purchase it:

This new standard on lean product and software development challenges orthodox thinking on every side and is required reading. It’s fairly technical and not an easy read but well worth the effort.

For the traditionalist, add to cart if you want to learn:

  • Why prioritizing work “on the basis of project profitability measures like return on investment (ROI)” is a mistake
  • Why we should manage queues instead of timelines
  • Why “trying to estimate the amount of work in queue” is a waste of time
  • Why our focus on efficiency, capacity utilization, and preventing and correcting deviations from the plan “are fundamentally wrong”
  • Why “systematic top-down design of the entire system” is risky
  • Why bottom-up estimating is flawed
  • Why reducing defects may be costing us money
  • Why we should “watch the work product, not the worker”
  • Why rewarding specialization is a bad idea
  • Why centralizing control in project management offices and information systems is dangerous
  • Why a bad decision made rapidly “is far better” than the right decision made late and “one of the biggest mistakes a leader could make is to stifle initiative”
  • Why communicating failures is more important than communicating successes

For the Agilist, add to cart if you want to learn:

  • Why command-and-control is essential to prevent misalignment, local optimization, chaos, even disaster
  • Why traditional conformance to a plan and strong change control and risk management is sometimes preferable to adaptive management
  • Why the economies of scale from centralized, shared resources are sometimes preferable to dedicated teams
  • Why clear roles and boundaries are sometimes preferable to swarming “the way five-year-olds approach soccer”
  • Why predictable behavior is more important than shared values for building trust and teamwork
  • Why even professionals should have synchronized coffee breaks…

Even in the first few pages, I’ve found some cracking good quotes.

Here’s one on economics and “the cost of late changes”:

Our central premise is that we do product development to make money.  This economic goal permits us to use economic thinking and allows us to see many issues with a fresh point of view.  It illuminates the grave problems with the current orthodoxy.

The current orthodoxy does not focus on understanding deeper economic relationships.  Instead, it is, at best, based on observing correlations between pairs of proxy variables.  For example, it observes that late design changes have higher costs than early design changes, and prescribes front-loading problem solving.  This ignores the fact that late changes can also create enormous economic value.  The economic effect of a late change can only be evaluated by considering its complete economic impact.
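To make that concrete, here’s a toy calculation of my own (invented numbers, in Python; it’s not an example from the book): the orthodox rule sees only that the late change costs far more than an early one, while economic thinking also weighs what the change earns over the product’s life.

```python
# Toy illustration with invented numbers (not from Reinertsen's book):
# judge a late change by its complete economic impact, not by its cost alone.
cost_if_made_early = 10_000      # hypothetical cost of the change in month 1
cost_if_made_late = 120_000      # hypothetical cost of the same change in month 12
lifecycle_profit_gain = 500_000  # hypothetical extra profit the change enables

# The "front-load everything" orthodoxy sees only the 12x cost increase.
# Looking at the whole economic picture, even the late change is strongly positive:
net_impact_of_late_change = lifecycle_profit_gain - cost_if_made_late
print(f"Net impact of the late change: {net_impact_of_late_change:+,}")  # +380,000
```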

And on “worship of conformance”:

In addition to deeply misunderstanding variability, today’s product developers have deep-rooted misconceptions on how to react to this variability.  They believe that they should always strive to make actual performance conform to the original plan.  They assume that the benefit of correcting a deviation from the plan will always exceed the cost of doing so.  This places completely unwarranted trust in the original plan, and it blocks companies from exploiting emergent opportunities.  Such behaviour makes no economic sense.

We live in an uncertain world.  We must recognise that our original plan was based on noisy data, viewed from a long time-horizon…  Emergent information completely changes the economics of our original choice.  In such cases, blindly insisting on conformance to the original plan destroys economic value.

To manage product development effectively, we must recognise that valuable new information is constantly arriving throughout the development cycle.  Rather than remaining frozen in time, locked to the original plan, we must learn to make good economic choices using this emerging information.

Conformance to the original plan has become another obstacle blocking our ability to make good economic choices.  Once again, we have a case of a proxy variable, conformance, obscuring the real issue, which is making good economic decisions…

Next, on flow control and the sequencing of tasks:

We are interested in finding economically optimum sequences for tasks.  Current practices use fairly crude approaches to sequencing.

For example, current practice suggests that if subsystem B depends on subsystem A, it would be better to sequence the design of A first.  This logic optimises efficiency as a proxy variable.  When we consider overall economics, as we do in this book, we often reach different conclusions.  For example, it may be better to develop both A and B simultaneously, despite the risk of inefficient rework, because parallel development can save cycle time.

In this book, our model for flow control will not be manufacturing systems, since these systems primarily deal with predictable and homogeneous flows.  Instead, we will look at lessons that can be learned from telecommunications networks and computer operating systems.  Both of these domains have decades of experience dealing with non-homogeneous and highly variable flows.
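To make the sequencing point concrete, here’s a back-of-envelope sketch of my own (toy numbers in Python, not the book’s): starting B before A is finished risks rework, yet the parallel plan can still win on expected cycle time.

```python
# Toy model with invented numbers (my own, not from the book): expected
# cycle time for two dependent subsystems, sequential vs. parallel.
t_a, t_b = 8.0, 10.0    # weeks to design subsystem A and subsystem B
p_rework = 0.4          # chance that starting B early forces some rework on B
t_rework = 4.0          # weeks of rework on B when that happens

sequential = t_a + t_b                           # B waits until A is done
parallel = max(t_a, t_b) + p_rework * t_rework   # expected cycle time

print(f"Sequential: {sequential:.1f} weeks")         # 18.0
print(f"Parallel (expected): {parallel:.1f} weeks")  # 11.6, despite the rework risk
```

The parallel plan is less ‘efficient’, since it burns extra engineering hours two times in five, but on these numbers it still ships more than six weeks sooner on average.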

Finally, on fast feedback:

Developers rely on feedback to influence subsequent choices.  Or, at least, they should.  Unfortunately, our current orthodoxy views feedback as an element of an undesirable rework loop.  It asserts that we should prevent the need for rework by having engineers design things right the first time.

We will present a radically different view, suggesting that feedback is what permits us to operate our product development process effectively in a very noisy environment.  Feedback allows us to efficiently adapt to unpredictability.

To be clear, Reinertsen’s book doesn’t just point out issues with what he calls “current practice” or “orthodoxy”.  He also points out shortcomings in various first generation lean models, such as Eliyahu Goldratt’s “Critical Chain” methodology (an application of Goldratt’s Theory of Constraints) and the kanban system.  For example, in discussing the minimisation of Work In Process (WIP) inventory, Reinertsen says the following:

WIP constraints are a powerful way to gain control over cycle time in the presence of variability.  This is particularly important where variability accumulates, such as in product development…

We will discuss two common methods of constraining WIP: the kanban system and Goldratt’s Theory of Constraints.  These methods are relatively static.  We will also examine how telecommunications networks use WIP constraints in a much more dynamic way.  Once again, telecommunications networks are interesting to us as product developers, because they deal successfully with inherently high variability.
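Here’s a toy queueing simulation of my own to illustrate the mechanics (Python, not from the book, and the ‘defer work when the cap is reached’ policy is a deliberate simplification): with highly variable work arriving at a heavily utilised workstation, the uncapped queue produces very long cycle times, while a WIP limit keeps cycle time bounded at the price of deferring some incoming work.

```python
# Toy single-server queue with Poisson arrivals and variable service times.
# My own sketch (not from the book): compare cycle times with and without
# a WIP cap.  Work arriving when the cap is reached is simply deferred.
import random
random.seed(42)

def simulate(wip_limit, n_arrivals=50_000, arrival_rate=0.95, mean_service=1.0):
    t = 0.0                  # simulation clock
    server_free_at = 0.0     # when the (single) worker next becomes free
    in_system = []           # departure times of items still in the system
    cycle_times = []
    for _ in range(n_arrivals):
        t += random.expovariate(arrival_rate)        # next piece of work arrives
        in_system = [d for d in in_system if d > t]  # drop items already finished
        if wip_limit is not None and len(in_system) >= wip_limit:
            continue                                 # WIP cap reached: defer it
        depart = max(t, server_free_at) + random.expovariate(1.0 / mean_service)
        server_free_at = depart
        in_system.append(depart)
        cycle_times.append(depart - t)               # arrival-to-departure time
    return sum(cycle_times) / len(cycle_times), len(cycle_times) / n_arrivals

for cap in (None, 10):
    mean_ct, admitted = simulate(cap)
    print(f"WIP cap {cap}: mean cycle time {mean_ct:.1f}, work admitted {admitted:.0%}")
```

At 95% utilisation, standard queueing theory says the uncapped system’s average cycle time is roughly twenty times the average task size, and the simulation bears that out; the capped system admits slightly less work but keeps cycle time down to a few task-lengths, which is exactly the control over cycle time that Reinertsen is describing.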

Hopefully that’s a good set of tasters for what will follow!

1 October 2008

The student syndrome

Filed under: Agile, critical chain, Essay contest, predictability — David Wood @ 5:13 pm

Entries for Symbian’s 2008 Student Essay Contest have just closed. The deadline for submission of entries was midnight (GMT) on 30 September 2008.

The contest has been advertised since June. What proportion of all the entries do you suppose were submitted in the final six hours before the deadline expired? (Bear in mind that, out of a total competition duration of more than three months, six hours is about 1/400 of the available time.)

I’ll give the answer at the end of this article. It surprised me – though I ought to have anticipated the outcome. After all, for many years I’ve been telling people about “The Student Syndrome”.

I became familiar with the concept of the student syndrome some years ago, while reading Eliyahu Goldratt’s fine business-oriented novel “Critical Chain”.

Like all Goldratt’s novels, Critical Chain mixes human interest with some intriguing ways of analysing business-critical topics. The ideas in these books had a big influence on the evolution of my own views about how to incorporate responsiveness and agility into large software projects where customers are heavily reliant on the software being delivered at pre-agreed dates.

Here’s what I said on the topic of “variable task estimates” in the chapter “Managing plans and change” in my own 2005 book “Symbian for software leaders”:

A smartphone project plan is made up from a large number of estimates for how long it will take to complete individual tasks. If the task involves novel work, or novel circumstances, or a novel integration environment, you can have a wide range of estimates for the length of time required.

It’s similar to estimating how long you will take to complete an unfamiliar journey in a busy city with potentially unreliable transport infrastructure. Let’s say that, if you are lucky, you might complete the journey in just 20 minutes. Perhaps 30 minutes is the most likely time duration. But in view of potential traffic hold-ups or train delays, you could take as long as one hour, or (in case of underground train derailments) even two hours or longer. So there’s a range of estimates, with the distribution curve having a long tail on the right hand side: there’s a non-negligible probability that the task will take at least twice as long as the individual most likely outcome.

It’s often the same with estimating the length of time for a task within a project plan.

Now imagine that the company culture puts a strong emphasis on fulfilling commitments, and never missing deadlines. If developers are asked to state a length of time in which they have (say) 95% confidence they will finish the task, they are likely to give an answer that is at least twice as long as the individual most likely outcome. They do so because:

  • Customers may make large financial decisions dependent on the estimate – on the assumption that it will be met;
  • Bonus payments to developers may depend on hitting the target;
  • The developers have to plan on unforeseen task interference (and other changes);
  • Any estimate the developers provide may get squashed down by aggressive senior managers (so they’d better pad their estimate in advance, making it even longer).

Ironically, even though such estimates are designed to be fulfilled around 95% of the time, they typically end up being fulfilled only around 50% of the time. This fact deserves some careful reflection. Even though the estimates were generous, it seems (at first sight) that they were not generous enough. In fact, here’s what happens:

  • In fulfilment of “Parkinson’s Law”, tasks expand to fill the available time. Developers can always find ways to improve and optimise their solutions – adding extra test cases, considering alternative algorithms and generalisations, and so forth;
  • Because there’s a perception (in at least the beginning of the time period) of there being ample time, developers often put off becoming fully involved in their tasks. This is sometimes called “the student syndrome”, from the observation that most students do most of the preparation for an exam in the time period just before the exam. The time lost in this way can never be regained;
  • Because there’s a perception of there being ample time, developers can become involved in other activities at the same time. However, these other activities often last longer than intended. So the developer ends up multi-tasking between two (or more) activities. But multi-tasking involves significant task setup time – time to become deeply involved in each different task (time to enter “flow mode” for the task). So yet more time is wasted;
  • Critically, even when a task is ready to finish earlier than expected, the project plan can rarely take advantage of this fact. The people who were scheduled for the next task probably aren’t ready to start it earlier than anticipated. So an early finish by one task rarely translates into an early start by the next task. On the other hand, a late finish by one task inevitably means a late start for the next task. This asymmetry drives the whole schedule later.

In conclusion, in a company whose culture puts a strong emphasis upon fulfilling commitments and never missing deadlines, the agreed schedules are built from estimates up to twice as long as the individual most likely outcomes, and even so, they often miss these extended deadlines…
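The dynamics in that excerpt are easy to reproduce in a toy Monte Carlo model.  The sketch below is my own (Python, invented parameters, not from the book): each task has a long-tailed duration, never hands over early (Parkinson’s Law and the student syndrome), and passes any lateness straight to its successor.  Even with every estimate set to twice the most likely duration, the chain misses its overall schedule more often than not.

```python
# Toy Monte Carlo sketch (my own; invented numbers and distributions):
# padded estimates, Parkinson's Law, and asymmetric task hand-overs.
import random
random.seed(7)

N_TASKS = 10   # tasks in a simple end-to-end chain
MODE = 10.0    # individual 'most likely' duration of each task (days)
PAD = 2.0      # every estimate is padded to twice the most likely value

def raw_duration():
    # Long-tailed distribution: the bulk sits near the most likely value,
    # with a non-negligible chance of taking twice as long or more.
    return MODE * random.lognormvariate(0.0, 0.5)

trials, late = 20_000, 0
for _ in range(trials):
    finish = 0.0
    for _ in range(N_TASKS):
        # Tasks expand to fill the available time, so nothing finishes early...
        actual = max(raw_duration(), MODE * PAD)
        # ...while any overrun is passed straight on to the next task.
        finish += actual
    if finish > N_TASKS * MODE * PAD:
        late += 1

print(f"Chains missing the fully padded schedule: {100 * late / trials:.0f}%")
# With these parameters, roughly 60% of runs are late, even though every
# single estimate was set at double the most likely duration.
```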

The line of analysis in that excerpt is one I’ve run through scores of times, in discussions with people, over the last four or five years. It feeds into the argument that the best way to ensure customer satisfaction and predictable delivery is, counter-intuitively, to focus more on software quality, interim customer feedback, agile project management, self-motivated teams, and general principles of excellence in software development than on schedule management itself.

It’s in line with what Steve McConnell says:

  • IBM discovered 20 years ago that projects that focused on attaining the shortest schedules had high frequencies of cost and schedule overruns;
  • Projects that focused on achieving high quality had the best schedules and the highest productivities.

Symbian’s experience over many years bears out the same conclusion. The more we’ve focused on achieving high quality, the better we’ve become with both schedule management and internal developer productivity.

As for the results of the student syndrome applied to the Symbian Essay Contest:

  • 54% of the essays submitted to the competition were received in the final six hours (approximately the final 1/400 of the time available)
  • Indeed, 16% of the essays submitted were received in the final 60 minutes.

That’s an impressively asymmetric distribution! (It also means that the competition judges will have to work harder than they had been expecting right up until the penultimate day of the contest…)
