October 2007

Spreadsheet example for a small Kanban team

We’ve been talking about room-sized visual heijunka/andon boards for large teams, but it’s also reasonable to apply the kanban idea to smaller projects. Much of the current discussion about kanban in Agile circles seems to be of a simpler variety than the enterprise-scale concepts that David and I have been so concerned with. I think it’s important to recognize that kanban is more of a principle or pattern than it is a process. It’s not at all surprising that it would take different forms, and I would completely encourage that way of thinking.

Lean Production is different from either Mass Production or Craft Production. You can apply a kanban system to a mass production system like the SDLC in order to move it in the lean direction. You can also apply a kanban system to a craft production system like Scrum in order to move it in the lean direction. From two very different starting points, you can end up at the same outcome.

A few years back, I was developing a training program on Lean and Theory of Constraints concepts for product development. We started managing the project using Scrum, but the subject matter practically begged us to evolve the process in a leaner direction.

There were a number of things that were bothering me about Scrum:
  • I wanted to change the backlog more often than the timebox allowed
  • At any given moment, only one item in the backlog needs to be prioritized. Further prioritization is waste.
  • I wanted a specific mechanism to limit multitasking
  • I hated estimating
  • I hated negotiating Sprint goals
  • Sprint planning implicitly encourages people to precommit to work assignments
  • The ScrumMaster role is prone to abuse and/or waste
  • Burndowns reek of Management by Objective
  • Preposterous terminology

The thing that I wanted most was the smoothest possible flow of pending work into deployment, and Scrum just didn’t give me that. So, I proposed a simple spreadsheet-based method:

  • A daily standup
  • A single (roughly) prioritized backlog
  • Each person on the team is responsible for exactly two work items by the end of any standup
  • Every work item is associated with a workflow, and work item status is indicated by workflow state
  • A work item requires some kind of peer review and approval in order to be marked complete
  • New items can be added to the backlog at any time
  • There is a regular project review
  • The backlog must be regularly (but minimally) re-sorted
  • Status reporting is by cumulative flow only

One inspiration was a bit of folklore that programmer productivity peaks at 2 concurrent tasks. One task should be a high-priority primary task, and the other task should be a lower-priority task that you can work on when the first task is blocked.  Another inspiration was the Cumulative Flow Diagram (CFD). We had been applying the CFD to Sprints as an alternative to the burndown, which makes it perfectly obvious what Scrum’s limitations are with respect to flow. Limiting multitasking and deferring the binding of people to tasks could give us at least one smooth stripe on the diagram. The more you learn to use the CFD, the more useless the burndown chart seems. For all of the Scrum talk about feedback, the CFD is simply a better indicator.
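
To make the reporting rule concrete, here is a minimal sketch of the bookkeeping behind a CFD (in Python, with hypothetical workflow states and sample work items): count the items in each workflow state for every daily snapshot, then stack the counts from the final state backwards, so that each band boundary shows how many items have reached that state or beyond. Smooth, parallel bands mean smooth flow; a widening band means work is piling up in that state.

```python
# A minimal sketch of CFD bookkeeping. The workflow states and the sample
# work items below are hypothetical, chosen only for illustration.
from collections import Counter

STATES = ["backlog", "in progress", "review", "done"]  # assumed workflow

def state_counts(items):
    """Count work items per workflow state for one daily snapshot."""
    counts = Counter(item["state"] for item in items)
    return [counts.get(state, 0) for state in STATES]

def cfd_bands(row):
    """Stack counts from the final state backwards, the way a CFD plots them:
    each value is the number of items that have reached that state or beyond."""
    total, stacked = 0, []
    for count in reversed(row):
        total += count
        stacked.append(total)
    return list(reversed(stacked))

# One fabricated daily snapshot.
snapshot = [
    {"id": 1, "state": "done"},
    {"id": 2, "state": "review"},
    {"id": 3, "state": "in progress"},
    {"id": 4, "state": "in progress"},
    {"id": 5, "state": "backlog"},
]
print(dict(zip(STATES, cfd_bands(state_counts(snapshot)))))
# {'backlog': 5, 'in progress': 4, 'review': 2, 'done': 1}
```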

I’ve learned some things since then that enhance the original design. My notion of managing WIP was more of a CONWIP style like Arlo Belshee’s Naked Planning than the workflow style we use on the big boards. I required a defined workflow, but not the division of labor that would make internal inventory buffers necessary.  The Boehm priority scheme might blow out the lead time, so I might do 1 small item + 1 larger item, or perhaps make separate service classes. I might also add a constraint indicator to the workflow.
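
As a sketch of how that revised pull rule might work (the service class names, the sample backlog, and the assignment order below are illustrative assumptions, not a prescription): at each standup, a person may pull the highest-priority item from any service class they are not already working in, which keeps everyone at one small item plus one larger item.

```python
# A sketch of the revised pull rule: at the standup, each person holds at
# most one item per service class. The class names ("small", "large"), the
# sample backlog, and the people are all hypothetical.
def pull(backlog, people, holding):
    """backlog: list of {'id', 'class'} dicts in priority order.
    holding: dict mapping person -> set of service classes already in progress."""
    assignments = []
    for item in list(backlog):           # iterate a copy; we remove as we assign
        for person in people:
            if item["class"] not in holding.setdefault(person, set()):
                holding[person].add(item["class"])
                assignments.append((person, item["id"]))
                backlog.remove(item)
                break
    return assignments

backlog = [{"id": "A", "class": "large"}, {"id": "B", "class": "small"},
           {"id": "C", "class": "small"}, {"id": "D", "class": "large"}]
print(pull(backlog, ["ann", "bob"], {"ann": {"large"}}))
# [('bob', 'A'), ('ann', 'B'), ('bob', 'C')]  -- D waits in the backlog
```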

The revised spreadsheet might look something like this:


Kanban bootstrap

The goal of a kanban workflow system is to maximize the throughput of business-valued work orders into deployment. It achieves this by regulating the productivity of its component subprocesses.

I’ve spent the last few weeks bootstrapping such a kanban system for an enterprise software project. It’s a pretty big project, with over 50 people directly participating. Starting up a new project means making guesses about workflow states, resource allocation, productivity, work item size and priority criteria, and so on.

This project is too large for a single pipeline, so we have a nested structure that processes incoming scope in two stages. The first breakdown (green tickets) is by business-valued functional requirements that can be covered by a small functional specification and corresponding test specification. The second stage (yellow tickets) breaks down these “requirement” work packages into individual “features” that can be completed through integration by an individual developer (or pair) within a few days. The outer workflow is fed by a Rolling Wave project plan, but the flow itself is expected to be continuous. Scope decomposition is generally as “just-in-time” as is tolerable to the stakeholders.
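
A rough code sketch of that two-stage structure, with hypothetical type, field, and state names, might look like this:

```python
# A rough sketch of the two-stage breakdown, with hypothetical names.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Feature:
    """Yellow ticket: completed through integration by a developer or pair."""
    name: str
    state: str = "queued"          # e.g. queued -> in progress -> integrated

@dataclass
class Requirement:
    """Green ticket: covered by a small functional spec and test spec."""
    name: str
    state: str = "specifying"
    features: List[Feature] = field(default_factory=list)

    def decompose(self, feature_names):
        """Just-in-time breakdown of a requirement into feature tickets."""
        self.features = [Feature(n) for n in feature_names]
        self.state = "decomposed"

req = Requirement("customer lookup")
req.decompose(["search form", "results grid", "detail view"])
```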

Only time and real live performance data can tell you what you need to know to configure such a process correctly. It takes a while to move enough work through the system in order to obtain sufficient data to set the right process parameter values. Until then, you have to keep a sharp eye on things and engage in a lot of speculation about coming events. A particular challenge is measuring latency, which takes much longer to observe than throughput. Worse, latency at the beginning of a big project is likely to be much worse than its stable value. New people working on a new project using a new process make for abundant sources of variation with both special and common causes. You have to see through all of this early noise in order to estimate the implied stable latency. Then you can get down to the hard work of making the worst of that variation go away, and buffering for the rest.

In comparison, bandwidth is easy to manipulate. For a stable process, adjusting bandwidth can have a relatively immediate impact on performance. But at the beginning, there’s pretty much nothing you can do but help push that first order through the system as quickly as possible. You have to prime the pump, and that is a different problem than regulating flow. The trouble with estimating bandwidth is that you won’t know if you got it right until you can measure latency. Overshooting bandwidth might result in a traffic jam in a downstream process that will stretch out your lead time. Undershooting bandwidth will result in “air bubbles” flowing through the process that confound your ability to configure downstream resources that are also ramping up.
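
The arithmetic behind that traffic jam is just Little's Law: average lead time equals WIP divided by average throughput. A back-of-the-envelope sketch with made-up numbers:

```python
# Back-of-the-envelope arithmetic (not from the original post): by Little's
# Law, average lead time = WIP / throughput, so releasing more work into a
# pipeline whose throughput hasn't caught up simply stretches the lead time.
def lead_time_weeks(wip_items, throughput_items_per_week):
    return wip_items / throughput_items_per_week

print(lead_time_weeks(20, 5))   # 4.0 weeks of lead time
print(lead_time_weeks(40, 5))   # overshoot the valve: 8.0 weeks at the same throughput
```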

The pressure is to overshoot. Everybody who’s available to work thinks that they ought to dive in and start hammering away. It’s hard to tell people to wait for the pull when there’s nothing to pull but slack. You have to imagine what the rate of pull is going to be, adjust the input valve accordingly, and try to get people to contribute anything they can towards reducing latency. If there is ever a good time to employ pair programming, this is it. But then, that’s just one more thing you have to try to convince people to do. When they’ve been champing at the bit, everybody wants their own piece of the pie.

Until you have meaningful throughput measurements, you have to make hands-on adjustments to bandwidth based on the live behavior of the workflow. If you see the traffic jam forming, close the valve. If you see the air bubble forming, open it up. It’s only later that you can let a well-sized buffer absorb the random variation without intervention.
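
One way to express that hands-on adjustment, with hypothetical buffer thresholds, is a simple rule comparing the downstream queue against a target buffer:

```python
# A sketch of the hands-on valve rule, with hypothetical buffer thresholds.
def adjust_valve(downstream_queue, target_buffer=3, tolerance=1):
    if downstream_queue > target_buffer + tolerance:
        return "close"    # traffic jam forming: stop releasing new work
    if downstream_queue < target_buffer - tolerance:
        return "open"     # air bubble forming: release more work
    return "hold"         # within the buffer: let it absorb the variation

print(adjust_valve(6), adjust_valve(1), adjust_valve(3))
# close open hold
```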

If it were all up to me, I would always start one of these projects with a small pilot team. I’d let the workflow latency stabilize before ratcheting up bandwidth. Otherwise, there’s just too much variation to control without exceptional effort. Alas, it is difficult to explain why you should idle available resources in order to stabilize your process while the cold, hard wind of calendar time is blowing in your face.

But that is a battle for another day.


More news from the Kanban community

Eric Landes at Bosch is using a Kanban system for software maintenance:

Using Kanban to manage your maintenance backlog

More on Using Kanban to manage your maintenance backlog

Of course we’d love to hear more about Eric’s results!


Dualities: A Pattern Language of Project Management

Change and risks of the unknown are the primary challenges of modern projects: changes in our team, our understanding of our problem, our market, etc. Experienced project members know that the “best process” for our project doesn’t fall neatly into formulas. What was a successful strategy in one circumstance may fail completely in a seemingly similar circumstance. The relationship between our current situation and the best process to deal with it is non-linear.

Lean thinking is one of the best starting points we have to systematically attack this problem with savvy about continuous improvement, shifting bottlenecks, systems thinking, statistical management, and respect for people on the front lines of the problem. And it’s been under-applied in getting us past the breakdown of traditional project management techniques in high-risk domains like software development.

But successful lean companies like Toyota have taken years or decades to build up institutional and cultural knowledge of where to start and how to evolve. Can we shorten this learning process for other companies or other domains?

Our unique circumstances and the changes around us require a kind of dance to make progress and stay balanced — sometimes steps forward, others to the side or back. This non-linear adaptation is very jarring for both our managers and our teams.

We need to find a way to empower people with the savvy to sense dissonance between the process we have and the process we need, the vocabulary to discuss it, and the power to act on that dissonance, even if the solution is a step in a new direction or even a step back.

One of the best places to start is by attacking the belief that there is “one right process” for our teams.

Instead, we drive dialog around the spectrum of possibilities between any two poles or dualities: predictive vs. iterative management, larger vs. smaller batches, top-down vs. bottom-up control, standardized vs. adaptive process, specialist vs. generalist teams, etc.

It’s not about choosing one or the other; it’s about where you are on the spectrum.

As managers, we probe our teams, asking if they should be trying “more of this” or “less of that”. We can go forward or back on a spectrum, and we can step “to the side” by focusing our energies on a different duality.

What would help — in fact, what’s almost essential — is a pattern language to give names to abstractions and make dialog and discussion about where we are and where we’re going possible. Unfortunately, the PM pattern languages you’ll find today are based on absolutes (an assumption of one right process). We’re looking for one based on dualities — opposing forces or alternatives that we must balance to achieve the best-fit process for our circumstance, for this one point in time. We want terminology that asks us to think, adapt, and step in whatever direction our circumstance calls for.

Boehm (balance) and Cockburn (meta-methodologies) are wonderful starting points for this kind of thinking, at least in software development. I’m sure there are others — please add a comment if you want to point out a resource.

Can we discover and build this pattern language of adaptive project management — and make it an approachable, practical tool for today’s managers and project teams?


Time-boxed iteration: an idea whose time has come and gone

Poor Winston Royce. The guy’s big idea is vilified because of a popular misunderstanding of its meaning. The great irony of the Waterfall story is that Royce himself was trying to describe a more feedback-driven overlapping process.

But somehow, the Waterfall strawman itself became the reference model, presumably because it appealed to the authoritarian command-and-control mass-production paradigm of the American business culture of the 1970s. In a triumph of absurdity, that very culture was about to reach its nadir of quality, productivity, and profitability, as the dawn of an era of humiliating ass-kicking at the hands of the Japanese lean producers had just begun.

The contemporaneous emergence of the “structured” paradigm with all of its top-down orientation probably only encouraged the enthusiastically misguided technology managers of the day. After all, wasn’t that the very promise of computer technology? That it would make predictions, automate production and accounting functions, provide managers with awesome new powers of control so that we didn’t have to rely so much on those pesky, unreliable workers anymore?

Of course, many serious thinkers about software development were uneasy with that direction, because few of them ever thought of the problem that way in the first place. The phasist project management model was imposed from without by a zealous and out-of-control management culture, desperate to assert social dominance in the face of mounting economic failure.

Like any big new idea, it took quite a long time for the structured/phased paradigm to fully diffuse through the profession. Consequently, it also took quite a long time for the profession to come back with a well-formulated reaction. The most forceful expression of that reaction was probably Barry Boehm’s Spiral Model.

The Spiral Model was not the revolution; it was the counter-revolution. But the momentum of the Waterfall/SDLC was such that it took a decade for the Spiral to be fully realized as an enterprise-class methodology, in the form of the Rational Unified Process (RUP). Now, for those who were paying attention, the elements of RUP had been visibly gestating all the while; it just took some time to build up the fighting weight necessary to challenge the reigning champion. But for the Object-Oriented faithful, it had always been a given that iterative development was the only credible approach.

The challenges before RUP were formidable. It had to simultaneously replace structured analysis/design methods and phased project management methods. But change was in the air, and the explosion of new technology and the resulting financial speculation suddenly made it look very uncool to hang on to yesterday’s business processes. The dysfunction of phasist project management was abundantly clear to anybody who was paying even the slightest attention, so the world was finally ready for a credible contender. On the other hand, a generation of middle managers would never accept a methodology that threatened the corpulent bureaucracy that would allow them to rest and vest through a finance-fueled stock market orgy, so it was in everybody’s best interest to make RUP as bloated and artifact-laden as possible. In this way, RUP was destined not to last, but it did serve to introduce a big idea into mainstream thought:

Maybe 5 iterations of 100 requirements is better than 1 iteration of 500 requirements.

Just like the asteroid killed off the dinosaurs in order to make room for the birds and the mammals, the dotcom bubble accelerated the retirement of a generation of middle managers and left behind a world less hospitable to the flourishing of heavyweight processes. RUP’s unwieldy bloatocracy had outlived its usefulness to the scrappy survivors of the Great IT Catastrophe of 2001. Furthermore, what you probably learned from executing 100 requirements in an iteration is that they have a funny way of multiplying into 200 requirements. Building 200 requirements at a time is not really that much more fun than building 500 requirements at a time; you just learn the truth a little faster (i.e., your budget is screwed and your quality will be awful).

Our faith in iteration remained unshaken, but something about the artifact-and-scope-driven approach of the 1990s was clearly not working. Fortunately, while the Object Establishment was busy making the enterprise safe for the Spiral Model, another group was determined to continue to drive the idea to a more extreme conclusion:

Maybe 50 iterations of 10 requirements is better than 5 iterations of 100 requirements.

and furthermore:

If 10 requirements typically take a team 2 weeks, maybe 50 iterations of 2 weeks is better than 50 iterations of 10 requirements.

My experience (and I believe that of many others) with scope-driven coarse-grained iteration is that it does not work well. On the other hand, the enduring popularity of Agile time-boxed iteration over the last eight years suggests very strongly that it does work well. The wheels turn slowly, but the champions of iteration were right all along. But this mixed history suggests a troubling question: why does 10 work when 100 doesn’t? What is a good size? What is the ideal size?

The answer to that question takes us right back to the beginning of our story. Back when the software development world was first trying to recreate the obsolescent mass-production culture of its day, the Japanese had already provided us with the answer. The ideal batch size is one. Over the last year, I’ve been operating software development kanban systems with and without Agile-style iterations. My goal as a lean workcell manager has been the realization of one-piece flow. What I’ve learned from my experience is:

In a well-regulated pull system, iterations add no value at all.

Just like RUP was a historical necessity to establish the legitimacy of the question, I believe that the first-generation Agile processes are a historical necessity to confirm that batch size is just as important to a development process as it is to a manufacturing process. And now that we know what the question really is, we find that we also know the answer. Iteration is only a transient concept to lead us to the deeper truths of pull and flow. The second generation of Agile processes will have no need for iterations as such. Continuous integration will be matched by continuous planning and continuous deployment. Live software systems will evolve before your eyes, and Little’s Law will rule the world.
