Division of Labor in Lean Software Development Workflows

Imagine we have a team of three people, each working as a generalist in an agile-style process. They are all qualified and competent workers, and they correctly execute the process they intend to follow. They break their work up into customer-valued features that each take a few days to complete through integration.

One developer is a true generalist. It takes her a couple of days to produce a testable functional specification and high-level design. It takes her a couple of days to produce a detailed design and working code. And it takes her a couple of days to verify and validate everything, from code correctness to functional acceptance.

Another developer is basically competent at all of these things, but he is a more stereotypically geeky programmer who can crank out high-quality code for most product features in a day, on average. It takes him a bit longer than the others to do the customer-facing part, usually about 4 days for analysis and high-level design. He’s also a bit slower with the validation, because again, if it ain’t writing code, he’s just not that excited about it. And for all the time he spends on specs, they are still mediocre, which results in rework in spite of his good coding skills.

The third developer, by contrast, has a sharp eye for design and is very friendly and sympathetic to the business and the customers. He knows the business so well that most of his specs only take a day to write. He’s a competent coder, but a bit old-school in his style, and it takes him a bit longer with the current technology. Plus, his heart isn’t quite in it the way it used to be. It takes him three days to develop good code that everybody will feel comfortable with. On the other hand, since his specs are so clean and thorough, and he has a good rapport with the business, the testing usually goes very smoothly in about two days, which ties for the best on the team.

The team, on average, produces features with a cycle time of 6.67 days per feature. Overall, each team member produces at a similar rate.

2d + 2d + 2d = 6d
4d + 1d + 3d = 8d
1d + 3d + 2d = 6d
-----------------
               20 days / 3 features = 6.67 days/feature

It is a one-piece flow (per developer), and everybody is always busy with his or her feature. Nobody ever has to wait to start a new feature. Other than the personal slack built into the task times, capacity utilization is high.
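
If you prefer to see that arithmetic as code, here is a minimal sketch in Python using the task durations from the example; the labels are just illustrative names, not anything from the process itself.

# One-piece flow: each generalist carries a feature through all three
# activities, so her cycle time is the sum of her own task durations.
generalists = {
    "true generalist":          (2, 2, 2),   # spec + design, code, verify + validate
    "geeky coder":              (4, 1, 3),
    "business-savvy designer":  (1, 3, 2),
}
cycle_times = {name: sum(tasks) for name, tasks in generalists.items()}
average = sum(cycle_times.values()) / len(cycle_times)
print(cycle_times)                            # each developer's days per feature
print(round(average, 2), "days per feature")  # 6.67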

But imagine if this team of generalists were allowed to focus only on the skills that they were best at:

1d + 1d + 2d = 4d
1d + 1d + 2d = 4d
1d + 1d + 2d = 4d
-----------------
               12 days / 3 features = 4 days/feature

Same people, same features, 40% improvement in productivity…

…if only it were that simple, because there is also a cost here. If they organize themselves as a pipeline, then that pipeline becomes subject to the Theory of Constraints. If they apply Drum-Buffer-Rope, then the slowest task, testing, becomes the drum and sets the pace at 2 days. That means the total cycle time per feature is:

2d + 2d + 2d = 6d

…still an improvement over the generalists, but only by 10% (!). On the other hand, capacity utilization is now low, because two people spend half of their time idle, waiting for the drum. Since they are the same people who were cross-trained enough to work as generalists in the first example, is there anything they can do to speed up the testing process, which has otherwise gone unimproved? Surely the answer must be yes.
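
To make the Theory of Constraints arithmetic explicit, here is a small sketch of the drum-paced pipeline, under the same simplifying assumption used above: every station is paced to the slowest one, so a feature spends one full drum beat at each of the three stations.

# Drum-Buffer-Rope, simplified: the slowest stage is the drum, every
# handoff waits for the drum beat, and a feature spends one beat per stage.
def drum_paced(stage_days):
    drum = max(stage_days)                        # testing, at 2 days
    cycle_time = drum * len(stage_days)           # 2d + 2d + 2d = 6d
    throughput = 1.0 / drum                       # features finished per day
    utilization = [d / drum for d in stage_days]  # busy fraction at each station
    return cycle_time, throughput, utilization

print(drum_paced([1, 1, 2]))   # (6, 0.5, [0.5, 0.5, 1.0])

The utilization figures make the idle time visible: the first two stations sit idle half of the time while testing sets the pace.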

Suppose each of the first two developers spends an extra half-day doing additional work to optimize the testing process, so that testing only takes 1.5 days to complete instead of two. Introducing a pipeline might also introduce a new communication cost, but imagine that the extra half-day spent by each of the first two developers goes into collaboration on the two features they have in process, both communicating and optimizing for testability.

The total labor expended is now 4.5 days per feature, but all of the idle time has been stripped out, so that the total cycle time per feature is also 4.5 days. That is a real 33% improvement in throughput. Same people, same features, same skills, 33% faster. It is only an example, but is it not a realistic example?
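
The same simplified pacing arithmetic, sketched with the rebalanced stage times:

# After rebalancing: the first two developers each absorb an extra half
# day, testing drops to 1.5 days, and the line is perfectly balanced.
stage_days = [1.5, 1.5, 1.5]
drum = max(stage_days)                   # 1.5 days
cycle_time = drum * len(stage_days)      # 4.5 days per feature
print(cycle_time, "days per feature")
print(round(1 - cycle_time / 6.67, 2))   # ~0.33 improvement vs. the generalists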

What about training?

An enthusiastic and observant Agilist might, by this point, object that we could improve the productivity of the team in the first example by improving their skills with training. That is indeed true. We could provide such training, and it may very well yield improvement.

We could also provide training to the team in the second example, which might also yield improvement. What sort of improvement might we expect in each example?

The generalist model suggests that we help each team member improve their weak skills to bring them up to par with the rest of the team. In any model, there is something to be said for cross-training because it facilitates communication and allows the business to adapt to change. But investing in training to overcome weaknesses is a classic management mistake.

In First, Break All the Rules, Buckingham and Coffman make a compelling and well-researched case that the best return on investment in training comes from enhancing a worker’s strengths, rather than overcoming his weaknesses. The geeky coder may have a lack of charm or graphic design sensibility that no amount of training can ever overcome, but picking up a new coding technique or web application framework might pay immediate dividends.

The more specialized team has another built-in advantage in training because they simply get more practice with their currently deployed skills. The analyst gets more training on analysis, which he already has an aptitude for *and* gets to spend all of his time practicing.

Back to our first team, imagine that we invest in generalist training, so that our

2 + 2 + 2 = 6
1 + 3 + 2 = 6
4 + 1 + 3 = 8

becomes

2 + 2 + 1 = 5
1 + 2 + 2 = 5
3 + 1 + 3 = 7

…for an improved average cycle time of (5+5+7)/3 = 5.67 days per feature. That assumes a generous return on the training, and it does yield a significant improvement.

What about our “invest in strengths” scenario for the first team?

2   + 2   + 1 = 5
0.5 + 3   + 2 = 5.5
4   + 0.5 + 3 = 7.5

…not as good! Even with a very generous 50% improvement for all, we only get 6 days per feature. But what about the specialized team? If our original:

1 + 1 + 2 = 4

becomes

0.5 + 0.5 + 1 = 2

…and then we add back some collaboration overhead:

1 + 1 + 1 = 3

…well, then we are just smoking the generalist team. These are contrived examples, but they should still illustrate some of the advantages that go to small lean teams over small craft teams. The most effective teams have complementary skills and personalities, not homogeneous ones. Otherwise, why organize into teams at all?
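
Here is the same kind of sketch putting the training scenarios side by side. The generalist figures are plain averages of each developer’s totals; the specialist figure is the balanced, strength-trained pipeline from above, collaboration overhead included, under the same simplified drum pacing.

# Training scenarios compared, using the numbers from the text.
def team_average(team):
    return sum(sum(tasks) for tasks in team) / len(team)

scenarios = {
    "generalists, untrained":      [(2, 2, 2), (1, 3, 2), (4, 1, 3)],
    "generalists, train weakness": [(2, 2, 1), (1, 2, 2), (3, 1, 3)],
    "generalists, train strength": [(2, 2, 1), (0.5, 3, 2), (4, 0.5, 3)],
}
for name, team in scenarios.items():
    print(name, round(team_average(team), 2), "days per feature")  # 6.67, 5.67, 6.0

# Specialists who train their strengths end up with a balanced 1 + 1 + 1
# pipeline: one feature finished every day, 3 days of cycle time apiece.
balanced = [1, 1, 1]
print("specialists, train strength", max(balanced) * len(balanced), "days per feature")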

Still not that simple? Enter the kanban

A problem with both examples is that they deal with average features. But nobody ever actually works on an average feature. They work on real features that can be averaged over time. Those features have variation, and sometimes a lot of it.

For this reason (and others), we don’t organize our workflow by role. We organize it by task or process, and let team members apply themselves to the workflow in the most efficient manner. They may even hand off the work at different transitions depending on the feature or the state of the pipeline. Such a soft division of labor preserves the efficiency advantage of each worker, while also allowing for variation and changes in circumstance. What matters most is that the work is done in the right order by the best resource available at the time, not who does what.

Pooling work-in-process according to the kind of asynchronous kanban system we’ve been discussing smooths out the flow of variable-duration work items, so that some of the variation in cycle times between processes is traded off for variation in inventory. Such a pooling strategy works better with more people than in our examples, and with more people than is currently common practice for agile teams. For a pipelined kanban system, we think that about 20 people is the sweet spot.
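
To see why pooling matters once feature sizes vary, here is a rough simulation sketch. The variability here (uniform, plus or minus 50 percent around the 1/1/2-day means) is a made-up assumption purely for illustration; the comparison is between a lockstep pipeline, where every handoff waits for the slowest station, and an asynchronous, buffered pipeline where finished work queues until the next station is free.

import random

random.seed(1)
N = 1000
MEANS = (1.0, 1.0, 2.0)   # analysis, development, testing (days)
# Each feature gets its own random duration at each stage (+/- 50%).
features = [[random.uniform(0.5 * m, 1.5 * m) for m in MEANS] for _ in range(N)]

def lockstep_makespan(jobs):
    # All three stations hand off together; each beat lasts as long as the
    # slowest work item currently on the line.
    n, stages = len(jobs), len(jobs[0])
    total = 0.0
    for beat in range(n + stages - 1):
        total += max(jobs[beat - s][s] for s in range(stages) if 0 <= beat - s < n)
    return total

def buffered_makespan(jobs):
    # A station starts a feature as soon as the feature arrives and the
    # station is free (the classic flow-line completion-time recursion).
    n, stages = len(jobs), len(jobs[0])
    done = [[0.0] * stages for _ in range(n)]
    for i in range(n):
        for s in range(stages):
            ready = done[i][s - 1] if s > 0 else 0.0
            free = done[i - 1][s] if i > 0 else 0.0
            done[i][s] = max(ready, free) + jobs[i][s]
    return done[-1][-1]

print("lockstep: one feature every", round(lockstep_makespan(features) / N, 2), "days")
print("buffered: one feature every", round(buffered_makespan(features) / N, 2), "days")

The buffered line’s pace approaches the bottleneck’s own average, while the lockstep line pays for every slow item at every station; the cost is the work-in-process sitting in the buffers, which is exactly the trade the kanban makes.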