Pool queue

Manufacturing systems have workflows, and knowledge work systems have workflows (and little lambs eat ivy). Some principles apply to workflows in general, regardless of whether they operate on bits or atoms, and that accounts for much of what we discuss here at Lean Software Engineering. But some things are completely different about information workflows. One of them is the physical space necessary to operate the system: the nature of information space is fundamentally different from that of any physical process.

Fortunately for us, that often works to our advantage. It means we can manipulate our workflows and work products in ways that would be nonsensical to a traditional industrial engineer. Since most of the literature about Lean is still about moving atoms around, you have to pinch yourself every now and then as a reminder that moving bits around involves a different set of rules.

Bits or atoms, the notion of an inter-process inventory buffer is generally important to our scheduling methodology. Our overall goal is to minimize lead times for new work requests, and a great part of how we do that is by managing our in-process inventory very carefully. But an information inventory is different from a manufacturing inventory, in that it doesn’t occupy exclusive space in a meaningful way. Our information WIP might go into a virtual queue, effectively infinite in size, with no definite order for queuing or dequeuing, and no conflict between objects in the queue. A virtual queue can be random-in-random-out in a way that’s improbable for more spatially-oriented storage.
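To make the contrast concrete, a virtual queue can be modeled as nothing more than an unordered set of tickets. This is a minimal Python sketch (the class and ticket names are illustrative, not from any particular tool):

```python
import random

# A virtual queue: no capacity imposed by physical space, no fixed slot
# ordering, and no contention between items sitting in it.
class VirtualQueue:
    def __init__(self):
        self.tickets = set()          # unordered: no slots, no lanes

    def enqueue(self, ticket):
        self.tickets.add(ticket)      # never "full" for spatial reasons

    def dequeue(self):
        # Random-in-random-out: pull any ticket, not necessarily the oldest.
        ticket = random.choice(sorted(self.tickets))
        self.tickets.remove(ticket)
        return ticket

q = VirtualQueue()
for t in ["bug-17", "feature-3", "bug-22"]:
    q.enqueue(t)
picked = q.dequeue()   # any of the three: ordering is a policy, not a constraint
print(picked)
```

The point of the sketch is that any queue discipline (FIFO, priority, random) is a deliberate policy choice layered on top, rather than a constraint imposed by the storage itself.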

An issue that seems to come up regularly for development teams is how to distribute multiple work product types across the team’s resources. One approach is to dedicate resources to each product type: say, a couple of “feature teams” and some bug fixers, or a “front end” team and a “back end” team. Another approach is to define a prioritization rule and assign all of the work to a common team. A kanban system enables a hybrid approach that dedicates capacity to each work product type without actually dedicating people.

Suppose we have a fairly simple, generic, 2-stage development process, common to all work product types:

Because it’s knowledge work, there’s too much variation between the two subprocesses to synchronize according to a clock interval, so we make an inter-process queue to absorb the variation:

The queue just holds the kanban; the actual inventory is still sitting in the same document, database, or code repository it was in when somebody was working on it. It doesn’t matter where the real inventory is, because nobody is competing for the storage.
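That separation of signal from inventory is easy to express in code: the kanban is just a token that records where the work product actually lives. A small sketch (the field names and locations below are made up for illustration):

```python
from dataclasses import dataclass

# The kanban is a signal, not the work product itself: it only records
# what the work item is and where the real inventory sits.
@dataclass(frozen=True)
class Kanban:
    work_item: str    # e.g. a ticket id
    location: str     # the document, database, or repo where the work sits

queue = []            # the queue holds tokens, not artifacts
queue.append(Kanban("feature-3", "repo:main, branch feature-3"))
queue.append(Kanban("bug-17", "docs/spec-draft.md"))

# Nothing moved when we enqueued: the code and documents stayed put.
print([k.work_item for k in queue])
```

Enqueuing and dequeuing only moves these lightweight tokens around; the inventory itself never changes location, which is exactly why the buffer needs no exclusive space.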

Then we scale that process according to the available resources and demand:

But we can hybridize even further by exploiting some of our “virtual space” advantage. Because our “workcells” and “buffer stocks” don’t occupy any spatially constrained floor space, we can arrange them in whatever logical configuration suits us. In this case, we’re going to make a single pooled buffer that straddles both production lines:

Why would we do that? Pooling the variation across the queues for both lines allows us to reduce the total number of kanban in the system, and thereby reduce the lead time for the system as a whole. The dedicated queues each needed a minimum capacity of 2, for a total of 4, to avoid stalling. The combined queue only needs 3 to avoid stalling, because it is rare for both independent queues to be at their limit of 2 simultaneously. We can shrink the queue further by reducing the variability of either of the surrounding processes. And again, it will be easier to reduce from 3 to 2 than it would have been to reduce from 2 to 1.
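You can check the intuition with a small Monte Carlo sketch. The completion probabilities below are assumptions for illustration (the article gives no rates); each tick, each line’s upstream stage tries to push a finished item into the buffer and its downstream stage tries to pull one out, and we count how often the upstream stage is blocked:

```python
import random

random.seed(0)
TICKS = 100_000
P_DONE = 0.5   # assumed chance a stage finishes an item in a given tick

def simulate(pooled, cap_each=2, cap_total=3):
    """Return the upstream-blocked rate per tick for two production lines."""
    q = [0, 0]          # buffer occupancy for line A and line B
    stalls = 0
    for _ in range(TICKS):
        for i in (0, 1):
            # Upstream tries to push a finished item into the buffer.
            if random.random() < P_DONE:
                full = (q[0] + q[1] >= cap_total) if pooled else (q[i] >= cap_each)
                if full:
                    stalls += 1    # upstream blocked: buffer at its limit
                else:
                    q[i] += 1
            # Downstream pulls an item if one is waiting.
            if random.random() < P_DONE and q[i] > 0:
                q[i] -= 1
    return stalls / TICKS

print(f"dedicated 2+2 stall rate: {simulate(pooled=False):.3f}")
print(f"pooled 3 stall rate:      {simulate(pooled=True):.3f}")
```

Under these assumed rates, the pooled buffer of 3 gives up little compared to the dedicated pair’s 4 total slots, precisely because the pooled design only blocks when the combined occupancy hits its limit, not whenever one line’s private queue happens to fill.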