In the previous post we considered the notion of synchronizing a design workflow like an assembly line in order to realize our ideal case. A little skepticism about such an idea is surely justified, and the simplest interpretation of that idea has some undesirable consequences. Nonetheless, the idea still offers a lot of room to explore, and I am the curious sort, so let’s see how far we can take it!
It is natural to want to align a design workflow with the logical boundaries of the activities involved. However, if we are trying to synchronize work, it is unlikely that the logical boundaries of tasks will align well with the clock:
…which means that people will always be waiting for the bottleneck to finish its work:
It should be possible (even desirable) to break large activities into smaller, similarly-sized pieces:
You might even interleave some of the activities in order to smooth out the flow of information from one brain to another:
If the variation in the completion time of each of the tasks is under control, then the pipeline can flow.
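As a toy illustration of that claim (a hypothetical model, not anything from a real workflow): treat the pipeline as synchronous, so the slowest task in each clock tick sets the pace, and any task that overruns its tick stalls the whole line. Controlled variation keeps every task inside the tick; uncontrolled variation stalls the line almost constantly.

```python
import random

random.seed(1)

def simulate(stages: int, ticks: int, time_fn) -> float:
    """Fraction of ticks in which the synchronized line stalls.

    Toy model: each of `stages` tasks draws a completion time per tick;
    if the slowest one exceeds the tick length (1.0), everyone waits.
    """
    stalls = 0
    for _ in range(ticks):
        slowest = max(time_fn() for _ in range(stages))
        if slowest > 1.0:
            stalls += 1
    return stalls / ticks

low_var  = lambda: random.uniform(0.8, 1.0)  # tasks sized to fit the tick
high_var = lambda: random.uniform(0.5, 1.3)  # uncontrolled variation

print(simulate(stages=6, ticks=10_000, time_fn=low_var))   # 0.0 -- never stalls
print(simulate(stages=6, ticks=10_000, time_fn=high_var))  # ~0.94 -- stalls most ticks
```

With six stages, even a modest chance of any one task overrunning compounds into a near-certain stall each tick, which is why taming per-task variation matters so much here.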
A long pipeline of small steps will carry a lot of work-in-process. The cost of a pipeline stall will be lower, but the probability of a stall will be higher. Considerable slack may be needed to buffer variation in the cycle time for component tasks. However, related tasks can be combined into task groups:
…where the task group is internally self-organized and externally synchronized:
By recombining things in this way, we can also apply Critical Chain-style buffering to each task group, reducing the total amount of buffering required to keep things moving.
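To see why pooling buffers at the group level saves slack, here is a sketch using the square-root-of-sum-of-squares heuristic sometimes used for sizing Critical Chain buffers (the task safety figures are hypothetical): because independent variations partially cancel, one shared buffer can be much smaller than the sum of per-task buffers.

```python
from math import sqrt

def pooled_buffer(task_safeties: list[float]) -> float:
    """One shared buffer for a task group, sized by the
    square-root-of-sum-of-squares heuristic: independent
    overruns are unlikely to all land at once."""
    return sqrt(sum(s ** 2 for s in task_safeties))

per_task = [2.0, 2.0, 2.0, 2.0]  # hypothetical safety margin per task (hours)

print(sum(per_task))             # 8.0 -- buffering every task separately
print(pooled_buffer(per_task))   # 4.0 -- one shared buffer for the group
```

Half the slack for the same protection, in this example, which is the appeal of synchronizing at the group boundary rather than at every task boundary.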
If all work is moving through a single pipeline, then a stall in that pipeline will disrupt everything. The penalty for a pipeline stall is reduced if there is more than one pipeline. Additionally, a single pipeline can only carry one pipeline’s worth of capacity. We can expand capacity and smooth out disruptions at the same time by adding a second pipeline:
The capacity of a single unstalled pipeline will be 100% minus whatever buffering is needed to balance stalls against slack time; suppose that leaves 80%. If a lone pipeline stalls, capacity drops to 0%. If one of two pipelines stalls, capacity is still 40%; if one of three stalls, it is 53%; and so on. Simultaneous stalls also become less likely: if any one pipeline has a 25% chance of stalling in a given clock tick, the chance that both of two pipelines stall in the same tick is only about 6%.
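That arithmetic can be checked in a few lines (the 80% full capacity and 25% stall chance are just the example figures above):

```python
FULL_CAPACITY = 0.80  # 100% minus the slack reserved for buffering
STALL_P = 0.25        # example chance that one pipeline stalls in a tick

def capacity(pipelines: int, stalled: int) -> float:
    """Fraction of full capacity left when `stalled` pipelines are down."""
    return FULL_CAPACITY * (pipelines - stalled) / pipelines

print(capacity(1, 1))            # 0.0  -- a lone pipeline, stalled
print(capacity(2, 1))            # 0.4  -- one of two stalls
print(round(capacity(3, 1), 2))  # 0.53 -- one of three stalls
print(STALL_P ** 2)              # 0.0625 -- both of two stall at once
```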
Management overhead will scale linearly for a while. Will management overhead eventually scale to a point where a different organization is more efficient? Most likely.
We’re getting pretty creative with our efforts to make this pipeline idea work! I don’t know if we’ll ever be able to control work order variation enough to make this viable, but we’ve certainly identified some ideas that are worth exploring further. Next time, we’ll relax the requirement for synchronization and look at more ideas about using buffers to smooth out the variation between tasks and work orders.