…you’d rightly look upon them with suspicion. Anybody who participated in the rise and fall of the dotcom era should have a visceral understanding of that particular Voice of Insanity. For a time, credulous financial cheerleaders abandoned time-tested measures of value in favor of a “growth-at-any-cost” mentality that treated market share and projected future revenue as a substitute for profitability. But as they say, when you’re losing money on every transaction, you can’t “make it up in volume.”
Much like “dotcom valuation,” “software project accounting” is almost an oxymoron by historical standards. Most businesses don’t have a clue about how to assign value to the individual functions of software products. Degenerate-waterfall project management is mostly to blame for this, by creating an impenetrable hairball of process artifacts that totally obscure the marginal cost of production of new features. So most project accounting ends up as some black hole of operating expense with no visibility into variable costs.
Part of the genius of flow-based management, then, is precisely that it makes costs more visible.
The discipline of defining orthogonal features and subjecting them to end-to-end flow means that both the utility and the cost of each feature become observable. This gives a program- or engineering manager a powerful tool for managing his or her resources, in the form of the throughput equation:
Throughput = Work-in-process / Cycle time
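To make the arithmetic concrete, here is a minimal sketch of the equation (which is just Little's Law rearranged); the numbers are hypothetical:

```python
# Little's Law applied to a feature pipeline:
#   throughput = work_in_process / cycle_time
# Hypothetical numbers: 6 features in flight, 3 weeks on average
# from start of design to deployment.

def throughput(work_in_process: float, cycle_time: float) -> float:
    """Features completed per unit time."""
    return work_in_process / cycle_time

wip = 6           # features currently in process
cycle_time = 3.0  # weeks per feature, end to end

print(throughput(wip, cycle_time))  # -> 2.0 features per week
```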
In the right hands, such a tool is a force for dramatic productivity growth. But like most powerful tools, it also carries danger. Giddy with the excitement of new-found causality in pursuit of a goal, an eager engineering manager may realize that he can goose throughput by adding capacity.
Over the past three decades, Brooks’ Law has often helped to check the unconstrained growth of teams, by illustrating the limited benefit of adding capacity to a project already in process. But Brooks’ Law depends upon a construction paradigm of development, where a “project” is created to build a “product” with a pre-defined beginning and end. The flow paradigm, however, conceives of development as a continuous process of incremental delivery, like a design factory turning out an endless variety of new features. Such a factory, once started, will continue to operate as long as it is profitable to do so (or at least more profitable than the alternatives). And the design factory paradigm, properly managed, does not suffer from the limitations on scalability that Brooks so effectively described.
So, once our newly empowered manager realizes that he can add people more easily than before, he may set out to do precisely that. The incentive for managers to do this is obvious. The old culture of western corporate management rewards managers primarily for their scope of control. So, a manager of 1000 employees will likely be more highly compensated than a manager of 10 employees. Any economist will tell you that people respond to incentives, so if you incentivize empire-building, that’s what you will get. The danger of the throughput equation is that it may encourage managers to increase their organizational scope by increasing capacity, with little investment value to the business. Even if the manager is operationally competent and incurs no operational inefficiency from growth, he is still imposing an opportunity cost on the rest of the business by diverting resources from more profitable activities. While he might be producing more value, he is also producing proportionally more waste, which is pure cost.
A more capable manager may see that the other way to increase throughput is to reduce cycle time. An increase in throughput without an increase in inventory or operating expense is pure profit. You are producing more value with the same resources and less waste. So the second priority for the engineering manager (the first is process control!) is to identify any opportunity to reduce the time to design and deploy a typical feature. Our big tools here will be the Value Stream Map, the Theory of Constraints, and the Kaizen Event.
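To see why cycle-time reduction is the more attractive lever, compare the two ways of doubling throughput under the equation above (all numbers are hypothetical):

```python
def throughput(wip: float, cycle_time: float) -> float:
    """Features per unit time: work-in-process / cycle time."""
    return wip / cycle_time

base = throughput(6, 3.0)          # 2.0 features/week

# Lever 1: add capacity. WIP doubles, and so do inventory
# carrying cost and operating expense.
more_people = throughput(12, 3.0)  # 4.0 features/week

# Lever 2: halve cycle time. Same WIP, same headcount,
# same operating expense -- the gain is pure profit.
faster_flow = throughput(6, 1.5)   # 4.0 features/week
```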
The way I’d do it is: any time the design process reaches a state of sustained statistical control, that’s the time to hold a Kaizen Event. You should have used the time since the last event to map out your value stream and try to identify the constraint. Present your data to the team (and make sure they’re involved in gathering it in the first place) and let them go at it in a retrospective meeting. The ideas that have the most promising implications for relieving the bottleneck should be implemented immediately. Implementing the ideas might throw the system out of statistical control for a little while, so you’ll have to wait until it stabilizes and then…do it all again!
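One common way to operationalize “sustained statistical control” is an individuals (XmR) control chart over recorded cycle times. The following is only a sketch of that idea, not the author’s prescribed method, and the sample data are invented:

```python
from statistics import mean

def xmr_limits(samples: list[float]) -> tuple[float, float]:
    """Natural process limits for an individuals (XmR) chart:
    mean +/- 2.66 * average moving range."""
    moving_ranges = [abs(b - a) for a, b in zip(samples, samples[1:])]
    centre = mean(samples)
    spread = 2.66 * mean(moving_ranges)
    return centre - spread, centre + spread

def in_control(samples: list[float]) -> bool:
    """True when every observed cycle time falls inside the limits."""
    lo, hi = xmr_limits(samples)
    return all(lo <= s <= hi for s in samples)

# Hypothetical per-feature cycle times, in weeks.
cycle_times = [3.1, 2.8, 3.4, 3.0, 2.9, 3.2]
print(in_control(cycle_times))  # -> True: time to hold the Kaizen Event
```

When a point falls outside the limits, the process has a special cause to chase down first; only once the chart settles is it meaningful to attack the systemic constraint.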
At some point, you may hit a plateau of diminishing returns from this process. Or the market opportunity may be growing faster than your productivity (you should be so lucky!). Only in these cases is it reasonable to expand your capacity.