Boehm’s Spiral Revisited

Twenty years ago this month, in response to the problems associated with waterfall-style approaches to software projects, Barry Boehm proposed his Spiral Model of Software Development, which bore some resemblance to Deming’s “Plan, Do, Check, Act” cycle.

Boehm’s insights have had a huge positive impact on how we think about software development. But the spiral itself lost some of the beauty of Deming’s model: the simplicity, the self-similarity at different scales, and the balance between activities in the quadrants. That loss, perhaps, has caused Boehm’s model to be underused as a tool for introducing people to how creative engineering works. This is unfortunate, because the waterfall model, being more obvious, continues to be where most people start. Only later, after they have personally experienced the pain of struggling projects, do they search for a more appropriate model.

The Simple Spiral

Here is a simple love-child of Boehm’s and Deming’s views, one that has been very helpful to me in keeping a visual model in mind when thinking about how effective software development (or any creative engineering) really works.

[Figure: the simple spiral, with quadrants Customer, Plan, Design, and Test]

How does this work?

  • The more iterative your development process, the more times you spiral around.
  • You spiral inward from the high-level descriptions down to the lower-level implementation details. (Note: this directionality is inverted from Boehm’s; this model doesn’t try to convey the amount of cost or work in each loop around the spiral.)
  • As you spiral down, the activities change. “Design” at the high level might be on paper; as you spiral down, it is about turning those paper documents into executable code. The same goes for the other quadrants (see the sketch below).
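One way to picture those mechanics: the activities repeat on every pass, while the artifacts become more concrete. Here is a deliberately simplistic sketch in Python (the level names are mine, not from Boehm or Deming), just to show the shape of the loop:

```python
# A deliberately simplistic sketch of the spiral's shape: the same four quadrants
# repeat on every pass, while the work products become more concrete as we spiral in.
QUADRANTS = ["Customer", "Plan", "Design", "Test"]
LEVELS = ["vision", "feature list", "documents and diagrams", "executable code"]

for level in LEVELS:                 # each pass is one loop around the spiral
    for quadrant in QUADRANTS:       # the activities stay the same...
        print(f"{quadrant:8s} at the {level} level")   # ...but the detail deepens
```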

Quadrants

  • Customer. What does the customer think? In one of the better trends of the 20 years since Boehm’s paper, agile methodologies have recognized the customer as an essential, direct participant in the development process. We can try to guess what the customer will ultimately find valuable, but if we don’t regularly check back with them, we’ll get enough wrong to sink our product and company over time.
  • Plan. What do we plan to do? This includes requirements analysis, priorities, risks, and schedules. At the very high level, it may be corporate goals. At the very low level, it might be writing an automated functional test before writing the code to make that test pass (see the sketch after this list).
  • Design. How will we do it? At the high level, design is done via documents, diagrams, and discussion. At the lowest level, design is expressed as the executable code that constitutes the product.
  • Test. Have we done it right? At the high level, we discuss and review ideas and documents. At the lowest level, we execute tests against the functioning product.
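To make the lowest level concrete, here is a minimal sketch of plan-design-test at the scale of a single change: the “plan” is a small automated test written first, the “design” is the code that makes it pass, and the “test” is running it. The function and its behavior are hypothetical, chosen only for illustration.

```python
# A minimal, hypothetical example of the lowest-level loop:
# plan = a failing test written first, design = the code that makes it pass,
# test = running the suite to ask "have we done it right?"
import unittest

def monthly_price(annual_price: float) -> float:
    """Design, expressed as executable code: the behavior the test below plans for."""
    return round(annual_price / 12, 2)

class MonthlyPriceTest(unittest.TestCase):
    # Plan: written before the function existed, this states what "done" means.
    def test_divides_annual_price_into_twelve_payments(self):
        self.assertEqual(monthly_price(120.0), 10.0)

if __name__ == "__main__":
    unittest.main()   # Test: have we done it right?
```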

The four quadrants align with how we tend to specialize our people and organizations as we grow. In Microsoft’s organizational model, they align with customer, program management, development, and test.

The “Simplistic Spiral”

The simple spiral is useful because it is flexible enough to encompass many approaches to development. Take the waterfall model from the top of this post, wrap it into one loop through the spiral, and you get “the simplistic spiral.”

[Figure: the simplistic spiral, a single loop through Customer, Plan, Design, and Test]

Wouldn’t it be nice if projects could reliably just work this way?

But we know that if we apply this model to a large product, we are nearly certain to have a disaster on our hands, with many assumptions made during planning discovered to be poor during design or test.

But apply it to a queue of appropriately sized (small) and well-understood functional requests, and it may be an appropriate model for each kanban in a lean production workflow (or perhaps each kanban should be two or more loops around the spiral). In any case, the model is helpful for visualizing all these cases.

The “Product Spiral”

It also helps us create useful visualizations of product lifecycle models at various scales.

[Figure: nested spirals at the Company, Product, Project, Feature, and Change scales]

At the company level, we are constantly cycling: seeing what the customer/market reaction is to our products, planning new products and enhancements, designing and testing them.

At the product level, it’s the same, but from the perspective of evolution over a lifecycle. Even when a product enters later lifecycle phases like maintenance, the model still holds: the customer finds bugs; we prioritize, fix, and test them; and the fix goes back to the customer for the cycle to start again.

At the feature level, we recognize that each feature has a lifecycle of its own. Before committing a feature to a product, all aspects (including feedback from the customer) should be covered, likely over several iterations. Perhaps one of the secrets of success of many open source ecosystems is that they encourage or allow individual features to evolve independently and iteratively before committing to integrate them.

At the change level, we have gotten the granularity small enough that each change may appear to be its own mini-waterfall of plan, design, and test. But even that is a simplification — chances are, the person making the change looped through many interrelated planning, design, and test alternatives in their head before committing the change.

Applications

In future posts, we’ll apply this model to take a look at other aspects of the engineering process.

So — is this a useful model for thinking about engineering, particularly software? Or is this model dangerously simplistic for helping to think about how your organization works?


Spiraling Into Control

[This was originally posted March 7, 2005 on my personal blog. Stumbled across it, and felt it might deserve a post here] 

The other week I attended a panel discussion. This was part of a pilot of a larger course for program managers. Before the panel began, the instructor was relating a conversation in which a developer on a project spoke of it as “spiraling into control.” The instructor left open the possibility that he thought the developer was crazy. The room of program managers laughed. Thoughtfully.

What a great phrase.

We want to clamp down to get our projects under control. Enforce rules to make things repeatable. We want to keep our teams on the shortest path from A to B.

But the interesting projects haven’t been to B before.

And they’re dealing with shifting human dynamics on the team, discoveries that don’t reveal themselves until uncomfortably late, and a world around them that isn’t standing still.

We can’t leap to control and stay there. We can only spiral close. And, with constant effort and feedback, we hope to stay close.

Short iterations. Small increments. Minimized work-in-progress. Communication and retrospection. Discretion to adjust the process to keep breaking bottlenecks.

Control through feedback, not prescription.


Better estimates with Wideband Delphi

Dates and deadlines are an essential human element of project management. People work better if they have challenging but realistic schedules to work against.

The trick is “challenging but realistic.” In software, we know there is wide variation in our estimates because we are almost always creating something unique (if it’s been done before, we just copy the bits). And we have a systematic underestimation bias, because there are lots of ways to cut scope or cut corners in software, and the intangible nature of it all makes anything seem possible.

Unfortunately, when schedules are no longer realistic, they will quickly destroy a project, causing cynicism, demotivation, short-cuts, bad decisions, unwillingness to respond to new information, loss of honesty and trust, and other problems that will fester. We could avoid these pitfalls if only we could estimate better. There are good books on this, including McConnell’s Software Estimation: Demystifying the Black Art and Wiegers’ Practical Project Initiation.

Out of all the techniques covered in those books, one widely used technique is particularly effective for those critical early estimates of large, not-yet-well-understood projects: the estimates upon which we base our go/no-go decisions and early expectation-setting for upper management and customers.

It’s called Wideband Delphi, and here is a simple spreadsheet template and guide for the Wideband Delphi technique. Take a look, and let us know if this is useful to you and your groups.
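For readers who want the mechanics without opening the spreadsheet: the heart of the technique is several rounds of anonymous individual estimates, with discussion of assumptions between rounds, until the spread narrows. Here is a minimal sketch of that aggregation; the numbers, names, and convergence threshold are illustrative assumptions of mine, not taken from the template.

```python
# A minimal sketch of Wideband Delphi's round-by-round aggregation
# (names, sample numbers, and the convergence threshold are hypothetical).
from statistics import median

def summarize_round(estimates):
    """Summarize one anonymous estimation round; discuss the outliers' assumptions,
    then estimate again until the spread is acceptably narrow."""
    low, high = min(estimates), max(estimates)
    mid = median(estimates)
    return {"low": low, "median": mid, "high": high,
            "converged": (high - low) <= 0.25 * mid}

# Example: three rounds of individual estimates (person-weeks) for one work item.
rounds = [
    [6, 10, 22, 14],   # wide spread: estimators hold different assumptions
    [9, 12, 16, 13],   # discussion narrows the range
    [11, 12, 14, 12],  # close enough to commit to a range, not a single number
]
for estimates in rounds:
    print(summarize_round(estimates))
```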


Not all activities are best handled by generalists

This is part 6 of 10 Pitfalls of Agility on Large Projects. In part 5, we talked about how hundreds of people can’t check into “main” every day, and what to do about it.

When it comes to generalists vs. specialists, there are huge benefits to maximizing our use of generalists for creative/design engineering — in contrast to manufacturing activities, where specialization is always the way to go.

But there is a limit to how far you can take generalists, and there is a pro-specialist argument for design, too, as Corey lays out in his thoughtful one-piece flow post series.

I may not buy the broad knocks against “craft production” in that post, or the claim that systems designed around generalists are unlikely to qualify as software engineering. But there is clearly a crossover point where adding specialists of various types starts becoming a win, and eventually a big win. A good way to analyze where specialization can help is to think carefully about a lean production flow.

Hand-offs, error, and waste: first, the case against specialization

Specialization in design suffers from a catch-22 problem: Design specifications are not truly complete until they are rendered at full detail (in software, that means coded). But incomplete specification causes errors during hand-offs due to misunderstandings and disagreements. Those misunderstandings and the time to resolve them are magnified to a surprising extent by hand-offs between specialists who don’t share a common foundation of knowledge or perspective.

[Figure: Boehm’s spiral]

And because design is a knowledge discovery and creation process, we can’t actually specify a design fully up front (if we could, it would mean our design isn’t forging much new ground). The most effective solution is to spiral down into a design à la Boehm: breadth-first, circling around all aspects of a project (these aspects are potential specialized roles) and refining the design from high-level to low-level details, with depth-first spikes into areas with more unknowns or risk.

[Figure: value stream map]

Every specialist involved in the design requires an additional step in our value-stream map, repeated for each loop we take around the spiral (with some opportunity for parallel processing). Tracing that value stream makes it clear how quickly this can explode our cycle time, especially the waiting time for each specialist to get back with feedback.
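To make the cycle-time concern concrete, here is a rough back-of-the-envelope calculation; every number in it is hypothetical, chosen only to show how waiting on hand-offs, multiplied by loops around the spiral, comes to dominate the total.

```python
# A rough, illustrative calculation (all numbers hypothetical) of how hand-offs
# between specialists inflate cycle time across loops of the spiral.

def cycle_time(loops, handoffs_per_loop, work_days_per_handoff, wait_days_per_handoff):
    """Total elapsed days: every loop pays work plus queueing for each hand-off."""
    return loops * handoffs_per_loop * (work_days_per_handoff + wait_days_per_handoff)

# A couple of generalists: few hand-offs, little waiting between them.
generalists = cycle_time(loops=4, handoffs_per_loop=2,
                         work_days_per_handoff=2, wait_days_per_handoff=0.5)

# Five specialists in the design loop: more hand-offs, and each one queues.
specialists = cycle_time(loops=4, handoffs_per_loop=5,
                         work_days_per_handoff=2, wait_days_per_handoff=3)

print(generalists)   # 20.0 days
print(specialists)   # 100 days for the same work, dominated by waiting
```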

And every specialist is a resource that now needs to be balanced, a potential bottleneck. In a generalist-dominated system, it is much easier to attack bottlenecks by shifting resources.

Perhaps most importantly, as we add specialists with different backgrounds, we have less project knowledge that can be implicit or informal: to reduce misunderstanding, we have to capture more of that knowledge explicitly on paper or via longer, bigger meetings. Achieving consensus on decisions takes much longer. None of this directly adds any value for the customer.

So when does specialization start making sense?

  1. When we can get the benefits of both generalization and specialization. We do this by hiring generalists, but having them rotate into specialized roles (like test, project management, architecture, etc.) for the duration of a project or two. This is an excellent strategy to gain the benefits of focus through specialization, while retaining a flexible, low-overhead organization. TSP is a well-known adopter of this strategy. This is also a variation of the recommendation here of division of labor in lean software development workflows.
  2. As our projects grow, we can bring in specialists that help the project without gating the inner design loop.
    • System test is one of the first and most common of these opportunities. Because at this point the system has been fully specified (that is, the implementation is “complete,” if not error-free), there is a clear, specialized role to fulfill without impacting the design loop.
    • Back-end value adding steps like localization (assuming the core group of generalists knows how to create a localizable product).
    • Specialized roles which are orthogonal to the design loop, like project management. Note that some companies make the mistake of defining hybrid roles like “Program Manager” that encompass project management, requirements analysis, high level design, etc. (Microsoft being a poster child). But this causes all the problems of specialists in the design loop.
  3. Finally, specialization is unavoidable when we can no longer find or afford generalists who can handle the typical roles in our project (interfacing with the customer, design, implementation, effective testing, a degree of self-management, etc.). Some organizations (Google would be a poster child) scale to hundreds or thousands of people without having their hiring and organizational structure over-specialize along those lines. But we’re not all Google. When our generalists have had trouble over time covering some aspect of the project, it’s time to accept the other costs to get the benefits of specialists focused on those problems.


Hundreds of people can’t check directly into main every day

This is part 5 of 10 Pitfalls of Agility on Large Projects. In part 4, we talked about how the customer doesn’t want a release every month, and what to do about it.

Chris Seiwald and Laura Wingerd have a great paper, out for almost a decade now, that describes best practices for branching on large projects.

Here’s a similar (but ugly, hand-drawn) take on the key diagram in that article, showing the criterion for branching and checkins: create a separate branch whenever the risks or goals of a set of checkins deviate significantly from the standards of the current main codeline.

[Figure: branching model]
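The decision rule in that diagram can be stated almost mechanically. Here is a minimal sketch of it; the codeline names, risk scale, and thresholds are hypothetical, purely to illustrate the criterion.

```python
# A minimal sketch of the branch-vs-main decision described above
# (goal names, risk scale, and thresholds are hypothetical).
from dataclasses import dataclass

@dataclass
class ChangeSet:
    goal: str    # e.g. "next release", "experimental rewrite", "urgent hotfix"
    risk: int    # 1 (routine) .. 5 (destabilizing)

# Hypothetical standards for the main codeline: the goal it serves and how much
# risk a single set of checkins is allowed to introduce.
MAIN_GOAL = "next release"
MAIN_MAX_RISK = 2

def target_codeline(change: ChangeSet) -> str:
    """Branch whenever the change's risk or goal deviates from main's standards."""
    if change.goal != MAIN_GOAL:
        return f"branch for '{change.goal}'"        # different goal: its own codeline
    if change.risk > MAIN_MAX_RISK:
        return "development/stabilization branch"   # too risky to land directly on main
    return "main"

print(target_codeline(ChangeSet(goal="next release", risk=1)))          # -> main
print(target_codeline(ChangeSet(goal="experimental rewrite", risk=4)))  # -> branch
```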

The model requires a source code control system with good branch/merge support. Chris’ Perforce has been the best commercial choice for some time (because of its simplicity and scalability), but open source alternatives have been getting better. Subversion is a great open source choice but has had subpar merging; this is slated to improve in the upcoming version 1.5. And the distributed SCM family of tools (git) makes merging its forte.

In a future continuation of this post, we’ll look at tying branching best practices into the larger picture of robust continuous integration.

