Accounting for opportunity and cost

A universal challenge for technology companies is having more demands to improve the product than we can implement.  This problem gets bigger as we get more successful.

Because our capabilities and those demands shift over time, maximizing the value we deliver to the customer means tackling work in prioritized order, knowing we simply won’t get to it all in any one release cycle.

A prioritized breakdown like this also better supports moving to a lean/kanban workflow.

How best to analyze and prioritize all these requests, then? We want to do it systematically and regularly, and have the priorities reflect (as best we can) the sweet spot between the market value of an idea and the engineering cost to create it (1).

This last part is difficult, but especially important.
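
To make the arithmetic concrete, here is a minimal sketch (in Python rather than a spreadsheet; the feature names and numbers are invented for illustration, not a prescription) of ranking candidates by estimated market value relative to estimated engineering cost:

```python
# A hedged sketch: rank candidate features by estimated market value relative
# to estimated engineering cost. Feature names and numbers are invented.
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    market_value: float      # relative value points, e.g. from marketing
    engineering_cost: float  # relative effort points, e.g. from engineering

    @property
    def score(self) -> float:
        # Higher score = more value per unit of effort
        return self.market_value / self.engineering_cost

backlog = [
    Candidate("Feature A", market_value=80, engineering_cost=40),
    Candidate("Feature B", market_value=70, engineering_cost=10),
    Candidate("Feature C", market_value=30, engineering_cost=15),
]

for c in sorted(backlog, key=lambda c: c.score, reverse=True):
    print(f"{c.score:5.1f}  {c.name}")
```

Of course, a ranking like this is only as good as the cost column, and that is exactly where things get tricky.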

Say we have a few alternative design ideas for satisfying a need. Because it may appear that “anything is possible” in software, we may be tempted to deliver exactly “what the customer wants.” Let’s say that’s design A. Unfortunately, developing design A could be a bad choice, because it could easily take many times the effort of a very similar and nearly as appealing design B, simply because it asks for some functionality that doesn’t fit neatly into our existing system and the components available around us. A fundamental principle of software engineering is that even a small shift in requirements can cause a huge shift in the cost and/or risk of development.

It’s similar to choosing to fit within the limitations of available off-the-shelf parts for a home remodel — obviously much cheaper than having parts custom-made.

But “off the shelf” is an unfortunately non-obvious, slippery concept in the shifting world of software.  To reuse something, whether it’s your own code or someone else’s, there’s often a chain of “if”s you have to walk down: “We can save 2 months of effort by using this nice library IF we assume customers want Windows .NET only; IF we use C#; IF we take a dependency on .NET Framework 3.5 SP1 to get the libraries we need; IF we only do features A, B, D, and E, and punt on C, because it would require we rewrite and replace the library…” and every such path is usually followed by a “BUT, if we do that, then we are locked into not doing …”.

In most cases, developers on your projects won’t know all the alternatives and consequences until they’ve delved deeply into the design and usually into the code.

This is one of many reasons why two strategies are so important in getting (re)prioritization right for software engineering:

  1. Include both marketing and engineering equally in the decisions of which features to prioritize.  Systematically account for both world views, while coming to one decision.
  2. Plan to adjust your feature set over the course of the project, as your team learns new things.  By allowing for small adjustments and sacrifices in the requirements, you can dramatically lower the project cost and risk.

#2 is common advice for a host of reasons.  #1 can be achieved with the right people and a structured decision-making method like perpetual multivoting, or a Delphi-style decision-making method that will be described in a future post.
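
As a rough sketch of what a multivoting tally might look like (the participants, vote budget, and votes here are invented for illustration; the Delphi variant will have to wait for that future post):

```python
# A hedged sketch of a multivoting tally: each participant distributes a fixed
# budget of votes across candidate features, and the running totals give a
# shared priority order. Names and vote counts are invented.
from collections import Counter

VOTE_BUDGET = 3  # assumed per-person budget

ballots = {
    "marketing":   {"Feature A": 2, "Feature C": 1},
    "engineering": {"Feature B": 2, "Feature A": 1},
    "sales":       {"Feature A": 1, "Feature B": 1, "Feature C": 1},
}

totals = Counter()
for person, votes in ballots.items():
    assert sum(votes.values()) <= VOTE_BUDGET, f"{person} exceeded the budget"
    totals.update(votes)  # add this person's votes to the running totals

for feature, count in totals.most_common():
    print(f"{count}  {feature}")
```

The idea, as I understand it, is that votes can be moved at any time and the tally re-run, which is what keeps the priority order current.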

When the right people are paired with the right process in this way, the spigot of innovation will open wider, benefiting both you and your customers.

Notes

(1) We actually want the sweet spot of the triumvirate of Voice of the Customer (VOC), Voice of Technology (VOT), and Voice of the Business (VOB) — so we can prioritize the work that will deliver the most value to our customer, with the lowest effort and risk, and with the best financial outcome for our company.  In many companies, we break these perspectives down (for better or worse) into specializations: sales represents the pure VOC, marketing the VOB, and engineering the VOT.  So the challenge is — finding a sweet spot means getting perspectives from several people, often with very different backgrounds and communication styles, who wouldn’t normally be caught dead getting drinks at the same pub with each other …


Spreadsheet example for a small Kanban team

We’ve been talking about room-sized visual heijunka/andon boards for large teams, but it’s also reasonable to apply the kanban idea to smaller projects. Much of the current discussion about kanban in Agile circles seems to be of a simpler variety than the enterprise-scale concepts that David and I have been so concerned with. I think it’s important to recognize that kanban is more of a principle or pattern than it is a process. It’s not at all surprising that it would take different forms, and I would completely encourage that way of thinking.

Lean Production is different from either Mass Production or Craft Production. You can apply a kanban system to a mass production system like the SDLC in order to move it in the lean direction. You can also apply a kanban system to a craft production system like Scrum in order to move it in the lean direction. From two very different starting points, you can end up at the same outcome.

A few years back, I was developing a training program on Lean and Theory of Constraints concepts for product development. We started managing the project using Scrum, but the subject matter practically begged us to evolve the process in a leaner direction.

There were a number of things that were bothering me about Scrum:
  • I wanted to change the backlog more often than the timebox allowed
  • At any given moment, only one item in the backlog needs to be prioritized. Further prioritization is waste.
  • I wanted a specific mechanism to limit multitasking
  • I hated estimating
  • I hated negotiating Sprint goals
  • Sprint planning implicitly encourages people to precommit to work assignments
  • The ScrumMaster role is prone to abuse and/or waste
  • Burndowns reek of Management by Objective
  • Preposterous terminology

The thing that I wanted most was the smoothest possible flow of pending work into deployment, and Scrum just didn’t give me that. So, I proposed a simple spreadsheet-based method:

  • A daily standup
  • A single (roughly) prioritized backlog
  • Each person on the team is responsible for exactly two work items by the end of any standup
  • Every work item is associated with a workflow, and work item status is indicated by workflow state
  • A work item requires some kind of peer review and approval in order to be marked complete
  • New items can be added to the backlog at any time
  • There is a regular project review
  • The backlog must be regularly (but minimally) re-sorted
  • Status reporting is by cumulative flow only
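
To make those rules concrete, here is a minimal sketch of the data model and the standup check behind them (Python rather than a spreadsheet; the workflow states, names, and items are invented):

```python
# A hedged sketch of the rules above as data: each work item carries a workflow
# state, and the standup check is that every team member is responsible for
# exactly two active items. States, names, and items are invented.
from collections import Counter
from dataclasses import dataclass
from typing import Optional

WORKFLOW = ["backlog", "in progress", "in review", "done"]

@dataclass
class WorkItem:
    title: str
    owner: Optional[str]  # unassigned while still in the backlog
    state: str

items = [
    WorkItem("Outline TOC module", "alice", "in progress"),
    WorkItem("Draft kanban exercise", "alice", "in review"),
    WorkItem("Record demo", "bob", "in progress"),
    WorkItem("Collect CFD data", "bob", "in progress"),
    WorkItem("Future idea", None, "backlog"),
]

assert all(i.state in WORKFLOW for i in items), "every item needs a valid workflow state"

def standup_check(items):
    """Return anyone not carrying exactly two items that are neither backlog nor done."""
    active = Counter(i.owner for i in items
                     if i.owner and i.state not in ("backlog", "done"))
    return {owner: n for owner, n in active.items() if n != 2}

print(standup_check(items) or "Everyone is carrying exactly two items.")
```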

One inspiration was a bit of folklore that programmer productivity peaks at 2 concurrent tasks. One task should be a high-priority primary task, and the other task should be a lower-priority task that you can work on when the first task is blocked.  Another inspiration was the Cumulative Flow Diagram (CFD). We had been applying the CFD to Sprints as an alternative to the burndown, which makes it perfectly obvious what Scrum’s limitations are with respect to flow. Limiting multitasking and deferring the binding of people to tasks could give us at least one smooth stripe on the diagram. The more you learn to use the CFD, the more useless the burndown chart seems. For all of the Scrum talk about feedback, the CFD is simply a better indicator.
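
The CFD bookkeeping itself is simple enough to sketch: for each day, count how many items have reached each workflow state or beyond (again, the states and dates below are invented):

```python
# A hedged sketch of cumulative-flow counting: for each day, count the items
# that have entered each state on or before that day. Dates are invented.
from datetime import date, timedelta

STATES = ["started", "reviewed", "done"]

# The date each item entered each state (None = not yet)
items = [
    {"started": date(2008, 3, 3), "reviewed": date(2008, 3, 5), "done": date(2008, 3, 6)},
    {"started": date(2008, 3, 4), "reviewed": date(2008, 3, 7), "done": None},
    {"started": date(2008, 3, 6), "reviewed": None, "done": None},
]

def cumulative_flow(items, start, end):
    day = start
    while day <= end:
        counts = {s: sum(1 for i in items if i[s] and i[s] <= day) for s in STATES}
        yield day, counts
        day += timedelta(days=1)

for day, counts in cumulative_flow(items, date(2008, 3, 3), date(2008, 3, 8)):
    print(day, counts)
```

Plot those per-state counts as stacked bands over time and you have the diagram.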

I’ve learned some things since then that enhance the original design. My notion of managing WIP was more of a CONWIP style like Arlo Belshee’s Naked Planning than the workflow style we use on the big boards. I required a defined workflow, but not the division of labor that would make internal inventory buffers necessary.  The Boehm priority scheme might blow out the lead time, so I might do 1 small item + 1 larger item, or perhaps make separate service classes. I might also add a constraint indicator to the workflow.

The revised spreadsheet might look something like this:


Tool for Creating a Bug Crushing Culture

Corey laid out an excellent introduction to the statistical magic of capture-recapture analysis to estimate the effectiveness of an inspection or review of code, documents, or other material.

But how can you practically get your teams to leverage these powerful feedback loops, leading them to conduct better and better reviews over time?

One thing that helps is a tool to get you started.

Here is a newly created, simple spreadsheet template that you can apply to any group document or code review.
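
If you want to see the arithmetic the spreadsheet is built around, the standard two-reviewer (Lincoln-Petersen) capture-recapture estimate looks roughly like this (a sketch only; the template itself may handle more cases):

```python
# A hedged sketch of the two-reviewer Lincoln-Petersen capture-recapture estimate.
def capture_recapture(found_by_a: int, found_by_b: int, found_by_both: int) -> dict:
    """Estimate total defects from two independent reviewers' findings."""
    if found_by_both == 0:
        raise ValueError("No overlap: the estimate is undefined (and the review is suspect).")
    estimated_total = (found_by_a * found_by_b) / found_by_both
    found_overall = found_by_a + found_by_b - found_by_both
    return {
        "estimated_total": round(estimated_total, 1),
        "found_so_far": found_overall,
        "estimated_remaining": round(estimated_total - found_overall, 1),
    }

# Example: reviewer A marks 12 defects, reviewer B marks 9, and 6 are the same defect.
print(capture_recapture(12, 9, 6))   # ~18 total, 15 found, ~3 still lurking
```

The fewer defects the reviewers find in common, the more the math suggests is still hiding in the material.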

Want to learn more? Get the template for this sheet and see how it works at http://leansoftwareengineering.com/capture-recapture-inspection/


Pugh Decision Matrix

Have you …

  • Had an important decision for which you’ve waffled between several viable choices?
  • Had a decision that split your team into camps with no consensus and poor buy-in?
  • Had a design decision or policy that kept being attacked or reconsidered, months or years down the road?
  • Been using Set Based Development — exploring several design alternatives, looking to pick the final choice for this version of the product at the “last responsible moment”?

A great decision making tool for this kind of situation is a Pugh Decision Matrix, with the technique often called Pugh Concept Selection. “Pugh” comes from its originator, Stuart Pugh.

Here’s an example of a spreadsheet, applying our variant of the technique. I was looking at alternatives for buying a cellphone here in the US. Based on what I’ve filled in so far, the Nokia 6682 with T-Mobile is the best choice.

So how does this work? The basic steps of the Pugh Concept Selection Process are:

  1. Brainstorm alternatives and list them across the columns of the sheet. Make one alternative the “default” — often it’s the “do-nothing” or status quo choice. This choice is rated zero for all criteria.
  2. Brainstorm the criteria and characteristics important to the customer. List them down the rows of the sheet.
  3. Begin filling in +1, 0, or -1 ratings in the main area of the sheet, based on whether that alternative is better than, equivalent to, or worse than the status quo for that criterion.
  4. If some criteria are more important than others, adjust the criteria weights. If some alternatives are much better or worse than others, adjust the ratings in the main area of the sheet accordingly. Don’t go overboard with this.
  5. Look at what the spreadsheet tells you is the best choice. Do you and the group feel good about that decision? If so, you’re done.
  6. If not, revisit steps 1–5 — do you have a complete set of criteria, or was something important to the decision missed? Are the weights you’ve assigned close enough?
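
If the spreadsheet mechanics aren’t obvious, here is a minimal sketch of the underlying arithmetic (the criteria, weights, and the second phone are invented for illustration, not the contents of my spreadsheet above):

```python
# A hedged sketch of the Pugh matrix arithmetic: each alternative is scored
# -1/0/+1 against a baseline for every criterion, criteria are weighted, and
# the weighted sums are compared. Criteria and weights are invented.
criteria = {          # criterion -> weight
    "price": 3,
    "battery life": 2,
    "camera": 1,
}

baseline = "keep current phone"   # scores 0 on every criterion by definition

ratings = {           # alternative -> {criterion: -1 | 0 | +1 vs. baseline}
    "Nokia 6682 / T-Mobile": {"price": 1, "battery life": 0, "camera": 1},
    "Phone B / Carrier X":   {"price": -1, "battery life": 1, "camera": 1},
}

scores = {baseline: 0}
for alt, marks in ratings.items():
    scores[alt] = sum(criteria[c] * marks.get(c, 0) for c in criteria)

for alt, score in sorted(scores.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{score:+3d}  {alt}")
```

The spreadsheet just does this sum for you, and keeps a record of the criteria and ratings the group agreed on.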

I’ve found this technique personally useful whenever a simple pro/con sheet didn’t cut it — and taught the technique to a few hundred people at a little company here in Redmond, WA. The response was often positive — “this is a great way to more methodically make a tough decision as a group, and leave behind a record of why we made it.”

Sound interesting? Jump to our Pugh Decision Matrix page to download templatized versions of this spreadsheet to try yourself. Thanks for any feedback you have!

