
Software development is like golf …

[Image: the 18th hole]

Software development is like golf in that choosing your path to the pin is the primary problem.

The choice involves weighing your capabilities (people), the course (technology), and the conditions (market).

Confident you can do anything?  Bring out the big guns and send it over that treeline on the left. Even if you make it, you must land it among that sea of traps. This is the most common choice in software development, and the #1 reason we so frequently turn par 5s into 10s, 15s, and 20s.

Don’t have a team of Tigers?  Then go long down the center, if you can avoid that large sand trap on the left and the lake beyond it. But is this abstract little map accurate enough?  Does it tell us about the 50 yards of soggy turf there in the middle?  How about the wind howling left to right over the lake?  Unless you’ve played this hole before, you don’t know.  What makes software development so tricky is that, by definition, we never play the same hole twice.  Technology is changing around us, even while we make our own changes.

Want to make sure you don’t turn a par 5 into a 10?  Land it short of the traps on the fairway and take it from there with comfortable strokes. There’s a price to pay for this caution: no holes-in-one for you.  You’re less likely to be a hero this way, but in team-scale software development, the hero model will fail you as surely as a week in Vegas anyway.

And, sometimes, the conditions are particularly bad.  In the downturn of 2008/2009, one could say the conditions for technology products were analogous to “a raging hurricane, with a chance of nearby volcanic eruption and ash over the course.”  So take care. Be humble.  And may you keep both your work in progress and your golf scores down.

(Apologies to all software developers offended by an analogy from the “sport of salespeople and CEOs.”  Think of it as an analogy to bridge the divide.)


Accounting for opportunity and cost

A universal challenge for technology companies is having more demands to improve the product than we can implement.  This problem gets bigger as we get more successful.

Our capabilities and those demands shift over time. If we want to maximize the value of what we deliver to the customer, we need to tackle things in a prioritized fashion, knowing we simply won’t get to it all in any one release cycle.

And this prioritized breakdown better supports moving to a lean/kanban workflow.

How best to analyze and prioritize all these requests, then? We want to be able to do it systematically and regularly, and have these priorities reflect (as best we can) the sweet spot of both the market value of an idea and the engineering cost to create it (1).

This last part is difficult, but especially important.
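As a crude first pass, this sweet spot can be sketched as a value-to-cost ranking. The feature names and 1–10 scores below are invented for illustration; a real backlog would get its numbers from the stakeholders described later in this post:

```python
# Hypothetical sketch: rank feature requests by value-to-cost ratio.
# Names and scores are invented, not from any real backlog.
features = [
    # (name, market_value 1-10, engineering_cost 1-10)
    ("single-sign-on", 8, 5),
    ("dark-mode", 3, 1),
    ("offline-sync", 9, 9),
    ("csv-export", 5, 2),
]

# Higher value per unit of cost floats to the top of the backlog.
ranked = sorted(features, key=lambda f: f[1] / f[2], reverse=True)

for name, value, cost in ranked:
    print(f"{name}: value={value} cost={cost} ratio={value / cost:.2f}")
```

The ratio is deliberately simplistic — it ignores risk and dependencies — but it makes the cost side of the equation impossible to forget.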

Say we have a few alternative design ideas for satisfying a need. Because it may appear “anything is possible” in software, we may be tempted to try to deliver exactly “what the customer wants.” Let’s say that’s design A.  Unfortunately, developing design A could be a bad choice: it could easily take many times the effort of a very similar and nearly as appealing design B, simply because it asks for some functionality that doesn’t fit neatly into our existing system and the components available around us.  A fundamental principle of software engineering is that even a small shift in requirements can cause a huge shift in the cost and/or risk of development.

It’s similar to choosing to fit within the limitations of available off-the-shelf parts in a home remodel — obviously much cheaper than going custom.

But “off the shelf” is an unfortunately non-obvious, slippery concept in the shifting world of software.  To reuse something, whether it’s your own code or someone else’s, there’s often a chain of “if”s you have to walk down: “We can save 2 months of effort by using this nice library IF we assume customers want Windows .NET only; IF we use C#; IF we take a dependency on .NET Framework 3.5 SP1 to get the libraries we need; IF we only do features A, B, D, and E, and punt on C, because it would require we rewrite and replace the library…” and every such path is usually followed by a “BUT, if we do that, then we are locked into not doing …”.

In most cases, developers on your projects won’t know all the alternatives and consequences until they’ve delved deeply into the design and usually into the code.

This is one of many reasons why two strategies are so important in getting (re)prioritization right for software engineering:

  1. Include both marketing and engineering equally in the decisions of what features to prioritize.  Systematically account for both world views while coming to one decision.
  2. Plan to adjust your feature set over the course of the project, as your team learns new things.  By allowing for small adjustments and sacrifices in the requirements, you can dramatically lower the project cost and risk.

#2 is common advice for a host of reasons.  #1 can be achieved with the right people and a structured decision-making method, like perpetual multivoting or a Delphi method, which will be described in a future post.
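A multivoting round is simple enough to sketch: each participant gets a fixed budget of votes to spread across candidate features, and the tally merges the marketing and engineering views into one ranked list. All participant names, features, and ballots below are invented:

```python
from collections import Counter

# Hypothetical multivoting round. Each participant may cast up to
# VOTE_BUDGET votes, and may stack several votes on one feature.
votes = {
    "marketing-ann":  ["offline-sync", "offline-sync", "csv-export"],
    "marketing-bo":   ["single-sign-on", "csv-export", "csv-export"],
    "engineering-cy": ["csv-export", "dark-mode", "single-sign-on"],
    "engineering-di": ["csv-export", "single-sign-on", "dark-mode"],
}

VOTE_BUDGET = 3
assert all(len(ballot) <= VOTE_BUDGET for ballot in votes.values())

# One combined tally across both marketing and engineering voters.
tally = Counter(choice for ballot in votes.values() for choice in ballot)

for feature, count in tally.most_common():
    print(feature, count)
```

The value of the exercise is less the arithmetic than the conversation it forces: both world views cast votes in the same currency.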

When the right people are paired with the right process in this way, the spigot of innovation will open wider, benefiting both you and your customers.

Notes

(1) We actually want the sweet spot of the triumvirate of Voice of the Customer (VOC), Voice of Technology (VOT), and Voice of the Business (VOB) — so we can prioritize the work that will deliver the most value to our customer, with the lowest effort and risk, and with the best financial outcome for our company.  In many companies, we break these perspectives down (for better or worse) into specializations: sales represents the pure VOC, marketing the VOB, and engineering the VOT.  So the challenge is that finding a sweet spot means getting perspectives from several people, often with very different backgrounds and communication styles, who wouldn’t normally be caught dead getting drinks at the same pub with each other …


My favorite kanban development example

It has been a long time in the works, but Clinton Keith’s article on kanban systems for game development is finally up. It is an excellent description of the why and the how of organizing a process for content-intensive development. Clint has managed to do something which I previously thought improbable: setting a takt time pace for development activities.
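Takt time is just available production time divided by demand — the classic lean pacing formula. A toy calculation, with invented numbers for a content team (6 productive hours a day over a 20-day cycle, targeting 30 finished assets):

```python
# Takt time = available production time / demand.
# All numbers below are invented for illustration.
available_minutes = 6 * 60 * 20   # productive minutes in the cycle
demand = 30                       # assets the cycle must deliver

takt_minutes = available_minutes / demand
print(f"takt time: {takt_minutes:.0f} minutes per asset")
```

If the team can't finish one asset every four hours on average, the cycle's commitment is unrealistic — which is exactly the early warning the pace is meant to give.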


The customer doesn’t want a release every month

This is part 4 of 10 Pitfalls of Agility on Large Projects. In part 3, we talked about how we can’t afford to trust everyone on large teams, and what to do about it.

Frequent releases, which are central to both agile and lean methods, sometimes draw a visceral reaction because people fear we can’t keep the product that close to release quality at all times, while still delivering as much innovation.

That is a real challenge of agile/lean methods. But making this argument against short cycles is difficult, so the most common argument ends up being “the customer doesn’t want a release every month (or every 6 months, or every year …) anyway.”

Unfortunately, on any project with many customers, while no single customer wants a release every month, there is always one who wants a release right now.

We can deal with that to some extent by having tiers of releases: the single-customer quick fix, the multi-customer patch, the service release, and maybe minor and major releases. But each of these tiers involves extraordinary cost and waste from duplicate efforts — waste that can often be avoided if the main release cycle is short enough that customers can get their needs met.

It’s better to embrace the challenge and compelling benefits of feedback on short cycles. Here’s a rough diagram of a typical three-tier set of daily, weekly, and monthly release cycles.

Release Early and Often

The heart of it is a per-change or, at worst, a daily build (continuous integration). This build is picked up only by people who are in close contact with the project. This discipline keeps everyone in sync, and keeps the project from wandering into a broken state.

The weekly build (or some equivalent) is important whether at a large organization (where many remote teams share dependencies) or at a smaller startup (where the salespeople and management interact with the product daily). Tightening up the feedback loop of these internal customer proxies creates transparency, builds trust, and keeps the product from evolving off-track feature-wise.

Releasing something to customers every month can seem daunting in some large organizations. The biggest of the software beasts (Microsoft) has had challenges but also great benefits in doing so. And if you think you have a great plan set in stone (a very detailed set of requirements), all the feedback these releases will generate would seem to be just a source of endless distraction.

Don’t be foolish. Unless you’re delivering a 1:1 functional replacement for some existing product (and how often do we do that?), you desperately need that feedback. It keeps the team connected with the customer, motivated by the customer, doing right by the customer. Concerned about opening your kimono to competitors? Limit your audience to a trusted subset, as Apple’s AppleSeed program and most others do.

Once the team gets used to a cadence of regular releases, the practice becomes the heartbeat of the organization, keeping everyone on track, in touch with reality, and constantly learning.


We can’t afford to trust everyone on larger teams

This is part 3 of 10 Pitfalls of Agility on Large Projects. In part 2, we talked about how effective small teams need coordination to make an effective large organization, and what to do about it.

When cycles get shorter and our teams become more empowered to react to change, one fear is that accountability will get lost in all those changes.

Specifically, when something goes wrong, what is the root cause? Are my people failing? Are our processes failing? Are our plans unrealistic? How can management learn to head off these failures in the future if we aren’t making commitments and measuring progress against them?

Step one is to take some of this pressure off — in fact, to embrace failure as a path to learning, and to take a statistical approach to making the most of it.

Like a pharmaceutical firm screening new compounds, or a venture capitalist building a portfolio of investments, a manager in a high-risk domain needs strategies to spread the bets, maximizing opportunity while minimizing overall risk. This is essentially Set-Based Design (Toyota’s Set-Based Concurrent Engineering).

Software development, in particular, is far more a research and development activity suited to this approach than a traditional production activity.
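The arithmetic behind spreading bets is worth seeing once. A sketch with invented probabilities: compare one large project with a 50% chance of success against five small, independent explorations at 30% each:

```python
# Hypothetical portfolio comparison; all probabilities are invented.
p_big = 0.5                # one big bet: 50% chance of success
p_small, n_small = 0.3, 5  # five independent small bets at 30% each

# Probability that at least one of the small bets pays off.
p_at_least_one = 1 - (1 - p_small) ** n_small

print(f"big bet success:       {p_big:.2f}")
print(f"portfolio >=1 success: {p_at_least_one:.2f}")
```

Even though each small bet is individually weaker, the portfolio has a better than 80% chance of producing at least one success — the statistical case for set-based exploration, assuming the bets really are independent.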

Step two is creating an environment with much more transparency — it’s not about trusting people to execute to plan, rather it’s about trusting people to be transparent about the true progress and status of the project, so that everyone can adjust accordingly.

One mechanism is a public kanban board like the kind Corey has been exploring on this blog (from work with David Anderson at Corbis). By watching the board over time, throughput of the team and bottlenecks within the team become clear.

An electronic version is important if there are people (dependent teams, remote teams, or upper management) that can’t huddle around the board.


A Cumulative Flow Diagram (CFD) is an electronic alternative that can be a great way to summarize the flow of value, gain visibility into the history, and spot important management events (like estimation troubles, WIP getting out of hand, or new priorities causing the plan to grow out of control). This CFD is again from David Anderson, as described in his 2004 BorCon presentation.
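The data behind a CFD is nothing exotic: a daily count of cards in each workflow state, stacked over time. A minimal sketch, with invented card IDs, states, and snapshots:

```python
from collections import Counter

# Hypothetical sketch: derive cumulative-flow counts from daily
# snapshots of each kanban card's state. All data is invented.
STATES = ["backlog", "in-progress", "done"]

snapshots = {
    # day -> {card: state}
    1: {"A": "backlog", "B": "backlog", "C": "backlog"},
    2: {"A": "in-progress", "B": "backlog", "C": "backlog"},
    3: {"A": "done", "B": "in-progress", "C": "backlog"},
}

for day in sorted(snapshots):
    counts = Counter(snapshots[day].values())
    # One stacked band per state; plotting these rows over time is the CFD.
    row = {state: counts.get(state, 0) for state in STATES}
    print(day, row)
```

The width of the “in-progress” band over time is the WIP, and a band that keeps widening is exactly the kind of management event the diagram makes visible.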

In a future post, we’ll look at a way to create and manage by CFDs, even when each kanban (or feature) is not similarly sized — this involves also collecting time data (which then has other uses).

And, of course, if rework and bugs are tracked separately (as they usually are today), then traditional bug charts are critical to management.

Should a small team of 4-6 developers with a strong process like Extreme Programming layer this kind of data collection on to their process? Probably not. But as teams grow to 15, 50, or beyond — this kind of transparency is critical glue to keep management and teams coordinated, even while allowing each individual and each small group to work on short cycles and be highly responsive to change.

On your projects, have you seen other forms of data collection that are lightweight, keep everyone in touch with reality, and help build trust?

