http://qs321.pair.com?node_id=439066


in reply to OT: Agile programming; velocity during testing?

... but starts to break down once you get into the acceptance testing phase--it's hard to write stories for individual bugs until you know they exist (which means that the number of stories in the phase can keep growing), and it's very hard to estimate how long it will take to fix a particular bug.

If you're finding a lot of problems during acceptance testing, you've got bigger problems than velocity.

What kind of problems are you seeing?

Problems with velocity during testing are symptoms of upstream problems. Identify and fix the upstream problems, and your downstream velocity problems will take care of themselves.


Re^2: OT: Agile programming; velocity during testing?
by Whitehawke (Pilgrim) on Mar 13, 2005 at 15:45 UTC

    While I appreciate your suggestion, I think you may have missed my point. I'm not saying "I'm having a problem keeping my velocity high", which is what I think you are hearing. Instead, I'm saying "the whole concept of velocity tracking doesn't apply well during the acceptance testing phase."

    Useful velocity tracking is based on two things: a constant time period (one iteration), and an estimated number of ideal days of work to do in that time. Say you accomplish 6 ideals in the iteration; your velocity is 6 (*).

    So, what happens when you need to estimate the following story: "Figure out why the output file is corrupt."

    It's (comparatively) easy to estimate how long it will take to construct something. But how do you estimate how long it will take to have a flash of insight?

    --Whitehawke

    * As an aside, I prefer to divide the ideals by the calendar days in the iteration, so I would say that 6 ideals completed in a 2-week iteration is a velocity of 6 ideals / 14 calendar days ≈ 0.43. Doing it this way automatically normalizes all iteration lengths against each other, making for easier historical comparisons.
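
    In code, that normalization is nothing more than a division; here's a throwaway Perl sketch of it (the numbers are just the ones from the example above):

        # Velocity normalized to calendar days, so iterations of different
        # lengths can be compared directly.
        sub normalized_velocity {
            my ($ideals_done, $calendar_days) = @_;
            return $ideals_done / $calendar_days;
        }

        printf "velocity = %.2f\n", normalized_velocity(6, 14);   # prints 0.43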

      Look, if you're getting "figure out why the output file is corrupt" problems during acceptance testing, and you can't give reasonably bounded estimates on resolving those problems, then you have bigger problems than just figuring out how to do the time accounting, whether you're trying to keep your velocity high or not.

      But since you're focused on velocity, let's stay there, backing up first to the purpose of calculating velocity so that others reading along are on the same page. The primary purpose of a "velocity" number, which is how much planned work per unit time a team is actually accomplishing, is to feed forward into planning the next iteration (i.e., "we're doing N-day iterations, and we get V work done per day, so let's limit the plan for the next iteration to N*V worth of estimated work"). There are secondary purposes, but the primary purpose is planning.
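
      To make that concrete, here's a rough Perl sketch of the feed-forward step (the numbers and data structures are invented for illustration, not taken from any particular tool):

          # Plan the next iteration: accept the highest-priority stories
          # until their estimates fill the capacity N * V.
          my $iteration_days = 10;    # N: working days in the next iteration
          my $velocity       = 0.6;   # V: ideal days accomplished per day, from the last iteration
          my $capacity       = $iteration_days * $velocity;

          my @backlog = (             # already sorted by priority
              { name => 'story A', estimate => 2 },
              { name => 'story B', estimate => 3 },
              { name => 'story C', estimate => 2 },
          );

          my @plan;
          my $planned = 0;
          for my $story (@backlog) {
              last if $planned + $story->{estimate} > $capacity;
              push @plan, $story;
              $planned += $story->{estimate};
          }
          printf "planned %.1f of %.1f ideal days\n", $planned, $capacity;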

      During the iteration, some tasks might run long, in which case you defer the lowest priority tasks. Or, if you come in under estimate, you might take on some additional tasks. At the end of the iteration, you recalculate your velocity and feed it into the next round of planning.

      So how do you handle bugs? I'm with a team that's been doing XP in Perl for several years. Here's what we do: if the bug is caught quickly, we consider the task it's associated with to be unfinished, and we fix the bug and adjust our actual time-on-task. If the bug is caught later (say, during a functional testing period), then we make a new task card for the bug, estimate it, and feed it into the planning process. Often, that planning happens mid-iteration; our customer might decide it's worth fixing now, even if that means deferring other work. (That's no different than having a task run long without it injecting bugs.) In either case, the work that goes into fixing the bug gets fed into our velocity calculation.
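
      If it helps to see those two paths spelled out, here's a hand-wavy Perl sketch (the data structures are invented for illustration; in practice this is just cards and planning):

          # Caught quickly: the original task isn't done yet, and the fix
          # time gets charged against that task's actual time-on-task.
          # Caught later: the bug becomes its own card, to be estimated and
          # planned like any other work, so its cost still shows up in velocity.
          sub file_bug {
              my ($bug, $original_task) = @_;
              if ($bug->{caught_early}) {
                  $original_task->{done} = 0;
                  return $original_task;
              }
              return { name => "fix: $bug->{summary}", estimate => undef };
          }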

      So how do you estimate a bug fix? Well, how do you estimate anything? You take your best guess. We handle risk by also giving a risk estimate along with a task (e.g., "I think that'll take 4 hours, but it's high risk."). We use risk to calculate a "risk buffer" during planning, since a pile of low-risk cards estimated at 10 days isn't the same as a pile of high-risk cards estimated at 10 days, and pretending they are for planning purposes is crazy.
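
      One way a buffer like that might be computed (the weights here are made up for illustration, not our actual numbers):

          my %buffer_factor = ( low => 0.10, medium => 0.25, high => 0.50 );

          # Pad the plan by a fraction of each task's estimate, scaled by its risk.
          sub risk_buffer {
              my @tasks = @_;    # each: { estimate => ideal days, risk => 'low'|'medium'|'high' }
              my $buffer = 0;
              $buffer += $_->{estimate} * $buffer_factor{ $_->{risk} } for @tasks;
              return $buffer;
          }

          # Ten days of low-risk work vs. ten days of high-risk work:
          printf "%.1f vs %.1f extra days of buffer\n",
              risk_buffer({ estimate => 10, risk => 'low'  }),
              risk_buffer({ estimate => 10, risk => 'high' });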

      So how do you handle a task that you don't have a clue about? We break off a separate "investigation" task (e.g., "I don't know how long it'll take me to do X, but after 1 day of poking around and prototyping, I should be able to give you a better answer."). The outcome of an investigation task is a set of estimated tasks, which the customer then gets to prioritize or defer. Or, if you're lucky, the result of an investigation task might be completed work. Sometimes, we get lucky.

      How you account for what happens during an acceptance testing phase depends on your team's role in that phase. If you have a separate QA organization running testing in parallel with the development team's iteration, treat bugs as story cards. (Whether you make them automatically high priority is a business decision.) If the development team stops developing to help out with acceptance testing (treating testing activities as tasks that can be estimated and tracked), then maybe you want to calculate a separate velocity for that period.

      But again, if you're finding serious problems during functional testing and you can't provide bounded estimates for fixing them, then chances are really good that you have upstream problems in your development process.