Re^3: OT: Agile programming; velocity during testing?
by dws (Chancellor) on Mar 13, 2005 at 18:49 UTC
Look, if you're getting "figure out why the output file is corrupt" problems during acceptance testing, and you can't give reasonably bounded estimates on resolving those problems, then you have bigger problems than just figuring out how to do the time accounting, whether you're trying to keep your velocity high or not.
But since you're focused on velocity, let's stay there, but let's back up to the purpose of calculating velocity, so that others reading along are on the same page. The primary purpose of a "velocity" number, which is how much planned work per unit time a team is actually accomplishing, is to feed forward into planning the next iteration (i.e., "we're doing N day iterations, and we get V work done per day, so let's limit the plan for the next iteration to N*V amount of estimated work"). There are secondary purposes, but the primary purpose is for planning.
During the iteration, some tasks might run long, in which case you defer the lowest priority tasks. Or, if you come in under estimate, you might take on some additional tasks. At the end of the iteration, you recalculate your velocity and feed it into the next round of planning.
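The feed-forward loop above can be sketched in a few lines. This is a minimal illustration, not anything from the post: the function name, the backlog, and the numbers are all invented, and it assumes the backlog is already sorted by customer priority.

```python
# A sketch of velocity-driven iteration planning: limit the plan to N*V
# estimated work, filling from the top of a priority-sorted backlog.
# All names and numbers here are illustrative.

def plan_iteration(backlog, capacity):
    """Greedily take tasks (assumed pre-sorted by priority) until the
    N*V capacity is used; everything else is deferred."""
    planned, used = [], 0.0
    for task, estimate in backlog:
        if used + estimate <= capacity:
            planned.append(task)
            used += estimate
    return planned, used

iteration_days = 10                      # N
velocity = 1.5                           # V, measured from the last iteration
capacity = iteration_days * velocity     # cap the plan at N*V = 15 units

backlog = [("story A", 6), ("story B", 5), ("story C", 4), ("story D", 3)]
planned, used = plan_iteration(backlog, capacity)
# stories A, B, and C fit (15 units); story D is deferred to a later iteration
```

At the end of the iteration you'd recompute `velocity` from actual time-on-task and feed the new number into the next call.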
So how do you handle bugs? I'm with a team that's been doing XP in Perl for several years. Here's what we do: If the bug is caught quickly, we consider the task it's associated with to be unfinished, and we fix the bug and adjust our actual time-on-task. If the bug is caught later (say, during a functional testing period), then we make a new task card for the bug, estimate it, and feed it into the planning process. Often, that planning happens mid-iteration; our customer might decide it's worth fixing now, even if that means deferring other work. (That's no different than having a task run long without it injecting bugs.) In either case, the work that goes into fixing the bug gets fed into our velocity calculation.
So how do you estimate a bug fix? Well, how do you estimate anything? You take your best guess. We handle risk by also giving a risk estimate along with a task (e.g., "I think that'll take 4 hours, but it's high risk."). We use risk to calculate a "risk buffer" during planning, since a pile of low-risk cards estimated at 10 days isn't the same as a pile of high-risk cards estimated at 10 days, and pretending they are for planning purposes is crazy.
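One hypothetical way to turn those per-task risk ratings into a "risk buffer" is to hold back a fraction of each estimate in proportion to its risk. The multipliers below are invented for illustration; the post doesn't give numbers.

```python
# A sketch of a risk buffer: high-risk estimates reserve more slack
# than low-risk ones, so two 10-day piles don't plan the same.
# The fractions and task names are assumptions, not from the post.

RISK_FRACTION = {"low": 0.0, "medium": 0.25, "high": 0.5}

def risk_buffer(tasks):
    """Sum each task's estimate times its risk fraction; the result is
    capacity held back from the plan rather than committed to stories."""
    return sum(estimate * RISK_FRACTION[risk] for _, estimate, risk in tasks)

tasks = [("parse input",            4, "low"),
         ("fix output corruption",  4, "high"),
         ("report layout",          2, "medium")]
# buffer = 4*0.0 + 4*0.5 + 2*0.25 = 2.5 hours kept in reserve
```

Under this scheme, a pile of high-risk cards eats much more of the iteration's capacity than a low-risk pile with the same face-value estimate, which is the point.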
So how do you handle a task that you don't have a clue about? We break off a separate "investigation" task (e.g., "I don't know how long it'll take me to do X, but after 1 day of poking around and prototyping, I should be able to give you a better answer."). The outcome of an investigation task is a set of estimated tasks, which the customer then gets to prioritize or defer. Or, if you're lucky, the result of an investigation task might be completed work. Sometimes, we get lucky.
How you account for what happens during an acceptance testing phase depends on your team's role in that phase. If you have a separate QA organization running testing in parallel with the development team's iteration, treat bugs as story cards. (Whether you make them automatically high priority is a business decision.) If the development team stops developing to help out with acceptance testing (treating testing activities as estimable, trackable tasks), then maybe you want to calculate a separate velocity for that period.
But again, if you're finding serious problems during functional testing and you can't provide bounded estimates for fixing them, then chances are really good that you have upstream problems in your development process.