OT: Agile programming; velocity during testing?

by Whitehawke (Pilgrim)
on Mar 12, 2005 at 15:20 UTC

Whitehawke has asked for the wisdom of the Perl Monks concerning the following question:

This is a question about Agile Development (AD). Let me spend just a moment summarizing my understanding of AD so that we're all on the same page. Below is a basic summary of an Agile project (specifically Extreme Programming); it's slightly oversimplified to keep it short, but should be essentially accurate.

  • Very little, if any, upfront design.
  • User stories are tracked on cards, which decouples them from any implicit ordering.
  • The work is divided into "iterations" of a fixed time length, often two weeks.
  • During each iteration, the customer chooses which stories will be worked on. If a problem comes up or a bug needs to be fixed, and this is going to delay things, the customer chooses which stories get bumped.
  • The team tracks their "velocity", which is a measure of how much they got done in each iteration. This number tends to stabilize pretty quickly, so it becomes useful for estimating how long the rest of the work will take.
  • TDD (Test Driven Development): Unit tests are written before code. This is low-level testing that verifies a single subroutine or component (see the short sketch after this list).
  • Once all stories are complete and all unit tests pass, you enter the acceptance testing (AT) phase. This is high-level testing that verifies the product as a whole.
  • Once the ATs pass, the product ships.
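
To make the test-first step concrete, here is a minimal sketch of TDD in Perl using Test::More. The normalize_newlines routine is invented purely for illustration:

    use strict;
    use warnings;
    use Test::More tests => 2;

    # In TDD these tests are written first; they fail until the
    # subroutine below them is implemented.
    is( normalize_newlines("a\r\nb"), "a\nb", 'CRLF collapsed to LF' );
    is( normalize_newlines("a\nb"),   "a\nb", 'LF left untouched' );

    sub normalize_newlines {
        my ($text) = @_;
        $text =~ s/\r\n/\n/g;
        return $text;
    }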

I've been doing Agile development for several years now. One problem that I've never gotten a completely satisfactory answer to: velocity tracking works great during development, but starts to break down once you get into the acceptance testing phase--it's hard to write stories for individual bugs until you know they exist (which means that the number of stories in the phase can keep growing), and it's very hard to estimate how long it will take to fix a particular bug.

I've come up with some ad hoc ways of dealing with this, but they don't work particularly well. Are there any Monks, smarter and/or more enlightened than I, who have suggestions on ways that work for them?


Replies are listed 'Best First'.
Re: OT: Agile programming; velocity during testing?
by eieio (Pilgrim) on Mar 12, 2005 at 17:45 UTC
    Not all Agile Software Development Methodologies employ velocity tracking as a technique; it is most often associated with Extreme Programming (XP). Regardless, if you are developing your software in increments or iterations, there should be some testing during each increment or iteration. Therefore, when you use the measured work from a previous increment or iteration to predict the next, some amount of effort spent finding and fixing bugs will be incorporated.

    However, if for whatever reason you find yourself with a substantial acceptance testing phase at the end of your lifecycle, you could try to apply the technique of velocity tracking through historical comparisons. If you have historically worked on similar projects with similar teams (a dangerous assumption), you could predict the effort for an upcoming acceptance testing phase based on the effort exerted in these previous acceptance testing phases.

    Another approach is to chart bugs versus time and bugs versus testing effort. While this doesn't help you estimate how long the acceptance testing phase will take, it may help determine when you are approaching the end. The "bug curve" will "roll off" as most of the critical bugs have been found. This technique is most useful for larger projects where there are enough developers and bugs to form a statistically meaningful sample.
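
    As a rough illustration of that charting idea, one could bucket bug-report dates by week and watch the weekly counts roll off. A minimal sketch using Time::Piece (the dates are made up):

        use strict;
        use warnings;
        use Time::Piece;

        # Tally bug reports (one YYYY-MM-DD date each) by week; the
        # "bug curve" has rolled off when the weekly bars shrink.
        my @dates = qw( 2005-02-28 2005-03-01 2005-03-01 2005-03-08 );

        my %per_week;
        for my $date (@dates) {
            my $t = Time::Piece->strptime( $date, '%Y-%m-%d' );
            $per_week{ $t->strftime('%Y-W%W') }++;
        }
        printf "%s  %s\n", $_, '#' x $per_week{$_} for sort keys %per_week;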

    While the two above techniques may be helpful, I think the best approach is to incorporate more testing into each increment and minimize, if not eliminate, a dedicated acceptance testing phase at the end of your software development lifecycle.

Re: OT: Agile programming; velocity during testing?
by thor (Priest) on Mar 12, 2005 at 17:42 UTC
    Once a bug is brought to light (i.e. the customer is told about it), they make the determination whether or not the bug is important enough to bump a feature for a given iteration. If it is, then they put it on the plate for that iteration and you estimate it just like anything else. Am I missing something?

    thor

    Feel the white light, the light within
    Be your own disciple, fan the sparks of will
    For all of us waiting, your kingdom will come

      Hi thor, Well, you and I may have different understandings of how AD works...I added an update to explicitly state mine. In my understanding, once acceptance testing starts, you are no longer implementing features.
        Acceptance testing should be happening after each iteration. After all, how can you know if you can collect the points associated with a story if the customer doesn't say it's done? And, how can the customer say that it's done without acceptance tests to back up their claim?

        thor

        Feel the white light, the light within
        Be your own disciple, fan the sparks of will
        For all of us waiting, your kingdom will come

Re: OT: Agile programming; velocity during testing?
by dws (Chancellor) on Mar 13, 2005 at 05:11 UTC

    ... but starts to break down once you get into the acceptance testing phase--it's hard to write stories for individual bugs until you know they exist (which means that the number of stories in the phase can keep growing) and it's very hard to estimate how long it will take to fix a particular bug.

    If you're finding a lot of problems during acceptance testing, you've got bigger problems than velocity.

    What kind of problems are you seeing?

    • "That isn't what I wanted!" problems mean that you aren't communicating with your customer early and often enough.

    • "It doesn't work!" problems mean that you're either slipping on Test-Driven Development, and aren't getting adequate test coverage, or you're giving in to time pressure and letting quality slip rather than deferring stories into later iterations.

    • If you're finding it "very" hard to estimate how long it'll take to fix a bug that's slipped through into acceptance testing, then you probably have a large design or refactoring debt that's cluttering up your system. Are you taking the time to refactor?

    Problems with velocity during testing are symptoms of upstream problems. Identify and fix the upstream problems, and your downstream velocity problems will take care of themselves.

      While I appreciate your suggestion, I think you may have missed my point. I'm not saying "I'm having a problem keeping my velocity high", which is what I think you are hearing. Instead, I'm saying "the whole concept of velocity tracking doesn't apply well during the acceptance testing phase."

      Useful velocity tracking is based on two things: a constant time period (one iteration), and an estimated number of ideal days of work to do in that time. Say you accomplish 6 ideals in the iteration; your velocity is 6 (*).

      So, what happens when you need to estimate the following story: "Figure out why the output file is corrupt."

      It's (comparatively) easy to estimate how long it will take to construct something. But how do you estimate how long it will take to have a flash of insight?

      --Whitehawke

      * As an aside, I prefer to divide the ideals by the calendar days in the iteration, so I would say that 6 ideals completed in a 2-week iteration is a velocity of 6 ideals / 14 calendar days = 0.43. Doing it this way automatically normalizes all iteration lengths against each other, making for easier historical comparisons.

        Look, if you're getting "figure out why the output file is corrupt" problems during acceptance testing, and you can't give reasonably bounded estimates on resolving those problems, then you have bigger problems than just figuring out how to do the time accounting, whether you're trying to keep your velocity high or not.

        But since you're focused on velocity, let's stay there, but let's back up to the purpose of calculating velocity, so that others reading along are on the same page. The primary purpose of a "velocity" number, which is how much planned work per unit time a team is actually accomplishing, is to feed forward into planning the next iteration (i.e., "we're doing N day iterations, and we get V work done per day, so let's limit the plan for the next iteration to N*V amount of estimated work"). There are secondary purposes, but the primary purpose is for planning.

        During the iteration, some tasks might run long, in which case you defer the lowest priority tasks. Or, if you come in under estimate, you might take on some additional tasks. At the end of the iteration, you recalculate your velocity and feed it into the next round of planning.
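
        To put numbers on that feedback loop, here's a tiny sketch of the plan-capping step; the figures and story names are invented:

            use strict;
            use warnings;

            # Cap next iteration's plan at N working days * V measured
            # velocity. Invented numbers: a 10-day iteration and a
            # measured velocity of 0.6 ideal days per working day.
            my $iteration_days = 10;     # N
            my $velocity       = 0.6;    # V, from the last iteration
            my $capacity       = $iteration_days * $velocity;

            # Candidate stories, highest priority first: [ name, estimate ].
            my @stories = ( [ 'story A', 3 ], [ 'story B', 2.5 ], [ 'story C', 2 ] );

            my ( $planned, @plan ) = (0);
            for my $s (@stories) {
                last if $planned + $s->[1] > $capacity;
                $planned += $s->[1];
                push @plan, $s->[0];
            }
            print "Planned (@plan): $planned of $capacity ideal days\n";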

        So how do you handle bugs? I'm with a team that's been doing XP in Perl for several years. Here's what we do: If the bug is caught quickly, we consider the task it's associated with to be unfinished, and we fix the bug and adjust our actual time-on-task. If the bug is caught later (say, during a functional testing period), then we make a new task card for the bug, estimate it, and feed it into the planning process. Often, that planning happens mid-iteration; our customer might decide it's worth fixing now, even if that means deferring other work. (That's no different than having a task run long without it injecting bugs.) In either case, the work that goes into fixing the bug gets fed into our velocity calculation.

        So how do you estimate a bug fix? Well, how do you estimate anything? You take your best guess. We handle risk by also giving a risk estimate along with a task (e.g., "I think that'll take 4 hours, but it's high risk."). We use risk to calculate a "risk buffer" during planning, since a pile of low-risk cards estimated at 10 days isn't the same as a pile of high-risk cards estimated at 10 days, and pretending they are for planning purposes is crazy.
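
        A hedged sketch of one way such a buffer could be computed; the risk multipliers here are invented for illustration, not taken from any actual team's numbers:

            use strict;
            use warnings;

            # Pad each card's estimate by a factor keyed on its risk
            # rating; the multipliers are illustrative, not prescriptive.
            my %risk_factor = ( low => 1.0, high => 1.75 );

            my @cards = (
                { name => 'fix corrupt output header', est => 4, risk => 'high' },
                { name => 'new report column',         est => 6, risk => 'low'  },
            );

            my ( $raw, $buffered ) = ( 0, 0 );
            for my $c (@cards) {
                $raw      += $c->{est};
                $buffered += $c->{est} * $risk_factor{ $c->{risk} };
            }
            printf "Raw: %.1f hours; with risk buffer: %.1f hours\n", $raw, $buffered;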

        So how do you handle a task that you don't have a clue about? We break off a separate "investigation" task (e.g., "I don't know how long it'll take me to do X, but after 1 day of poking around and prototyping, I should be able to give you a better answer."). The outcome of an investigation task is a set of estimated tasks, which the customer then gets to prioritize or defer. Or, if you're lucky, the result of an investigation task might be completed work. Sometimes, we get lucky.

        How you account for what happens during an acceptance testing phase depends on your team's role in that phase. If you have a separate QA organization running testing in parallel with the development team's iteration, treat bugs as story cards. (Whether you make them automatically high priority is a business decision.) If the development team stops developing to help out with acceptance testing (treating testing activities as estimatable, trackable tasks), then maybe you want to calculate a separate velocity for that period.

        But again, if you're finding serious problems during functional testing and you can't provide bounded estimates for fixing them, then chances are really good that you have upstream problems in your development process.

Re: OT: Agile programming; velocity during testing?
by TedPride (Priest) on Mar 12, 2005 at 23:14 UTC
    It's impossible to predict the unknown. The best you can do is limit the number of bugs that get through, by rigorous testing at each phase of development. Every function is considered a potential bug until tested for its relevant inputs (min, max, and a random middle case for a range of inputs; all inputs for a finite non-sequential set). Every routine using an untested function is considered a potential bug until all functions it uses are tested and the routine itself is tested. Every section of the program using an untested routine is considered a potential bug, and so on. Testing takes time, but in a large project that's more than made up for by time saved not having to fix bugs that could be anywhere inside hundreds of thousands of lines of code.
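
    To make the min/max/middle idea concrete, here's a small Test::More sketch; the clamp() routine is hypothetical, invented just for this example:

        use strict;
        use warnings;
        use Test::More tests => 4;

        # A hypothetical routine, exercised at its boundaries, in the
        # middle of its range, and with one out-of-range input.
        sub clamp {
            my ( $n, $min, $max ) = @_;
            return $n < $min ? $min : $n > $max ? $max : $n;
        }

        is( clamp( 0,  0, 10 ), 0,  'minimum input' );
        is( clamp( 10, 0, 10 ), 10, 'maximum input' );
        is( clamp( 5,  0, 10 ), 5,  'middle of the range' );
        is( clamp( 42, 0, 10 ), 10, 'out-of-range input clamps to max' );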

    Basically, your best bet is to read up on how to properly test your program and prevent bugs. You limit disease through proper hygiene, not by worrying about how you're going to stock a cure for everything on the face of the planet. If a bug DOES make it through your testing procedures intact, estimate time based on the programming phase you're currently in (a bug in a routine is far easier to locate and fix than a bug somewhere in a "finished" program) and don't sweat specifics. Eventually you'll have enough bugs from each phase that you can average, based on past performance, how long fixes are going to take.

      Hmmm...what I'm hearing you say is that, during development, tests should be written not just for individual routines but for larger and larger blocks of routines.

      That sounds good.

      I do worry about the possibility of combinatorial explosion in the number of elements that will be tested...but that's probably a false concern, since you only need to worry about testing blocks of routines that are actually used together. Hmm.

      At the same time, I don't think you can do away with the acceptance testing phase. The point of that phase is not to reveal bugs...the point is to prove TO THE CUSTOMER that the product works as required.

      Thanks, this is definitely the kind of thing I was looking for.

Re: OT: Agile programming; velocity during testing?
by jhourcle (Prior) on Mar 12, 2005 at 23:18 UTC

    For those of us who don't have experience with Agile development, would you be willing to share your ad-hoc ways of dealing with the problem, and the failings that you've had with them?

      You want me to talk about myself and my ideas? Oh, the horror! :>

      The simplest way is to not do velocity tracking any more, and to fall back on gut instinct and general experience. If you have the instinct and experience, this can work, especially if you communicate very clearly with your customer about how things stand. It isn't satisfying, though, and it's not as comfortable, because there is nothing objective backing up your instinct, nothing to get a reality check from.

      The next is to take all the bugs or weirdness that you know of, write stories for them, do your best to estimate these stories (these estimates are pretty much always wild guesses), and then continue. As you get more information, update the stories and the estimates. This is tough because the very nature of it works against you: once you know how to do the trick, it's easy. So, say you initially guess that "determine why the output file is corrupt" is 2 ideal days. You spend a calendar day investigating and determine that the file header is using the wrong kind of newlines; it takes you 10 seconds to fix the problem. Now that you've solved it, you know how long it took to solve and, clearly, your initial wild guess was too high. If you leave it as is, you are inflating your velocity, but if you change it, what do you change it to?

      This last scenario is the problem that made me post here on PM. So far, I don't have a good answer to it...the least disagreeable answer has been to change the estimate to whatever will leave my velocity unchanged.

Re: OT: Agile programming; velocity during testing?
by jdporter (Paladin) on Mar 12, 2005 at 19:15 UTC
    Even though it's OT, it's still a subject that many monks might be interested in and would like to discuss; so posting it here (or in Meditations) is quite reasonable.
