http://qs321.pair.com?node_id=487156


in reply to Re: TDD with Coverage Analysis. Wow.
in thread TDD with Coverage Analysis. Wow.

For example, ... what kind of coverage does this get:
open( my $fh, ">", $output ) or die_with_error();
If you're a coverage junkie, then you might put yourself through all sorts of contortions to eliminate that red spot ... But why?

One reason to make sure that the error branch is tested is for documentation. You're showing the (test) reader how the method under test behaves when it encounters that error. Having tested die_with_error() on its own isn't sufficient; that leaves the reader having to read the test, the code being tested, and whatever methods or subs that code invokes.

It's also possible that, in the context of the line of code above, invoking die_with_error() is the wrong thing to do. And, admit it, that "or" branch might never get invoked during testing unless you force the issue.

Besides, the contortions here are minor. If $output is an argument to the method you're testing, injecting a bogus file path is trivial. And if doing that involves too many other side effects, it's a hint that extracting that line (and possibly some others around it that are involved in setting up for output) into a separate, more easily testable method might simplify the code.
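For instance, here is a minimal sketch of forcing that branch, assuming the open lives in a hypothetical write_report($output) sub (not from the thread) that reports failure via die_with_error():

    use Test::More tests => 1;

    # Inject a path whose directory doesn't exist, so the open must fail
    # and the "or" branch runs. write_report() stands in for the method
    # under test.
    my $bogus = '/no/such/dir/output.txt';
    eval { write_report($bogus) };
    like( $@, qr/\S/, 'write_report() dies when the output path cannot be opened' );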

Re^3: TDD with Coverage Analysis. Wow.
by xdg (Monsignor) on Aug 28, 2005 at 06:07 UTC

    I think this is a good example of the gray zone. You can do various contortions, but what are you really proving in doing so? That open can return a false value?

    The reasons you give may well be valid from a particular point of view (and I'm largely sympathetic to them) -- but they are really unrelated to coverage. One should force the failure and test the result if these other things are important, not because one is aiming for 100% coverage.

    To reinforce the point another way: one can improve the coverage metric just by removing the "or die" phrase and letting the program blow up on its own should an error ever actually occur. This makes the program less robust and at least arguably lower in quality -- but the coverage metric goes up. So coverage does not equal quality.

    If there's a requirement to fail an error a certain way, then by all means, write the test and generate the error -- but then one is generating the error to show that the requirement is satisfied, not to meet a coverage goal for its own sake.

    -xdg

    Code written by xdg and posted on PerlMonks is public domain. It is provided as is with no warranties, express or implied, of any kind. Posted code may not have been tested. Use of posted code is at your own risk.

      If there's a requirement to fail an error a certain way, then by all means, write the test and generate the error -- but then one is generating the error to show that the requirement is satisfied, not to meet a coverage goal for its own sake.

      Lacking a requirement to fail a certain way, a lot of people, myself among them, will often toss in an or die and be done with it, without ever testing that failure case to see how it behaves functionally. And, for many customers, "fail gracefully" is an implicit requirement. Coverage analysis points out where we've taken half-steps, and suggests where a few more unit (or functional) tests might be needed.

      It's not about getting to 100%, though that does become tempting when being handed a color-coded chart. It's about adequate test coverage.
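      For example (a sketch, with write_report() and the message text as stand-ins for whatever the code under test actually does), "fail gracefully" can be pinned down with an ordinary Test::More check on the error text, rather than only on the fact of dying:

          # Assert the failure is informative, not just that it happens.
          eval { write_report('/no/such/dir/output.txt') };
          like( $@, qr/can't open .+ for writing/i,
                'failure message names the file and the operation' );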

      The reasons you give may well be valid from a particular point of view (and I'm largely sympathetic to them) -- but they are really unrelated to coverage. One should force the failure and test the result if these other things are important, not because one is aiming for 100% coverage.

      Oh yes. Coverage is a tool to help you create good test suites. Not the other way around.