Re: TDD with Coverage Analysis. Wow.

by xdg (Monsignor)
on Aug 27, 2005 at 11:13 UTC


in reply to TDD with Coverage Analysis. Wow.

Devel::Cover is pretty amazing. I had a similar revelation a year or so ago. The big thing I've learned since then is that 'coverage is not correctness' -- the coverage statistic is just a metric, and getting too focused on it can be a distraction. Put another way, it's a development tool, not a quality metric.
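(If you haven't generated the numbers before: the usual workflow, per the Devel::Cover docs and assuming a MakeMaker-style distribution, is roughly:)

cover -delete
HARNESS_PERL_SWITCHES=-MDevel::Cover make test
cover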

For example (as you hinted), what kind of coverage does this get:

open( my $fh, ">", $output ) or die_with_error();

If you're a coverage junkie, then you might put yourself through all sorts of contortions to eliminate that red spot (e.g. creating a non-writeable directory for output). But why? You can test die_with_error() on its own. You don't really need to prove that your code can successfully fail an open() call. On the other hand, if instead of dying your code did some special handling, like retrying the write a few times before giving up, then going through those contortions might be appropriate. But that's human context that Devel::Cover can't give you.
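A minimal sketch of testing it in isolation (the body of die_with_error() is hypothetical here):

use strict;
use warnings;
use Test::More tests => 1;

# Hypothetical stand-in for the error handler above.
sub die_with_error { die "open failed: $!\n" }

# Exercise the handler directly -- no failing open() required.
eval { die_with_error() };
like( $@, qr/open failed/, "die_with_error() dies with a useful message" );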

Fortunately, Devel::Cover does try to check for some types of "uncoverable" code. E.g.:

my $value = $some_other_value || undef;

It's smart enough to know that you'll never get undef to be true. But what about this:

my $filename = $some_filename || default_filename();

Sometimes you can code around these things, but I don't think it's worth diminishing readability for coverage. Here's one way for the example above:

my $filename = $some_filename ? $some_filename : default_filename();

That's not bad, but what if the initial condition is a subroutine call:

# original
my $filename = prompt_for_filename() || default_filename();

# can't do this -- it would call prompt_for_filename() twice
my $filename = prompt_for_filename() ? prompt_for_filename() : default_filename();

# coverage-happy version
my $prompted_filename = prompt_for_filename();
my $filename = $prompted_filename ? $prompted_filename : default_filename();
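Whichever form you pick, the branch only goes green once tests exercise both sides. Something like this (wrapper name and default are hypothetical):

use strict;
use warnings;
use Test::More tests => 2;

# Hypothetical wrapper around the snippet above.
sub default_filename { "out.txt" }
sub pick_filename {
    my ($prompted) = @_;    # stand-in for prompt_for_filename()
    return $prompted ? $prompted : default_filename();
}

is( pick_filename("given.txt"), "given.txt", "prompted name wins" );
is( pick_filename(undef),       "out.txt",   "falls back to the default" );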

perl-qa has had interesting discussions about this kind of thing. A good one to read is "testing || for a default value". There, some people advocate a way to flag lines as uncoverable, via comments or an external file, to "make the red go away" once they've checked a line and are convinced it really isn't coverable.
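To make that concrete, such a flag might look something like this (hypothetical syntax, just to illustrate the idea -- not something Devel::Cover understood at the time):

my $value = $some_other_value || undef;   # uncoverable: right side can never be true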

Other things that have popped up "red" for me along these lines:

  • OS-dependent stuff (as you said)
  • perl version or perl config specific code
  • throwing in a wantarray for the future when I haven't used it that way yet (though I either ought to follow YAGNI or actually test this -- but it cropped up when I was emulating caller)
  • 'switch' type code with a default that shouldn't ever be reached (see the sketch below)
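For that last one, here's a sketch of what I mean (hypothetical dispatch table; the die line stays red because no well-behaved test ever reaches it):

use strict;
use warnings;

sub dispatch {
    my ($mode) = @_;
    my %handlers = (
        read  => sub { "reading" },
        write => sub { "writing" },
    );
    my $handler = $handlers{$mode}
        or die "unknown mode '$mode'";   # defensive default -- shows up red
    return $handler->();
}

print dispatch("read"), "\n";   # prints "reading"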

So, my advice is to use it as a tool to reveal where you thought you had written tests to cover something but hadn't. Just don't let coverage become an end in itself.

-xdg

Code written by xdg and posted on PerlMonks is public domain. It is provided as is with no warranties, express or implied, of any kind. Posted code may not have been tested. Use of posted code is at your own risk.

Re^2: TDD with Coverage Analysis. Wow.
by dws (Chancellor) on Aug 27, 2005 at 15:42 UTC
    For example, ... what kind of coverage does this get:
    open( my $fh, ">", $output ) or die_with_error();
    If you're a coverage junkie, then you might put yourself through all sorts of contortions to eliminate that red spot ... But why?

    One reason to make sure that the error branch is tested is documentation. You're showing the (test) reader how the method under test behaves when it encounters that error. Having tested die_with_error() on its own isn't sufficient; that leaves the reader having to read the test, the code being tested, and whatever methods or subs that code invokes.

    It's also possible that, in the context of the line of code above, invoking die_with_error() is the wrong thing to do. And, admit it: that "or" branch might never get invoked during testing unless you force the issue.

    Besides, the contortions here are minor. If $output is an argument to the method you're testing, injecting a bogus file path is trivial. And if doing that involves too many other side effects, it's a hint that extracting that line (and possibly some others around it that are involved in setting up for output) into a separate, more easily testable method might simplify the code.
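    A minimal sketch of that injection (sub name and error message are hypothetical stand-ins for the code under test):

    use strict;
    use warnings;
    use Test::More tests => 1;

    # Hypothetical method wrapping the open() line under discussion.
    sub write_output {
        my ($output) = @_;
        open( my $fh, ">", $output ) or die "can't write '$output': $!";
        print {$fh} "data\n";
        close $fh;
    }

    # A path that can't exist forces the error branch.
    eval { write_output("/no/such/dir/out.txt") };
    like( $@, qr/can't write/, "bogus path exercises the failure case" );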

      I think this is a good example of the gray zone. You can do various contortions, but what are you really proving in doing so? That open can return a false value?

      The reasons you give may well be valid from a particular point of view (and I'm largely sympathetic to them) -- but they are really unrelated to coverage. One should force the failure and test the result if these other things are important, not because one is aiming for 100% coverage.

      To reinforce the point another way: one can improve the coverage metric just by removing the "or die" phrase and letting the program blow up on its own should an error ever actually occur. That makes the program less robust and arguably lower quality -- but the coverage metric goes up. So coverage does not equal quality.
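      In code, that "improvement" is just this (hypothetical sub name):

      use strict;
      use warnings;

      # The branch -- and its red mark -- are gone, but a failed open()
      # now surfaces later, far from the real cause.
      sub write_output_fragile {
          my ($output, $data) = @_;
          open( my $fh, ">", $output );   # no check: full branch coverage!
          print {$fh} $data;              # dies here on an undefined $fh
          close $fh;
      }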

      If there's a requirement to fail an error a certain way, then by all means, write the test and generate the error -- but then one is generating the error to show that the requirement is satisfied, not to meet a coverage goal for its own sake.

      -xdg


        If there's a requirement to fail an error a certain way, then by all means, write the test and generate the error -- but then one is generating the error to show that the requirement is satisfied, not to meet a coverage goal for its own sake.

        Lacking a requirement to fail a certain way, a lot of people, myself among them, will often toss in an or die and be done with it, without ever testing that failure case to see how it behaves functionally. And, for many customers, "fail gracefully" is an implicit requirement. Coverage analysis points out where we've taken half-steps, and suggests where a few more unit (or functional) tests might be needed.

        It's not about getting to 100%, though that does become tempting when you're handed a color-coded chart. It's about adequate test coverage.

        The reasons you give may well be valid from a particular point of view (and I'm largely sympathetic to them) -- but they are really unrelated to coverage. One should force the failure and test the result if these other things are important, not because one is aiming for 100% coverage.

        Oh yes. Coverage is a tool to help you create good test suites, not the other way around.

Re^2: TDD with Coverage Analysis. Wow.
by dragonchild (Archbishop) on Aug 29, 2005 at 02:32 UTC
    open( my $fh, ">", $output ) or die_with_error();

    Mock open()? That's what I would do ...
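    A sketch of what that mock might look like, using the CORE::GLOBAL override described in perlsub (sub name hypothetical; the override has to be compiled before the code under test is):

    use strict;
    use warnings;
    use Test::More tests => 1;

    # Replace open() globally so it always fails. Modules loaded
    # above this BEGIN (like Test::More) still get the real open().
    BEGIN {
        *CORE::GLOBAL::open = sub (*;$@) {
            $! = 13;     # pretend EACCES (errno values vary by platform)
            return 0;    # every open() compiled after this point fails
        };
    }

    # Stand-in for the code under test (normally require'd after the BEGIN).
    sub write_output {
        my ($output) = @_;
        open( my $fh, ">", $output ) or die "can't write '$output': $!";
    }

    eval { write_output("anywhere.txt") };
    like( $@, qr/can't write/, "mocked open() drives the error branch" );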


    My criteria for good software:
    1. Does it work?
    2. Can someone else come in, make a change, and be reasonably certain no bugs were introduced?
