    When I test just that *.t file (e.g. with "prove"), I should see any existing tests pass and that one test I just wrote fail.

That's fine when you're writing a given piece of code. You know what you just added or changed and it's probably sitting in your editor at just the right line. With the minuscule granularity of some of the test scripts I've seen, a failing test probably translates to an error in a single line or even sub clause thereof.

But most of my interactions with Test::* (as someone who doesn't use them) are as a maintenance programmer running make test on newly built or installed modules.

It's not just that I've forgotten how the code is structured, and how the (arbitrarily named) test files relate to that structure. I never knew either.

All I see is:

    ... [gobs of useless crap omitted] ...
    t\accessors.......ok 28/37# Looks like you failed 1 test of 37.
    t\accessors.......dubious
            Test returned status 1 (wstat 256, 0x100)
    DIED. FAILED test 14
            Failed 1/37 tests, 97.30% okay
    ... [gobs more crap omitted] ...
  • Something failed.
  • 28 from 37 == 9. But it says: "# Looks like you failed 1 test of 37."
  • Then it usefully says: "dubious". No shit Sherlock.
  • Then "Test returned status 1 (wstat 256, 0x100)". Does that mean anything? To anyone?
  • Then (finally) something useful: "DIED. FAILED test 14".

Now all I gotta do is:

  1. Work out which test--and they're frequently far from easy to count--is test 14.
  2. Work out where in accessors.t it is located.
  3. Work out what API(s) that code is calling.
  4. And what parameters it is passing.

    Which, if they are constants, is fine. But if the test writer has been clever ("efficient") and structured a set of similar tests to loop over some data structure, I've a problem (see the sketch after this list).

    I can't easily drop into the debugger, or even add a few trace statements to display the parameters as the tests loop, cos that output would get thrown away.
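A minimal sketch of the kind of data-driven test loop I mean (the module and field names here are made up). One loop body emits dozens of numbered tests, so "FAILED test 14" points at nothing you can put your finger on in the source:

    use strict;
    use warnings;
    use Test::More tests => 8;
    use My::Class;    # hypothetical module under test

    my %defaults = (
        name   => 'default',
        size   => 0,
        colour => 'black',
        weight => 1,
    );

    my $obj = My::Class->new;

    # Each pass through the loop emits two numbered tests. A report of
    # "FAILED test 14" names neither the field nor the value involved.
    for my $field ( sort keys %defaults ) {
        ok( $obj->can($field), "has accessor $field" );
        is( $obj->$field, $defaults{$field}, "$field defaults to $defaults{$field}" );
    }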

And at this point, all I've done is find the test that failed. Not the code.

The API that is called may be directly responsible, but I still don't know for sure which file it is in.

And it can easily be that the called API is calling some other API(s) internally.

  • And they can be located elsewhere in the same file.
  • Or in another file used by that file and called directly.
  • Or in another file called through 1 or more levels of inheritance.

And maybe the test writer has added informational messages that identify specific tests. And maybe they haven't. If they have, they may be unique, hard-coded constants. Or they could be runtime-generated in the test file, and so unsearchable.
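The difference matters when you try to grep the .t file for the text in the failure report. A sketch, again with hypothetical names:

    use strict;
    use warnings;
    use Test::More tests => 2;
    use My::Class;    # hypothetical module under test

    my $obj = My::Class->new;

    # Hard-coded label: grep the .t file for the literal string shown in
    # the failure report and you land on this line.
    is( $obj->colour, 'black', 'colour defaults to black' );

    # Runtime-generated label: the string "accessor for colour" that the
    # report prints exists nowhere in the source, so grep finds nothing.
    my $field = 'colour';
    is( $obj->$field, 'black', "accessor for $field" );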

And even if they are searchable, they are a piss poor substitute for a simple bloody line number. And they require additional discipline and effort on the part of someone I've never met and who does not work for my organisation.

Line numbers and tracebacks are free, self-maintaining, always available, and unique.

If tests are located near the code they test, when the code fails to compile or fails an assertion, the information takes me directly to the point of failure. All the numbering of tests, and labelling of tests, is just a very poor substitute and costly extra effort to write and maintain--if either or both is actually done at all.
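For comparison, here is a minimal sketch (hypothetical names, using the standard Carp module) of the kind of check-next-to-the-code I mean. When it fires, it reports this file, this line, and a full traceback through every caller, with no numbering or labelling effort from anybody:

    package My::Class;

    use strict;
    use warnings;
    use Carp qw( confess );

    sub set_size {
        my ( $self, $size ) = @_;

        # The check lives next to the code it guards. When it fails,
        # confess() gives the file, the line, and the whole call stack.
        confess "size must be a non-negative integer"
            unless defined $size && $size =~ /^\d+$/;

        $self->{size} = $size;
        return $self;
    }

    1;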

    That said, the cost of writing a line of test for trivial code is pretty minimal, so I often think it's worth it since what starts out trivial (e.g. addition) sometimes blossoms over time into function calls, edge cases, etc. and having the test for the prior behavior is like a safety net in case the trivial becomes more complex.

This is a prime example of the first of the two greatest evils in software development today: what-if pessimism and wouldn't-it-be-nice-if optimism. Writing extra code (especially non-production code) now, "in case it might become useful later", costs both up front and in ongoing maintenance.

And Sod's law (as well as my own personal experience) suggests that the code you think to write now "just in case" is never actually used, though inevitably the piece that you didn't think to write is needed.

Some code needs to cover all the bases and consider every possible contingency. If your code is zooming around a few million miles away, at the other end of a low-speed data link, burned into EPROM, then belt & braces--or even three belts, two sets of braces and a reinforced safety harness--may be legitimate. But very few of us, and a very small fraction of the world's code base, live in such environments.

For the most part, the simplest way to improve the ROI of software is to write less of it! And to target what you must write at those areas where it does the most good.

Speculative, defensive, non-production crutches for future possibilities will rarely, if ever, be exercised, and will almost never produce an ROI. And code that doesn't contribute to the ROI is not just wasted capital investment, but an ongoing drain on maintenance budgets and maintenance-team mind-space.

Far better to expend your time making it easy to locate and correct the real bugs that crop up during systems integration and beta testing than to spend it trying to predict future developments and failure modes. And the single best contribution a test writer can make to that goal is to get the maintenance programmer as close to the source of the failure, when it occurs, as quickly as possible.

Yes. It is possible to speculate about what erroneous parameters might be fed to an API at some point in the future. And it is possible to write a test to pass those values to the code and ensure that the embedded range checks will identify them. But it is also possible to speculate about the earth being destroyed by a meteorite. How are you going to test that? And what would you do if you successfully detected it?

And yes, those are rhetorical questions about an extreme scenario, and there is a line somewhere between what is a reasonable speculative possibility and the extremely unlikely. But that line is far lower down the scale than most people think.
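For concreteness, the kind of speculative range-check test described above might look like this (the method, message, and module are hypothetical, carried over from the earlier sketch). Whether anything will ever actually call the API that way is exactly the speculation in question:

    use strict;
    use warnings;
    use Test::More tests => 1;
    use My::Class;    # hypothetical module under test

    # Speculative "what-if" test: feed a value nobody currently passes
    # and check that the embedded range check rejects it.
    my $obj = My::Class->new;
    eval { $obj->set_size(-1) };
    like( $@, qr/non-negative/, 'set_size rejects negative sizes' );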

Finally, it is a proven and incontestable fact that the single, simplest, cheapest, most effective way to avoid bugs is to write less code. And tests are code. Testing is a science and tests should be designed, not hacked together as after-thoughts (or pre-thoughts).

Better to have 10 well-designed and targeted tests than 100 overlapping, redundant what-ifs.


Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
"Science is about questioning the status quo. Questioning authority".
In the absence of evidence, opinion is indistinguishable from prejudice.

In reply to Re^3: Does anybody write tests first? by BrowserUk
in thread Does anybody write tests first? by amarquis
