No such thing as a small change
PerlMonks
Depends why loading fails. If Q.pm parses OK and executes OK but returns false, then the use_ok test will fail but the other twenty tests will still pass. None of your tests cover the situation where Q.pm returns false because you never attempt to "use Q" or "require Q". Nothing compels you to put a BEGIN { ... } block around it, but as a matter of style (in both test suites and regular code) I tend to make sure all modules load at compile time unless I'm purposefully deferring the load for a specific reason.
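A quick sketch of the failure mode in question. Here a hypothetical Q.pm is written to a temporary directory; it compiles and executes cleanly but ends with a false value, so require (and therefore use, and use_ok) fails even though every sub in it works:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use File::Temp qw(tempdir);

# Hypothetical module for illustration: compiles fine, but returns false.
my $dir = tempdir(CLEANUP => 1);
open my $fh, '>', "$dir/Q.pm" or die $!;
print $fh "package Q;\nsub n { 42 }\n0;   # oops - false return value\n";
close $fh;

unshift @INC, $dir;
my $ok  = eval { require Q; 1 };
my $err = $@;
print $ok ? "loaded OK\n" : "load failed: $err";
```

Any test that never attempts to load the module this way sails straight past the problem, which is exactly why a compile-time `use_ok` (or plain `use`) belongs in the suite.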
No it doesn't. It checks that the returned object is of the same class as the name supplied, or a descendant class in an inheritance hierarchy. This still allows you to return Q::Win32 or Q::Nix objects depending on the current OS, provided that they both inherit from Q. To have a class method called "new" in Q which returns something other than an object that "isa" Q would be bizarre and likely to confuse users. Bizarreness is worth testing against. Tests don't just have to catch bugs - they can catch bad ideas. But it can catch bugs anyway. Imagine Q/Win32.pm contains:

    my @ISA = ('Q');

Oops! That should be our @ISA. This test catches that bug.
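A self-contained sketch of that bug (the Q, Q::Win32 and Q::Nix packages are stand-ins for the hypothetical modules under discussion). A lexical `my @ISA` is invisible to Perl's method resolution, so the "subclass" silently fails to inherit:

```perl
use strict;
use warnings;

{
    package Q;
    sub new { bless {}, shift }
}
{
    package Q::Win32;
    my @ISA = ('Q');    # BUG: lexical @ISA - method resolution never sees it
}
{
    package Q::Nix;
    our @ISA = ('Q');   # correct: the package variable @ISA
}

my $broken = bless({}, 'Q::Win32')->isa('Q');   # false
my $fixed  = bless({}, 'Q::Nix')->isa('Q');     # true
printf "Q::Win32 isa Q? %s\n", $broken ? 'yes' : 'no';
printf "Q::Nix   isa Q? %s\n", $fixed  ? 'yes' : 'no';
```

An isa_ok on the object returned by the broken class fails immediately, which is the point being made above.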
Notice none of my further tests touch the "n" method? Well, at least its existence is tested for. If for some reason during a refactor it got renamed, this test would fail, and remind me to update the documentation.

I don't think any of your tests check the "n" method either. If you accidentally removed it during a refactor, end users might get a nasty surprise. A can_ok test is essentially a pledge that you're not going to remove a method, or at least not without some deliberate decision process.

Use of a formalised testing framework can act as a contract - not necessarily in the legal sense - between the developer and the end users. It's a statement of intention: this is how my software is expected to work; if you're relying on behaviour that's not tested here, then you're on dangerous ground; if I deviate from this behaviour in future versions, it will only have been after careful consideration, and hopefully documentation of the change.

☆ ☆ ☆

Overall, most of your complaints about Test::More seem to revolve around three core concerns:
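The "pledge" can be that small. A sketch, with a minimal stand-in for the hypothetical Q module so it runs standalone:

```perl
use strict;
use warnings;
use Test::More tests => 2;

# Minimal stand-in for the hypothetical Q module.
{
    package Q;
    sub new { bless {}, shift }
    sub n   { 42 }
}

# One can_ok covers several methods and counts as a single test.
can_ok('Q', qw(new n));
is(Q->new->n, 42, 'n still behaves as expected');
```

If a refactor drops or renames "n", the can_ok line fails before any user notices.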
Verbosity of output has never been an issue for me. The "prove" command (bundled with Perl since 5.8.x) gives you control over the granularity of result reporting: one line per test, one line per file, or just a summary for the whole test suite. Yes, you get more lines when a test fails, but as a general rule most of your tests should not be failing, and when they do, you typically want to be made aware of it as loudly as possible.

The fact that test running continues after a failure I regard as a useful feature. Some test files are computationally expensive to run. If, after a lot of calculation, a minor test of limited importance fails, I still want to be able to see the results of the tests following it, so if there are any more failures I can fix them all before re-running the expensive test file. If a particular test is so vital that you think the test file should bail out when it fails, it's not especially difficult to append or BAIL_OUT($reason) to the end of that test.
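For example (db_available is a hypothetical precondition, stubbed here so the sketch runs standalone):

```perl
use strict;
use warnings;
use Test::More tests => 2;

# Hypothetical precondition; stubbed so this sketch is self-contained.
sub db_available { 1 }

ok(db_available(), 'database reachable')
    or BAIL_OUT('no database - the remaining tests cannot mean anything');

ok(1, 'an expensive later test');
```

If the vital test passes, the run continues as normal; if it fails, BAIL_OUT aborts the whole suite rather than wasting time on tests that are doomed anyway.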
Test::Most offers the facility to make all tests bail out on failure, but I've never really used Test::Most.

One man's "forced to jump through hoops" is another man's "saved from writing repetitive code". new_ok saves me from writing:
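Roughly this - a sketch assuming the same hypothetical Q class as above:

```perl
use strict;
use warnings;
use Test::More tests => 2;

{ package Q; sub new { bless {}, shift } }

# The long way: construct, then check the class.
my $obj = Q->new;
isa_ok($obj, 'Q');

# What new_ok boils that down to (it also diagnoses construction failures):
my $obj2 = new_ok('Q');
```

One line instead of two, and the diagnostics on failure are better than anything I'd bother writing by hand.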
Ultimately, if I did ever feel like a particular set of tests wasn't a natural fit for Test::More, there would be nothing to stop me sticking a few non-TAP scripts into my distro's "t" directory, provided I didn't name them with a ".t" at the end. They can live in the same directory structure as my other tests; they just won't get run by "prove" or "make test", and won't be reported on by CPAN testers. It doesn't have to be an either/or situation.

In reply to Re^3: Testing methodology
by tobyink