At my work we had a very loose testing set-up, which mostly consisted of manually checking test program output and following extensive trace/debug messages (to trace program flow). It has worked for us for a while now, but it has never been efficient by any stretch of the imagination. There was, however, a sense of security (probably a false one, really) that our eyes had pored over this and we were all sure the programs worked as advertised. But recently I discovered Test::More while preparing my first CPAN module, and I have to say it was love at first sight.
My first module on CPAN had 37 tests, the second 448. I have also recently started to convert our in-house (eventually to be released) OO-framework over to Test::More style testing, and I currently have 2452 tests (spread over 91 files; the framework itself is approx. 150 modules), and that's only testing the interfaces, not the implementations yet.
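For anyone who hasn't seen it, Test::More style testing looks roughly like this (the add() function here is made up purely for illustration):

```perl
use strict;
use warnings;
use Test::More tests => 3;

# A made-up function so there is something to test.
sub add { my ($x, $y) = @_; return $x + $y }

ok(defined &add, 'add() is defined');
is(add(2, 3),  5, 'add() sums two numbers');
is(add(-1, 1), 0, 'add() handles negatives');
```

Each test reports ok/not ok, and the declared plan (tests => 3) catches tests that silently never ran.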
My question/point-of-meditation for the group is:
- How much testing is too much?
... and more specifically
- Should you ever assume anything in your tests?
- Are redundant tests evil? Even if they don't cause extra work on the test writer's part (they just happen as a result of using subs in your tests)?
Here is a list of some of the things I have been doing, which make sense to me, but I wonder if I have just gone testing-slap-happy.
- I test every method with can_ok before it's called, and I (re)test the same method on each instance I create. Sometimes I actually set up test_Object_Interface functions and use them to test subclasses (which is the source of many of the duplicate tests).
- I test all exceptions that can be thrown (with Test::Exception) by feeding the code false parameters and checking what I get.
- I test all my constants using can_ok and then test their value.
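The shared interface-test-sub pattern from the first point might be sketched like this (the My::Stack class and its methods are made up here; a real test_Object_Interface would check whatever the base class promises):

```perl
use strict;
use warnings;
use Test::More tests => 5;

# A tiny made-up class to exercise the pattern.
package My::Stack;
sub new  { bless { items => [] }, shift }
sub push { my $self = shift; push @{ $self->{items} }, @_ }
sub pop  { my $self = shift; pop @{ $self->{items} } }
sub peek { $_[0]->{items}[-1] }

package main;

# Shared interface checker: call this once for the base class and again
# for every subclass instance, so each one proves it honours the
# interface. This is exactly where the "duplicate" tests come from.
sub test_Stack_interface {
    my ($object) = @_;
    isa_ok($object, 'My::Stack');
    can_ok($object, $_) for qw(push pop peek);
}

my $stack = My::Stack->new;
test_Stack_interface($stack);    # 4 tests: isa_ok + 3 x can_ok
$stack->push(42);
is($stack->peek, 42, 'peek sees the pushed value');
```

A subclass's test file would just call test_Stack_interface on its own instances, re-running the same checks for free.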
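For the exception tests, Test::Exception's throws_ok wraps the check up neatly; the same thing can also be done with core Test::More alone by trapping the die in an eval. A sketch, with a made-up set_age() validator:

```perl
use strict;
use warnings;
use Test::More tests => 2;

# Made-up validator that dies on bad input.
sub set_age {
    my ($age) = @_;
    die "age must be a non-negative integer\n"
        unless defined $age && $age =~ /^\d+$/;
    return $age;
}

# With Test::Exception (CPAN) this would be:
#   throws_ok { set_age('ten') } qr/non-negative integer/, '...';
# The core-only equivalent traps the die and inspects $@:
eval { set_age('ten') };
like($@, qr/non-negative integer/, 'rejects a non-numeric age');

eval { set_age(undef) };
like($@, qr/non-negative integer/, 'rejects undef');
```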
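Testing constants with can_ok works because constants made with the constant pragma are just subs; a minimal sketch with a made-up My::Config package:

```perl
use strict;
use warnings;
use Test::More tests => 2;

# Made-up package holding a constant (use constant installs a sub,
# so can_ok sees it like any other method).
package My::Config;
use constant MAX_RETRIES => 5;

package main;

can_ok('My::Config', 'MAX_RETRIES');
is(My::Config->MAX_RETRIES, 5, 'MAX_RETRIES has the expected value');
```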
I have collected some other good testing-related nodes, links, etc. as well (and I am sure there are more), all of which have lately given me much insight into how to test. But none seemed to suggest how much to test (or when it becomes too much).