xdg wrote
I think I just don't "get" Test::LectroTest.
Maybe this will help.
I wrote LectroTest because I wanted an alternative to traditional,
case-based unit testing that offered markedly different cost, benefit,
and mental models for testing. There are many times when case-based
testing sucks, and for these times LectroTest offers programmers
another option. The two approaches complement each other and can even
be seen as duals.
The LectroTest approach requires programmers to be explicit about what
their programs are supposed to do. Programmers must write property
specifications that define the required behaviors of their
programs. Then LectroTest uses random sampling to automate the process
of gathering evidence to support (or refute) the claims made by the
property specifications.
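To make the idea concrete, here is a hand-rolled sketch of the property-checking loop described above. This is plain Perl, not the actual Test::LectroTest API: the property is just a predicate, and the sampling range and trial count are arbitrary choices for illustration.

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Sketch of the idea: state a property as a predicate, then let the
# computer hammer it with random inputs looking for a counterexample.
# (Illustrative only -- Test::LectroTest's real interface differs.)
sub check_property {
    my ( $name, $property, $trials ) = @_;
    for ( 1 .. $trials ) {
        my $x = int( rand(2000) ) - 1000;    # random sample from -1000..999
        return "$name falsified for x=$x" unless $property->($x);
    }
    return "$name held for $trials trials";
}

# Property: squaring any integer yields a non-negative number.
my $result = check_property(
    "square is non-negative",
    sub { my ($x) = @_; $x * $x >= 0 },
    1000
);
print "$result\n";
```

Random sampling gathers evidence for the claim; a single failing sample refutes it outright.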
Case-based testing, on the other hand, requires programmers to
write individual test cases that each provides incremental evidence
for (or against) some implied claim of overall correctness. Together,
the test cases represent an implicit definition of correctness, but
such definitions are usually difficult to extrapolate from
the cases and often are nebulous and incomplete, which isn't necessarily a
bad thing: in real life, formal notions of correctness may be hard
to define.
A table makes the salient differences easy to see:
|                           | LectroTest                             | Case-based testing                  |
|---------------------------|----------------------------------------|-------------------------------------|
| Definition of correctness | Explicit (via hand-written properties) | Implicit (via manual extrapolation) |
| Test cases                | Implicit (automatically generated)     | Explicit (written by hand)          |
Which approach is best depends on what you are doing. As a rule of
thumb, if you can easily specify what a piece of code ought to do,
there's a good chance that LectroTest will be a great way to test that
code. If, however, you are working on code for which correctness is a
difficult concept to formalize, case-based testing will probably
be the more fruitful approach.
Cheers, Tom
I think your critique of my simplistic example is valid, and I suggest looking at the tutorial mentioned at the end of this node for a better demonstration of specification-based testing. My point was to use tcon->label() in a hackish way to find a distribution. Please file my ramblings under TIMTOWTDI :)
On reflection, maybe the point of Test::LectroTest is to try to expose the edge cases in your dependencies outside your own conditionals -- sqrt and division by zero come to mind. But I'd call it "stress testing" in that case and suggest that it is different from the way the term "testing" is usually meant in the various perl test suites. It doesn't tell you that your code is correct, only that it hasn't been shown to be incorrect for some number of trials.
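The "stress testing" reading above can be sketched in a few lines of plain Perl: random trials stumble onto an edge case (here, division by zero) that sits outside the function's own conditionals. The function name and sampling range are made up for illustration.

```perl
#!/usr/bin/perl
use strict;
use warnings;

# A deliberately unguarded function: dies when $b == 0.
sub ratio {
    my ( $a, $b ) = @_;
    return $a / $b;
}

# Random trials don't prove ratio() correct; they only record which
# inputs have (or haven't) been shown to break it.
my @failures;
for ( 1 .. 1000 ) {
    my ( $a, $b ) = ( int( rand(21) ) - 10, int( rand(21) ) - 10 );
    eval { ratio( $a, $b ) };
    push @failures, [ $a, $b ] if $@;
}

# With $b drawn from -10..10, roughly 1 in 21 trials hits $b == 0.
printf "%d of 1000 trials exposed a failure\n", scalar @failures;
```

As the post says, a clean run only means the code hasn't been shown to be incorrect for that number of trials.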
I think that observation is correct. Specification-based testing, for which Test::LectroTest provides a framework, is based on the idea that you formulate the constraints and then leave it to the computer to try to violate your assumptions within those constraints. This is one testing tool among many, and I find the technique useful, if only to allow myself to be humbled by my machine from time to time.
If you haven't seen tmoertel's (Test::LectroTest's author) excellent tutorial, I suggest taking a look. It demonstrates why manually testing edge cases is, in some cases, not enough.
Thank you for your comments!
pernod
--
Mischief. Mayhem. Soap.
I hadn't seen the tutorial, but the presentation had the same angular differences example. I see the point, but don't find it compelling because of the contrived nature of the manual testing. E.g. the "bad" manual example only uses positive numbers, and never bothers to test the obvious edge case:
return abs($a - $b) % 180
The edge case lies on either side of the modulo-180 boundary, and a quick examination of the code (without even testing) shows that it can never produce an angular difference of 180 degrees. Even ignoring the code for a moment, the real edge cases that thoughtful manual testing should have checked are the edges of acceptable output -- zero angular difference and 180 degrees of angular difference.
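For reference, one plausible repair (my own sketch, not the tutorial's fix) reduces modulo 360 first and then folds the result, so the 180-degree boundary is actually reachable:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Angular difference in 0..180: reduce mod 360, then fold the upper half.
# Unlike abs($a - $b) % 180, this can return exactly 180.
sub angdiff {
    my ( $a, $b ) = @_;
    my $d = abs( $a - $b ) % 360;
    return $d > 180 ? 360 - $d : $d;
}

# The edges of acceptable output, as argued above:
print angdiff( 90, 90 ),  "\n";    # 0   (zero angular difference)
print angdiff( 0, 180 ),  "\n";    # 180 (the case % 180 can never produce)
print angdiff( 350, 10 ), "\n";    # 20  (wrap-around across 0/360)
```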
At a certain point in the tutorial, the author refines the problem as so:
If you think about it, our recipe above is actually a specification of a general property that our implementation must hold to: "For all angles a and for all angles diff in the range -180 to 180, we assert that angdiff($a, $a + $diff) must equal abs($diff)."
Testing differences of -180, -1, 0, 1, and 180 is sufficient -- the random testing in between doesn't add additional information. (And this principle extends to the later example of differences greater than 180 or even 360 degrees.) My point is that if you understand the problem space well enough and specify the expectation well enough, ordinary tests are easily sufficient. So you can use Test::LectroTest, or just this:
for ( -180, -1, 0, 1, 180 ) {
    is( angdiff(0, $_), abs $_, "angdiff (0,$_)" );
}
Let me be fair -- I think Test::LectroTest could be a very useful tool for exploring a poorly understood problem space by generating lots of test cases for examination, but I wouldn't use it as a first-line-of-defense testing tool.
-xdg
Code written by xdg and posted on PerlMonks is public domain. It is provided as is with no warranties, express or implied, of any kind. Posted code may not have been tested. Use of posted code is at your own risk.
My point is that if you understand the problem space well enough and specify the expectation well enough, ordinary tests are easily sufficient.
If you understand the problem well enough, you might as well try to prove mathematically that your program conforms to the specification, and dispense with the tests altogether.