Anonymous Monk has asked for the wisdom of the Perl Monks concerning the following question:
I have started working on a new project: building a regression suite. This suite should generate test cases and execute them.
I am unable to understand how one should approach this. I have been reading a few papers on the subject, but I am finding them confusing.
Can anyone please give me some insight into where to begin and how to design the test case generator, or point me to good material to read on the above?
My regression suite will basically test an in-house stand-alone utility that takes a config file as input.
Re: Test Case Generator
by Corion (Pope) on Oct 03, 2012 at 07:36 UTC
It really depends on what you want to test for.
As you already have an existing application, the easiest test is to verify that everything still works by running the program with a set of fixed input files, generating new output files, and comparing them against the existing, known-good output files.
I'm not sure where you have problems with that and what you have tried already, so maybe you can tell us more here?
There is no fairy magic that you can sprinkle over a program to magically generate test cases. Creating test cases requires a human to write down the input parameters and the expected outcome.
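The known-good-output approach described above can be sketched as a small test script. Everything here is hypothetical: the `mytool` binary, its `--config` switch, and the `t/configs` / `t/expected` directory layout are placeholders for whatever the in-house utility actually uses.

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Test::More;

# Each test case pairs a fixed config file with a known-good output file;
# the directory layout and the `mytool` utility are hypothetical.
my @cases = glob 't/configs/*.conf';

for my $conf (@cases) {
    (my $expected_file = $conf) =~ s{configs/(.*)\.conf$}{expected/$1.out};

    # Run the utility under test and capture its output.
    my $got = `./mytool --config $conf`;
    is $?, 0, "$conf: utility exits successfully";

    open my $fh, '<', $expected_file or die "$expected_file: $!";
    my $expected = do { local $/; <$fh> };
    close $fh;

    # The heart of the regression test: new output vs. known-good output.
    is $got, $expected, "$conf: output matches known-good result";
}

done_testing();
```

Adding a new regression test is then just a matter of dropping a config file and its blessed output into the two directories.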
Re: Test Case Generator
by ELISHEVA (Prior) on Oct 04, 2012 at 19:31 UTC
There are several kinds of generated tests I use:
Crash and burn tests: To do this testing you need to have documentation on the valid value ranges for a function or method's parameters. The very act of writing up these tests can point out problems in documentation and incomplete code even before you write the test generation module. There are two types:
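Both flavors can be generated straight from the documented ranges. As a sketch, assuming a hypothetical `My::Widget` class whose `set_level` method documents an integer range of 0..10:

```perl
use strict;
use warnings;
use Test::More;
use Test::Exception;    # provides lives_ok / dies_ok

my $obj = My::Widget->new;    # hypothetical object under test

# Values generated just inside and just outside the documented 0..10 range.
my @valid   = (0, 5, 10);
my @invalid = (-1, 11, 'abc', undef);

lives_ok { $obj->set_level($_) } 'accepts valid value'   for @valid;
dies_ok  { $obj->set_level($_) } 'rejects invalid value' for @invalid;

done_testing();
```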
Consistency tests: consistency tests verify that the output of two function calls/methods (or repeated calls to a single method) are mutually consistent. Some examples:
It should be stressed that the quality of consistency testing is VERY dependent on the initial values of the test object. For instance, if all of the parameters passed to a constructor are the same value and stored unchanged, then the round trip test described above would have little value. The getters all return the same value and they can't be used to verify that data is being stored in the right slots. Automated consistency testing should usually be coupled with (a) a few carefully designed test-pattern objects with handcrafted sets of return values for their function calls. (b) code that sanity tests randomly generated data/objects to make sure that they will create useful test objects. For example, one could verify that each parameter passed to a constructor has a different value.
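A round-trip consistency test with the distinct-value sanity check described above might look like this. The `My::Box` class and its getters are hypothetical stand-ins for a constructor that stores its arguments unchanged:

```perl
use strict;
use warnings;
use Test::More;
use List::Util qw(shuffle);

# Generate constructor arguments that are guaranteed distinct, so the
# round trip can detect a value stored in the wrong slot.
my @distinct = shuffle 1 .. 100;
my %args = (
    width  => $distinct[0],
    height => $distinct[1],
    depth  => $distinct[2],
);

my $box = My::Box->new(%args);    # hypothetical class storing args unchanged

# Round trip: each getter must return the value the constructor was given.
is $box->$_, $args{$_}, "round trip for '$_'" for sort keys %args;

done_testing();
```

Because the three values are drawn without replacement, a constructor that swaps `width` and `height` internally will fail the test instead of passing by coincidence.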
Static/stable result tests: Some methods and functions are expected to return specific values no matter what data is passed to them. Here the generated test generates random input and verifies that it has no effect on the return value. For example, a constructor for a singleton class should return the same object no matter how many times it is called.
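The singleton case can be generated directly: feed `instance` random arguments and assert the returned object never changes. `My::Config` is a hypothetical singleton class:

```perl
use strict;
use warnings;
use Test::More;
use Scalar::Util qw(refaddr);

# Hypothetical singleton: instance() must return the same object no matter
# what random input it receives.
my $canonical = My::Config->instance;

for my $trial (1 .. 50) {
    my @junk = map { int rand 1000 } 0 .. int rand 5;
    is refaddr(My::Config->instance(@junk)), refaddr($canonical),
        "trial $trial: same instance despite random arguments";
}

done_testing();
```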
Environment sensitivity tests: this involves generating perturbations in the environment, e.g. changing environment variable values or other aspects of the execution environment around the object in various ways, to make sure that the object maintains its expected state in a variety of execution contexts. For example, one might want to verify that the object continues to be well behaved regardless of whether it is created and run via the command line or a daemon.
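For the environment-variable flavor of this, Perl's `local` makes it easy to perturb `%ENV` per test and have it restored automatically. The `My::Tool` class and its `report` method are hypothetical:

```perl
use strict;
use warnings;
use Test::More;

# Hypothetical tool whose report() output should not depend on these
# environment variables.
my $baseline = My::Tool->new->report;

for my $var (qw(LANG LC_ALL TZ)) {
    # local restores the original value when this loop iteration ends.
    local $ENV{$var} = 'garbage-' . int rand 1000;
    is My::Tool->new->report, $baseline,
        "output unchanged when $var is perturbed";
}

done_testing();
```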
Load testing: This involves generating various load levels to make sure that the object performs within tolerance ranges at those load levels.
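A minimal load-test sketch, assuming a hypothetical `My::Queue` class and an equally hypothetical tolerance of 2 seconds for 1,000 pushes:

```perl
use strict;
use warnings;
use Test::More;
use Time::HiRes qw(gettimeofday tv_interval);

my $queue = My::Queue->new;    # hypothetical object under load

# Hypothetical tolerance: 1,000 pushes should finish within 2 seconds.
my $t0 = [gettimeofday];
$queue->push($_) for 1 .. 1_000;
my $elapsed = tv_interval($t0);

cmp_ok $elapsed, '<', 2,
    sprintf '1000 pushes took %.3fs (tolerance 2s)', $elapsed;

done_testing();
```

Real load tests usually sweep several load levels and record timings rather than pass/fail a single threshold, but the shape is the same.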
One danger in any automated test generation project is that the test harness itself can be buggy. That's another reason why it is important to combine any generated test suites with hand crafted ones. Then again hand crafted test suites can also be error prone (do I have a bug? 5+4 didn't add up to 10! Oops that was a typo in my expected result!). Each can therefore act as a check on the other.
Re: Test Case Generator
by sundialsvc4 (Abbot) on Oct 03, 2012 at 13:26 UTC
As you can imagine, building a really good test-case requires thought, not automation. You usually need to work carefully with the developer, but in the role of “the devil’s advocate who came from Missouri.”
Perl has mature testing modules such as Test::Most, which of course means that the tests are themselves programs ... so you can use loops and such in writing them. Every CPAN module includes its own test suite (generally, the file names end with “.t”), which you can browse for examples.
IMHO, there are three or so things which a good test-suite should definitely cover:
The developer can’t be the tester. It takes a different mind-set, I think. People definitely tend toward either “writing it” or “blowing holes in it.” In any case, whether consciously or unconsciously, a developer’s own self-written tests (although she should be asked to provide some ...) won’t be complete and will tend to be too kind. The relationship between the two of you of course is not adversarial; you are working toward a single mutual goal, and if both of you do it well, that pager will remain silent all night long. (“Priceless™ ...”)
I think that the job of the tester/QA team is really the most important job in the world because, as I said, only the computer itself can be the guardian of its own correctness. The hardest thing about any flaw is finding it.
Smoke Tests: These are tests which try to simulate what happens when stuff is hitting the fan; that is to say, in actual service. These tests cover every user-accessible or client-accessible control or API, every use-case, and (in the case of UI testing) they do so in an unpredictable but legitimate sequence.
(Some pretentious typography elided). Do you have a reference that supports your definition of smoke test?
In computer programming and software testing, smoke testing is a preliminary to further testing, intended to reveal simple failures severe enough to reject a prospective software release.
From a Stack Overflow question:
Smoke test: A simple integration test where we just check that when the system under test is invoked it returns normally and does not blow up. It is an analogy with electronics, where the first test occurs when powering up a circuit: if it smokes, it's bad.
From SearchWinDevelopment:
Smoke testing is non-exhaustive software testing, ascertaining that the most crucial functions of a program work, but not bothering with finer details.
Update: some more definitions from the stack overflow question:
Smoke testing: first tests on which testers can conclude if they will continue testing.
Smoke testing is done as a quick test to make sure everything looks okay before you get involved in the more vigorous testing.
Of course the developer can be the tester -- What about test driven development?
If a developer isn't able to test their own code, then they're not thinking logically about what they are doing.