Test Case Generator

by Anonymous Monk
on Oct 03, 2012 at 07:26 UTC

Anonymous Monk has asked for the wisdom of the Perl Monks concerning the following question:

Hello Monks,

I have started working on a new project: building a regression suite. The suite should generate test cases and execute them.
I am unable to understand how to approach this. I have been reading a few papers on the subject, but I find them confusing.
Can anyone give me some insight into where to begin and how to design the test case generator, or point me to good material to read on the topic?
My regression suite will basically test an in-house, stand-alone utility that takes a config file as input.

Replies are listed 'Best First'.
Re: Test Case Generator
by Corion (Patriarch) on Oct 03, 2012 at 07:36 UTC

    It really depends on what you want to test for.

    As you already have an existing application, the easiest test is to verify that everything still works: run the program with a set of fixed input files, create new output files, and compare them against the existing, known-good output files.
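    A minimal sketch of that golden-file approach, assuming a hypothetical ./my_utility that takes a --config flag and writes its results to STDOUT (adjust names and flags to the real tool):

        #!/usr/bin/perl
        use strict;
        use warnings;
        use Test::More;

        # Assumed layout: t/cases/<name>.conf paired with t/expected/<name>.out
        my @cases = glob 't/cases/*.conf';
        plan tests => scalar @cases;

        sub slurp {
            my ($path) = @_;
            open my $fh, '<', $path or die "Cannot read $path: $!";
            local $/;                              # read the whole file at once
            return <$fh>;
        }

        for my $conf (@cases) {
            (my $name = $conf) =~ s{^.*/|\.conf$}{}g;
            my $got      = `./my_utility --config $conf`;   # hypothetical command line
            my $expected = slurp("t/expected/$name.out");
            is( $got, $expected, "output unchanged for $name" );
        }

    Each known-good file only has to be produced and eyeballed once; after that, the suite flags any change in behaviour.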

    I'm not sure where you have problems with that and what you have tried already, so maybe you can tell us more here?

      Hey,
      Thanks for the prompt reply! Well, right now I do not have access to the manually tested, verified setup files. I will first have to make sure that every feature of the stand-alone utility works and matches the outcome expected by the developer.
      I am working with the same approach in mind that you describe: verifying the output of each setup file. Where I am stuck is the automated creation of test cases.
      How does this happen in commercially available testing tools?
      How can I generate a large number of test cases for a feature of the stand-alone application?

        There is no fairy magic that you can sprinkle over a program to magically generate test cases. Creating test cases requires a human to write down the input parameters and the expected outcome.
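        In practice that usually means a hand-written table of cases that a small driver loops over; a sketch, with a made-up add() standing in for the real code under test:

            use strict;
            use warnings;
            use Test::More;

            sub add { return $_[0] + $_[1] }    # stand-in for the real code under test

            # A human writes these down: the input parameters and the expected outcome.
            my @cases = (
                { args => [ 2, 3 ],  want => 5, name => 'small positives' },
                { args => [ -1, 1 ], want => 0, name => 'cancel out'      },
                { args => [ 0, 0 ],  want => 0, name => 'all zero'        },
            );

            plan tests => scalar @cases;
            is( add( @{ $_->{args} } ), $_->{want}, $_->{name} ) for @cases;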

Re: Test Case Generator
by ELISHEVA (Prior) on Oct 04, 2012 at 19:31 UTC

    There are several kinds of generated tests I use:

    Crash and burn tests: To do this testing you need to have documentation on the valid value ranges for a function or method's parameters. The very act of writing up these tests can point out problems in documentation and incomplete code even before you write the test generation module. There are two types:

    1. You randomly generate data within the valid range for each parameter that can be passed to a function, then pass the randomly generated data to the function to see whether it dies even though good data is being passed. I also like to pass boundary-condition data in addition to mid-range randomly generated data.
    2. If the function is supposed to be doing its own bounds checking, you generate out-of-bounds data to make sure it DOES die. If it doesn't throw the expected error message or return value, it fails the test. (A sketch of both kinds follows this list.)
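    A minimal sketch of both kinds, assuming a hypothetical set_level() that documents a valid range of 0..100 and is supposed to die on anything outside it:

        use strict;
        use warnings;
        use Test::More;

        # Hypothetical function under test: accepts 0..100, dies otherwise.
        sub set_level {
            my ($n) = @_;
            die "level out of range\n" if $n < 0 or $n > 100;
            return $n;
        }

        # 1. In-range data: the boundaries plus randomly generated mid-range values.
        my @good = ( 0, 100, map { int rand 101 } 1 .. 20 );
        for my $n (@good) {
            my $survived = eval { set_level($n); 1 };
            ok( $survived, "does not die on valid input $n" );
        }

        # 2. Out-of-bounds data: the function MUST die, with the documented message.
        my @bad = ( -1, 101, map { 101 + int rand 1000 } 1 .. 20 );
        for my $n (@bad) {
            eval { set_level($n) };
            like( $@, qr/out of range/, "dies as documented on invalid input $n" );
        }

        done_testing();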

    Consistency tests: consistency tests verify that the outputs of two function/method calls (or of repeated calls to a single method) are mutually consistent. Some examples:

    • Making sure that two successive calls to a toggle function return the original value.
    • Verifying that $oFoo->isWidget($oAllegedWidget) returns true if $oAllegedWidget is a member of the array returned by $oFoo->getAllWidgets().
    • Round trip testing. For example, one could call all the getters of an object to get its data and then pass the data to the constructor to create a new object. Then one verifies that all of the getters on the old and new objects return the same values. This is a good way to make sure that constructors are storing data in the slots they are supposed to be stored in. (A sketch follows this list.)
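    A sketch of that round trip, using a hypothetical Widget class whose getters mirror its constructor arguments:

        use strict;
        use warnings;
        use Test::More;

        # Hypothetical class whose getters mirror its constructor arguments.
        package Widget {
            sub new    { my ( $class, %args ) = @_; return bless {%args}, $class }
            sub name   { $_[0]{name} }
            sub size   { $_[0]{size} }
            sub colour { $_[0]{colour} }
        }

        # Deliberately distinct values, so a mixed-up slot would be detectable.
        my %args = ( name => 'sprocket', size => 42, colour => 'teal' );

        my $old  = Widget->new(%args);
        my %read = map { $_ => $old->$_() } keys %args;    # getters -> data
        my $new  = Widget->new(%read);                     # data -> new object

        is( $new->$_(), $old->$_(), "round trip preserves '$_'" ) for keys %args;
        done_testing();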

    It should be stressed that the quality of consistency testing is VERY dependent on the initial values of the test object. For instance, if all of the parameters passed to a constructor are the same value and stored unchanged, then the round trip test described above would have little value: the getters all return the same value, so they can't be used to verify that data is being stored in the right slots. Automated consistency testing should usually be coupled with (a) a few carefully designed test-pattern objects with hand-crafted sets of return values for their function calls, and (b) code that sanity-checks randomly generated data/objects to make sure that they will create useful test objects. For example, one could verify that each parameter passed to a constructor has a different value.

    Static/stable result tests: Some methods and functions are expected to return specific values no matter what data is passed to them. Here the generated test generates random input and verifies that it has no effect on the return value. For example, a constructor for a singleton class should return the same object no matter how many times it is called.
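    For example, a generated test for a hypothetical Config::Single singleton might hammer the constructor with random junk and insist that the same instance always comes back:

        use strict;
        use warnings;
        use Test::More;
        use Scalar::Util qw(refaddr);

        # Hypothetical singleton: instance() ignores its arguments entirely.
        package Config::Single {
            my $self;
            sub instance { my $class = shift; $self //= bless {}, $class; return $self }
        }

        my $first = Config::Single->instance();
        plan tests => 20;
        for my $i ( 1 .. 20 ) {
            my $again = Config::Single->instance( junk => rand(), attempt => $i );
            is( refaddr($again), refaddr($first), "call $i returns the same instance" );
        }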

    Environment sensitivity tests: this involves generating perturbations in the environment, e.g. changing environment variable values or other aspects of the execution environment around the object, to make sure that the object maintains its expected state in a variety of execution contexts. For example, one might want to verify that the object continues to be well behaved regardless of whether it is created and run from the command line or by a daemon.
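    One cheap way to generate such perturbations in Perl is to localise %ENV inside the test loop; a sketch, with a hypothetical greeting() that is not supposed to care about the environment around it:

        use strict;
        use warnings;
        use Test::More;

        # Hypothetical function that should be insensitive to its environment.
        sub greeting { return 'hello' }

        my @perturbations = (
            {},                                  # pristine environment
            { LANG   => 'de_DE.UTF-8' },
            { TMPDIR => '/nonexistent' },
            { PATH   => '' },
        );

        plan tests => scalar @perturbations;

        for my $env (@perturbations) {
            local %ENV = ( %ENV, %$env );        # perturb; restored automatically
            my $label = join( ',', keys %$env ) || 'defaults';
            is( greeting(), 'hello', "stable under $label" );
        }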

    Load testing: This involves generating various load levels to make sure that the object performs within tolerance ranges at those load levels.
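    A crude sketch of that idea with Time::HiRes, assuming a hypothetical process_batch() workload and tolerance figures chosen purely for illustration:

        use strict;
        use warnings;
        use Test::More;
        use Time::HiRes qw(gettimeofday tv_interval);

        # Hypothetical workload; replace with a call into the real utility.
        sub process_batch { my ($n) = @_; my $x = 0; $x += sqrt $_ for 1 .. $n; return $x }

        my %tolerance = ( 1_000 => 0.5, 100_000 => 2.0 );    # seconds, illustrative only
        plan tests => scalar keys %tolerance;

        for my $load ( sort { $a <=> $b } keys %tolerance ) {
            my $t0 = [gettimeofday];
            process_batch($load);
            my $elapsed = tv_interval($t0);
            cmp_ok( $elapsed, '<', $tolerance{$load}, "load $load finishes within $tolerance{$load}s" );
        }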

    One danger in any automated test generation project is that the test harness itself can be buggy. That's another reason why it is important to combine any generated test suites with hand-crafted ones. Then again, hand-crafted test suites can also be error prone (do I have a bug? 5+4 didn't add up to 10! Oops, that was a typo in my expected result!). Each can therefore act as a check on the other.

Re: Test Case Generator
by sundialsvc4 (Abbot) on Oct 03, 2012 at 13:26 UTC

    As you can imagine, building a really good test case requires thought, not automation. You usually need to work carefully with the developer, but in the role of “the devil’s advocate who came from Missouri.”

    Perl has well-established testing modules such as Test::Most, which of course means that the tests are themselves programs ... so you can use loops and such in writing them. Every CPAN module includes its own test suite (generally, the file names end with “.t”), which you can and definitely should use for ideas and examples.
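    Because a “.t” file is just a Perl program, loops and data structures are fair game; a minimal sketch using Test::Most (the file name t/config.t is only an example):

        #!/usr/bin/perl
        # t/config.t -- hypothetical test file name
        use strict;
        use warnings;
        use Test::Most;    # bundles Test::More, Test::Exception, Test::Deep, ...

        # Tests are ordinary Perl, so generate variations in a loop.
        for my $size ( 1, 10, 100 ) {
            subtest "config with $size entries" => sub {
                my %config = map { "key$_" => $_ } 1 .. $size;
                is( scalar keys %config, $size, 'expected number of entries' );
                ok( exists $config{key1}, 'first key present' );
            };
        }
        done_testing();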

    IMHO, there are three or so things which a good test-suite should definitely cover:

    • Normal Cases: Every single one of them, every variation that could be encountered under normal conditions with valid inputs in normal operational situations. Build up your tests from simple unit tests of expected atomic behaviors to more complex scenarios involving units working correctly together (by which point you can be confident that the primitive operations they invoke do work properly).

    • Edge Cases and Error Cases: The computer, itself, must be the guardian of its own correctness, because there is no one else who could possibly do so. Test the boundaries between right and wrong: look for that “<=” condition that was only supposed to be “<” ... divide by zero and see what happens. Stuff incorrect and almost-incorrect inputs in, and see what happens. Write tests that explicitly determine what should happen in those error cases: the test fails if it doesn’t throw an exception, or if it throws the wrong one. (A sketch of such a test follows this list.)

    • Sh*t Smoke Tests: These are tests which try to simulate what happens when stuff is hitting the fan; that is to say, in actual service.   These tests cover every user-accessible or client-accessible control or API, every use-case, and (in the case of UI testing) they do so in an unpredictable but legitimate sequence.   In particular, they are looking for holes.   The things that have a 100% probability of occurring when Steve Jobs (RIP), Steve Ballmer, Larry Ellison or Bill Gates is performing a demo for the world.
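    For the error-case bullet above, Test::Exception (bundled by Test::Most) makes “it must throw, and the right thing” explicit; a sketch around a hypothetical divide():

        use strict;
        use warnings;
        use Test::More;
        use Test::Exception;

        # Hypothetical function with a documented error case.
        sub divide {
            my ( $x, $y ) = @_;
            die "division by zero\n" if $y == 0;
            return $x / $y;
        }

        is( divide( 10, 2 ), 5, 'normal case still works' );

        # The test fails if divide() does NOT die, or dies with the wrong message.
        throws_ok { divide( 1, 0 ) } qr/division by zero/,
            'divide by zero throws the documented error';

        done_testing();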

    The developer can’t be the tester. It takes a different mind-set, I think. People definitely tend either toward “writing it” or “blowing holes in it.” In any case, whether consciously or unconsciously, a developer’s own self-written tests (although she should be asked to provide some ...) won’t be complete and will tend to be too kind. The relationship between the two of you of course is not adversarial; you are working toward a single mutual goal, and if both of you do it well, that pager will remain silent all night long. (“Priceless™ ...”)

    I think that the job of the tester/QA team is really the most important job in the world, because, as I said, only the computer itself can be the guardian of its own correctness. The hardest thing about any flaw is finding it. New flaws can and will be introduced by anyone at any time. The only assurance you have that the code is still correct is that battery of tests ... which, as you can see from significant CPAN modules, might number in the thousands. (If you want to know the major reason why the Perl system is thought of as a reliable tool of this industry, you’re looking at why.)

      Smoke Tests: These are tests which try to simulate what happens when stuff is hitting the fan; that is to say, in actual service. These tests cover every user-accessible or client-accessible control or API, every use-case, and (in the case of UI testing) they do so in an unpredictable but legitimate sequence.

      (Some pretentious typography elided). Do you have a reference that supports your definition of smoke test?

      From Smoke testing (wikipedia):

      In computer programming and software testing, smoke testing is a preliminary to further testing, intended to reveal simple failures severe enough to reject a prospective software release.
      From stack overflow question:
      Smoke test: A simple integration test where we just check that when the system under test is invoked it returns normally and does not blow up. It is an analogy with electronics, where the first test occurs when powering up a circuit: if it smokes, it's bad.
      From SearchWinDevelopment:
      Smoke testing is non-exhaustive software testing, ascertaining that the most crucial functions of a program work, but not bothering with finer details.

      Update: some more definitions from the stack overflow question:

      Smoke testing: first tests on which testers can conclude if they will continue testing.

      Smoke testing is done as a quick test to make sure everything looks okay before you get involved in the more vigorous testing.

      Of course the developer can be the tester -- What about test driven development?

      If a developer isn't able to test their own code, then they're not thinking logically about what they are doing.
