
Optional modules for tests?

by sutch (Curate)
on Sep 06, 2003 at 02:11 UTC ( #289404=perlquestion )

sutch has asked for the wisdom of the Perl Monks concerning the following question:

I've created a module whose tests rely on Struct::Compare. Struct::Compare is not included with the default Perl installation, so my module must either require it or skip the tests that rely on it.

I'm looking for suggestions on how to handle this. Is it better to require modules that are only needed for testing, or to skip the tests with a message to the user?

How can tests be skipped (preferably, using Test::More) when a module is not available?

Replies are listed 'Best First'.
Re: Optional modules for tests?
by tachyon (Chancellor) on Sep 06, 2003 at 06:33 UTC
        # Test::More
        SKIP: {
            eval { require Some::Module };
            skip "Some::Module not installed", 2 if $@;

            my $obj = Some::Module->new;
            isa_ok( $obj, "Some::Module" );
            $obj->parse( $stuff );
            is( $obj->errors, 0, "No errors found" );
        }

        # Test syntax
        eval { require Some::Module };
        if ( $@ ) {
            skip( 'Some::Module not installed' );
        }
        else {
            ok( $some_test );
        }




      Thanks for the response.

      I've been experimenting with this and have minimized my test code to the following:

        use Test::More tests => 1;

        SKIP: {
            eval { require Data::Dummy };
            skip "Data::Dummy not installed", 2 if $@;
            ok( 1 );
        }
      This works fine when the required module is present, but in the case of a non-existent module (such as Data::Dummy above), the following is reported:
        C:\>nmake test

        Microsoft (R) Program Maintenance Utility Version 1.50
        Copyright (c) Microsoft Corp 1988-94. All rights reserved.

            C:\Perl\bin\perl.exe "-MExtUtils::Command::MM" "-e" "test_harness(0, 'blib\lib', 'blib\arch')" t\test.t
        t\test....# Looks like you planned 1 tests but ran 1 extra.
        t\test....dubious
            Test returned status 1 (wstat 256, 0x100)
        DIED. FAILED test
            Failed 0/1 tests, 100.00% okay (less 2 skipped tests: -1 okay, -100.00%)
        Failed Test  Stat Wstat Total Fail  Failed  List of Failed
        -------------------------------------------------------------------------------
        t\test.t        1   256     1    0   0.00%
        2 subtests skipped.
        Failed 1/1 test scripts, 0.00% okay. -1/1 subtests failed, 200.00% okay.
        NMAKE : fatal error U1077: 'C:\WINDOWS\system32\cmd.exe' : return code '0x2'
        Stop.
      I don't understand why this would complain of skipping two subtests, nor why it says that I planned 1 test but ran 1 extra. Is this what normally happens when tests are skipped?
        skip "Data::Dummy not installed", 2 if $@; # ^ ^ # MESSAGE, NUM_TESTS_TO_SKIP

        You are only skipping 1 test, but you are telling Test::More that you are skipping 2, while your plan was to run only 1 test.

        skip "Data::Dummy not installed", 1 if $@; ^ ^

        Will work just fine ... I even tested it ;-). Test::Harness hates a plan that doesn't match the number of tests reported. See my Autogenerate Test Scripts for a neat little widget that will renumber your tests and fix your plan, as well as writing most of the test code for you.
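        Putting that fix into the full script from earlier in the thread, the corrected test file would read:

        ```perl
        use Test::More tests => 1;

        SKIP: {
            # Data::Dummy is deliberately a module that does not exist
            eval { require Data::Dummy };

            # the skip count (1) now matches both the plan and the
            # number of tests inside the SKIP block
            skip "Data::Dummy not installed", 1 if $@;

            ok( 1 );
        }
        ```

        With the skip count equal to the plan, Test::Harness reports the test as skipped rather than as a failure.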




Re: Optional modules for tests?
by adrianh (Chancellor) on Sep 06, 2003 at 13:58 UTC
    I'm looking for suggestions on how to handle this. Is it better to require modules that are only needed for testing, or to skip the tests with a message to the user?

    Three options:

    1. Include Struct::Compare in the PREREQ_PM of your module's Makefile.PL.
    2. If it's a pure-Perl module, package it with your tests: stick it in t/lib and add an appropriate use lib line to the test scripts that need it.
    3. Skip the tests. If you need to skip a few tests in a test script, you can use the method tachyon showed. If you need to skip an entire script, it might be easier to use skip_all, something like this (untested code):
        BEGIN {
            use Test::More;
            eval 'use Struct::Compare';
            Test::More->builder->skip_all("need Struct::Compare") if $@;
        };

        # rest of test script here
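      For what it's worth, Test::More also exposes this through plan, without reaching into the builder directly; a minimal sketch, again assuming Struct::Compare is the optional module:

      ```perl
      use Test::More;

      # If the optional module is missing, skip the entire script.
      # plan( skip_all => ... ) prints the reason and exits cleanly.
      eval { require Struct::Compare };
      plan skip_all => "Struct::Compare required for these tests" if $@;

      plan tests => 1;
      ok( 1 );
      ```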

    As to which is better... harder call. My personal tendency would be to do (1) if the tests were about vital functionality, and (3) if it related to things like checking documentation.

    For example, in Test::Class I require Test::Differences and a few other modules just for testing in the Makefile.PL. However, the tests I use for documentation are skipped if the relevant modules (Pod::Coverage, Pod::Checker and IO::String) are not available.

    Also - you might want to look at Test::Deep if you're testing complicated structures. Might make your job easier.
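    If you do go the Test::Deep route, the comparison looks something like this (a sketch assuming Test::Deep is installed, with made-up data):

    ```perl
    use Test::More tests => 1;
    use Test::Deep;

    # two nested structures to compare (hypothetical data)
    my $got      = { name => 'foo', items => [ 1, 2, 3 ] };
    my $expected = { name => 'foo', items => [ 1, 2, 3 ] };

    # cmp_deeply recurses through nested hashes and arrays
    cmp_deeply( $got, $expected, "structures match" );
    ```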

Approved by Mr. Muskrat
Front-paged by broquaint