Re^5: [RFC] Module code and POD for CPAN - Testing and test file

by hippo (Bishop)
on Apr 15, 2021 at 22:02 UTC


in reply to Re^4: [RFC] Module code and POD for CPAN - Testing and test file
in thread [RFC] Module code and POD for CPAN

There are 30 tests I have created but gmake test says there are 31. Does diag count as a test or does something else account for the discrepancy?

No, diag is not a test, but use_ok is - were you including that in your count?

If plan tests => 31 is included I get the error Parse errors: Plan (1..31) must be at the beginning or end of the TAP output. However, plan tests => 31 was autogenerated. I have to use Test::More tests => 31; to get rid of the error. I cannot find any explanation of the error so what does it actually mean?

Again, this is down to use_ok, but this time it is because the use_ok is inside the BEGIN block and therefore runs before plan is called, so the plan line ends up in the middle of the TAP output rather than at the start.
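The usual fix is to declare the plan on the use line so that it is emitted at compile time, before the BEGIN block runs use_ok. A minimal sketch (the module name is just a placeholder):

use strict;
use warnings;
use Test::More tests => 31;   # plan printed here, before the BEGIN block below

BEGIN {
    use_ok('My::Module');     # this counts as test #1
}

# ... the remaining 30 tests ...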

You also have lots of tests like this:

ok( scalar( $stripe->list_products ) == 3, 'Third product added to Trolley' );

But I would write this instead as:

is scalar( $stripe->list_products ), 3, 'Third product added to Trolley';

as that will give you better diagnostics if it fails. The ok test should really be used sparingly as all it can test is the truth of its argument. Much better to use a comparison that reports something quantitative or qualitative, such as is or like.
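For comparison, when the is form fails it prints something like this (file and line are illustrative), whereas the ok form reports only the first two lines:

#   Failed test 'Third product added to Trolley'
#   at t/products.t line 12.
#          got: '2'
#     expected: '3'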

Also, I wouldn't necessarily put all these in 00-load.t as very few of them have anything to do with loading. You can have as many separate test scripts as you want, and by making each one topic-specific you can keep each one reasonably self-contained and hopefully more manageable.
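For example (the file names here are only suggestions), something along these lines keeps each concern in its own script:

t/00-load.t      # use_ok / compile checks only
t/products.t     # trolley and product-listing behaviour
t/intent.t       # payment-intent handling (mocked or skipped, see below)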

Are there any other glaringly obvious tests that should be included, but that have been left out?

Too hard to tell without going through your code in detail, but that's why we have Devel::Cover. This will show what you are (and more importantly are not) testing.
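Once Devel::Cover is installed, one common invocation against a MakeMaker-based distribution is the following (the exact steps may vary for your setup); the HTML report ends up under cover_db/:

cover -delete     # clear any old coverage data
cover -test       # run the test suite under Devel::Cover, then report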

However, I cannot see a way to test getting an intent as that requires a valid API key. Something that will not be available to the installation scripts. So I have called the method and checked that the success method returns false. Is there a better way to handle the lack of API key?

You have two options here. First, you can make the key available to the installation scripts via the environment. This is good because it really tests the interaction with the remote service, but it may be bad (for some services) because it may require an actual transaction to occur just to test your code. Hopefully any remote service you have to deal with will have a testbed.
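A sketch of that first option, assuming a hypothetical STRIPE_TEST_KEY environment variable - skip the whole script cleanly when no key is present, and run the live tests when it is:

use strict;
use warnings;
use Test::More;

plan skip_all => 'Set STRIPE_TEST_KEY to run the live API tests'
    unless $ENV{STRIPE_TEST_KEY};

# ... tests that talk to the real (test-mode) service go here ...

done_testing();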

The other option is mocking. In contrast to the previous option, this is good because it requires neither a valid key nor an actual interaction with the remote service, but it is bad (for you as maintainer) because you will have to work to keep it in step with any interface changes made by the remote service provider. I rather like Test::MockModule for this, but there are plenty of other options on CPAN to help you with it.
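A sketch of the mocking option - every name below is hypothetical, so substitute your real module and whichever internal method actually performs the HTTP request:

use strict;
use warnings;
use Test::More tests => 2;
use Test::MockModule;

my $mock = Test::MockModule->new('My::Stripe');
$mock->mock( _api_call => sub {
    # canned response instead of a real round trip to the service
    return { id => 'pi_dummy', status => 'requires_payment_method' };
} );

my $stripe = My::Stripe->new;        # no real API key needed now
my $intent = $stripe->get_intent;    # internally calls the mocked _api_call
ok $stripe->success, 'success flag set when the (mocked) call works';
is $intent->{id}, 'pi_dummy', 'intent data comes from the canned response';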


🦛

Re^6: [RFC] Module code and POD for CPAN - Testing and test file
by Bod (Parson) on Apr 16, 2021 at 14:26 UTC
    No, diag is not a test, but use_ok is - were you including that in your count?

    That explains my inability to count!

    Devel::Cover is installing as I type :)

    Also, I wouldn't necessarily put all these in 00-load.t as very few of them are to do with loading.

    I shall go and start splitting up the tests...
