There are 30 tests I have created but gmake test says there are 31. Does diag count as a test or does something else account for the discrepancy?
No, diag is not a test, but use_ok is - were you including that in your count?
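For instance, in a minimal sketch like this (using the core List::Util module purely for illustration), the use_ok contributes one test to the count:

```perl
use strict;
use warnings;
use Test::More tests => 2;    # use_ok is test 1, the is() below is test 2

# use_ok both loads the module and emits one TAP test result.
BEGIN { use_ok('List::Util') }

is( List::Util::sum( 1, 2 ), 3, 'sum() works after loading' );
```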
If plan tests => 31 is included I get the error Parse errors: Plan (1..31) must be at the beginning or end of the TAP output. However, plan tests => 31 was autogenerated, and I have to use Test::More tests => 31; to get rid of the error. I cannot find any explanation of the error, so what does it actually mean?
Again, this is down to use_ok, but this time it is because the use_ok call sits in a BEGIN block and therefore runs before the plan is printed. TAP requires the plan line (1..31) to appear either before all of the test results or after all of them; a BEGIN-time use_ok emits a test result first, leaving the autogenerated plan stranded in the middle of the output. Declaring the plan with use Test::More tests => 31; cures it because use statements also execute at compile time, in order, so the plan is printed before your BEGIN block runs.
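A minimal sketch of the correct ordering (Scalar::Util stands in for your own module here):

```perl
use strict;
use warnings;

# The plan is emitted when Test::More is imported, at compile time ...
use Test::More tests => 1;

# ... so by the time this BEGIN block runs, "1..1" is already out.
BEGIN { use_ok('Scalar::Util') }
```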
You also have lots of tests like this:
ok( scalar( $stripe->list_products ) == 3, 'Third product added to Trolley' );
But I would write this instead as:
is scalar( $stripe->list_products ), 3, 'Third product added to Trolley';
as that will give you better diagnostics if it fails: is reports both the value you got and the value you expected, whereas ok can only tell you that its argument was false. The ok test should really be used sparingly, as all it can test is the truth of its argument. Much better to have something either quantitative or qualitative.
Also, I wouldn't necessarily put all these in 00-load.t as very few of them are to do with loading. You can have as many separate test scripts as you want and by making each one topic-sensitive you can make each one reasonably self-contained and hopefully more manageable.
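As a purely illustrative layout (these file names are hypothetical; pick names that match your own topics):

```
t/00-load.t     # use_ok checks only
t/products.t    # trolley/product-listing behaviour
t/intent.t      # payment-intent handling
```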
Are there any other glaringly obvious tests that should be included, but that have been left out?
Too hard to tell without going through your code in detail, but that's why we have Devel::Cover. This will show what you are (and more importantly are not) testing.
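For a module with a normal Makefile.PL-based layout, the cover command that ships with Devel::Cover can drive the whole thing; a typical invocation looks like:

```shell
cover -delete          # clear any previous coverage database
cover -test            # run the test suite under Devel::Cover
cover -report html     # write an HTML report into cover_db/
```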
However, I cannot see a way to test getting an intent, as that requires a valid API key, something that will not be available to the installation scripts. So I have called the method and checked that the success method returns false. Is there a better way to handle the lack of an API key?
You have two options here. Firstly, you can make the key available to the installation scripts via the environment. This is good because it really tests the interaction with the remote service, but may be bad (for some services) because it may require an actual transaction to occur to test your code. Hopefully any remote service you have to deal with will have a testbed.
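A minimal sketch of the environment approach, using a SKIP block so the suite still passes cleanly when the key is absent (STRIPE_API_KEY is a hypothetical variable name; use whatever your CI actually exports):

```perl
use strict;
use warnings;
use Test::More;

# Hypothetical variable name -- substitute whatever your CI exports.
my $key = $ENV{STRIPE_API_KEY};

SKIP: {
    skip 'STRIPE_API_KEY not set; skipping live API tests', 1
        unless defined $key && length $key;

    # Construct the real client with $key and exercise the service here.
    ok( length $key, 'API key available from the environment' );
}

done_testing();
```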
The other option is mocking. In contrast to the previous option, this is good because it requires neither a valid key nor an actual interaction with the remote service, but it is bad (for you as maintainer) because you will have to work to keep the mock in step with any interface changes made by the remote service provider. I rather like Test::MockModule for this, but there are plenty of other options on CPAN to help you with it.
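Test::MockModule packages this pattern up conveniently; the underlying idea, sketched here with only core modules and a hypothetical My::Stripe class, is to replace the method that needs the key with a canned response:

```perl
use strict;
use warnings;
use Test::More tests => 2;

# Hypothetical stand-in for the real client class.
{
    package My::Stripe;
    sub new        { bless {}, shift }
    sub get_intent { die 'would need a live API key' }
}

{
    # Temporarily replace get_intent with a canned response; with
    # Test::MockModule this would be $mock->mock( get_intent => sub {...} ).
    no warnings 'redefine';
    local *My::Stripe::get_intent = sub { { success => 1, id => 'pi_test' } };

    my $stripe = My::Stripe->new;
    my $intent = $stripe->get_intent;

    ok( $intent->{success},       'mocked call reports success' );
    is( $intent->{id}, 'pi_test', 'mocked call returns the canned id' );
}
```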