PerlMonks  

Re^4: [RFC] Module code and POD for CPAN - Testing and test file

by Bod (Parson)
on Apr 15, 2021 at 17:23 UTC ( [id://11131338] )


in reply to Re^3: [RFC] Module code and POD for CPAN
in thread [RFC] Module code and POD for CPAN

The documentation has now been updated to take into account all the feedback people have kindly provided.

I've gone through the documentation and written several tests for each method. Some time ago I read an article about writing tests which I cannot find now, but the one thing that stood out from it was to start writing tests based on the documentation, so that is essentially what I have done. However, I cannot see a way to test getting an intent, as that requires a valid API key - something that will not be available to the installation scripts. So I have called the method and checked that the success method returns false. Is there a better way to handle the lack of an API key?

This has generated some more questions:

I have created 30 tests, but gmake test says there are 31. Does diag count as a test, or does something else account for the discrepancy?

If plan tests => 31 is included, I get the error Parse errors: Plan (1..31) must be at the beginning or end of the TAP output - even though plan tests => 31 was autogenerated. I have to write use Test::More tests => 31; instead to get rid of the error. I cannot find any explanation of the error, so what does it actually mean?

Are there any other glaringly obvious tests that should be included, but that have been left out?

00-load.t

#!perl
use 5.006;
use strict;
use warnings;
use Test::More tests => 31;

#my $test_count = 31;
#plan tests => $test_count;

BEGIN {
    use_ok( 'Business::Stripe::WebCheckout' ) || print "Bail out!\n";
}

diag( "Testing Business::Stripe::WebCheckout $Business::Stripe::WebCheckout::VERSION, Perl $], $^X" );

my $stripe = Business::Stripe::WebCheckout->new(
    'api-public'  => 'pk_test_00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000',
    'api-secret'  => 'sk_test_00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000',
    'success-url' => 'https://www.example.com/yippee.html',
    'cancel-url'  => 'https://www.example.com/ohdear.html',
);

ok( $stripe->isa( 'Business::Stripe::WebCheckout' ), 'Instantiation' );
ok( $stripe->success,                      'Successful object creation' );
ok( scalar( $stripe->list_products ) == 0, 'Empty Trolley' );

$stripe->add_product(
    'id'          => 'A',
    'name'        => 'One',
    'description' => 'Test One',
    'qty'         => 1,
    'price'       => 100,
);

ok( scalar( $stripe->list_products ) == 1, 'First product added to Trolley' );
ok( $stripe->get_product(($stripe->list_products)[0])->{'id'}          eq 'A',        'Correct Product A ID' );
ok( $stripe->get_product(($stripe->list_products)[0])->{'name'}        eq 'One',      'Correct Product A Name' );
ok( $stripe->get_product(($stripe->list_products)[0])->{'description'} eq 'Test One', 'Correct Product A Description' );
ok( $stripe->get_product(($stripe->list_products)[0])->{'qty'}         eq '1',        'Correct Product A Quantity' );
ok( $stripe->get_product(($stripe->list_products)[0])->{'price'}       eq '100',      'Correct Product A Price' );

$stripe->add_product(
    'id'          => 'B',
    'name'        => 'Two',
    'description' => 'Test Two',
    'qty'         => 2,
    'price'       => 200,
);

ok( scalar( $stripe->list_products ) == 2, 'Second product added to Trolley' );
ok( $stripe->get_product(($stripe->list_products)[1])->{'id'}          eq 'B',        'Correct Product B ID' );
ok( $stripe->get_product(($stripe->list_products)[1])->{'name'}        eq 'Two',      'Correct Product B Name' );
ok( $stripe->get_product(($stripe->list_products)[1])->{'description'} eq 'Test Two', 'Correct Product B Description' );
ok( $stripe->get_product(($stripe->list_products)[1])->{'qty'}         eq '2',        'Correct Product B Quantity' );
ok( $stripe->get_product(($stripe->list_products)[1])->{'price'}       eq '200',      'Correct Product B Price' );

$stripe->add_product(
    'id'          => 'C',
    'name'        => 'Three',
    'description' => 'Test Three',
    'qty'         => 3,
    'price'       => 300,
);

ok( scalar( $stripe->list_products ) == 3, 'Third product added to Trolley' );
ok( $stripe->get_product(($stripe->list_products)[2])->{'id'}          eq 'C',          'Correct Product C ID' );
ok( $stripe->get_product(($stripe->list_products)[2])->{'name'}        eq 'Three',      'Correct Product C Name' );
ok( $stripe->get_product(($stripe->list_products)[2])->{'description'} eq 'Test Three', 'Correct Product C Description' );
ok( $stripe->get_product(($stripe->list_products)[2])->{'qty'}         eq '3',          'Correct Product C Quantity' );
ok( $stripe->get_product(($stripe->list_products)[2])->{'price'}       eq '300',        'Correct Product C Price' );

$stripe->delete_product('B');
ok( $stripe->success,                      'Product removed from Trolley' );
ok( scalar( $stripe->list_products ) == 2, 'Product count correct after removal' );

$stripe->delete_product('B');
ok( !$stripe->success,                     'Cannot remove product from Trolley that isn\'t there' );

$stripe->delete_product('A');
ok( $stripe->success,                      'Another product removed from Trolley' );
ok( scalar( $stripe->list_products ) == 1, 'Product count again correct after removal' );

my $intent = $stripe->get_intent;
ok( !$stripe->success, 'Failed to get intent as invalid key' );

my $intent_id = $stripe->get_intent_id;
ok( !$stripe->success, 'Failed to get intent_id as invalid key' );

my $ids = $stripe->get_ids;
ok( !$stripe->success, 'Failed to get ids as invalid key' );

my $checkout = $stripe->checkout;
ok( !$stripe->success, 'Failed to generate checkout HTML as invalid key' );

# done_testing($test_count);

Also, just to note in case anyone else finds this in future: the environment variable (on Strawberry Perl at least) is RELEASE_TESTING, not AUTHOR_TESTING.

Replies are listed 'Best First'.
Re^5: [RFC] Module code and POD for CPAN - Testing env vars
by hippo (Bishop) on Apr 16, 2021 at 08:51 UTC
    the environment variable (on Strawberry Perl at least) is RELEASE_TESTING and not AUTHOR_TESTING

    These are both useful environment variables and they have subtly different uses.

    AUTHOR_TESTING should be set only by the author (or maintainer). There are any number of reasons to run author-only tests; these might include tests which run against some sample data set or server application to which only the author has access. They can also be used for simple tests which users don't care about, such as POD coverage and POD spell checks.

    RELEASE_TESTING is for tests run just prior to release, usually by the author again, but these may be the sorts of things that even the author doesn't care about when hacking on the actual code. It is handy for meta-data tests: did you remember to bump the version number, is the Kwalitee OK, and so on.

    There is also AUTOMATED_TESTING which allows skipping of anything requiring user interaction - handy for CI and the smoke testers.

    These three along with NONINTERACTIVE_TESTING and EXTENDED_TESTING are explained in the Lancaster Consensus. It is well worth a read through.

    In your case here there is also the important NO_NETWORK_TESTING which, if set, should prohibit anything relying on internet access such as real communications with the Stripe servers.
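    In practice that usually means a guard at the top of any network-dependent test file. A minimal sketch (the module-specific tests are only indicated by a comment):

```perl
use strict;
use warnings;
use Test::More;

# Honour the Lancaster Consensus variable before any live tests run.
plan skip_all => 'NO_NETWORK_TESTING is set'
    if $ENV{NO_NETWORK_TESTING};

# ... tests that talk to the Stripe servers go here ...

done_testing();
```

    With skip_all the file emits a valid "1..0 # SKIP" plan and the harness counts it as passed rather than failed.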


    🦛

Re^5: [RFC] Module code and POD for CPAN - Testing and test file
by choroba (Cardinal) on Apr 16, 2021 at 06:55 UTC
    > Is there a better way to handle the lack of API key?

    Coincidentally, I blogged about mocking when I was improving the tests of Net::Stripe.

    map{substr$_->[0],$_->[1]||0,1}[\*||{},3],[[]],[ref qr-1,-,-1],[{}],[sub{}^*ARGV,3]

      What a nice solution...thanks for sharing.

      For my module I don't have to handle state changes at the Stripe end, unless I implement shipping plans, which is possible but not a high priority. Therefore, I've set up a script on my server that can provide either a valid session object or a Stripe-style error response, so I can test both scenarios. I've only implemented the success test so far, but the fail test will follow. There is a new (undocumented) url parameter that the tests use to access the test server.

      Being done over HTTP to a live server means that testing can happen without having to install Test::LWP::UserAgent on the target machine. I just hope that 20 billion people don't decide to install the module simultaneously!!!

      The tests are now split across four files:

      00-load.t 01-trolley.t 02-stripe.t 03-stripe-live.t
      02-stripe.t tests against the Stripe server and, of course, fails; 03-stripe-live.t tests against my server, so it should be able to get a live connection.

      Update:

      Turns out it is not that simple! As always...

      A check was added that an HTTP request gets a 200 status code; the live-server tests are skipped if not. The network could be down at install time, or the module could be installed on a machine with no network connection - little point, but folk do strange things!
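      One way to express that check is a Test::More SKIP block around the live-server tests. A sketch, with an illustrative probe URL and placeholder tests standing in for the real ones:

```perl
use strict;
use warnings;
use Test::More;
use LWP::UserAgent;

my $ua    = LWP::UserAgent->new( timeout => 10 );
my $probe = $ua->get('https://www.example.com/');   # illustrative probe URL

SKIP: {
    # is_success covers any 2xx status, including 200.
    skip 'Test server not reachable', 2 unless $probe->is_success;

    ok 1, 'placeholder live-server test 1';
    ok 1, 'placeholder live-server test 2';
}

done_testing();
```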

      I was finding that more often than not the tests were being skipped. Turns out something in the request is tripping the server monitoring systems and blocking my IP address for anything from a couple of minutes to half an hour. So the live tests have been disabled in the test file until I can find what is causing this problem.

      The obvious things have been tried: removing the Authorization header from the test request, and ensuring the POST command doesn't have an empty payload. Time to scratch my head and search for inspiration...

Re^5: [RFC] Module code and POD for CPAN - Testing and test file
by hippo (Bishop) on Apr 15, 2021 at 22:02 UTC
    There are 30 tests I have created but gmake test says there are 31. Does diag count as a test or does something else account for the discrepancy?

    No, diag is not a test, but use_ok is - were you including that in your count?

    If plan tests => 31 is included I get the error Parse errors: Plan (1..31) must be at the beginning or end of the TAP output. However, plan tests => 31 was autogenerated. I have to use Test::More tests => 31; to get rid of the error. I cannot find any explanation of the error so what does it actually mean?

    Again, this is down to use_ok but this time it is because the use_ok is in the BEGIN block and therefore happens before the plan.
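    To make the orderings concrete, here is a minimal sketch of why the compile-time form works and the run-time form triggers the parse error (the broken variant is commented out since a file can only plan once):

```perl
# Works: 'use Test::More tests => 31;' emits the "1..31" plan line at
# compile time, before the BEGIN block below prints its "ok 1" line.
use Test::More tests => 31;
BEGIN { use_ok('Business::Stripe::WebCheckout') }

# Broken: use_ok runs at compile time and prints "ok 1" first, then
# plan() emits "1..31" afterwards - mid-stream, which TAP rejects.
# use Test::More;
# BEGIN { use_ok('Business::Stripe::WebCheckout') }
# plan tests => 31;
```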

    You also have lots of tests like this:

    ok( scalar( $stripe->list_products ) == 3, 'Third product added to Trolley' );

    But I would write this instead as:

    is scalar( $stripe->list_products ), 3, 'Third product added to Trolley';

    as that will give you better diagnostics if it fails. The ok test should really be used sparingly as all it can test is the truth of its argument. Much better to have something either quantitative or qualitative.
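    The difference shows up in the failure diagnostics. A self-contained sketch using a plain array in place of the trolley (the failure output in the comments is approximate, and both tests deliberately fail):

```perl
use strict;
use warnings;
use Test::More;

my @products = ('A', 'C');    # illustrative: one product was removed

# ok() can only report that the comparison was false:
#   not ok 1 - Third product added to Trolley
ok( scalar(@products) == 3, 'Third product added to Trolley' );

# is() additionally reports got/expected, e.g.:
#   not ok 2 - Third product added to Trolley
#   #          got: '2'
#   #     expected: '3'
is scalar(@products), 3, 'Third product added to Trolley';

done_testing();
```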

    Also, I wouldn't necessarily put all these in 00-load.t as very few of them are to do with loading. You can have as many separate test scripts as you want and by making each one topic-sensitive you can make each one reasonably self-contained and hopefully more manageable.

    Are there any other glaringly obvious tests that should be included, but that have been left out?

    Too hard to tell without going through your code in detail, but that's why we have Devel::Cover. This will show what you are (and more importantly are not) testing.

    However, I cannot see a way to test getting an intent as that requires a valid API key. Something that will not be available to the installation scripts. So I have called the method and checked that the success method returns false. Is there a better way to handle the lack of API key?

    You have two options here. Firstly, you can make the key available to the installation scripts via the environment. This is good because it really tests the interaction with the remote service, but may be bad (for some services) because it may require an actual transaction to occur to test your code. Hopefully any remote service you have to deal with will have a testbed.

    The other option is mocking. Conversely to the previous option, this is good because it does not require a valid key nor an actual interaction with the remote service but is bad (for you as maintainer) because you will have to work to keep it in step with any interface changes made by the remote service provider. I rather like Test::MockModule for this but there are plenty of other options on CPAN to help you with it.
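    As a sketch of the mocking approach with Test::MockModule (the canned intent hashref and dummy keys are invented for illustration; note that redefine dies unless the method already exists in the module):

```perl
use strict;
use warnings;
use Test::More;
use Test::MockModule;
use Business::Stripe::WebCheckout;

# Replace get_intent with a stub for as long as $mock stays in scope.
my $mock = Test::MockModule->new('Business::Stripe::WebCheckout');
$mock->redefine('get_intent', sub {
    return { id => 'pi_dummy' };    # canned, Stripe-style response
});

my $stripe = Business::Stripe::WebCheckout->new(
    'api-public'  => 'pk_test_dummy',    # illustrative dummy keys
    'api-secret'  => 'sk_test_dummy',
    'success-url' => 'https://www.example.com/yippee.html',
    'cancel-url'  => 'https://www.example.com/ohdear.html',
);

is $stripe->get_intent->{'id'}, 'pi_dummy',
    'get_intent returns the canned intent without touching Stripe';

done_testing();
```

    When $mock goes out of scope the original method is restored, so the mock cannot leak into other test files.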


    🦛

      No, diag is not a test, but use_ok is - were you including that in your count?

      That explains my inability to count!

      Devel::Cover is installing as I type :)

      Also, I wouldn't necessarily put all these in 00-load.t as very few of them are to do with loading.

      I shall go and start splitting up the tests...
