http://qs321.pair.com?node_id=200767

As some of you may know, I've recently been exposed to the joys of testing. What you may not know are the experiences I've had and what finally converted me from a supporter of testing into someone who actively enjoys it.

Recently, a client asked us to make some changes to a Web site that required me to redesign part of the database. Part of my proposal was to spend time developing a test suite to cover those areas that would be affected. I started out by writing a few small tests for a security object that I created.

    #!/usr/bin/perl -w
    use strict;

    use Test::More 'no_plan';
    use Test::MockObject;

    use constant MODULE => 'XXXXXX::Security';

    my $mock = Test::MockObject->new();

    use_ok( MODULE );
    can_ok( MODULE, 'new' );

    $mock->add( cookie => sub {} );
    my $sec = XXXXXX::Security->new;
    isa_ok( $sec, MODULE );

    # testing null information
    my @results = $sec->validate( $mock );
    ok( ! @results, 'No cookie, user, or pass should fail to validate' );

That validate() method was the subject of my fourth test. As it turns out, I was getting data back when I wasn't expecting any. Oddly, this code has been running in production for well over a year with only one known bug, and that was an intermittent one we could never track down. Since all the bug did was force people to log back in, and it did not happen often enough to irritate our clients, it was given a low priority. Worse, since I could never replicate it, I couldn't find it.

The fourth test that I wrote, after spending a whopping 5 minutes on this, found the bug. I was astonished. I deliberately chose the most stable module and instantly found a bug that I had never been able to track down. Hooray for testing!

As things went further, I tested the following function:

    sub add_product {
        my ( $self, $data ) = @_;
        my $categoryID = $data->{categoryID};
        delete $data->{categoryID};

        $self->_generic_insert( $data, 'products', DONT_COMMIT );
        return if $self->error;

        my %category = (
            productID  => $self->_get_identity,
            categoryID => $categoryID,
        );
        $self->_generic_insert( \%category, 'category_product', COMMIT );
    }

The code worked perfectly and didn't require any changes. However, as I added to my test suite, that code started failing for no apparent reason. As it turns out, because the Web is essentially stateless and we were running this under ISAPI, we would typically have only one method call of this type per script invocation. When I tried to run several of these methods sequentially, an error from a previous method call that shouldn't have affected the current one caused it to fail -- because I had never bothered to clear the error!
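A minimal sketch of the shape of the fix, with invented names (the real class isn't shown here): reset the error state at the top of each public method, so a failure in one call cannot leak into the next.

    package Hypothetical::Store;    # stand-in name for illustration only
    use strict;

    sub new { bless { error => undef }, shift }

    sub error       { $_[0]->{error} }
    sub _set_error  { $_[0]->{error} = $_[1] }
    sub clear_error { $_[0]->{error} = undef }

    sub add_product {
        my ( $self, $data ) = @_;
        $self->clear_error;    # the missing step: without it, a stale error
                               # from an earlier call trips the check below
                               # even when everything in this call succeeded

        # ... the _generic_insert() calls would go here, calling
        # _set_error() on failure ...
        return if $self->error;
        return 1;
    }

    1;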

We hope to eventually port this to mod_perl, but if I had tried to run this code under mod_perl with persistent objects, it would have caused massive failures. Hooray for testing again!

If this wasn't enough humiliation for me (I wrote much of this code), I discovered that tests for similar functions were often significantly different in structure -- I had inconsistent APIs. Had I known about testing when I wrote this and started with "test first" methodology, this never would have happened.

I now enjoy testing. Being able to say "my tests pass", as opposed to "gosh, I think it works", is a great feeling. I'm finding bugs I never knew were there, discovering API issues that I had never known about, and when code is hard to test, I realize that I have a design problem. Pure pleasure :)

Cheers,
Ovid

Update: Edited out an essentially duplicate closing paragraph that blyman pointed out. Thanks blyman :)

Join the Perlmonks Setiathome Group or just click on the link and check out our stats.

Re: Further adventures with testing
by trs80 (Priest) on Sep 26, 2002 at 02:06 UTC
    A few months back I bought the book "The Pragmatic Programmer" and read it from front to back, and felt it a great addition to my book collection. I have applied many of the things they talk about and have found that my performance is improving.

    Last October I started down the road of OO everything; some of you can hopefully relate. All this good OO thinking had "improved" my line count, abstraction (to some extent), and code reuse, but I was just not getting where I thought I should be in the volume of code produced. In fact, I felt it was more difficult to write my (at the time) throwaway "test" scripts.

    Last month I started using Test::More with more seriousness. I adopted the "Don't test the big end, test the small end" mentality: test the smallest amount of code you can, then work your way up. This helped me find ways to split different sections of my existing modules into much simpler methods that performed part of, rather than all of, the work.
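    To make that concrete, here is a tiny invented illustration of testing the small end first; the functions are made up for this example only:

        #!/usr/bin/perl -w
        use strict;
        use Test::More tests => 2;

        # Two small functions standing in for the pieces a larger
        # method was split into.
        sub trim     { my $s = shift; $s =~ s/^\s+|\s+$//g; return $s }
        sub is_valid { my $s = shift; return $s =~ /^\w+$/ ? 1 : 0 }

        is( trim('  Foo  '), 'Foo', 'trim() strips surrounding whitespace' );
        ok( is_valid('Foo'),        'is_valid() accepts a plain word' );

    Once the small ends pass, the tests for the method that calls them have far less to prove.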

    Now, I don't have anyone but myself to answer to for the appearance or style of my code, but I think my code should improve even if the clients will never look at or understand what I have done. So these improvements (or perceived improvements; time will tell) are all self-driven and constructed from various points of exposure. While I didn't reference any material directly while writing this, I can't say that I didn't paraphrase.

    Testing Code

    I had thought for some time that writing test code was a difficult, annoying task (lack of information; I had never really tried to write real tests before) that was only helpful if you had 10_000+ lines of code to write. How wrong I was. I took some time and invested in getting past the learning curve on some of the various Test modules that are available, and this is what I found:

    • They ARE easy to use once you try them.
    • They DO increase your productivity, or at least get you to more accurate results quicker than without them.
    • They HELP you find bugs that you might otherwise have missed.


    I settled on Test::More for most of my testing scripts. With most of the Test modules it is important to remember one key thing: True or False.

    That's it! That's all you are testing. Is it, or isn't it? Once I saw that this was all I was really doing, I started to like writing tests. There is more you can apply to the mentality and practice of testing, but if you are just starting out, keep it simple.

    Here is a simple testing script that runs 6 tests and fails two of them:
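    The script itself sat behind the readmore tag and was not reproduced here; below is a minimal stand-in, with invented names, that likewise runs 6 tests and fails two of them.

        #!/usr/bin/perl -w
        use strict;
        use Test::More tests => 6;

        # the "code under test": one trivial function
        sub add { my ( $x, $y ) = @_; return $x + $y }

        is( add( 2, 2 ),  4, 'add() handles small integers' );      # passes
        is( add( -1, 1 ), 0, 'add() handles a negative number' );   # passes
        ok( add( 0, 0 ) == 0, 'add() handles zeroes' );             # passes
        is( add( 2, 2 ),  5, 'a deliberately wrong expectation' );  # FAILS
        like( 'Just another Perl hacker', qr/Perl/,
              'regex matches work too' );                           # passes
        is( lc('PERL'), 'Perl', 'lc() does not capitalize' );       # FAILS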



    UPDATE: Missed some semicolons in my code blocks -- maybe I should have tested it, how ironic :) -- and added a readmore tag.
Re: Further adventures with testing
by Zaxo (Archbishop) on Sep 26, 2002 at 01:49 UTC

    I recently had a similar revelation with new development of a module. It was my first venture into XS and I was a little at sea. It wasn't a complicated thing, but I got lost.

    To start fresh, I began by writing the pod. That cleared up what I wanted to do, and gave me the API to work for. Then I wrote tests for that API, and for all the loading, constructor and destructor bits. Thirty tests covered it all.
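    A sketch of what the loading, constructor, and destructor checks in such a suite might look like (the module name and plan are invented; the actual thirty tests are not shown in this reply):

        #!/usr/bin/perl -w
        use strict;
        use Test::More tests => 4;

        # hypothetical module name
        BEGIN { use_ok( 'My::XSModule' ) }   # the module loads

        can_ok( 'My::XSModule', 'new' );     # the documented constructor exists

        my $obj = My::XSModule->new;
        isa_ok( $obj, 'My::XSModule' );      # and returns the right kind of object

        undef $obj;                          # run the (XS) destructor now
        pass( 'object destroyed without incident' );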

    With that new focus, it was nearly trivial to finish. I'd picked up enough XS in floundering around to fix up the typemaps so they only covered things I needed. With the API clear in my head, I was able to write XS to do those things.

    perl Makefile.PL... make... make test... make clean... repair... perl Makefile.PL... make... make test... Done.

    Magical.

    After Compline,
    Zaxo

Re: Further adventures with testing
by jepri (Parson) on Sep 26, 2002 at 05:18 UTC
    I was taught to do testing the classical way: a fileserver I wrote went wrong and deleted some files that I really wanted to keep. I spent the next week writing tests, and now I have a fairly nice WebDAV regression suite. And like you mentioned, I found a whole raft of odd bugs that I hadn't been able to track down before.

    Now I have the confidence that my code (probably) won't blow up and delete someone's work.

    ____________________
    Jeremy
    I didn't believe in evil until I dated it.

Testing, schmesting.
by belden (Friar) on Sep 26, 2002 at 19:44 UTC
    Bah, testing. Who needs it? The list of reasons not to test is weighty:
    1. Testing takes time. I'm already working on tight deadlines. You want me to take extra time to test that each and every permutation of every single-dingle subroutine or method works the way I expect it to? fsck that, I'd rather move on to my next project. Let the QA department do their own durn work.
    2. Testing is different. I've never had to do it in the past. Sure, I've run across some really head-scratching bugs in the past that took me an hour or a day or a week to track down -- but I'd rather fix bugs than slow down production.
    3. Testing is optional. "Optional", of course, means "not required": my managers haven't told me that it's "test your code or hit the road". If they do, I'll probably get another job: who are they to question my ability to do my job correctly the first or second (or, okay, there was that one time, seventh) time around?
    4. Testing is counter-cultural. The other programmers aren't doing it; why should I?

    Now, before reading such Meditations as this one, or chromatic's testing articles on perl.com, I *seriously* believed the above. (Well, partially believed some of it, anyway.)

    If you learn quickly, you only need to make one mistake before taking up testing as part of your development process. I learn pretty slowly, though: it's taken me many mistakes, and even more meditations and articles than I care to link to. What changed my mind? I can't say. Experience, to be sure. However, experience just showed me that I needed to do something different -- not what I should be doing differently, or how.

    Monks - if you found yourself agreeing with my list above - re-read Ovid's node. Re-read Zaxo's reply. Dissect trs80's sample testing code. Super Search for nodes that talk about testing. You can afford to be picky in authorship: everyone who's anyone has written about the benefits of testing somewhere within the Monastery.

    And give testing a shot. Through the Monastery I've become aware of the importance of good coding practices: use strict; use warnings; profile before optimizing; seek thee a better algorithm; test, test, and test again.

    blyman
    setenv EXINIT 'set noai ts=2'

Re: Further adventures with testing
by jordanh (Chaplain) on Sep 27, 2002 at 11:24 UTC

    I want to be on record as saying that I agree with everything that's being said about testing. Testing more, especially if the tests are systematized so that they can be used productively in the future, is almost always a good thing. I do want to point out one thing, however.

      I now enjoy testing. Being able to say "my tests pass", as opposed to "gosh, I think it works", is a great feeling.

    I've met with groups who have tested extensively and then interfaced with a system I support. Something didn't go right, and rather than trying to help find the problem, their attitude was "We tested extensively; it broke long after installation; what did YOU change?" These groups had no visibility into how, or how much, we tested our part; they were simply made arrogant by the assumption that their testing was exhaustive.

    In two cases of this I had recently, the problems were from things that their testing didn't encompass. One was a subtle timing problem, the other was an input that was very large.

    The humble tester must recall Dijkstra's words:

    Today a usual technique is to make a program and then to test it. But: program testing can be a very effective way to show the presence of bugs, but it is hopelessly inadequate for showing their absence.

    The competent programmer is fully aware of the strictly limited size of his own skull; therefore he approaches the programming task in full humility, and among other things he avoids clever tricks like the plague.

    --The Humble Programmer

    The programmer must be aware of the limits of their testing and not allow their adherence to good practice to give them too much confidence. The attitude should be: "Gosh, I think it works, and I'm more sure because it passed my tests."

    Update: 09/27/2002 17:02 EDT: Noticed that my wording was awkward. I referred to "In both cases" above without properly introducing this. I changed the wording to "In two cases of this I had recently...".

Re: Further adventures with testing
by jplindstrom (Monsignor) on Sep 26, 2002 at 18:26 UTC
      Further, I've discovered that code that is hard to test is likely to be poorly designed.

    This is my experience as well. Not only that: when you prepare for testing, you improve the code. You refactor the code into something better at the easiest time: _when coding it_ in the first place, when you have everything 100% fresh in your mind.

    So, to reiterate (once again) why it's a good thing(tm):

    • Better code structure
    • Fewer bugs (because you test boundary conditions and challenge assumptions)
    • Confidence

    Personally I sometimes find it _easier_ to write-tests-as-you-go, because the code is actually run a little at a time as it grows (as opposed to when the module is finished and in the complex context of the entire program).

    /J