Re^2: test clean up vs. test database

by perrin (Chancellor)
on Jul 18, 2007 at 16:01 UTC ( [id://627286] )


in reply to Re: test clean up vs. test database
in thread Test Technique: Self-removing test data

I haven't tried tearing down the database between scripts. I use the method you described. However, I think the only fixture data you would need is whatever you already have to create a new database (in my case, a script that drops and recreates the tables, and fills in lookup values). The test-specific data is added by the test in either approach.
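
A minimal sketch of that kind of setup script, with invented table names, lookup values, and connection details:

    #!/usr/bin/perl
    use strict;
    use warnings;
    use DBI;

    # Hypothetical test database -- adjust the DSN and credentials.
    my $dbh = DBI->connect( 'dbi:mysql:database=myapp_test', 'testuser', 'testpass',
        { RaiseError => 1, AutoCommit => 1 } );

    # Drop and recreate the tables.
    $dbh->do('DROP TABLE IF EXISTS users');
    $dbh->do(q{
        CREATE TABLE users (
            id     INT PRIMARY KEY AUTO_INCREMENT,
            name   VARCHAR(100) NOT NULL,
            status VARCHAR(20)  NOT NULL
        )
    });

    # Fill in the lookup values every test expects to find.
    $dbh->do('DROP TABLE IF EXISTS statuses');
    $dbh->do('CREATE TABLE statuses (code VARCHAR(20) PRIMARY KEY)');
    $dbh->do( 'INSERT INTO statuses (code) VALUES (?)', undef, $_ )
        for qw(active inactive pending);

    $dbh->disconnect;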

Even with an END block, it's possible for your test to die in a way where it won't be able to effectively remove the data you added. That's what makes this approach unsafe for use on a production database.
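
To make that failure mode concrete, here is a minimal sketch of the END-block style; the users table, column names, and connection details are assumptions, not code from the thread:

    use strict;
    use warnings;
    use DBI;
    use Test::More tests => 1;

    my $dbh = DBI->connect( 'dbi:mysql:database=myapp_test', 'testuser', 'testpass',
        { RaiseError => 1 } );

    my @added_ids;    # ids of rows this script created

    END {
        # Clean up whatever we managed to record.  If the script dies before
        # @added_ids is populated, or after a partial insert, rows can be
        # left behind -- which is why you wouldn't point this at production.
        if ( $dbh and @added_ids ) {
            $dbh->do( 'DELETE FROM users WHERE id = ?', undef, $_ ) for @added_ids;
        }
    }

    $dbh->do( 'INSERT INTO users (name, status) VALUES (?, ?)',
        undef, 'test user', 'active' );
    push @added_ids, $dbh->last_insert_id( undef, undef, 'users', 'id' );

    ok( @added_ids, 'created a test user' );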

It's still faster than dropping the database and recreating it, but as things got more complex, I spent a lot of time troubleshooting problems with deleting the test data, and I think it would have been wiser ultimately to trade a little test speed for the saved debugging time.

Another wrinkle is web testing with Mechanize on code that creates data. If your web tests cause data to be added to the database, it won't be in your stack to delete, so you end up with manual deletes, and END blocks that complain and crash if the script dies before the data was added. It gets messy.
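
A sketch of how that tends to look with Test::WWW::Mechanize; the URL, form fields, and delete-by-name cleanup are invented for illustration:

    use strict;
    use warnings;
    use DBI;
    use Test::More tests => 1;
    use Test::WWW::Mechanize;

    my $dbh  = DBI->connect( 'dbi:mysql:database=myapp_test', 'testuser', 'testpass',
        { RaiseError => 1 } );
    my $mech = Test::WWW::Mechanize->new;

    my $test_name = 'mech test user';

    END {
        # The row is created server-side, so we never see its id; fall back
        # to a manual delete by name, guarded so a die before the form was
        # submitted doesn't make the cleanup itself warn or crash.
        eval {
            $dbh->do( 'DELETE FROM users WHERE name = ?', undef, $test_name )
                if $dbh;
        };
    }

    # Submitting this form makes the application insert the row for us.
    $mech->get_ok('http://localhost/users/new');
    $mech->submit_form( with_fields => { name => $test_name, status => 'active' } );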

Replies are listed 'Best First'.
Re^3: test clean up vs. test database
by Ovid (Cardinal) on Jul 19, 2007 at 10:01 UTC

    We tear down and recreate the test database between different test programs. On the current test setup we're using, here's the timing information for our test suite:

        Files=38, Tests=1811, 106 wallclock secs (49.47 cusr +  8.39 csys = 57.86 CPU)

    While that's certainly not as fast as some folks would like, we tend to run individual test programs, and when we're done with a particular feature, bug fix, or whatever, we run the entire test suite. Previously we used other methods between test programs (rollback, mock database handles, manually deleting information, etc.), but we ran into so many weird edge cases that going ahead and dropping and recreating the test database was far and away a huge win for us.
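
    For anyone who hasn't seen it, the rollback style looks roughly like this minimal sketch (connection details and the users table are invented); it breaks down as soon as the code under test commits or opens its own handle:

        use strict;
        use warnings;
        use DBI;
        use Test::More tests => 1;

        # Run the whole test inside one transaction and never commit.
        my $dbh = DBI->connect( 'dbi:mysql:database=myapp_test', 'testuser', 'testpass',
            { RaiseError => 1, AutoCommit => 0 } );

        END { $dbh->rollback if $dbh }

        $dbh->do( 'INSERT INTO users (name, status) VALUES (?, ?)',
            undef, 'throwaway', 'active' );

        my ($count) = $dbh->selectrow_array(
            'SELECT COUNT(*) FROM users WHERE name = ?', undef, 'throwaway' );
        is( $count, 1, 'row is visible inside the transaction' );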

    Heck, even if a test dies catastrophically halfway through and the test database doesn't get dropped, the next test program still issues a drop command before trying to create the database, ensuring that we never have to worry about data spilling over between tests. It's also trivial to temporarily disable the drop-on-destroy command just to see whether any test scripts aren't cleaning up properly, as that can often hint at bugs in the code.
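
    A minimal sketch of the drop-before-create step, assuming MySQL and an invented database name:

        use strict;
        use warnings;
        use DBI;

        # Connect to the server itself, not to the (possibly absent) test db.
        my $dbh = DBI->connect( 'dbi:mysql:', 'testuser', 'testpass',
            { RaiseError => 1 } );

        # Drop whatever a crashed run may have left behind, then rebuild.
        $dbh->do('DROP DATABASE IF EXISTS myapp_test');
        $dbh->do('CREATE DATABASE myapp_test');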

    Update: I should also mention that Test::Class can speed things up quite a bit in many areas. With Test::Class, if you have 30 test classes, you're still only loading the Perl interpreter once, related modules are usually only loaded once, and many test suites speed up quite a bit if you have slow-loading code. Of course, the test organization and inherited tests are also great. Test::Class still needs some work, but it's pretty damned good.
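
    A tiny Test::Class sketch, with invented class and method names, just to show the shape:

        package My::Test::User;
        use strict;
        use warnings;
        use base 'Test::Class';
        use Test::More;

        # Runs once per class rather than once per .t file, so slow-loading
        # code is only paid for a single time.
        sub startup : Test(startup) {
            require My::App::User;
        }

        sub name_is_required : Test(1) {
            ok( !eval { My::App::User->new( name => '' ) },
                'empty name is rejected' );
        }

        1;

    A single driver script then just loads the test classes and calls Test::Class->runtests.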

    Cheers,
    Ovid

