Followup to: How to test different back ends?
by skazat (Chaplain) on Jun 07, 2007 at 04:58 UTC
skazat has asked for the wisdom of the Perl Monks concerning the following question:
I didn't find a very good answer to my question in that thread, but I did come up with an OK idea that's not too much playing with dragons.
To recap: I'm trying to make some automated tests for an app that has different options for saving information in the backend.
In my Perl code, the different backends are put together by having one module that holds the methods shared by the object that represents whatever I need the backend for, plus a separate module holding the backend-specific methods for each backend.
The problem with testing these backends is that the backend-specific methods are added at compile time, not runtime, since the module holding the shared methods loads the backend-specific stuff with a:

    use base qw(App::Backend::ThisBackend);
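To make the compile-time wrinkle concrete, here's a minimal, self-contained sketch of that wiring. The module names, the config variable, and the store() method are all my inventions, and the packages are inlined (with %INC faked up) so it runs as one file; in the real app each package would live in its own file:

```perl
#!/usr/bin/perl
use strict;
use warnings;

BEGIN {
    # Stand-in for App::Config: one global that names the backend.
    package App::Config;
    our $Backend_Type = 'BerkeleyDB';
    $INC{'App/Config.pm'} = __FILE__;   # pretend it was use()d already
}

BEGIN {
    # Stand-in for one backend's method set.
    package App::Backend::BerkeleyDB;
    sub store { return 'stored via BerkeleyDB' }
    $INC{'App/Backend/BerkeleyDB.pm'} = __FILE__;
}

# The shared module: the base class name is interpolated when *this*
# line is compiled, so the config variable has to be set by then --
# which is why the test files must configure things before any of
# the app's modules get loaded.
package App::Record;
use base "App::Backend::$App::Config::Backend_Type";

package main;
print App::Record->store, "\n";   # method inherited from the backend
```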
So, in my testing suite, I now have a few files, instead of one for this specific part of the program.
One is called "backend.pl" (or whatever).
And then a test file for each of the different backends.
Those test files basically look like the following. This first example is for a backend that doesn't need anything to be set up, so we just explicitly set the type of backend we're trying to test, in this case a Berkeley DB-type backend:
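Here's a self-contained sketch of what one of those driver files does (the variable name is invented, and the shared test file is written to a temp file purely so the example runs as a single script; in the real suite it would just be "backend.pl" sitting next to the drivers):

```perl
#!/usr/bin/perl
use strict;
use warnings;
use File::Temp qw(tempfile);

# Write a toy stand-in for the shared test file.
my ($fh, $shared) = tempfile(SUFFIX => '.pl', UNLINK => 1);
print $fh <<'SHARED';
# backend.pl -- the shared tests.  It assumes the driver already
# picked a backend before any App:: modules get loaded.
print "exercising the $App::Config::Backend_Type backend\n";
1;   # do() treats a false return value as failure
SHARED
close $fh;

# The driver itself: configure the backend, then hand off to the
# shared tests.
no warnings 'once';   # the config global is only assigned here
$App::Config::Backend_Type = 'BerkeleyDB';
do $shared or die $@ || $!;
```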
Since my backends also use App::Config, and App::Config is already in the %INC hash, it won't get loaded again, and the variable the test file sets stays set for them.
This is the only part where I'm playing with dragons, since it's not *really* my policy or suggestion that you allow your app to arbitrarily set program-global configuration variables, but it does come in handy.
To activate the different backends, I can just add whatever I need to make these different backends work in these test files. For example:
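As a guess at what one of those setup-heavy test files might look like, here's a sketch for a hypothetical MySQL backend. The backend name, config variables, and connection details are all invented, and it references the app's own modules, so it's a file-layout sketch rather than something runnable on its own:

```perl
# t/backend_mysql.pl -- hypothetical driver for a backend that
# needs real setup before the shared tests can run.
use strict;
use warnings;

use App::Config;   # goes into %INC; later loads are no-ops

# Pick the backend and give it what it needs to connect.
$App::Config::Backend_Type   = 'MySQL';
$App::Config::Backend_Params = {
    dsn      => 'DBI:mysql:database=app_test',
    user     => 'app_test',
    password => 'seekrit',
};

# Hand off to the shared tests.
do './backend.pl' or die $@ || $!;
```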
So basically, the actual tests are imported with do() and the test files themselves just set up the environment.
Since the backends basically have the same API, the tests are the same. Testing all the backends makes sure the different implementations don't get out of sync with each other. This will hopefully stop me from getting so many headaches in the future :)
So that was my solution. Perhaps it could be used as a starting point/pattern for someone else.