So I enthused about automated testing and tried to get my colleagues on side (with a degree of success), but there is one major stumbling block: the Quality Assurance team. Correctly, they're independent. Unfortunately, they don't permit automated testing unless it's done with an approved, validated (company-validated, that is) automated testing tool (TestDirector, for example). They're also entirely non-technical, and really only concerned with the quality (in terms of change tracking, consistency, etc.) of documents. The net result is that testing carries an extraordinary time overhead, and we have to think carefully about which tests to run for a given release. This means that testing is not as thorough as it could or should be, and bugs creep through. Not as many as you might expect in this situation, but nevertheless more bugs find their way into production than I consider acceptable.
This stalemate has been going on for some time. Years, actually. Then on Monday something happened: a big, fat bug in some of my code showed up in production. Embarrassing. This bug means that I now have to run a manual report daily for the next couple of weeks, until we can patch, to take the place of the automated report that I broke. Embarrassing and irritating, especially since another bug had been emergency-fixed that morning.
At that point I realised that, just like the QA people, I'd lost sight of the real issue: testing is about finding bugs, not filling in forms. If the formal, QA-approved testing is less thorough than it should be, we have to make sure the code gets properly tested some other way.
So I got to work writing unit tests with Test::More.
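To give a flavour of what these tests look like, here's a minimal sketch in the Test::More style. The normalise_uid function is a made-up stand-in for real application code, not anything from our actual codebase:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Test::More tests => 3;

# Hypothetical function standing in for real application code:
# tidies up a user id before it's used in a directory lookup.
sub normalise_uid {
    my $uid = shift;
    $uid =~ s/^\s+|\s+$//g;    # trim leading/trailing whitespace
    return lc $uid;
}

is( normalise_uid('JSmith'),   'jsmith', 'lower-cases the uid' );
is( normalise_uid(' jsmith '), 'jsmith', 'trims whitespace' );
is( normalise_uid(''),         '',       'empty string is a safe edge case' );
```

Run it with prove (or plain perl) and each is() reports ok/not ok against the declared plan.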
Three days' work later, I've got one of the components up to 50% test coverage and found three bugs in edge cases that had never shown up in production. Unfortunately we probably can't test everything this way, since the Perl code is only one component, running in an embedded Perl interpreter inside a proprietary application. Integration testing still needs to be done the old way, so our testing overhead has gone up by the effort needed to write unit tests, but at least the chances of bugs getting through are reduced.
Another advantage of testing with the Perl testing modules is the availability of Devel::Cover. Because the unit testing is informal and unvalidated, test cases can be added any time. If someone has a few minutes spare, a quick run of the test suite with Devel::Cover will show up opportunities for improving the testing.
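The coverage workflow is pleasantly short. A typical invocation (assuming the tests live under t/ and run with prove) looks like this:

```shell
# clear out any stale coverage data from a previous run
cover -delete

# run the test suite with Devel::Cover collecting statistics
HARNESS_PERL_SWITCHES=-MDevel::Cover prove t/

# summarise the results (also writes an HTML report under cover_db/)
cover
```

The report breaks coverage down by statement, branch, condition and subroutine, so the gaps practically point out the next test case to write.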
Something else I'd lost sight of is the fact that we primarily want to test our code, not someone else's. A lot of our code depends heavily on Net::LDAP, so the need to provide a correctly configured directory server looked like a barrier to automated testing. However, end-to-end integration testing already covers the 'get data back from the directory server' case. If there's no directory server easily available for unit testing, we can invade the dependency's namespace to let us test our own code:
    use strict;
    use warnings;
    use Test::More tests => 2;

    require 'MyCode.pl';

    # Replace Net::LDAP's constructor and bind method with test doubles
    no warnings 'redefine';
    *Net::LDAP::new  = \&ldapnew;
    *Net::LDAP::bind = \&ldapbind;

    MyCode::bindToLDAP("hostname", "port", "cn=binddn", "password");

    sub ldapnew {
        my ($class, $host) = @_;
        is($host, "hostname:port",
            "Check that Net::LDAP::new receives the right params");
        return bless {}, $class;    # give the caller an object to bind() on
    }

    sub ldapbind {
        my ($self, %params) = @_;
        my %comparison = (
            dn       => "cn=binddn",
            password => "password",
        );
        is_deeply(\%params, \%comparison,
            "Check that Net::LDAP::bind gets the right params");
    }
I'm hoping I can get the vendor of the core application to give us information on externally accessing the test functions in their application via XS, so that we can extend the unit tests to include the application config. I'm not hopeful on that front, but it's worth a try.
Unfortunately, testing this way doesn't remove the requirement to do the formal testing the old way, so the drudgery remains, but at least the code is being tested properly and the chances of embarrassment are that much smaller.
One final note: in the mindless drudgery of manual testing, I'd also forgotten how much fun one can have writing tests to try to break things :-)
--------------------------------------------------------------
"If there is such a phenomenon as absolute evil, it consists in treating another human being as a thing."
John Brunner, "The Shockwave Rider".