http://qs321.pair.com?node_id=578279

Testing has been a bit of a problem for me over the last few months, as some will have heard from my CB rants. Not the 'make test and watch the TAP output come pouring out' type of testing - oh dear no. For much of the last 8 months, I've been doing a great deal of manual testing, of the 'type these commands, compare 50 fields in the database output with the 50 fields printed in the test script, put a p (pass) or f (fail) next to each one, sign and date, then go on to the next one of the 150 tests' type. Yuk! Human beings did not evolve to do this kind of thing.

So I enthused about automated testing and tried to get my colleagues on side (with a degree of success), but there is one major stumbling block. The Quality Assurance team. Correctly, they're independent. Unfortunately, they don't permit automated testing unless it's done with an approved, validated (company-validated, that is) automated testing tool (TestDirector, for example). They're also entirely non-technical, and really only concerned with the quality (in terms of change tracking, consistency, etc.) of documents. The net result of this is that testing is an extraordinary time overhead, and we have to think carefully about what tests to run for a given release. This means that testing is not as thorough as it could, or should, be, and bugs creep through. Not as many as you might expect in this situation, but nevertheless more bugs find their way into production than I consider acceptable.

This stalemate has been going on for some time. Years actually. Then on Monday something happened. A big, fat bug in some of my code showed up in production. Embarrassing. This bug means that I now have to run a manual report daily for the next couple of weeks until we can patch, to take the place of the automated report that I broke. Embarrassing and irritating, especially since another bug had been emergency-fixed that morning.

At that point I realised that, just like the QA people, I'd lost sight of the real issue - testing is about finding bugs, not filling in forms. If the formal, QA approved testing is less thorough than it should be, we have to make sure that the code gets properly tested some other way.

So I got to work writing unit tests with Test::More.
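
For anyone who hasn't seen it, a Test::More test file is just a Perl script that declares how many tests it plans to run and then makes assertions. The function names and expected values below are invented for illustration, not taken from our real code:

use strict;
use warnings;
use Test::More tests => 3;

require 'MyCode.pl';    # the code under test

# Compare what the code actually returns with what the spec says it should
is( MyCode::normalise_hostname('HOST01 '), 'host01',
    'hostnames are lower-cased and trimmed' );
ok( MyCode::is_valid_port(389),    'port 389 is accepted' );
ok( !MyCode::is_valid_port('abc'), 'non-numeric port is rejected' );

Run the file with prove and the pass/fail bookkeeping is done for you - no fields to eyeball, nothing to sign and date.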

Three days' work later, I've got one of the components up to 50% test coverage and found three bugs in edge cases that have never shown up in production. Unfortunately we probably can't test everything this way, since the Perl code is only one component, running in an embedded Perl interpreter inside a proprietary application. Integration testing still needs to be done the old way, so our test overhead has gone up by the amount of effort needed to write unit tests, but at least the chance of bugs getting through is reduced.

Another advantage of testing with the Perl testing modules is the availability of Devel::Cover. Because the unit testing is informal and unvalidated, test cases can be added any time. If someone has a few minutes spare, a quick run of the test suite with Devel::Cover will show up opportunities for improving the testing.
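
That run is only a couple of commands (t/mycode.t here is a stand-in name for whichever test file you want coverage for):

cover -delete
perl -MDevel::Cover t/mycode.t
cover

The first command clears out any previous coverage database, and the final cover builds a report showing, statement by statement and branch by branch, what the tests never touched - which is exactly where those spare minutes are best spent.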

Something else I'd lost sight of is the fact that we primarily want to test our code, not someone else's. A lot of our code depends heavily on Net::LDAP, so the need to provide a correctly configured directory server looked like a barrier to automated testing. However, end-to-end integration testing covers the 'get data back from the directory server' test case. If there's no directory server easily available for unit testing, we can invade the dependency's namespace to let us test our own code:

use strict;
use warnings;
use Test::More tests => 2;

require 'MyCode.pl';

# Replace Net::LDAP's constructor and bind method with our own test stubs.
# Note that new and bind are called as methods, so the first argument to
# each stub is the class or object.
no warnings 'redefine';
*Net::LDAP::new  = \&ldapnew;
*Net::LDAP::bind = \&ldapbind;

MyCode::bindToLDAP( "hostname", "port", "cn=binddn", "password" );

sub ldapnew {
    my ( $class, $host ) = @_;
    is( $host, "hostname:port",
        "Check that Net::LDAP::new receives the right params" );
    # hand back something blessed so the code under test can call ->bind on it
    return bless {}, $class;
}

sub ldapbind {
    my ( $self, %params ) = @_;
    my %comparison = (
        dn       => "cn=binddn",
        password => "password",
    );
    is_deeply( \%params, \%comparison,
        "Check that Net::LDAP::bind gets the right params" );
}

I'm hoping I can get the vendor of the core application to give us information on externally accessing the test functions in their application via XS, so that we can extend the unit tests to include the application config. I'm not hopeful on that front, but it's worth a try.

Unfortunately, testing this way doesn't remove the requirement to do the formal testing in the old way, so the drudgery remains, but at least the code is being tested properly and the chances of embarrassment are that much smaller.

One final note: in the mindless drudgery of manual testing, I'd also forgotten how much fun one can have writing tests to try and break things :-)

--------------------------------------------------------------

"If there is such a phenomenon as absolute evil, it consists in treating another human being as a thing."
John Brunner, "The Shockwave Rider".

Replies are listed 'Best First'.
Re: The purpose of testing
by imp (Priest) on Oct 15, 2006 at 02:21 UTC
    Automated testing is one of my favorite parts of the development process. It is also one of the most important in my opinion - on the same level as revision control and documentation.

    Developers often comment that automated testing is important, but that it is not really an option for the project they are working on. I am convinced that this is false in almost all cases.

    One common reason against testing is that the code in question is too complex for a unit test to verify. The only part of that sentence that I find correct is that the code is too complex. The code in question should likely be refactored into a few objects/functions that are tightly cohesive and loosely coupled. This sort of design is much easier to verify, and it reduces repetition.

    Another common assertion is that there isn't enough time before the deadline to write the code AND the test suite. This is only true in cases where you are adding the test suites at the end of the development cycle. Instead they should be developed alongside the code, or even better, before the code.

    It's common for developers to run the same block of code many times, manually verifying the debug output each time. Sure, it only takes 45 seconds to verify that way - but it's 45 seconds every time you change the algorithm, and it adds up quickly.

    And if you change the algorithm you now need to verify that it behaves correctly in all of the edge cases, as well as in some known problem sets. Whereas if you keep the same set of tests from the previous implementation you can run them after optimization and know instantly whether it still behaves according to specifications.

    Developing the tests before the code is implemented is another huge time-saver. How many times have you implemented a large portion of a module before realizing that the way you had planned to use it just isn't feasible, and you now need to change the interface? At this point you need to assume that all of the inner workings need to be verified again.

    To avoid some of that pain you should write the test cases before the implementation. This allows you to write the code from the point of view of a user of this module, which often points out flaws in your plan. You can then incrementally develop the module, and modify your tests whenever you discover a new requirement. This also encourages highly cohesive code.
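
    For instance, you can sketch the interface as a test before the module exists at all (the module name and constructor arguments here are invented for the sake of the example):

    use strict;
    use warnings;
    use Test::More tests => 2;

    # Report::Daily doesn't exist yet - this test records how we'd like to use it
    use_ok('Report::Daily');

    my $report = Report::Daily->new( date => '2006-10-14' );
    isa_ok( $report, 'Report::Daily' );

    If the constructor already feels awkward to call at this stage, the interface is wrong - and since nothing has been built yet, it is cheap to change.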

    Another benefit of test driven development is that your system grows at a steady and predictable pace, because bugs are detected and fixed early in the cycle instead of at the end... particularly during the optimization period.

    And perhaps the most important reason to develop automated testing is to prevent bugs from reappearing. New bugs will irritate clients, but the reappearance of an old bug will infuriate them. To avoid this embarrassing situation you should always write a test case that fails because of the bug in question before fixing the bug. Now all you have to do is run your entire test suite before each release, and you should be safe.
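
    In practice that can be a single extra test, written so that it fails against the buggy code and passes once the fix is in (the function, behaviour and ticket number below are made up):

    use strict;
    use warnings;
    use Test::More tests => 1;

    require 'MyCode.pl';

    # Regression test for (hypothetical) ticket 1234: totals were wrong for an empty list
    is( MyCode::sum_amounts( [] ), 0, 'an empty list of amounts sums to zero' );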

      I wanted to point this out for people who say "the code is too complex for a unit test to verify": don't kid yourself, you have written some really bad code.

      Good code design is easy to implement and easy to test. If your code is too complicated to test easily, then you are going to have bugs and can probably point the finger at a bad design.

      If you are having problems writing simple tests, then you need to think about how your code works and make it easier to write and understand.

Re: The purpose of testing
by chargrill (Parson) on Oct 14, 2006 at 21:47 UTC

    g0n++, testing++.

    After a long break from being a professional developer, I'm finally back doing what I like. But what really excites me is that I've managed to convince several others around me to break as much code as possible by writing tests. 160+ tests for three little (very small!) modules of mine, and I've uncovered bugs that I wouldn't have caught otherwise... "What happens if I try to set X to Y?"

    It looks like perhaps you've gotten along without needing it, but I'd like to mention Test::MockObject, by our very own chromatic. I'm getting ready to test my modules from a higher level (they're utility modules for a database-accessing mod_perl app) and without Test::MockObject, I'm having trouble envisioning how to give them a proper thrashing. In case you'd like to see a practical example, Jason Gessner gave a talk at YAPC::NA 2006, and his slides are available online. Test::MockObject shows up around page 25.
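
    For anyone who hasn't used it, the idea looks something like this - the database handle and the function under test are invented for the example, not lifted from my real modules:

    use strict;
    use warnings;
    use Test::More tests => 2;
    use Test::MockObject;

    # A fake database handle that answers selectrow_hashref with canned data
    my $mock_dbh = Test::MockObject->new;
    $mock_dbh->set_always( selectrow_hashref => { id => 1, name => 'g0n' } );

    # The (made-up) function under test: takes a handle, returns the user's name
    sub lookup_name {
        my ( $dbh, $id ) = @_;
        my $row = $dbh->selectrow_hashref( "SELECT * FROM users WHERE id = ?", undef, $id );
        return $row->{name};
    }

    is( lookup_name( $mock_dbh, 1 ), 'g0n', 'lookup_name returns the name field' );
    ok( $mock_dbh->called('selectrow_hashref'), 'the handle was actually queried' );

    The mock records every call made to it, so you can also check the arguments your code passed without ever touching a real database.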



    --chargrill
    s**lil*; $*=join'',sort split q**; s;.*;grr; &&s+(.(.)).+$2$1+; $; = qq-$_-;s,.*,ahc,;$,.=chop for split q,,,reverse;print for($,,$;,$*,$/)
Re: The purpose of testing
by dws (Chancellor) on Oct 15, 2006 at 04:35 UTC

    Unfortunately, [the Quality Assurance People] don't permit automated testing unless it's done with an approved, validated (company validated that is) automated testing tool (TestDirector, for example). They're also entirely non technical, and really only concerned with the quality (in terms of change tracking, consistency etc) of documents.

    One approach I've seen to problems like this is to translate the problem out of technical-speak and into dollar-speak.

    People high up an organizational food chain tend to think more in terms of dollars and risk than in terms of how things get done. So while the Quality Assurance folks might cling to TestDirector, there's probably someone higher up the org chart to whom the word "TestDirector" is merely a technical buzz word that has a dollar figure associated with it.

    The argument to make to such people, if you can get their attention (which may be difficult depending on the organization), goes something like "We're spending the same N-thousand dollars of people time over and over doing repetitive manual testing. Over a quarter, that costs us (some big number). By investing 6N-thousand to automate (using the tool the QA folks prefer), we save (some big number) over the course of a year. If we use a more appropriate tool, we only spend 3N-thousand, and save (some bigger number)."

    Adding "... and that means more money for executive bonuses" is occasionally necessary, though it's a phrase best reserved for times of true need. :)

    Note in particular the absence of the phrases "TestDirector", "Unit Testing" and "Perl" in this approach. To the people you're trying to reach, these words might just cause a buzzing sound in their ears.

Re: The purpose of testing
by revdiablo (Prior) on Oct 14, 2006 at 17:16 UTC
      What I would really love to see is test engineers recognizing the importance of scripting languages and automation. Recently I came across Testing Geeks; they do not have any information on automation as of now, but looking at the other content and organization, I assume it will be there soon. Hope to see more sites like this.

      Link fixed by GrandFather

        Link is not working in the previous post. It should be www.TestingGeek.com
Re: The purpose of testing
by mikasue (Friar) on Oct 15, 2006 at 03:39 UTC
    ++ g0n!

    I am a Quality Assurance professional and I totally agree with you that testing is not about filling out defect reports but about finding bugs before they hit production. It is not only embarrassing to the development team but the QA team as well. That is why I like to create test cases from the very first draft of the specification. I think that QA should happen as the BA is writing the spec. Testing for logic as well as design before it even hits development would make unit testing a lot easier.

    QA people at your place of work may be non-technical, but I don't think this is the norm. I do QA by profession but I code in my personal time. I think this helps me to write better test cases, because I think as a developer, not as a "non-technical" QA person.

    I have used Test::More to automate some tests before, but I like manual testing. It allows me to get personal with the code and the application I'm testing. Automated testing is very impersonal and quick, like a one-night stand :-). I like to get to know the application I'm testing even before I can see it.

    Very nice meditation!

      I wish everyone would get out of the bad habit of referring to testers and test teams as 'QA'. Generally, we are doing testing. We might be doing quality control. Quality assurance is a whole different animal. Evidently at some point, someone decided the words 'test team' or 'tester' don't sound important enough. I'm proud to be a tester (although ashamed of some in my profession, such as the team described in this post). I work on a team where everyone including the programmers shares responsibility for quality, testing and test automation. We're very happy with the results. As for test teams needing to be 'independent', that's silly. A good tester provides information about the application. It doesn't matter who we report to. We aren't going to be somehow insidiously influenced to not find issues because we have a collaborative relationship with the programmers. More communication and collaboration would go a long way with most projects.
        For the record, when I said QA, I meant QA, in the project management sense of the term. In our environment we don't have a separate testing function; developers test each other's code (but never their own). QA should absolutely be independent, to ensure that political expediency cannot override fundamental quality standards. On a different project I worked on (for a different organisation) without an independent quality function, testing could be (and frequently was) almost entirely dispensed with by management edict in order to meet a deadline. The result was disastrous, except for the project manager, who was only judged on meeting the deadline, not the quality of the final deliverables. An independent quality function is there to ensure such ill-advised deviations from procedure don't happen.

        --------------------------------------------------------------

        "If there is such a phenomenon as absolute evil, it consists in treating another human being as a thing."
        John Brunner, "The Shockwave Rider".

        As for test teams needing to be 'independent', that's silly. A good tester provides information about the application. It doesn't matter who we report to. We aren't going to be somehow insidiously influenced to not find issues because we have a collaborative relationship with the programmers. More communication and collaboration would go a long way with most projects.

        ++

Re: The purpose of testing
by McMahon (Chaplain) on Oct 16, 2006 at 22:07 UTC