Testing: Fun for the family

by BUU (Prior)
on Mar 09, 2004 at 10:02 UTC

After constantly reading about the joys of testing, I finally decided to buckle down and implement some tests for a piece of code I'm working on. After I got the first couple of tests written, I was hooked! I had to keep writing more tests. It was fun seeing exactly what contortions I had to do to test various bits of my application.

The payoff was immediate. Soon after getting a basic test suite in place, I noticed that I had a spare file in my testing directory. I was pretty sure I didn't need it, but not totally sure. So I simply moved the file away and re-ran my test suite. Everything passed. Now I can delete the file and *know* I don't need it.

That's really what testing buys you. Not some vague buzzword compliance or some random restriction, but security: real peace of mind, because you can make any change and *know* everything still works as it should.

For me, automated testing was really the obvious next step. I came to realize that I had been doing my own sort of informal testing: every time I made a change to my app, I would load it up and navigate through some of the screens, making sure it worked. But automated testing far surpasses this. With one press of a button I can instantly know that *everything* still works, not just the bits I remembered to test.

However, while I was coding my tests, it occurred to me that what I was really doing was translating my project specification from a high-level language (English) to a lower-level one (Perl), and in doing so creating something that could actually be used to check functionality. Which made me wonder: is there a way to check functionality without requiring this translation?

At first blush, it seemed rather simple. The majority of my tests were simple things like "ok($foo=~/bar/)", which you could easily translate to English (or keep in English, if you're thinking that way). Simply: "$foo should contain bar", and you have an English statement that does the same thing as the Perl.

But that's not really much of a win: you're simply replacing a bit of Perl syntax with the exact equivalent in English, and you still need the code to get the value into the variable in the first place. You'd also need to repeat yourself to test for all of the basic assumptions/restrictions you place on your product, such as what a method should return when given no value and things of that nature.

At the moment I can think of no real way to translate all of the high-level assumptions we make when we're describing the project specification, since so many of these are context-sensitive in various ways. I would need an English-to-Perl translator, which would probably put programmers out of a job if such a thing could actually be written. Does anyone else have any ideas on describing the project spec in a higher-level / more natural language while still being able to use it as a test?

Replies are listed 'Best First'.
Re: Testing: Fun for the family
by Corion (Patriarch) on Mar 09, 2004 at 10:17 UTC

    The functions that Test::More exports are already somewhat closer to "English": it exports is ("smart" equality), like (regex matching), and even is_deeply (deep comparison of data structures).
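
    For instance, a minimal self-contained sketch (the variables and values here are made up for illustration):

        use Test::More tests => 3;

        my $count = 42;
        my $name  = 'Monastery';
        my @got   = ( 1, 2, 3 );

        is( $count, 42, 'count is 42' );                      # "smart" equality
        like( $name, qr/^mon/i, 'name starts with "mon"' );   # regex matching
        is_deeply( \@got, [ 1, 2, 3 ], 'structures match' );  # deep comparison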

    I guess an "informal" specification could go the route of first defining Perl equivalents for some expressions and then simply translating some simplistic sentences into Perl, possibly like the following:

    =for test_description
    Description: The foo parameter contains valid data.
    Definitions: The foo parameter is stored in $foo.
    Criteria: The foo parameter is not empty and
              The foo parameter is defined and
              The foo parameter matches the pattern C</bar/>.

    =cut

    This could ("easily") be compiled into the following Perl code:

    use Test::More tests => 3;

    my %test;
    my $foo = 'bar';    # stands in for the value produced by the code under test

    $test{Description} = 'The foo parameter contains valid data: ';

    # The foo parameter is stored in $foo.
    $test{access} = [ \$foo ];

    for my $t ( @{ $test{access} } ) {
        # The foo parameter is not empty and
        isnt( $$t, '', $test{Description} . 'The foo parameter is not empty' );
        # The foo parameter is defined and
        ok( defined $$t, $test{Description} . 'The foo parameter is defined' );
        # The foo parameter matches the pattern C</bar/>.
        like( $$t, qr/bar/, $test{Description} . 'The foo parameter matches C</bar/>' );
    }

    As long as you restrict yourself to this simple vocabulary, this would do the job, and possibly you will want to create "macros" that allow you a higher-level description of the specifications. But I think that describing the single tests in English doesn't buy you much in the sense of readability and understanding, as it doesn't describe the "top view" but only the single small steps towards the main goal.
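
    Such a "macro" might be nothing more than a subroutine that bundles the low-level assertions (a minimal sketch; param_ok is an invented name):

        use Test::More tests => 3;

        # One call expands a high-level criterion into several low-level assertions.
        sub param_ok {
            my ($value, $pattern, $name) = @_;
            ok( defined $value,         "$name is defined" );
            isnt( $value, '',           "$name is not empty" );
            like( $value, qr/$pattern/, "$name matches /$pattern/" );
        }

        my $foo = 'some bar here';
        param_ok( $foo, 'bar', 'The foo parameter' );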

Re: Testing: Fun for the family
by adrianh (Chancellor) on Mar 09, 2004 at 10:38 UTC
    At first blush, it seemed rather simple. The majority of my tests were simple things like "ok($foo=~/bar/)", which you could easily translate to English (or keep in English, if you're thinking that way). Simply: "$foo should contain bar", and you have an English statement that does the same thing as the Perl.

    Instead of using English, I'd take advantage of the most appropriate test function and use a test name for clarity. So your example could be written:

    like( $foo, qr/bar/, 'foo contains bar' )
    At the moment I can think of no real way to translate all of the high-level assumptions we make when we're describing the project specification, since so many of these are context-sensitive in various ways. I would need an English-to-Perl translator, which would probably put programmers out of a job if such a thing could actually be written. Does anyone else have any ideas on describing the project spec in a higher-level / more natural language while still being able to use it as a test?

    You can always use Perl to make higher-level tests. I'm always writing little subroutines in test suites that capture a higher level testing concept.
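
    For example, here's the kind of thing I mean (a minimal sketch; user_record_ok and the field names are invented for illustration):

        use Test::More tests => 4;

        # One higher-level check: everything we expect of a "user" record.
        sub user_record_ok {
            my ($user, $name) = @_;
            ok( defined $user->{id},                "$name: has an id" );
            like( $user->{email}, qr/\@/,           "$name: email looks plausible" );
            cmp_ok( $user->{created}, '<=', time(), "$name: not created in the future" );
            ok( length( $user->{login} ) >= 3,      "$name: login is long enough" );
        }

        user_record_ok(
            { id => 1, email => 'buu@example.com', created => time(), login => 'buu' },
            'fresh user',
        );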

    Can you give an example of a higher level requirement that you think might be problematic?

      You can always use Perl to make higher-level tests. I'm always writing little subroutines in test suites that capture a higher level testing concept.
      Of course you can write higher-level tests in Perl. What I was trying to get at was writing higher-level tests in a language that's a higher level than Perl (such as English or some subset thereof).

      Maybe it's not possible, and maybe even if it were possible there's no real win. After thinking about it for a while, it occurs to me that such a system would be very close to a programming-language-as-English, which doesn't exist now and, despite several attempts, doesn't appear to be coming any time soon.
        After thinking about it for a while, it occurs to me that such a system would be very close to a programming-language-as-English, which doesn't exist now and, despite several attempts, doesn't appear to be coming any time soon.
        And I don't believe it will ever come. The problem with everyday English is that it isn't precise enough. Look, for instance, at math/physics in the 17th century. Why did it get such a boost at that time? Because that's when people moved away from formulating problems and solutions in natural languages (Latin, English, German, French) and started using math symbols.

        It's the same with programming (and hence testing), you need to be accurate in what you describe. Hence, programming languages.

        Abigail

        What I was trying to get at was writing higher-level tests in a language that's a higher level than Perl (such as English or some subset thereof).

        Sorry. Taking a firm grasp of the wrong end of the stick as usual!

        As you correctly point out what this boils down to is just-another-programming-language :-)

        You might be interested in taking a look at FIT - a slightly different approach to testing, especially at the higher acceptance-test levels.

Re: Testing: Fun for the family
by dragonchild (Archbishop) on Mar 09, 2004 at 13:49 UTC
    With one press of a button I can instantly know that *everything* still works, not just the bits I remembered to test.

    Everything you remembered

    • to build a test for
    • to maintain a test for
    • to build data to test with
    • to maintain data to test with

    Automated testing is definitely a step forward, but it most definitely is not the final step. In most organizations, testing is a full-time job. I worked in a group of testers for a year, supporting their test tool(s): 25 EEs, many with advanced degrees, and they tested the work of 40 developers. And this was just the sub-system testing. There was an entire team of 30+ doing regression testing ... for just this sub-system. And another team of 10-12 doing sub-system testing for another version of this product.

    Personally, I think that forcing developers (especially web-app developers) to be the DBA, BA, developer, tester, and training materials writer is a little ... much. :-)

    ------
    We are the carpenters and bricklayers of the Information Age.

    Please remember that I'm crufty and crochety. All opinions are purely mine and all code is untested, unless otherwise specified.

Re: Testing: Fun for the family
by hardburn (Abbot) on Mar 09, 2004 at 14:17 UTC

    I've been trying to discipline myself of late to always write my tests first. In a personal project, I found two things: first, I can see the code working right away, and second, I tend to write code in such a way that it's easy to write tests for.

    The first is important for code I'm not being paid to write, because I have a tendency to write all the interesting portions of the code and then stop. I can't tell you how much 3-5 year old code I have in my projects directory that won't even compile because I gave up on it.

    The second is really interesting. I often find that the code that is easy to test also happens to compartmentalize into subroutines well, so it ends up encouraging good design anyway. Certainly a case of constructive laziness.

    Have other monks who use Test Driven Development found the above to be true?

    ----
    : () { :|:& };:

    Note: All code is untested, unless otherwise stated

      I've found this to absolutely be the case. I only started writing tests for code I was working on about 9 months ago, and I'm a believer. For code I've written tests for, I no longer dread re-opening it 3 months later. Happened this morning, in fact - I had to fix ambiguous behavior in a core module, and I knew the tests (not only of that module, but the entire system) would help protect me from doing something seriously bad.

      Testing has worked wonders for me when it comes to API development - especially when I know other people are going to have to work with the API I'm developing. Writing tests forces you to run your proposed API through its paces before anyone else takes a look at it. Maybe it's not tuned to race yet, but at least you'll know all the wheels are bolted on and the engine runs. :)

      Check out Test Driven Development by Kent Beck. All the examples are in Java, but it's a good (fast) read on how to approach coding in this fashion - through heavy testing, coding, refactoring, and testing again.

      As others have said, testing isn't a magic bullet. I view it kind of like safety rigging in construction work - you could get by without it, but would you really want to?

Re: Testing: Fun for the family
by jbodoni (Monk) on Mar 11, 2004 at 05:18 UTC
Re: Testing: Fun for the family
by grantm (Parson) on Mar 11, 2004 at 20:59 UTC

    Once you've caught the testing bug, you can take things to the 'next level' by releasing your code to CPAN. Then the CPAN Testers team will introduce you to the joys of cross-platform testing issues.

    I've learnt numerous lessons the hard way (many of which in retrospect were pretty bloody obvious), including:

    • don't assemble pathnames using '/' - use File::Spec (see the sketch after this list)
    • don't assume everyone has a recent version of Perl
    • be careful that your tests don't make any assumptions about the order of keys in a hash
    • don't assume any relationship between the current system time and the timestamp on that file you just created (the fileserver time may be different)
    • make sure you know which version of other modules you depend on - you don't want their old bugs to break your test suite
    • don't assume anyone will read the README - try to integrate all dependency checking into your build process (i.e., before the tests run)
    • just because a module you require is installed, don't assume it works (there are a surprising number of people who watch 100% of tests fail, run make install anyway, and congratulate themselves on a job well done)
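
    The first and third points, for instance, look roughly like this in practice (a minimal sketch):

        use File::Spec;
        use Test::More tests => 2;

        # Portable path assembly: catfile picks the right separator for the
        # platform, instead of the non-portable "t/data/config.txt".
        my $path = File::Spec->catfile( 't', 'data', 'config.txt' );
        ok( length $path, "built path portably: $path" );

        # Hash-key order is not guaranteed, so sort before comparing.
        my %h = ( b => 2, a => 1 );
        is_deeply( [ sort keys %h ], [ 'a', 'b' ], 'keys compared in a fixed order' );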
Re: Testing: Fun for the family
by xdg (Monsignor) on Mar 11, 2004 at 16:09 UTC

    Reading about the "joys" of testing -- and the addictive quality of knowing when you've got things working the way you expect -- I just have to mention Devel::Cover, which I was alerted to in a recent TPJ article. Devel::Cover tracks how much of your code has been exercised by your tests and produces a fantastic HTML report that highlights tested areas in green and untested areas in red. In effect, it tests your tests. It covers:

    • subroutines
    • statements
    • branches
    • conditions
    • pod (if Pod::Coverage is installed)

    It's still alpha but works reasonably well.
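
    A typical run looks something like this (the cover command ships with the Devel::Cover distribution; exact flags may vary by version):

        cover -delete                    # clear out any old coverage data
        perl -MDevel::Cover t/basic.t    # run a test with coverage instrumentation
        cover                            # write the HTML report under cover_db/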

    -xdg

    Code posted by xdg on PerlMonks is public domain. It has no warranties, express or implied. Posted code may not have been tested. Use at your own risk.

Re: Testing: Fun for the family
by sfink (Deacon) on Mar 11, 2004 at 20:17 UTC
    I have been doing a lot of test-driven development recently, and so far, I have to say it's a mixed bag.

    It is enormously valuable to be able to have an actual demonstration of the correctness of a subsystem, rather than relying on gut feel and developers' memories of what they have "tested". It is pretty much impossible to have any confidence in a release unless you have the tests to show that at least a subset of the functionality is fully working -- without the tests, it's amazing what can be broken and still appear to work for the things that have been thrown at it so far.

    However (and perhaps this is more a reflection of me than of the methodology), I have not had great experiences with using test-driven development to drive the design. Whenever I start out by writing the tests and then making the design work for as many of those as I can, I end up with a simplistic design that:

    • works very well for the core tests that I was most concerned about
    • works only adequately for the remaining tests that weren't such a high priority
    • and works inadequately for the various new bits of functionality that inevitably get added to the requirements as the project progresses (i.e., the new tests that weren't written when the design started).
    I always seem to end up in the same cycle: I get everything working and passing all or almost all of my current tests, then either get a new requirement or just expand the project to include more of the pieces of the overall spec. I then discover that some fundamental piece of my design needs to be changed slightly. Theoretically, this should be no big deal since I have the test cases to allow me to change things drastically without worrying about breaking something subtle. In practice, that is true -- but I end up spending far too much time making the change, fixing things up so that all the old tests work, then making use of the change to implement the new tests. Lather, rinse, repeat.

    I find that if I step away from the details and create the design, and even part of the code, based on a wider view of what the system needs to do now and might need to do in the future, I end up with a much more robust system that can pass the tests and can deal with the inevitable late-stage additions.

    This opinion is probably not too popular with the current conventional wisdom about agile development processes or whatever, but I'm just reporting what my recent experience has been. I am more productive with initial design, then initial coding, then tests, then final coding. And again, this may just be due to defects in the gooey stuff between my personal pair of ears, but I'm wondering if others have experienced the same.

      I am a big fan of TDD -- and I agree with sfink's post. In particular,
      I then discover that some fundamental piece of my design needs to be changed slightly. Theoretically, this should be no big deal since I have the test cases to allow me to change things drastically without worrying about breaking something subtle. In practice, that is true -- but I end up spending far too much time making the change, fixing things up so that all the old tests work, then making use of the change to implement the new tests.
      Right on.

      The refactoring and XP folks make a big distinction between refactoring (improving the design of existing code) and adding new functionality. And often it is claimed you can refactor safely because 'your tests protect you'. Agreed, but like sfink, I find refactoring requires you to recode (often nearly all) the tests as well.

      This recoding is well worth it, of course. And TDD as a whole is very much worth it. But still, the tests -- necessarily so -- are very dependent on the specifics of the code tested. I find my tests break and need rebuilding (not just fail, and require the module to be changed) when the module goes through large revision.

      My current testing challenge is adding application or user tests -- so far I've been living in the easier world of unit tests.... app tests (particularly in a web environment) take more effort. My plan is to continue using extensive unit testing to confirm each module behaves as it should, then confine my app tests to one or two simple basic run-throughs of the web app's main function. I've noticed that when the apps break, it is no longer due to the objects (which are well tested), but to the thin (Mason) glue holding them together. That said, I've not done much app test writing yet, and (unlike unit tests) I can't seem to get into the rhythm of writing app tests.

Formal methods
by sleepingsquirrel (Chaplain) on Mar 09, 2004 at 18:11 UTC
    With all this talk about high-level testing specifications, I couldn't help but think about formal specification methods like Z. Is anyone here using formal methods to create bullet-proof Perl apps? I haven't used them myself, but I found this paper interesting.
Re: Testing: Fun for the family
by petdance (Parson) on Mar 10, 2004 at 03:32 UTC
    Are you aware of the prove utility that is included with recent releases of Test::Harness? It allows you to run a specific set of *.t files through Test::Harness without having to do a full-blown make test.
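
    Usage looks roughly like this (a sketch; check prove's documentation for the flags your version supports):

        prove t/basic.t       # run a single test file
        prove -v t/basic.t    # verbose: show each individual assertion
        prove t/              # run every .t file under t/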

    xoxo,
    Andy
