in reply to My number 1 tip for developers.

Start by typing as much of the code as you know will work

In my opinion an even better option is to start by writing a test for what you want to happen, then write the code to make that test pass. Repeat until code done.

Test Driven Development takes a little getting used to, but works very well in my experience. Advantages include:

  1. You get pretty much 100% test coverage for free.
  2. You always know when you have got a piece of code working.
  3. You always know if a new piece of code breaks something.
So code a "caller" that uses that interface. In Perl modules I tend to code this as a "main" program within the same file.
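A minimal sketch of that pattern: a module and its "main" driver program in one file. The package name `doit` and the `enumerate` method are borrowed from the example later in this node; the implementation here is invented purely for illustration, and the `unless (caller)` guard is the usual idiom for keeping the driver from running when the file is use()d as a module.

```perl
package doit;
use strict;
use warnings;

sub new { return bless {}, shift }

# A stand-in implementation: return the elements of the referenced array.
sub enumerate {
    my ( $self, $aref ) = @_;
    return @$aref;
}

# The "main" program lives in the same file and exercises the interface;
# 'caller' is false when the file is run directly rather than use()d,
# so this block only runs when you execute the module file itself.
package main;

unless ( caller ) {
    my $doit = doit->new();
    my @data = 1 .. 10;
    print join( ' ', $doit->enumerate( \@data ) ), "\n";
}

1;
```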

You can also do this sort of thing with Test::More and friends - and have the advantage of having something you can move to a test script later on. So instead of:

    package main;
    use strict;

    my $doit = doit->new();
    my @data = 1..10;
    print $doit->enumerate( \@data );

do something like:

    package main;
    use strict;
    use Test::More 'no_plan';

    isa_ok my $doit = doit->new(), 'doit';
    my @data = 1..10;
    is_deeply [ $doit->enumerate( \@data ) ], [1..10], 'enumerate works';

Then you don't have to think about whether what's being printed out is correct.

Replies are listed 'Best First'.
Re: Re: My number 1 tip for developers.
by BrowserUk (Patriarch) on Sep 12, 2003 at 16:21 UTC

    I have yet to be convinced about the use of Test::More and its brethren.

    I've seen signs of people using these modules within the code being tested. That means that it either has to be removed for "production purposes" or disabled. If it is removed, then you risk its removal changing the nature of the code. If it is disabled, then the production version carries the weight of the test code. Neither is satisfactory IMO.
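One hedged answer to the "weight of the test code" concern: guard embedded checks with a compile-time constant. When the constant is false, Perl's constant folding eliminates the guarded block entirely at compile time, so the production version carries no runtime cost for the disabled checks. The package, method, and environment variable names below are invented for illustration.

```perl
package Widget;
use strict;
use warnings;

# 0 unless the (made-up) WIDGET_SELFTEST environment variable is set.
use constant SELFTEST => $ENV{WIDGET_SELFTEST} ? 1 : 0;

sub double {
    my ( $class, $n ) = @_;
    my $result = 2 * $n;
    if ( SELFTEST ) {    # folded away at compile time when SELFTEST is 0
        die "double($n) gave $result" unless $result == $n + $n;
    }
    return $result;
}

1;
```

This doesn't remove the source text from the file, but the disabled branch never makes it into the compiled optree, which addresses the "carries the weight" half of the objection.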

    Using it the way you showed above, separate from the real code, just to simplify the validation of results, makes some sense, but I'm still not sure.

    My other basic qualm is that it tends to reduce the testcases to a series of individually discrete steps. Essentially unit tests of a very fine grain. I prefer my testcases to be more realistic, by which I mean, I like them to be as close as is practical to a real world usage of the code under test. This tends to do somewhat more than just exercising the API under test, in that it also tends to highlight design issues. If the API has been designed such that it is awkward to use, this tends to become obvious. We've all encountered APIs that may look great on paper, from the designer's perspective, but that force the coder to jump through hoops in order to use them in the real world. (Anyone used Java?:)

    Coding the testcase as a realistic 'caller application' has the benefits of:

    • Highlighting design flaws or impracticalities.
    • Serving as a working--and maintained--programmer's documentation example. So much better than the glorified 'prototype' examples that usually exist and often fall out of maintenance as soon as the documentation goes to press.

    I can see the benefit of using the Test::* group to provide easy notification of where & why the overall testcase fails, but I still have qualms about their more abstract effects upon the testcases.

    • How do they affect timing issues?
    • How do they interact with threading?
    • Can I easily disable the test verification code whilst leaving the testcase code they exercise in place?
    • Or do I have to code the tested code twice so that the exercised code will still be exercised (and its results and side effects still be available to the next part of the overall testcase)?
    • Can I easily reuse the testcase (with its embedded but disabled Test::* code) for profiling and benchmarking purposes?
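One possible sketch of an answer to the "disable verification, keep the exercise" questions: route assertions through thin wrapper subs. With checks enabled they delegate to Test::More; disabled, they still evaluate and return their arguments, so the exercised code runs and its results stay available to later steps of the testcase. The TESTCASE_CHECKS switch is an invented name, and this is just one way to structure it.

```perl
package main;
use strict;
use warnings;

# Checks default to on; set TESTCASE_CHECKS=0 to run the same script
# as a pure exercise (e.g. under a profiler or benchmark harness).
my $CHECKS = defined $ENV{TESTCASE_CHECKS} ? $ENV{TESTCASE_CHECKS} : 1;

if ( $CHECKS ) {
    require Test::More;
    Test::More->import( 'no_plan' );
}

# is_ok( $got, $expected, $name ): verify when enabled, but pass the
# computed value through either way so the testcase can keep using it.
sub is_ok {
    my ( $got, $expected, $name ) = @_;
    Test::More::is( $got, $expected, $name ) if $CHECKS;
    return $got;
}

# The testcase body reads identically with checks on or off.
my $sum = 0;
$sum += $_ for 1 .. 10;
my $total = is_ok( $sum, 55, 'sum of 1..10' );
```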

    From my brief exploratory visits to the Test::* group of modules, none that I have seen yet satisfies these criteria.

    I still don't have a satisfactory way of ensuring that my module-embedded testcases are not loaded into memory when the module is used. I have a very hooky method, but it has many flaws. If anyone has any suggestions on how to do this I'd love to hear them.
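One suggestion, offered as a sketch rather than a polished solution: keep the embedded testcase after __DATA__, which perl never compiles, and slurp-and-eval it only when the file is run directly. When the module is use()d, the only overhead compiled into memory is the small `unless (caller)` guard; the test text itself is never read. The `Gadget` package and its method are invented for illustration.

```perl
package Gadget;
use strict;
use warnings;

sub triple { return 3 * $_[1] }

# Runs only when this file is executed directly, not when use()d.
unless ( caller ) {
    local $/;                  # slurp the rest of the file
    my $tests = <DATA>;        # text after __DATA__ is never compiled
    eval $tests;               # ...until we explicitly ask for it
    die $@ if $@;
}

1;

__DATA__
use Test::More 'no_plan';
is( Gadget->triple(4), 12, 'triple works' );
```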

    Examine what is said, not who speaks.
    "Efficiency is intelligent laziness." -David Dunham
    "When I'm working on a problem, I never think about beauty. I think only how to solve the problem. But when I have finished, if the solution is not beautiful, I know it is wrong." -Richard Buckminster Fuller
    If I understand your problem, I can solve it! Of course, the same can be said for you.