http://qs321.pair.com?node_id=1157460

davies has asked for the wisdom of the Perl Monks concerning the following question:

Chatting to a friend recently, we were discussing approaches to tests. I have always put the first test at the top of the .t file, the second test I write second and so on. But it occurred to me that this might not be ideal. Searching, I find that Test::Most has a die_on_fail setting, so my thinking is that if I were to put this option in and then write each new test at the top of the .t file, this might speed up TDD. The effect would be that writing a failing test would not result in all the passing tests having to run first, which is what happens now. The (previously) passing tests would be run only when the new (previously failing) test passed. The time gain would not be massive, but a few seconds over hundreds of iterations could well add up.
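For illustration, something like this is what I have in mind (My::Widget and the assertions are made up; die_on_fail comes from Test::Most):

    use strict;
    use warnings;
    use Test::Most;    # exports die_on_fail(), ok(), is(), etc.
    use My::Widget;    # made-up module under development

    die_on_fail;       # stop the .t file at the first failing assertion

    # The newest (currently failing) test goes at the top...
    is( My::Widget->new->colour, 'blue', 'default colour is blue' );

    # ...followed by the older, previously passing tests.
    ok( My::Widget->new, 'constructor returns an object' );
    is( My::Widget->new->size, 'medium', 'default size is medium' );

    done_testing();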

I haven't seen this documented anywhere. Is it a fascinating new insight (I don't think so), something that should have occurred to me much earlier, or something so trivial that the gains aren't worth losing the train of thought that having tests in the right order exposes?

Regards,

John Davies

Re: Order of tests
by Athanasius (Archbishop) on Mar 11, 2016 at 17:31 UTC

    Hello John,

    If you intend to run just one test (the latest one) until it passes, why run the others at all? Conversely, if you want to run regression tests while developing a new section of code, there’s no point in “running” them in such a way as to prevent them from actually running!

    Or put it this way: once tests pass, the purpose of keeping them around is to detect when new code impacts your existing code base in unintended ways. When that happens, it’s usually an indication of a serious design flaw, so you want to know about it as soon as possible. Therefore, the regression tests should always run before the latest, expected-to-fail test.

    Just my 2¢.

    Athanasius <°(((>< contra mundum Iustus alius egestas vitae, eros Piratica,

      I suppose the best approach is a blended one. Run the full suite at intervals, but not after every tweak of the new code. The time saved by not running tests that are more than likely going to pass anyway is a real saving. You just need to balance that against how much coding and testing time (possibly the entire time between now and when you last did the full test!) you'll lose if you fail to detect a break in the older code.

      But God demonstrates His own love toward us, in that while we were yet sinners, Christ died for us. Romans 5:8 (NASB)

        This is why it's important to separate your tests into separate files for different areas of functionality. You can run a single test file to ensure you haven't broken the functionality immediately surrounding the code you're editing, and run the whole test suite periodically rather than after every change.

        That said, if you only run your entire suite periodically, the time spent figuring out which update broke a remote part of your project might be more costly than just running the whole suite on each change.

        Typically, I run my full suite on each change. On my bigger projects, with dozens of test files and thousands of tests, I run the suite at minimum on each commit prior to pushing; then, on each push, Travis-CI runs the gamut on 4-8 versions as well.

Re: Order of tests
by Old_Gray_Bear (Bishop) on Mar 11, 2016 at 23:09 UTC
    Back when I was working as team lead/development manager, the rule was that you wrote tests as you went along and kept them with the Source Tree in the ../t sub-directory of the component you were working on. That allowed the developer the most freedom to pick and choose the tests that they needed to run during the day.

    At 2300 the source tree was swept for changes and the Product Build kicked off. If the Build was successful, then all the /t directories were swept and executed. (The test-sweep was a separate Build, so we could schedule it apart from a Product Build.) When I showed up at 0800 the following morning, I had most of the data I needed for the 0830 Coffee and Scrum, before we split up for the day. ("OK, People. Let's go and invent the Future!")

    Since the Build took two to three hours and the Test Run took about the same, I had some slop in the nightly schedule. There was always some chat about trying to speed up the Build/Test, but so long as I had the slop time, I could successfully argue that any 'tuning' represented false optimization and that, besides, the nightly full test run was 'regression'.

    The developers were at liberty to maintain their own order-of-march for the tests. Some just kicked off the entire suite for their component when they felt the need to (or wanted to get up and get more coffee). Others just limited themselves to the one or two tests that exercised their code change and let the nightly full test run check for cross-talk and regression in other parts of the Source Tree.

    I guess I am saying that you need both kinds of test harnesses -- one that fails quickly for when you are head down in the Code, and one for regression that is very hard to kill. (Maybe call them 'Agile' and 'Timex'?)

    ----
    I Go Back to Sleep, Now.

    OGB


Re: Order of tests
by Tanktalus (Canon) on Mar 13, 2016 at 02:38 UTC

    To be honest, it cuts both ways. Skipping old tests that were passing, so that you see sooner whether the new test passes, may result in getting the new test to pass even though you broke an old, previously working test three hours ago.

    I generally try to build up my tests: early in the test file, I test things that later tests require. Testing that I never break the fundamentals may not save me much time, but it sure saves me a lot of headache.

    Also, I find that the more fundamental the test is, the more isolated it likely is. Things like standalone modules, code that doesn't rely on networking and external devices, that sort of thing. I find my fundamental tests are also usually my fastest, and that peace of mind in ensuring I don't break them is worth the extra couple seconds it takes the tests to run.

    In practice, though, I'm usually somewhere in the middle. I put my fundamental tests in the 0[0-9]-* test files and build up from there. Then, when I'm working on higher-level code in the 5* range or whatever, which relies on those low-level fundamentals, I'm not usually making changes to the low-level code, so I can bypass those tests by simply running "prove t/5*", as sketched below.
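    Roughly, the layout looks something like this (file names invented for the example):

        t/00-load.t       # fundamentals: modules load, constructors work
        t/05-config.t
        t/10-parser.t
        t/50-reports.t    # higher-level code that builds on the fundamentals
        t/55-export.t

        prove -l t/5*     # just the higher-level tests
        prove -l          # the whole suite (prove defaults to t/)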