http://qs321.pair.com?node_id=496867

Usually, I first write my code, then write my tests. I've been doing that for years, and IMO, I usually churn out decent code.

Recently, for a small project, I decided to try out 'test driven development', first write the tests, then write the code, tweaking it till all tests pass.

There was a hairy section of code - after writing it, I ran it. Some of the corresponding tests failed, and I changed the code. Ran it again, and voila, the tests passed. After finishing the rest of the code, Devel::Cover gave me 100% coverage, and Test::More gave me 100% success. So, I handed it over for some in-the-field testing.

The code came quickly back to me. Remember the hairy code I changed to pass the tests? Well, it was correct after all, and I broke it. The tests turned out to be wrong.

Someone write me a Test::Test.

Perl --((8:>*


Re: A danger of test driven development.
by jmcnamara (Monsignor) on Oct 03, 2005 at 12:01 UTC

    I find that test driven design gives a genuine boost to my productivity. However, your post touches upon a minor worry that I have about it.

    I think that with test driven design it is possible to write code to pass tests rather than code that fulfils the intended purpose. The presence of a safety net may make you a little careless on the high-wire.

    I'd still strongly advocate test driven design but I think that it is worth remembering that code is written for a purpose and that purpose isn't to pass tests.

    --
    John.

      I think that with test driven design it is possible to write code to pass tests rather than code that fulfils the intended purpose. The presence of a safety net may make you a little careless on the high-wire.

      Yeah, I see that sometimes. It often comes from people thinking they're finished when all the tests pass, rather than when they can't write a failing test.

      I think that with test driven design it is possible to write code to pass tests rather than code that fulfils the intended purpose.

      But how does one verify the intended purpose? How does one ensure that the code fulfils that purpose? How does one ensure that later changes to fulfil some other purpose don't interfere with purposes already satisfied?

      The idea behind TDD is that the tests become the only real specification that matters because they are the only ones that are written formally at the level of code. Yes, it's possible to mis-write a test. It's also possible to mis-write code. Nothing can prevent one from mis-interpreting a poorly-written (or even a well-written) specification from time to time.

      That said, good test technique should focus on "black-box" testing -- abstracting implementation away and reducing tests to inputs and outputs that can be mapped to requirements. Given some interaction or input, the code produces some result or output. It's harder (though not impossible) to get that wrong. Also, the process of writing tests reveals weaknesses or lack of clarity in the requirements that might otherwise be missed.
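
      As a rough sketch of what that looks like (My::DateParser and parse_date() are hypothetical names, invented purely for illustration), a black-box test file reduces each requirement to an input/output pair and says nothing about the implementation:

          use strict;
          use warnings;
          use Test::More tests => 3;

          use My::DateParser qw(parse_date);

          # Each case maps a requirement to an input and its expected output;
          # how parse_date() gets there is irrelevant to the test.
          is( parse_date('2005-10-03'), '03 Oct 2005', 'ISO date is reformatted' );
          is( parse_date('03/10/2005'), '03 Oct 2005', 'European date is reformatted' );
          is( parse_date('not a date'), undef,         'garbage input returns undef' );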

      TDD isn't a game for getting tests to pass; 100% test coverage doesn't prove correctness. But TDD shifts the burden of proof from the developer to the test-writer, who has the responsibility of faithfully translating user requirements into a verifiable specification (the tests). And aren't poor requirements one of the things we hate? TDD reveals them up front, before the coding is done, instead of afterwards, when the effort has already been wasted.

      -xdg

      Code written by xdg and posted on PerlMonks is public domain. It is provided as is with no warranties, express or implied, of any kind. Posted code may not have been tested. Use of posted code is at your own risk.

Re: A danger of test driven development.
by dragonchild (Archbishop) on Oct 03, 2005 at 11:19 UTC
    Sounds like you didn't read all of the TDD / XP documentation. Whenever two pieces of code disagree, verify that the simpler one is correct (because it's simpler to verify), then check the more complex one. This is kinda like checking to make sure the computer is still plugged in when the monitor mysteriously goes blank.

    I'm also not sure you were truly doing TDD if you had code you needed to change to pass the tests. You shouldn't have to change code to pass tests - the code should be written literally line by line to pass tests you wrote five minutes earlier. I literally write can_ok( $CLASS, 'foo' );, watch it fail, then write sub foo {}, watch it pass, then write the first test for foo().
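
    As a sketch of that micro-cycle (My::Widget is a hypothetical class, used only to illustrate the rhythm):

        use strict;
        use warnings;
        use Test::More tests => 2;

        # Step 1: write the failing test.
        use_ok( 'My::Widget' );
        can_ok( 'My::Widget', 'foo' );    # red until foo() exists

        # Step 2: in My/Widget.pm, add just enough code to make it pass:
        #
        #     package My::Widget;
        #     sub foo {}
        #     1;
        #
        # Step 3: write the first test of foo()'s behaviour, watch it fail,
        # and repeat.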

    It sounds like you wrote too much untested code at once.


    My criteria for good software:
    1. Does it work?
    2. Can someone else come in, make a change, and be reasonably certain no bugs were introduced?
      "You shouldn't have to change code to pass tests"

      Huh? Then I guess that you have to change your test to let your code pass ;-)

      Whenever two pieces of code disagree, verify that the simpler one is correct (cause it's simpler to verify), then check the more complex one.

      I did, and the test looked ok.

      You shouldn't have to change code to pass tests

      Really? You never change code? All the code you write is correct, always? You never make typos, swap arguments, popped instead of shifted? Lucky you, you don't need any tests!

      It sounds like you wrote too much untested code at once.

      No, I didn't. I did, however, write big enough chunks of code between test runs to actually have compilable, no-nonsense code.

      Perl --((8:>*
        I obviously didn't make myself clear. The point is that you write the test, write the code that passes the test, and then that piece of code is done until the spec changes (or you refactor). You shouldn't have untested code lying around such that writing another test leaves you having to go back and fix existing code.

        The point is that while you might have written tests before code, you wrote code that wasn't tested until you wrote the test for it. I think your idea of "no-nonsense code" is unreasonable. About half the code I write between runs is what I think you would consider "nonsense" code. A lot of the time, I will have code that I know is wrong because I haven't written the test that demonstrates its wrongness. Until I write the test that demonstrates what's wrong, forcing me to refactor, I don't refactor. Otherwise, I would have written code before that code's test, which means I'm not doing TDD anymore.

        Now, I don't ship code in that state (though I do check it in). The test suite doesn't fully implement the spec, so the feature isn't done. But you literally write the most minimal, simplest code that will work. Unoptimized, unvalidating code. Crappy CS101 code. You'll then write the tests that expose the weaknesses, forcing the refactoring into correct code. But only with tests.


        My criteria for good software:
        1. Does it work?
        2. Can someone else come in, make a change, and be reasonably certain no bugs were introduced?
Re: A danger of test driven development.
by tirwhan (Abbot) on Oct 03, 2005 at 12:49 UTC

    Hmm, maybe it's just me, but I fail to see how this problem is an artifact of TDD.

    1. Write tests first:
      1. Write broken test
      2. Write correct code
      3. Run test, which fails
      4. Change code, which breaks it
    2. Write code first:
      1. Write correct code
      2. Write broken test
      3. Run test, which fails
      4. Change code, which breaks it

    So unless you're doing something different in 2.1 than in 1.2 (like, for example, eyeballing the code more thoroughly or running it manually, which is a kind of "test" in itself), you'll end up with the same result. I don't see how you can fault the methodology for this.

    TDD does not guarantee that you produce exclusively working code. It's just a methodology which highlights errors more quickly and makes it less likely for broken code to appear in production (and IMO it also gives you a better approach to code design for free).

Re: A danger of test driven development.
by cbrandtbuffalo (Deacon) on Oct 03, 2005 at 12:38 UTC
    Another antidote to tricky tests or code is another pair of eyes. If I have two pieces of information that I believe are both correct but that contradict each other (the code and the test disagree), I'll often grab someone else and have them take a look. If you have a spec you're working from, you can also have someone take a quick look at the spec and the test and see if they agree that the test is doing what you think it does.

    We also have code reviews for all new code and we often find tests and code that are incorrect.

    It's true that the passing test can give you a false sense of security. Having someone else review things can help.

      True, however, I wasn't convinced both pieces of code were correct. I thought the test was correct, and since I just wrote the code, I assumed the code was incorrect. As for another pair of eyes, I've many coworkers, and they all are good coders, but they're all fluent in a subset of shell, awk, C, Java, .NET, Python, FORTRAN, Dingo, and C++.

      Good advice, but it wouldn't have helped me.

      Perl --((8:>*

        Tell that to the teddy bear. Literally, explain the code to a teddy bear or anyone who happens to be passing. Just the act of talking a problem through often clarifies the issue. If your cow-orkers have even a little programming experience that will help a lot.

        Not telling the teddy bear is not an excuse.


        Perl is Huffman encoded by design.
Re: A danger of test driven development.
by leriksen (Curate) on Oct 03, 2005 at 13:21 UTC
    I think it was Knuth who said "Every software problem can be solved with more, or less, abstraction. Experience tells us which direction is correct."
    (If it wasn't Knuth, I'm sure a wiser monk will correct me oh so gently....)

    Perhaps for TDD a corollary is
    "Every failed test requires a change to the test, or to the code. Knowledge of the requirements tell us which choice is correct."

    For some reason I have an overwhelming desire to say these with a Chinese accent, and finish them with ", grasshopper".

    ...reality must take precedence over public relations, for nature cannot be fooled. - R. P. Feynman

Re: A danger of test driven development.
by BrowserUk (Patriarch) on Oct 03, 2005 at 17:42 UTC

    I think the problem with TDD is essentially the same problem as with any other development methodology, that of overemphasis of one aspect of the development process with respect to other aspects. When the tests become more important than the code being tested, you have a problem.

    However, this problem is not a fundamental problem with the methodology; it is a problem with either the implementation or the day-to-day execution of it. Or both.

    Just as an overzealous adherence to OO doctrine can lead to disguising an inherently global entity behind a design pattern that trades a simple global for a complex one, with the consequence of greater complexity and no real benefit, so an overzealous adherence to TDD can lead to a situation where the test suite takes on greater importance than the code under test.

    One of the signs of this overzealous application of a methodology is when the important part--i.e. the code--starts to be designed around, or altered to accommodate, the methodology. Another is when the final purpose of the code becomes subordinated to the goals of the methodology. When "100% tests passed" or "100% code coverage achieved" goals take a higher priority than "the code functions within specification", the real goal has been lost and the problem has arisen.

    It is also important to realise that fixing the problem means adjusting the methodology or its implementation, not throwing it away in favour of some new "magic bullet".

    If there is one lesson that history can teach us with respect to code development, it's that being taken in by the promises of the latest, greatest, buzzword-compliant paradigm, to the exclusion of previous hard-won experience and common sense, invites history to bite us in the arse as we re-learn the forgotten lessons.


    Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
    Lingua non convalesco, consenesco et abolesco. -- Rule 1 has a caveat! -- Who broke the cabal?
    "Science is about questioning the status quo. Questioning authority".
    The "good enough" maybe good enough for the now, and perfection maybe unobtainable, but that should not preclude us from striving for perfection, when time, circumstance or desire allow.
      When "100% tests passed" or "100% code coverage acheieved" goals take a higher priority than "the code functions within specification", the real goal has been lost and the problem has arisen.

      The tests are supposed to be a translation-into-code of the specification. TDD is therefore supposed to be S(pec)DD. Obviously, that doesn't always happen.

      What TDD does is shift the quality assurance constraint. Instead of looking at the program, you look at the tests. Bad things happen if you think that quality assurance is no longer an issue. It's every bit as important as it is in any other paradigm; it's just supposed to be easier to verify.


      Caution: Contents may have been coded under pressure.
      One of the signs of this overzealous application of a methodology is when the important part--i.e. the code--starts to be designed around or altered to accommodate the methodology.

      Actually, in this case the fact that the code is designed to accommodate the methodology is a very good thing, because code which is the result of tests/TDD tends to have low coupling.

      Normally there is only one scenario in which code is used: the application. That means there is little outside pressure to steer the design.

      When doing TDD, there is an additional scenario in which the code must work: the tests. This forces the code to be useful outside of the application, in the fairly sparse environment that is the test suite. The code must be decoupled from the rest of the application, because... the rest of the application simply isn't there.

      The beauty of this is that you don't have to be a brilliant designer who is attentive and clever all the time in order to produce well structured code. You just follow these small steps one by one (write test, write code until tests pass, remove duplication, goto 10) and the rest ends up being an emergent property of those steps.
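
      A small sketch of what that decoupling tends to look like (My::Report is a hypothetical module, invented only for illustration): because the test file is a second "application", the code can't reach into application globals and has to take its inputs explicitly.

          package My::Report;
          use strict;
          use warnings;

          sub new {
              my ( $class, %args ) = @_;
              # Dependencies arrive as arguments, not from surrounding globals.
              return bless { rows => $args{rows} || [] }, $class;
          }

          sub total {
              my $self = shift;
              my $sum  = 0;
              $sum += $_->{amount} for @{ $self->{rows} };
              return $sum;
          }

          1;

          # The test file needs no application at all:
          #
          #     my $report = My::Report->new( rows => [ { amount => 2 }, { amount => 3 } ] );
          #     is( $report->total, 5, 'totals are summed' );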

      Much appreciated by us bears of small brain.

      /J

      The goal of TDD done correctly is to have a specification fully codified as a test suite. The benefit is that unlike prose, code can be debugged.

      On the way there, the code does indeed end up accommodating the test suite, but that is a good thing: experience shows that code which is hard to test is quite simply hard to use.

      Obviously writing tests just for the sake of writing tests is pointless, but that’s a truism.

      Makeshifts last the longest.

        I was not attempting to critique TDD, I think the concept is a good one.

        When I said "the problem with TDD" in my first sentence, I should have stuck with the OPs word, "the danger with TDD".

        As I attempted to make clear in the second paragraph, this "problem" (read: this danger) "is not a fundamental problem with the methodology", or any other methodology; it is a problem of mis-application, or mis-emphasis, by some people, sometimes.

        It is a matter of balance. If the balance swings too far in favour of one aspect of the development process, then others get watered down or totally omitted.

        In the case of TDD, it is easy for the programmer to write tests for the code they are going to write, instead of tests that encapsulate the specification. If their understanding of the specification is correct, then the code they intend to write will be correct and the tests will be correct.

        But if their understanding of the spec is incorrect, then they write tests to test the code they intend to write, and from that point forth, code is verified against test and test against code, and the spec is forgotten until some third party attempts to use their code in conjunction with the spec.

        Too many practitioners see the tests the programmer writes as the conclusion. They make great unit tests but very poor systems and integration tests.

        The best test of an API is to write an application that uses it - from the spec, and preferably by someone other than the person who writes the code that implements the API, or the person who specified it. But the latter is preferable to the former.

        Having an application that uses the API is the surest way of ensuring that the API works for that application: not just that individual functions and methods perform to requirements within the ranges of parameters the application calls them with, but also, and perhaps more importantly, that the separate parts of the API work together and work with the rest of the application.

        That is, not just "works" as in "given parameters within specified ranges, produces the correct results", but works as in "is a good fit with the other functions within that particular API, and works with any and all other APIs and data representations that the application uses".

        So you don't have one API that wants its data represented in relative format and another that takes absolute, forcing the application to constantly switch between formats when passing data between APIs. (As often as not, the one taking absolute-format data will immediately convert it to relative format internally, or vice versa.)

        E.g. one uses date of birth and the other age; window coordinates versus screen coordinates; imperial units versus metric; and so on.
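
        As a purely illustrative sketch of the kind of glue that results (Window::Layout and Screen::Draw are made-up APIs here, not anything from a real project):

            use strict;
            use warnings;

            sub draw_widget {
                my ( $window, $widget ) = @_;

                # One API hands back window-relative coordinates...
                my ( $x, $y ) = Window::Layout::position_of($widget);

                # ...while the other insists on absolute screen coordinates,
                # so every call site ends up converting by hand.
                my ( $left, $top ) = Window::Layout::origin($window);
                Screen::Draw::put_pixel( $x + $left, $y + $top );
            }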

        Once written, an application that uses an API will not only test for this type of impedance mismatch, which is rarely caught by unit-type testing (how many times have you used, or read about, APIs that "work fine but are a bitch to use"?); it will also tend to test the required range of parameters (and combinations thereof) of the individual APIs.

        This will often be a subset of the full range of possibilities that the API could be called upon to handle, at least unless and until another application comes along that needs a wider set (or combination) of parameters. But if that second application never comes along, it can be an expensive exercise to write code and tests (or tests and code:) to handle them.

        (Silly) e.g.: writing a square-root function to handle negative numbers and produce complex results would be an expensive option if it is never called upon to do so. Moreover, doing so could add further to the problems of applications using it, which would be better served by an exception being raised when passed a negative number than by being returned a correct but useless imaginary result.
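
        A minimal sketch of the exception-raising alternative (sqrt_strict() is a made-up name, used only for illustration):

            use strict;
            use warnings;
            use Carp qw(croak);

            # Refuse negative input rather than paying for complex-number
            # support that no caller has asked for.
            sub sqrt_strict {
                my ($n) = @_;
                croak "cannot take the square root of a negative number ($n)"
                    if $n < 0;
                return sqrt $n;
            }

            # sqrt_strict(9)  returns 3
            # sqrt_strict(-4) raises an exception instead of returning 2i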

        None of this means that TDD is a bad thing, only that over-emphasis of one aspect to the detriment of others is bad. "Oh! We'd never make that mistake" is everyone's first reaction to the type of exaggerated examples I've cited, but they do happen.

        Take the Mars Climate Orbiter problem.

        "This is an end-to-end process problem," he said. "A single error like this should not have caused the loss of Climate Orbiter. Something went wrong in our system processes in checks and balances that we have that should have caught this and fixed it."

        Or the short-sighted Hubble telescope, with the most perfectly flawed mirror ever produced up to that time.

        Hubble is working perfectly but the Universe is all blurry.

        TDD makes a virtue of having the programmer write the tests against which his code is tested. It works fine, provided that this myopic viewpoint is balanced by other techniques that give an overview.

        And that is the danger I am alluding to, not only with TDD, but with any methodology that is given too high an importance, to the exclusion of balance. If "100% tests passed" and "100% code coverage" are not balanced by fitness for purpose, ease of use, and good impedance matching with the code's foreseen applications; if unit testing is not balanced against actual or "typical usage" system and integration testing; then the overall result can be an expensive, but very well tested, disaster.

        It doesn't matter how many tests you write, or that they all pass, if you are testing the wrong thing.

        If I have one fear of TDD, it is that it can lead to tunnel vision, unless there is someone or something within the project that has the authority and brief to take a much wider overview of the project, and how it will be used.


        Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
        Lingua non convalesco, consenesco et abolesco. -- Rule 1 has a caveat! -- Who broke the cabal?
        "Science is about questioning the status quo. Questioning authority".
        The "good enough" maybe good enough for the now, and perfection maybe unobtainable, but that should not preclude us from striving for perfection, when time, circumstance or desire allow.
Re: A danger of test driven development.
by pg (Canon) on Oct 03, 2005 at 15:35 UTC

    You are obviously doing the right thing, but you need to employ a better plan.

    I would guess that part of the problem is that you have enough test cases to cover the normal situations, but not enough to cover the exceptional situations.
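
    For instance (My::Parser and parse() are hypothetical names, just to sketch the idea), the exceptional situations deserve tests of their own alongside the happy path:

        use strict;
        use warnings;
        use Test::More tests => 3;

        use My::Parser qw(parse);

        # The normal situation most suites already cover...
        is_deeply( parse('a=1'), { a => 1 }, 'well-formed input parses' );

        # ...plus the exceptional situations that field testing finds first.
        is( parse(''), undef, 'empty input returns undef rather than dying' );

        eval { parse(undef) };
        like( $@, qr/no input/, 'undef input raises a useful error' );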

      I agree w/ pg.

      The code came quickly back to me. Remember the hairy code I changed to pass the tests? Well, it was correct after all, and I broke it. The tests turned out to be wrong.

      It sounds like you had missing tests. If the field testers found a bug the code tweak introduced, there was a missing test case to match whatever it was they did to uncover it. For my own part, I have found meta-tests (like for the final object state or something; more like what a user would see) to uncover bugs in my code better than micro-tests (checking return values from subs/methods; more like what the hacker sees).
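
      A rough sketch of the distinction (My::Account is a hypothetical class, used only for illustration):

          use strict;
          use warnings;
          use Test::More tests => 3;

          use My::Account;

          my $acct = My::Account->new( balance => 100 );

          # Micro-tests: individual return values, the way the hacker sees it.
          is( $acct->deposit(50),  150, 'deposit returns the new balance' );
          is( $acct->withdraw(25), 125, 'withdraw returns the new balance' );

          # Meta-test: the final state, closer to what a user would see.
          is( $acct->balance, 125, 'balance reflects the whole sequence' );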

Re: A danger of test driven development.
by petdance (Parson) on Oct 04, 2005 at 14:13 UTC
    You didn't say anything about the docs. How did the documentation for the code line up with all this?

    You're not trying to get code and tests to agree. You're trying to get code AND docs AND tests to all agree. If any one of them disagrees with the others, you have a bug.

    xoxo,
    Andy

Re: A danger of test driven development.
by nothingmuch (Priest) on Oct 07, 2005 at 01:23 UTC
    WRT "someone write me a Test::Test" i have an interesting story:

    A month or so ago I was studying Forth by writing a Forth system... Its test suite is written in Test::Base and contains small programs and their expected output.

    To help me write the system, I would first read a bit about the feature I was planning to add. Then I would start writing tests and running them against an implementation like gforth, which I know isn't too drunk. By the time tests started accumulating and were no longer failing, I usually understood the feature better thanks to the experimentation. Additionally, I would have tests that check that the behavior of a given Forth system is consistent with, e.g., gforth for that particular feature.

    At that point I would switch the Forth backend to use my own implementation, still missing the feature, and implement things till they started working.
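
    In spirit, the comparison looked something like this sketch (the gforth command line and the ./myforth script here are assumptions, not the actual Test::Base harness):

        use strict;
        use warnings;
        use Test::More tests => 1;

        my $program = ": square dup * ; 7 square . bye";

        # Run the same small Forth program through the reference system and
        # through the implementation under test, then compare the output.
        my $reference = qx(echo '$program' | gforth);
        my $mine      = qx(echo '$program' | ./myforth);

        is( $mine, $reference, 'my forth agrees with gforth on this program' );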

    It was a very nice experience =)

    -nuffin
    zz zZ Z Z #!perl