http://qs321.pair.com?node_id=1123862

I'll be giving a talk at work about improving our test automation; initial ideas for the talk sections are listed below. Feedback on the talk content and general approach is welcome, along with any automated testing anecdotes you'd like to share.

Automation Benefits

Automation Drawbacks

When and Where Should You Automate?

Adding New Tests

Test Infrastructure and Tools

Design for Testability

Test Driven Development (TDD)

Test Doubles

See also:

Testing Memory and Threads

Testing Tools

Test Anything Protocol (TAP)

Types of Testing

References Added Later

CPAN Testing Tools

General References

Related References

Updated: many extra references were added long after the original node was written. 2019: Added Test Doubles section. 2021: Added Types of Testing section. 2023: Added links to C++ examples using Catch2 and Google Abseil library.

Re: Effective Automated Testing
by choroba (Cardinal) on Apr 20, 2015 at 08:42 UTC
    Just an anecdote:

    My task was to add a new feature to our product. As the feature was rather complicated, I created some tests while coding it. When the release date was near, the project manager asked me whether I was finished. Almost, I replied, I'm still working on the tests. Don't waste your time, tests aren't part of the task, he said. Nevertheless, I finished the tests as well as the task the next day, still several days before the deadline. At almost the same time, the client changed the requirements and we had to add some additional features. Thanks to the tests, it didn't take me more than an hour. I can't imagine what I'd have done if the tests hadn't been there. Since then, I've created tests several times, but there hasn't been any complaint from the manager about wasting my time.

Re: Effective Automated Testing
by einhverfr (Friar) on Apr 21, 2015 at 15:05 UTC

    I have a different view on when and what to test. I would say that a test that breaks when you fix a bug is a bad test. A test that never finds a bug may be a bad test or a good test.

    A lot of people do TDD with the idea that 100% test coverage is something to shoot for in itself. I am not in that camp. To me, contract-oriented design and testing go hand in hand. You don't want to test every possible behavior, because your view of the behavior may be wrong and there are legitimate areas where you want to reserve the right to change your mind without breaking your tests.

    Instead you want to test guarantees. What do you promise? Why? What are the corner cases you need to check? Get those tested. You will usually find that this results in high test quality and coverage, though not 100%, that fixing bugs rarely breaks tests, and that tests which do break are showing you real bugs.
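
    For illustration, here is a minimal Test::More sketch of testing a stated guarantee and its corner cases rather than every behaviour; clamp() and its promised range are hypothetical examples, not something from the thread:

        use strict;
        use warnings;
        use Test::More;

        # Hypothetical routine with a stated guarantee: the result is
        # always within [$lo, $hi], whatever the input.
        sub clamp {
            my ($n, $lo, $hi) = @_;
            return $lo if $n < $lo;
            return $hi if $n > $hi;
            return $n;
        }

        # Test the promise and its corner cases, not the implementation.
        is( clamp(  5, 0, 10 ),  5, 'value inside the range is unchanged' );
        is( clamp( -3, 0, 10 ),  0, 'value below the range is raised to the lower bound' );
        is( clamp( 99, 0, 10 ), 10, 'value above the range is lowered to the upper bound' );
        is( clamp(  0, 0, 10 ),  0, 'lower bound itself is allowed' );
        is( clamp( 10, 0, 10 ), 10, 'upper bound itself is allowed' );

        done_testing();

    Nothing here pins down how clamp() arrives at its answer; if the implementation changes but the guarantee still holds, the tests keep passing.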

Re: Effective Automated Testing
by RonW (Parson) on Apr 20, 2015 at 20:07 UTC
    "A test that never finds a bug is poor value."

    It depends on why it never finds a bug, which highlights the importance of testing the tests.

    Ideally, as long as it can be demonstrated that the tests are correct, you want the tests to find no bugs.
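
    One way to demonstrate that a test can really fail is to run it, under interception, against input that must break it. Below is a minimal sketch using Test::Builder::Tester (which ships with the Test::Simple distribution); in_range() is a made-up helper, not something from the thread:

        use strict;
        use warnings;
        use Test::Builder::Tester tests => 1;
        use Test::More;

        # Hypothetical check whose test we want to vet.
        sub in_range { my ($n) = @_; return $n >= 1 && $n <= 100 }

        # Declare the TAP output the inner test should emit, then run it
        # against a value that must make it fail. If the inner test were
        # to pass anyway, the outer test_test() would report the problem.
        test_out("not ok 1 - value is in range");
        test_fail(+1);
        ok( in_range(200), "value is in range" );
        test_test("the range check can really fail");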

    FWIW, our testing manager complains loudly when his team finds any bugs. In development, we do, of course, run tests, both automated and manual, both our own tests and our "ports" of their tests. Unfortunately, we can't run their tests directly because the testing team uses LabVIEW and we don't. We've asked many times for "run-time only" LabVIEW licenses, but, so far, we have not succeeded in explaining to the C-level managers how LabVIEW would be useful to us.