Effective Automated Testing

by eyepopslikeamosquito

I'll be giving a talk at work about improving our test automation. Possible talk sections, with initial ideas for each, are listed below. Feedback on the content and general approach is welcome, along with any automated testing anecdotes you'd like to share.

Automation Benefits

  • Reduce cost.
  • Improve testing accuracy/efficiency.
  • Regression tests ensure new features don't break old ones. Essential for continuous delivery.
  • Automation is essential for tests that cannot be done manually: performance, reliability, stress/load testing, for example.
  • Psychological. More challenging/rewarding. Less tedious. Robots never get tired or bored.

Automation Drawbacks

  • Opportunity cost: the bugs you might have found had you spent the same effort on manual testing.
  • An automated test suite needs ongoing maintenance, so test code should be well designed and maintainable; that is, avoid the common pitfall of "oh, it's only test code, so I'll just quickly cut-and-paste it".
  • Cost of investigating spurious failures. It is wasteful to spend hours investigating a test failure only to find out the code is fine, the tests are fine, it's just that someone kicked out a cable. This has been a chronic nuisance for us, so ideas are especially welcome on techniques that reduce the cost of investigating test failures.
  • May give a false sense of security.
  • Still need manual testing. Humans notice flickering screens and a white form on a white background.

When and Where Should You Automate?

  • Testing is essentially an economic activity. There are an infinite number of tests you could write. You test until you cannot afford to test any more. Look for value for money in your automated tests.
  • Tests have a finite lifetime. The longer the lifetime, the better the value.
  • The more bugs a test finds, the better the value.
  • Stable interfaces provide better value because it is cheaper to maintain the tests. Testing a stable API is cheaper than testing an unstable user interface, for instance.
  • Automated tests give great value when porting to new platforms and when upgrading existing ones.
  • Writing a test for customer bugs is good because it helps focus your testing effort around things that cost you real money and may further reduce future support call costs.

Adding New Tests

  • Add new tests whenever you find a bug.
  • Around code hot spots and areas known to be complex, fragile or risky.
  • Where you fear a bug. A test that never finds a bug is poor value.
  • Customer focus. Add new tests based on what is important to the customer. For example, if your new release is correct but requires the customer to upgrade the hardware of 1000 nodes, they will not be happy.
  • Documentation-driven tests. Go through the user manual and write a test for each example given there.
  • Add tests (and refactor code if appropriate) whenever you add a new feature.
  • Boundary conditions (see the Test::More sketch after this list).
  • Stress tests.
  • Big tests, but not too big: a test that takes too long to run is a barrier to running it often.
  • Tools. Code coverage tools tell you which sections of the code have not been tested. Other tools, such as static (e.g. lint, Perl::Critic) and dynamic (e.g. valgrind) code analyzers, are also useful.
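
To make the boundary-conditions idea concrete, here is a minimal Test::More sketch; the clamp() function and its limits are hypothetical, invented for the example:

    use strict;
    use warnings;
    use Test::More tests => 5;

    # Hypothetical function under test: clamp a value into the range [0, 100].
    sub clamp {
        my ($n) = @_;
        return 0   if $n < 0;
        return 100 if $n > 100;
        return $n;
    }

    # Exercise the boundaries, not just the happy path.
    is( clamp(-1),  0,   'below lower bound clamps to 0' );
    is( clamp(0),   0,   'exactly on the lower bound' );
    is( clamp(50),  50,  'value in range passes through' );
    is( clamp(100), 100, 'exactly on the upper bound' );
    is( clamp(101), 100, 'above upper bound clamps to 100' );

Off-by-one mistakes cluster at exactly these edge values, which is why each boundary gets its own assertion.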

Test Infrastructure and Tools

  • Single step, automated build and test. Aim for continuous integration.
  • Clear and timely build/test reporting is essential.
  • Keep metrics (via test metadata, say) on the test suite itself. Is each test providing value? How often does it fail validly? How often does it fail spuriously? How long does it take to run?
  • Aim for around 80% code coverage (for most applications 100% code coverage is not worth it).
  • It's vital to quarantine intermittently failing tests quickly and to fix them promptly, only returning them to the main build when they are reliable (if you don't, people start ignoring test failures!). No broken windows.
  • Make it easy to find and categorize tests. Use test metadata.
  • Integrate automated tests with revision control, bug tracking, and other systems, as required.
  • Divide the test suite into components that can be run separately and in parallel; quick test turnaround time is crucial (a TAP::Harness sketch follows this list).
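
As one way to get that parallelism, here is a sketch using TAP::Harness, which ships with the core Test-Harness distribution; the t/ directory layout is an assumption:

    use strict;
    use warnings;
    use TAP::Harness;

    # Run all *.t files under t/ with four parallel jobs.
    # Splitting the suite into independent files is what makes this safe.
    my $harness    = TAP::Harness->new({ jobs => 4 });
    my $aggregator = $harness->runtests( glob 't/*.t' );
    exit( $aggregator->all_passed ? 0 : 1 );

The same effect is available from the command line via prove's -j option.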

Design for Testability

  • It is easier/cheaper to write automated tests for systems that were designed with testability in mind in the first place.
  • Interfaces Matter. Make them: consistent, easy to use correctly, hard to use incorrectly, easy to read/maintain/extend, clearly documented, appropriate to audience, testable in isolation.
  • Dependency Injection is perhaps the most important design pattern for making code easier to test (see the sketch after this list).
  • Mock Objects are frequently useful and are broader than unit tests - for example, a mock server written in Perl (e.g. a mock SMTP server) to simulate errors, delays, and so on.
  • Consider ease of support and diagnosing test failures during design.
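
A minimal sketch of dependency injection in Perl; ReportMailer, SpyMailer and their methods are hypothetical names invented for the example. Because the collaborator is passed in rather than hard-wired, the test can substitute a spy for the real SMTP client:

    package ReportMailer;
    use strict;
    use warnings;

    # The mailer collaborator is injected, not constructed internally,
    # so a test can substitute a double for the real SMTP client.
    sub new {
        my ($class, %args) = @_;
        return bless { mailer => $args{mailer} }, $class;
    }

    sub send_report {
        my ($self, $to, $body) = @_;
        return $self->{mailer}->send( to => $to, body => $body );
    }

    # A spy double: records what it was asked to send.
    package SpyMailer;
    sub new  { return bless { sent => [] }, shift }
    sub send { my ($self, %msg) = @_; push @{ $self->{sent} }, \%msg; return 1 }

    package main;
    use Test::More tests => 2;

    my $spy = SpyMailer->new;
    my $rm  = ReportMailer->new( mailer => $spy );
    $rm->send_report( 'monk@example.com', 'nightly results' );

    is( scalar @{ $spy->{sent} }, 1, 'exactly one message sent' );
    is( $spy->{sent}[0]{to}, 'monk@example.com', 'sent to the expected address' );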

Test Driven Development (TDD)

  • Improved interfaces and design. Especially beneficial when writing new code. Writing a test first forces you to focus on the interface, from the point of view of the user. Hard-to-test code is often hard to use. Simpler interfaces are easier to test. Functions that are encapsulated and easy to test are easy to reuse. Components that are easy to mock are usually more flexible/extensible. Testing components in isolation ensures they can be understood in isolation and promotes low coupling/high cohesion. Implementing only what is required to pass your tests helps prevent over-engineering. (A test-first sketch follows this list.)
  • Easier Maintenance. Regression tests are a safety net when making bug fixes. No tested component can break accidentally. No fixed bugs can recur. Essential when refactoring.
  • Improved Technical Documentation. Well-written tests are a precise, up-to-date form of technical documentation. Especially beneficial to new developers familiarising themselves with a codebase.
  • Debugging. Spend less time in crack-pipe debugging sessions. When you find a bug, add a new test before you start debugging (see practice no. 9 at Ten Essential Development Practices).
  • Automation. Easy-to-test code is easy to script.
  • Improved Reliability and Security. How does the code handle bad input?
  • Easier to verify the component with memory checking and other tools.
  • Improved Estimation. You've finished when all your tests pass. Your true rate of progress is more visible to others.
  • Improved Bug Reports. When a bug comes in, write a new test for it and refer to the test from the bug report.
  • Improved test coverage. If tests aren't written early, they tend never to get written. Without the discipline of TDD, developers tend to move on to the next task before completing the tests for the current one.
  • Psychological. Instant and positive feedback; especially important during long development projects.
  • Reduce time spent in System Testing. The cost of investigating a test failure is much lower for unit tests than for complex black box system tests. Compared to end-to-end tests, unit tests are: fast, reliable, isolate failures (easy to find root cause of failure). See also Test Pyramid.
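
To illustrate the test-first rhythm described above, here is a sketch; slug() is a hypothetical function, and in real TDD the tests would be written (and seen to fail) before the implementation beneath them exists:

    use strict;
    use warnings;
    use Test::More tests => 3;

    # Step 1 (red): specify the interface from the caller's point of view.
    is( slug('Hello World'),    'hello-world', 'spaces become hyphens' );
    is( slug('  Perl  Monks '), 'perl-monks',  'surrounding whitespace is trimmed' );
    is( slug(''),               '',            'empty string passes through' );

    # Step 2 (green): the simplest implementation that passes the tests.
    sub slug {
        my ($text) = @_;
        $text =~ s/^\s+|\s+$//g;   # trim leading/trailing whitespace
        $text = lc $text;
        $text =~ s/\s+/-/g;        # collapse whitespace runs to single hyphens
        return $text;
    }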

Test Doubles

  • Dummy objects are passed around but never actually used. Usually they are just used to fill parameter lists.
  • Fake objects actually have working implementations, but usually take some shortcut which makes them not suitable for production (an InMemoryTestDatabase for example).
  • Stubs provide canned answers to calls made during the test, usually not responding at all to anything outside what's programmed for the test.
  • Spies are stubs that also record some information based on how they were called; for example an email service that records how many messages were sent.
  • Mocks are pre-programmed with expectations which form a specification of the calls they are expected to receive; they can throw an exception if they receive a call they don't expect, and are checked during verification to ensure they got all the calls they were expecting. Note that only mocks insist upon behavior verification. The other doubles can, and usually do, use state verification. Mocks behave like other doubles during the exercise phase because they need to make the SUT (System Under Test) believe it's talking with its real collaborators, but mocks differ in the setup and the verification phases. While mocks are valuable when testing side effects, protocols and interactions between objects, note that overuse of mocks inhibits refactoring due to tight coupling between the tests and the implementation (instead of just the interface contract). (A Test::MockObject sketch follows this list.)
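
To make the stub/spy/mock distinction concrete, here is a sketch using Test::MockObject from CPAN; the price-lookup and mailer collaborators are hypothetical:

    use strict;
    use warnings;
    use Test::More tests => 3;
    use Test::MockObject;

    # Stub: canned answer, verified through resulting state.
    my $stub_db = Test::MockObject->new;
    $stub_db->set_always( 'lookup_price', 42 );
    is( $stub_db->lookup_price('widget'), 42, 'stub returns its canned answer' );

    # Spy/mock: records calls so behaviour can be verified afterwards.
    my $mailer = Test::MockObject->new;
    $mailer->set_true('send');

    # Imagine this call happening deep inside the code under test.
    $mailer->send( to => 'monk@example.com' );

    ok( $mailer->called('send'), 'the mailer was asked to send' );
    my ($method) = $mailer->next_call;
    is( $method, 'send', 'first recorded call was send()' );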

See also:

  • Testing Memory and Threads
  • Testing Tools
  • Test Anything Protocol (TAP) (a minimal TAP sketch follows)
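
For reference, TAP itself is just plain text on stdout: a plan line followed by one ok/not ok line per test. A minimal hand-rolled sketch, using no test modules at all:

    #!/usr/bin/perl
    use strict;
    use warnings;

    print "1..3\n";    # the plan: three tests follow
    print "ok 1 - addition works\n";
    print "ok 2 - subtraction works\n";
    print "not ok 3 - division by zero is handled # TODO decide on behaviour\n";

Modules such as Test::More emit exactly this format, which is what lets prove and TAP::Harness aggregate results from any mixture of tools.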

Types of Testing

  • Static testing. Code review by humans and static code analysers (e.g. lint, Perl::Critic); a Perl::Critic sketch follows this list.
  • Passive testing. In contrast to active testing, testers do not provide any test data; they just examine system logs and traces.
  • Dynamic testing. Unit tests, Integration tests, System tests, Acceptance tests, ...
  • Dynamic program analysis. e.g. Purify, Valgrind, ThreadSanitizer, ...
  • Exploratory testing. Simultaneous learning, test design and test execution.
  • Performance testing. Stress testing. Load testing.
  • Usability testing.
  • Regression testing.
  • Acceptance testing.
  • End-to-end testing.
  • Security testing.
  • Equivalence partitioning.
  • Critical path testing.
  • Failover testing.
  • Internationalization testing.
  • Smoke testing.
  • Alpha, Beta testing.
  • ... and many more :)
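
As an example of the static analysis mentioned above, Perl::Critic can be driven programmatically as well as via the perlcritic command line; a minimal sketch, assuming the modules to criticise live under lib/:

    use strict;
    use warnings;
    use Perl::Critic;

    # Severity 3 ("harsh") reports policies of severity 3 and above;
    # 5 is the gentlest setting, 1 the most exacting.
    my $critic = Perl::Critic->new( -severity => 3 );

    for my $file ( glob 'lib/*.pm' ) {
        my @violations = $critic->critique($file);
        print "$file:\n";
        print "  $_" for @violations;   # violations stringify to readable reports
    }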

Updated: 2019: Added Test Doubles section. 2021: Added Types of Testing section. 2023: Added links to C++ examples using Catch2 and the Google Abseil library.

