Re: Creating a co-operative framework for testing

by xdg (Monsignor)
on Oct 23, 2006 at 15:23 UTC


in reply to Creating a co-operative framework for testing

There's a little bit of an XY Problem here. Your task is to "organize a framework for cooperation between the Quality Assurance department and the community". But your questions are about the Perl testing infrastructure, which has little to do, directly, with community involvement.

In my view, the Perl testing infrastructure is effective for several reasons:

  • Running tests was made part of the module installation cycle: make; make test; make install

  • Writing tests was made easy, through a good framework (Test::Simple, Test::More and friends) and good instructions (e.g. Test::Tutorial). A minimal example follows this list.

  • Lots of evangelism and infrastructure support (as noted in other responses).
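
To make the second point concrete, a test file in that style can be very short. A minimal sketch, where My::Module, add(), and new() are hypothetical placeholders:

    # t/basic.t - a minimal test file in the Test::More style
    # (My::Module, add(), and new() are hypothetical placeholders)
    use strict;
    use warnings;
    use Test::More tests => 3;

    use_ok('My::Module');                       # the module loads cleanly
    is( My::Module::add(2, 2), 4, 'add() sums two numbers' );
    ok( My::Module->new,       'new() returns a true value' );

Running it via prove or make test prints the familiar ok/not ok lines and a summary, which is most of what lowers the barrier to entry.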

However, this is all about module authors writing tests, not about community feedback into the testing process. I count myself lucky when a bug report includes a patch, and I almost never get one that includes a test file demonstrating the bug. I wrote "The value of test-driven bug reporting" to encourage more of it.
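
In that spirit, a test-driven bug report is nothing more than a small .t file the maintainer can run to reproduce the problem. A sketch, with Some::Module and parse() as hypothetical stand-ins for the code being reported against:

    # bug-demo.t - attach something like this to a bug report so the
    # maintainer can reproduce the failure with "prove bug-demo.t"
    # (Some::Module and parse() are hypothetical placeholders)
    use strict;
    use warnings;
    use Test::More tests => 2;

    use Some::Module;

    # Expected: parse() survives empty input and returns a hashref
    my $result = eval { Some::Module::parse(q{}) };
    is( $@, q{},           'parse() does not die on empty input' );
    is( ref $result, 'HASH', 'parse() returns a hashref' );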

So, back to your task -- coordinating QA and community -- I think you need to look beyond the Perl testing infrastructure. You need to look at collaborative projects -- perhaps in Perl, perhaps elsewhere -- and see what works and what doesn't.

For example, there was/is the Phalanx project. Here in NY, the local Perl group collaborated on two Phalanx modules, and had a devil of a time getting the author to incorporate their work back into the modules. That's led to a general disinterest in repeating the process. People want a sense of feedback and accomplishment from their work.

I think a better example is Pugs -- audreyt's Perl 6 interpreter. Commit bits are handed out freely. And there's a real emphasis on automated testing with something like 18,000 tests (if I recall correctly). Look at the Smoke Reports -- in particular, drill into the details and look at some of the graphical test output. This may be a model to emulate.

For your task, I offer these suggestions:

  • Put your test suite into a repository and give out commit bits liberally. Make it easy for people to contribute tests.

  • Create automated smoke tests against the repository so people can quickly see that their tests are included and what the results are. (A minimal runner sketch follows these suggestions.)

  • Write good, easy documentation to help people get started writing tests in your framework.

  • If using Perl, consider tools like Perl::Critic to help identify contributed tests with poor style that your QA department should clean up. Don't use it as a gateway to make contributions harder; just use it to QA your QA. (A sketch follows these suggestions.)

  • Track and publicize contributions: aim for reputation-based rewards rather than monetary or other prizes. (Look at the XP system on PerlMonks, for example.)
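
On the smoke-test suggestion, the runner doesn't have to be elaborate. Here's a minimal sketch using the stock Test::Harness; the checkout path and the use of svn are assumptions, so substitute your own repository layout and VCS:

    # smoke.pl - refresh the shared test suite and run everything in t/
    # (the path and the choice of svn are assumptions)
    use strict;
    use warnings;
    use Test::Harness qw(runtests);

    my $checkout = '/home/smoke/test-suite';

    system( 'svn', 'update', $checkout ) == 0
        or die "svn update failed: $?";

    # runtests() prints a pass/fail summary and dies if anything fails;
    # capture or mail the output so contributors can see their results
    runtests( glob("$checkout/t/*.t") );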
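
On the Perl::Critic suggestion, the command-line front end (perlcritic t/) may be all you need. If you want it folded into an automated run, a small script works too; the severity level and the t/ directory are assumptions:

    # critique-tests.pl - flag contributed test files whose style
    # needs cleanup, without rejecting the contribution itself
    use strict;
    use warnings;
    use File::Find;
    use Perl::Critic;

    my $critic = Perl::Critic->new( -severity => 3 );

    find(
        sub {
            return unless /\.t\z/;
            my @violations = $critic->critique($File::Find::name);
            return unless @violations;
            print "$File::Find::name\n";
            print "  $_" for @violations;   # violations stringify with line/column
        },
        't',
    );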

I hope this sparks some useful thinking. Best of luck.

-xdg

Code written by xdg and posted on PerlMonks is public domain. It is provided as is with no warranties, express or implied, of any kind. Posted code may not have been tested. Use of posted code is at your own risk.
