PerlMonks  

Re: Testing methodology, best practices and a pig in a hut.

by goibhniu (Hermit)
on Feb 27, 2008 at 21:31 UTC ( [id://670762] )


in reply to Testing methodology, best practices and a pig in a hut.

Capability Maturity

I once worked with a company trying to achieve CMM level 3. Most at the company (developers and sysadmins alike) hated it. There was a feature in their process that I very much appreciated and have tried to keep in mind at other places I've worked.

They had a person whose role was titled "SQA". This person was responsible for reviews of every project at every project phase transition (already too much process for most people). A phase transition, e.g., would be to get your project out of "Requirements" into "Design", or out of "Development" into "Testing". The thing was that at the beginning of the project, the project sponsor / project manager / tech lead would negotiate with SQA as to what deliverables would be part of the project.

They had an exhaustive list of standard deliverables (and a library of deliverable templates to get started on), but they also had specific lists based on project size. The base list for "small" projects was, well, smaller than the base list for "large" projects.

If you were a developer and wanted to sponsor a project on the order of less than a month of man-hours, you could print out the "Small Projects Deliverables Checklist", take it to SQA and negotiate which deliverables you'd do in which phases of the project. The "Small" list was less than a page. And if that was too much, you could negotiate with SQA, put check-marks next to the ones you were going to do and, if he agreed, you and he would sign it, and that became what you'd do. Then, to get your project from one phase to the next, you'd show you'd done those. For "Small" projects, even some phase transition reviews were not required.

If you were on a "large" project, the list of deliverables might be ten pages or so, with various levels of more detailed requirements, design and functional specs, test plans, and even meta-project documentation like team communications plans and Gantt charts. Still, like the small projects, at the beginning of the project the project sponsor or manager (or both) and SQA would put check-marks next to the ones they would commit to do, and that became the contract for all the future phase transition reviews for the project.

People still hated all that process for all the heft that it (and its waterfall model) implied. Also, the SQA guy got the reputation for being a hard-ass, as you might expect. Still, the cool thing that I took away was the flexibility to tailor the process to the correct size for the project. Most of the naysayers in the organization conveniently ignored that part of the process in all their bad-mouthing.

Why Software Quality Assurance Practices Become Evil!

I liked the system proposed by Gregory Pope in the paper BrowserUK pointed us to. It reminds me of the SQA checklist I saw before, but adds a layer of sophistication in that it assesses the process based on risk, not just size (perhaps a "small" project might really threaten life and property and warrant a formal test plan, etc.).

Test::*

As to Test::*, I don't have enough experience, good or bad, to pass judgement. It did seem to me that "Does anybody write tests first?" attracted a lot of cargo cult responses. There was nothing in particular to suggest that amarquis was thinking of a high-risk or low-risk project, but he did seem to be talking about small projects growing organically. In that case, the Test::* options might be overkill, but I would still be talking to the imaginary SQA in my brain to negotiate how I would satisfy the principle of "preventing, detecting, and removing defects from the software system".
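For what it's worth, the low end of the Test::* spectrum is pretty light-weight; a minimal Test::More script is only a few lines. Here's a sketch, where add() is just a hypothetical stand-in for whatever function a small project grows:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Test::More tests => 2;    # declare the plan up front

# Hypothetical function under test -- stands in for real project code.
sub add { my ($x, $y) = @_; return $x + $y }

is( add(2, 2),  4, 'add() sums two positive integers' );
is( add(-1, 1), 0, 'add() handles a negative operand' );
```

Run it with prove(1) and you get TAP output; whether even that much ceremony is worth it for a throwaway script is exactly the negotiation with the imaginary SQA.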

The last question I'd ask, however, is how many times we re-ask the same question. Those who grab Test::* instinctively save some time by using tools they already know. BrowserUK's response is that they may be spending more than they're saving, depending on the situation. The company I worked for had a cool mechanism defined in their process that helped minimize the upfront analysis so that project teams could jump into the problem without spending a lot of time on the meta-problem. I admit they didn't do everything perfectly and they had a long way to go, but I haven't worked for a company since then that was as intentional about pushing up their "Capability Maturity".


#my sig used to say 'I humbly seek wisdom. '. Now it says:
use strict;
use warnings;
I humbly seek wisdom.
