Between "Dealing with the QA guy..." and pg's response, it got me thinking - and it was complete agreement with pg. The QA guy is always right. Even the rare complete idiot QA guy is still pretty much right, even when s/he's wrong.

The QA guy's job pretty much is:

  1. Start with the spec of the program.
  2. Develop a test plan. (Ideally, the test plan is what drives the spec, so the developer has already done this - but even in those ideal scenarios, a little bit of black-box testing can still reveal holes.)
  3. Run the tests.
  4. Get everything to pass.
I'm not entirely clear where, in all of this, the QA guy can go wrong. The spec isn't his. The test plan is an interpretation of the spec - if the test plan is wrong, most likely the spec wasn't clear, and we're back to the spec not being his. Running the tests - if a test the QA guy throws at your code isn't what the test plan says, we call it "ad hoc testing" and it's still right. Finally, if any test, whether planned or "ad hoc", fails, well, again, the QA guy can't be blamed for that, can he? Not his code.
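
To make that concrete, here is a minimal sketch of steps 2 through 4 in Perl with Test::More. The Widget module, its price() routine, and the spec behind them are all invented for illustration - the point is that the planned tests fall straight out of the spec, and an "ad hoc" one sits comfortably alongside them:

  use strict;
  use warnings;
  use Test::More tests => 3;

  # Hypothetical module under test: suppose the spec says price($qty)
  # charges 5 per widget, with a bulk rate of 4 once you order 10 or more.
  use Widget;

  # Planned tests - these fall straight out of the spec.
  is( Widget::price(1),  5, 'single widget costs 5'    );
  is( Widget::price(10), 4, 'bulk rate kicks in at 10' );

  # "Ad hoc" test - the plan says nothing about a quantity of 0,
  # but a curious tester tries it anyway and expects a clean refusal.
  ok( !eval { Widget::price(0); 1 }, 'quantity of zero is rejected' );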

Now, let's say he does something that he's not supposed to do. Is that his fault? Well, if he gets a reasonable error message, he just tested your boundary conditions. If he doesn't, then you're not handling your boundary conditions properly, and you need to address that. Ideally, these are part of the test plan. If not, they probably weren't in the spec anyway.
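
As a rough illustration of the developer's side of that (the routine, its name, and the 0-to-10 range are all made up), handling the boundary in the code itself is what turns "he did something he's not supposed to do" into a non-event:

  use strict;
  use warnings;

  # Hypothetical input handler. The point is that an out-of-bounds value
  # gets a usable message, not a crash or a silent shrug.
  sub set_volume {
      my ($level) = @_;
      die "volume must be a whole number\n"
          unless defined $level && $level =~ /^\d+$/;
      die "volume must be between 0 and 10, not $level\n"
          if $level > 10;
      return $level;
  }

  # The QA guy "doing something he's not supposed to do" becomes
  # just another boundary test with a readable answer.
  eval { set_volume(11) };
  print $@;    # volume must be between 0 and 10, not 11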

If the user interface leads him down a forbidden path, again, some ad hoc testing may find an unaddressed boundary condition.

At the very least, the QA guy is finding bugs before the customer/end-user. And that is always good. Customers/end-users aren't always the most intelligent, so if we can help them around the dumb problems by ensuring we get usable messages, that's a plus, too. Even when the QA guy is "wrong", he's still right - in helping to clean up the messages!

From my own most-recent experience going through a test cycle (which we're about to get into again within the next week or so), I have two other observations. First is that I love new hires. And co-op/IIP students. Basically, non-indoctrinated people. People who don't know what can't be done. Not for writing the test plan, but for executing it. When something doesn't look right, they don't know it isn't supposed to work, and will dutifully open a defect against it. I love it. They try some of the weirdest things - few of our customers will be that weird. So if these guys don't catch a bug, no one will. (Yes, we need some really advanced people testing, too, to catch the bugs that only our really advanced customers will find.)

Second is the principle of opening defects. We love it. The rule is that if you're not sure, open a defect and let the developer figure it out. Developers who take it out on the QA folks get a stern reproach from their team lead, as it was the team leads who came up with the policy. I'd rather get a questionable defect than no defect. It allows us to prioritise problems, even if that means we simply have to document a limitation (last resort). If you find something that someone else has already found, fine - one of the reports gets returned as a duplicate of the other. That's not a big deal. It's better than neither person opening a defect and shipping with the problem. (Well, we ship with the odd problem anyway, but at least an informed bad decision is better than an uninformed bad decision.)

Re: Dealing with the QA guy ... (no, really)
by pg (Canon) on Sep 27, 2005 at 04:43 UTC

    So true.

    Just to add a little bit, sort of coming in sideways. During the last half year, I ran into the same issue several times. The issues are with the requirements. According to the procedure we have, once the requirements are documented, we don't go back to them until user acceptance testing. When we do unit testing, we test against the program spec, not the requirements; when we do system testing, we test against the design, not the requirements. Several times we got bad requirements, and then there came the "good" design based on the "bad" requirements, as well as the "good" program spec based on the "good" design (which is based on the bad requirements). Then you do the coding and unit testing, which go well, as everything meets the spec. System testing is also fine, as everything meets the design.

    All that effort is wasted... when it comes to user testing, users don't really test against what is documented; they test against their interpretation of the requirements (or, to be more precise, the document didn't really express the true user requirements. When the IT side and the user side agreed upon the document, they thought that they understood the words in the same way, but they didn't.)

    Human communication... the single problem that we can never get rid of...

      According to the procedure we have, once the requirements are documented, we don't go back to them until user acceptance testing.

      I believe this is one of the potential stumbling blocks in software development that agile and iterative methodologies attempt to address. By keeping customers close to the developers, developers can short-cut the wasted-effort cycle when requirements are vague. By developing in short timeboxes, the amount of drift to user acceptance testing is minimized.

      The Extreme Perl website expresses the problem reasonably well. Heavy, plan-driven methodologies are optimizing to minimize implementation risk, whereas agile methodologies are optimizing to minimize requirements risk -- which many would agree from experience is often both more likely and of higher overall impact.

      -xdg

      Code written by xdg and posted on PerlMonks is public domain. It is provided as is with no warranties, express or implied, of any kind. Posted code may not have been tested. Use of posted code is at your own risk.

      Hi All,

      I, as a representative of the QA/testing community, would agree with the above discussion. I am not a tester by profession, but by virtue.

      As mentioned above, I do ad-hoc testing, and most of the time it has been successful in producing an error, crash, or hang.

      Once my manager was so impressed that he asked me to write cases that go beyond what the spec or test plan calls for.

      Well, coming to the case of what's happening with Anonymous Monk and his QA guy... I do agree there are some morons in every profession...

      Well, in fact, at my previous workplace I, as a tester, challenged the IIT guys that I would find a crash before the release - and so I did, and the quality improved as a result.

      If you want to tame the QA guy, write (or learn) "how to write solid code", do some good unit testing, and ensure he doesn't have much left to find.

      Moreover, since you've jotted your problem down here, write a Perl script which will test your code ;-) (see the sketch below). Regards

      Prad
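
      A crude sketch of that "script which will test your code" idea - the target program name here is invented, and all it checks is the exit status, but it captures the keyboard-mashing spirit:

        #!/usr/bin/perl
        use strict;
        use warnings;

        # Feed a target program 100 lines of random printable junk and
        # report any run that exits with a non-zero status. The target
        # name is made up - point it at whatever you want to abuse.
        my $target = './my_filter.pl';
        $SIG{PIPE} = 'IGNORE';   # a crashing target shouldn't kill us too

        for my $run (1 .. 100) {
            my $junk = join '', map { chr( 32 + int rand 95 ) } 1 .. 1 + int rand 200;
            open my $pipe, '|-', $target or die "cannot run $target: $!";
            print {$pipe} $junk, "\n";
            close $pipe;
            warn "run $run: exit status ", $? >> 8, "\n" if $?;
        }
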
Re: Dealing with the QA guy ... (no, really)
by chester (Hermit) on Sep 27, 2005 at 13:22 UTC
    I'd hesitate to say QA is always right. There's the potential that the tester has no idea what he's doing, and is testing for the right thing, but testing in the wrong way. Or is making outright factual errors. I'm thinking "Bug: 2+2 should not = 4" or "Bug: Pressed Quit button and program terminated" sorts of things. Then again, in that case he may be representative of the typical user.

    (I remember sitting in high school almost a decade ago, learning C. Someone finished a typical "Prompt for input until valid input is received, then do something with it" exercise. I asked the writer if I could test his program for bugs for him. He agreed. I merrily proceeded to mash the keyboard like I was trying to produce a Beethoven concerto. After a series of loud beeps, the program went into an infinite loop. "Bug there, fix it", I advised. But again, probably a good representative of the typical user.)
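
    (For what it's worth, a Perl rendering of that exercise that doesn't spin forever on garbage input might look like the sketch below - the 1-to-100 range is invented:)

      use strict;
      use warnings;

      # Prompt until we get a whole number from 1 to 100, but give up
      # cleanly on end-of-file instead of looping forever on garbage.
      my $value;
      while ( !defined $value ) {
          print "Enter a number between 1 and 100: ";
          defined( my $line = <STDIN> ) or last;   # EOF - stop asking
          chomp $line;
          if ( $line =~ /^\d+$/ && $line >= 1 && $line <= 100 ) {
              $value = $line;
          }
          else {
              print "That wasn't a number between 1 and 100.\n";
          }
      }
      die "no valid input received\n" unless defined $value;
      print "Thanks, you entered $value.\n";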

      I merrily proceeded to mash the keyboard like I was trying to produce a Beethoven concerto. After a series of loud beeps, the program went into an infinite loop. "Bug there, fix it", I advised. But again, probably a good representative of the typical user.)

      Certainly representative of the typical user who has a cat. ;-)

      I would also agree that they are definitely not always right. In particular, specs almost never detail every interaction, and some of it is assumed by the type of environment (e.g., web pages behave a certain way, etc.).

      When the QA tester is ignorant of the assumptions, when they take liberties in how the unspecified parts of the system can behave, or when they don't understand the spec (which can be fairly technical) then they often do make mistakes in their reports.

      -- More people are killed every year by pigs than by sharks, which shows you how good we are at evaluating risk. -- Bruce Schneier

        I wasn't sure if I was going to respond, but your sig made it mandatory.

        First, the original topic. If your system doesn't handle boundary (or out-of-bounds) conditions properly, why don't you want to know that? You mention web pages, which makes it even more important. If your CGI code can't handle these conditions, you may be subject to a DoS attack or other hack that may compromise your data (either exposing it or destroying it - either one is bad). Give me that "not right" QA tester any day of the week over one whose built-in implicit assumptions prevent them from bothering to test these scenarios!
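
        (As a hedged sketch of what "handling it" can look like for CGI code - the parameter name and the 0-to-9999 range are invented - rejecting out-of-bounds input up front is cheap insurance:)

          use strict;
          use warnings;
          use CGI;

          # Hypothetical CGI handler: refuse out-of-bounds input up front
          # with a clear response instead of passing it along untrusted.
          my $q   = CGI->new;
          my $qty = $q->param('quantity');

          unless ( defined $qty && $qty =~ /^\d{1,4}$/ ) {
              print $q->header( -status => '400 Bad Request', -type => 'text/plain' );
              print "quantity must be a number between 0 and 9999\n";
              exit;
          }

          # ...carry on, now that we hold a value we can actually trust...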

        Second, the sig. I hope that was something Bruce said in jest, although according to his own site, it doesn't seem so. Most of his points seem good, but you've picked out the least sound of them. It's sort of like saying that this year, more WinXP machines will crash without warning than Win3.1 machines. That's not an indicator of risk; that's a statement of exposure.

      ...I merrily proceeded to mash the keyboard...

      Hey, I used to get paid for doing that...back in my QA days testing firmware, I decided to mash the keyboard, and found that if there were too many "key-down" signals, it caused a buffer overflow and hung the terminal.

Re: Dealing with the QA guy ... (no, really)
by aufflick (Deacon) on Sep 29, 2005 at 12:45 UTC
    One thing I would differ on is whether QA is responsible for detecting duplicates or not. Sure, some duplicates are not obvious to someone without knowledge of the code, but it is just bad QA form (IMO) to submit multiple defects that match nearly word for word. (In fact, I had one case recently where two members of the same QA team produced two defect reports a few days apart that were exactly word for word!)
      Finally, if any test, whether planned or "ad hoc", fails, well, again, the QA guy can't be blamed for that, can he? Not his code.

      The biggest problem I see (and experience) is that blame part. Why is blame being placed at all? The developer isn't good at writing specs? Help them get better! The developer wrote a few bugs? DUH! Just try to do the same thing with no bugs... and good luck to ya! I don't have a problem with QA identifying defects in my code; just don't approach it with the attitude that I suck at what I do. 'Cause I don't (most of the time).

      Then, of course, there's the fact that some developers create applications that are viewed on different platforms (think web development). There are certain things that, without a significant amount of effort for an insignificant difference, just aren't worth even talking about. I ran into one of these the other day; QA submitted the bug, God bless him, and I have no choice but to relegate it to the black hole that is the "known issues" list that will never EVER be dealt with, because it's just more effort than it's worth.

      Communication is, as always, the key. Developers shouldn't disdain QA, and won't, as long as they're not looked at as those sucky humans who can't produce bug-free code.