http://qs321.pair.com?node_id=281395

I've recently begun work on a purely graphical module, actually an extension to the SDL::OpenGL module to support nVidia's Cg shader language. As this is the first big XS project I've done in a good while, I'm trying to be even more stringent than usual when it comes to testing it.

The issue comes from the fact that the only tests I can run unattended are simple 'automatic' things, whereas I'd quite like to test that the image on screen looks right ('Is the teapot green?', 'Can you see the chessboard reflected in the car's body?').

If this weren't bad enough, a lot of the functionality relies on having a valid OpenGL rendering window before it can even think of working, which means that installing without first loading the GUI is going to be awkward. In essence, my module cannot properly initialise itself without this kind of environment being set up. Admittedly the module itself is useless without this, so I'm not quite so bothered about that; it only hurts when installing it remotely.

I'm wondering what people's opinions are with regard to the testing of modules such as this (and GUI widget modules, keyboard input modules, etc.). Is there a variant of the standard testing methodology for such modules?

Should I build a complete test system and demand the user sit there whilst I show them some nice pictures to test functionality? Maybe a build system that can tell when it's running non-interactively and so skips the more advanced tests, but still runs its non-interactive graphical tests and so still needs the GUI running? Is it really acceptable to allow someone to finalise the install based on a headless connection's limited set of tests?
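One common way to sketch the "skip when non-interactive" idea is with Test::More's SKIP blocks, gated on whether a display is available. The have_display() check below is an assumption for illustration (it only looks at $ENV{DISPLAY}); a real build would probe the rendering context properly.

```perl
use strict;
use warnings;
use Test::More tests => 1;

# Illustrative check: treat an empty or missing DISPLAY as "headless".
# This is a stand-in, not part of SDL::OpenGL or any real module.
sub have_display {
    return defined $ENV{DISPLAY} && length $ENV{DISPLAY};
}

SKIP: {
    skip 'No X display; skipping graphical tests', 1
        unless have_display();

    # ... create the OpenGL window and run rendering tests here ...
    pass('rendering context created');
}
```

With this shape, a headless install still runs (and reports) the non-graphical tests, while the GUI-dependent ones are cleanly marked as skipped rather than failing.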

Obviously I will have a full test suite during development anyway, to detect errors slipping into the code as it's tortuously refactored and extended.

What level of testing is absolutely required to be run for a module to be installed? How much inconvenience during installation can be traded off for this safety?


Replies are listed 'Best First'.
Re: How to test Interactive/Graphical Modules
by halley (Prior) on Aug 06, 2003 at 15:35 UTC
    When it comes to graphics "smoke" tests, one common approach is to capture the currently rendered image and then compare it bitwise or numerically to a human-sanctioned example. Sometimes two or four pixels are all you need to test, while other times you need a major chunk. Often, thanks to vendor/hardware/implementation issues, you should allow certain tolerances, such as RGB within 5 points of expected, or 99% of pixels are as expected.
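The tolerance idea above can be sketched in a few lines of Perl. Here each pixel is a plain [R, G, B] arrayref; the function name, the per-channel tolerance, and the minimum matching fraction are all illustrative choices, not an established API.

```perl
use strict;
use warnings;

# Compare two lists of [R, G, B] pixels. A pixel "matches" if every
# channel is within $tol points of the expected value; the images pass
# if at least $min_frac of the pixels match (e.g. 5 points, 99%).
sub pixels_match {
    my ( $got, $expected, $tol, $min_frac ) = @_;
    my $ok = 0;
    for my $i ( 0 .. $#$expected ) {
        my $close = 1;
        for my $c ( 0 .. 2 ) {
            $close = 0
                if abs( $got->[$i][$c] - $expected->[$i][$c] ) > $tol;
        }
        $ok++ if $close;
    }
    return ( $ok / @$expected ) >= $min_frac;
}
```

In practice you'd pull the pixels from a captured framebuffer (via PerlMagick, Imager, or glReadPixels) rather than build them by hand.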

    If you design your tests carefully, you can have a "training" and a "smoking" mode. If there are no saved comparison results, it needs to "train" and save the current results as good. If there are comparison results already packaged with the tests, then the current code output needs to be checked against them in order to pass the smoke test. Testing isn't debugging: a third mode for debugging should be the only mode that requires user intervention.
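The train/smoke split can be sketched as a single routine: no saved reference means "train" (save the current output as good); a saved reference means "smoke" (compare against it). The file handling below is a minimal assumption-laden stand-in for however the real test suite would capture and store rendered output.

```perl
use strict;
use warnings;

# Train-or-smoke sketch: $current is the captured output (here just a
# string of bytes), $ref_file is where the blessed reference lives.
sub smoke_or_train {
    my ( $current, $ref_file ) = @_;

    # No reference yet: train, saving the current output as good.
    if ( !-e $ref_file ) {
        open my $fh, '>', $ref_file or die "train: $!";
        print {$fh} $current;
        close $fh;
        return 'trained';
    }

    # Reference exists: smoke, comparing current output against it.
    open my $fh, '<', $ref_file or die "smoke: $!";
    local $/;
    my $expected = <$fh>;
    close $fh;
    return $current eq $expected ? 'pass' : 'fail';
}
```

A byte-for-byte comparison like this is the strictest case; in a real graphics suite you would substitute a tolerance-aware comparison for the `eq`.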

    Unfortunately, this means you probably need to write a comparator that works well for you. I hope this is a call-to-action for you to make a nice PerlMagick or Image::* solution into a new Test::Images module with a looks_ok() for everyone. Take two images, subtract one from the other, and look for nonzero values as being unexpected differences.
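A hypothetical looks_ok() along those lines, built on Test::More, might start like this. The images here are flat arrayrefs of channel values purely for illustration; a real Test::Images would wrap PerlMagick or Imager objects instead, and no such module is assumed to exist.

```perl
use strict;
use warnings;
use Test::More;

# Hypothetical looks_ok(): subtract the two images element-by-element
# and treat any nonzero difference as an unexpected mismatch.
sub looks_ok {
    my ( $got, $expected, $name ) = @_;
    my $nonzero = grep { $got->[$_] - $expected->[$_] } 0 .. $#$expected;
    return ok( $nonzero == 0, $name );
}
```

Adding a tolerance parameter (as in the smoke-test discussion above) would be the obvious next step before releasing anything like this.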

    --
    [ e d @ h a l l e y . c c ]