I used something similar to what's described above in HTTP::WebTest's tests. I had to check whether text output matched given samples, and it too was a PITA to update the tests each time I changed something. The solution was to store these text samples in files and to add a special test mode in which the samples are updated from the test results.
Take a look at my module's tests, and particularly at HTTP::WebTest::SelfTest (all on CPAN). Here is the relevant bit from HTTP::WebTest::SelfTest. The subroutine compare_output acts more or less like Test::More's is(), with the difference that it works only on text and that the expected result is stored in a file.
sub compare_output {
    my %param = @_;

    my $check_file = $param{check_file};
    my $output2    = ${$param{output_ref}};       # actual test output
    my $output1    = read_file($check_file, 1);   # expected output from file

    _print_diff($output1, $output2);

    _ok(($output1 eq $output2) or defined $ENV{TEST_FIX});

    if(defined $ENV{TEST_FIX} and $output1 ne $output2) {
        # special mode for writing test report output files
        write_file($check_file, $output2);
    }
}
# ok() compatible with both Test and Test::Builder
sub _ok {
    # if Test is already loaded, use its ok()
    if(Test->can('ok')) {
        @_ = $_[0];              # Test::ok takes a single boolean
        goto \&Test::ok;
    } else {
        require Test::Builder;
        local $Test::Builder::Level = $Test::Builder::Level + 1;
        Test::Builder->new->ok(@_);
    }
}
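To make the pattern concrete, here is a minimal, self-contained sketch of the same golden-file idea. The helpers _slurp and _spew are hypothetical stand-ins for SelfTest's read_file/write_file, and check_against_golden plays the role of compare_output without the diff printing and Test plumbing:

```perl
use strict;
use warnings;
use File::Temp qw(tempdir);

# Hypothetical stand-ins for HTTP::WebTest::SelfTest's file helpers.
sub _slurp {
    my ($file) = @_;
    return '' unless -e $file;    # no golden file yet => empty expected text
    open my $fh, '<', $file or die "can't read $file: $!";
    local $/;                     # slurp the whole file
    return <$fh>;
}

sub _spew {
    my ($file, $data) = @_;
    open my $fh, '>', $file or die "can't write $file: $!";
    print {$fh} $data;
}

# Compare $got against the golden file; in TEST_FIX mode, rewrite the
# golden file from the actual output instead of failing.
sub check_against_golden {
    my ($check_file, $got) = @_;
    my $expected = _slurp($check_file);
    my $ok = ($expected eq $got) || defined $ENV{TEST_FIX};
    if (defined $ENV{TEST_FIX} and $expected ne $got) {
        _spew($check_file, $got);    # self-update mode
    }
    return $ok;
}

# Demo: the first run (TEST_FIX set) records the sample,
# later runs compare against it.
my $dir  = tempdir(CLEANUP => 1);
my $file = "$dir/report.txt";
{
    local $ENV{TEST_FIX} = 1;
    check_against_golden($file, "line 1\nline 2\n");   # records golden file
}
print check_against_golden($file, "line 1\nline 2\n")  ? "match\n" : "MISMATCH\n";
print check_against_golden($file, "line 1\nCHANGED\n") ? "match\n" : "MISMATCH\n";
```

Running a suite built this way in self-update mode is then just a matter of setting the environment variable, e.g. `TEST_FIX=1 make test`.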
So my workflow was: change something, run the tests, check that they fail as I expect (by inspecting the diffs), then run the tests again in self-update mode.