http://qs321.pair.com?node_id=587355

skazat has asked for the wisdom of the Perl Monks concerning the following question:

Heya,

Does anyone know of a swanky web-based way to run perl test files?

The situation I'm facing is this:

* I'd like to deploy the tests with a web-based application, to make sure nothing fails when I do an actual install of the app. Installation mainly means uploading some files via FTP, setting some config options, and changing the permissions of the main script. Pretty easy stuff.

* My development "community" isn't really that active in checking out dev. copies of the program, running the included tests, and reporting back, so this usually falls to me. I personally don't have a million different testing platforms, so it becomes difficult to mark something as "stable", even if it passes tests on my few environments. I also don't have much incentive to purchase or cobble together more environments, because of laziness and a lack of monetary resources.

* Sometimes I don't have access to a CLI: el-cheapo web hosts sometimes don't give out ssh access, so I thought having a web-based way would save me some trouble, since the app is mostly a web-based app anyway. I was looking for something pretty polished, since that would be an incentive to bundle the test suite (a directory of .t files) with the app itself and give any user a friendly way to run the tests and report back.

Basically, hacking together this:

#!/usr/bin/perl -w
use strict;
use CGI qw(:standard);
use CGI::Carp qw(fatalsToBrowser);   # fatalsToBrowser lives in CGI::Carp, not CGI

$|++;

my $Test_Files = './t';

print header();
print "<h1>Starting...</h1>";

my $test_files = get_test_files();

foreach my $file (@$test_files) {
    print "<h2>$file</h2>";
    print "<hr><pre>";
    # print `perl $Test_Files/$file`;
    print `prove -r -v $Test_Files/$file`;
    print "</pre>";
}
print "<h1>Done!</h1>";

sub get_test_files {
    my @tests;
    if (opendir(TESTDIR, $Test_Files)) {
        while (defined(my $tf = readdir TESTDIR)) {
            next if $tf =~ /^\.\.?$/;
            $tf =~ s(^.*/)();
            push(@tests, $tf) if $tf =~ m{\.t$};
        }
        closedir(TESTDIR) or warn "couldn't close: $Test_Files";
    }
    return \@tests;
}

Does pretty much what I want, albeit in a not-so-pretty way, but that's easily polished up. Some problems I can see happening: browser timeouts when the script takes a long time to run all the tests, a screen of HTML that seems to never end, misc. problems that aren't really reported in the browser, environment differences between running a script via the CLI and via the web server, etc.
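One direction I've been toying with, as a rough sketch only: instead of shelling out to prove and dumping its raw output, run the .t files in-process through TAP::Harness (core since Perl 5.10.1) and capture its output into a string, so the CGI script could HTML-escape it and print a compact pass/fail summary rather than an endless wall of TAP. The throwaway temp .t file here is just to keep the sketch self-contained; a real version would glob the ./t directory instead.

```perl
#!/usr/bin/perl
use strict;
use warnings;
use File::Temp qw(tempdir);
use TAP::Harness;

# Write one throwaway .t file so this sketch runs on its own.
my $dir = tempdir( CLEANUP => 1 );
open my $fh, '>', "$dir/basic.t" or die "can't write test: $!";
print $fh "use Test::More tests => 1;\nok(1, 'sanity');\n";
close $fh;

# Capture the harness output in a string instead of letting it hit
# STDOUT directly, so a CGI wrapper could escape it and wrap it in HTML.
my $tap_output = '';
open my $out, '>', \$tap_output or die $!;
my $harness   = TAP::Harness->new({ verbosity => 0, stdout => $out });
my $aggregate = $harness->runtests("$dir/basic.t");
close $out;

# The aggregator gives you counts to render as a summary line.
printf "total: %d, all passed: %s\n",
    $aggregate->total, ( $aggregate->all_passed ? 'yes' : 'no' );
```

Because the aggregator object carries the pass/fail counts, the CGI script could print the summary up front and tuck the full TAP text into a collapsed block, which would also help with the never-ending-screen-of-HTML problem.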

Has anyone thought of a more interesting solution?

Again, I just want to *run* tests in my web-browser, I don't want to run tests *as* a web browser.

 

-justin simoni
skazat me