OK, firstly, don't be nervous.
You should take great heart from the fact that you want to do testing, and that you want to get a great deal of value and improvement in your code's quality from that testing - bravo, that makes you at least twice as good a developer as most, IMO.
I do recommend you wrap your app and testing up in the standard ExtUtils::MakeMaker framework. Apart from the fact that Devel::Cover (D::C) works very well with it, you do get a lot more benefits - manifests, distribution, etc.
I do have a basic tutorial on writing a simple Makefile.PL and running tests at Perlmeme.
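For reference, a minimal Makefile.PL looks something like this (the module name and path here are hypothetical - substitute your own):

```perl
use ExtUtils::MakeMaker;

WriteMakefile(
    NAME          => 'My::Module',         # hypothetical module name
    VERSION_FROM  => 'lib/My/Module.pm',   # pulls $VERSION from the module
    PREREQ_PM     => {},                   # runtime dependencies, if any
);
```

With that in place, the usual perl Makefile.PL / make / make test cycle just works, and your .t files go in the t/ directory.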
But it doesn't go into coverage testing. So, to get D::C to report on your coverage from a 'make test' invocation, do this:
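One common way (assuming a standard MakeMaker distribution) is to load D::C into the test harness via HARNESS_PERL_SWITCHES:

```shell
perl Makefile.PL
make
HARNESS_PERL_SWITCHES=-MDevel::Cover make test
```

That runs your whole test suite with coverage collection switched on, accumulating results into the cover_db directory.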
Note that if you have more than one .t file, D::C will merge the results for you - which answers one of your questions.
After that has run, you really should then run the reporting tool in D::C called 'cover' and use your favourite browser to look at the 'coverage.html' file that 'cover' generates. I usually delete old coverage reports before each run of D::C, so my command line for a coverage test usually looks like this:
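Something along these lines (a sketch - adjust paths and options to taste):

```shell
# wipe the old coverage database, run the suite under D::C, then report
cover -delete && HARNESS_PERL_SWITCHES=-MDevel::Cover make test && cover
```

'cover -delete' clears out the cover_db directory (including old HTML reports), and the bare 'cover' at the end regenerates the report from the fresh run.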
(NB: if 'make test' reports a failure, that last 'cover' command won't run, so you may have to run it by hand or replace the last && with ||)
Which finally brings us to interpreting the numbers.
Hopefully you see the following columns in the coverage.html (in recent versions of D::C these are stmt, bran, cond, sub, pod, time, and total):
Note that the individual file reports do list the number of times each line was run, and the same number as a link in the sub column jumps to the page showing which subs have and have not been run - not sure why it does that.
The time column is relative time, as a percentage - probably not too useful, except to show where most of the testing time was spent; I wouldn't rely on it for any performance profiling or benchmarking.
Generally, it is not too hard to get 100% coverage of sub and stmt columns, and quite hard to get 100% in the branch and cond columns.
Sometimes though, no matter how hard you try, you cannot provoke a line to be executed, or a branch to be followed. This may be a sign that the unexecuted code can never be executed - for example:
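Something like this contrived sketch (get_count and the process subs are hypothetical) will show a branch that can never be taken:

```perl
my $n = get_count();       # hypothetical; assume it returns a number
if ( $n >= 0 ) {
    process($n);
}
elsif ( $n > 10 ) {        # can never be true: $n > 10 implies $n >= 0
    process_big($n);       # unreachable - coverage will report 0 runs here
}
```

No test you write can ever light up that elsif branch, and the branch coverage column will tell you so.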
Now this is exactly the kind of thing coverage testing is good at showing - some obviously wrong logic that results in some code never getting executed. When you come to the conclusion that the existing logic can never be satisfied, you need to make a decision as to whether to change or remove the logic - and of course, write some tests to prove the decision is correct.
Please keep in mind that D::C is still in beta, and it does prefer to play with a fairly recent perl, so sometimes you just have to accept that D::C is wrong. Currently I have a problem where 'make test' reports a 100% pass rate, but 'make test' under D::C has one test file die in a funny way, and hence the pass rate is < 100%. It can be quite hard to find what it is that D::C doesn't like about your code - the perl-qa mailing list can be your friend in cases like this.
I wrote a meditation on things I learned in getting some modules to have 100% coverage.
Also, the Phalanx project is trying to get the 100 most popular CPAN modules to have 100% coverage.
...it is better to be approximately right than precisely wrong. - Warren Buffett