Re: Tool to record subroutine calls & return values

by AnomalousMonk (Archbishop)
on Apr 09, 2017 at 18:31 UTC ( [id://1187529] )


in reply to Tool to record subroutine calls & return values

First thought: This sounds like an XY Problem. For any but a very small script, the torrent of parameter and return values from every subroutine call executed would be enormous and likely unusable. Typically, one is interested in the behavior of only a few critical subroutines. For this, print points or use of the Perl debugger (see perldebug and perldebtut) would usually be the tool of choice. Can you tell us more about why you want all call/return info?
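
For example, a print point for one critical sub can be as simple as this (a minimal sketch; compute_total is a hypothetical sub of interest, and Data::Dumper is core):

    use strict;
    use warnings;
    use Data::Dumper;

    sub compute_total {
        my @args = @_;
        warn 'compute_total args: ', Dumper( \@args );   # print point: arguments
        my $total = 0;
        $total += $_ for @args;
        warn "compute_total returns: $total\n";          # print point: return value
        return $total;
    }

    compute_total( 1, 2, 3 );   # logs args [ 1, 2, 3 ] and return 6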


Give a man a fish:  <%-{-{-{-<

Replies are listed 'Best First'.
Re^2: Tool to record subroutine calls & return values
by Yary (Pilgrim) on Apr 09, 2017 at 18:54 UTC
    Indeed it is an X/Y problem.

    I'm thinking about recording runs of a large legacy codebase - before and after making changes to it - as an aid to writing tests, which don't exist for this codebase. A record of which subs are actually being called, with which arguments and return values, is a good starting point: it shows what's in use, and therefore what to test.

      A record of which subs are actually being called ...

      This sounds like a job for Devel::NYTProf or another of its ilk. Other monks than I will be better able to advise on the proper module and its employment.
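
      For the record, Devel::NYTProf is usually run as perl -d:NYTProf yourscript.pl, with nytprofhtml run afterward to produce a report of which subs were called and how often. It does not capture arguments or return values, though. If you really want those for a particular sub, a hand-rolled wrapper is one option; a minimal sketch, where Legacy::frobnicate is a hypothetical stand-in:

          use strict;
          use warnings;
          use Data::Dumper;

          # Hypothetical legacy sub; substitute one you actually care about.
          package Legacy;
          sub frobnicate { my ($x) = @_; return $x * 2 }

          package main;

          # Wrap the sub so every call logs its arguments and return values.
          my $orig = \&Legacy::frobnicate;
          {
              no warnings 'redefine';
              *Legacy::frobnicate = sub {
                  warn 'frobnicate args: ', Dumper( \@_ );
                  my @ret = wantarray ? $orig->(@_) : scalar $orig->(@_);
                  warn 'frobnicate returned: ', Dumper( \@ret );
                  return wantarray ? @ret : $ret[0];
              };
          }

          Legacy::frobnicate(21);   # logs args [ 21 ] and return [ 42 ]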

      ... recording runs of a large legacy codebase - before and after making changes to it - as an aid to writing tests, which don't exist for this codebase. A record of ... which arguments and return values, is a good starting point: it shows what's in use, and therefore what to test.

      This seems a bit bass-ackwards to me. The critical starting point for testing code is understanding the code (update: and its specification). This will be a huge task for a large, unfamiliar codebase. Writing tests for such a codebase will be an even hugerer task because you have to understand all the boundary cases of the invocation of each function. (Much of the time and effort of such a project may be spent cursing the name of the original coder who paid no attention to testing.)

      And here's another problem: How do you know that the application is running properly right now? Do you have a way to evaluate an application run that tells you its output is "correct," or are you just depending on the fact that the program didn't blow up, and the company didn't go out of business, to assure you that, well, everything's probably OK? Generating a huge database of input arguments/return values for each subroutine may be useless or worse if it captures and enshrines unrecognized incorrect behavior.

      Refactoring a large application in the absence of a thorough test suite is a big job, and you're quite right to think that the first task is to create such a test suite. Unfortunately, there are no easy answers, and in particular, a database of assumed normal operation for each subroutine can be no more than an adjunct to a deep understanding of the application. Good luck.

      Update: Looking back over this post, it just seems like a very verbose way of saying "You're gonna need a bigger boat!" and perhaps isn't very helpful in consequence, which I regret. (But I still wish you luck!)


      Give a man a fish:  <%-{-{-{-<

      You do not need to know what's going in or coming out of each sub to begin writing a test suite.

      Knowing the code flow/stack trace is more than enough to begin, and really, you don't even need that.
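
      (If a stack trace is all you want, core Carp can emit one from any point of interest; a minimal sketch, where suspicious() is a hypothetical sub:)

          use Carp qw(cluck);

          sub suspicious {
              cluck 'suspicious() reached via:';   # warn with a full stack trace
              return;
          }

          suspicious();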

      Start from the very top of your legacy code base. Write tests for the very first function encountered: feed it the expected input and check the output. If there are subs called within that sub, mock them out to return what you *believe* they currently should, so that the flow carries on. Trickery may be required in the tests if your software performs side effects within its subs, but I digress. This is one of the reasons side effects can be a pain in the ass.
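
      A minimal sketch of that mocking idea, using a local glob override from core Perl (Test::MockModule wraps the same pattern more conveniently); the Legacy package here is hypothetical:

          use strict;
          use warnings;
          use Test::More;

          # Hypothetical legacy code: outer() calls inner(), which has a
          # side effect we don't want running in tests.
          package Legacy;
          sub inner { die 'expensive side effect' }
          sub outer { my ($in) = @_; return $in . '-' . inner() }

          package main;

          {
              no warnings 'redefine';
              local *Legacy::inner = sub { 42 };   # mock: canned return value
              is( Legacy::outer('x'), 'x-42',
                  'outer() behaves as believed when inner() is mocked' );
          }

          done_testing();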

      Do this for each sub in the entire process flow. You do not need to change anything at all until every single current sub is tested. Once you've tested them all, knowing what to mock and what to allow to run through, you'll have a full test suite that allows you to start a rewrite. Write the whole shebang anew, or set up a dev environment and start making modifications. Your test suite will catch any problems now, and this is where you begin expanding on the test suite you've already started.

      Most of my early Perl years were spent writing software to deal with issues exactly like this, and I've never stopped since. Writing code to help other developers develop is my favourite thing to do with Perl, and a good number of my own modules on the CPAN are geared toward development aids and test/build work.
