OceanPerl has asked for the wisdom of the Perl Monks concerning the following question:
Hello gurus,
I have three files, let's call them "main", "lib1" and "lib2".
As you may imagine, the "lib" files contain functions called by "main".
Whilst I can easily "grep" out the names of the functions from the "lib" files, what I would *really* like to do is highlight their usage in the main file: either by prefixing the matching lines in the main file with some sort of flag, or by printing a report of the line numbers where they are used in main... or both.
Unfortunately I'm at a loss as to where to start, and if I'm being honest I wouldn't even know what sort of search terms to use on Google ... ;-)
So it is therefore with much trepidation that I put myself forward to the lions at Perlmonks.....
Thanking you most kindly in advance for your most valuable contributions.
Re: Using perl for refactoring
by mr_mischief (Monsignor) on Dec 09, 2014 at 15:14 UTC
Have you looked into using pre-existing cross-reference tools?
Searching Google for "xref for PHP" I got these hits that look interesting:
I then searched for "PHP cross reference" to turn up these additional ones:
It strikes me, though, that you may want a more general tool. OpenGrok comes to mind. So does LXR.
Re: Using perl for refactoring
by GrandFather (Saint) on Dec 09, 2014 at 11:28 UTC
Well, start by parsing the .pm files for subs and building a list of them, then parse the .pl file looking for those sub names being used.
Even a simple /^\s*sub\s+(\w+)/ match to find the subs would get you 99% of the way there for most purposes. Then assemble a regex that matches the sub names you found and use it to search main.pl for places the subs are used. Easy peasy.
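A minimal Perl sketch of that two-step approach (the helper names are just illustrations, and the simple sub-matching pattern above is not a robust parser):

```perl
use strict;
use warnings;

# Step 1: collect sub names from library source text with the
# simple /^\s*sub\s+(\w+)/ match suggested above.
sub collect_subs {
    my ($src) = @_;
    my %subs;
    $subs{$1} = 1 while $src =~ /^\s*sub\s+(\w+)/mg;
    return keys %subs;
}

# Step 2: flag every line of the main source that mentions one of
# the collected names as a whole word, prefixed with its line number.
sub flag_usage {
    my ($main_src, @names) = @_;
    return unless @names;
    my $re = join '|', map quotemeta,
             sort { length $b <=> length $a } @names;   # longest name first
    my @hits;
    my $n = 0;
    for my $line (split /\n/, $main_src) {
        $n++;
        push @hits, "$n: $line" if $line =~ /\b(?:$re)\b/;
    }
    return @hits;
}
```

Slurp each lib file and feed it to collect_subs, then run flag_usage over the slurped main file and print the results.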
Perl is the programming world's equivalent of English
find "$dir" -name '*.pm' \
| xargs awk '/^[ \t]*sub[ \t]+[A-Za-z_]/ { sub(/[({].*/, "", $2); print $2 }' \
| sort -u \
| xargs -I % grep --fixed-strings --word-regexp --line-number % main.pl
Re: Using perl for refactoring
by pajout (Curate) on Dec 09, 2014 at 11:10 UTC
Hello,
Would you be more specific? For instance, could you provide a minimal example?
I can imagine you want a directed graph describing which file (module, library) uses something from which file. Or, seeing something like $instance->method(), you want to know in which module method is implemented. This second case is not generally solvable, because it may depend on input parameters and is resolved at runtime.
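That second point is easy to see in a tiny Perl example (the class names are made up): which speak() runs is decided by data, so no static cross-referencer can resolve the call site alone.

```perl
use strict;
use warnings;

package Frog;
sub speak { "ribbit" }

package Robot;
sub speak { "beep" }

package main;

# The target of ->speak() depends on a value known only at run time.
sub speaker_for {
    my ($kind) = @_;
    return bless {}, ($kind eq 'mechanical' ? 'Robot' : 'Frog');
}

print speaker_for($ARGV[0] // 'organic')->speak(), "\n";
```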
Hi Pajout
Of course. The code I'm faced with is PHP, but I think Perl is probably better suited to tasks such as this, where text parsing is involved, hence the post here.
The code I am looking at refactoring for a specific application is published open-source by Yubico on GitHub, so the relevant files are as follows:
https://raw.githubusercontent.com/Yubico/yubikey-val/master/ykval-verify.php
https://raw.githubusercontent.com/Yubico/yubikey-val/master/ykval-common.php
https://raw.githubusercontent.com/Yubico/yubikey-val/master/ykval-synclib.php
So synclib and common are the libraries, and verify is the main file.
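Assuming the PHP definitions in those files all look like "function name(", a Perl sketch of the same extract-then-flag idea might be (the helper name is illustrative; the file names follow the repository layout linked above):

```perl
use strict;
use warnings;

# php_func_names: pull function names out of PHP source text,
# assuming definitions of the form "function name(".
sub php_func_names {
    my ($src) = @_;
    my %seen;
    $seen{$1} = 1 while $src =~ /^\s*function\s+(\w+)\s*\(/mg;
    return sort keys %seen;
}

# File names follow the repository layout linked above.
my @libs = ('ykval-common.php', 'ykval-synclib.php');
my $main = 'ykval-verify.php';

my @names;
for my $lib (grep { -e } @libs) {
    open my $fh, '<', $lib or die "Can't open $lib: $!";
    push @names, php_func_names(do { local $/; <$fh> });
}

if (@names && -e $main) {
    my $re = join '|', map quotemeta, @names;
    open my $fh, '<', $main or die "Can't open $main: $!";
    while (<$fh>) {
        printf "%s:%d: %s", $main, $., $_ if /\b(?:$re)\s*\(/;
    }
}
```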
...thanks. And what output would you expect from that Perl program?
Re: Using perl for refactoring
by sundialsvc4 (Abbot) on Dec 09, 2014 at 17:04 UTC
Well, Perl gives you several different ways to “split a program into multiple modules,” and (especially when dealing with older, legacy applications) it can be hard to “usefully generalize” such things. Instead of trying to go for a cross-referencing tool or any sort of “[automated] refactoring” tool ... instead of chasing seriously after what’s likely to be a white rabbit ... I would simply look at each of the subs that are known to be provided by, say, lib1, and use “good ol’ grep -iw” to chase down occurrences, one at a time. Look at the use lib1 statements ... do they specify qw(subroutine names)? Is the lib1::subroutine nomenclature being used?
And, finally but most-importantly, “where and how does it hurt?” If you decided that you only had time and resources to commit to changing 20% of the occurrences, which 20% would it be, and why?
“Refactoring” kinda makes me nervous, like the billboards that I see advertising “iLipo-suction.” The billboards always feature a curvaceous model who obviously never required liposuction in her life, and promise that the procedure can be done “without downtime.” Well, I know of someone who died from liposuction: merely putting an “i-” in front of it does not make it risk-free, nor painless. And, likewise, I’ve seen some pretty-stable applications which became extremely unstable as a result of “refactoring” that sought to address an “issue” that, in the grand scheme, really didn’t exist.
I would suggest approaching the task very deliberately, and manually, bearing in mind that the task must include a comprehensive test-suite to prove both present-state and future-state. Don’t attempt to refactor 100%. Seek the 20% that can be shown to make 80% of the business difference with 5% of the risk. Identify those pieces, and what you intend to do with them, and how you intend to prove the work, first.