yulivee07 has asked for the wisdom of the Perl Monks concerning the following question:
I inherited a really large legacy codebase which I am meant to maintain for the future. I have been working with this codebase for a while now and noticed, while scrolling through the code, that there is a lot of code duplication (especially of the same subroutines) in many places. I would like to make modules for all that duplicate functionality, but first I am looking for a way to find the duplication.
I have the Perl source files available for analysis. My codebase consists of ~60 daemons with 3000-6000 lines of code each, so diffing all daemons against each other isn't really a practical approach. I was told that B::Xref may be a way to identify duplicate subroutines. Do you have additional suggestions for what I can do in a situation like this?
Kind regards,
yulivee
Re: Searching for duplication in legacy code (refactoring strategy)
by LanX (Sage) on Nov 23, 2016 at 11:19 UTC
It depends on the nature of the duplication. Do equally named subs have identical code? Cut-and-paste programming involves mutations.

General approach for refactoring

a) identify all sub definitions in a file
b) identify their dependencies
c) normalize the sub code (formatting can differ)
d) diff potentially equal subs to measure similarity (what "potentially" means depends on the quality of your code; the copied code has probably mutated over time)
e) try to visualize the dependencies to decide where best to start, e.g. with graphviz or a tree structure
f) create a test suite to assure refactoring quality
g) start refactoring incrementally, while constantly testing the outcome (depending on the quality of your tests, you might first start with only one daemon in production)
h) care about a fall-back scenario (especially: use version control!)

Conclusion

Sorry, very general tips, because it really depends on the structure of your legacy code. Probably grep is already enough... (Think about it: you might also need "nested refactoring", because the new modules may still have duplicated code and need to use other modules, and so on.)
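Steps a), c) and d) above can be sketched with nothing but core Perl (Digest::MD5 ships with perl). This is a deliberately naive illustration, not the robust tooling discussed in this thread: the regex only matches subs whose closing brace starts a line, and the comment stripping will also eat `#` characters inside strings.

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Digest::MD5 qw(md5_hex);

# Naive sketch: extract sub definitions with a regex (PPI would be far
# more robust), strip end-of-line comments, collapse whitespace, and
# fingerprint the result. Subs sharing a fingerprint are duplicate
# candidates worth a real diff.

sub fingerprint {
    my ($body) = @_;
    ( my $norm = $body ) =~ s/#[^\n]*//g;    # drop comments (naive)
    $norm =~ s/\s+/ /g;                      # normalize formatting differences
    return md5_hex($norm);
}

my %candidates;
for my $file (@ARGV) {
    open my $fh, '<', $file or die "$file: $!";
    my $src = do { local $/; <$fh> };
    # match "sub name { ... }" where the closing brace starts a line
    while ( $src =~ /^sub\s+(\w+)\s*(\{.*?^\})/msg ) {
        push @{ $candidates{ fingerprint($2) } }, "$1 ($file)";
    }
}
for my $group ( grep { @$_ > 1 } values %candidates ) {
    print "possible duplicates: @$group\n";
}
```

Run it as `perl finddups.pl *.pl` over the daemons; anything it reports still needs a manual diff, since the normalization is intentionally lossy.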
Research

I did some googling yesterday after our conversation for "refactoring" and "duplication", and the term "plagiarism detection" popped up in several discussions. I couldn't find a general refactoring project for Perl, but I also didn't spend much time on it yet. I think that to cover all edge cases of a worst-case scenario, one would certainly need PPI (at least), or even a patched B::Deparse to scan the op-tree, with PadWalker to identify variable dependencies and side effects. HTH! :)
Cheers Rolf
Re: Searching for duplication in legacy code (updated)
by haukex (Archbishop) on Nov 23, 2016 at 11:26 UTC
Hi yulivee07, I've done a bit of work with PPI, and there is a chance it could be useful to you. This was an interesting question to me, so I went off and whipped something up (update: that means please consider this a beta) that finds identical subs; perhaps it's useful to you. PPI could also be used for more powerful identification of duplicated code.
Hope this helps,
Re: Searching for duplication in legacy code (updated)
by stevieb (Canon) on Nov 23, 2016 at 13:50 UTC
My Devel::Examine::Subs can help with some of this. It uses PPI behind the scenes. It can gather all subs in a file or in a whole directory, then list all subs in all of those files. It can even examine each sub and collect only the ones whose lines contain specified search patterns, print out the lines on which each sub starts and ends, and report how many lines are in each sub. It can collect and display all subs in all files in the current working directory, or gather them as objects instead, which gives you a lot more information on each one.
The main reason I wrote this software is so that I could introspect subs accurately and then, if necessary, insert code into specific subs at either a line number or a search term (yes, this distribution does that as well). You can even search for specific lines in each sub and print out the line numbers those search patterns appear on. Using the above techniques, it would be trivial to filter out which files have duplicated subs, stash all the duplicate names (along with the file names), and then, using the objects, compare the lengths of the subs as a cursory check of whether they appear to be an exact copy/paste (i.e. if the numbers of lines are the same). The synopsis in the docs explains how to get the objects within a hash keyed by sub name, which may make things easier.
Update: I forgot to mention that each subroutine object also contains the full code of the sub in $sub->code. This should help tremendously in programmatically comparing a sub in one file to its duplicate in another file.
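As a hedged illustration of the comparison step described above, independent of Devel::Examine::Subs (`compare_subs` and its data layout are invented for this sketch): given a per-file `name => code` mapping, however obtained, report sub names that occur in more than one file and whether their code matches exactly.

```perl
use strict;
use warnings;

# Group subs by name across files, then check whether the code strings
# for each shared name are byte-identical. Real use would normalize the
# code first (whitespace, comments) before comparing.
sub compare_subs {
    my (%subs_by_file) = @_;    # file => { name => code }
    my %by_name;
    while ( my ( $file, $subs ) = each %subs_by_file ) {
        push @{ $by_name{$_} }, [ $file, $subs->{$_} ] for keys %$subs;
    }
    my @report;
    for my $name ( sort keys %by_name ) {
        my @hits = @{ $by_name{$name} };
        next unless @hits > 1;
        my %codes = map { $_->[1] => 1 } @hits;
        push @report,
            [ $name,
              ( scalar( keys %codes ) == 1 ? 'identical' : 'differs' ),
              map { $_->[0] } @hits ];
    }
    return \@report;
}
```

With the hash-of-objects layout from the module's synopsis, filling `%subs_by_file` from `$sub->code` should be a few lines.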
Re: Searching for duplication in legacy code (ctags and static parsing)
by LanX (Sage) on Nov 23, 2016 at 18:18 UTC
I can't test at the moment, but I suppose that after all these years it's well tested by now. Please keep in mind, though, that this (like PPI) does static parsing (and "Only Perl can parse Perl"); approaches like B::Xref compile the code (i.e. let Perl parse Perl) before inspecting it.* See for instance How to find all the available functions in a file or methods in a module? for a list of edge cases where static parsing fails.² Again, just for completeness: from your description I suppose that static parsing is sufficient for you, but you should be aware of the limitations. HTH! :)
Cheers Rolf
*) with the drawback that compiling can already have the side effects of running code, while static parsing is "safe".
²) or Re^3: Perl not BNF-able?? with some limitations listed by adamk, who is PPI's author.
Re: Searching for duplication in legacy code
by stevieb (Canon) on Nov 23, 2016 at 21:44 UTC
Perhaps another thing that can help you sort out what is calling each sub, and from where, is to enable stack tracing in all of your subs. Normally I wouldn't go so far from the original question, but looking at my module this morning had me testing a few others, so I thought I'd throw it out there in hopes it can help in some way. I wrote Devel::Trace::Subs to do this tracing. It uses Devel::Examine::Subs in the background (in fact, I originally wrote Devel::Examine::Subs specifically to be used by this module). It is intrusive... it injects a command into every single sub within the specified files (both inserting and removing is done with a single command-line string). Here's an example where I configure every Perl file in my Mock::Sub directory to save trace information (there's only one file in this case, but I still just use the current working directory as the 'file' param). Configure all files (make a backup copy of your directory first!):

perl -MDevel::Trace::Subs=install_trace -e 'install_trace(file => ".");'

In my case, I install my distribution, but that may not be your case if your scripts just access the libraries where they sit. Here's an example script that uses the module that now has tracing capabilities:
The only parts of interest are the use Devel::Trace::Subs ... line, the $ENV{DTS_ENABLE} = 1; line which enables the tracing, and the trace_dump(); line which dumps the trace data. The Mock::Sub stuff and everything else is irrelevant; it's just an example of normal code flow using other modules. Here is the output of the trace_dump():
in: is the sub currently being executed; the rest of the info is the caller of that sub. After you're done, you can remove the tracing just as easily:

perl -MDevel::Trace::Subs=remove_trace -e 'remove_trace(file => ".");'

In the above example, there's only a single library. If the directory had several, you'd see the calls between the different modules in the proper order.
Re: Searching for duplication in legacy code
by cguevara (Vicar) on Nov 23, 2016 at 19:53 UTC
Check out http://blogs.perl.org/users/ovid/2012/12/more-on-finding-duplicate-code-in-perl.html and the earlier http://blogs.perl.org/users/ovid/2012/12/finding-duplicate-code-in-perl.html . The latest version is at Code-CutNPaste.
Re: Searching for duplication in legacy code
by fishy (Friar) on Nov 23, 2016 at 20:11 UTC
Since LanX mentioned "plagiarism detection", here's something interesting: Finding cheaters with k-mers. Have fun!
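A minimal sketch of the k-mer idea from that article, assuming plain Jaccard similarity over character shingles (the helper names `kmers` and `jaccard` are made up for this example). Unlike exact fingerprinting, this still scores subs highly when the copied code has mutated a little.

```perl
use strict;
use warnings;

# Split a normalized code string into overlapping substrings of length
# $k and return them as a set.
sub kmers {
    my ( $text, $k ) = @_;
    $text =~ s/\s+/ /g;    # normalize whitespace first
    my %set;
    $set{ substr $text, $_, $k } = 1 for 0 .. length($text) - $k;
    return \%set;
}

# Jaccard index of the two k-mer sets: 1.0 means identical shingle
# sets, values near 1.0 mean likely copy-and-paste with small edits.
sub jaccard {
    my ( $left, $right, $k ) = @_;
    my ( $sl, $sr ) = ( kmers( $left, $k ), kmers( $right, $k ) );
    my $common = grep { $sr->{$_} } keys %$sl;
    my %union  = ( %$sl, %$sr );
    return %union ? $common / scalar( keys %union ) : 0;
}
```

A threshold (say, 0.8 with k around 10) would need tuning against the actual codebase; treat these numbers as placeholders.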
Re: Searching for duplication in legacy code
by duyet (Friar) on Nov 23, 2016 at 10:20 UTC
Re: Searching for duplication in legacy code
by 1nickt (Canon) on Nov 23, 2016 at 11:12 UTC
As I am reading this, the CPAN nodelet to the right shows recent upgrades to Class::Inspector, which has methods for examining the functions or methods in a loaded or other class. One of these will return "a reference to an array of CODE refs of the functions", which seems like it might be something to start with.
The way forward always starts with a minimal test.
Re: Searching for duplication in legacy code
by hexcoder (Deacon) on Sep 15, 2017 at 21:43 UTC
I wrote a text duplication checker (see Code::DRY) which uses suffix arrays for performance. It has no special knowledge of Perl or of units like subs, but it can find duplicated lines quite fast. You would need a C compiler to build the libraries, but then, as memory permits, you can scan whole directory trees for duplicates. I once planned to use it for a refactoring tool, but first wanted to implement an option to find structural duplicates (e.g. in token streams), which is where I got stuck... Hope this helps, hexcoder
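As a much simpler, hedged illustration of what Code::DRY does far more efficiently with suffix arrays: hash every window of N consecutive lines and report windows that occur more than once (`duplicated_windows` is invented for this sketch; real tooling also merges overlapping matches into maximal runs).

```perl
use strict;
use warnings;

# For a list of source lines, record the start position of every window
# of $n consecutive lines, then keep only windows seen more than once.
# Returns { window_text => [ 1-based start lines... ] }.
sub duplicated_windows {
    my ( $lines, $n ) = @_;
    my %where;
    for my $i ( 0 .. @$lines - $n ) {
        my $window = join "\n", @{$lines}[ $i .. $i + $n - 1 ];
        push @{ $where{$window} }, $i + 1;    # 1-based start line
    }
    return {
        map  { $_ => $where{$_} }
        grep { @{ $where{$_} } > 1 } keys %where
    };
}
```

This is O(total lines x window size) per file and quadratic-ish in memory for pathological inputs, which is exactly the cost the suffix-array approach avoids.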