Re^2: Analyzing large Perl code base.

by dmitri (Priest)
on Apr 15, 2005 at 18:10 UTC ( [id://448288] )


in reply to Re: Analyzing large Perl code base.
in thread Analyzing large Perl code base.

With 2.5 megs of Perl code across 237 different modules, I will need a lot of coffee :-)

Thanks for your suggestions.

Replies are listed 'Best First'.
Re^3: Analyzing large Perl code base.
by dragonchild (Archbishop) on Apr 15, 2005 at 18:15 UTC
    At least you have modules. You should be able to organize those modules into logical groupings. Once you do that, focus on one grouping at a time, writing lots and lots and LOTS of tests. Test everything, anything ... if it moves, test it. Heck, test it even if it doesn't move. (You want to make sure it doesn't start moving!)

    Note: you will find that many of your tests will be wrong ... and that's good. :-)

    Update: As adrianh says, you shouldn't write whitebox tests - you should be writing tests for how the rest of the code expects your modules to work. In other words, API tests. Remember - you're planning on ripping the guts out ASAP. You just want to make sure that the rest of the code doesn't die while you're working.
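
    A minimal sketch of such an API test, using Test::More against a hypothetical Legacy::Orders module (the module name and methods are stand-ins for your own code, not anything from this thread):

        #!/usr/bin/perl
        use strict;
        use warnings;
        use Test::More tests => 3;

        # Legacy::Orders is a stand-in for one of your own modules.
        # Exercise only the public interface the rest of the codebase
        # relies on, not the module's internals.
        use_ok('Legacy::Orders');

        my $order = Legacy::Orders->new( id => 42 );
        isa_ok( $order, 'Legacy::Orders' );

        # Pin down the observable behaviour callers depend on, so a
        # later rewrite of the guts can be checked against it.
        is( $order->id, 42, 'id() returns the value passed to new()' );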

      Once you do that, focus on one grouping at a time, writing lots and lots and LOTS of tests. Test everything, anything ... if it moves, test it. Heck, test it even if it doesn't move. (You want to make sure it doesn't start moving!)

      While I don't think this is what you're proposing, I think it could be read as "write tests for everything in the legacy code before you change anything", which, IMHO, is a bad practice. As I said here:

      A counterproductive practice that I've seen is to go through a large piece of legacy code and add developer tests for everything. Doing this with legacy code not driven by tests will produce a test suite that is brittle in the face of change. When you get to the refactoring, you will find yourself continually throwing away many of the new tests, so you don't get any benefit from them.

      In my experience it's much more effective to build the test suite around the changes you make to the code. Add tests when you add new features. Add tests around code that you're refactoring. Add tests to demonstrate bugs. Just following those three rules naturally builds a test suite around the most important code.
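
      For instance, a test added to demonstrate a bug might look like this (a sketch with Test::More; the module and the failure case are hypothetical):

          use strict;
          use warnings;
          use Test::More tests => 1;

          # Legacy::Parser and parse() are stand-ins for whatever code
          # the bug report was filed against.
          use Legacy::Parser;

          # Written while the bug is still present: it fails first,
          # passes once the fix goes in, and guards against regressions.
          is_deeply( Legacy::Parser::parse(''), [],
              'parse() returns an empty array ref for empty input' );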

        You're absolutely correct - my post was lacking. It's been updated.
