Re^4: Practical example of "Is Perl code maintainable"

by BrowserUk (Patriarch)
on Aug 14, 2007 at 14:24 UTC


in reply to Re^3: Practical example of "Is Perl code maintainable"
in thread Practical example of "Is Perl code maintainable"

Update: I don't know if I defended my position that well.

Actually I thought you made your point very well, but there are situations where (IMO) it doesn't make sense to always completely unpack @_.

For example, in the following sub from my utils library:

sub rndStr{ join'', @_[ map{ rand @_ } 1 .. shift ] }

An equivalent might be:

sub rndStr{ my( $n, @chars ) = @_; return join'', @chars[ map{ int rand @chars } 1 .. $n ]; }

In use, that would mean the allocation and near immediate destruction of an additional, short-lived array the size of the input list. So when creating a 1GB file of random binary strings:

local $/; my @chars = map chr, 0 .. 255; print rndStr 1024, @chars for 1 .. 2**20;

it would allocate, destroy and reallocate those 256 scalars a million times over. For Unicode, the list could be much bigger.
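
If one wanted to put a number on that cost, a rough comparison with the core Benchmark module might look like the sketch below (the harness, the rndStr2 name and the labels are mine; the two sub bodies are just the versions above):

#!/usr/bin/perl
use strict;
use warnings;
use Benchmark qw( cmpthese );

# Terse version: take the count off @_, then slice the remaining args in place.
sub rndStr  { join '', @_[ map { rand @_ } 1 .. shift ] }

# Unpacking version: copy the character list into a lexical array first.
sub rndStr2 { my( $n, @chars ) = @_; join '', @chars[ map { int rand @chars } 1 .. $n ] }

my @chars = map chr, 0 .. 255;

# The only difference between the two calls is the copy (and teardown) of the 256-element list.
cmpthese( -3, {
    slice_args => sub { my $s = rndStr  1024, @chars },
    unpack_all => sub { my $s = rndStr2 1024, @chars },
} );

cmpthese prints a rate table, so the relative cost of that extra copy shows up directly.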

Why? Does the "additional clarity" of naming the parameters of such a short routine really save the follow-on programmer enough time to make that cost worthwhile?

From my (mostly non-Perl) experience of writing and maintaining production code, I find many of the proposed measures and rules intended to "speed the task of the maintenance programmer" completely specious.

From my (considerable) time spent fulfilling the maintenance programmer's role, including a 6-month stint maintaining a huge behemoth of a relational database management system (IBM DB2 1.x) written entirely in COBOL (of which, when I started, I had exactly six weeks (12 hours) of experience from college, about 15 years before), I know that far more time is spent on

  • understanding and reproducing the bug-report;
  • or understanding the request-for-change;
  • locating the affected files;
  • tracking down where the changes have to be made;
  • working out the implications of those changes and their effects upon the rest of the file(s), subsystems and the overall system;

than is ever spent actually making the modifications.

Even when working in a language with which you are almost completely unfamiliar (MicroFocus COBOL is so far removed from, and extended beyond, the primitive version I learnt at college in the early '80s that it is, for all intents and purposes, a completely different language. It has pointers and dynamic memory allocation, for dog's sake!), the time spent understanding the syntax of individual lines of code, or even whole subroutines, is very minor compared to that spent analysing the structure of the surrounding code and how the changes will affect it.

Indeed, I think that a certain amount of terseness and lack of clarity is a good thing. Anything that makes the follow-on programmer stop and think and analyse, rather than practicing 'hit&run' maintenance, is a good thing in my book. About half of all the maintenance work I did on that rdbms was re-work: fixing bugs introduced by previous maintenance. In most cases it meant locating and backing out the changes made by earlier 'bug-fixes', and then re-addressing them from scratch.

And I can hear the "why didn't your test suite detect the introduced bugs?" posts being written, but there was a fully automated and very large regression test suite in place, and every maintenance change had to pass those tests before it could make it into the system. But with software systems the size and complexity of an rdbms, and given the infinitely variable nature of the inputs (SQL, DB schemas and 'the data', whatever that may be), it takes literally years to evolve your test suites to the point where all possible code paths are exercised.

And in truth, in such complex systems, with the codebase evolving in parallel with the production deployments (I was maintaining DB2 1.x while DB2 2.0 was being (re-)written from the ground up in C), the test suites frequently never do exercise the full range of possibilities. It's a simple commercial fact of life.

So, whilst your POV makes perfect sense in many cases, as with all thou shalts, there will always be exceptions where it simply does not make sense.

And that is pretty much the basis for all the contrary POVs I express here. Blanket application of thou shalts (and thou shalt nots) can create as many problems as it hopes to address. Rules of thumb, guidelines and best practices should be quoted often--but with the acknowledgement that there are exceptions and, preferably, with mention of (some of) those exceptions often enough that people know it is okay to make their own judgements when warranted.

That's why I applaud Perl Best Practices, fear Perl::Critic and reserve the right to argue with and/or ignore both when my faculties of reason and judgement say I should.
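
(For what it's worth, Perl::Critic itself leaves room for exactly that sort of judgement. A minimal sketch, assuming its stock Subroutines::RequireArgUnpacking policy is the one that would complain about rndStr:)

# Record the deliberate exception in place rather than silencing the policy everywhere.
sub rndStr {    ## no critic (Subroutines::RequireArgUnpacking)
    join '', @_[ map { rand @_ } 1 .. shift ];
}

The same exception could be made project-wide by adding [-Subroutines::RequireArgUnpacking] to a .perlcriticrc, but the inline annotation at least documents where, and that, the rule was deliberately set aside.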


Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
"Science is about questioning the status quo. Questioning authority".
In the absence of evidence, opinion is indistinguishable from prejudice.
