Re: Super Find critic needed

by BrowserUk (Patriarch)
on Jun 30, 2003 at 12:42 UTC


in reply to Super Find critic needed

You should think seriously about what happens if your server crashes whilst processing lines 38 through 44 of your script.

At that point you have opened the file, read the contents, then closed and re-opened the file for output. If the server crashes, or even if the process just gets interrupted, you have blown away the disc copy of the file and retain its contents only in memory. Even assuming you have backups, trying to work out which scripts were correctly modified, which have yet to be modified, and which one was being modified when the interruption occurred can be extremely tiresome.
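In code, the dangerous sequence looks something like this (a sketch with illustrative names, not the original script's code):

    open F, "<$name" or die "Can't read $name: $!";
    my $data = do { local $/; <F> };   # slurp the whole file into memory
    close F;

    open F, ">$name" or die "Can't write $name: $!";   # <-- truncates the file
    # A crash anywhere between the open above and a successful close below
    # leaves only the in-memory copy; the disc copy is already gone.
    print F $data;
    close F;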


Examine what is said, not who speaks.
"Efficiency is intelligent laziness." -David Dunham
"When I'm working on a problem, I never think about beauty. I think only how to solve the problem. But when I have finished, if the solution is not beautiful, I know it is wrong." -Richard Buckminster Fuller


Re: Re: Super Find critic needed
by Anonymous Monk on Jun 30, 2003 at 12:46 UTC
    Okay if I am reading both of your comments correctly:
    if (open (F, ">$name")) {
        print F $data;   # This part will be where I handle a server crash??
        close(F);
    }
    else {
        warn "...SERVER ERROR ETC..";
    }
    Please advise how I would handle a server problem/crash?

      By "server crash", I meant the machine (server or workstation) where the code is running, stops because of hardware failure, or power failure, or you knock the off-switch, or even because an administrator accidently kills your process while cleaning up zombies at 4 am. Ie. Events that you cannot detect from within your script.

      The basic mechanism to avoid this is to make a backup of the original before overwriting it with the modified version. There are several different sequences of copying, renaming, deleting and overwriting that you can use. Some of these are "better" than others, but I've yet to see one that completely eliminates the risks, though they do reduce the window for failure to the point of reasonable risk.
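      One minimal sketch of the backup-first sequence, assuming File::Copy (which ships with perl) and the $name/$data variables from the snippet above:

          use File::Copy 'copy';

          copy( $name, "$name.bak" ) or die "Can't back up $name: $!";
          open F, ">$name" or die "Can't write $name: $!";
          print F $data;
          close F or die "Can't close $name: $!";
          # The original survives as $name.bak, so the window for failure
          # shrinks to the time between the open() and a successful close().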

      For a couple of neat ways to use $^I (see perlvar and perlrun's -i switch) to get perl to back up your files for you, see My pattern for "in-place edit" for many files and Put your inplace-edit backup files into a subdir, both from Merlyn.
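      The $^I idiom those nodes build on boils down to something like this (a sketch; the file list and substitution are illustrative):

          {
              local $^I   = '.bak';     # perl renames each original to *.bak
              local @ARGV = @files;     # the files to edit in place
              while (<>) {
                  s/oldtext/newtext/g;  # your edit goes here
                  print;                # prints to the replacement file
              }
          }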

      Perhaps the best way to be safe is to make a copy of the directory structure, run your script against that, and then copy the directory structure over the original when you're sure it has been successful. Perhaps you are already doing (or intending to do) this, in which case you can ignore this advice.

      The other thing I noticed in your script is that every file will be overwritten regardless of whether any actual changes were made. This is likely to give you problems when you come to verify that the changes made were correct, or worse, make it hard to undo any mistakes, as you won't know whether one file or every file was changed.
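      A sketch of one way to skip untouched files: count the substitutions and only rewrite when at least one hit occurred (pattern and names are illustrative):

          my $changes = 0;
          for (@lines) {
              $changes += s/oldtext/newtext/g;   # s///g returns the hit count
          }
          if ($changes) {
              # write the modified lines back, using whichever safe
              # overwrite sequence you settled on...
          }
          # ...otherwise leave the file (and its timestamp) alone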

      There are many, many ways of writing your script, and many different philosophies on the best way to do it. Perhaps the best advice I can give you is to sit down with your code, mentally or physically on paper, and work through each step of the process, imagining what state your files would be left in if a power cut occurred at that step. Decide how much of a risk that presents in your system, and how much effort you should expend to prevent it.




        Some of these are "better" than others, but I've yet to see one that completely eliminates the risks, though they do reduce the window for failure to the point of reasonable risk.

        I would (perhaps naively) think that renaming the original file (renaming should be atomic, no?) to something like "$filename.$$", then reading / munging / writing to "$filename", and only deleting "$filename.$$" once the new filehandle is closed (and thus its buffers flushed, as well as Perl can ensure) would completely eliminate the risk. The process could stop at any point and, at worst, you'd have a partially munged new file and the original file both still on disc. Assuming, of course, that you have a sufficiently paranoid filesystem.
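        A sketch of that sequence, assuming everything lives on one filesystem (so the rename really is atomic):

            rename $filename, "$filename.$$"
                or die "Can't set aside $filename: $!";   # atomic on one filesystem

            open IN,  "<$filename.$$" or die "Can't read $filename.$$: $!";
            open OUT, ">$filename"    or die "Can't write $filename: $!";
            while (<IN>) {
                s/oldtext/newtext/g;   # illustrative munging
                print OUT $_;
            }
            close OUT or die "Can't close $filename: $!"; # flush before trusting it
            close IN;

            unlink "$filename.$$";     # only now remove the original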

        I'm not entirely sure I'm not missing something, so please enlighten me if I am. =)

        bbfu
        Black flowers blossom
        Fearless on my breath

        Yes, I am copying the entire directory and its contents before running the script.

        Also reference what you said:
        "The other thing I noticed in your script is that every file will be over written regardless of whether any actual changes were made or not. This is likely to give you problems when you come to verify that the changes made where correct, or worse, make it hard to undo any mistakes as you won't know whether 1 file or every file was changed"

        How would I change it so that it only writes the files that are actually being changed?
      replace ">$name" by (e.g.) ">$name.$$" and do a rename "$name.$$",$name after the close(F).
