PerlMonks |
Re: Best practices for modifying a file in place: q's about opening files, file locking, and using the rename function
by grep (Monsignor)
on Nov 03, 2006 at 02:25 UTC ( [id://581993] )
Q1: In "The truly paranoid programmer would lock the file", which file are the authors referring to?
The $old file. That is the data that would get clobbered (assuming you are not using the same name for the temp $new file). BTW, I would name the files $orig and $tmp; that naming seems to make more sense.
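For context, the recipe the thread is discussing is the usual write-to-a-temp-file-then-rename idiom. Here is a minimal runnable sketch; the file names and the substitution are placeholders, not from the original node:

```perl
use strict;
use warnings;

my $orig = 'data.txt';      # the file being edited "in place"
my $tmp  = "$orig.tmp";     # scratch file; note '>' clobbers it if it exists

# set up a sample file so the sketch is runnable
open my $setup, '>', $orig or die "can't create $orig: $!";
print $setup "old line\n";
close $setup;

open my $in,  '<', $orig or die "can't read $orig: $!";
open my $out, '>', $tmp  or die "can't write $tmp: $!";
while (my $line = <$in>) {
    $line =~ s/old/new/g;   # whatever edit you actually need
    print $out $line;
}
close $in;
close $out or die "close failed: $!";

# rename atomically replaces $orig with the edited copy
rename $tmp, $orig or die "rename failed: $!";
```

Note that nothing here is locked, which is exactly the hazard the questions below are about: two instances running this at once can silently lose one set of changes.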
Q2: Regarding the reason for being "truly paranoid" -- is this because we don't want another running instance of this script to be writing to $new while we are?
I'm not sure I completely understand the hazards of "clobbering." So...
There are two problems: UserA's changes last only a split second, but generally the more important problem is that UserB never saw the changes UserA made.
Q3: Is the problem the fact that $new might exist already because another instance of this script running at the same time had created $new a split-second ago in connection with its own update of $old, and that our process will destroy the contents of that $new due to the way ">" works?
Q4: In a multi-user environment, does a careful programmer need to use "sysopen/flock LOCK_EX/truncate" every time a script needs to write a file? And now for a final wrinkle: the addition of a file lock for $new in the recipe.
The flip side is: if your data is important, changed by more than one source, and changed often, then you should generally use a full database that supports locking. That is why file locking is not a huge problem in practice.
Q5: Wouldn't we want to keep $new open (and hence the LOCK_EX in place) until after the "rename( $new, $old )"?
The best strategy IMO is to create a '.lock' file and flock that. Like this:
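The code sample appears to have been lost from this node. Here is a sketch of the '.lock'-file approach grep describes; the file names and the edit itself are illustrative assumptions:

```perl
use strict;
use warnings;
use Fcntl qw(:flock);

my $orig     = 'data.txt';
my $lockfile = "$orig.lock";
my $tmp      = "$orig.tmp";

# set up a sample file so the sketch is runnable
open my $setup, '>', $orig or die "can't create $orig: $!";
print $setup "foo\n";
close $setup;

# take an exclusive lock on a separate .lock file, not on $orig itself,
# so readers of $orig are never blocked and the rename never invalidates
# the locked handle
open my $lock, '>', $lockfile or die "can't open $lockfile: $!";
flock $lock, LOCK_EX or die "can't lock $lockfile: $!";

# with the lock held, do the temp-file edit and the rename
open my $in,  '<', $orig or die "can't read $orig: $!";
open my $out, '>', $tmp  or die "can't write $tmp: $!";
while (my $line = <$in>) {
    $line =~ s/foo/bar/g;   # placeholder edit
    print $out $line;
}
close $in;
close $out or die "close failed: $!";
rename $tmp, $orig or die "rename failed: $!";

# release the lock only after the rename has completed
close $lock;
unlink $lockfile;
```

Because every writer flocks the same '.lock' file, this also addresses Q5: the lock is held across the rename, so a second instance cannot start its own edit in between.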
In Section: Seekers of Perl Wisdom