There's more than one way to do things
PerlMonks
That's a lot of questions... ;) And I was slow to respond, so I'm mostly reiterating what grep said. But let me back up a bit:
"The file to be modified in my case is an important flat-file database (one record per line); web users can use a CGI script to either add data to the file or to edit their own records; I'm concerned about possible file corruption when two or more users are submitting new or revised data at about the same instant."

In that sort of scenario, there are a couple of things to watch out for:

- Two users could each fetch the same record, make their own edits, and submit -- whoever submits last silently wipes out the other's changes.
- Two CGI processes could try to write to the file at the same instant, leaving it truncated or otherwise corrupted.
Obviously, the first scenario is the one you really should worry about. It's not just a matter of using flock on the file; in fact, the more I think about it, the more unsuitable flock seems to be for web-based stuff. And if you solve the first problem, the second one is a moot point. As the first reply points out, you need some sort of "check-out/check-in" mechanism to keep different users from stepping on each other's updates. A user needs to explicitly request write access to the data file, and when your CGI script services that request, it has to know whether someone else has already been given write access.

And that's where you need to resolve any possible race condition: any given request either gets the access (thereby blocking others), or else fails to do so because access is currently granted to someone else. For this purpose, checking for the existence of some "access.lock" file and creating it if it does not exist is almost atomic enough -- something like: (The truly paranoid programmer will find a chink there, and will hopefully offer the correct way to seal it up tight.)

But web interactions being what they are, you also need a policy: some upper bound on how long a client may hold the access lock. If Bob does a check-out at 10:00 am and tries to upload his update at 10:00 pm, it might be prudent to tell him at that point that he waited too long to submit the update, and please try again with a fresh download (and please return it more quickly this time). Or the policy could be more flexible: a client may keep the lock for at least N minutes, or until someone else requests the lock after the minimum N minutes have passed -- that is, another client can "steal" the lock if it's more than N minutes old.
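The check-and-grant step described above might look something like the following sketch. The lock-file name and the 15-minute expiry are made-up values for illustration; note that sysopen with O_CREAT|O_EXCL is the usual way to seal the chink in a separate "test with -e, then open" sequence, since it makes the test and the create a single atomic step:

```perl
use Fcntl qw(O_WRONLY O_CREAT O_EXCL);

# Assumptions for this sketch: the lock file's name and the
# 15-minute expiry are illustrative, not prescribed.
my $lockfile = "access.lock";
my $max_age  = 15 * 60;    # lock may be "stolen" after N = 15 minutes

# Flexible policy: if an existing lock is older than N minutes,
# treat it as abandoned and remove it.
if ( -e $lockfile and time - ( stat $lockfile )[9] > $max_age ) {
    unlink $lockfile;
}

# O_CREAT|O_EXCL means "create it only if it does not already
# exist", done atomically by the OS -- no window between the
# existence check and the creation.
if ( sysopen( my $fh, $lockfile, O_WRONLY | O_CREAT | O_EXCL ) ) {
    print $fh scalar localtime;    # record when the lock was granted
    close $fh;
    # ... this client now has write access to the data file ...
}
else {
    # someone else holds the lock -- tell the client to retry later
}
```

Whoever wins the sysopen race holds the lock until check-in, or until the lock goes stale and another client steals it.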
"I know I could use a real database but I really want to figure out file locking using Perl. Seems like this issue must come up all the time in a multiuser environment, whether web or internal network."

It's good to make sure you understand file locking, even if it doesn't exactly apply to the current task. And yes, it's an old topic. Consider this old node, drawn from an even older article by Sean Burke, published in The Perl Journal back in 2001 (and sadly hard to find these days). Meanwhile, get started on using a real database for your current web app.

In reply to Re: Best practices for modifying a file in place: q's about opening files, file locking, and using the rename function
by graff