You get a READ lock (LOCK_SH), read the file, and make an initial determination as to whether you need to write to it. If that is "yes", you release the read lock and request a write lock. When you get it, you read from the position that was previously the end of the file and update your decision as to whether you still need to write. If so, append your update. Then release the lock.

Update: Note that under other circumstances this scheme has the potential for the classic problem of readers starving writers. If there is never a break in the read locks being held, then the request for a write lock will just wait forever. Given the schedule you outlined, it seems likely that all of the readers will finish before the next batch of readers starts up. However, if your batches start taking 15 minutes to finish, then you might never get e-mail because the writers never get their locks.

You should check how Perl's flock() is implemented on your system. It may be that a pending request for a write lock causes new requests for a read lock to block, which prevents starvation. You should also time out and send the e-mail anyway if you can't get the write lock after, say, 15 minutes.

The next race is when you want to purge the growing accumulation of log lines. I'd probably just include the date and hour in the log file name. Then you only need to read this hour's and last hour's log files, and you can delete older log files on whatever schedule you desire without worry.
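Here is a minimal sketch in Perl of the read-then-upgrade scheme described in the first paragraph. The file name, the needs_update() decision routine, and the record that gets appended are all hypothetical placeholders; the point is remembering the old end-of-file under the shared lock and re-checking from there once you hold the exclusive lock:

    use strict;
    use warnings;
    use Fcntl qw(:flock :seek);

    my $log = 'batch.log';    # hypothetical log file name

    open my $fh, '+<', $log or die "Can't open $log: $!";

    # 1) Shared (read) lock: scan the file and remember where it ends.
    flock( $fh, LOCK_SH ) or die "Can't get read lock: $!";
    my @lines      = <$fh>;
    my $seen_end   = tell($fh);              # old end-of-file
    my $need_write = needs_update(@lines);   # hypothetical decision routine
    flock( $fh, LOCK_UN );

    if ($need_write) {
        # 2) Exclusive (write) lock: someone may have appended in the gap,
        #    so re-read from the old end-of-file and re-check the decision.
        flock( $fh, LOCK_EX ) or die "Can't get write lock: $!";
        seek( $fh, $seen_end, SEEK_SET );
        my @new_lines = <$fh>;
        if ( needs_update( @lines, @new_lines ) ) {
            seek( $fh, 0, SEEK_END );        # append our own record
            print {$fh} "mail sent at " . localtime() . "\n";
        }
        flock( $fh, LOCK_UN );
    }
    close $fh;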
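For the timeout, one way (among others) is to request the exclusive lock non-blocking and poll until a deadline passes, then fall back to sending the e-mail anyway. Again a sketch, with the log file name and mail routine as placeholders:

    use strict;
    use warnings;
    use Fcntl qw(:flock);

    my $log = 'batch.log';
    open my $fh, '>>', $log or die "Can't open $log: $!";

    # Non-blocking requests so a steady stream of readers can't make us
    # wait forever; after ~15 minutes, give up and send the mail anyway.
    my $deadline = time() + 15 * 60;
    my $got_lock;
    until ( $got_lock = flock( $fh, LOCK_EX | LOCK_NB ) ) {
        last if time() >= $deadline;
        sleep 5;                    # retry every few seconds
    }

    if ($got_lock) {
        # ... re-read from the old end-of-file and append, as in the
        #     previous sketch ...
        flock( $fh, LOCK_UN );
    }
    else {
        warn "Gave up waiting for the write lock; sending mail anyway\n";
        # send_mail();              # hypothetical mail routine
    }
    close $fh;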
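And a sketch of the per-hour file naming for the purge problem, assuming a name pattern like batch-YYYY-MM-DD-HH.log (the pattern itself is arbitrary):

    use strict;
    use warnings;
    use POSIX qw(strftime);

    # One log file per hour. Readers only ever look at this hour's and
    # last hour's files; anything older can be deleted on whatever
    # schedule you like without racing anyone.
    my $now       = time();
    my $this_hour = strftime( 'batch-%Y-%m-%d-%H.log', localtime($now) );
    my $last_hour = strftime( 'batch-%Y-%m-%d-%H.log', localtime( $now - 3600 ) );

    my @files_to_read = grep { -e $_ } ( $last_hour, $this_hour );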
- tye

In reply to Re: avoiding a race (read locks)
by tye