Re: avoiding a race (read locks)

by tye (Sage)
on Sep 28, 2010 at 14:20 UTC [id://862427]


in reply to avoiding a race

You get a READ lock (LOCK_SH), read the file, make an initial determination as to whether you need to write to it. If that is "yes", then you release the read lock and request a write lock. When you get it, you read from the position in the file that was the previous end of the file and update your decision as to whether you need to write. If so, append your update. Then release the lock.
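A minimal sketch of that flow in Perl. The log path, the error code, and send_alert_email() are made up for illustration, and the log file is assumed to already exist:

    use strict;
    use warnings;
    use Fcntl qw( :flock :seek );

    sub send_alert_email { warn "would e-mail admins about: @_\n" }   # stub

    my $error_code = 'DB_CONN_FAIL';               # whatever this proc just hit
    open my $fh, '+<', '/var/log/db_errors.log'    # hypothetical path
        or die "open: $!";

    flock( $fh, LOCK_SH ) or die "flock(LOCK_SH): $!";
    my $seen = 0;
    while ( my $line = <$fh> ) {
        $seen = 1 if index( $line, $error_code ) >= 0;
    }
    my $old_end = tell $fh;                        # where end-of-file was

    if ( !$seen ) {
        flock( $fh, LOCK_UN );                     # release the read lock ...
        flock( $fh, LOCK_EX )                      # ... then request a write lock
            or die "flock(LOCK_EX): $!";

        seek( $fh, $old_end, SEEK_SET );           # also clears the EOF flag
        while ( my $line = <$fh> ) {               # re-check what others appended
            $seen = 1 if index( $line, $error_code ) >= 0;
        }

        if ( !$seen ) {                            # still unreported
            seek( $fh, 0, SEEK_END );
            print {$fh} "$error_code\n";
            send_alert_email( $error_code );
        }
    }
    flock( $fh, LOCK_UN );
    close $fh or die "close: $!";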

Update: Note that under other circumstances, this scheme has the potential for the classic problem of readers starving writers. If there is never a moment when no read lock is held, then a request for a write lock will just wait forever. Given the schedule you outlined, it seems likely that all of the readers will finish before the next batch of readers starts up. However, if your batches start taking 15 minutes to finish, then you might never get the e-mail because the writers never get their locks.

You should check how Perl's flock() is implemented on your system. It may be that a pending request for a write lock will cause new requests for a read lock to block, preventing starvation.

You should also time out and send the e-mail if you can't get the write lock after, say, 15 minutes.
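One way to sketch that timeout is to poll with a non-blocking lock request (LOCK_NB is the standard Fcntl flag; lock_with_timeout() and the reuse of $fh, $error_code, and send_alert_email() from the sketch above are my own illustration):

    use Fcntl qw( :flock );

    # Try for an exclusive lock for up to $timeout seconds; returns
    # true if we got it, false if we gave up.
    sub lock_with_timeout {
        my ( $fh, $timeout ) = @_;
        my $deadline = time + $timeout;
        until ( flock( $fh, LOCK_EX | LOCK_NB ) ) {
            return 0 if time >= $deadline;
            sleep 1;
        }
        return 1;
    }

    unless ( lock_with_timeout( $fh, 15 * 60 ) ) {
        # Couldn't verify whether anyone else reported it;
        # a duplicate e-mail beats a silent failure.
        send_alert_email( $error_code );
    }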

The next race is when you want to purge the growing accumulation of log lines. I'd probably just include the date and hour in the log file name. Then you only need to read this hour's and last hour's log files, and you can delete older log files on whatever schedule you desire without worry.
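For instance (a sketch; the directory and name pattern are assumptions):

    use POSIX qw( strftime );

    # Hour-stamped name, e.g. /var/log/db_errors.2010092814.log
    my $this_hour = strftime( '/var/log/db_errors.%Y%m%d%H.log', gmtime );
    my $last_hour = strftime( '/var/log/db_errors.%Y%m%d%H.log',
                              gmtime( time - 3600 ) );

    # Readers only ever open $this_hour and $last_hour; anything older
    # can be unlink()ed by a cron job without any locking.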

- tye        

Re^2: avoiding a race (read locks)
by westy032001 (Novice) on Sep 28, 2010 at 15:31 UTC
    Thanks for the reply.

    If I understand you correctly, isn't there still a potential for a race condition?

    Say the database goes down and all 300 procs get a DB error:

    process 123 opens the file and places a shared lock

    process 321 opens the file and places a shared lock

    process 123 decides it is going to modify the file, so it waits for 321 to unlock, then places an exclusive lock, modifies the file, and closes it

    process 321 decides it is going to modify the file, so it places an exclusive lock, modifies the file, and closes it

    If both are changing the file as a result of the same error (i.e. the database is down), you will get two of the same error codes recorded, and two e-mails sent to admins.

    thanks.

      See the following sentence in tye's scheme:

      When you get [the write lock], you read from the position in the file that was the previous end of the file and update your decision as to whether you need to write.

      So, in your example case, Process 321 would notice that the file changed since it last checked and that another process already sent the notification.

        Thanks, Corion.

        I also should have pointed out that, in order to keep reading past what had previously been the end-of-file, you'll need to seek( $fh, 0, 1 ) to clear the EOF flag (if you don't just re-open the file).
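        In other words (a sketch; seeking zero bytes from the current position clears Perl's per-handle EOF flag so readline will return lines appended since):

            use Fcntl qw( :seek );

            seek( $fh, 0, SEEK_CUR );      # go nowhere, but clear the EOF flag
            while ( my $line = <$fh> ) {   # now sees lines past the old EOF
                print "appended since last read: $line";
            }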

        - tye        

        Aha! Thank you!
