Re: Easiest way to protect process from duplication.
by cguevara (Vicar) on Jan 20, 2012 at 18:07 UTC
Great tip! Proc::PID::File has the slight disadvantage that it does not ship with the standard Perl distribution -- at least not anywhere I looked -- so it requires managing CPAN installs.
Re: Easiest way to protect process from duplication.
by ww (Archbishop) on Jan 20, 2012 at 17:42 UTC
So why not code a test right into your script (maybe in a BEGIN{...} block) using Proc::PID::File?
After all, that's the very first example in the Synopsis!
-- for posing a question without -- it appears -- even minimal effort on your part.
Re: Easiest way to protect process from duplication.
by JavaFan (Canon) on Jan 20, 2012 at 20:23 UTC
use Fcntl ':flock';
open my $fh, "+<", $0 or exit;
exit unless flock $fh, LOCK_EX | LOCK_NB;
Note that, depending on your OS and how you call the program, a second open may fail (in which case the flock isn't necessary).
Here's a variation that uses the __DATA__ handle.
use Fcntl qw(LOCK_EX LOCK_NB);
die "Another instance is already running" unless flock DATA, LOCK_EX|LOCK_NB;
That actually requires a __DATA__ (or __END__) token to be present.
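For completeness, here is a minimal self-contained sketch of the DATA-handle trick (the sleep is just a hypothetical stand-in for the real work):

```perl
#!/usr/bin/perl
# Minimal sketch of the DATA-handle lock. The __DATA__ token at the
# bottom is what makes the DATA filehandle exist: it stays open on the
# script file itself, so flock-ing it locks out a second instance.
use strict;
use warnings;
use Fcntl qw(LOCK_EX LOCK_NB);

die "Another instance is already running\n"
    unless flock DATA, LOCK_EX | LOCK_NB;

sleep 60;   # placeholder for the real work; the lock is held until exit

__DATA__
This token must be present, or the DATA handle does not exist.
```

The lock is released automatically when the process exits, so no cleanup code is needed.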
Thank you for your post; that is exactly what I need.
Thanks for your attention, everyone. The program is supposed to run in the background (I mean # test.pl &), so does that change anything?
Re: Easiest way to protect process from duplication.
by mbethke (Hermit) on Jan 20, 2012 at 17:51 UTC
|
You mean Proc::PID::File? You could also have a look at File::Lockfile, but it's not that much easier.
use Proc::PID::File;
die "Already running!" if Proc::PID::File->running();
Doesn't get much easier than that, does it?
Edit: oops, ww beat me to it :)
Re: Easiest way to protect process from duplication.
by thospel (Hermit) on Jan 20, 2012 at 20:21 UTC
Suppose the sequence in the program is:
open
lock
unlink
exit
The unlink comes before any unlock or close; otherwise you get even more race scenarios (on Windows you must actually close the file before being able to delete it).
The open is an open with create (O_CREAT) -- otherwise unlinking makes the next program invocation fail -- but without exclusive (O_EXCL), otherwise we'd be getting into a different locking scheme (with even more problems). This type of open is what you get from a plain open($fh, ">", $file) in Perl.
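In plain Perl, the open/lock/unlink/exit sequence looks like this (a sketch; the lock-file path is a made-up example):

```perl
#!/usr/bin/perl
# Sketch of the sequence described above: open with create (no O_EXCL),
# take a non-blocking exclusive lock, do the work, then unlink *before*
# the implicit unlock/close at process exit.
use strict;
use warnings;
use Fcntl qw(:flock);

my $file = "/tmp/myapp.lock";                 # hypothetical lock-file path

open my $fh, ">", $file or die "open: $!";    # O_CREAT without O_EXCL
flock $fh, LOCK_EX | LOCK_NB or exit;         # another instance holds the lock
# ... do the real work here ...
unlink $file or warn "unlink: $!";            # unlink before any unlock/close
# process exit performs the implicit unlock and close
```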
Now you can get as sequence:
process A: open (and create)
process A: lock
process B: open (same file so no create)
process A: unlink
process A: exit (implicit unlock)
process B: lock (on the file A just deleted, since B still has an open handle on it)
process C: open (and create a new file with the old path name)
process C: lock (on the new file)
Now processes B and C are running simultaneously, each holding a lock on a different file, only one of which is visible in the filesystem.
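One common fix for this race (my own sketch, not from the thread) is to verify, after taking the lock, that the path still refers to the file that was actually locked, by comparing device and inode numbers, and to retry if another process unlinked it in between:

```perl
# Sketch of a retry loop that avoids the deleted-lockfile race: after
# flock succeeds, stat both the handle and the path; matching dev+inode
# means nobody unlinked and recreated the file while we were locking.
use strict;
use warnings;
use Fcntl qw(:flock);

my $file = "/tmp/myapp.lock";   # hypothetical lock-file path

sub take_lock {
    while (1) {
        open my $fh, ">", $file or die "open: $!";
        flock $fh, LOCK_EX | LOCK_NB or return;   # another instance runs
        my @by_handle = stat $fh;
        my @by_path   = stat $file;
        # Same device and inode: the locked file is still the one on disk.
        return $fh if @by_path
            && $by_handle[0] == $by_path[0]
            && $by_handle[1] == $by_path[1];
        close $fh;   # we locked a file that was already unlinked; retry
    }
}
```

With this check, process B in the scenario above would notice that its handle no longer matches the path and loop around to open the new file.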
Re: Easiest way to protect process from duplication.
by scorpio17 (Canon) on Jan 20, 2012 at 20:18 UTC