singleton lock not reliable

by pidloop (Novice)
on Jun 23, 2021 at 18:15 UTC

pidloop has asked for the wisdom of the Perl Monks concerning the following question:

Oh great Monks, I am having an issue ensuring only one instance of my script runs at a time. Any help appreciated. This is a snippet version showing my technique:
    use strict;
    use warnings;
    use Fcntl ':flock';

    # only one of us
    open my $ME, '<', $0 or die "Couldn't open self: $!";
    flock $ME, LOCK_EX | LOCK_NB or exit;

    sleep;
This program is run every minute from a crontab. When I do a ps I expect to see exactly one instance. However, after a random period ranging from minutes to months I find two instances running. I don't think it's an OS issue because I am seeing this on three different systems: CentOS 6 Linux, macOS 11.4, and FreeBSD 12.2-RELEASE. So I'm actually hoping there is something wrong with this technique so I can fix it, or that there is a better way. Thanks for your time and wisdom.

Re: singleton lock not reliable
by choroba (Cardinal) on Jun 23, 2021 at 18:38 UTC
    Of course you can see two instances in ps output. One instance holds the lock and sleeps, the other one has just started and hasn't yet tried to obtain the lock.


      In other words, the lock doesn't prevent multiple instances of the script from running. It just prevents all but one of them from reaching the sleep.
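      A commented version of the OP's snippet makes the window concrete: everything before the flock call runs in both processes, and both are visible in ps the whole time.

          use strict;
          use warnings;
          use Fcntl ':flock';

          # Both instances are already visible in ps from here on...
          open my $ME, '<', $0 or die "Couldn't open self: $!";

          # ...and they stay visible until the loser reaches this line,
          # fails to obtain the lock, and exits.
          flock $ME, LOCK_EX | LOCK_NB or exit;

          sleep;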


Re: singleton lock not reliable
by stevieb (Canon) on Jun 25, 2021 at 01:36 UTC

    First thing I thought of was shared memory. In IPC::Shareable, if one attempts to create a shared memory segment that has already been created in a separate process with exclusive set, the second process will croak(). So what I did is add a graceful flag to its options, and when set, the second process that tries to create the exclusive segment simply exits gracefully with no noise or anything. Observe:

    lock.pl:

     use warnings;
     use strict;
     use IPC::Shareable;

     tie my $lock, 'IPC::Shareable', {
         key       => 'LOCK',
         create    => 1,
         exclusive => 1,
         destroy   => 1,
         graceful  => 1
     };

     $lock = $$;
     print "procID: $lock\n";

     sleep 5;

    Run it in one window:

     spek@scelia ~/scratch $ perl lock.pl
     procID: 21241

    ...it sleeps for a few seconds, during which we run it in the second window:

     spek@scelia ~/repos/ipc-shareable $ perl ~/scratch/lock.pl
     spek@scelia ~/repos/ipc-shareable $

    ...it exited without printing anything. After the five-second sleep in proc one is done, run it again in window two:

     spek@scelia ~/repos/ipc-shareable $ perl ~/scratch/lock.pl
     procID: 21339

    So, in essence, this question prompted me to update the distribution to handle your very issue, ummm, well, gracefully. I just published it, so it may not yet be available at your mirror. Version 1.01 has the new 'graceful' feature.

      I've done one better. I've added a singleton() method. It's specifically designed to prevent more than one instance of any script (or of processes that share the same shared memory segment). It's trivial to use, as it hides all of the various flags:

          use IPC::Shareable;

          IPC::Shareable->singleton('LOCK');

          # Do scripty perl stuff here

      That's it. You can now be guaranteed that if a second instance of the script starts, it'll exit gracefully as soon as singleton() is called.

      You can also tell it to emit a notice that the script is exiting by sending in a true value as the second 'warn' param:

      IPC::Shareable->singleton('LOCK', 1);

      If a second instance is run, the following notice will be emitted:

      Process ID 14784 exited due to exclusive shared memory collision

      Version 1.03 has this update.

        Good, better, best!

        I released a new distribution, Script::Singleton, which ensures only a single instance of a script can run at any time. There are no methods or flags to use; you simply:

        use Script::Singleton 'LOCK';

        That's it! LOCK is the glue/key that identifies the shared memory segment. As with my last update, send in a true value to get a warning if a second instance of a script tries to run but has to exit:

        use Script::Singleton 'LOCK', 1;
Re: singleton lock not reliable
by Fletch (Bishop) on Jun 23, 2021 at 21:45 UTC

    As a debugging aid (since you're using ps to inspect things), if your OS allows it you might change $0 to add extra info.

     ## Prelude use'es as before

     my $original_name = $0;
     $0 = qq{$original_name($$): TRYING TO GET LOCK};

     ## Your open and flock lines . . .

     $0 = qq{$original_name($$): HOLDING FLOCK};

     sleep;


Re: singleton lock not reliable
by eyepopslikeamosquito (Archbishop) on Jun 23, 2021 at 22:16 UTC
Re: singleton lock not reliable
by hippo (Bishop) on Jun 23, 2021 at 22:22 UTC

    You are trying to obtain an exclusive lock on a filehandle that you've opened for reading. I am no expert on file locking but I don't think that's possible. The FAQ is explicit in stating that the filehandle must be opened for writing (or appending, or read+write), at least when using lockf. Perhaps that is the cause of your problem. The canonical example in the Monastery is Highlander (allow only one invocation at a time of an expensive CGI script), and that also uses a writing filehandle.

    I suggest you try switching to a writing filehandle (obviously not on $0) and see if that enables the lock for you reliably across your various OSes.
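
    A minimal sketch of that approach (the path and filename are illustrative, not from the original post); the file is opened for appending so an existing lockfile isn't truncated:

        use strict;
        use warnings;
        use Fcntl ':flock';

        # A dedicated, writable lock file at an absolute path,
        # instead of a read-only handle on $0.
        my $lockfile = '/var/tmp/myscript.lock';
        open my $LOCK, '>>', $lockfile
            or die "Couldn't open $lockfile: $!";
        flock($LOCK, LOCK_EX | LOCK_NB)
            or exit;   # another instance holds the lock

        sleep;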



      > The FAQ is explicit in stating that the filehandle must be opened for writing (or appending, or read+write)

      It depends on the implementation. The perldocs refer to multiple different OS functions potentially used for flock, and say:

      lockf(3) does not provide shared locking, and requires that the filehandle be open for writing (or appending, or read/writing).

      So yes, it's possible that some Perl ports will fail when opening with <, if Perl was compiled to use lockf.

      Personally, I'm doing it deliberately on Windows (where it works) to make sure my colleagues understand that the lockfile is an empty semaphore only. (Opening for writing with > has its own hazards, because the content will be deleted each time.)

      It really depends on the OS and FS and should be tested for each combination.
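
      A quick probe along those lines (a sketch; the scratch filename is made up) reports whether flock on a read-only handle works on a given OS and filesystem:

          use strict;
          use warnings;
          use Fcntl ':flock';

          my $file = 'flock-probe.tmp';
          open my $W, '>', $file or die "create $file: $!";   # ensure it exists
          close $W;

          open my $R, '<', $file or die "open $file: $!";
          if (flock($R, LOCK_EX | LOCK_NB)) {
              print "flock on a read-only handle works here\n";
          }
          else {
              print "flock on a read-only handle failed: $!\n";
          }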

      Cheers Rolf

      > I am no expert on file locking but I don't think that's possible.

      It is.

          $ touch lock
          $ perl -M5.010 -MFcntl=LOCK_EX -e'
              open(my $fh, "<", "lock") or die $!;
              flock($fh, LOCK_EX) or die $!;
              say "ok";
          '
          ok

      flock locks have nothing to do with reading and writing, and they prevent neither. They merely prevent other flock locks from being obtained.
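
      A sketch illustrating that last point (run it while some other process holds LOCK_EX on "lock"): reading and writing still succeed; only the flock call itself fails.

          use strict;
          use warnings;
          use Fcntl ':flock';

          open my $fh, '+<', 'lock' or die $!;
          print {$fh} "still writable\n";   # succeeds despite the other lock
          flock($fh, LOCK_EX | LOCK_NB)
              or print "but the lock itself is unavailable: $!\n";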


Re: singleton lock not reliable
by LanX (Saint) on Jun 24, 2021 at 08:45 UTC
    Some remarks
    • Your lockfile's path will be resolved relative to your current working directory. I'd suggest using an absolute path to avoid surprises.
    • Your sleep is called without argument, which means it'll sleep forever.
    • Are you sure that a parallel instance should exit? Without LOCK_NB it would wait till the lock is free.
    • You are not using parens around the arguments to flock(); precedence issues with | would make me nervous here. I'd need to look it up...
      it should work in this case, though. (A sketch applying these remarks follows the list.)
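
    Putting those remarks together, a sketch (the lockfile path and sleep time are illustrative):

        use strict;
        use warnings;
        use Fcntl ':flock';

        # Absolute path, so the lock doesn't depend on cron's working directory.
        my $lockfile = '/var/tmp/myjob.lock';
        open my $LF, '>>', $lockfile or die "open $lockfile: $!";

        # Parens make the precedence around | explicit.
        flock($LF, LOCK_EX | LOCK_NB) or exit;

        sleep 60;   # sleep with an argument, not forever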
    Regarding debugging

    You shouldn't rely on ps for debugging. Instead, append (with >>) to (another) absolute_path/logfile whenever your script is

    • started,
    • exiting,
    • working,
    • stopped,
    including process ID and timestamp, as sketched below.
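
    For example (the logfile path is illustrative):

        use strict;
        use warnings;
        use POSIX 'strftime';

        sub logline {
            my ($msg) = @_;
            open my $log, '>>', '/var/tmp/myjob.log' or return;
            print {$log} strftime('%Y-%m-%d %H:%M:%S', localtime),
                         " [$$] $msg\n";
            close $log;
        }

        logline('started');
        # ... do the work ...
        logline('stopped');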

    Please come back if you can really reproduce a problem.

    Maybe of help:

    File lock demo

    Cheers Rolf

Re: singleton lock not reliable
by NERDVANA (Deacon) on Jun 23, 2021 at 23:32 UTC
    One entirely different solution is to move to some kind of process manager (which will guarantee that only one instance runs at a time) and then change your cron entry to tell the process manager to start the task. If the task is already running, the process manager will ignore the redundant request.

    For example, if you use docker (which is a process manager in addition to a container manager) you could create a container that starts, performs the job of the script once, and exits. Then your cron command would be “docker start my-task-name”. Other process managers I like are perp and runit.
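
    A sketch of the cron side, assuming a container named my-task-name has already been created to run the job once and exit:

        # docker refuses to start a container that is already running,
        # so overlapping cron firings are simply ignored.
        * * * * * docker start my-task-name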

      crontab? File locks? docker???

      Writing a simple cyclic thingy isn't that hard. Let's say you have a program that calculates the meaning of life, the universe and everything. It takes a varying time to execute. You want to call it every ten seconds or so, except when it takes longer to run. First, we need the program:

          #!/usr/bin/env perl
          use strict;
          use warnings;

          sleep(int(rand(10) + 5));
          print "42\n";

      Then we write a cyclic executive that keeps track of the run time of the external program (=the scheduler):

          #!/usr/bin/env perl
          use strict;
          use warnings;
          use Time::HiRes qw(time sleep);

          my $cmd = "./meaningoflifetheuniverseandeverything.pl";
          my $cycletime = 10;

          while(1) {
              my $starttime = time;
              `$cmd`;
              my $endtime = time;
              my $timetaken = $endtime - $starttime;
              if($timetaken >= $cycletime) {
                  print "Immediate restart\n";
                  next;
              }
              my $sleeptime = $cycletime - $timetaken;
              print "Sleeping for $sleeptime\n";
              sleep($sleeptime);
          }

      This effectively implements a simple "minimum cycle time" scheduler:

          Sleeping for 0.986798048019409
          Sleeping for 4.98646092414856
          Sleeping for 3.98296499252319
          Immediate restart
          Immediate restart
          Immediate restart
          Immediate restart

      If you want more crontab-like behaviour, with start times aligned to specific times, you can take the current time, calculate the modulo (division remainder) with respect to the cycle time, and then sleep for the difference between that and a full cycle interval. Like this:

          #!/usr/bin/env perl
          use strict;
          use warnings;
          use Time::HiRes qw(time sleep);

          my $cmd = "./meaningoflifetheuniverseandeverything.pl";
          my $cycletime = 10;

          while(1) {
              my $endtime = time;
              my $sleeptime = $cycletime - ($endtime % $cycletime) - 1;
              if($sleeptime) {
                  print "Sleeping for $sleeptime\n";
                  sleep($sleeptime);
              }
              `$cmd`;
          }

      Result:

          Sleeping for 9
          Sleeping for 5
          Sleeping for 9
          Sleeping for 6
          Sleeping for 6
          Sleeping for 7

      This has a jitter of up to one second, but that should be acceptable enough, I think. Depending on your required time zone settings, you might also have to tweak the whole thing by about 37 seconds, plus or minus a couple of leap seconds every few years, something like this:

          my $TAIoffset = -37;
          ...
          my $endtime = time + $TAIoffset;

      But that's probably overkill for most purposes.

        > Writing a simple cyclic thingy isn't that hard.

        One of the advantages of cron is that your job gets started again after the machine is restarted, or after it crashes for whatever reason. Instead of trying to achieve the same with a cyclic thingy, I'd rather go with crontab.

Re: singleton lock not reliable
by davido (Cardinal) on Jun 25, 2021 at 22:23 UTC

    I keep coming back to my first thought, which I dismiss as "that's too easy." But since it keeps entering my mind I'll mention it:

    What if you have two instances of the script itself on your filesystem? Is it possible that your cron invokes a script at one path, and your testing invokes a script in ~/bin/ for example? They would be separate files, so totally different locks.

    I did toy with the relative path consideration, but could never reproduce a situation where invocations from different working directories (hence, different relative paths, but the same absolute path) would result in different locks. If there's only one file that you're running, it's going to be the same thing being locked.


    Dave

      > I did toy with the relative path consideration, but could never reproduce a situation where invocations from different working directories (hence, different relative paths, but the same absolute path) would result in different locks. If there's only one file that you're running, it's going to be the same thing being locked.

      After all, the OS is supposed to resolve all paths to a file to the same thing, probably a device ID and an inode on Unixes. So all ways to name the same file should end up at the same file.
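
      One way to check that (a sketch, not from the original reply): print the device and inode of the script from each invocation and compare.

          use strict;
          use warnings;

          # flock identity follows the file, i.e. the (device, inode) pair:
          # if two invocations print the same pair, they lock the same file.
          my ($dev, $ino) = stat($0);
          print "dev=$dev ino=$ino\n";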

      I think there is a way to obtain two different locks on a single file. It is a very contrived example, and quite silly to use in the real world. The trick is to make the operating system on our machine think we are trying to lock two different files.

      Let's assume that locking "just works" over the net for every single protocol used below. In the real world, this is not always the case.

      Let's put the file to be locked on a fileserver. Export the filesystem on which the file is stored via some protocol to our computer. Locking still works fine with this setup. Now, choose a second protocol to mount the same server filesystem again on a different mount point. Using only the second mount point, locking should still work fine.

      Now, what happens when we use both mount points, i.e. both protocols, to try to lock our file?

      Ideally, both protocol implementations would just transparently pass any locking request to the server's operating system. Nothing would change, except for some overhead, so only one process on our computer could obtain a lock.

      But if at least one of the protocol implementations implements its own file locking mechanism and we try to lock the file using both mount points, neither implementation would know about the other one's locks, and so two independent locks could be obtained for the same file.

      The same construct should also work if one protocol is local filesystem access and the other is a network protocol via the loopback interface that implements its own locking. It should also work with a user-space filesystem (e.g. FUSE) that implements its own locking on top of a local filesystem, in place of the network protocol over loopback.

      Alexander

Re: singleton lock not reliable
by pidloop (Novice) on Jul 17, 2021 at 00:54 UTC
    Many thanks to all who took the time to respond. The key seems to be using a writable lock file. The following version has been running for over a month with no dups, so I'll call it good.
        use strict;
        use warnings;
        use Fcntl ':flock';

        # only one of us
        my $LOCKFN = "/tmp/x.mylock";
        open my $LF, '>', $LOCKFN or die "Can not create $LOCKFN: $!";
        flock ($LF, LOCK_EX | LOCK_NB) or exit;
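
    An optional extension (my suggestion, not part of pidloop's fix): record the winning PID in the lockfile for easier cross-checking with ps. Since every open with '>' truncates the file, even in instances that then lose the lock and exit, this version opens with '>>' and rewrites the file only after the lock is won:

        use strict;
        use warnings;
        use Fcntl ':flock';
        use IO::Handle;   # for autoflush

        my $LOCKFN = "/tmp/x.mylock";
        open my $LF, '>>', $LOCKFN or die "Can not create $LOCKFN: $!";
        flock($LF, LOCK_EX | LOCK_NB) or exit;

        # We hold the lock; now it is safe to rewrite the file.
        truncate $LF, 0;
        seek $LF, 0, 0;
        $LF->autoflush(1);
        print {$LF} "$$\n";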
