Re: Win32: Setting a layer with binmode causes problem with close() on Windows (PerlIO silently fails to close the file)

by BrowserUk (Patriarch)
on Jun 17, 2013 at 10:39 UTC


in reply to Win32: Setting a layer with binmode causes problem with close() on Windows

First up: PerlIO layers are definitely a part of this problem. Commenting out the binmode makes it go away (as you already know).

But, it is (much) more complicated than that. At the point where the unlink fails, (at least) two processes are hanging on to handles to that file:

C:\test>junk44
Permission denied : The process cannot access the file because it is being used by another process at C:\test\junk44.pl line 30.

perl.exe    pid: 17320  PB-IM2525-AIO\HomeAdmin : 60: File  (RW-)  C:\test\ttz
cmd.exe     pid: 17324  PB-IM2525-AIO\HomeAdmin : 60: File  (RW-)  C:\test\ttz
handle.exe  pid: 16152  PB-IM2525-AIO\HomeAdmin : 60: File  (RW-)  C:\test\ttz
  1. perl.exe is the one running the script.
  2. handle.exe is the process that is doing this discovery.
  3. cmd.exe is (one of) the shell(s) used to run the echo command that created the file.

    Further muddying the waters here is your prefixing the command you want to run with 'cmd /c'.

    Because the system code detects that you are using a shell metachar '>' in the command, it automatically prefixes the command you supply with 'cmd.exe /x/d/c'.

    So the actual command being run is:

    cmd.exe /x/d/c "cmd /c echo xx >$file"

    Doing away with that doesn't fix the problem, but it makes it less complex.
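
    For what it's worth, here is one hedged way of doing away with the extra shell (my sketch, not code from the original post): with the list form of system, perl does not scan the string for metachars, so it does not add its own 'cmd.exe /x/d/c' wrapper; the single cmd instance named in the list handles the redirection itself.

    use strict;
    use warnings;

    my $file = 'ttz';

    # List form: perl hands these arguments straight to the one cmd.exe
    # named here, instead of spotting the '>' and wrapping the whole
    # string in a second 'cmd.exe /x/d/c'.
    system( 'cmd', '/c', "echo xx > $file" ) == 0
        or die "system failed: $?";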

Also, running the command to create the file from within the script is confusing things and there is no need for it.

This simplified version of your script:

use strict;
use warnings;

my $file = 'ttz';

open( my $fh, $file ) or die "open error $!";
binmode( $fh, ':unix' );
close( $fh ) or die "close error $!";

if( !unlink( $file ) ) {
    warn $!, ' : ', $^E;
}

exhibits exactly the same behaviour when the file is pre-created:

## In a different session from the one in which I will run my modified version of your script
C:\test>echo xx > ttz

C:\test>handle | find "ttz"    ## shows that immediately after creation, nothing has an open handle to that file

## Now in the other session
C:\test>junk44
Permission denied : The process cannot access the file because it is being used by another process at C:\test\junk44.pl line 12.

## And back in the first session whilst the 10 second sleep is running
C:\test>handle | find "ttz"
   60: File  (RW-)   C:\test\ttz

Only one process has a handle to the file, and that process is Perl itself.

(Tentative) Conclusion: The error message is wrong, or at least misleading. The "other process" that is preventing the unlink is actually the same process that is trying to perform the unlink.

Essentially, the close has failed (silently), or has simply not been enacted, and so the unlink cannot proceed because there is an open handle to the file.
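
If the layer pushed by the binmode is what is keeping the OS handle alive, one thing that might be worth experimenting with -- purely a hedged sketch of mine, not something tested in this thread -- is popping that layer again before the close, using the (experimental) ':pop' pseudo-layer documented in PerlIO:

use strict;
use warnings;

my $file = 'ttz';

open( my $fh, $file ) or die "open error $!";
binmode( $fh, ':unix' );

# ... read from $fh here ...

binmode( $fh, ':pop' );    # remove the layer we pushed (experimental pseudo-layer)
close( $fh ) or die "close error $!";

unlink( $file ) or warn $!, ' : ', $^E;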

Tracking this further means delving into IO layers ... why did the close fail silently?
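
As a first step in that direction, a minimal diagnostic (my sketch, not from the post) is to dump the layer stack around the binmode call with PerlIO::get_layers, to see exactly what ':unix' is being pushed on top of:

use strict;
use warnings;

my $file = 'ttz';

open( my $fh, $file ) or die "open error $!";

# PerlIO::get_layers() ships with perl's PerlIO support; it returns the
# names of the layers currently on the handle, bottom-most first.
print "before binmode: @{[ PerlIO::get_layers( $fh ) ]}\n";

binmode( $fh, ':unix' );
print "after  binmode: @{[ PerlIO::get_layers( $fh ) ]}\n";

close( $fh ) or die "close error $!";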


With the rise and rise of 'Social' network sites: 'Computers are making people easier to use everyday'
Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
"Science is about questioning the status quo. Questioning authority".
In the absence of evidence, opinion is indistinguishable from prejudice.

Replies are listed 'Best First'.
Re^2: Win32: Setting a layer with binmode causes problem with close() on Windows (PerlIO silently fails to close the file)
by rovf (Priest) on Jun 17, 2013 at 11:16 UTC
    Good analysis, but I think you have it wrong on one point: at the time when the unlink fails, no other processes that have a hold on the file are still running. The one that created the file is not running any more (since system waits for the child process to finish), and, just for completeness, the process deleting the file has not started yet.

    BTW, neither system call exists in my original code in this form (in my application, the file is created on a Unix host asynchronously, and then read and deleted by the Windows process). I introduced them in the example for the following reasons:

    • I wanted to create the file from a separate process, to make sure that my Perl program "had not seen" this file before, making the situation more similar to my original application.
    • After the unlink fails, I added an explicit cmd /c del..., because its error message was clearer than what was stored in $!. In hindsight, I probably could have printed $^E instead.

    -- 
    Ronald Fischer <ynnor@mm.st>
      I think you have it wrong on one point: at the time when the unlink fails, no other processes that have a hold on the file are still running.

      Here's the problem. You know the way you have to fork twice under *nix in order to daemonise a process -- the first fork inherits loads of handles (stdin, stdout, stderr etc.) from its parent, so you then close them and fork again to get a process that is truly independent of its parent -- well, similar things can happen under Windows.
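
      (For reference, a minimal sketch of that classic *nix double fork -- standard technique, not code from this thread:)

      use strict;
      use warnings;
      use POSIX qw( setsid );

      # First fork: the original parent exits immediately.
      defined( my $pid = fork() ) or die "fork: $!";
      exit 0 if $pid;

      # The first child detaches from the controlling terminal...
      setsid() or die "setsid: $!";

      # ...and forks again, so the grandchild can never reacquire one.
      defined( $pid = fork() ) or die "fork: $!";
      exit 0 if $pid;

      # The grandchild drops the std handles it inherited from the original parent.
      open STDIN,  '<',  '/dev/null' or die "STDIN: $!";
      open STDOUT, '>>', '/dev/null' or die "STDOUT: $!";
      open STDERR, '>>', '/dev/null' or die "STDERR: $!";

      # ... daemon work goes here ...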

      system starts a new process that inherits lots of stuff from its parent. When it dies, if the parent is still around, many of those shared (duped) handles have to be retained within the kernel -- waiting for all their duplicates to be marked for delete -- and even though the process has been removed from the system scheduler, those retained, open, shared handles will still be attributed to that now defunct process. So, the fact that system has returned does not mean all of its resources have been cleaned up.
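
      (If you ever need to avoid that inheritance altogether, one hedged option -- my sketch, not something from this thread, assuming the Win32::Process module is available -- is to spawn the child yourself with handle inheritance switched off:)

      use strict;
      use warnings;
      use Win32;
      use Win32::Process;

      # The 4th argument to Create() is the bInheritHandles flag; passing 0
      # means the child gets no duplicates of the parent's open handles.
      my $proc;
      Win32::Process::Create(
          $proc,
          $ENV{ComSpec},                  # usually C:\Windows\system32\cmd.exe
          'cmd /c echo xx > ttz',
          0,                              # do NOT inherit handles
          NORMAL_PRIORITY_CLASS,
          '.',
      ) or die Win32::FormatMessage( Win32::GetLastError() );

      $proc->Wait( INFINITE );            # behave like system(): wait for the child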

      My simplified version of the test script simply removes all of those possibilities and demonstrates that the only process that could have a handle to the file is the perl process itself, which is then verified using an external tool (handle.exe).

      Thus it is the close that is failing silently.


      With the rise and rise of 'Social' network sites: 'Computers are making people easier to use everyday'
      Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
      "Science is about questioning the status quo. Questioning authority".
      In the absence of evidence, opinion is indistinguishable from prejudice.
        So, the fact that system has returned does not mean all of its resources have been cleaned up.
        Oh my! I didn't think about this issue! But this would mean that it is unsafe to have an (external) process create a file and then use it in my program - at least under Windows, which is very picky about this kind of stuff? If I understand you right, this should even be true if we use IPC::Run to run the process. OTOH, this scenario - running a utility to create a file, then using it - is so common that I wonder why we are not bitten by it more often. Or is there a clever, safe way to achieve the goal?

        -- 
        Ronald Fischer <ynnor@mm.st>
