http://qs321.pair.com?node_id=162011


in reply to forking in winblows ?

I'm not seeing the point of forking ten processes that all run a dir command on the same directory in parallel. It would make more sense to send each of the processes after a different resource. That leads into your problem: Windows is trying to protect its filesystem by locking access to the directory, so if you run your script enough times, sometimes you get a directory listing and other times you don't. Each process tries to run dir, but they end up deadlocking against each other.

The only reason I can see this working on Unix is the multiuser nature of Unix. Are you running this on a FAT or FAT32 partition? Those were designed for single-user systems, so they don't have options for reading data without locking it. I don't have access to a Windows NT box, but I suspect that if you try it on an NTFS volume it will work.
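
For reference, the pattern being described is presumably something like the following sketch (my own reconstruction; the command string and the child count are placeholders):

use strict;
use warnings;

my $dir_cmd = 'dir C:\\';
my @pids;

for my $i (1 .. 10) {
    my $pid = fork();                 # on Windows, Perl emulates fork() with threads
    die "fork failed: $!" unless defined $pid;
    if ($pid == 0) {                  # child: capture the listing with backticks
        my @output = `$dir_cmd`;      # this is the call that only sometimes returns output
        print "child $i got ", scalar(@output), " lines\n";
        exit 0;
    }
    push @pids, $pid;                 # parent: remember the child
}
waitpid($_, 0) for @pids;             # reap all the children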

Re: Re: forking in winblows ?
by Rex(Wrecks) (Curate) on Apr 25, 2002 at 16:59 UTC
    Actually, your comments on FAT and FAT32 are not quite true, especially when it comes to the dir command. They do allow reading data without locking; it's WRITING that requires a lock.

    And I seriously doubt that the dir command is the issue here. Since system() is being used, dir would behave as normal and simply wait to access any locked data until its (actually the redirector's) internal timeout kicks in, and then barf a semaphore timeout error.

    I ran into the exact same issue trying to fork in Windows. If you want, change the 'dir' to an 'echo' command; chances are you will hit the same thing.

    "Nothing is sure but death and taxes" I say combine the two and its death to all taxes!
Re: Re: forking in winblows ?
by arkamedis21 (Acolyte) on Apr 25, 2002 at 16:31 UTC
    The dir command is just a simplified example of what is causing the problem in my larger script, and I don't think this has anything to do with the file system, because if I were to replace:

    my @output = `$dir_cmd`;

    with a system call

    system("$dir_cmd");

    it works even with 100 processes in parallel. The reason I don't want to use a system call is that I want to capture the output and store it in a file.

    I even looked at trying something like this:

    system ("$dir_cmd >> out.txt");

    but the output comes out all mangled, with so many processes trying to write to the same file at once. So I just store the output in a variable and then copy it to the file within the same child process, and that works for me on UNIX.
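
    A minimal sketch of that approach, assuming all the children append to a single out.txt (the command string and child count are placeholders):

    use strict;
    use warnings;

    my $dir_cmd = 'dir C:\\';

    for my $i (1 .. 10) {
        my $pid = fork();
        die "fork failed: $!" unless defined $pid;
        next if $pid;                      # parent: go start the next child

        my @output = `$dir_cmd`;           # capture everything first
        open my $fh, '>>', 'out.txt' or die "can't append to out.txt: $!";
        print $fh @output;                 # one buffered write of the whole capture
        close $fh;
        exit 0;
    }
    1 while wait() != -1;                  # parent reaps all the children

    Each child writes its whole capture in one go, rather than line by line as the command produces it, which is presumably why this behaves better than the >> redirect inside the command.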

      So why not put the commands into a batch file and redirect their output to a file?
      test.bat:

      dir c:\some\path\ >> c:\path\to\test.txt

      Then use the system call and read the output in Perl:

      system("c:/path/to/test.bat");
      open(OUTPUT, "<c:/path/to/test.txt") || die "couldn't open ...yadda...";
      @output = <OUTPUT>;
      close(OUTPUT);

      Just my $0.02.

      Matthew Musgrove
      Who says that programmers can't work in the Marketing Department?
      Or is that who says that Marketing people can't program?
      I even looked at trying something like this:
      system ("$dir_cmd >> out.txt");
      but the output comes all mangled up, with so many processes trying to write to the same file

      Did you try something like this:

      system( "$dir_cmd > out$i.txt" );
      so that each thread would write to a different output file?
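
      For what it's worth, a rough sketch of how that could hang together (the out$i.txt names and the read-back loop are just my illustration):

      use strict;
      use warnings;

      my $dir_cmd = 'dir C:\\';
      my @files;

      for my $i (1 .. 10) {
          my $pid = fork();
          die "fork failed: $!" unless defined $pid;
          if ($pid == 0) {
              system("$dir_cmd > out$i.txt");    # each child writes only its own file
              exit 0;
          }
          push @files, "out$i.txt";
      }
      1 while wait() != -1;                      # wait for every child to finish

      for my $file (@files) {                    # parent gathers the results afterwards
          open my $fh, '<', $file or die "can't open $file: $!";
          my @output = <$fh>;
          close $fh;
          # ... do whatever you need with @output ...
      }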

      Or, since you just want to read output from commands that are running in subshells, have you looked at creating an array of file handles that open "$dir_cmd |"? I haven't tried it, but I expect it would be possible (maybe even really simple) to loop over that array of handles doing non-blocking reads until they're all done. (But I'm not running my Windows partition just now, so I'd have to try it some other time...)
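
      Here's an untested sketch of the sort of thing I mean, using IO::Select for the non-blocking part (note that on Windows, select() may only work on sockets, so this piece might need adjusting there):

      use strict;
      use warnings;
      use IO::Select;

      my $dir_cmd = 'dir C:\\';
      my $sel     = IO::Select->new;
      my %output;                              # collected output, keyed by handle

      for my $i (1 .. 10) {
          open my $fh, "$dir_cmd |" or die "can't start '$dir_cmd': $!";
          $sel->add($fh);
          $output{$fh} = '';
      }

      while ($sel->count) {
          for my $fh ($sel->can_read) {        # handles that have data ready
              my $n = sysread $fh, my $chunk, 4096;
              if ($n) {
                  $output{$fh} .= $chunk;      # accumulate this command's output
              }
              else {                           # 0 bytes (or an error) means the command is done
                  $sel->remove($fh);
                  close $fh;
              }
          }
      }
      # each value in %output now holds one command's complete listing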