http://qs321.pair.com?node_id=714818

suaveant has asked for the wisdom of the Perl Monks concerning the following question:

I have a report writer that was designed to fork for better speed: it takes chunks of the report and runs them in parallel, writing the results to a file which is later sorted. When we were on Solaris we had no known issues with this, but when we moved to Linux we suddenly started getting inconsistencies in the data. Basically, lines were getting cut off, lost, etc.; it would seem the children were stomping on each other's toes.

I have been hacking at this for half a day and nothing is working. I build chunks of data (all the lines for my child's output), flock the output file, and write the chunk to it. Originally I used open and print with flock, then I tried manually flushing the file handle (which apparently flock already does anyway), then I tried converting everything to sysopen and syswrite. I made sure LOCK_EX and LOCK_UN were defined... I just can't figure out what is going on. I thought flock was fine as long as everything was using it...

Here is the code snippet I am currently using to no avail:

    flock( $rpt, LOCK_EX );
    # seek($rpt, 0, 2); # found this in PerlIO, didn't help; sysseek broke everything for some reason
    syswrite( $rpt, $repout, length($repout) );
    flock( $rpt, LOCK_UN );
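
For reference, the lock constants come from Fcntl; making sure LOCK_EX and LOCK_UN were defined amounts to:

    use Fcntl qw(:flock);   # exports LOCK_SH, LOCK_EX, LOCK_UN, LOCK_NB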
Any ideas? Or, even better, a fork-safe way to write data to the file?
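
One variation I've been considering but haven't fully tested: have each child sysopen its own handle after the fork (so no stdio buffer or file offset is shared across processes) and open with O_APPEND so every write lands at end-of-file. A minimal sketch, with a placeholder path and stand-in chunk data:

    use strict;
    use warnings;
    use Fcntl qw(:flock O_WRONLY O_APPEND O_CREAT);

    my $file = '/tmp/report.out';    # placeholder path for the report file

    for my $chunk (1 .. 4) {
        my $pid = fork();
        die "fork failed: $!" unless defined $pid;
        next if $pid;                # parent loops on to fork the next child

        # --- child: open a private handle *after* the fork ---
        my $repout = "data for chunk $chunk\n";   # stand-in for the real chunk
        sysopen( my $rpt, $file, O_WRONLY | O_APPEND | O_CREAT )
            or die "sysopen: $!";
        flock( $rpt, LOCK_EX ) or die "flock: $!";
        defined syswrite( $rpt, $repout, length $repout )
            or die "syswrite: $!";
        flock( $rpt, LOCK_UN );
        close $rpt;
        exit 0;                      # child must exit, not fall back into the loop
    }

    1 while wait() != -1;            # parent reaps all children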

                - Ant
                - Some of my best work - (1 2 3)