http://qs321.pair.com?node_id=381156

Asgaroth has asked for the wisdom of the Perl Monks concerning the following question:

Hi Monks,

I would appreciate some of the infinite wisdom that is at the Monastery Gates.

Firstly, is it at all possible to use a combination of the Thread::Conveyor module and the Parallel::ForkManager module? I am assuming that this is indeed possible.

Secondly, how would I use this in a thread "consumer" subroutine?

Basically, I have a Thread::Conveyor belt that is constantly being populated with the names of files that need to be compressed. The "consumer" thread is working just fine: it compresses all of these files. However, I need to perform multiple compressions at once, and I am hoping to achieve this by implementing the Parallel::ForkManager module within the "consumer" thread.
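
For context, the wiring around this consumer looks roughly like the sketch below (simplified; the producer logic, the logger thread, and the RUN_LOG constant live elsewhere in my program, and the example filename is just for illustration):

    use threads;
    use Thread::Conveyor;
    use Parallel::ForkManager;
    use Compress::Zlib;
    use English;

    # Belt of filenames waiting to be compressed.
    my $archive_queue = Thread::Conveyor->new;
    # Belt of log messages, consumed by a separate logger thread.
    my $message_queue = Thread::Conveyor->new;

    # Consumer thread running the subroutine posted below.
    my $compress_thread = threads->create(\&compress_logs);

    # Producer side (elsewhere): filenames are dropped onto the belt.
    $archive_queue->put("/some/path/logfile.20040810");   # example filename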

The code for the subroutine follows:

    sub compress_logs() {
        $compress_process_manager = new Parallel::ForkManager(4);
        # Take filenames off the belt until the belt is shut down.
        while ( my $filename = $archive_queue->take ) {
            if ( defined($filename) ) {
                # Fork a child for this file; the parent skips to the next file.
                my $pid = $compress_process_manager->start and next;
                next unless -e $filename;
                next unless -e $filename . ".sum";
                next if -e $filename . ".gz";
                $message_queue->put(RUN_LOG, "Received $filename For Compression");
                $message_queue->put(RUN_LOG, "Beginning Compression Of $filename");
                $message_queue->put(RUN_LOG, "Reading $filename Into Memory");
                # Slurp the whole file into memory.
                my $string = '';
                open(FH, "<$filename") or die "Could not open $filename (${OS_ERROR})";
                binmode(FH);
                while (<FH>) { $string .= ${ARG} }
                close(FH);
                $message_queue->put(RUN_LOG, "Completed Reading $filename Into Memory");
                $message_queue->put(RUN_LOG, "Compressing $filename Memory Image");
                my $dest = Compress::Zlib::memGzip($string);
                $message_queue->put(RUN_LOG, "Completed Compressing $filename Memory Image");
                $message_queue->put(RUN_LOG, "Flushing $filename Memory Image To Disk");
                open(FH, ">$filename.gz") or die "Could not open $filename.gz (${OS_ERROR})";
                binmode(FH);
                print FH $dest;
                close(FH);
                undef($string);
                $message_queue->put(RUN_LOG, "Completed Flushing $filename Memory Image To Disk");
                # Remove the original file and its checksum once compressed.
                $message_queue->put(RUN_LOG, "Removing $filename After Compression");
                unlink($filename);
                $message_queue->put(RUN_LOG, "Completed Removing $filename After Compression");
                $message_queue->put(RUN_LOG, "Removing $filename.sum After Compression");
                unlink($filename . ".sum");
                $message_queue->put(RUN_LOG, "Completed Removing $filename.sum After Compression");
                $message_queue->put(RUN_LOG, "Completed Compression Of $filename");
                $compress_process_manager->finish;
            }
        }
    }


If I comment out all the Parallel::ForkManager related statements, this subroutine works as it should; with them in as posted above, it does not appear to compress anything.
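
For reference, with the ForkManager lines commented out, the consumer reduces to this shape, which does compress everything:

    while ( my $filename = $archive_queue->take ) {
        next unless defined $filename;
        next unless -e $filename;
        next unless -e "$filename.sum";
        next if -e "$filename.gz";
        # ... same slurp / memGzip / write / unlink steps as above ...
    }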

Am I missing something obvious here, or is there some fundamental design flaw in the above code?

Don't worry about the approach of reading the data files into memory and then flushing to disk; the system this is running on is more than capable of handling the memory requirements. However, suggestions on how to improve the above subroutine would be appreciated. Most of it is just queueing messages to another thread, which writes the logs to a file.
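
For illustration, one variation I have considered (a sketch only, not what I am currently running) would compress in chunks using Compress::Zlib's gzopen interface rather than slurping, in case memory ever becomes a concern:

    use Compress::Zlib;    # exports gzopen() and $gzerrno

    # Hypothetical chunked variant of the read/compress/write steps.
    sub compress_file_streaming {
        my ($filename) = @_;
        open(my $in, "<", $filename) or die "Could not open $filename ($!)";
        binmode($in);
        my $gz = gzopen("$filename.gz", "wb")
            or die "Could not open $filename.gz ($gzerrno)";
        my $buffer;
        while ( read($in, $buffer, 64 * 1024) ) {
            $gz->gzwrite($buffer) or die "gzwrite failed ($gzerrno)";
        }
        $gz->gzclose();
        close($in);
    }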

Your help in this would be *greatly* appreciated.

Thanks
Asgaroth