(Ovid) Re: Writing to a file

by Ovid (Cardinal)
on Aug 20, 2001 at 23:35 UTC


in reply to Writing to a file

Well, it's tough to know exactly how to do that since we don't know much about the scripts and how they're writing to the file, but how about having them write to separate files and then cat them together when you're done?

If you time and date stamp the log entries, you could write a perl program to sort and combine them for you.
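
For instance, a minimal sketch of such a combiner, assuming each log line begins with a lexically sortable timestamp prefix (e.g. "2001-08-20 23:35:12 ..." — the exact format is up to you):

    #!/usr/bin/perl -w
    # merge_logs.pl - combine several log files into one, ordered by timestamp.
    # Usage: perl merge_logs.pl log1 log2 ... > combined.log
    use strict;

    my @entries;
    for my $file (@ARGV) {
        open(my $fh, '<', $file) or die "Can't open $file: $!";
        push @entries, <$fh>;
        close($fh);
    }

    # A plain string sort works because the timestamp prefix sorts lexically.
    print sort @entries;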

Cheers,
Ovid

Vote for paco!

Join the Perlmonks Setiathome Group or just click on the link and check out our stats.

Replies are listed 'Best First'.
Re: (Ovid) Re: Writing to a file
by jalebie (Acolyte) on Aug 20, 2001 at 23:50 UTC
    The problem with all your solutions is that they want me to write to multiple files and combine them. The only problem with that is that myscript.prl is actually being called locally on different workstations via
    system("rsh $wks myscript.prl >> $tmp_file");
    and we have over 200,000 workstations here where the script is supposed to run. I thought about writing to different files too, but the sheer number of temp log files generated makes this impractical, and the extra code to put these files back together by date/time stamp and then unlink("$tmp_file") is also needed. I was wondering if there is a way in Perl to know if the file is currently being written to, and if it is, to wait until no other process is writing to it.
      Within "myscript.prl" instead of just printing and capturing STDOUT to $tmp_file, open it instead and write to it. You will want to checkout flock which may help prevent the overwriting problem. Still, if you have hundreds of thousands of processes/machines all trying to write to the same file, you are creating a huge bottleneck. What about running the command on each machine as you appear to want to do, but write it to a local temporary accumulation file. Then either retrieve each one, or send them to a common queue (on a periodic basis) where a second process can collate them into this one behemoth file you desire? Just a thought.

      -THRAK
      www.polarlava.com
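
      A minimal sketch of that flock idea, assuming myscript.prl can reach the shared log at some path like /share/logs/master.log (a hypothetical path) and appends one line per run:

          use strict;
          use Fcntl qw(:flock);
          use Sys::Hostname;

          my $logfile = '/share/logs/master.log';
          open(LOG, '>>', $logfile) or die "Can't open $logfile: $!";
          flock(LOG, LOCK_EX)       or die "Can't lock $logfile: $!";
          seek(LOG, 0, 2);          # re-seek to end in case another writer appended first
          print LOG scalar(localtime), ' ', hostname(), ": run completed\n";
          flock(LOG, LOCK_UN);
          close(LOG);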
      There is flock, which would lock the file. But each process has to check the flock status, and I'm not very conversant with how that works.

      Now, what you're saying is that you're going to run this script on separate workstations. Why not just run it, store the logfile locally, then have another script which gathers together all the data?

      ------
      /me wants to be the brightest bulb in the chandelier!

      Vote paco for President!
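
      A minimal sketch of the gather step, assuming each workstation writes its log to a known local path (/tmp/myscript.log here, a hypothetical choice) and that rcp works against each host:

          #!/usr/bin/perl -w
          # gather_logs.pl - pull each workstation's local log back to a central directory.
          use strict;

          my @workstations = qw(wks001 wks002 wks003);   # in practice, read the 200,000 names from a file
          for my $wks (@workstations) {
              # one file per host; collate/sort them afterwards (or feed them to the combiner above)
              system("rcp $wks:/tmp/myscript.log /var/log/collect/$wks.log") == 0
                  or warn "rcp from $wks failed: $?\n";
          }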

        Something in the back of my mind is nagging at me, telling me there might be problems with using flock over NFS (I assume you're using NFS if all these workstations are writing to the same file?).

        Another potential approach: how about saving the files as

        localtime().hostname.extension?

        That way, you can sort the files into date/time order simply by doing an ls -l.

        Note - this would assume that all the machines are synchronized to the same clock.
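
        A minimal sketch of that naming scheme, using POSIX strftime for a lexically sortable timestamp and Sys::Hostname for the host part (raw localtime() output contains spaces and colons, which are awkward in filenames):

            use strict;
            use POSIX qw(strftime);
            use Sys::Hostname;

            # e.g. 20010820-233500.wks042.log - sorts into date/time order under ls
            my $name = strftime('%Y%m%d-%H%M%S', localtime) . '.' . hostname() . '.log';
            open(OUT, '>', $name) or die "Can't create $name: $!";
            print OUT "log entry here\n";
            close(OUT);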

      Couldn't you use the users' home directories as a location for the temp file, and then run one script to comb the home dirs and conglomerate them all into a master file?

      -OzzyOsbourne
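
      A minimal sketch of that comb-and-conglomerate step, assuming the temp files sit at a predictable spot under each home directory (/home/*/myscript.tmp is a hypothetical path):

          use strict;

          open(MASTER, '>>', '/var/log/master.log') or die "Can't open master log: $!";
          for my $tmp (glob '/home/*/myscript.tmp') {
              open(IN, '<', $tmp) or next;       # skip users who haven't produced a log yet
              print MASTER <IN>;                 # append this user's entries to the master file
              close(IN);
          }
          close(MASTER);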
