Re: General perl question. Multiple servers.

by shmem (Chancellor)
on Oct 06, 2007 at 15:01 UTC


in reply to General perl question. Multiple servers.

I'd set up a syslog server and have each process send a UDP packet to this server after process completion.
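For illustration, a minimal sketch of such a completion packet built on core modules only; the log host name, port 514 and the local0 facility are assumptions, not anything given in the thread:

    #!/usr/bin/perl
    # sketch: send a syslog-style "job finished" datagram to a central log host
    use strict;
    use warnings;
    use IO::Socket::INET;
    use Sys::Hostname;
    use POSIX qw(strftime);

    my $loghost = 'loghost.example.com';    # assumed central syslog server
    my $sock = IO::Socket::INET->new(
        Proto    => 'udp',
        PeerAddr => $loghost,
        PeerPort => 514,                    # standard syslog port
    ) or die "cannot create UDP socket: $!";

    my $pri  = 16 * 8 + 6;                            # local0.info (RFC 3164 PRI)
    my $time = strftime('%b %e %H:%M:%S', localtime); # e.g. "Oct  6 15:01:00"
    my $msg  = sprintf '<%d>%s %s batchjob[%d]: finished OK',
                       $pri, $time, hostname(), $$;

    $sock->send($msg) or warn "send failed: $!";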

--shmem

_($_=" "x(1<<5)."?\n".q·/)Oo.  G°\        /
                              /\_¯/(q    /
----------------------------  \__(m.====·.(_("always off the crowd"))."·
");sub _{s./.($e="'Itrs `mnsgdq Gdbj O`qkdq")=~y/"-y/#-z/;$e.e && print}

Re^2: General perl question. Multiple servers.
by dsheroh (Monsignor) on Oct 06, 2007 at 15:23 UTC
    A syslog server is a very good (and, IME, very underused) solution.

    Alternately, if that's not sexy enough to get management buy-in, you could instead have all the processes log to a central database, but that would mostly be pointless overhead unless you're already using a database (and may still be pointless overhead even if you are).
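    A sketch of what that could look like with DBI; the DSN, credentials and job_log table here are invented for illustration:

        #!/usr/bin/perl
        # sketch: each job writes one completion row to a shared database
        use strict;
        use warnings;
        use DBI;
        use Sys::Hostname;

        my $dbh = DBI->connect(
            'dbi:mysql:database=jobs;host=dbhost.example.com',   # assumed DSN
            'joblogger', 'secret',
            { RaiseError => 1, AutoCommit => 1 },
        );

        # assumed schema: job_log(host, pid, status, finished_at)
        $dbh->do(
            'INSERT INTO job_log (host, pid, status, finished_at) VALUES (?, ?, ?, NOW())',
            undef, hostname(), $$, 'ok',
        );

        $dbh->disconnect;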

Re^2: General perl question. Multiple servers.
by graff (Chancellor) on Oct 06, 2007 at 15:11 UTC
    ++ Much better than my idea below, but there would need to be a reliable way to identify the cases where any of the 150 processes fail before they get to the point of sending their UDP packet to the log server. Not hard to handle, just easy to forget...

    update: On second thought, if the log data from each host is anything more than a single summary report printed at the end of each job, I would still kinda prefer my approach. If the jobs are printing progress reports at intervals, the entries submitted to a central syslog server will tend to be interleaved, and will need to be sorted out. Not a big deal, obviously, but it might be handier to have the stuff "pre-sorted" by harvesting from each machine.

      there would need to be a reliable way to identify the cases where any of the 150 processes fail

      A 'job started' message could be sent by a wrapper which looks after the process and reports its exit status.
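      A rough sketch of such a wrapper (the 'batchjob' ident and the local0 facility are made up); it logs through the local syslog daemon, which would then forward to the central server:

          #!/usr/bin/perl
          # sketch: log "started", run the real job, then log how it exited
          use strict;
          use warnings;
          use Sys::Syslog qw(:standard);

          my @cmd = @ARGV or die "usage: $0 command [args...]\n";

          openlog('batchjob', 'pid', 'local0');
          syslog('info', 'started: %s', "@cmd");

          system(@cmd);
          my $status = $?;

          if ($status == -1) {
              syslog('err', 'failed to execute: %s', $!);
          } elsif ($status & 127) {
              syslog('err', 'died with signal %d', $status & 127);
          } else {
              syslog('info', 'exited with status %d', $status >> 8);
          }
          closelog();

          exit(($status == -1 || ($status & 127)) ? 1 : $status >> 8);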

      the entries submitted to a central syslog server will tend to be interleaved

      syslog is configurable, and one could send the log messages to different files based on level/facility and host. Anyway, each log line is marked with the host that sent it, so sorting things out is as easy as grepping the log file for a host.
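      A small sketch of that per-host filtering, assuming the traditional "Mon DD HH:MM:SS hostname tag: message" line format and a made-up log file path:

          #!/usr/bin/perl
          # sketch: print only the lines a given host sent to the combined log
          use strict;
          use warnings;

          my $host = shift or die "usage: $0 hostname [logfile]\n";
          my $log  = shift || '/var/log/jobs.log';    # assumed central log file

          open my $fh, '<', $log or die "cannot open $log: $!";
          while (my $line = <$fh>) {
              # the hostname is the fourth whitespace-separated field
              my @f = split ' ', $line, 5;
              print $line if @f >= 4 && $f[3] eq $host;
          }
          close $fh;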

      --shmem

Re^2: General perl question. Multiple servers.
by snopal (Pilgrim) on Oct 06, 2007 at 15:33 UTC

    It is interesting, the different backgrounds from which we all come. I'm predominantly used to applying solutions to the assets I've been given, while other people come from backgrounds where adding a server here or there is considered trivial.

    It's good to get both perspectives.
