PerlMonks  

Ressource dispatcher

by jeepj (Scribe)
on Mar 05, 2008 at 11:31 UTC ( [id://672147]=perlquestion )

jeepj has asked for the wisdom of the Perl Monks concerning the following question:

Hello monks,

The situation: I am working for a company that uses a mainframe system (3270 sessions), and for some massive correction activities we have a Windows server running several IBM Personal Communications terminal emulators. These terminals are driven by Perl scripts through the OLE interface (a solution equivalent to the one described in Interact with mainframe screen).

The problem is that we want to launch several scripts in parallel, each one using two or more terminals.
We have a limited number of terminals, and to avoid having to hard-code in each script which terminals to use, I am thinking about a way to dispatch the terminals to the scripts on demand.

My idea is to write a Perl script, running as a daemon, whose task would be to hand out available session IDs on demand and to be informed by the scripts when sessions are no longer in use (so they are returned to the pool of available sessions).

My questions are:

  • What do you think of this solution in general? Do you have other ideas on how to do it?
  • How should the scripts communicate with the daemon? I'm thinking about using local sockets.
  • How should the daemon keep track of the available and unavailable terminal IDs? In memory is easier, but if the daemon restarts, the information would be lost...
Thank you in advance for your help, and don't hesitate to help me improve my post/questions, as this is the first time I'm posting on PerlMonks.
JeePj
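A minimal sketch of such a dispatcher daemon, assuming a localhost TCP socket. The session IDs 'A'..'E', the port, and the one-line ACQUIRE/RELEASE protocol are all invented for illustration; the daemon loop only starts when the script is run with --serve, so the protocol handler can be exercised on its own:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use IO::Socket::INET;

# Pool of free terminal session IDs (hypothetical; adjust to your setup).
my @free = ('A' .. 'E');
my %in_use;

# Handle one request line of the (invented) ACQUIRE / RELEASE <id> protocol.
sub handle {
    my ($line) = @_;
    if ($line eq 'ACQUIRE') {
        my $id = shift @free or return 'BUSY';
        $in_use{$id} = time;            # remember when it was handed out
        return "OK $id";
    }
    if ($line =~ /^RELEASE (\S+)$/ and delete $in_use{$1}) {
        push @free, $1;                 # back into the pool
        return 'OK';
    }
    return 'ERR';
}

# Daemon loop: one short-lived connection per request.
sub run_daemon {
    my $server = IO::Socket::INET->new(
        LocalAddr => '127.0.0.1',
        LocalPort => 9999,
        Listen    => 5,
        Reuse     => 1,
    ) or die "cannot listen: $!";
    while (my $client = $server->accept) {
        chomp(my $line = <$client> // '');
        print $client handle($line), "\n";
        close $client;
    }
}

run_daemon() if grep { $_ eq '--serve' } @ARGV;
```

The state lives only in memory here, which is exactly the restart problem raised above; the replies below suggest ways to make it persistent.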

Replies are listed 'Best First'.
Re: Ressource dispatcher
by Corion (Patriarch) on Mar 05, 2008 at 13:20 UTC

    My approach to such interprocess communication is to use a database (SQLite is very lightweight and well suited for this). This relieves you of all the pesky problems of locking and gives you a convenient tool to query the status of every 3270 session and the job(s) running on it. You can also use the job queue as a journal, so you can even track which jobs (with their parameters) were run at what time, which gives you persistence.

    Of course, if you plan to have the system pick up where it left off after a restart, you will have to put some energy into creating "proper" job IDs, so you know which parameters are needed to restart a job that was running but did not finish.
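    A sketch of this SQLite approach, assuming DBD::SQLite is installed; the table layout, column names, and database file name are illustrative, not anything from the thread:

```perl
use strict;
use warnings;
use DBI;

my $dbh = DBI->connect('dbi:SQLite:dbname=sessions.db', '', '',
                       { RaiseError => 1, AutoCommit => 1 });

# One row per 3270 session; job is NULL while the session is free.
$dbh->do(q{
    CREATE TABLE IF NOT EXISTS sessions (
        id      TEXT PRIMARY KEY,   -- terminal session ID
        job     TEXT,               -- script currently using it, NULL if free
        started INTEGER             -- epoch time the job claimed it
    )
});

# Claim one free session for a job; returns the session ID or undef.
sub claim_session {
    my ($job) = @_;
    my ($id) = $dbh->selectrow_array(
        'SELECT id FROM sessions WHERE job IS NULL LIMIT 1');
    return undef unless defined $id;
    my $rows = $dbh->do(
        'UPDATE sessions SET job = ?, started = ? WHERE id = ? AND job IS NULL',
        undef, $job, time, $id);
    return $rows == 1 ? $id : undef;   # lost a race: caller simply retries
}

sub release_session {
    my ($id) = @_;
    $dbh->do('UPDATE sessions SET job = NULL, started = NULL WHERE id = ?',
             undef, $id);
}
```

    Because the state lives in a database file rather than in a daemon's memory, a restart loses nothing, which addresses the persistence question in the original post.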

Re: Ressource dispatcher
by pc88mxer (Vicar) on Mar 05, 2008 at 14:14 UTC
    Is automatic reclamation of the session in the event of your Perl script exiting abnormally an issue? I.e., if the script using a session dies, do you want the session released automatically?

    If so, consider keeping track of the used sessions with locks on some OS resource - like a file. When the process holding the lock exits, the lock will automatically be released. Otherwise this is very similar to the database approach except that you would query the status of the sessions via the OS with commands like fuser and lsof instead of using SQL.

    Another possibility is to use the advisory locking feature provided by MySQL (GET_LOCK). It lets you 'lock' an arbitrary string, and the lock is released when the database connection is closed.

    I would first explore solutions where you don't have to write your own daemon. This means you'll be putting the session-acquisition code in your Perl scripts, and scripts waiting to run will have to poll the pool periodically for a free session, but it's easy to implement and might work well enough.

      Thank you Corion and pc88mxer. I considered both solutions and finally decided to go for a file-locking mechanism.

      The code lives in a common library, so it is easily maintained, and this avoids writing a daemon. I am using the flock() function.

      Furthermore, resource consumption is quite high when 15 or 20 scripts are running in parallel, so avoiding the database approach was the better option (even if SQLite is quite light on memory and CPU usage).

      The persistence aspect, knowing which script is running on a given session, is handled by the library: when a script requests a session, the function granting the lock also writes the script's name and a timestamp into the file.
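      A sketch of such a library, assuming one lock file per terminal; the directory layout, session IDs, and file format are assumptions, not jeepj's actual code. The OS drops a flock() automatically when the holding process exits, which gives the crash-reclamation behaviour pc88mxer asked about:

```perl
use strict;
use warnings;
use Fcntl qw(:flock);

my $lockdir = 'session_locks';          # hypothetical lock-file directory
mkdir $lockdir unless -d $lockdir;

# Try each session in turn and return (id, filehandle) for the first
# lock we win.  The OS releases the lock if the script dies.
sub acquire_session {
    my ($script) = @_;
    for my $id ('A' .. 'E') {           # hypothetical session IDs
        open my $fh, '>>', "$lockdir/$id.lock" or next;
        if (flock $fh, LOCK_EX | LOCK_NB) {
            # Truncate only after winning the lock, so we never clobber
            # another holder's info; then record who has the session.
            truncate $fh, 0;
            print $fh "$script ", time, "\n";
            return ($id, $fh);          # keep $fh open to keep the lock
        }
        close $fh;
    }
    return;                             # no free session
}

sub release_session {
    my ($fh) = @_;
    close $fh;                          # closing the handle releases the lock
}

my ($session, $fh) = acquire_session($0);
```

      Opening with '>>' rather than '>' matters: truncating before the lock is won would erase the holder information written by whichever script currently owns the session.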

Node Type: perlquestion [id://672147]
Approved by marto