http://qs321.pair.com?node_id=405907

Intro

Here's a description of at least one of the ways we are using Perl at the company I work for. I hope it's an interesting read.

Description

A few years ago I led a group of developers that wrote an event dispatch mechanism in C++ using ACE. It works on Windows, Solaris, and Linux. This server acts as a central point for messages received on various transports: HTTP, TCP, UDP, serial, 3rd-party IPC, etc. For each transport, messages can arrive in various grammars: HTML, XML, binary, label-data pairs, in-house formats, etc. The goal was to produce a server framework capable of understanding the combinations of transports and syntaxes. We also wanted the customer (server developers) to be able to plug and play transports and grammars as they saw fit.

What do we do with all these messages? As each message is received, it is sent through a dispatching mechanism that maps the message to a function written to handle it. Handlers have a simple signature: they receive a single parameter, the message that triggered them. Handlers can be written in a C/C++ library that is linked into the server. Alternatively, the server can create an instance of the Perl interpreter, and Perl handlers can be loaded into that instance at startup. Once handlers are registered with the dispatcher, be they C/C++ or Perl, the server invokes the handler registered for a message based on some key data in the message. For example, a bridge server that receives a binary message on a serial port tweaks the data and retransmits it over UDP in an XML syntax.
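
To make that concrete, here is a minimal sketch of what a Perl handler for that bridge case might look like. The register_handler() call and the message accessor are made-up stand-ins for illustration; the real in-house registration API differs.

    # Hypothetical bridge handler: receive binary off the serial
    # transport, retransmit as XML over UDP.
    use IO::Socket::INET;

    my $udp = IO::Socket::INET->new(
        PeerAddr => '10.0.0.5',     # placeholder destination
        PeerPort => 9000,
        Proto    => 'udp',
    ) or die "UDP socket: $!";

    sub on_serial_binary {
        my ($msg) = @_;             # handlers take just the message
        # $msg->payload is an assumed accessor for the raw bytes.
        my ($id, $value) = unpack 'nN', $msg->payload;
        $udp->send("<reading id='$id' value='$value'/>");
    }

    # Assumed registration call: key data in the message picks the handler.
    register_handler('serial/binary' => \&on_serial_binary);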

For the Perl case, the state of the server is held in the Perl instance, which is loaded at runtime. A global variable can be declared to keep the state of the server; one handler can assign into that variable and another handler can read it. Sounds simple enough. For example, a connected database handle (DBI, of course!) can be stored in a global hash, and handlers can access that handle to perform database operations. Having a connection already logged in and ready to go can save tons of time between handler calls. I know this is a problem with typical CGI setups, where the connection has to be re-established each time a particular URL is accessed. (Although I haven't looked, I believe this is one of the advantages of mod_perl.)
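
A sketch of that pattern follows. The %STATE name, the handler names, and the DSN are mine, not the server's:

    # One handler connects and stashes the handle in a global; later
    # handlers reuse it without paying the login cost again.
    use DBI;
    our %STATE;

    sub on_startup {
        # Placeholder DSN and credentials.
        $STATE{dbh} = DBI->connect('dbi:Pg:dbname=orders', 'user', 'secret',
                                   { RaiseError => 1, AutoCommit => 1 });
    }

    sub on_lookup {
        my ($msg) = @_;
        my $row = $STATE{dbh}->selectrow_hashref(
            'SELECT status FROM orders WHERE id = ?', undef, $msg->{id});
        # ... act on $row ...
    }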

Handlers execute using a 'run to completion' model. This means that only one handler can run at a time; any messages received while a handler is running are queued and handled in the order they were received. 'Run To Completion' (RTC) has been a much-debated feature of the server. The primary advantage of an RTC model is that each handler can assume no other conflicting handler will fire during its execution. Since only one handler fires at a time, there is no confusion about the potential sequence of events. The second significant advantage is that debugging can be performed on any handler, and messages received during a debugging session will be queued by the server until the debugger releases control. On the other hand, a handler that has a considerable amount of code or calls synchronous APIs will run slowly. Slow handlers block the execution of other, possibly higher-priority, handlers. A single server instance can become bogged down and unresponsive if the queue of pending messages is long and each handler takes a while to complete.
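
A toy pure-Perl version of the RTC loop shows why a slow handler blocks everything behind it. (The real dispatch loop lives in the C++ server; the queue and dispatch table here are illustrative only.)

    use Thread::Queue;

    my $queue = Thread::Queue->new;   # receivers enqueue parsed messages
    my %dispatch;                     # message key => handler code ref

    while (defined(my $msg = $queue->dequeue)) {
        my $handler = $dispatch{ $msg->{key} } or next;
        $handler->($msg);   # runs to completion; everything else waits
    }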

On the topic of debugging, we are able to pass parameters to the Perl instance on server startup and enable Perl's debugging mode (equivalent to setting -d on the command line). Combine this with a $DB::single=1 in the handler you're trying to debug and voila! The Perl debugger prompt shows up and you can step through the handler, inspecting all the nitty-gritties. When you have finished debugging, just continue and the server will run through the queued-up messages.
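
For example, with debugging enabled at startup, a breakpoint in a handler is a single line (the handler and transform() are placeholders):

    sub on_flaky_message {
        my ($msg) = @_;
        $DB::single = 1;              # debugger halts at the next statement
        my $result = transform($msg); # single-step from here
    }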

We write quite a few servers, and this tool has enabled us to write them much faster than before. We have experienced a few socio-political problems along the way: 1) quality in-house Perl packages, 2) packages used in long-running servers, and 3) spooked C++ developers.

  1. Quality in-house Perl packages
     It's very difficult to get reliable, high-quality Perl packages from developers who think that Perl is just a good way to search through log files. The documentation for XS is pretty good, but you really have to immerse yourself in it for a while to become proficient. I also have the distinct impression that they would rather be working on other 'shiny' new languages and frameworks (C# & .NET).

  2. Packages used in long-running servers
     Not to name any names, but we have had memory allocation issues with a particular DBD package. The memory leak was slow and only appeared when disconnecting from data sources. It seems as though most people using and testing Perl packages don't have them running continuously 24x7. We reported the error and supplied information to get it fixed, but there wasn't much interest in fixing it because the suggested remedy was to restart the app (which we could only do at specific times).

  3. Spooked C++ developers
     It's amazing how annoyed C++ developers get when they realize that their turf has been invaded by Perl upstarts. Sure, they might be able to write a faster server, but it'll take 'em twice as long to do it. (Perhaps three times as long if you count memory allocation issues.)

Conclusions

Overall we have had great success with Perl for developing 'soft realtime' servers. Development has been considerably faster than what I have seen in the past with C++ servers. We have gained a considerable amount of functionality from the packages available on CPAN. All Hail Larry Wall!


"Look, Shiny Things!" is not a better business strategy than compatibility and reuse.


OSUnderdog