I have a web site with a lot of Perl code that does various common things, such as generating HTML with a templating system and appending user-supplied data to text files. I've been doing this the obvious way; for example, a typical script might:
open a template file, read it, close it; then open a data file, read it, append to it, and close it.
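For concreteness, here is a minimal sketch of that pattern (the file paths and the user-input variable are placeholders, not my actual code):

```perl
#!/usr/bin/perl
use strict;
use warnings;

my $user_input = "example user-submitted line";   # normally taken from the request

# Read the whole template file.
open my $tpl_fh, '<', 'templates/page.tmpl' or die "open template: $!";
my $template = do { local $/; <$tpl_fh> };
close $tpl_fh;

# Read the existing data file.
open my $data_fh, '<', 'data/entries.txt' or die "open data: $!";
my @entries = <$data_fh>;
close $data_fh;

# Append the user-supplied data.
open my $append_fh, '>>', 'data/entries.txt' or die "append data: $!";
print {$append_fh} "$user_input\n";
close $append_fh;
```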
While going through a general code rewrite, I thought of a file client/server approach: a server reads the data and template files at startup and then binds to a port, and the web code acts as a client, requesting data or sending updates as the case may be. That way the template files only need to be read once. This seems better than keeping everything in shared Apache memory, because then every Apache process would have to hold every file for every script/module.
I coded a test client and server using Net::EasyTCP and it works well. Bashing it with ApacheBench gives a worse median response time but a better mean response time than the original, non-client/server version of the code.
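For reference, the test server and client look roughly like this (a minimal sketch following the Net::EasyTCP synopsis; the port, the glob pattern, and the "send a template name, get its contents back" protocol are just placeholders for what my real code does):

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Net::EasyTCP;

# Read every template file once at startup (paths are placeholders).
my %templates;
for my $file (glob 'templates/*.tmpl') {
    open my $fh, '<', $file or die "open $file: $!";
    $templates{$file} = do { local $/; <$fh> };
    close $fh;
}

my $server = Net::EasyTCP->new(
    mode => 'server',
    port => 2345,
) or die "ERROR CREATING SERVER: $@\n";

# Each request is a template name; send back that template's contents.
$server->setcallback(data => \&got_data)
    or die "ERROR SETTING CALLBACK: $@\n";

$server->start() or die "ERROR STARTING SERVER: $@\n";

sub got_data {
    my $client = shift;
    my $name   = $client->data();
    my $body   = exists $templates{$name} ? $templates{$name} : '';
    $client->send($body);
}
```

and the client side, inside a CGI script:

```perl
use strict;
use warnings;
use Net::EasyTCP;

my $client = Net::EasyTCP->new(
    mode => 'client',
    host => 'localhost',
    port => 2345,
) or die "ERROR CREATING CLIENT: $@\n";

$client->send('templates/page.tmpl') or die "ERROR SENDING: $@\n";
my $template = $client->receive()    or die "ERROR RECEIVING: $@\n";
$client->close();
```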
My questions are:
- Is this a good idea? (It seems so to me, but am I missing something?)
- Am I re-inventing the wheel?
- Unix sockets vs. TCP? (My attempts at writing Unix-socket code met with failure, and Net::EasyTCP made writing the server and client code very easy, so I just went with that. A rough sketch of the Unix-socket approach I was attempting is below.)
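For completeness, this is roughly the kind of Unix-domain-socket server I was attempting (a minimal sketch using IO::Socket::UNIX; the socket path and the one-request-per-line protocol are placeholders):

```perl
#!/usr/bin/perl
use strict;
use warnings;
use IO::Socket::UNIX;
use Socket qw(SOCK_STREAM);

my $path = '/tmp/template-server.sock';   # placeholder socket path
unlink $path;

my $server = IO::Socket::UNIX->new(
    Type   => SOCK_STREAM,
    Local  => $path,
    Listen => 5,
) or die "server socket: $!";

# Answer one request (a single line) per connection.
while (my $conn = $server->accept) {
    my $request = <$conn>;
    chomp $request if defined $request;
    print {$conn} "you asked for: $request\n";
    close $conn;
}
```

and the matching client:

```perl
use strict;
use warnings;
use IO::Socket::UNIX;
use Socket qw(SOCK_STREAM);

my $client = IO::Socket::UNIX->new(
    Type => SOCK_STREAM,
    Peer => '/tmp/template-server.sock',
) or die "client socket: $!";

print {$client} "templates/page.tmpl\n";
my $reply = <$client>;
close $client;
```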