PerlMonks  

Re^3: Unwritten Perl Books: 2007 version

by samizdat (Vicar)
on May 17, 2007 at 20:11 UTC ( [id://616103] )


in reply to Re^2: Unwritten Perl Books: 2007 version
in thread Unwritten Perl Books: 2007 version

Sure, lin0 -

We had a boatload of data parsed from ASCII log files of chip testers. Each test had its own syntax for responses and some were pass-fail, others were binned. I had a bunch of log parsers (straight Perl scripts, although with the parsing driven by files of regexen and hints for each test), and they stuffed a big MySQL database.
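The post doesn't show the actual parsers, but a regex-driven dispatch like the one described might be sketched as below. The test names, log syntax, and fields are invented for illustration; the real scripts read their regexen and hints from external files and did DBI inserts into MySQL instead of returning a record.

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Hypothetical per-test parsing hints: a regex plus how the result is graded.
# The real "files of regexen and hints" are not shown in the post.
my %tests = (
    vdd_leakage => { re => qr/^VDD_LEAK\s+(\S+)\s+(PASS|FAIL)$/, kind => 'passfail' },
    speed_grade => { re => qr/^SPEED\s+(\S+)\s+BIN=(\d+)$/,      kind => 'binned'   },
);

# Turn one log line into a record suitable for a flat DB table.
# The real code would do a DBI INSERT here rather than return a hashref.
sub parse_line {
    my ($line) = @_;
    for my $name ( keys %tests ) {
        my $t = $tests{$name};
        if ( my ( $chip, $result ) = $line =~ $t->{re} ) {
            return {
                test   => $name,
                chip   => $chip,
                kind   => $t->{kind},
                result => $result,
            };
        }
    }
    return;    # unrecognized line, skip it
}

my $rec = parse_line("SPEED U0042 BIN=3");
print "$rec->{test} $rec->{chip} $rec->{result}\n" if $rec;
```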

The overall app/tool structure was that I used cron jobs to seek out new log files and parse them into the (MySQL 4) DB. All the tables were very simple (read, FLAT), no special joins or exotic relationalism. My (Apache 1.3.33) web pages were built with Embperl (1.6, not 2), which did the data presentation. GD-based modules were used (GD::Graph::bars3d, IIRC) to generate pix on the fly. These were dumped into a temporary directory which was housecleaned of all files older than 3 hours. Another option was to build spreadsheets (Spreadsheet::WriteExcel::Big) or formatted RTFs (RTF::Writer, IIRC) from the data. Likewise, these were disposable.
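The three-hour housecleaning can be done in a few lines of core Perl, something like the sketch below. The directory path is made up (the post doesn't show the real cron script), and the real job presumably ran unattended from crontab.

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Delete plain files in $dir older than $hours; returns how many were removed.
# -M reports a file's age in days relative to script start time.
sub clean_dir {
    my ( $dir, $hours ) = @_;
    my $max_age = $hours / 24;    # convert hours to days for -M
    opendir my $dh, $dir or return 0;    # nothing to clean if dir is missing
    my $removed = 0;
    for my $file ( readdir $dh ) {
        my $path = "$dir/$file";
        next unless -f $path;            # skips . and .. and subdirectories
        $removed += unlink $path if -M $path > $max_age;
    }
    closedir $dh;
    return $removed;
}

# Hypothetical temp dir for the generated charts and spreadsheets.
clean_dir( '/var/www/charts/tmp', 3 );
```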

All the web pages were dynamic Embperl constructs, and they used the directory structure to allow me to use the same pages for many different test sets. Embperl's really good at that, because the environment persists as you go deeper into the directory hierarchy unless you overwrite it locally. Very trick. Embperl also has a lot of automatic HTML table-generation functionality which just blows PHP away. Dunno about the other web-ready Perl templating systems, but I was able to do a lot with a tiny bit of Embperl coding.
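The automatic table generation mentioned above is Embperl's magic handling of <table>: when a cell expression is indexed by the special variable $row (or $col), Embperl repeats the row (or column) until the expression returns undef, with no explicit loop. A toy template fragment, with invented data, might look like this:

```perl
[- @results = ( ['U0001', 'PASS'], ['U0002', 'FAIL'] ); -]
<table>
  <tr>
    <td>[+ $results[$row][0] +]</td>
    <td>[+ $results[$row][1] +]</td>
  </tr>
</table>
```

Embperl renders one <tr> per element of @results, which is why a data table that would take a loop and string-building in PHP is a few lines here.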

We didn't have a lot of traffic, but we did have a pretty hefty storehouse of data. Even so, the queries returned full pages (with graphics and color-coded data tables) in just a few seconds. I could have split out the data into separate data servers, but there was no real need. Truth is, Open Source tools are plenty fast enough for most applications without needing threading, multi-processing, or multiple machines.

Don Wilde
"There's more than one level to any answer."
