in reply to Re: slurped scalar map
in thread slurped scalar map
I am already past the "working" phase and in the "optimisation" phase. I'm curious about efficiency in terms of "best programming practice". The program (too large to post) creates a file consisting of N records, with an index containing key info such as fpos markers at the end. (The records consist of the stdout/stderr of several o/s commands and files => 30-50Mb/server for almost 100 servers.)
The program currently reads the index first, then reads and processes each record as it needs it while working through the data file. I'm trying to find a faster solution, i.e. performing larger sequential reads upfront. Of course, that may bring extra considerations, such as a maximum slurp size.
This exercise will be worth it (in my mind at least) if I can understand the margin by which
<sequential slurp><process><process><process>
operations are faster than
<slurp 1 record><process><slurp next record><process> ...
Hope this makes sense.
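To illustrate the comparison, here is a minimal Benchmark sketch of the two strategies. Note the assumptions: `records.dat`, `load_index()` and `process()` are placeholders standing in for your real data file, index reader and per-record processing; the index is assumed to yield (offset, length) pairs.

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Benchmark qw(cmpthese);

my $file  = 'records.dat';       # placeholder data file
my @index = load_index($file);   # placeholder; returns ([offset, length], ...)

cmpthese( -5, {
    # <slurp 1 record><process><slurp next record><process> ...
    per_record => sub {
        open my $fh, '<:raw', $file or die "open: $!";
        for my $rec (@index) {
            my ( $off, $len ) = @$rec;
            seek $fh, $off, 0 or die "seek: $!";
            read $fh, my $buf, $len;
            process( \$buf );    # placeholder
        }
        close $fh;
    },
    # <sequential slurp><process><process><process>
    slurp_all => sub {
        my $data = do {
            open my $fh, '<:raw', $file or die "open: $!";
            local $/;            # slurp mode
            <$fh>;
        };
        for my $rec (@index) {
            my ( $off, $len ) = @$rec;
            my $buf = substr $data, $off, $len;
            process( \$buf );    # placeholder
        }
    },
} );
```

One caveat on interpreting the numbers: with 30-50Mb files the OS page cache will absorb much of the seek cost on a second pass, so run the benchmark against files larger than free RAM (or drop the cache between runs) to see the real difference between random and sequential I/O.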
Niel
Re^3: slurped scalar map
by dragonchild (Archbishop) on Jun 20, 2006 at 17:28 UTC
by 0xbeef (Hermit) on Jun 20, 2006 at 19:50 UTC
In Section
Seekers of Perl Wisdom