
Re^3: Matching lines in 2+ GB logfiles.

by bluto (Curate)
on May 01, 2008 at 19:07 UTC (#683990)

in reply to Re^2: Matching lines in 2+ GB logfiles.
in thread Matching lines in 2+ GB logfiles.

If you generally know what you are looking for ahead of time, one approach is to keep a process running at all times that tails the log file. This process can send every matching line to a second, much smaller file, which can then be searched instead of the original.
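A minimal sketch of that idea, assuming standard GNU `tail` and `grep`; the file names and the pattern `ERROR` are placeholders. In production you would use `tail -F`, which follows the file forever; `tail -n +1` is used here only so the example terminates:

```shell
# Create a small sample log (stand-in for the live 2+ GB file).
printf 'ok start\nERROR disk full\nok end\n' > app.log

# Follow the log and append matching lines to a much smaller file.
# --line-buffered flushes each match immediately, so the filtered
# file stays current while the producer is still writing.
tail -n +1 app.log | grep --line-buffered 'ERROR' >> app-errors.log
```

To run this continuously against a live log, replace the last line with `tail -F app.log | grep --line-buffered 'ERROR' >> app-errors.log &`. Later searches then hit the small filtered file instead of the multi-gigabyte original.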

If you need to beat grep, you can, but you have to do things that grep can't: exploit knowledge of how the files are laid out on disk (especially on RAID) and of how many CPUs you can take advantage of (i.e., trade transparency for performance). You can write a multithreaded (or multiprocess) script that reads the file at specific offsets in parallel. This may require a lot of tuning, though; performance depends on how the filesystem prefetches data and on the optimum read size for your RAID. FWIW, you may want to look around for an existing multithreaded grep.
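A hypothetical sketch of the parallel-offset idea using shell processes instead of threads: split the file into N byte ranges and grep each range in its own background process. The `iflag=skip_bytes,count_bytes` options are GNU `dd` extensions; `big.log`, the pattern `ERROR`, and `NPROC` are placeholders. Note one real-world caveat this sketch glosses over: a chunk boundary can fall in the middle of a line, so a proper implementation must align each worker's range to the next newline (here the sample data happens to split cleanly):

```shell
# Create a small sample log (stand-in for a 2+ GB file).
printf 'ok line\nERROR one\nok line\nERROR two\n' > big.log

FILE=big.log
PATTERN=ERROR
NPROC=2

# Divide the file into NPROC byte ranges of roughly equal size.
SIZE=$(wc -c < "$FILE")
CHUNK=$(( (SIZE + NPROC - 1) / NPROC ))

for i in $(seq 0 $((NPROC - 1))); do
  # GNU dd: skip_bytes/count_bytes address exact byte offsets while
  # still reading in large blocks. Each worker writes its own file.
  dd if="$FILE" bs=1M iflag=skip_bytes,count_bytes \
     skip=$((i * CHUNK)) count="$CHUNK" 2>/dev/null \
    | grep "$PATTERN" > "part.$i" &
done
wait

# Combine per-worker results (order follows byte offsets).
cat part.* > matches.all
```

Whether this actually beats a single grep depends on the storage: on a RAID stripe it can keep several spindles busy at once, while on a single disk the competing seeks may make it slower.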

