PerlMonks
Re: Further on buffering huge text files

by Anonymous Monk
on Mar 09, 2005 at 10:16 UTC ( [id://437838] )


in reply to Further on buffering huge text files

You say you can assume all filters to be known beforehand - does that also apply to the file? If not, you still can't avoid reading in all the lines and applying a regexp to each one.

This problem seems to shout "database, database, database". But that's probably not going to help you much if the files are very volatile.


Replies are listed 'Best First'.
Re^2: Further on buffering huge text files
by spurperl (Priest) on Mar 09, 2005 at 11:47 UTC
    Yes, the filters are known. But the file isn't. The GUI widget should in theory open any huge file, display it as fast as possible, and provide a scroll bar.

    Databases won't cut it here. It's a pre-written text file generated by an outside tool, and the GUI should be able to read any such file. Besides, the file grows with time and the GUI should keep up.
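    Since the file only ever grows, the GUI doesn't need to re-read it from the start on every update. A minimal sketch (filename and polling strategy are illustrative, not from the original post): remember the byte offset where the last read stopped, then on each poll seek there and pick up only the appended lines.

    ```perl
    #!/usr/bin/perl
    use strict;
    use warnings;

    my $file   = 'huge.log';   # hypothetical path to the growing file
    my $offset = 0;            # byte position of the last line we processed

    # Return only the lines appended since the previous call.
    sub read_new_lines {
        open my $fh, '<', $file or die "can't open $file: $!";
        seek $fh, $offset, 0;      # skip what we've already seen
        my @new = <$fh>;
        $offset = tell $fh;        # remember where we stopped
        close $fh;
        return @new;
    }
    ```

    A GUI event loop would call read_new_lines() on a timer, run the new lines through the filters, and extend the scroll range accordingly.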

      Can you cache information? If files only grow with time, and your filters are constant, you could record for each line written to the file (or better, for each line read from the file) which filter(s) it matches. Using a bit-vector per filter, with a single bit per line, you only need 125k of storage per filter for a million-line file.
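      The bit-vector cache above maps naturally onto Perl's vec(). A sketch, with the filters and data invented for illustration: each filter gets one packed bit string, and bit $i is set when line $i matched, so later lookups never re-run the regexp.

      ```perl
      #!/usr/bin/perl
      use strict;
      use warnings;

      my @filters     = ( qr/ERROR/, qr/WARN/ );   # assumed filter patterns
      my @match_cache = ('') x @filters;           # one bit-vector per filter

      my $line_no = 0;
      while ( my $line = <DATA> ) {
          for my $f ( 0 .. $#filters ) {
              # Set bit $line_no in filter $f's vector on a match.
              vec( $match_cache[$f], $line_no, 1 ) = 1
                  if $line =~ $filters[$f];
          }
          $line_no++;
      }

      # 1_000_000 lines / 8 bits per byte = 125_000 bytes per filter.
      # Check line 2 against filter 0 without touching the file:
      print vec( $match_cache[0], 2, 1 ) ? "match\n" : "no match\n";

      __DATA__
      all quiet
      WARN: low disk
      ERROR: boom
      ```

      Because the file is append-only, the cache never needs invalidating: new lines just extend each bit string.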
