PerlMonks
Given all that you've said so far, especially that it seems you can never be sure you have collected all the answers for a given query, I think I would probably go for a completely different approach.
I would use the OS's sort utility to reorganize the input file, sorting on the id number (the second field). I would then read all the records for a given id number (storing them in an array or a hash), collect the information from the query record, and use it to process the answer records. Once I've finished processing an id number, I'd clear the data structures and start again with the next id number's lines.

This way, the memory usage of your Perl program is limited to the maximum number of lines there can be for one id number. (Of course, the sort phase will use a lot of memory, but the *nix sort utilities know well how to handle that: they write temporary data to disk to avoid memory overflow.) Sorting your large file will take quite a bit of time, but at least you're guaranteed never to exceed your system's available memory.

An alternative would be to use a database, but I doubt it would be faster.

In reply to Re: Memory utilization and hashes
by Laurent_R
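
A minimal sketch of the sort-then-group approach described above. The field separator, the id's position, and the `process_id` handler are all assumptions for illustration, not details from the original thread:

```perl
#!/usr/bin/perl
# Sketch of the sort-then-group approach. Assumptions (not from the
# original post): records are tab-separated, the id is the second field,
# and the file has been pre-sorted on that field, e.g. with GNU sort:
#   sort -t "$(printf '\t')" -k2,2 input.txt > sorted.txt
use strict;
use warnings;

# Read lines (already sorted on id) from $fh and call $callback once per
# id with an array ref of that id's lines; only one id's records are
# held in memory at a time.
sub group_by_id {
    my ($fh, $callback) = @_;
    my ($current_id, @group);
    while (my $line = <$fh>) {
        chomp $line;
        my $id = (split /\t/, $line)[1];
        if (defined $current_id && $id ne $current_id) {
            $callback->($current_id, [@group]);
            @group = ();                  # release the finished id's records
        }
        $current_id = $id;
        push @group, $line;
    }
    $callback->($current_id, [@group]) if @group;   # flush the last id
}

# Hypothetical per-id handler: separate the query record from the
# answer records (a "query" tag in the first field is an assumption).
sub process_id {
    my ($id, $lines) = @_;
    my ($query) = grep { (split /\t/)[0] eq 'query' } @$lines;
    my @answers = grep { (split /\t/)[0] ne 'query' } @$lines;
    # ... use $query to process @answers ...
}

# Usage: open my $fh, '<', 'sorted.txt' or die $!;
#        group_by_id($fh, \&process_id);
```

Because `group_by_id` streams the sorted file line by line and drops each group as soon as its id changes, peak memory stays proportional to the largest single id group, which is the point of the approach.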