But it seems to be taking about half an hour to do the initial processing. Is there a faster way to do it?
A quick back-of-the-envelope: 30 minutes to load ~160,000 records is roughly 90 records/second. That seems pretty slow.

Have you tried instrumenting the code to take some timings? If you dumped a timestamp (or a delta) every 1K records, you might see an interesting slowdown pattern. Correlating this with a trace of your system's memory availability might show whether memory is the issue, particularly if the system starts swapping at some point during the load.

Can you say more about the form of the keys and values? There might be something about their nature that you could exploit to find a different data structure.
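As a minimal sketch of that kind of instrumentation (the file name and the tab-separated key/value format are assumptions; adjust the split to match your data):

<code>
#!/usr/bin/perl
use strict;
use warnings;
use Time::HiRes qw(time);

my $file = 'big_data.txt';   # hypothetical input file
my %hash;

open my $fh, '<', $file or die "Can't open $file: $!";

my $count = 0;
my $t0    = time();
my $prev  = $t0;

while (my $line = <$fh>) {
    chomp $line;
    my ($key, $value) = split /\t/, $line, 2;   # assumes tab-separated records
    $hash{$key} = $value;

    # Report cumulative time and the delta for the last 1K records
    if (++$count % 1000 == 0) {
        my $now = time();
        printf "%7d records  total %.2fs  last 1K %.3fs\n",
            $count, $now - $t0, $now - $prev;
        $prev = $now;
    }
}
close $fh;
</code>

If the "last 1K" column grows steadily as the count rises, memory pressure (or hash growth) is a likely suspect; if it stays flat, the cost is probably per-record parsing.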
In reply to Re: Slurping BIG files into Hashes
by dws