Keep It Simple, Stupid
PerlMonks
I have some text files (several hundred megabytes each) that I am processing. To simplify, I am going through the sections (think of them as paragraphs) and removing lines that are "ref n" (where n is an integer). There will be just a few hundred of these per file.
So I am just reading the whole file into memory and substituting out the offending lines (I am actually removing the "ref n" lines themselves, not the lines that begin with n, which is what I am matching in the first line of code below).
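For concreteness, here is a minimal sketch of that slurp-and-substitute approach. The pattern, variable names, and sample text are assumptions, since the original code is not shown:

```perl
use strict;
use warnings;

# Sample text standing in for a slurped file; in the real program the
# whole file would be read at once (e.g. with "local $/;" before <$fh>).
my $text = "alpha\nref 7\nbeta\nref 42\ngamma\n";

# Remove whole lines of the form "ref n" (n an integer).
# /m lets ^ match at the start of every internal line; /g removes all of them.
$text =~ s/^ref \d+\n//mg;

print $text;
```

Because the substitution anchors on `^` and consumes through the newline, each matching line disappears entirely without disturbing the surrounding lines.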
This worked fine (although likely far from the best way to do it), taking about ten minutes or so on average. Then I found out that, rarely (maybe once every dozen files or so), some lines that I need to remove will actually be "foo ref n". No problem, I thought; I just changed the code to:
Something is not working as I expected. :) I am ninety minutes into processing the first file after the code change, and there is no sign of any progress. Why is it taking so long, and how can I improve my algorithm / code? Thank you in advance.

In reply to some efficiency, please by Anonymous Monk
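For comparison, one way to sidestep both the memory footprint of slurping and any multi-line regex cost is to stream the file line by line and keep only the lines that don't match. This is a sketch under the assumption that a removable line is "ref n" optionally preceded by a single word (as in "foo ref n"); the `\w+` would need adjusting to the real prefix format:

```perl
use strict;
use warnings;

# Stream-processing sketch: read line by line instead of slurping the
# whole file, so only one line is ever examined by the regex at a time.
# The in-memory handle below stands in for a real file opened with
# open my $in, '<', $filename.
my $sample = "alpha\nref 7\nbeta\nfoo ref 42\ngamma\n";

open my $in, '<', \$sample or die "open: $!";
my $kept = '';
while (my $line = <$in>) {
    # Skip "ref n" lines, with or without a single leading word.
    next if $line =~ /^(?:\w+ )?ref \d+$/;
    $kept .= $line;    # in a real run, print to an output file instead
}
close $in;
print $kept;
```

With only a few hundred removable lines per file, the per-line regex is cheap, and memory use stays constant regardless of file size.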