PerlMonks |
Re: How to optimize a regex on a large file read line by line?
by graff (Chancellor) on Apr 16, 2016 at 16:29 UTC ( [id://1160654] )
Not that this would make a big difference in terms of run-time, but you don't have to keep your own counter for the number of lines in the file. The predefined global variable $. does that for you (cf. the perlvar man page).
A few other observations... I fetched the "10-million-combos.txt.zip" file you cited in one of the replies above, and noticed that it contains just the one text file. In terms of benchmarking, you might find that piping the output of "unzip -p" into the perl script is likely to be faster than having the script read an uncompressed version of the file from disk, because the pipe involves fetching just 23 MB from disk, as opposed to 112 MB to read the uncompressed version. (Disk access time is always a factor for stuff like this.)

Spoiler alert: your file "10-million-combos.txt" does not contain any lines that match /123456$/. UPDATE: actually, there would be 2 matches on a windows system, and I find those two on my machine if I search for /123456\r\n$/.

I was going to suggest using the gnu/*n*x "grep" command-line utility to get a performance baseline, assuming that this would be the fastest possible way to do your regex search-and-count, but then I tried it out on your actual data and got a surprise (running on a macbook pro, osx 10.10.5, 2.2GHz intel core i7, 4GB ram). I ran each command three times in rapid succession, to check for timing differences due to system cache behavior and other unrelated variables. Perl is consistently faster by about 33% (and can report the total line count along with the match count, which the grep utility cannot do).

(If I remove the "$" from the regex, looking for 123456 anywhere on any line, I find three matches, and the run times are just a few percent longer overall.)
In Section: Seekers of Perl Wisdom