PerlMonks
Hello, monks. I've got a large number of text files (thousands to millions at a time, up to a couple of MB each) to which I need to make a lot of substitutions (more than 150 per file). For quite some time I've been using a bash script which takes around 1 minute per 2000 files but, having used Perl in the distant past, I decided to rewrite the script in Perl, hoping it would improve things considerably.

However, using the standard (for my poor Perl skills, at least) method of "open file; slurp it; loop through 150 substitutions" proved abysmally slower than bash/sed. Splitting the input down to 1 word at a time sped things up, but it was still 60-70% slower than bash. Combining the regexes into one large sequence (s/^[0-9].*\s//m|s/\S*?talk\S*\s/ talk /gi...) didn't help either, as the interpreter probably optimizes them anyway.

So, the problem is twofold:

1. Speed. For context, most substitutions turn gerunds and past tenses of select verbs into infinitives, trim out numbers, or convert plurals to singular... nothing too fancy, no backreferences or grouping.

2. Maintainability. I need to change the regex list often, and a long string as shown above is hard to maintain. Ideally, I want to use a here-doc to list my substitutions, but I can't find a way to tell Perl how to use the resulting string in both the match and substitution parts of s///. If all else fails, I can split the regexes into match/substitution pairs as a workaround, but I'm pretty sure there's a more elegant way to do it.

I'd appreciate your wisdom on these matters; the snippet is to show how I'd prefer #2 to be implemented. Thank you.
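One way to sketch the here-doc idea from point #2 (the `=>` rule syntax and the two sample rules below are my own assumptions, not the poster's actual rule list): parse each line into a pattern/replacement pair once, compile the pattern with qr//, and let s///'s /ee modifier re-evaluate the stored replacement so that $1 etc. still work.

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Hypothetical rule file kept in a here-doc: one "PATTERN => REPLACEMENT"
# pair per line, compiled once before any files are processed.
my @rules;
for my $line (split /\n/, <<'END_RULES') {
\S*?talk\S* => talk
(\w+)ies\b  => ${1}y
END_RULES
    next unless $line =~ /\S/;                      # skip blank lines
    my ($pat, $rep) = split /\s*=>\s*/, $line, 2;   # pattern => replacement
    push @rules, [ qr/$pat/i, $rep ];               # compile pattern once
}

sub apply_rules {
    my ($text) = @_;
    for my $rule (@rules) {
        my ($re, $rep) = @$rule;
        # /ee evaluates the replacement twice, so capture variables inside
        # the stored string are interpolated at substitution time. The rules
        # must be trusted text, since they are eval'ed as Perl code.
        $text =~ s/$re/"\"$rep\""/gee;
    }
    return $text;
}

print apply_rules("he will smalltalk about ponies"), "\n";
# prints: he will talk about pony
```

On the speed side, looping 150 separate s/// passes over each file is the usual bottleneck; when many of the rules are plain fixed strings, combining them into a single alternation and picking the replacement from a hash lets one pass do the work of many, which is worth trying before anything more exotic.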
In reply to Need to speed up many regex substitutions and somehow make them a here-doc list by xnous