Given that most HTML files are (hopefully) under 1 MB in size, it would make sense to use Aristotle's technique of changing $/, but set it to undef and slurp each file whole.
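For example (a rough sketch only; the OLD/NEW patterns and the *.html file list are placeholders, not anything from the thread):

    # slurp each file whole by undefining $/, then run one s///g across it
    perl -i.bak -pe 'BEGIN { undef $/ } s/OLD/NEW/g' *.html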
If the number of files produced by find is too many for your command line to handle, couldn't you produce a list of directories from find, pass that into perl, and let perl glob those? Something like this (NB: completely untested code):
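(A sketch of the idea only; /top/dir, the *.html mask, and the OLD/NEW patterns are stand-ins for whatever you actually need.)

    find /top/dir -type d | perl -ne '
        chomp( my $dir = $_ );                      # one directory name per line from find
        for my $file ( glob "$dir/*.html" ) {
            open my $in, "<", $file or next;
            my $content = do { local $/; <$in> };   # slurp the whole file
            close $in;
            next unless $content =~ s/OLD/NEW/g;    # leave untouched files alone
            open my $out, ">", $file or next;
            print $out $content;                    # write the edited content back
            close $out;
        }
    '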
Combining that with Merlyn's trick of backing out the -i effect if nothing is found should save more time.
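Assuming that trick amounts to restoring the .bak copy whenever no substitution was made (my reading of it, not a quote of merlyn's code; OLD/NEW and the file list are again placeholders), something along these lines would avoid rewriting files that don't need to change:

    # -0777 slurps each file; -n suppresses the automatic print so we can
    # decide per file whether to keep the rewrite or back it out
    perl -0777 -i.bak -ne '
        if ( s/OLD/NEW/g ) {
            print;                       # changed: write the new content via -i
        }
        else {
            close ARGVOUT;               # unchanged: abandon the in-place rewrite
            rename "$ARGV.bak", $ARGV;   # and put the original back
        }
    ' *.html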
Examine what is said, not who speaks.
1) When a distinguished but elderly scientist states that something is possible, he is almost certainly right. When he states that something is impossible, he is very probably wrong.
2) The only way of discovering the limits of the possible is to venture a little way past them into the impossible.
3) Any sufficiently advanced technology is indistinguishable from magic.
Arthur C. Clarke.

In reply to Re: Large scale search and replace with perl -i by BrowserUk