PerlMonks
> you would certainly want to use the second approach (with find -exec grep -l foo) to reduce your working file set as much as possible.

You certainly would not, because you have to open every file anyway, even if only to check whether it matches. The difference is that grepping for matches first spawns one process per file, and then the matching files have to be opened a second time (in Perl) to actually process them. That is a (large) net loss.

Taking that out, and using find's -print0 option to avoid some nasty surprises with odd filenames (though not all of them, unfortunately, due to the darn magic open), leaves us with the following. Note that I have removed the continue {} block, as it isn't necessary and just costs time.
That should be about as efficient as it gets. If you have a lot of non-matching files, you might still save work by hooking a grep in there, but not with find's -exec; that is what xargs was invented for.

Update: s!= \65536!= "\n"!, as per runrig's observation.

Makeshifts last the longest.

In reply to Re^2: Large scale search and replace with perl -i (don't grep(1))
by Aristotle
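The grep-via-xargs variant mentioned above could be sketched as follows (an assumption-laden sketch: it relies on GNU grep's -Z flag and again uses a placeholder s/foo/bar/ edit):

```shell
# Pre-filter with grep run over many files per invocation (via xargs),
# rather than one grep process per file as find -exec would spawn.
# grep -l prints only matching filenames; -Z NUL-terminates them so the
# list can be fed safely to the next xargs -0.
find . -type f -print0 \
  | xargs -0 grep -lZ foo \
  | xargs -0 perl -i -pe 's/foo/bar/g'
```

With GNU xargs you may also want -r on the final stage, so perl is not run at all when no file matches.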