http://qs321.pair.com?node_id=1202113


in reply to Batch remove URLs

G'day bobafifi,

"I know how to remove individual URLs from the pages using a find/replace one liner, but doing them all in one pass has so far eluded me."

If you'd posted the part that you know, we could suggest how to extend that. Here's an example one-liner to change multiple lines in multiple files:

$ cat ABC
A old A
B old B
C old C

$ cat DEF
D old D
E old E
F old F

$ perl -pi -e 's/old/new/' ABC DEF

$ cat ABC
A new A
B new B
C new C

$ cat DEF
D new D
E new E
F new F

See perlrun for information on the -i and -p switches that I used.
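If you need several different substitutions in the same pass, you can simply chain them inside the -e code. For example (old1/old2 and the filenames here are just placeholders, not anything from your data):

$ perl -pi -e 's/old1/new1/; s/old2/new2/' ABC DEF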

— Ken

Re^2: Batch remove "404 Not Found" URLs
by bobafifi (Beadle) on Oct 27, 2017 at 06:21 UTC
    Thanks Ken!
    Here's what I've been using:
    find . -type f -name "*.htm" -print|xargs perl -i -pe 's/s/http://example.com/[404 Not Found]/g'

    I'm afraid I haven't described what I'm trying to accomplish very well, sorry.
    1.) I have a list of 300 URLs
    2.) I have a folder on my desktop with 100 .htm pages
    3.) I want to run that list against those 100 pages and remove URLs
    4.) This will leave the <a href=...> tags in place but with [404 Not Found] instead of the URL - for example, <a href="[404 Not Found]">[404 Not Found]</a>.

    My plan then (since some of her links have descriptive text and others just the bare URL as link text) is to render those dummy tags inactive with a second find/replace, leaving just <a>[404 Not Found]</a> so the browser displays 404 Not Found, or the link's descriptive text, as plain unlinked text - something like the one-liner below.
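    This is roughly what I have in mind for that second pass (untested, and it assumes the first pass really does leave literal <a href="[404 Not Found]"> tags behind):

    find . -type f -name "*.htm" -print|xargs perl -i -pe 's{<a href="\[404 Not Found\]">}{<a>}g'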

    Thanks again Ken - I'll check out the perlrun link

      Assuming that you just want to get the job done and are not pursuing this as an academic exercise, I would abandon the one-liner approach. It can be done that way, but the more you throw into it the messier it gets. Here's one plan:

      1. Store your 300 URLs in a file, one per line (if you haven't already done so). You can then slurp this into an array at the start of your script.
      2. Loop over the files with a simple glob.
      3. Inside that loop, loop over all the URLs.
      4. Inside the inner loop, call a subroutine with the filename and the URL to replace.

      You can now test the inner subroutine in isolation, on a test file, to your heart's content until it's exactly right, without destroying the original content. Consider quotemeta for the search terms. A rough sketch of that structure follows. If you get stuck with that approach, come back with specific questions, ideally as an SSCCE. Good luck.
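      Here's an untested sketch of that plan; bad_urls.txt is a name I've invented, and the glob pattern assumes the script runs in the folder with the pages:

      #!/usr/bin/env perl
      use strict;
      use warnings;

      # 1. Slurp the URL list (one URL per line) into an array.
      open my $list_fh, '<', 'bad_urls.txt' or die "bad_urls.txt: $!";
      chomp(my @urls = <$list_fh>);
      close $list_fh;

      # 2. Loop over the files; 3. loop over the URLs; 4. call the subroutine.
      for my $file (glob '*.htm') {
          for my $url (@urls) {
              replace_url($file, $url);
          }
      }

      sub replace_url {
          my ($file, $url) = @_;

          open my $in, '<', $file or die "read $file: $!";
          my $html = do { local $/; <$in> };
          close $in;

          # quotemeta escapes the regex metacharacters ('.', '/', '?', ...) in the URL.
          my $pattern = quotemeta $url;
          return unless $html =~ s/$pattern/[404 Not Found]/g;

          open my $out, '>', $file or die "write $file: $!";
          print $out $html;
          close $out;
      }

      Re-opening each file for every URL is not efficient (300 x 100 read/write cycles at worst), but it keeps replace_url() trivially testable in isolation; once it behaves, hoisting the read and write out of the inner loop is a small change.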

      "Here's the what I've been using ... 's/s/http://example.com/[404 Not Found]/g'"

      I doubt it. That won't even compile:

      $ perl -MO=Deparse -e 's/s/http://example.com/[404 Not Found]/g'
      Bareword found where operator expected at -e line 1, near "404 Not"
              (Missing operator before Not?)
      syntax error at -e line 1, near "404 Not Found"
      -e had compilation errors.

      Even assuming the initial "s/s/" was a typo and should have been just "s/", it still doesn't compile:

      $ perl -MO=Deparse -e 's/http://example.com/[404 Not Found]/g'
      Bareword found where operator expected at -e line 1, near "404 Not"
              (Missing operator before Not?)
      Regexp modifiers "/a" and "/l" are mutually exclusive at -e line 1, at end of line
      syntax error at -e line 1, near "404 Not Found"
      -e had compilation errors.

      Perhaps you meant something closer to this:

      $ perl -MO=Deparse -e 's{http://example.com}{[404 Not Found]}g'
      s[http://example.com][[404 Not Found]]g;
      -e syntax OK

      You really need to copy and paste verbatim code. Typing by hand, or making guesses, is extremely error-prone; we can only respond to what you posted, not to something different that was perhaps intended but never actually written. Unfortunately, once one such problem is found, it raises the question of whether other parts are true representations of the real code, data, output, and so on.

      While you probably could still do this with a one-liner, it's getting a bit complicated for that and I'd recommend a script. For a simple text substitution, a regex is probably fine; if the job is actually more complex than your post suggests, you should find an alternative tool (see "Parsing HTML/XML with Regular Expressions" for a whole raft of options).

      You talk about doing this in two passes; that seems wasteful to me and one pass is easy anyway. You say you want to end up with "<a>[404 Not Found]</a>"; use whatever you want but, in the code below, I've used "<span class="bad-url">[404 Not Found]</span>": that will render as plain text as it is, but allows you to highlight it with CSS if you so desire.
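      (As an aside: if the pages turn out to be messier than your post suggests, the parser route mentioned above is barely longer. A rough sketch using Mojo::DOM, which is merely my pick from that raft of options, assuming a page's content in $html and the same %bad_url hash as in the script below:

      use Mojo::DOM;

      my $dom = Mojo::DOM->new($html);
      for my $anchor ($dom->find('a[href]')->each) {
          next unless exists $bad_url{$anchor->attr('href')};
          $anchor->replace('<span class="bad-url">[404 Not Found]</span>');
      }
      $html = $dom->to_string;

      Note that this replaces any anchor whose href is on the list, regardless of its link text, which is a looser match than the regex below.)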

      In the code below I've used Inline::Files purely for demonstration purposes; I'm assuming you're familiar with open. You can presumably get your list of HTML files with "*.htm" on the command line (the find and xargs seem overkill to me, but maybe you have a reason); using glob within your script is another option; there's also readdir; and there are many modules you could use. I've likewise assumed that your "list of 300 URLs" lives in a file somewhere; however, it's far from clear whether that's actually the case.

      In the code below, the technique I'm demonstrating involves creating a hash from your list of URLs once, then substituting links which match one of those URLs. Do note that your post suggests that the href value is the same as the <a> tag content: my code reflects that; modify if necessary.

      #!/usr/bin/env perl -l

      use strict;
      use warnings;

      use Inline::Files;

      my %bad_url;

      while (<URLLIST>) {
          chomp;
          ++$bad_url{$_};
      }

      my $re = qr{(?x:
          (           # capture entire element to \$1
              <a      # match start of 'a' start tag
              \s+     # match whitespace after element name
              href="  # match start of href attribute
              (       # capture href value to \$2
                  [^"]+   # match anything that isn't a "
              )       # end \$2 capture
              "       # match closing "
              \s*     # match optional whitespace
              >       # match end of 'a' start tag
              \s*     # match optional whitespace
              \g2     # match href value (captured in \$2)
              \s*     # match optional whitespace
              </a>    # match 'a' end tag
          )           # end \$1 capture
      )};

      my $replace = '<span class="bad-url">[404 Not Found]</span>';

      for my $fh (\*HTM1, \*HTM2) {
          my $html = do { local $/; <$fh> };

          print '*** ORIGINAL ***';
          print $html;

          $html =~ s/$re/exists $bad_url{$2} ? $replace : $1/eg;

          print '*** MODIFIED ***';
          print $html;
      }

      __URLLIST__
      http://bad1.com/
      http://bad2.com/
      http://bad3.com/
      http://bad4.com/
      __HTM1__
      <h1>HTM1</h1>
      <a href="http://bad1.com/">http://bad1.com/</a>
      <a href="http://good.com/">http://good.com/</a>
      <a href="http://bad2.com/">http://bad2.com/</a>
      __HTM2__
      <h1>HTM2</h1>
      <a href="http://good.com/">http://good.com/</a>
      <a href="http://bad2.com/"> http://bad2.com/ </a>
      <a href="http://good.com/"> http://good.com/ </a>
      <a href="http://bad3.com/" >http://bad3.com/</a>
      <a href="http://bad4.com/">http://bad3.com/</a>
      <a href="http://bad4.com/">http://bad4.com/</a>

      Output:

      *** ORIGINAL ***
      <h1>HTM1</h1>
      <a href="http://bad1.com/">http://bad1.com/</a>
      <a href="http://good.com/">http://good.com/</a>
      <a href="http://bad2.com/">http://bad2.com/</a>

      *** MODIFIED ***
      <h1>HTM1</h1>
      <span class="bad-url">[404 Not Found]</span>
      <a href="http://good.com/">http://good.com/</a>
      <span class="bad-url">[404 Not Found]</span>

      *** ORIGINAL ***
      <h1>HTM2</h1>
      <a href="http://good.com/">http://good.com/</a>
      <a href="http://bad2.com/"> http://bad2.com/ </a>
      <a href="http://good.com/"> http://good.com/ </a>
      <a href="http://bad3.com/" >http://bad3.com/</a>
      <a href="http://bad4.com/">http://bad3.com/</a>
      <a href="http://bad4.com/">http://bad4.com/</a>

      *** MODIFIED ***
      <h1>HTM2</h1>
      <a href="http://good.com/">http://good.com/</a>
      <span class="bad-url">[404 Not Found]</span>
      <a href="http://good.com/"> http://good.com/ </a>
      <span class="bad-url">[404 Not Found]</span>
      <a href="http://bad4.com/">http://bad3.com/</a>
      <span class="bad-url">[404 Not Found]</span>

      — Ken

        Thank you Ken! My apologies for the initial typos in the one-liner; it's been a while since I've used this PerlMonks interface. Good suggestion on the span tags and CSS - I hadn't thought of that, as I was really more focused on simply getting the text 404 Not Found to not be hyperlinked. I'll check out your script. Thanks again!