http://qs321.pair.com?node_id=11116575


in reply to Why a regex *really* isn't good enough for HTML and XML, even for "simple" tasks

WWW::Mechanize::Chrome curiously fails the XHTML test, which I've tentatively reported as a Chromium / DevTools bug. The HTML rendering and DOM inspector properly parse the HTML, but the DevTools return "Six" as a node, which isn't really true.

The code also uncovered a bug (unexpected behaviour) in how the link text gets constructed, so I'll upload a fixed version of WWW::Mechanize::Chrome soon.

#!/usr/bin/env perl
use warnings;
use strict;

my $file = shift or die;
print "##### WWW::Mechanize::Chrome on $file #####\n";

my $html = do {
    open my $fh, '<', $file or die "$file: $!";
    local $/;
    <$fh>
};

use Log::Log4perl ':easy';
use WWW::Mechanize::Chrome;
Log::Log4perl->easy_init($WARN);

my $mech = WWW::Mechanize::Chrome->new( headless => 1 );
$mech->update_html($html);

my @links = $mech->links();
for my $link (grep { $_->url } @links) {
    print $link->url, "\t", $link->text, "\n";
}

Update: Actually, as the page itself contains "confusing" (to Chrome) information, this is somewhat explainable. The document is XHTML, i.e. XML, but a meta tag later declares a Content-Type of text/html. Changing that to Content-Type text/xhtml makes (WWW::Mechanize::)Chrome report the correct links.
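For illustration only (this is not the actual test document from the parent thread; the `#six` link target and the element contents are invented), a minimal sketch of the kind of markup that XML and HTML parsers disagree on:

```html
<?xml version="1.0" encoding="UTF-8"?>
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
  <!-- The XML prolog says XHTML, while this meta tag claims text/html -->
  <meta http-equiv="Content-Type" content="text/html; charset=UTF-8" />
  <title>Conflicting content types</title>
</head>
<body>
  <!-- An XML parser treats the CDATA section as character data, so it
       sees no link here. An HTML tokenizer has no CDATA sections in the
       body: "<![CDATA[" starts a bogus comment that ends at the first
       ">" (the one in "x > y"), so the <a> is parsed as a real link. -->
  <p><![CDATA[ x > y <a href="#six">Six</a> ]]></p>
</body>
</html>
```

Which interpretation wins depends on whether the document is processed as text/html or as XML, which is exactly the ambiguity at issue here.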

I still wonder whether this parser disagreement between DevTools and JavaScript could be exploited somehow.

Re^2: Why a regex *really* isn't good enough for HTML, even for "simple" tasks
by haukex (Archbishop) on May 08, 2020 at 18:09 UTC
    Actually, as the page itself contains "confusing" (to Chrome) information, this is somewhat explainable. The document is XHTML, i.e. XML, but a meta tag later declares a Content-Type of text/html. Changing that to Content-Type text/xhtml makes (WWW::Mechanize::)Chrome report the correct links.

    Interesting, thanks! According to several sources on the W3C website, the correct MIME type is application/xhtml+xml, so I've changed that.
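    Concretely (a sketch only; the charset attribute is an assumption), the corrected declaration inside the document would read:

```html
<meta http-equiv="Content-Type" content="application/xhtml+xml; charset=UTF-8" />
```

    Note that when the page is served over HTTP, the real Content-Type response header takes precedence over this meta tag, so a server should send application/xhtml+xml in the header as well.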