http://qs321.pair.com?node_id=1139849


in reply to Re: Parsing of undecoded UTF-8 will give garbage when decoding entities
in thread Parsing of undecoded UTF-8 will give garbage when decoding entities

Honestly, from what I've seen, no. The only 'extended' characters are a few 'smart' apostrophes and a copyright symbol. I determined this with BBEdit by opening the raw file and switching the encoding from UTF-8 to Latin-1; each page is pretty much identical. But that's a good point about the headers; I will investigate that next. It's so hard figuring out this kind of issue when you can't tell whether it's the module barking or our code!

Thanks for the input so far!

Re^3: Parsing of undecoded UTF-8 will give garbage when decoding entities
by aitap (Curate) on Aug 26, 2015 at 11:44 UTC

    Only ASCII characters (with ord <= 0x7f) are represented in UTF-8 the same way as in latin1 (as single bytes). By the way, there is a module, IO::HTML, which can be used to determine the encoding of HTML files (seekable :raw streams only).
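
    For example, here is a quick demonstration of that byte-level overlap, using only the core Encode module (the copyright sign stands in for the non-ASCII characters mentioned above):

    ```perl
    use strict;
    use warnings;
    use Encode qw(encode);

    # An all-ASCII string encodes to identical bytes in both encodings.
    my $ascii = "plain text";
    print encode('latin1', $ascii) eq encode('UTF-8', $ascii)
        ? "same bytes\n" : "different bytes\n";      # prints "same bytes"

    # The copyright sign U+00A9 does not: 1 byte in latin1, 2 in UTF-8.
    printf "latin1: %d byte(s), UTF-8: %d byte(s)\n",
        length encode('latin1', "\x{a9}"),
        length encode('UTF-8',  "\x{a9}");           # latin1: 1, UTF-8: 2
    ```

    This is why a page containing only ASCII looks identical under either interpretation, while the apostrophes and the copyright symbol are exactly where the two readings diverge.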

    If you are positive that your web pages consist only of ASCII and valid UTF-8, you can use HTML::TokeParser::->new( \ decode "UTF-8", $raw_html ); (or even utf8::decode($html); HTML::TokeParser::->new($html)), but it's going to complain and/or produce mojibake (or at least U+FFFD REPLACEMENT CHARACTERs) if (when?) the crawler encounters latin1/cp1252/koi8/some other non-ASCII encoding.
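
    A minimal sketch of that decode-before-parse pattern, with a hard-coded string of UTF-8 octets standing in for the crawler's raw response (HTML::TokeParser ships with the CPAN HTML-Parser distribution, so this assumes that is installed):

    ```perl
    use strict;
    use warnings;
    use Encode qw(decode);
    use HTML::TokeParser;

    # Raw octets as fetched off the wire: "café" and curly quotes in UTF-8.
    my $raw_html = "<p>caf\xc3\xa9 \xe2\x80\x98smart\xe2\x80\x99</p>";

    # Decode the octets into a Perl character string *before* parsing,
    # so the parser (and entity decoding) sees characters, not bytes.
    my $p = HTML::TokeParser::->new( \ decode('UTF-8', $raw_html) );

    $p->get_tag('p');
    my $text = $p->get_text;   # properly decoded characters
    binmode STDOUT, ':encoding(UTF-8)';
    print "$text\n";
    ```

    Parsing $raw_html directly, without the decode step, is exactly the situation the thread title warns about: the parser would treat each UTF-8 octet as a separate character.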