http://qs321.pair.com?node_id=1113706


in reply to Ignoring not well-formed (invalid token) errors

Do you know if the bad XML is always of the same structure so that its removal can be automated? Part of your problem is that XML parsers are supposed to die horribly if they encounter badly formed XML. There are XML-ish things out there that have their own parsers as a result of this.

XML::Twig bends the "die on bad XML" rule by offering calls (safe_parse() and safe_parsefile()) that at least return from a failure and give you the error message rather than dying, so that you might be able to recover from the failure with an automated fix: e.g.

my $twig = XML::Twig->new();
if ( ! $twig->safe_parse($my_stuff) ) { handle_errors($@) }

where handle_errors() checks the message and then runs some sort of preprocessor to remove the offending lines, then calls safe_parse() again. It's a bit of a pain because it means you have to re-parse everything that had already parsed successfully, but it's better than nothing.
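
A rough sketch of that loop, assuming XML::Twig's safe_parse() (remove_offending_lines() is hypothetical - you'd have to write it to strip whatever the error message points at):

use XML::Twig;

my $xml = $my_stuff;
my $twig;
for my $attempt ( 1 .. 5 ) {                   # cap the retries so a hopeless file can't loop forever
    $twig = XML::Twig->new;
    last if $twig->safe_parse($xml);           # success: $twig now holds the whole document
    die "can't repair: $@" if $@ !~ /not well-formed/;
    $xml = remove_offending_lines( $xml, $@ ); # hypothetical automated fix-up
}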

You might also experiment with using one of the HTML parsers to extract what you want. They're not likely to be as good with enormous files, but they should be more tolerant of bad behavior.
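
For instance, a minimal sketch with HTML::TreeBuilder (the 'record' tag is only a stand-in for whatever element you actually want, and keep in mind HTML parsers lowercase tag names and may reshuffle markup they don't recognize):

use HTML::TreeBuilder;

my $tree = HTML::TreeBuilder->new_from_file('huge.xml');   # keeps going where a strict XML parser would die
print $_->as_text, "\n" for $tree->look_down( _tag => 'record' );
$tree->delete;                                              # free the tree when finished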

And if you have a way to contact whoever is generating the files, you might point out that some of them are badly formed and that they might have a bug in their xml generator. If anyone else is using the files, they're probably running into similar problems.

Re^2: Ignoring not well-formed (invalid token) errors
by Krambambuli (Curate) on Jan 19, 2015 at 11:49 UTC
    If the errors you're seeing fit on single lines and follow some pattern - as in the example you've shown - it might suffice to filter the bad file through something simple like, say,
    perl -ne 'print $_ if not m/^<es\s*$/' <bad_input.file >corrected_input.file
    That 'ignores' the fact that the data is XML at all and, as such, could be fast enough to be usable, even if it is an extra step.

    Krambambuli

      If you do it like:

      perl -ne'print $_ if ! /^<es\s*$/' huge.xml | perl extract.pl -

      Then you don't even have to wait for the huge file to be read twice. The time required could well be almost the same as it would be without the filter. Since the filter code likely can run faster than the XML parsing code, the difference in run-time could just be the insignificant time it takes to filter one buffer's worth of XML. It would likely take a bit more CPU (probably less than 2x), but I doubt processing a huge XML file is usually CPU-bound on most systems.
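
      A minimal sketch of what the '-' handling inside extract.pl might look like (the real extract.pl is yours, of course; 'record' and the handler body are just placeholders):

      use XML::Twig;

      my $source = ( !@ARGV || $ARGV[0] eq '-' ) ? \*STDIN : $ARGV[0];
      my $twig   = XML::Twig->new(
          twig_handlers => {
              record => sub { my ( $t, $elt ) = @_; print $elt->text, "\n"; $t->purge; },
          },
      );
      ref $source ? $twig->parse($source) : $twig->parsefile($source);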

      Though, brettski didn't seem to find even that proposal acceptable when I proposed it in chat around the time that the root node was posted.

      - tye        

        Tye, I must not have understood your proposal in the chatterbox. My apologies. This is good, and I could use it on a case-by-case basis. I don't know for certain that I will always see the same line, but I can substitute a different regex if necessary. I'm hoping this is just a rare occurrence, and I will know more as I work with these files on a more regular basis.

        I was hoping there was some kind of exception handling that could catch this kind of error and move on, though I know this goes against the XML doctrine on poorly formed XML.
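
        From what I can tell, wrapping the parse in eval does catch the error - that's essentially all safe_parse() does - but the parse can't be resumed from the point where it died, so "catch and move on" would still mean fixing the input and parsing again. A minimal sketch of what I mean ('record' and handle_record() are placeholders):

        use XML::Twig;

        my $twig = XML::Twig->new( twig_handlers => { record => \&handle_record } );
        eval { $twig->parsefile('huge.xml') };
        warn "parse aborted: $@" if $@;   # handlers already ran for everything before the bad spot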