http://qs321.pair.com?node_id=243725

Tim Bray, one of the inventors of XML, has an interesting weblog post exploring the idea that XML is too hard for programmers.

one of his main points is that programmers seem to be stuck either using an inefficient approach that parses the entire document and keeps it in memory (DOM), or writing code in an awkward callback style (XML::Parser, SAX, etc.) that doesn't mesh well with the programming language being used.
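for concreteness, here's roughly what the callback style looks like with XML::Parser (a sketch; the input filename is made up):

    use XML::Parser;

    # handlers fire as the parser streams through the document;
    # any state you need has to live in globals or closures,
    # which is what makes this style awkward in bigger programs.
    my $depth = 0;

    my $parser = XML::Parser->new(
        Handlers => {
            Start => sub {
                my ($expat, $elem) = @_;
                print '  ' x $depth++, "<$elem>\n";
            },
            End => sub { $depth-- },
        },
    );
    $parser->parsefile('doc.xml');    # hypothetical input file

notice how the logic gets turned inside out: the parser owns the control flow and your code just reacts to it.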

he also makes a good case for why having a language that is specifically designed to work with XML isn't as good an idea as it sounds.

the real bombshell of the piece is that he uses regexps to do most of his XML parsing. here at the monastery, whenever a young monk posts code that uses regexps to parse XML, we admonish them and dutifully point them at some of the more robust XML modules. for good reason. there is a world of difference between the inventor of XML, who has been writing Perl since 1993 and uses regexps for convenience, and a green newbie using regexps because they aren't aware of the gotchas or that better modules exist.
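to be clear about what those gotchas are, the quick-and-dirty regexp grab looks like this (illustrative only; the tag name is made up):

    # works fine on well-behaved input...
    my ($title) = $xml =~ m{<title>(.*?)</title>}s;

    # ...but breaks on attributes (<title lang="en">), CDATA
    # sections, comments, entities, and nested elements.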

still, when the inventor of XML suggests that the existing approaches are too complicated, maybe we ought to pause for a moment to think about that.

for most of what i do with XML, i'm only dealing with small documents and performance isn't critical. in those situations, XML::Simple and XML::Twig make things easy and painless. i've only had to do callback or stream-based parsing a couple of times and, while i didn't find it that hard, i can see how it would be difficult to deal with in more complex applications.
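for the small-document case, the XML::Simple version of the job is about as short as it gets (filename and keys here are made up for illustration):

    use XML::Simple;

    # slurp the whole document into a nested hash/array structure
    my $config = XMLin('config.xml');    # hypothetical file

    # elements and attributes become plain hash keys
    print $config->{server}{host}, "\n";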

i've been playing with HTML::TokeParser lately for scraping websites, and i've found it to be a very intuitive and powerful interface. perhaps something similar could be done for XML. i see no reason it couldn't be implemented with a stream-based backend, keeping it efficient for parsing large documents.
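as a sketch of why the interface is so pleasant, pulling all the links off a page takes just a few lines (close to what the module's docs show; the URL is made up):

    use LWP::Simple;
    use HTML::TokeParser;

    my $html = get('http://example.com/');    # hypothetical page
    my $p    = HTML::TokeParser->new(\$html);

    # pull tokens on demand instead of waiting for callbacks
    while (my $tag = $p->get_tag('a')) {
        my $href = $tag->[1]{href} or next;
        my $text = $p->get_trimmed_text('/a');
        print "$text -> $href\n";
    }

the pull style keeps the control flow in your own code, which is exactly what the callback style inverts.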

what else can we think of to make working with XML less painful?

anders pearson