
Re: Best way to Download and Process a XML file

by dHarry (Abbot)
on Sep 25, 2012 at 12:05 UTC (#995533)

in reply to Best way to Download and Process a XML file

Sanity check: 150GB XML file??? Maybe it's time to rethink the problem?!

Assuming enough disk space and patience, option 1 will work.

Option 2 also has its drawbacks: "finally save it" sounds to me like keeping the file in memory. Or do you want to edit the file "in place"? Anyway, with XML files this big you probably don't want a pure-Perl implementation. XML::LibXML jumps to mind. I have had good experience parsing big XML files (tens of GB) with Xerces.




Re^2: Best way to Download and Process a XML file
by Jenda (Abbot) on Sep 25, 2012 at 13:59 UTC

    I do hope you meant XML::LibXML::SAX. The thing is that what's normally meant by XML::LibXML is a DOM-style parser, that is, something that slurps the whole XML into memory and creates a maze of objects. In the case of XML::LibXML the objects reside in C land, so they do not waste as much space as they would if they were plain Perl objects, but with a huge XML document it is still not a good candidate. Even if the docs make some sense to you.
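
    To make the distinction concrete, here is a minimal streaming sketch with XML::LibXML::SAX. It assumes the module is installed and that the big file contains many repeated elements; the tag name `record` and the path `huge.xml` are hypothetical placeholders. Because the handler keeps only a counter, memory use stays constant regardless of file size:

    ```perl
    use strict;
    use warnings;

    # A SAX handler that keeps only a constant amount of state.
    package RecordCounter;
    use parent -norequire, 'XML::SAX::Base';

    sub start_element {
        my ($self, $el) = @_;
        # 'record' is a hypothetical element name for illustration.
        $self->{records}++ if $el->{LocalName} eq 'record';
    }

    package main;
    use XML::LibXML::SAX;

    my $handler = RecordCounter->new;
    my $parser  = XML::LibXML::SAX->new(Handler => $handler);
    $parser->parse_uri('huge.xml');    # streams; never builds a DOM
    print "records: $handler->{records}\n";
    ```

    The same handler would work with any XML::SAX-compliant parser; only the `XML::LibXML::SAX->new` line ties it to libxml2.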

    If perl_gog can convince some HTTP library to give him a filehandle from which he can read the decoded data of the response, he could use XML::Rules in filter mode and print the transformed XML directly into a file, with just some buffers and a twig of the XML kept in memory. Of course, he'd have to make sure he doesn't add a rule for the root tag, as that would force the module to attempt to build a data structure for the whole document before writing anything! Feeding chunks of the file to XML::Rules is not (yet) supported. It seems it would not be hard to do, though; XML::Parser::Expat has support for that.

    Update 2012-09-27: Right, adding the chunk-processing support was not hard. I have not released the new version yet, as I did not have time to write proper tests for this and one more change, but if you are interested you can find the new version in the CPAN RT tracker. The code would then look something like this:

    ...
    $parser->filter_chunk('', "the_filtered.xml");
    $ua->get($url,
        ':content_cb' => sub {
            my ($data, $response, $protocol) = @_;
            $parser->filter_chunk($data);
            return 1;
        });
    $parser->last_chunk();


      I prefer and recommend XML::LibXML::Reader.
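
      A minimal pull-parsing sketch with XML::LibXML::Reader, assuming the module is installed; the element name `record` and the path `huge.xml` are hypothetical placeholders. The reader walks the document node by node and only ever expands one small subtree into a DOM fragment at a time:

      ```perl
      use strict;
      use warnings;
      use XML::LibXML::Reader;

      my $reader = XML::LibXML::Reader->new(location => 'huge.xml')
          or die "cannot open huge.xml";

      while ($reader->read) {
          next unless $reader->nodeType == XML_READER_TYPE_ELEMENT
                   && $reader->name eq 'record';
          # Expand only this element into a small DOM fragment;
          # memory use stays bounded by the largest single record.
          my $node = $reader->copyCurrentNode(1);    # 1 = deep copy
          # ... process $node ...
          $reader->next;    # skip past the subtree we just copied
      }
      ```

      This gives DOM-style convenience per record without ever holding more than one record's subtree in memory.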

      Of course! Building a tree of 150GB in memory...

      I still think Xerces is the best choice (it is available in multiple languages). I have parsed files of up to 10-ish GB with it and it performed well.
