
Re: How to process with huge data's

by suaveant (Parson)
on Oct 04, 2011 at 14:48 UTC ( #929562=note )

in reply to How to process with huge data's

Well, you are using regular expressions heavily, which is a sure way to slow things down in the long haul, especially if they aren't designed properly. The REAL problem, I think, is the way you maintain the list of connections: you just keep appending to the $connectionText string and then doing multiple lookups/replaces on it. Every match or substitution has to rescan that whole string, so as it grows your regexps get slower and slower; the total work ends up roughly quadratic in the number of lines. 700,000 short lines should not take hours to process unless the machine is really slow.

My first inclination would be to use a hash to store the connection data; if you run into memory issues you may have to move to BerkeleyDB, SQLite, or some other lookup tool that uses disk storage.

I ran the following:

    perl -e 'for (1..700_000) { $f{$_} = [1,2] } print "Done\n"; sleep 10;'

and saw only about 170MB of RAM usage, so I'd guess you'd be fine using a hash.

Whatever you do, rethink your regexp use; you are using a sledgehammer to drive a screw.

                - Ant
                - Some of my best work - (1 2 3)
