in reply to How to process with huge data's

Well, you are using regular expressions heavily, which is a sure way to slow things down in the long haul, especially if they aren't designed carefully. The REAL problem, though, I think is the way you maintain the list of connections: you just keep appending to the $connectionText string and then doing multiple lookups/replaces on it. As the string gets longer, every one of those regexp scans gets slower, so the total work grows roughly quadratically with the input. 700,000 short lines should not take hours to process unless you're on a really slow machine.
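I can't see the rest of your code, but from your description the hot loop probably looks something like this (the field layout and variable names here are guesses on my part):

    #!/usr/bin/perl
    use strict;
    use warnings;

    my $connectionText = '';
    while (my $line = <STDIN>) {
        chomp $line;
        my ($id, $info) = split /,/, $line, 2;
        next unless defined $info;
        if ($connectionText =~ /^\Q$id\E=/m) {
            # rescans the WHOLE accumulated string, every line
            $connectionText =~ s/^\Q$id\E=.*$/$id=$info/m;
        }
        else {
            $connectionText .= "$id=$info\n";   # string keeps growing
        }
    }

Every iteration rescans everything accumulated so far, and that is where the hours go.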

My first inclination would be to use a hash to store the connection data, though if you run into memory issues you may have to move to BerkeleyDB, SQLite, or some other lookup tool that keeps its data on disk.
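Here's a minimal sketch of the hash approach, assuming the same invented field layout as above:

    #!/usr/bin/perl
    use strict;
    use warnings;

    my %connection;
    while (my $line = <STDIN>) {
        chomp $line;
        my ($id, $info) = split /,/, $line, 2;
        next unless defined $info;
        $connection{$id} = $info;   # constant-time insert or overwrite
    }
    printf "%d distinct connections\n", scalar keys %connection;

Each lookup or update now costs the same no matter how many connections you've already seen.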

To get a rough feel for the memory cost, I ran the following one-liner and only saw 170MB of RAM usage, so I'd guess you'd be fine using an in-memory hash:

    perl -e 'for (1..700_000) { $f{$_} = [1,2] } print "Done\n"; sleep 10;'
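And if the hash does outgrow RAM, DB_File gives you the same hash interface backed by a Berkeley DB file on disk. A rough sketch (the filename and keys are made up):

    use strict;
    use warnings;
    use Fcntl;      # for O_CREAT and O_RDWR
    use DB_File;    # ties a hash to an on-disk Berkeley DB file

    tie my %connection, 'DB_File', 'connections.db', O_CREAT|O_RDWR, 0644
        or die "Cannot tie connections.db: $!";
    $connection{some_id} = 'some_info';   # used exactly like an ordinary hash
    untie %connection;

The rest of the program doesn't have to change at all, which is the nice thing about tie.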

Whatever you do, rethink your regexp use: you are using a sledgehammer to put in a screw.

                - Ant