http://qs321.pair.com?node_id=884368


in reply to Re^2: statistics of a large text
in thread statistics of a large text

Your guess is wrong.

You asked for advice on handling large amounts of data (~1 GB). With that much data your code will never finish: it will run out of memory long before it has seen all of the input. By contrast, the approach that I describe should succeed in a matter of minutes.
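As one concrete illustration (not necessarily the exact code from the earlier reply), a memory-friendly way to get per-word counts out of a file that size is to let the system sort do the disk-based heavy lifting, then count identical adjacent lines in a single streaming pass. The file name words.txt and the one-token-per-line format are assumptions for the sake of the sketch:

    #!/usr/bin/perl
    use strict;
    use warnings;

    # Assumes words.txt holds one token per line (hypothetical name).
    # sort(1) spills to temporary files as needed, so memory use stays flat.
    open my $sorted, '-|', 'sort', 'words.txt'
        or die "Cannot run sort: $!";

    my ($prev, $count) = (undef, 0);
    while (my $word = <$sorted>) {
        chomp $word;
        if (defined $prev and $word ne $prev) {
            print "$prev\t$count\n";
            $count = 0;
        }
        $prev = $word;
        $count++;
    }
    print "$prev\t$count\n" if defined $prev;

Nothing here ever holds more than one word in memory, so the running time is dominated by the sort rather than by swapping.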

If you wish to persist in your approach you can tie the hash to an on-disk data structure, for instance using DBM::Deep. Do not be surprised if your code now takes a month or two to run on your dataset. (At roughly 5 ms per random seek, a billion seeks to disk takes about two months, and you're going to wind up with, order of magnitude, about that many seeks.) This is substantially longer than my approach.
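For reference, the tie itself is only a couple of lines. The file name counts.db and the word-counting loop below are illustrative assumptions; the point is that the code barely changes while every hash update becomes disk traffic:

    #!/usr/bin/perl
    use strict;
    use warnings;
    use DBM::Deep;

    # Tie the counting hash to an on-disk file (counts.db is a made-up name).
    # The counting code is unchanged, but each increment now touches the disk.
    tie my %count, 'DBM::Deep', 'counts.db';

    while (my $line = <STDIN>) {
        $count{$_}++ for split ' ', $line;
    }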

If my suggestion fails to perform well enough, it is fairly easy to use Hadoop to scale your processing across a cluster. (Clusters are easy to set up using EC2.) This approach scales as far as you want; Hadoop is an open source implementation of MapReduce, the technique that Google uses to process copies of the entire web.
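As a sketch of what that looks like, Hadoop Streaming will run any pair of executables as the mapper and reducer, so the counting logic can stay in Perl. The script names mapper.pl and reducer.pl and the word-count task are assumptions for illustration; they would be handed to the hadoop-streaming jar via the -mapper, -reducer, -input and -output options.

    #!/usr/bin/perl
    # mapper.pl - emit "word<TAB>1" for every token on stdin.
    use strict;
    use warnings;
    while (my $line = <STDIN>) {
        print "$_\t1\n" for split ' ', $line;
    }

and the matching reducer, which relies on Hadoop sorting the mapper output by key so that identical words arrive adjacent:

    #!/usr/bin/perl
    # reducer.pl - sum the counts for each run of identical keys.
    use strict;
    use warnings;
    my ($prev, $sum) = (undef, 0);
    while (my $line = <STDIN>) {
        chomp $line;
        my ($word, $n) = split /\t/, $line;
        if (defined $prev and $word ne $prev) {
            print "$prev\t$sum\n";
            $sum = 0;
        }
        $prev = $word;
        $sum += $n;
    }
    print "$prev\t$sum\n" if defined $prev;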