http://qs321.pair.com?node_id=884583

NodeReaper has asked for the wisdom of the Perl Monks concerning the following question:


Replies are listed 'Best First'.
Re: a large text file into hash
by marto (Cardinal) on Jan 27, 2011 at 15:43 UTC
Re: a large text file into hash
by BrowserUk (Patriarch) on Jan 27, 2011 at 20:19 UTC

    As others have pointed out, and as I tried to bring to your attention in your previous thread, you are simply generating too much data to hope to be able to load it all in memory in a 32-bit process.

    In a trivial experiment I conducted before responding to your first thread, I generated a 100MB file consisting of 2 million lines of 'phrases' generated randomly from a dictionary. I then counted the (1-4) n-grams and measured the memory used to hold them in a hash. Even using a simple compression algorithm, it still required 2GB of RAM. I repeated the exercise with a 150MB/3-million-line file and it took 3GB.

        C:\test>head -n 2m phrases.txt > 884345.dat

        C:\test>884345-buk 884345.dat
        words  178691
        ngrams 13962318
        perl.exe    4564 Console    1   2,102,076 K

        C:\test>head -n 3m phrases.txt > 884345.dat

        C:\test>884345-buk 884345.dat
        words  178691
        ngrams 20850624
        perl.exe    5724 Console    1   3,185,344 K
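    (For context, a minimal sketch of this kind of (1-4) n-gram counting into a hash; the 884345-buk script itself isn't shown in the thread, so the splitting and counting below are only an assumed illustration of the approach, not the actual code.)

        use strict;
        use warnings;

        my %ngrams;

        while ( my $line = <> ) {
            chomp $line;
            my @words = split ' ', $line;

            # count every 1- to 4-word n-gram on this line
            for my $n ( 1 .. 4 ) {
                for my $i ( 0 .. @words - $n ) {
                    $ngrams{ join ' ', @words[ $i .. $i + $n - 1 ] }++;
                }
            }
        }

        printf "words  %d\n", scalar grep { !/ / } keys %ngrams;
        printf "ngrams %d\n", scalar keys %ngrams;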

    If this is in any way representative of your data, your 1GB file will consist of ~20 million lines and require 10GB of RAM to hash.

    If you are using a 64-bit Perl and a machine with say 16GB of memory, then building an in-memory hash is a viable option.

    Otherwise, you will need to use something like BerkeleyDB or a full RDBMS to hold your derived data.

    But the missing information from both your threads is how you are going to use this data. If this is one file that will be hashed once, or once in a blue moon, with the hash being re-used many times by long-running processes, then building the hash and storing it on disk in Storable format may be the way to go.
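    (If you do go that route, the core Storable module is the usual tool; a rough sketch, with placeholder names:)

        use Storable qw(store retrieve);

        # one-off build: freeze the finished hash to disk ...
        store( \%ngrams, 'ngrams.storable' ) or die "store failed: $!";

        # ... and in each long-running process, load it back in one call
        my $ngrams = retrieve('ngrams.storable');
        print scalar( keys %$ngrams ), " n-grams loaded\n";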

    On the other hand, if the hashed data is going to be used by lots of short-lived processes -- e.g. web pages -- then the load time for a 10GB hash would be prohibitive.

    If you need to repeat the hashing process on many different large documents and will only use the hash to generate a few statistics for each, then a multi-pass batch processing chain probably makes more sense.

    Finally, if the process must be repeated many times; and you have a pool of servers at your disposal, or are prepared to purchase time on (say) Amazon's EC2, then tilly's map/reduce suggestion makes a lot of sense.

    As is often the case with such questions, picking the 'best' solution is very much dependent upon having good information about how the resultant data will be used.


    Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
    "Science is about questioning the status quo. Questioning authority".
    In the absence of evidence, opinion is indistinguishable from prejudice.
      Good observations, and thanks! In fact I need it to be created once and then I will access it many times, so it is fine if this one-time processing takes a lot of time; what matters is that it can be accessed easily and quickly later. I can have a big memory of 50 GB, but it is still not enough and it goes out of memory! I tried to create the hash and then tie it, but that was also not possible for me, or maybe I make some errors, because it still goes out of memory!
      use Tie::IxHash;

      my $t = tie( %hash, 'Tie::IxHash' );

      foreach my $line (@file) {
          $line_count++;
          my @ngrams = produce_ngrams($line);
          foreach my $ngram (@ngrams) {
              #$t->Push( @{ $hash{$ngram} } => $line_count );
              push @{ $hash{$ngram} }, $line_count;
          }
      }
      I also have no idea, if I tie the hash, how I can later access it from my hard drive.
        I tried to create the hash and then tie it,

        Tie::IxHash a) doesn't store to disk; and b) uses 2 or 3 times as much memory as a standard hash. Its purpose is to remember the order in which the keys of the hash were added, which is unnecessary for your use. You should not be using this module.

        If you are going the tie'd hash route, then you need to use a module that ties the hash to a disk file. Previously I'd have recommended BerkeleyDB, but since Oracle grabbed Sun, you have to sign up and agree to let them do whatever they want before they'll let you download anything.

        There are alternatives but I don't have much experience of them, so I cannot make a recommendation.
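        (For what it's worth, one alternative that ships with most perls is DB_File, which ties a hash to an on-disk Berkeley DB file. It can only store flat scalar values, so the list of line numbers has to be serialised somehow; joining on a separator, as in this rough sketch with made-up file names, is one simple way.)

            use strict;
            use warnings;
            use DB_File;
            use Fcntl qw(O_RDWR O_CREAT);

            # tie %hash to an on-disk BTree so it never has to fit in RAM
            tie my %hash, 'DB_File', 'ngrams.db', O_RDWR | O_CREAT, 0666, $DB_BTREE
                or die "Cannot tie ngrams.db: $!";

            # values must be plain scalars, so append line numbers to a string
            sub add_line {
                my ( $ngram, $line_no ) = @_;
                $hash{$ngram} = defined $hash{$ngram}
                              ? "$hash{$ngram},$line_no"
                              : $line_no;
            }

            # later (or from another process) the same tie call reopens the file,
            # and split recovers the list:
            my @line_numbers = defined $hash{'some ngram'}
                             ? split( /,/, $hash{'some ngram'} )
                             : ();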

        But, if you have 50GB of RAM available, then you ought to be able to hash your 1 GB file in memory with ease.


        Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
        "Science is about questioning the status quo. Questioning authority".
        In the absence of evidence, opinion is indistinguishable from prejudice.
Re: a large text file into hash
by tilly (Archbishop) on Jan 27, 2011 at 18:55 UTC
    You can't load it into an in memory hash because you have too much data. You could tie the hash to disk, but that will take a long time to load.

    Did you try my suggestion of using Search::Dict?
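    (For reference, a rough sketch of what a Search::Dict lookup against the final sorted file might look like; the file name and the 'ngram<TAB>line numbers' layout are assumptions, not the OP's actual format.)

        use strict;
        use warnings;
        use Search::Dict;

        open my $fh, '<', 'ngrams.sorted' or die "open: $!";

        sub lookup {
            my ($ngram) = @_;

            # binary-search the sorted file to the first line >= $ngram
            look( $fh, $ngram, 0, 0 );

            my $line = <$fh>;
            return unless defined $line;

            chomp $line;
            my ( $key, $lines ) = split /\t/, $line, 2;
            return unless $key eq $ngram;    # nearest line need not be an exact hit
            return $lines;
        }

        my $hit = lookup('in the');
        print defined $hit ? "in the => $hit\n" : "not found\n";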

    I assume that you want it in a hash because you are planning on doing further processing on it. If that is the case, then I am going to strongly recommend that you try to think about your processing in terms of the whole map-reduce paradigm that I suggested. Because your data volume is high enough that you really will benefit from doing that.

    It takes practice to realize that, for instance, you can join two data sets by mapping each to key/value where the key is the thing you are joining on, while the value is the original value and a tag saying where it came from. Then sort the output. Then it is easy to pass through the sorted data and do the join.
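    (As a concrete, if simplified, illustration of that idea: a 'map' step like the one below tags each record with its source, the system sort brings equal keys together, and the join becomes a single sequential pass. The file names and the key<TAB>value input layout are made up for the example.)

        use strict;
        use warnings;

        # usage:  perl map_tag.pl A fileA.txt >  tagged.txt
        #         perl map_tag.pl B fileB.txt >> tagged.txt
        #         sort tagged.txt > tagged.sorted
        my ( $tag, $file ) = @ARGV;
        open my $in, '<', $file or die "open $file: $!";

        while ( my $line = <$in> ) {
            chomp $line;
            my ( $key, $value ) = split /\t/, $line, 2;   # assumes key<TAB>value records
            print "$key\t$tag\t$value\n";                 # key first, so sort groups the join key
        }

        # a single pass over tagged.sorted now sees every source's records for a
        # given key next to each other, which is all a join needs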

    You have to learn how to use this toolkit effectively. But it can handle any kind of problem you need it to - you just need to figure out how to use it. And your solutions will scale just fine to the data volume that you have.

      Thanks, I'm trying to use your suggested method! The first step created an 18 GB file and sorting it took a lot of time! I finally managed to sort it, and I'm now going on to the third step, which is creating the last file of $ngram: @line_number, and trying to see how I can access it using Search::Dict.
      My main usage is that I can have two big files in that form and then calculate some statistics, such as Mutual Information, from those big files. So as long as I can have the line numbers of each n-gram for both files, I will try to see how to handle it using Search::Dict.
        Let's see: 18 GB with a billion rows means, say, 30 passes, each of which has to both read and write; streaming data at 50 MB/sec, that takes about 6 hours. It should not be doing all of those passes to disk, and your disk drive is likely to be faster than that. But in any case that is longer than I thought it would take. Sorry.

        The last step should make the file much smaller. How much smaller depends on your data.
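        (A rough sketch of that last step, assuming the sorted intermediate file holds one 'ngram<TAB>line number' pair per line; the file names are placeholders.)

            use strict;
            use warnings;

            # usage:  perl collapse.pl ngram_lines.sorted > ngrams.final
            my ( $prev, @lines );

            while ( my $line = <> ) {
                chomp $line;
                my ( $ngram, $line_no ) = split /\t/, $line, 2;

                if ( defined $prev and $ngram ne $prev ) {
                    print "$prev: @lines\n";    # one record per n-gram
                    @lines = ();
                }
                $prev = $ngram;
                push @lines, $line_no;
            }
            print "$prev: @lines\n" if defined $prev;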

        Anyways back to Search::Dict, it works by doing a binary search for the n-gram you are looking up. So you can give it the n-gram and it will find the line number for you. However it is a binary search. If you have a billion rows, it has to do 30 lookups. Some of those will be cached, but a lot will be seeks. Remember that seeks take about 0.005 seconds on average. So if 20 of those are seeks, that is 0.1 seconds. Doesn't sound like much, until you consider that 100,000,000 of them will take 115 days.

        By contrast 100 million 50 byte rows is 5 GB. If you stream data at 50 MB/second (current drives tend to be faster than that, your code may be slower), then you'll need under 2 minutes to stream through that file.

        If you have two of these files, both in sorted form, it really, really makes sense to read them both and have some logic to advance in parallel. Trust me on this.
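        (A bare-bones sketch of that parallel advance, assuming both files are sorted and laid out as 'ngram: line numbers'; the file names and the statistics stub are placeholders.)

            use strict;
            use warnings;

            open my $fa, '<', 'fileA.final' or die "fileA: $!";
            open my $fb, '<', 'fileB.final' or die "fileB: $!";

            my $la = <$fa>;
            my $lb = <$fb>;

            while ( defined $la and defined $lb ) {
                my ($ka) = split /: /, $la, 2;
                my ($kb) = split /: /, $lb, 2;

                if    ( $ka lt $kb ) { $la = <$fa> }    # n-gram only in A: advance A
                elsif ( $ka gt $kb ) { $lb = <$fb> }    # n-gram only in B: advance B
                else {                                  # n-gram in both: the 'join'
                    # ... compute mutual information or other statistics here ...
                    $la = <$fa>;
                    $lb = <$fb>;
                }
            }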