

in reply to Re: Bloom::Filter Usage
in thread Bloom::Filter Usage

This is exactly the approach that I'm going to have to take -- and it fits rather well with the logic of the process: I'm dealing with an ETL (Extract, Transform, Load) system, and each job has multiple stages.

FamousLongAgo very kindly sent me the updated v0.2 code to try out, and it definitely works, but unfortunately the way I'm trying to use it doesn't, because the number of duplicates is very low and the population very large. That combination leaves me with a massive bitmap (431,329,181 bits) and ten hashing functions. If I knew more about hash functions, I might have found a way to accelerate the hashing by (as others in this thread suggested) optimising it for numeric keys of a preset size (12 bytes).
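For anyone wondering where numbers like that come from, the standard Bloom filter sizing formulae reproduce them almost exactly. A back-of-the-envelope sketch, taking n from the 30 million records mentioned below and assuming (my assumption, not the module's) a 0.1% false-positive target:

    # Optimal sizing: m = -n * ln(p) / (ln 2)^2 bits, k = (m / n) * ln 2 hashes
    my $n = 30_000_000;   # key count, from the 30 million records in this job
    my $p = 0.001;        # assumed target false-positive rate
    my $m = -$n * log($p) / log(2) ** 2;
    my $k = ($m / $n) * log(2);
    printf "%.0f bits, %.1f hash functions\n", $m, $k;   # ~431 million bits, ~10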

As it stood, however, the hashing overhead slowed the filter down to the point where it took five to ten seconds per key!
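If I ever revisit the filter, one thing I'd try is deriving all ten bit positions from a single digest using the double-hashing trick (h1 + i * h2 mod m) rather than running ten independent hash functions per key. A sketch of the idea against a hypothetical filter of the size above -- not Bloom::Filter's actual internals:

    use Digest::MD5 qw(md5);

    my $m = 431_329_181;   # bits in the filter, from above
    my $k = 10;            # bit positions to set/test per key

    # One MD5 call per key; combine two 32-bit words of the digest
    # to generate all $k positions instead of hashing ten times.
    sub bit_positions {
        my ($key) = @_;                          # e.g. a 12-byte numeric key
        my ($h1, $h2) = unpack 'N2', md5($key);  # first two 32-bit words
        return map { ($h1 + $_ * $h2) % $m } 0 .. $k - 1;
    }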

I really like the approach of using uniq -d (see the pipeline below) and only wish it had occurred to me a couple of days ago, since I would have skipped the banging of hand to head that just happened. There's enough memory and swap space to support sorting 30 million records (this machine even has SyncSort installed).
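The whole duplicate-detection job then reduces to something like this, assuming the keys have been dumped one per line (keys.txt is a hypothetical filename):

    # C locale makes sort noticeably faster on plain byte keys
    LC_ALL=C sort keys.txt | uniq -d > duplicate-keys.txt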

Thank you, everyone, for your helpful tips and suggestions.