http://qs321.pair.com?node_id=417081

bowsie has asked for the wisdom of the Perl Monks concerning the following question:

Hello Holy Ones.
I hope you can help me out here. A few of us have been scratching our heads over this one, and while there are some solutions, none of them are very good.

I've got a file where each line represents one element, and the entries on the line are the matches to that element. The number of matches is arbitrary. I need to "network" those matches into a non-redundant collection of sets: any two lines that share at least one element belong in the same set. Here's an example:

Infile:
a b c d e
f b g
h i j k l
m f

I want to say that because lines 1 and 2 both contain "b", 1=2, and because lines 2 and 4 both contain "f", 2=4. Applying the idea that if a=b and b=c then a=c, we can say that (1, 2, 4) is one complete set. Line 3 has no elements in common with any of the other lines, so a second set is just (3).

In the end I need a count of these sets.
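
Just to spell the example out, here it is as a data structure, along with the grouping I'm after (the name %lines is only for illustration):

my %lines = (
    1 => [qw(a b c d e)],
    2 => [qw(f b g)],
    3 => [qw(h i j k l)],
    4 => [qw(m f)],
);
# desired sets:  (1, 2, 4) and (3)
# desired count: 2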

I know to read the lines into a hash: the keys are the line numbers (for lack of anything better) and the values are arrays of the elements. I was thinking along the lines of traversing the hash in an outer loop to grab an entry, then traversing it once more in an inner loop looking for matches. When I find a match I can merge the values of the two hash entries and delete the inner-loop entry.

However, if I do this, the newly merged value in the outer-loop entry is never compared against the entries the loops have already passed, and I will run into problems there.
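
To make that concrete, this is roughly what I had in mind (untested sketch, reading the infile from the command line):

use strict;
use warnings;

# read the infile into the hash: line number => array ref of its elements
my %lines;
while (<>) {
    $lines{$.} = [ split ];
}

# naive outer/inner merge: collapse any two entries that share an element
for my $i (sort { $a <=> $b } keys %lines) {
    next unless exists $lines{$i};              # may already have been merged away
    for my $j (sort { $a <=> $b } keys %lines) {
        next if $j <= $i or !exists $lines{$j};
        my %seen = map { $_ => 1 } @{ $lines{$i} };
        next unless grep { $seen{$_} } @{ $lines{$j} };

        # merge the inner entry into the outer one, then drop it
        my %union = map { $_ => 1 } @{ $lines{$i} }, @{ $lines{$j} };
        $lines{$i} = [ keys %union ];
        delete $lines{$j};

        # the trouble: elements just pulled in from line $j never get
        # compared against entries the loops have already passed over
    }
}

print scalar(keys %lines), "\n";   # would be the set count, if the merging were complete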

I have a solution that works as a 2-pass system: the first pass records pairs of line numbers that share an element, and the second pass networks those pairs together. However, it's dead slow, and while my little example is, well, little... I have large datasets to handle (think 200,000 lines with up to 10 matches per line!).
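
To give an idea of what I mean, the first pass is doing something like this (a simplified sketch, not my actual code):

use strict;
use warnings;

# pass 1: invert the data, so each element maps to the line numbers it appears on
my %on_line;
while (<>) {
    push @{ $on_line{$_} }, $. for split;
}

# turn each element's line list into pairs of line numbers; chaining
# consecutive lines is enough, since the second pass links them up transitively
my @pairs;
for my $list (values %on_line) {
    for my $x (0 .. $#$list - 1) {
        push @pairs, [ $list->[$x], $list->[$x + 1] ];
    }
}

# pass 2 then has to network @pairs into connected sets;
# that's the part that crawls on big input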

Any insights?

Thanks!