http://qs321.pair.com?node_id=1082360

BUU has asked for the wisdom of the Perl Monks concerning the following question:

I'm attempting to implement a basic hierarchical agglomerative clustering algorithm to be used to create an arbitrary number of clusters from an existing dataset. More reading about the general concept can be found at http://cgm.cs.mcgill.ca/~soss/cs644/projects/siourbas/sect5.html or via your friendly neighborhood Google.

A word about the dataset.

My data consists of some 1500-5000 "items", each of which contains a set of "words". These words are 5-30 character strings. Each set contains between 5 and 100 words, with no duplicates.
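For reference, each item is a hashref whose "words" key holds a hash with the item's words as keys (that's the shape the code below relies on; the values are never looked at, and the specific words here are just made up):

my $item = {
    words => {
        'apple'      => 1,
        'banana'     => 1,
        'clementine' => 1,
    },
};
# $items is an array-ref holding 1500-5000 of these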

Some words about the existing code.

The theoretical complexity of such an algorithm is something like O(cn²d²), but I suspect my implementation is considerably worse, since I ran it for over 11 hours and it only managed to consolidate 500 of the 1600 items.

The "merge" function is obviously very silly, I wrote it without thinking very hard and it doesn't do much. On the other hand I don't think it impacts the performance.

The vast majority of the time is spent in the max_diff function, which appears to get dramatically slower as the program continues to run: each merge makes the cluster trees deeper, so the recursive comparisons have to touch more and more leaves.

The datastructure being produced is necessary; that is, it should be a binary tree made of array-refs, where each element is either another such tree or an actual item. (It's necessary because we don't know in advance how many clusters we want to produce.)
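So after a couple of merges, part of the structure might look like this (where $item_a and friends are the hashrefs described above):

[ [ $item_a, $item_b ], $item_c ]   # a sub-cluster paired with a single item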

Suggestions for optimizations or even different algorithms gratefully received.

use List::Util qw(max);    # max() is used in max_diff() below

while( @$items > 10 ) {
    my( $item1, $item2 );
    my( $item1_idx, $item2_idx );
    my $difference = 9999;    # Arbitrary large number

    # Find the closest pair of items (note: this scans every ordered
    # pair, so each unordered pair is compared twice).
    for my $i ( 0 .. $#$items ) {
        my $d1 = $items->[$i];
        for my $j ( 0 .. $#$items ) {
            next if $i == $j;
            my $d2 = $items->[$j];
            my $diff = max_diff( $d1, $d2 );
            if( $diff < $difference ) {
                $difference = $diff;
                ( $item1, $item2 )         = ( $d1, $d2 );
                ( $item1_idx, $item2_idx ) = ( $i, $j );
            }
            last if $difference == 0;
        }
        last if $difference == 0;
    }

    # Remove the pair and push the merged cluster. Splice the higher
    # index first so the first splice doesn't shift the second.
    ( $item1_idx, $item2_idx ) = ( $item2_idx, $item1_idx )
        if $item1_idx < $item2_idx;
    splice( @$items, $item1_idx, 1 );
    splice( @$items, $item2_idx, 1 );
    my $c = merge( $item1, $item2 );
    push @$items, $c;

    # Crude progress display.
    print " \r";
    print scalar @$items, "\r";
}

# Merge two nodes into a new two-element cluster.
sub merge {
    my( $x, $y ) = @_;
    # Both non-clusters
    if( ref $x eq 'HASH' and ref $y eq 'HASH' ) {
        return [ $x, $y ];
    }
    # $x cluster
    elsif( ref $x eq 'ARRAY' and ref $y eq 'HASH' ) {
        return [ $x, $y ];
    }
    # $y cluster
    elsif( ref $x eq 'HASH' and ref $y eq 'ARRAY' ) {
        return [ $y, $x ];
    }
    elsif( ref $x eq 'ARRAY' and ref $y eq 'ARRAY' ) {
        return [ $x, $y ];
    }
    else {
        die "Wtf? $x $y";
    }
}

# Distance between two nodes: for two items, the size of the symmetric
# difference of their word sets; for clusters, the maximum over all
# pairings of their children.
sub max_diff {
    my( $d1, $d2 ) = @_;
    #my %x1 = map { $_->name, undef } $d1->words;
    #my %x2 = map { $_->name, undef } $d2->words;
    if( ref $d1 eq 'HASH' and ref $d2 eq 'HASH' ) {
        my %x1 = %{ $d1->{words} };
        my %x2 = %{ $d2->{words} };
        my %y1 = %x1;
        my %y2 = %x2;
        delete @x1{ keys %x2 };    # words only in $d1
        delete @y2{ keys %y1 };    # words only in $d2
        return( ( scalar keys %x1 ) + ( scalar keys %y2 ) );
    }
    elsif( ref $d1 eq 'ARRAY' and ref $d2 eq 'HASH' ) {
        my $x = max_diff( $d1->[0], $d2 );
        my $y = max_diff( $d1->[1], $d2 );
        return $x > $y ? $x : $y;
    }
    elsif( ref $d1 eq 'HASH' and ref $d2 eq 'ARRAY' ) {
        my $x = max_diff( $d2->[0], $d1 );
        my $y = max_diff( $d2->[1], $d1 );
        return $x > $y ? $x : $y;
    }
    elsif( ref $d1 eq 'ARRAY' and ref $d2 eq 'ARRAY' ) {
        my $x  = max_diff( $d1->[0], $d2->[0] );
        my $y  = max_diff( $d1->[1], $d2->[1] );
        my $xx = max_diff( $d1->[0], $d2->[1] );
        my $yy = max_diff( $d1->[1], $d2->[0] );
        return max( $x, $y, $xx, $yy );
    }
    else {
        die "Wtffffff $d1 $d2";
    }
}

Re: Optimizing a naive clustering algorithm
by RichardK (Parson) on Apr 15, 2014 at 17:38 UTC

    I haven't read about the concept (yet!) so I'm just commenting on your code.

    Copying and manipulating those hashes in max_diff is going to be slow, lots of memory copies, and if I've understood correctly you don't need to do it that way. Wouldn't something like this give you the number you need?

    sub max_diff {
        ...
        my $count = 0;
        for (keys %{$hash1}) {
            $count++ unless exists $hash2->{$_};
        }
        for (keys %{$hash2}) {
            $count++ unless exists $hash1->{$_};
        }
        return $count;
    }
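    The win is that exists is a single hash probe, so this counts the symmetric difference in one pass over each hash, with none of the four hash copies the original HASH/HASH branch makes.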
      Ha, yes, I think you're right. I don't think it solves the overall problem, but it's a good catch.
Re: Optimizing a naive clustering algorithm
by roboticus (Chancellor) on Apr 16, 2014 at 00:40 UTC

    BUU:

    I've played around with your code a bit for fun. In one version I used RichardK's suggestion, which gave a bit of a speedup. (I never let it run to completion, as each iteration gets slower and slower....) I then added caching, so the same pair of hashes never has to be compared more than once. That gave a significant speedup.
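    The caching can be as simple as memoizing on an order-independent key built from the pair of references. A minimal sketch of the idea (cached_diff and %diff_cache are just names I'm inventing here; in a long run you'd also want to drop entries for nodes that have been merged away, since Perl can reuse the addresses of freed references):

    use Scalar::Util qw(refaddr);

    my %diff_cache;
    sub cached_diff {
        my( $d1, $d2 ) = @_;
        # Key on the two reference addresses, smaller first, so that
        # ($d1,$d2) and ($d2,$d1) hit the same cache slot.
        my $key = join ':', sort { $a <=> $b } refaddr($d1), refaddr($d2);
        $diff_cache{$key} //= max_diff( $d1, $d2 );
        return $diff_cache{$key};
    }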

    I then tried oiskuu's suggestion of building a half triangular distance matrix, along with a slightly different distance check, and it was able to cluster 1700 items of 5-100 words each in about 25 minutes. (I didn't pay much attention to the word lengths; I just randomly selected a population of words from the dictionary.) I'm kind of curious about what results other monks'll get.
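    The half matrix itself is nothing fancy: compute every pairwise distance once up front and keep only the lower triangle. A rough sketch of that part (set_diff stands in for a counting sub along the lines of RichardK's; this isn't my exact code):

    # $dist[$i][$j] is only filled in for $j < $i (lower triangle).
    my @dist;
    for my $i ( 1 .. $#$items ) {
        for my $j ( 0 .. $i - 1 ) {
            $dist[$i][$j] = set_diff( $items->[$i]{words}, $items->[$j]{words} );
        }
    }

    # Always index with the larger subscript first.
    sub pair_dist {
        my( $i, $j ) = @_;
        return $i > $j ? $dist[$i][$j] : $dist[$j][$i];
    }

    The payoff is that a merge only invalidates the entries for the two merged items, so each iteration computes distances for just the one new cluster instead of re-deriving everything.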

    ...roboticus

    When your only tool is a hammer, all problems look like your thumb.

      Wow, that's pretty impressive. I'm going to have to study it for a while.

        BUU:

        You'll want to figure out how to compute a good distance metric, as mine is definitely no longer the same one you used. You were using the maximum difference between nodes, whereas I clustered them by proximity, as described in the paper.
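        To illustrate the distinction (again, not my actual code, and set_diff stands in for a counting version of the hash comparison): your max_diff takes the maximum over all pairs of leaves, i.e. complete linkage, while merging by proximity means something like taking the closest pair of leaves instead:

        use List::Util qw(min);

        # Like max_diff, but the distance between clusters is the
        # distance between their closest pair of leaves.
        sub min_diff {
            my( $d1, $d2 ) = @_;
            return set_diff( $d1->{words}, $d2->{words} )
                if ref $d1 eq 'HASH' and ref $d2 eq 'HASH';
            return min( min_diff( $d1->[0], $d2 ), min_diff( $d1->[1], $d2 ) )
                if ref $d1 eq 'ARRAY';
            return min( min_diff( $d1, $d2->[0] ), min_diff( $d1, $d2->[1] ) );
        }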

        ...roboticus

        When your only tool is a hammer, all problems look like your thumb.

Re: Optimizing a naive clustering algorithm
by oiskuu (Hermit) on Apr 15, 2014 at 19:54 UTC
      Pure Perl is not required, but the modules I found on CPAN seemed awfully specific to their own niche types of data, usually DNA- or genome-related things. I've actually just implemented a slightly similar idea in pure SQL.