http://qs321.pair.com?node_id=468071

Recently I learned I had duplicate copies of pictures from my camera on my system. Approximately 990 of them. Rather than compare pictures side by side, I decided to compute the SHA1 (160-bit) checksums of the 'duplicate' pictures and compare them to the SHA1 checksums of the 'master' files.

I'm sure my code could use some polishing. Comments welcome.

Update - Thanks to all for the comments and suggestions to make it work more efficiently. When I wrote this, it was written as a one-time shot. The duplicate files got there in the first place through some wizardry of the BOFH.

#!/usr/bin/perl -w
use strict;
use File::Find;

my $dupedir = '/Big-Drive/NIKON-Pictures/Duplicates/';
my $count;
my $calccount;
my $compared;
my @files;
my @calcsum;

open(OUT, ">results.txt") || die "Can't open file $!";

find(\&files, $dupedir);
&calcsum;
# &printsums;
&comparefiles;

print "$count total files found\n";
print "$calccount total files calculated SHA1 sums\n";
print "$compared total files compared to originals\n";

# Collect every plain file under the duplicates directory.
sub files {
    return unless -f $File::Find::name;
    $count++;
    push(@files, $File::Find::name);
}

# Run sha1sum on each duplicate and keep its "checksum  filename" output line.
sub calcsum {
    foreach (@files) {
        print "Computing sha1sum for $_\n";
        push(@calcsum, `sha1sum "$_"`);   # quoted in case of spaces in names
        $calccount++;
    }
}

sub printsums {
    foreach (@calcsum) {
        print;
    }
}

# For each duplicate, derive the master path (same path minus 'Duplicates/'),
# checksum the master, and compare the two sums.
sub comparefiles {
    my ($sum, $file, $calcsum, $rest);
    foreach (@calcsum) {
        ($sum, $file) = split /\s+/;
        $file =~ s/Duplicates\///;
        if (-f $file) {
            print     "Calculating SHA1 checksum for file $file";
            print OUT "Calculating SHA1 checksum for file $file";
            ($calcsum, $rest) = split /\s+/, `sha1sum "$file"`;
            if ($calcsum eq $sum) {
                print     " ----> OK !\n";
                print OUT " ----> OK !\n";
                $compared++;
            } else {
                print     "\n****** ERROR ****** Checksums do not match for $file\n";
                print OUT "\n****** ERROR ****** Checksums do not match for $file\n";
            }
        } else {
            print     "$file not in master directory ... skipping\n";
            print OUT "$file not in master directory ... skipping\n";
        }
    }
}

close OUT;
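One efficiency suggestion worth sketching: spawning an external sha1sum process for every file can be avoided with the core Digest::SHA module, which hashes files directly in Perl. This is a minimal sketch, not the script above; the subroutine names sha1_of_file and same_contents are my own, hypothetical helpers.

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Digest::SHA;

# Compute the hex SHA1 digest of a file without shelling out to sha1sum.
sub sha1_of_file {
    my ($path) = @_;
    my $sha = Digest::SHA->new(1);   # 1 selects SHA-1 (160-bit)
    $sha->addfile($path);            # reads and hashes the file in chunks
    return $sha->hexdigest;
}

# Compare a duplicate against its master by digest (paths are examples).
sub same_contents {
    my ($dupe, $master) = @_;
    return sha1_of_file($dupe) eq sha1_of_file($master);
}
```

Beyond skipping a fork/exec per file, this also sidesteps the shell-quoting issues that backticks have with odd filenames.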