Removing Duplicate Files

by tomazos (Deacon)
on Jan 08, 2005 at 11:24 UTC

tomazos has asked for the wisdom of the Perl Monks concerning the following question:

I have a directory tree that contains a total of 4,000 files and a total of 20 gig of data.

Most files have either 0, 1 or 2 duplicates within this directory.

I want to remove duplicates so that each file occurs only once.

Any ideas? Sounds like a 1-liner challenge to me. :)

Regards,
Andrew.


Andrew Tomazos  |  andrew@tomazos.com  |  www.tomazos.com

Replies are listed 'Best First'.
Re: Removing Duplicate Files
by borisz (Canon) on Jan 08, 2005 at 11:49 UTC
    Ok, morning fun.
    #!/usr/bin/perl
    my $dir = '/your/directory';

    package MyFile;
    use Digest::MD5;

    sub new {
        my ( $class, $name ) = @_;
        my $self = { name => $name };
        bless $self, $class;
    }
    sub name { $_[0]->{name} }
    sub md5 {
        my $self = shift;
        $self->{md5} ||= do {
            open my $fh, "<", $self->name or die $!;
            binmode $fh;
            Digest::MD5->new->addfile($fh)->digest;
        };
    }

    package main;
    use File::Find;

    my %files;
    File::Find::find(
        {
            wanted => sub {
                -f && push @{ $files{ -s _ } }, MyFile->new($File::Find::name);
            }
        },
        $dir
    );

    my @drop;
    for ( keys %files ) {
        my %t;
        if ( @{ $files{$_} } > 1 ) {
            for ( @{ $files{$_} } ) {
                push @drop, $_->name if $t{ $_->md5 }++;
            }
        }
    }
    local $, = $/;
    print @drop;
    Boris
Re: Removing Duplicate Files
by gopalr (Priest) on Jan 08, 2005 at 12:20 UTC

    Hi

    @ARGV = 'c:/temp/';
    use File::Find ();

    $whole = '';
    sub find(&@) { &File::Find::find }
    *name = *File::Find::name;

    find {
        if ( $name =~ m#^(.*/)([^/]+$)# ) {
            my $path = $1;
            my $file = $2;
            if ( $whole =~ m#<file>\Q$file\E</file>#si ) {
                unlink("$name") or warn "couldn't unlink $name: $!";
                print "\ndeleted $name";
            }
            else {
                $whole .= "\n" . '<path>' . $path . '</path><file>' . $file . '</file>';
            }
        }
    } @ARGV;

    Thanks,

    Gopal.R

Re: Removing Duplicate Files
by dave_the_m (Monsignor) on Jan 08, 2005 at 11:38 UTC
    By "duplicate file" I'm guessing you mean files with identical content, rather than files with the same name but in different subdirectories??

    In the former case, read in each file, calculate an MD5 checksum of its contents, and use that as a key to a hash, whose values are the pathnames.
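
    A minimal sketch of that approach (the directory is a placeholder, and it only prints the duplicate paths rather than deleting anything):

    use strict;
    use warnings;
    use File::Find;
    use Digest::MD5;

    my $dir = '/your/directory';    # placeholder
    my %seen;                       # MD5 digest => first path seen with that digest

    find(
        sub {
            return unless -f;
            open my $fh, '<', $_
                or do { warn "can't open $File::Find::name: $!"; return };
            binmode $fh;
            my $md5 = Digest::MD5->new->addfile($fh)->hexdigest;
            if ( exists $seen{$md5} ) {
                print "$File::Find::name duplicates $seen{$md5}\n";
            }
            else {
                $seen{$md5} = $File::Find::name;
            }
        },
        $dir
    );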

    Dave.

Re: Removing Duplicate Files
by zentara (Archbishop) on Jan 08, 2005 at 13:40 UTC
    This dupfinder may be useful to you, since it samples big files before computing a full md5sum, which should help with files as large as yours.
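
    Not dupfinder itself, but a minimal sketch of that sampling idea (directory is a placeholder): hash only the first 64K of each file, together with its size, as a cheap pre-filter, and compute the full MD5 only where both collide.

    use strict;
    use warnings;
    use File::Find;
    use Digest::MD5 qw(md5_hex);

    my $dir = '/your/directory';    # placeholder
    my %by_sample;                  # "size:sample digest" => [ paths ]

    find(
        sub {
            return unless -f;
            my $size = -s _;
            open my $fh, '<', $_ or return;
            binmode $fh;
            my $buf = '';
            read $fh, $buf, 65536;    # sample only the first 64K
            push @{ $by_sample{ "$size:" . md5_hex($buf) } }, $File::Find::name;
        },
        $dir
    );

    # full MD5 only for files whose size and sample digest both collide
    for my $group ( grep { @$_ > 1 } values %by_sample ) {
        my %full;    # full digest => first path seen
        for my $path (@$group) {
            open my $fh, '<', $path or next;
            binmode $fh;
            my $md5 = Digest::MD5->new->addfile($fh)->hexdigest;
            print "$path duplicates $full{$md5}\n" if exists $full{$md5};
            $full{$md5} ||= $path;
        }
    }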

    I'm not really a human, but I play one on earth. flash japh
Re: Removing Duplicate Files
by ambrus (Abbot) on Jan 08, 2005 at 13:49 UTC
Re: Removing Duplicate Files
by gaal (Parson) on Jan 08, 2005 at 18:01 UTC
    Not a one-liner.

    You need to partition all files by size. Then for each size, if there are exactly two files, compare their contents and possibly delete one. If there are more than two files, you have to partition the files of the current size by their fingerprint (CRC, MD5, SHA1, whatever); and for each partition compare-and-possibly-delete-one between all pairs (each partition possibly shrinking as you go, and thus candidate comparisons may be cancelled before they are performed).

    This, first of all, is safe. No file is mistakenly deleted because a deletion always follows a full content compare. We don't trust the fingerprint function, but rather use it as an indication that something is suspicious. Then, it is reasonably efficient. No comparisons are made if they are bound to fail.

    If this were a little smaller, we *might* have been able to get away with reading a block of each file simultaneously. But as it is we'll run out of file handles first thing :)
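
    A simplified sketch of that plan (it skips the exactly-two-files shortcut, and within a fingerprint partition it only compares each file against the first one, so treat it as an outline rather than the full pairwise scheme; the directory is a placeholder and nothing is actually unlinked):

    use strict;
    use warnings;
    use File::Find;
    use File::Compare;
    use Digest::MD5;

    my $dir = '/your/directory';    # placeholder
    my %by_size;

    # first partition: by file size
    find( sub { push @{ $by_size{ -s _ } }, $File::Find::name if -f }, $dir );

    for my $same_size ( grep { @$_ > 1 } values %by_size ) {

        # second partition: fingerprint only the groups that need it
        my %by_md5;
        for my $path (@$same_size) {
            open my $fh, '<', $path or next;
            binmode $fh;
            push @{ $by_md5{ Digest::MD5->new->addfile($fh)->hexdigest } }, $path;
        }

        for my $candidates ( grep { @$_ > 1 } values %by_md5 ) {
            my ( $keep, @rest ) = @$candidates;
            for my $path (@rest) {

                # don't trust the fingerprint: full content compare before flagging
                if ( compare( $keep, $path ) == 0 ) {
                    print "would remove $path (same content as $keep)\n";
                    # unlink $path or warn "couldn't unlink $path: $!";
                }
            }
        }
    }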

Re: Removing Duplicate Files
by graff (Chancellor) on Jan 08, 2005 at 23:31 UTC
    Well, this is kind of a long line, and it assumes that you have the gnu or bsd "find", "xargs" and "md5" available on your system as standard utilities (in addition to perl). It could be shorter, but for a few extra characters in the script to use file size together with MD5, you get a lot of extra safety (it's possible to get the same MD5 signature from two files with different contents, but this is far less likely when the files are the same size).
    # output of "md5" is one line per file: "MD5 (filename) = signature" find . -type f -print0 | xargs -0 md5 | perl -ne '/MD5 \((.*)\) = (\S+)/ or next; ($f,$m)=($1,$2); $s=-s $f; +if($h{"$m $s"}){unlink $f} $h{"$m $s"}++'
    As suggested by others, you can use the Digest::MD5 module in the perl script (and make the script a bit longer), in case you want to save run time by only computing MD5 signatures on sets of files that are the same size.
Re: Removing Duplicate Files
by bart (Canon) on Jan 08, 2005 at 18:07 UTC
    Sometimes not using perl, or even writing your own, is the better idea. As recommended on MarkTAW.com: Doublekiller. Assuming you're running on Windows.
Re: Removing Duplicate Files
by tfrayner (Curate) on Jan 09, 2005 at 12:19 UTC
    Dupseek is a pretty good Perl implementation of what you're after, which has been around for a while now. I've never tested it on a data set of this size, though.

    Tim
