Re^2: Find duplicate files.

by mimosinnet (Beadle)
on Apr 02, 2012 at 14:13 UTC


in reply to Re: Find duplicate files.
in thread Find duplicate files.

I am new to Perl (and to writing code) and have just attended an excellent course organized by Barcelona_pm. I have rewritten lemming's code as an exercise in using Moose. To improve speed, following the suggestions above, files with the same size are identified first, and the md5 value is then calculated only for those files (a minimal sketch of this idea follows below). Because this is baby code, please feel free to recommend any RTFM $manual I should review to improve the code. Thanks for this great language!

(I have to thank Alba from Barcelona_pm for suggestions on how to improve the code).
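
The core of the speed-up can be stated on its own: bucket paths by file size, and only run Digest::MD5 over buckets holding two or more files. Here is a minimal, self-contained sketch of that idea, separate from the Moose code below (the @files list is a hypothetical stand-in for whatever your directory walk returns):

use strict;
use warnings;
use Digest::MD5;

my @files = @ARGV;    # hypothetical input: paths to candidate files

# Pass 1: bucket paths by size; only same-size files can be duplicates.
my %by_size;
push @{ $by_size{ -s $_ } }, $_ for grep { -f } @files;

# Pass 2: compute md5 only inside buckets with more than one member.
my %by_md5;
for my $group ( grep { @$_ > 1 } values %by_size ) {
    for my $path (@$group) {
        open my $fh, '<', $path or next;    # skip unreadable files
        binmode $fh;
        push @{ $by_md5{ Digest::MD5->new->addfile($fh)->hexdigest } }, $path;
        close $fh;
    }
}

# Any md5 bucket with more than one path is a set of duplicates.
for my $paths ( grep { @$_ > 1 } values %by_md5 ) {
    print join( "\n  ", 'Duplicates:', @$paths ), "\n";
}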

This is the definition of the "FileDups" class:

package FileDups;

use Digest::MD5;
use Moose;
use namespace::autoclean;

has 'name'     => (is => 'ro', isa => 'Str',  required => 1);
has 'pathname' => (is => 'ro', isa => 'Str',  required => 1);
has 'max_size' => (is => 'ro', isa => 'Num',  required => 1);
has 'big'      => (is => 'rw', isa => 'Bool', required => 1, default => 0);
has 'unread'   => (is => 'rw', isa => 'Bool', required => 1, default => 0);
has 'dupe'     => (is => 'rw', isa => 'Bool', required => 1, default => 0);
has 'md5'      => (is => 'ro', isa => 'Str',  lazy => 1, builder => '_calculate_md5');
has 'size'     => (is => 'ro', isa => 'Num',  lazy => 1, builder => '_calculate_size');

sub _calculate_size {
    my $self = shift;
    my $size = -s $self->name;
    if (-s $self->name > $self->max_size) {
        $size = $self->max_size;
        $self->big(1);
    }
    return $size;
}

sub _calculate_md5 {
    my $self = shift;
    my $file = $self->pathname;
    my $size = $self->size;
    my $chksum = 0;
    if ($size == $self->max_size) {
        $chksum = 'a' x 32;
    }
    else {
        my $fh;
        unless (open $fh, "<", $file) {
            $self->unread(1);
            return -1;    # return -1 and exit from the subroutine if the file cannot be opened
        }
        binmode($fh);
        $chksum = Digest::MD5->new->addfile($fh)->hexdigest;
        close($fh);
    }
    return $chksum;
}

1;
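
As an illustration only, a FileDups object can be exercised on its own like this (the /tmp/example.txt path is hypothetical; name and pathname are both given as the full path so that -s, used by _calculate_size, resolves correctly outside a File::Find walk):

#!/usr/bin/env perl
use strict;
use warnings;
use lib qw(lib);
use FileDups;

my $file = FileDups->new(
    name     => '/tmp/example.txt',    # hypothetical file
    pathname => '/tmp/example.txt',
    max_size => 99_999_999,
);

# size and md5 are lazy attributes: they are computed on first access.
printf "%s: %d bytes, md5 %s\n", $file->name, $file->size, $file->md5;
print "file could not be read\n" if $file->unread;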

And this is the main program, which lists duplicate files, big files and unread files:

#!/usr/bin/env perl
# References:
# http://drdobbs.com/web-development/184416070

use strict;
use warnings;
use File::Find;
use lib qw(lib);
use FileDups;
use Data::Dumper;

# %dup: hash of md5 => [array of [size, name, pathname]]
# @object: array of FileDups objects
my (%dup, %sizes, @object);
my ($number_files, $number_size_dups) = (0, 0);
my $max_size = 99999999;              # Size above which md5 will not be calculated
my $return   = "Press return to continue \n\n";
my $line     = "-" x 70 . "\n";

while (my $dir = shift @ARGV) {       # Find and classify files
    die "\"$dir\" is not a directory. Give me a directory to search\n"
        unless (-d "$dir");
    File::Find::find(\&wanted, "$dir");
}
print "\n";

foreach (@object) {                   # Calculate md5 for files with equal size
    if ($sizes{$_->size} == 1) {
        $number_size_dups += 1;
        print "$number_size_dups Files with the same size \r";
        $_->dupe(1);                  # The object has another object with the same size
        $_->md5;                      # Calculates md5
    }
}

foreach (@object) {                   # Create a hash of md5 values
    if ($_->dupe == 1) {              # for files with the same size
        if (exists $dup{$_->md5}) {
            push @{$dup{$_->md5}}, [$_->size, $_->name, $_->pathname];
        }
        else {
            $dup{$_->md5} = [ [$_->size, $_->name, $_->pathname] ];
        }
    }
}

print "\n\nDuplicated files\n $line $return";
my $pausa4 = <>;

foreach (sort keys %dup) {            # sort hash by md5sum
    if ($#{$dup{$_}} > 0)             # $_ = key
    {                                 # if we have more than 1 array within the same hash
        printf("\n%8s %10.10s %s\n", "Size", "Name", "Pathname");
        foreach ( @{$dup{$_}} )       # $_ = key, $dup{key} = list of array references
        {                             # iterate through the first dimension of the array
            printf("%8d %10.10s %s\n", @{$_});    # dereference reference to array
        }
    }
}

my $r1 = &list_files("Big files", "big", @object);         # List big files
my $r2 = &list_files("Unread files", "unread", @object);   # List unread files

sub wanted {
    return unless (-f $_);
    my $file = FileDups->new(name => $_, pathname => $File::Find::name, max_size => $max_size);
    $number_files += 1;
    print "$number_files Files seen\r";
    if ($file->size == $max_size) {           # Identifies big files
        $sizes{$file->size} = 0;              # We do not check md5 for big files
    }
    elsif (exists $sizes{$file->size}) {      # There is more than one file with this size
        $sizes{$file->size} = 1;
    }
    else {
        $sizes{$file->size} = 0;              # This is a new size value, not duplicated
    }
    push @object, $file;                      # Put the object in the @object array
}

sub list_files {                      # List objects according to criteria:
    my ($title, $criteria, @object) = @_;     # (a) big files; (b) unread files
    print "\n \n $title \n" . $line;
    my $pausa = <>;
    foreach (@object) {
        if ($_->$criteria) {
            printf(" %10.10s %s\n", $_->name, $_->pathname);
        }
    }
    print $line;
}
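
One possible simplification of the bookkeeping, offered only as a sketch: instead of flagging sizes with 0/1 in %sizes and re-scanning @object, the find callback could push each object onto a per-size list, so the duplicate candidates fall out of the hash directly (wanted_by_size and %objects_by_size are hypothetical names, not part of the code above):

# Sketch of an alternative to the %sizes flag scheme.
my %objects_by_size;

sub wanted_by_size {
    return unless -f $_;
    my $file = FileDups->new(
        name     => $_,
        pathname => $File::Find::name,
        max_size => $max_size,
    );
    push @{ $objects_by_size{ $file->size } }, $file;
}

# After File::Find::find(\&wanted_by_size, $dir), only groups with
# two or more members need the md5 pass:
for my $group ( grep { @$_ > 1 } values %objects_by_size ) {
    for my $file (@$group) {
        $file->dupe(1);
        $file->md5;    # force the lazy md5 calculation
    }
}

This keeps a single data structure and removes the string flags, at the cost of holding one array per distinct size.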
