Re: Recursive search for duplicate files

by sh1tn (Priest)
on Nov 27, 2007 at 13:35 UTC


in reply to Recursive search for duplicate files

A much better way is to use MD5 for file comparison.
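
For illustration, a minimal sketch of that idea using File::Find and Digest::MD5 (not code from this thread; error handling kept deliberately simple):

    use strict;
    use warnings;
    use File::Find;
    use Digest::MD5;

    my %seen;                      # hex digest => first path seen with it
    my $dir = shift @ARGV || '.';  # directory to scan (defaults to cwd)

    find(sub {
        return unless -f $_;
        open my $fh, '<', $_
            or do { warn "can't open $File::Find::name: $!"; return };
        binmode $fh;
        my $md5 = Digest::MD5->new->addfile($fh)->hexdigest;
        close $fh;
        if (exists $seen{$md5}) {
            print "$File::Find::name is a duplicate of $seen{$md5}\n";
        } else {
            $seen{$md5} = $File::Find::name;
        }
    }, $dir);

Run it with a directory argument; any file whose digest has already been seen is reported against the first file that produced that digest.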



Replies are listed 'Best First'.
Re^2: Recursive search for duplicate files
by moritz (Cardinal) on Nov 27, 2007 at 13:41 UTC
    If used naively, that doesn't work out well for large files, because they have to be read from disc entirely.

    If you care about performance, you might just want to hash the first 5% (or the first 1k or whatever) and see whether there are any collisions; if there are, you can still look at the entire file.
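
    A rough sketch of that two-pass idea, assuming a cheap digest over just the first few KB and a full digest only where the cheap ones collide (the chunk size and helper names are illustrative, not from the thread):

        use strict;
        use warnings;
        use Digest::MD5 qw(md5_hex);

        my $CHUNK = 4096;   # bytes of the file head to hash first (tune as needed)

        # Cheap digest of only the first $CHUNK bytes of a file.
        sub head_digest {
            my ($path) = @_;
            open my $fh, '<', $path or return;
            binmode $fh;
            read $fh, my $buf, $CHUNK;
            close $fh;
            return md5_hex(defined $buf ? $buf : '');
        }

        # Full-file digest, used only when head digests collide.
        sub full_digest {
            my ($path) = @_;
            open my $fh, '<', $path or return;
            binmode $fh;
            my $d = Digest::MD5->new->addfile($fh)->hexdigest;
            close $fh;
            return $d;
        }

        # Given a list of paths, return groups of files that are truly identical.
        sub duplicate_groups {
            my @files = @_;
            my %by_head;
            push @{ $by_head{ head_digest($_) || '' } }, $_ for @files;

            my @groups;
            for my $candidates (grep { @$_ > 1 } values %by_head) {
                my %by_full;
                push @{ $by_full{ full_digest($_) || '' } }, $_ for @$candidates;
                push @groups, grep { @$_ > 1 } values %by_full;
            }
            return @groups;   # list of array refs, each holding identical files
        }

    Only files whose head digests match ever get read in full, so large, clearly distinct files are rejected after a single small read.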

      I can agree. Another measure, with performance in mind, is to compare file sizes before doing anything else.
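
      Along those lines, a sketch that buckets files by size first and hashes only files that share a size (again just an illustration, not code from the thread):

          use strict;
          use warnings;
          use File::Find;
          use Digest::MD5;

          my $dir = shift @ARGV || '.';

          # Bucket every file by its size; only same-size files can be duplicates.
          my %by_size;
          find(sub { push @{ $by_size{ -s _ } }, $File::Find::name if -f $_ }, $dir);

          # Hash only within buckets that hold more than one file.
          for my $paths (grep { @$_ > 1 } values %by_size) {
              my %by_md5;
              for my $path (@$paths) {
                  open my $fh, '<', $path or next;
                  binmode $fh;
                  push @{ $by_md5{ Digest::MD5->new->addfile($fh)->hexdigest } }, $path;
                  close $fh;
              }
              print "duplicates: @$_\n" for grep { @$_ > 1 } values %by_md5;
          }

      Files with a unique size are never opened at all, which combines nicely with the partial-hash trick above.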

