"be consistent" | |
PerlMonks
Re^2: scalable duplicate file remover
by spx2 (Deacon) on Mar 03, 2008 at 08:53 UTC ( [id://671607] )
First of all, thank you very much for the critique; it is very welcome, and I will use it to improve the program.

1) Why do you think the current method of opening the files does not yield correct results? (I compared my SHA1 results against the Unix sha1sum utility and they came out OK; that's why I'm asking.)

2) You are right; I will do this.

3) OK, I understand. Where could I read more about this?

4) Having read the documentation, and since a number in base 10 should always need more digits than its base-16 representation, I don't understand how it could be shorter in base 10. I don't get why they say I will get a shorter string in a lower base. They also talk about reusing a single SHA1 object, since its reset() method can clear out the old data. Do you think this will speed things up?
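As a side note on the digest-length question in point 4, here is a small shell sketch (using the same sha1sum utility mentioned above, plus openssl and base64, which are assumed to be available): a SHA-1 digest is 160 bits, so its hex (base-16) form is always 40 characters, while a base-64 rendering of the same bytes is shorter, about 28 characters, because each character carries 6 bits instead of 4. The shorter string comes from a *higher* base, not a lower one.

```shell
# Hex (base 16) digest: 4 bits per character, so 160 bits -> 40 chars.
printf 'hello' | sha1sum | cut -d' ' -f1

# Base64 rendering of the same 20 raw digest bytes: 6 bits per
# character, so ~28 chars (including '=' padding) -- shorter than hex.
printf 'hello' | openssl dgst -sha1 -binary | base64
```

So the trade-off is purely in how the fixed-size binary digest is printed, not in the hashing itself.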
In Section: Cool Uses for Perl