> Just to settle "is the code right" issue.

The code is right.
The questions are rather whether the distribution of values is like the one simulated by your random numbers, and whether you need the "readable" character range or a binary file is OK. Please look at the probability output from the script I posted here and tell us if it's accurate, or even better, calculate the frequencies of groups from your real data. I expect a lossless 50% reduction to be easy because of the unused gaps in your data; a sketch of that idea follows below. Better compression will need Huffman coding, but for that you need the frequency table anyway. FWIW, there are two Huffman modules on CPAN and one script here in the archives.
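As a minimal sketch of the "50% from unused gaps" point, assuming your data really uses at most 16 distinct symbols (the sample string and the alphabet here are made up for illustration): count the symbol frequencies (which you need for Huffman anyway), then map each symbol to a 4-bit nibble and pack two per byte.

    #!/usr/bin/perl
    use strict;
    use warnings;

    # Hypothetical input: symbols drawn from a small alphabet
    # (assumed here to be at most 16 distinct characters).
    my $data = '1203120312457 870912 345';

    # Frequency table -- the same table a Huffman coder would need.
    my %freq;
    $freq{$_}++ for split //, $data;
    printf "'%s' => %d\n", $_, $freq{$_} for sort keys %freq;

    # With 16 or fewer symbols, each one fits in 4 bits, so two
    # symbols pack into one byte: a lossless 50% reduction.
    my @alphabet = sort keys %freq;
    die "alphabet too large for 4-bit packing\n" if @alphabet > 16;

    my %code;                                    # symbol -> hex digit
    @code{@alphabet} = ('0' .. '9', 'a' .. 'f')[0 .. $#alphabet];

    (my $hex = $data) =~ s/(.)/$code{$1}/gs;     # symbols to hex digits
    my $packed = pack 'H*', $hex;                # two hex digits per byte
                                                 # (odd length pads one nibble)
    printf "original: %d bytes, packed: %d bytes\n",
        length($data), length($packed);

    # Decoding is the reverse: unpack 'H*', then translate each hex
    # digit back through the inverse of %code (store the original
    # length if it can be odd).

If the frequencies turn out to be strongly skewed, feeding that same %freq table into one of the CPAN Huffman modules should beat the flat 4-bit packing.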
Update: And please post proper replies; I only found your update by accident.
Cheers Rolf
In reply to Re: Data compression by 50% + : is it possible?
by LanX