http://qs321.pair.com?node_id=1233618


in reply to Data compression by 50% + : is it possible?

Impossible! If your data is truly random.

You need a gnat's under 6.5 bits to represent each of your values and are currently using 8 bits. If you could pack them exactly (meaning using half bits), the best saving you could achieve is 1 - 6.5/8 = 18.75%.
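
To make that arithmetic concrete, here is a minimal sketch (Python). The alphabet size of 90 is an assumption inferred from the 6.5-bit figure and the count of 6-byte groupings below, not something stated explicitly:

    import math

    SYMBOLS = 90                         # assumed: each byte holds one of ~90 possible values

    bits_per_value = math.log2(SYMBOLS)  # ~6.49 bits: "a gnat's under 6.5"
    best_saving = 1 - 6.5 / 8            # packing at exactly 6.5 bits per value
    print(f"{bits_per_value:.3f} bits/value -> best saving {best_saving:.2%}")  # 18.75%

    # "Packing exactly" means, e.g., two values sharing 13 bits: 90**2 = 8100 <= 2**13.
    print(SYMBOLS ** 2 <= 2 ** 13)       # True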

If you try dictionary lookup:

  1. The dictionary index for 2-byte pairings requires 13 bits, which rounds up to 2 bytes; no saving over the original 16 bits.
  2. 3-byte pairings require 19.5 bits (20 in practice), which rounds up to 3 bytes; no saving over the original 24 bits.
  3. The best you could do is use a 40-bit (5-byte) number to represent each of the 531,441,000,000 possible 6-byte groupings, giving a 1 - 40/48 = 16.67% saving (see the sketch after this list).
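
Those three figures are easy to reproduce. A short sketch under the same assumptions (90 possible values per byte, dictionary codes rounded up to whole bytes, which is what the 40-bit figure suggests):

    import math

    SYMBOLS = 90                                      # assumed alphabet size, as above

    # Width of a dictionary index covering every possible n-byte grouping,
    # rounded up to whole bytes, versus the n*8 bits those bytes occupy now.
    for n in (2, 3, 6):
        entries = SYMBOLS ** n
        index_bits = math.ceil(math.log2(entries))    # 13, 20, 39
        aligned_bits = math.ceil(index_bits / 8) * 8  # 16, 24, 40
        saving = 1 - aligned_bits / (n * 8)
        print(f"{n}-byte groups: {entries:>15,} entries -> "
              f"{aligned_bits}-bit code vs {n * 8} bits ({saving:.2%} saving)")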

Beyond that, you're into the luck of the draw. Some random datasets might contain enough common (sub)sequences to allow Lempel–Ziv or similar to get close to 50%, but for other datasets the same algorithm will produce barely any reduction (or even an increase).
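
As a rough feel for that luck of the draw, the hypothetical experiment below pushes uniformly random 90-valued bytes (the same assumption as above) through Python's zlib, standing in for a Lempel–Ziv style compressor; on data like this it gets nowhere near 50%:

    import random
    import zlib

    SYMBOLS = 90
    rng = random.Random(1)                  # fixed seed so the run is repeatable
    data = bytes(rng.randrange(SYMBOLS) for _ in range(1_000_000))

    packed = zlib.compress(data, 9)
    saving = 1 - len(packed) / len(data)
    print(f"{len(data):,} -> {len(packed):,} bytes: {saving:.2%} saving")
    # Expect a saving in the high teens at best (near the entropy limit), nowhere near 50%.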

See Kolmogorov complexity for more.


With the rise and rise of 'Social' network sites: 'Computers are making people easier to use every day'
Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
"Science is about questioning the status quo. Questioning authority". The enemy of (IT) success is complexity.
In the absence of evidence, opinion is indistinguishable from prejudice. Suck that fhit