I'm having trouble understanding your terminology. When you say you want to "push the downlink load even further", do you mean you want to reduce the downlink bandwidth requirement even further, e.g. shrink the downloaded file size more?
I'm not sure, but my guess is that the solution is probably rather domain dependent, and in particular depends on how much a priori knowledge is shared by both client and server.
Any chunk of information that is already known to both could be "compressed" into a single short datum (e.g. a UUID).
For example - if the data being downloaded is always similar (e.g. genome sequence data, or astronomical imagery), then you could potentially always use the same dictionary, or set of dictionaries, for compression; store those dictionaries on both client and server ahead of time, and they never need to be downloaded.
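As a concrete sketch of that idea: Python's `zlib` supports preset dictionaries via the `zdict` parameter, so a dictionary agreed on out-of-band can shrink each transfer. The dictionary contents and payload below are made up for illustration.

```python
import zlib

# Hypothetical shared dictionary: byte sequences that commonly occur in the
# data, agreed on ahead of time by both client and server.
SHARED_DICT = b'{"sensor_id": "", "timestamp": "", "reading": }' * 4

def compress_with_dict(data: bytes, zdict: bytes) -> bytes:
    # The preset dictionary seeds the DEFLATE window, so matches against it
    # can be encoded as back-references instead of literal bytes.
    c = zlib.compressobj(level=9, zdict=zdict)
    return c.compress(data) + c.flush()

def decompress_with_dict(blob: bytes, zdict: bytes) -> bytes:
    # The decompressor must be primed with the exact same dictionary.
    d = zlib.decompressobj(zdict=zdict)
    return d.decompress(blob) + d.flush()

payload = b'{"sensor_id": "a17", "timestamp": "2024-01-01T00:00:00Z", "reading": 21.5}'
with_dict = compress_with_dict(payload, SHARED_DICT)
without_dict = zlib.compress(payload, 9)

assert decompress_with_dict(with_dict, SHARED_DICT) == payload
print(len(with_dict), len(without_dict))
```

On small, structurally repetitive payloads like this, the dictionary-primed blob is typically noticeably smaller than the plain one; for large one-off payloads the gain shrinks, which is where the statistical tuning mentioned below comes in. Tools like `zstd --train` automate building such dictionaries from a sample corpus.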
Is there a "sweet spot" for this? Not that I know of. You might have to determine one for your specific scenario statistically.