PerlMonks |
I've run into an interesting problem while testing a new piece of code I'm writing. During testing, I pointed the code at several dozen websites (static content, dynamic content, images, PDFs, etc.) and it all worked great. I was checking the remote end's Content-Type and Content-Length headers with a HEAD request, to decide whether to fetch the resource at all: if the size reported in Content-Length was too large, I'd skip the fetch.
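The HEAD-based check looked roughly like this (a reconstruction, not my exact code; the 1 MB cap, the should_fetch name, and the command-line handling are all made up for illustration):

```perl
use strict;
use warnings;
use LWP::UserAgent;

my $MAX_BYTES = 1_048_576;    # arbitrary 1 MB cap for illustration

# Decide from a HEAD response whether the body is worth fetching.
sub should_fetch {
    my ($res, $cap) = @_;
    return 0 unless $res->is_success;
    my $len = $res->header('Content-Length');
    return 1 unless defined $len;    # no length reported -- the problem case
    return $len <= $cap ? 1 : 0;
}

# Only hit the network when a URL is supplied on the command line.
if (@ARGV) {
    my $ua  = LWP::UserAgent->new( timeout => 10 );
    my $res = $ua->head( $ARGV[0] );
    print should_fetch( $res, $MAX_BYTES ) ? "fetch\n" : "skip\n";
}
```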
This was working great until I realized that a lot of servers don't send a Content-Length header. DOH! Even sites serving static, flat text or HTML content aren't sending one. In the above snippet I'm using HEAD precisely to avoid doing a GET on a large file only to throw the content away after I'd already fetched it. So I started trying to figure out a way to determine the length of the remote content without actually fetching the content itself, and this is where I'm stuck. I could do this:
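That is, a plain GET with the size check pushed to after the download (again a sketch, since the cap and the small_enough helper are made up for illustration):

```perl
use strict;
use warnings;
use LWP::UserAgent;

my $MAX_BYTES = 1_048_576;    # same illustrative cap

# The size test only happens after the bytes have already crossed the wire.
sub small_enough {
    my ($bytes, $cap) = @_;
    return $bytes <= $cap;
}

if (@ARGV) {
    my $ua  = LWP::UserAgent->new( timeout => 10 );
    my $res = $ua->get( $ARGV[0] );
    if ( $res->is_success && small_enough( length( $res->content ), $MAX_BYTES ) ) {
        # process the content here
    }
}
```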
But now I'm doing a GET, and if someone points the tool at a 20-gigabyte file, or a DVD ISO, or something like that, it'll drown my bandwidth and effectively DoS the tool for other users. Is there some other way to do this, without doing a full fetch of the remote resource? Update: This sort-of works, but for sites that don't send a Content-Length header I take a double hit: HEAD first, then GET second. Is there a better way?
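For the record, the double-hit version is roughly the following. I've added LWP::UserAgent's max_size option as a backstop, so the fallback GET aborts mid-transfer instead of swallowing a 20-gig file; the cap value and the helper name are mine, but max_size and the Client-Aborted header it sets are documented LWP behaviour:

```perl
use strict;
use warnings;
use LWP::UserAgent;

my $MAX_BYTES = 1_048_576;    # illustrative cap

# True if the server admitted, via HEAD, that the body is over the cap.
sub reported_too_big {
    my ($res, $cap) = @_;
    my $len = $res->header('Content-Length');
    return defined $len && $len > $cap;
}

if (@ARGV) {
    my $url = $ARGV[0];

    # max_size makes LWP abort the transfer once the body exceeds the cap
    # and flag the response with a "Client-Aborted: max_size" header.
    my $ua = LWP::UserAgent->new( timeout => 10, max_size => $MAX_BYTES );

    my $head = $ua->head($url);
    if ( reported_too_big( $head, $MAX_BYTES ) ) {
        print "skip: server reports too many bytes\n";
    }
    else {
        my $res = $ua->get($url);    # the second hit
        if ( ( $res->header('Client-Aborted') // '' ) eq 'max_size' ) {
            print "skip: body exceeded cap mid-transfer\n";
        }
        elsif ( $res->is_success ) {
            # content is within the cap -- process it
        }
    }
}
```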