web page refreshing problem

by db2admin (Acolyte)
on Feb 08, 2003 at 08:50 UTC ( [id://233668] )

db2admin has asked for the wisdom of the Perl Monks concerning the following question:

I seem to be having a problem with an image being cached after using Perl's unlink function. The script first unlinks an image file, then renames another image to the name of the file that was just unlinked. The script then prints a web page that references that file name, which should now show the new image. I verified on disk that the new image took the place of the old one, but the web page retains the old image. Upon opening another browser, I do see the new image displayed as it should be. Note that the HTML is not hard-coded in an HTML file but printed from within the Perl script. Does anyone have any suggestions as to how I might get around this problem? I have also tried File::Copy's copy but get the same results. Thank you.
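A minimal sketch of the sequence described (the file names are placeholders, and error checking is included because unlink and rename both return false on failure):

#!/usr/local/bin/perl
use strict;
use warnings;
use CGI;

my $q = CGI->new;

# Swap the images: remove the old file, then move the new one into its name.
unlink 'images/current.jpg'
    or warn "unlink failed: $!";
rename 'images/new.jpg', 'images/current.jpg'
    or warn "rename failed: $!";

# Print the page that references the (now replaced) image.
print $q->header( -type => 'text/html' );
print qq{<html><body><img src="images/current.jpg" alt="current image"></body></html>\n};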

David K.

Replies are listed 'Best First'.
Re: web page refreshing problem
by tachyon (Chancellor) on Feb 08, 2003 at 11:10 UTC

    Your problem is due to image caching, which is done to improve browsing speed but causes problems when content changes. Images are cached by the browser locally and also potentially at one or more proxies between your browser and the website. You can suppress caching using the Expires, Pragma: no-cache, and Cache-Control: no-cache directives in your HTTP headers, but sadly these are not always respected by proxies and/or browsers. For discussion, see the links at Re: http header / browser caching issues, where you will find previous answers to this question with code and lots more discussion.

    #!/usr/local/bin/perl
    use CGI;
    my $query = CGI->new;
    print $query->header(
        -type            => 'text/html',
        -expires         => '-1d',
        -Pragma          => 'no-cache',
        '-Cache-Control' => 'no-cache',   # quoted because of the hyphen
    );

    # This produces HTTP headers which you could easily roll yourself
    # if you don't want to use CGI.pm or CGI::Simple:
    use POSIX;
    my $expires = POSIX::strftime( "%a, %d %b %Y %H:%M:%S GMT", gmtime( time() - 24*3600 ) );
    my $now     = POSIX::strftime( "%a, %d %b %Y %H:%M:%S GMT", gmtime( time() ) );
    print "Expires: $expires\n",
          "Date: $now\n",
          "Cache-Control: no-cache\n",
          "Pragma: no-cache\n",
          "Content-Type: text/html\n\n";

    __DATA__
    Expires: Fri, 07 Feb 2003 12:11:25 GMT
    Date: Sat, 08 Feb 2003 12:11:25 GMT
    Cache-Control: no-cache
    Pragma: no-cache
    Content-Type: text/html; charset=ISO-8859-1

    When you add these headers, you will find that using the back button gives a "page has expired" message and that the image is pulled down from the website every time.

    tachyon

    s&&rsenoyhcatreve&&&s&n.+t&"$'$`$\"$\&"&ee&&y&srve&&d&&print

      Thank you. I will try your recommendation.

      David K.

        That doesn't seem to work. I used the POSIX recommendation and verified that the header info is printing the correct date to expire the image. Guess I'll keep searching for a solution.

        David K.

Re: web page refreshing problem
by cees (Curate) on Feb 08, 2003 at 15:36 UTC

    Just to add another debugging option to the excellent answer tachyon provided above: to see if it is the browser that is caching the file, try a full refresh of the page by clicking the refresh button while holding down the Ctrl or Shift key (depending on the browser you are using). This should refresh every element on the page regardless of what is in the local cache.

    Of course, you could still be hitting a caching proxy server along the way, and it's possible that some browsers don't support the full refresh, but it may help narrow down where the caching is happening.

Re: web page refreshing problem
by fsn (Friar) on Feb 09, 2003 at 02:00 UTC
    This sounds like a caching problem to me. You should tune your webserver to send HTTP headers with all the correct no-caching statements for the specific URL, as others have suggested.
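    For example, if the server happens to be Apache with mod_headers enabled, something along these lines in the configuration should do it (the file name here is an assumption):

    <Files "theimage.png">
        # Tell browsers and proxies not to cache this particular image.
        Header set Cache-Control "no-cache"
        Header set Pragma "no-cache"
        Header set Expires "0"
    </Files>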

    There is another, sneaky and hacky, way around this, which may or may not work. You could reference the image as http://www.theserver.se/pics/theimage.png?a-randomly-generated-string, where http://www.theserver.se/pics/theimage.png is the actual reference and what follows the ? is a random garbage string, generated differently on each access.

    Basically, you are sending an argument to the image. Of course, only CGI scripts can take arguments (not really; I think in some cases HTML pages can take arguments), so the argument is never used (though it's there if you check the environment). But since you generate a new URL each time, the cache will miss it and you make sure you get the correct image. To be honest, I have only tested this with Konqueror, but it works there.
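    A rough sketch of the trick from a Perl CGI script (the image URL is the example one above; the rest is made up for illustration):

    #!/usr/local/bin/perl
    use strict;
    use warnings;
    use CGI;

    my $q = CGI->new;

    # A garbage string that differs on every request, so browsers and
    # proxies see a brand-new URL and cannot serve the image from cache.
    my $nocache = time() . '-' . int rand 1_000_000;

    print $q->header( -type => 'text/html' );
    print qq{<img src="http://www.theserver.se/pics/theimage.png?$nocache" alt="image">\n};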

    So, now you know the trick. Here's the downside: you will pollute the caches along the way. The cache in your web browser will only hurt you and your users, but if there are caching proxies between the server and the browser, some administrator will get angry as the hit rate declines. Some caches are configured not to cache URLs with ? or cgi-bin in them; you could perhaps work those strings into the garbage string as well.

    I've told you how you COULD do it, not how you SHOULD do it...
