PerlMonks
making a loop script with a remote URL call faster

by brandonm78 (Initiate)
on Jan 15, 2022 at 02:09 UTC ( [id://11140461] )

brandonm78 has asked for the wisdom of the Perl Monks concerning the following question:

Hi Monks,

Hoping you can help give me some suggestions on how to speed up this script to avoid the remote URL lag I'm experiencing every 60 seconds.

I am using:

my $pricing = 0;
my $time = 0;

while (1) {
    if (time() >= ($time + 60)) {   # updates the pricing on initial run and every 60 seconds
        # Net::Curl to remote URL... request here.
        $pricing = [from net::curl];
        $time = time();
    }
    # ... continue on with my perl code (needs the $pricing variable to work)
}

The issue I have is that every 60 seconds my script is hit with a lag due to the remote URL call. I was wondering if there may be a way to turn the Net::Curl call into a server-side localhost script that updates the pricing constantly, listens for the client (my main script), and responds immediately with no lag, speeding up the process. Or perhaps fork off the Net::Curl call inside my main script and update $pricing on a future loop iteration once the response comes back from the remote URL, with a max of one child process at a time.
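A rough sketch of the fork idea, with fetch_price standing in as a made-up placeholder for the Net::Curl request, a 5-iteration loop instead of an infinite one, and a shortened delay so the example finishes quickly:

```perl
use strict;
use warnings;
use IO::Select;

# Hypothetical stand-in for the real Net::Curl request.
sub fetch_price { sleep 1; return "42.50\n" }

my $sel     = IO::Select->new;
my $pricing = 0;
my $next    = 0;

for my $loop (1 .. 5) {             # while (1) in real use
    # Start at most one child when the timer expires.
    if (!$sel->count && time() >= $next) {
        if (open my $fh, '-|') {    # parent gets a read handle
            $sel->add($fh);
        }
        else {                      # child does the slow fetch
            print fetch_price();
            exit;
        }
        $next = time() + 60;
    }
    # Non-blocking check: harvest the price once the child is done.
    for my $fh ($sel->can_read(0)) {
        chomp(my $got = <$fh>);
        $pricing = $got if length $got;
        $sel->remove($fh);
        close $fh;
    }
    # ... rest of the loop keeps running with the last known $pricing
    sleep 1;
}
print "final pricing: $pricing\n";
```

The parent never blocks on the fetch; it just polls the pipe with a zero timeout each time around the loop.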

If you have any thoughts on how best to accomplish this, code examples, pointers, please let me know. Your help is appreciated.

Thank you

Replies are listed 'Best First'.
Re: making a loop script with a remote URL call faster
by stevieb (Canon) on Jan 15, 2022 at 17:35 UTC

    I have a microcontroller on my garage wall that displays information about my Tesla car, with a visible charge level and an audible alarm if the charge is below a certain point so I don't forget to plug the car in.

    The device turns on only when there is motion in the garage. While it is on, it reaches back once a second to a computer in my house to fetch updated data about the car. The computer in the house serves that device the data, but it also fetches that data from Tesla's API. While there is motion (i.e. the microcontroller is asking for updated data), the computer returns whatever data it has available, while repeatedly fetching data from Tesla in case it changes. The fetching from Tesla happens as fast as possible: when one pull is done, another starts immediately. This data is fetched in a separate process from the one that returns existing data to the microcontroller. When the Tesla data updates, the shared variable is updated, and the next response to the microcontroller carries the new data.

    Thanks to the separate process there is no lag or delay, and the response time from the server to the controller is extremely consistent.

    Here is an extremely (!) simplified version that you might be able to use as an example. It uses my IPC::Shareable distribution to create the shared-memory-backed variable used between the two processes, and my Async::Event::Interval for the external async process that fetches the data from the website.

    Feel free to ask any questions. I've put this together rather hastily, so I may not be explaining things very well.

    use strict;
    use warnings;

    use Async::Event::Interval;
    use Data::Dumper;
    use IPC::Shareable;
    use JSON;
    use LWP::UserAgent;

    my $ua  = LWP::UserAgent->new;
    my $url = 'http://tesla:55556';

    # Create a shared scalar string that will hold JSON
    # data that gets updated by the async event below, which
    # runs in a separate, unrelated process

    tie my $fetched_data, 'IPC::Shareable', {
        key     => 'TESLA',
        create  => 1,
        destroy => 1,
    };

    $fetched_data = '';

    # Create an async event that runs in the background. The
    # 0 parameter means it won't run on an interval, we have to
    # manually start it each iteration of the loop. We 'start'
    # it here to send it off on its first run so we have initial
    # data

    my $tesla_event = Async::Event::Interval->new(0, \&update_data);
    $tesla_event->start;

    my $previous_time = time;
    my $previous_tesla_data = {charge => -1};

    while (1) {
        # If the background URL fetch is done, start it again. It's probably wise
        # to time yours to run just prior to your 60 second loop cycle timer
        # expiring. No sense constantly hitting the site every second if you
        # don't need the data that quick!

        $tesla_event->start if $tesla_event->waiting;

        my $current_time = time;

        if ($current_time - $previous_time > 3) {
            my $tesla_data = decode_json($fetched_data);

            if ($tesla_data->{charge} != $previous_tesla_data->{charge}) {
                # Do stuff with the result
                print Dumper $tesla_data;
                $previous_tesla_data = $tesla_data;
            }

            $previous_time = $current_time;
        }
    }

    sub update_data {
        my $server_response = $ua->get($url);

        if ($server_response->is_success) {
            $fetched_data = $server_response->decoded_content;
        }
    }

    Output. The first output is when my car was asleep. The next one was after I sent a wakeup call to it.

    $VAR1 = {
        'fetching' => 0,
        'charge'   => 0,
        'charging' => 0,
        'online'   => 0,
        'error'    => 0,
        'gear'     => 0,
        'garage'   => 0,
        'rainbow'  => 0
    };
    $VAR1 = {
        'fetching' => 0,
        'charge'   => 61,
        'charging' => 0,
        'online'   => 1,
        'error'    => 0,
        'gear'     => 0,
        'garage'   => 1,
        'rainbow'  => 0
    };
Re: making a loop script with a remote URL call faster
by LanX (Saint) on Jan 15, 2022 at 02:35 UTC
    > remote URL lag

    I'm not sure what you mean; obviously you want to poll only every 60 seconds.

    Personally I'd prefer sleep 60; instead of burning the CPU with an infinite loop.

    Your script will wake up in time, and other processes can use the CPU in the meantime.

    Now, if your problem is that Net::Curl takes too long, measure that time and subtract it: sleep 60 - $lastcurl;

    You'll need Time::HiRes for that kind of accuracy.

    use Time::HiRes qw(time sleep);
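    Putting that together, a minimal sketch of the adjusted loop (fetch_price is a made-up stand-in for the Net::Curl request, and the interval is shortened so the demo finishes quickly):

```perl
use strict;
use warnings;
use Time::HiRes qw(time sleep);

# Hypothetical stand-in for the real Net::Curl request (~0.2 s).
sub fetch_price { sleep 0.2; return 42.5 }

my $interval = 1;    # would be 60 in the real script; 1 keeps the demo short
my $pricing;

for my $cycle (1 .. 2) {    # while (1) in real use
    my $started  = time;
    $pricing     = fetch_price();
    my $lastcurl = time - $started;    # how long the fetch took

    # Sleep only for the remainder of the interval, so each cycle
    # takes $interval seconds regardless of how slow the fetch was.
    my $remaining = $interval - $lastcurl;
    sleep $remaining if $remaining > 0;
}
```

    Between polls the process sleeps instead of spinning, so other processes get the CPU.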

    Cheers Rolf
    (addicted to the Perl Programming Language :)
    Wikisyntax for the Monastery

      I need to burn the CPU with an infinite loop. I'm looking to avoid the 0.2 seconds it takes Net::Curl to respond with the pricing from the API it's calling. Every 60 seconds I have a 0.2-second delay. I'm hoping to avoid that by forking off that Net::Curl call, or perhaps making that Net::Curl call part of another script that my infinite loop can call via localhost. Sleep would do the opposite of what I want to accomplish.
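      A rough sketch of that localhost idea using a plain TCP socket; the port number and the static price are arbitrary placeholders, and a real server would keep refreshing the price from Net::Curl in the background instead of serving a constant:

```perl
use strict;
use warnings;
use IO::Socket::INET;

my $port = 55055;   # arbitrary port for this sketch

# Child: a tiny price server. It answers every connection
# immediately from memory, with no remote call in the hot path.
my $pid = fork() // die "fork: $!";
if ($pid == 0) {
    my $srv = IO::Socket::INET->new(
        LocalAddr => '127.0.0.1',
        LocalPort => $port,
        Listen    => 5,
        ReuseAddr => 1,
    ) or die "listen: $!";
    my $price = 42.5;   # would be refreshed from Net::Curl in real use
    while (my $conn = $srv->accept) {
        print {$conn} "$price\n";
        close $conn;
    }
    exit;
}

sleep 1;            # give the server a moment to start

# Parent: the main loop's fast, local fetch.
my $cli = IO::Socket::INET->new(
    PeerAddr => '127.0.0.1',
    PeerPort => $port,
) or die "connect: $!";
chomp(my $pricing = <$cli>);
close $cli;

kill 'TERM', $pid;
waitpid $pid, 0;
print "pricing: $pricing\n";
```

      The main loop's cost per poll is then just a localhost round trip, which is far below 0.2 seconds.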
        I have trouble understanding your code. Your indentation is misleading, most probably because you mixed tabs with whitespace.

        Your infinite loop will execute "continue on with my perl code" many times with old $pricing till it's finally updated again after 60 sec. Is this really what you want? Or should it rather only be executed again if $pricing was updated?

        Anyway

        > I'm looking to avoid the 0.2 second time it takes Net::Curl to respond with the pricing from the API it's calling.

        you should update $time just inside the if-block, before calling Net::Curl.

        That way the interval won't include the lag:

        my $pricing = 0;
        my $time = 0;

        while (1) {
            if (time() >= ($time + 60)) {   # updates the pricing ...
                $time = time();             # start interval
                # ... Net::Curl to remote URL... request here.
                $pricing = [from net::curl];
            }
            # ... continue on with my perl code   # even with old pricing
        }

        Cheers Rolf
        (addicted to the Perl Programming Language :)
        Wikisyntax for the Monastery

        More exact timing can be achieved by properly calculating the interval. This is a rough sketch, taken from memory:

        #!/usr/bin/env perl
        use strict;
        use warnings;

        # We need sub-second precision here :-)
        use Time::HiRes qw(time sleep);

        my $lastrun = 0;

        while(1) {
            # Time consuming stuff here

            my $now = time;

            # Calculate the time we need to sleep. First we calculate the
            # difference between now and the last run. That's how long the
            # last run took. Then, calculate how many seconds remain in the
            # current minute. If the answer is negative, one of two things
            # happened: either it's our first run, or the last run took
            # longer than a minute.
            my $sleeptime = 60 - ($now - $lastrun);
            if($sleeptime > 0) {
                sleep($sleeptime);
            }
            $lastrun = $now;
        }

        If you want the code to run at a specific second in every minute, you could also do that. Again, this is untested and from memory:

        #!/usr/bin/env perl
        use strict;
        use warnings;

        # We need sub-second precision here :-)
        use Time::HiRes qw(time sleep);

        my $activesecond = 42; # Run whenever the seconds are "42"

        while(1) {
            my $now = time;

            # The built-in modulus operator converts to integer,
            # which would introduce jitter of up to nearly a second.
            # So, out with the traditional:
            #   my $cursecond = $now % 60;
            # ...and in with the more manual version:
            my $cursecond = $now - (60.0 * int($now / 60.0));

            if($cursecond != $activesecond) {
                # Need to wait
                my $sleeptime = $activesecond - $cursecond;
                if($sleeptime < 0) {
                    # Handle rollover
                    $sleeptime += 60.0;
                }
                sleep($sleeptime);
            }

            # Time consuming stuff here
        }

        Hope that helps a bit.

        Edit: If, for some strange reason, you need to use International Atomic Time, you need to take the UTC/TAI offset into account. This is currently 37 seconds, but you will need to consult the IERS Bulletin C twice a year to check for leap-second announcements. In essence, all you need to do is add the appropriate offset when assigning $now:

        my $taioffset = 37;
        ...
        my $now = time + $taioffset;

        perl -e 'use Crypt::Digest::SHA256 qw[sha256_hex]; print substr(sha256_hex("the Answer To Life, The Universe And Everything"), 6, 2), "\n";'
Re: making a loop script with a remote URL call faster
by duelafn (Parson) on Jan 15, 2022 at 15:05 UTC

    I'm with LanX; I don't really see how it would make sense to iterate the loop multiple times if the price doesn't change. But if you really want a tight loop, everyone loves threads:

    #!/usr/bin/perl -w
    use strict;
    use warnings;

    use threads;
    use threads::shared;

    my $PRICING :shared = 0;
    my $RUNNING :shared = 1;

    sub price_thread {
        while ($RUNNING) {
            # Local variable to limit lock time.
            # Catch errors in eval
            my $pricing = eval {
                # Net::Curl to remote URL... request here.
                my $val = 1000 * rand(); # [from net::curl]
                # ... more?
                # last line will go to $pricing and $pricing gets undef on exception
                $val
            };

            do { lock($PRICING); $PRICING = $pricing; };

            # Optional pause between requests:
            sleep 60;
        }
    }

    sub main_loop {
        # Local variable to limit lock time and prevent changes mid-computation
        my $pricing = 0;
        while ($RUNNING) {
            do { lock($PRICING); $pricing = $PRICING; };

            # Optional - I don't know what you are doing with pricing, you might want
            # to use 0 or undef to signal a request error (i.e., out-of-date data).
            next if !defined($pricing) or $pricing == 0;

            # Do something with $pricing;
            print "$pricing\n";
            sleep 1; # Work simulation
            # return if QUIT CONDITION;
        }
    }

    my $thr = threads->create(\&price_thread);
    eval { main_loop() };
    $RUNNING = 0;
    $thr->join();

    Good Day,
        Dean

      The following is duelafn's demonstration converted to MCE::Hobo and MCE::Shared; it runs on Perl builds lacking threads support. Locking is handled automatically behind the scenes. I updated the code to run at each interval period.

      Update: Use monotonic clock.

      #!/usr/bin/perl -w
      use strict;
      use warnings;

      use MCE::Hobo;
      use MCE::Shared;
      use Time::HiRes qw(time sleep);

      # Use monotonic clock if available.
      use constant CLOCK_MONOTONIC => eval {
          Time::HiRes::clock_gettime( Time::HiRes::CLOCK_MONOTONIC() );
          1;
      };

      sub _time {
          ( CLOCK_MONOTONIC )
              ? Time::HiRes::clock_gettime( Time::HiRes::CLOCK_MONOTONIC() )
              : Time::HiRes::time();
      }

      my $DEBUG    = 1;
      my $INTERVAL = $DEBUG ? 3.0 : 60.0;

      my $PRICING = MCE::Shared->scalar(0);
      my $RUNNING = MCE::Shared->scalar(1);

      sub price_thread {
          my $next_interval = _time() + $INTERVAL;

          while ($RUNNING->get()) {
              printf("# TIME %.3f\n", _time()) if $DEBUG;

              # Local variable to limit lock time.
              # Catch errors in eval
              my $pricing = eval {
                  # Net::Curl to remote URL... request here.
                  my $val = 1000 * rand(); # [from net::curl]
                  # ... more?
                  # last line will go to $pricing and $pricing gets undef on exception
                  $val
              };

              $PRICING->set($pricing);

              # Pause between requests:
              my $time = _time();
              if ($time > $next_interval) {
                  # Wait till next interval if curl time is greater than $INTERVAL.
                  $next_interval += $INTERVAL while $next_interval < $time;
              }
              sleep $next_interval - $time;
              $next_interval += $INTERVAL;
          }
      }

      sub main_loop {
          # Local variable to limit lock time and prevent changes mid-computation
          my $pricing = 0;
          while ($RUNNING->get()) {
              $pricing = $PRICING->get();

              # Optional - I don't know what you are doing with pricing, you might want
              # to use 0 or undef to signal a request error (i.e., out-of-date data).
              next if !defined($pricing) or $pricing == 0;

              # Do something with $pricing;
              print "$pricing\n";
              sleep 1; # Work simulation
              # return if QUIT CONDITION;
          }
      }

      my $thr = MCE::Hobo->create({ void_context => 1 }, \&price_thread);
      eval { main_loop() };
      $RUNNING->set(0);
      $thr->join();
Re: making a loop script with a remote URL call faster
by tybalt89 (Monsignor) on Jan 15, 2022 at 04:41 UTC
    #!/usr/bin/perl
    use strict; # https://perlmonks.org/?node_id=11140461
    use warnings;

    use Time::HiRes qw( sleep );
    use IO::Select;

    my $interval = 1; # FIXME for testing
    my $pricing = '';
    my $sel = IO::Select->new;
    my $time = time;
    my $loopcount = 0;

    while( 1 ) {
        if( $sel->count ) {
            for my $fh ( $sel->can_read(0) ) {
                if( not sysread $fh, $pricing, 16384, length $pricing ) {
                    $sel->remove( $fh );
                    # FIXME process data
                    print "\nbuffer : $pricing at loop count $loopcount\n";
                    $time = time + $interval;
                }
            }
        }
        elsif( time >= $time ) {
            if( open my $fh, '-|' ) {
                $sel->add( $fh );
                $pricing = '';
            }
            else {
                sleep 0.7; # FIXME fake net::curl delay time
                print "fake data\n";
                exit;
            }
        }
        # hard loop
        print ++$loopcount, "\r";
    }

Node Type: perlquestion [id://11140461]
Front-paged by Corion