Re: Trading compile time for faster runtime?
by cavac (Vicar) on Apr 22, 2022 at 08:51 UTC
I don't know anything about your exact requirements and what your scripts do, but I have a feeling you are doing a lot of data munching. I assume you have tried putting your data in a database like PostgreSQL and implementing the time critical parts in SQL?

Example: For about a year now I have had performance problems with my DNS server, which is written in Perl. They had to do with white/blacklisting of domains: basically, the server had to do about a million string matches for every request, plus a few thousand regexp matches. I moved the whole matching algorithm into PostgreSQL. It isn't all that optimized yet, but it runs in a fraction of the time.
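The actual code isn't reproduced here, so take the following as a minimal, hypothetical sketch of the approach rather than the real thing: both the exact strings and the regexp patterns go into one table, the matching is wrapped in a small SQL function, and the Perl side calls that function once per request via DBI. Table name, column names, function name and connection details are all made up for illustration.

    #!/usr/bin/perl
    use strict;
    use warnings;
    use DBI;

    # Hypothetical schema and function, created once on the PostgreSQL side:
    #
    #   CREATE TABLE dns_filter (
    #       pattern   text    NOT NULL,
    #       is_regexp boolean NOT NULL DEFAULT false,
    #       action    text    NOT NULL   -- 'whitelist' or 'blacklist'
    #   );
    #
    #   CREATE FUNCTION check_domain(p_domain text) RETURNS text AS $$
    #       SELECT action FROM dns_filter
    #       WHERE (NOT is_regexp AND pattern = p_domain)
    #          OR (is_regexp AND p_domain ~ pattern)
    #       LIMIT 1;
    #   $$ LANGUAGE sql STABLE;

    my $dbh = DBI->connect('dbi:Pg:dbname=dnsfilter', 'dnsuser', 'secret',
                           { RaiseError => 1, AutoCommit => 1 });

    # One database round trip per request instead of a million in-process matches.
    my $check = $dbh->prepare('SELECT check_domain(?)');

    sub filter_action {
        my ($domain) = @_;
        $check->execute($domain);
        my ($action) = $check->fetchrow_array();
        return $action // 'allow';
    }

    print filter_action('www.example.com'), "\n";

The point is not this particular schema, but that the hot loop over a million entries moves into the database, where indexes and a heavily optimized executor do the bulk of the work.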
Perl is quite good in general. But when it comes to handling large amounts of data, no "normal" scripting language comes even close to a modern SQL database engine like PostgreSQL. The people working on those projects have spent the last few decades squeezing out every last tenth of a percent of performance. Yeah, there is probably still a way to optimize that function further, using a WITH() clause and/or some special type of index on the table that is somehow suited to regular expression matching.
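Just to illustrate what the WITH() idea could look like (again purely hypothetical, reusing the made-up dns_filter table and the $dbh handle from the sketch above): resolve the cheap exact matches in a CTE first and only fall back to the much more expensive regexp patterns when nothing matched exactly.

    # Hypothetical: exact matches first, regexps only as a fallback.
    # DBD::Pg accepts PostgreSQL-style $1 placeholders.
    my $sql = <<~'SQL';
        WITH exact AS (
            SELECT action FROM dns_filter
            WHERE NOT is_regexp AND pattern = $1
        )
        SELECT action FROM exact
        UNION ALL
        SELECT action FROM dns_filter
        WHERE is_regexp
          AND $1 ~ pattern
          AND NOT EXISTS (SELECT 1 FROM exact)
        LIMIT 1
        SQL
    my $sth = $dbh->prepare($sql);
    $sth->execute('www.example.com');
    my ($action) = $sth->fetchrow_array();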
perl -e 'use Crypt::Digest::SHA256 qw[sha256_hex]; print substr(sha256_hex("the Answer To Life, The Universe And Everything"), 6, 2), "\n";'
In Section: Meditations