PerlMonks |
Re: Trading compile time for faster runtime?
by swl (Parson) on Apr 21, 2022 at 07:22 UTC ( [id://11143151] )
Others have already noted profiling as the way forward; Devel::NYTProf is the go-to tool for that. The best optimisation is an algorithm that avoids doing most of the work in the first place, but sometimes you just need faster implementations.

If you are on a recent-ish perl, the refaliasing feature can be used to avoid repeatedly dereferencing array items inside loops (see also Data::Alias). It is only really worthwhile when very large numbers of derefs can be avoided, and it is still experimental, if that is a concern.

Data::Recursive has some fast methods to merge data structures (the difference is a few percent, so it is more useful for larger data sets or frequent merges). It depends on some complex modules, so installation does not always go smoothly, and hence it is not safe to assume it is available on end-user machines. That means fallback code is needed, which adds maintenance load.

I have not tested these next few, but they might be useful. Devel::GoFaster speeds up some common ops and is in the spirit of your question. There are also a few modules from PEVANS, the contents of which might make their way into future perl releases: Faster::Maths speeds up some mathematical processing, and List::Keywords provides faster versions of some List::Util subs (but currently fails tests on Windows).

Version 5.36 will also have the option to disable taint checking, which will apparently speed up a lot of processing. I haven't seen any benchmarking yet, but the current development release includes it as an option: https://metacpan.org/release/SHAY/perl-5.35.11/view/pod/perldelta.pod#Configuration-and-Compilation.
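To illustrate the refaliasing point above, here is a minimal sketch of aliasing an inner array inside a loop so the body works on `@row` directly instead of dereferencing `@$row_ref` on every access. The data and variable names are my own invented example, not from the original post:

```perl
use strict;
use warnings;
use feature 'refaliasing';                     # perl 5.22+
no warnings 'experimental::refaliasing';       # feature is still experimental

# Invented example data: an array of array refs.
my @rows = ([1, 2, 3], [4, 5, 6]);

my $sum = 0;
for my $row_ref (@rows) {
    # Alias @row to the referenced array: later accesses avoid
    # repeated @$row_ref dereferences in the loop body.
    \my @row = $row_ref;
    $sum += $_ for @row;
}
print "$sum\n";    # 21
```

The win only shows up when a hot loop performs a large number of element accesses through the reference, as the post notes.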
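The fallback-code burden mentioned for Data::Recursive can be handled with the usual guarded-require idiom: try to load the fast XS module, and fall back to a minimal pure-Perl merge when it is not installed. The `merge_hashes` helper and its shallow fallback are hypothetical sketches of mine, and the `Data::Recursive::merge` call is an assumption based on that module's documented interface:

```perl
use strict;
use warnings;

# Guarded load: true if the fast XS module is available on this machine.
my $have_xs = eval { require Data::Recursive; 1 };

sub merge_hashes {
    my ($dst, $src) = @_;
    if ($have_xs) {
        Data::Recursive::merge($dst, $src);    # assumed API; check its docs
    }
    else {
        # Portable shallow fallback: last-one-wins on top-level keys only.
        $dst->{$_} = $src->{$_} for keys %$src;
    }
    return $dst;
}

my $merged = merge_hashes({ a => 1 }, { b => 2 });
print join(',', map { "$_=$merged->{$_}" } sort keys %$merged), "\n";
```

This keeps the fast path optional while the fallback carries the maintenance cost the post warns about, so it is worth keeping the fallback as small as possible.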
In Section: Meditations