in reply to declaring lexical variables in shortest scope: performance?
Don't diddle code to make it faster -- find a better algorithm
-- Kernighan and Plauger, The Elements of Programming Style
Don’t Optimize Code -- Benchmark It
-- from Ten Essential Development Practices by Damian Conway
It's important to be realistic: most people don't care about program performance most of the time
-- The Computer Language Benchmarks Game
The real problem is that programmers have spent far too much time worrying about efficiency in the wrong places and at the wrong times; premature optimization is the root of all evil (or at least most of it) in programming
-- Donald Knuth
Don’t pessimize prematurely. All other things being equal, notably code complexity and readability, certain efficient design patterns and coding idioms should just flow naturally from your fingertips and are no harder to write than the pessimized alternatives. This is not premature optimization; it is avoiding gratuitous pessimization.
-- Andrei Alexandrescu and Herb Sutter, C++ Coding Standards
Rule 1: Bottlenecks occur in surprising places, so don't try to second-guess and put in a speed hack until you have proven that's where the bottleneck is.
Rule 2: Measure. Don't tune for speed until you've measured, and even then don't unless one part of the code overwhelms the rest.
Rule 3: Fancy algorithms are slow when n is small, and n is usually small. Fancy algorithms have big constants. Until you know that n is frequently going to be big, don't get fancy. (Even if n does get big, use Rule 2 first.)
Rule 4: Fancy algorithms are buggier than simple ones, and they're much harder to implement. Use simple algorithms as well as simple data structures.
Rule 5: Data dominates. If you've chosen the right data structures and organized things well, the algorithms will almost always be self-evident. Data structures, not algorithms, are central to programming.
Note: Pike's rules 1 and 2 restate the famous "Premature optimization is the root of all evil" maxim quoted above (Knuth attributed it to Tony Hoare). Ken Thompson rephrased Pike's rules 3 and 4 as "When in doubt, use brute force". Rules 3 and 4 are instances of KISS. Rule 5 was stated by Fred Brooks in The Mythical Man-Month and is often shortened to "write stupid code that uses smart objects" (see also data structures vs code).
-- Rob Pike
Without good design, good algorithms, and complete understanding of the program's operation, your carefully optimized code will amount to one of mankind's least fruitful creations -- a fast slow program.
-- Michael Abrash
A couple of related general guidelines from On Coding Standards and Code Reviews:
- Correctness, simplicity and clarity come first. Avoid unnecessary cleverness. If you must rely on cleverness, encapsulate and comment it.
- Don't optimize prematurely. Benchmark before you optimize. Comment why you are optimizing.
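As a minimal illustration of "benchmark before you optimize", the core Benchmark module can compare candidate implementations directly. A sketch only -- the string-building task and sub names are my own, not from the guidelines above:

```perl
use strict;
use warnings;
use Benchmark qw(cmpthese);

my @words = map { "item$_" } 1 .. 1_000;

# Two ways to build one big string from a list
sub via_append { my $s = ''; $s .= $_ for @words; return $s }
sub via_join   { return join '', @words }

# cmpthese runs each sub the given number of times and prints a
# table of rates (iterations/sec) and relative speed differences
cmpthese( 5_000, {
    append => \&via_append,
    join   => \&via_join,
} );
```

Whichever wins, the point is that the comparison is measured rather than guessed.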
On Interfaces and APIs cautions that library interfaces are very difficult to change once they become widely used: a fundamentally inefficient interface cannot easily be fixed later by optimizing. So it is not "premature optimization" to consider efficiency when designing public library interfaces.
See Also
- Data Structures vs Code : Some quotes from famous programmers
- The 10**21 Problem (Part I) : complex problem where the running time was reduced from 50 million years to one year via a long series of optimizations
- The 10**21 Problem (Part 2)
- The 10**21 Problem (Part 3)
- The 10**21 Problem (Part 4)
- Re^2: More Betterer Game of Life : reduced running time from 1635 seconds to 17 seconds; tweaking the code via a long series of micro-optimizations reduced it from 1635 secs to 450 secs (3.6 times faster), while finding a better algorithm reduced it from 450 secs to 17 secs (26.5 times faster)
- Re^2: What's Perl good at or better than Python (Game of Life, LLiL, Rosetta and Performance References) : The C++ version of the simple GoL algorithm was 450/36 = 12.5 times faster than the Perl version; for the complex algorithm C++ was 17/0.08 = 212.5 times faster; C++ memory use was 2.8 times lower than Perl for the simple algorithm, 10.1 times lower for the complex one
- Re^3: Advice on learning Perl and graphics (Static vs Dynamic Typing and JIT) : static vs dynamic typing (static typing usually results in compiled code that executes faster); Java vs C++
These experiences convinced me of "don't assume, measure" and especially of "find a better algorithm"!
Perl Performance References
- perlperf : Perl Performance and Optimization Techniques
- perlperf - PROFILING TOOLS
- perlfaq: How do I profile my Perl programs?
- Devel::NYTProf : Fantastic Perl code profiler, can be used as a line profiler or block/subroutine profiler or both
- Performance Profiling with Devel::NYTProf : talk by Tim Bunce (youtube)
- Re: Windows Perl with sqlite3 - kcott warns of many failures when installing Devel::NYTProf on MSWin
- Benchmark : Perl core benchmarking module
- Memoize : Make functions faster by trading space for time
- MCE by marioroy - Many-Core Engine for Perl providing parallel processing capabilities
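The Memoize entry above trades space for time by caching each result keyed on its arguments. A minimal sketch -- the Fibonacci example is mine, chosen because naive double recursion repeats the same sub-calls exponentially many times:

```perl
use strict;
use warnings;
use Memoize;

# Naive doubly recursive Fibonacci: exponential time without caching
sub fib {
    my $n = shift;
    return $n if $n < 2;
    return fib($n - 1) + fib($n - 2);
}

# memoize() replaces fib in the symbol table with a caching wrapper,
# so even the internal recursive calls hit the cache
memoize('fib');

print fib(40), "\n";    # prints 102334155 almost instantly once memoized
```

Without the memoize() call, fib(40) makes hundreds of millions of recursive calls; with it, each fib(n) is computed exactly once.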
High Performance and Parallel Computing References
- List of performance analysis tools (wikipedia)
- Parallel computing (wikipedia)
- Parallel programming model (wikipedia)
- Concurrency (computer science) (wikipedia)
- OpenMP (wikipedia)
- OpenMP Little Book
- Threading Building Blocks (wikipedia) - aka Intel TBB
- List of C++ multi-threading libraries (wikipedia)
- VTune - Intel VTune, part of the Intel oneAPI Base Toolkit
- Intel oneAPI - a unified API used across different computing accelerator (coprocessor) architectures, including GPUs, AI accelerators and field-programmable gate arrays
- Bank Queuing Model (Chunking) by marioroy (MCE chunking "attribute" came in a flash while standing in line at the bank)
- Bank Queuing Model (Chunk ID) (The chunk_id value increments by one ... data is read serially one chunk at a time)
- MCE Sandbox 2023-08 by marioroy - deliberately designed for necroposts (added in replies as he learns new things)
- [OT] The Long List is Long resurrected by marioroy (2024) (mentions Taichi Lang, a DSL embedded in Python)
- High Performance Parallel Runtimes: Design and Implementation (Book)
- Processing a File with OpenMP by Jim Cownie (author of High Performance Parallel Runtimes book) -- see comments there from marioroy :)
- Grep RE2 C++ OpenMP demonstration by marioroy using RE2
- google RE2 Regex Library - uses Abseil, a lot faster than std::regex
- parallel runtimes (github)
- High Performance and Parallel Computing (Illinois Tech)
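Since so many of the entries above involve marioroy's MCE, here is a minimal parallel-map sketch. Note it requires the CPAN MCE module (not core), and the squaring task is just my illustration:

```perl
use strict;
use warnings;
use MCE::Map;

# mce_map splits the input list into chunks, squares them across
# multiple worker processes, and returns results in original order
my @squares = mce_map { $_ * $_ } 1 .. 10;

print "@squares\n";    # prints: 1 4 9 16 25 36 49 64 81 100

MCE::Map->finish;      # shut down the worker pool
</imports>
```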
Extra Performance/Optimization References Added Later
Bitwise Operations:
- Most Significant Set Bit by coldr3ality (2024) - good replies from hippo, hv, davido ...
- Re: Most Significant Set Bit (Bit Twiddling References)
Benchmark:
- Fastest way to lookup a point in a set : example using Benchmark to compare different ways of looking up a point in a set
- Re: Confused by RegEx count by choroba (2024) : example Benchmarking transliteration tr/// vs substitution s///
- Re^3: looping efficiency (Benchmark Example) : example using Benchmark (replies mention Devel::NYTProf and perl -MO=Terse using B::Terse)
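choroba's tr/// vs s/// comparison above lends itself to a short Benchmark sketch. The test string and sub names here are my own illustration, not his code:

```perl
use strict;
use warnings;
use Benchmark qw(cmpthese);

my $str = 'the quick brown fox jumps over the lazy dog' x 100;

# Count 'o' characters two ways: tr/// counts without substituting,
# while s///g performs (no-op) substitutions and returns the count
sub count_tr { my $s = shift; return $s =~ tr/o//; }
sub count_s  { my $s = shift; return ($s =~ s/o/o/g) || 0; }

cmpthese( 10_000, {
    tr_count => sub { count_tr($str) },
    s_count  => sub { count_s($str)  },
} );
```

tr/// typically wins handily for plain character counting because it never enters the regex engine.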
Other:
- Re^4: Fastest way to lookup a point in a set : BrowserUk's use case where Perl hashes were an order of magnitude faster than an SQLite memory-based database
- Tie::Hash::DBD - CPAN module to tie a plain hash to a database table, by Tux
- Re: need help with judy array searching (Judy Array References) : references on using Judy arrays in Perl
- Memory efficient way to deal with really large arrays? by sectokia (2020, similar to my Fastest way to lookup a point in a set)
- Small Hash a Gateway to Large Hash? by lsherwood (2014, is building a small hash based on results of accessing a large hash likely to help speed up my script?)
- write hash to disk after memory limit by hailholyghost (2015, 8 GB physical memory, script uses 17 GB, wants to free memory to OS to minimize swapping)
- Big cache by Liebranca (2022, seeks general advice on speeding up local databases on local computer ... to avoid reading a million small files on startup)
- Re: Data structures in Perl. A C programmer's perspective. (vector vs linked list performance) (C++ vector vs linked list - the penalty for cache misses tends to dominate CPU usage time with modern CPUs)
- Re^3: [OT:] Is this Curriculum right? (more on C++ vector vs linked list)
- How to do popcount (aka Hamming weight) in Perl (popcount References) by me (2017)
- Re^6: Hash versus chain of elsifs by me (2021)
- Re: Perl 5 Optimizing Compiler by chromatic (2012)
- making a loop script with a remote URL call faster by brandonm78 (2022) (plus necropost reply by marioroy)
- Trading compile time for faster runtime? by melez (2022)
- Re: Trading compile time for faster runtime? by dave_the_m (2022)
- Optimization tips by sroux (2022) (asks for optimization tips to speed up some awful old code without having to rewrite it)
- Need to speed up many regex substitutions and somehow make them a here-doc list by xnous (2022) (performance of bulk regex substitutions in bash/sed vs perl)
- Perl regex speed by malaigo (2022) (uni prof asks: why isn't latest Apple Silicon implementation faster than Intel one on Perl regex? (assuming it is arm64 and not emulated x86_64))
- Windows Perl with sqlite3 by miner7777 (2023) (root cause was a corrupted Database file)
- does a hash element as a loop parameter cause significant slowdown? by misterperl (2023) (ChatGPT makes a dubious performance suggestion to speed up for my $c (1..$n1))
- Optimizing with Caching vs. Parallelizing (MCE::Map) by nickt (2020) (with MCE-related replies from marioroy)
- MCE Sandbox 2023-08 by marioroy (2023) (writing fast code using: Perl MCE + Inline::C, Math::Prime::Util, C/C++ libprimesieve, Codon)
- Perl slower than java by Christian888 (2010)
- Does "preallocating hash improve performance"? Or "using a hash slice"? by vr (2017)
- HPC Computing Question by doubleqq (2014)
- Anyway to Have Strong-Like Typing by jmmitc06 (2014)
- what's faster than .= by xafwodahs (2003) - with responses from TimToady
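The popcount (aka Hamming weight) entry above can be illustrated with two common pure-Perl idioms. A sketch only -- the sub names are mine:

```perl
use strict;
use warnings;

# Idiom 1: stringify to binary, then count the '1' characters with tr///
sub popcount_tr {
    my $bits = sprintf '%b', shift;
    return $bits =~ tr/1//;
}

# Idiom 2: pack to a 32-bit big-endian value, then use unpack's
# checksum feature (%32b*) to sum the bits
sub popcount_unpack {
    return unpack '%32b*', pack 'N', shift;
}

print popcount_tr(255), "\n";        # prints 8
print popcount_unpack(255), "\n";    # prints 8
```

Both work for non-negative integers; the unpack version as written is limited to 32-bit values by the 'N' template.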
From BrowserUk (2012-2015):
- Bidirectional lookup algorithm? (Updated: further info.)
- [OT] The statistics of hashing.
- Heap structure for lookup?
- Re^6: Heap structure for lookup?
- How to find out, why my perl code is slow. by anonymonk (2018)
- Does perl read the entire pl file into memory? by harangzsolt33 (2019)
Two spookily similar nodes posted late 2021 (both requesting XS C code, both by new monks who won't show us their code):
- Can someone please write a *working* JSON module by cnd
- Anyone with XS experience willing to create a high performance data type for Perl? by beautyfulman (unfortunately this node was later vandalised by beautyfulman who left in a huff)
Some old classics:
- How can I speed up my Perl program? (SO 2008)
- Nicholas Clark classic talk: When perl is not quite fast enough (has the PDF of his talk slides vanished from the web?)
- Compile perl for performance by learnedbyerror (2018) - replies could not find Nicholas Clark talk slides either :-(
- Optimize Perl by anon (2004)
- Optimizing existing Perl code (in practise) by JaWi (2002)
- STOP Trading Memory for Speed by PetaMem (2002)
- Wasting time thinking about wasted time by brian_d_foy (2004)
Other:
- can we call c++ in perl to process PDL arrays? by toothedsword (2019)
- Performant Path Iteration by learnedbyerror (2018)
Some external references:
- Why not Translate Perl to C? by MJD
- booking.com blog about making perlguts faster by Eric Herman
Mathematical:
- Computing pi to multiple precision by ambrus (2012) - see also much later reply by marioroy (2022)
Multi-threading:
- Anything by marioroy
- Rosetta Code: Long List is Long (long thread: check especially the many replies from marioroy)
- Re: Rosetta Test: Long List is Long - Abseil
- Re: Rosetta Code: Long List is Long - JudySL code by marioroy (2023) - JudySL array implementation
- Solving the Long List is Long challenge, finally? by marioroy (2023) - Perl versions in replies use Sort::Packed, Tie::Hash::DBD, DB_File, IPC::MMA, Crypt::xxHash, Tokyo Cabinet, Kyoto Cabinet, Tkrzw sharding, Tkrzw llil4tkh, Tkrzw llil4tkh2
Compiler switches/flags:
- Re: Windows precompiled binaries or DIY compile by NERDVANA (2023) - gcc compiler flags for building fast Perl, e.g. CFLAGS="-O2 -march=native -mtune=native"
RPerl References
- About using rperl
- How to compile RPerl successfully?
- Perl 5 Optimizing Compiler
- Trading compile time for faster runtime?
Updated: Added the Donald Knuth premature optimization quote, the Alexandrescu/Sutter premature pessimization quote, and Rob Pike's rules. Mentioned the efficiency of interfaces. Added more references.