
Benchmarking Your Code

by turnstep (Parson)
on Apr 24, 2000 at 22:05 UTC ( id://8745 = perltutorial )


What is benchmarking?

Benchmarking is a way of measuring something - in this case, how fast your code runs. It is particularly useful when you are comparing two or more ways to do the same thing and want to see which one is faster. You are really measuring which way is more efficient for Perl - the less work Perl has to do, the faster your code runs.

Why benchmark?

Small differences add up. A slight change in a small section of your code can make a big difference, especially if that code has to perform a lot of work. For example, the choice between two ways of sorting a collection of words may not matter much for 100 words, but with 100,000 words the small differences start to matter. Efficiency is also a matter of style: making your code as efficient as possible is a good goal to aim for.

How to benchmark your code

Benchmarking is usually done with the (surprise) Benchmark module. This module is part of the standard Perl distribution, so it is very likely already installed on your system. If not, grab it from CPAN.

Benchmarking is not as simple as subtracting the result of one time call from another - time is only accurate to one second, which is not a very good measure. The Benchmark module uses the time function as well as the times function, which reports CPU time in fractions of a second and allows for a much finer measurement.
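The difference between the two functions can be seen by timing a loop by hand (a minimal sketch of what Benchmark does for you - the busy-work loop is just a placeholder):

```perl
my $wall0 = time;                  # whole seconds only
my ($user0, $sys0) = times;        # fractional CPU seconds

my $sum = 0;
$sum += $_ for 1 .. 1_000_000;     # some busy work to measure

my $wall1 = time;
my ($user1, $sys1) = times;

printf "elapsed: %d sec, user CPU: %.2f sec, system CPU: %.2f sec\n",
    $wall1 - $wall0, $user1 - $user0, $sys1 - $sys0;
```

Notice that the elapsed time can only ever be a whole number of seconds, while the user and system CPU times carry decimal places.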

Here is a quick overview of the Benchmark module:

To use it, just type:

use Benchmark;
at the top of your code. Benchmark has three simple routines for you to use: timeit, timethis, and timethese. Each one needs to know what code to run (shown as the string CODE in the examples below), as well as how many times to loop through the code ($count in the examples below).

For a simple measurement of one piece of code, just use timeit. You will also need the timestr routine, which changes the times that Benchmark uses to a more useful string:

$x = timeit($count, 'CODE');
## CODE is run $count times
## $x becomes a Benchmark object which contains the result
print "Result from $count loops: ";
print timestr($x), "\n";

This can be a bit awkward, so Benchmark also has the timethis routine, which does the same thing as timeit, but also outputs the results. No timestr is needed this time:

$x = timethis($count, 'CODE');
## or even just:
timethis($count, 'CODE');

The last routine is timethese, which is the most useful, as it allows you to compare 2 or more chunks of code at the same time. The syntax is as follows:

@x = timethese($count, { 'one','CODE1', 'two','CODE2' });

It returns an array, but this is often unused. Using the 'fat comma' (=>) in place of a plain comma is also recommended, to make the list easier to read:

timethese($count, {
    'one'   => 'CODE1',
    'two'   => 'CODE2',
    'pizza' => 'CODE_X',
    ## etc....
});

It will run each chunk of code in the list, and report each result with its label before it. See the example below for some sample output.

A final routine to know is timediff which simply computes the difference between two Benchmark objects:

$x = timeit($count, 'CODE1');
$y = timeit($count, 'CODE2');
$mydiff = timediff($x, $y);
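Printed with timestr, the difference reads just like the other results. A small complete sketch (the two snippets being compared are hypothetical placeholders of mine):

```perl
use Benchmark;

my $count = 10_000;

## Two hypothetical snippets to compare:
my $x = timeit($count, sub { my @t = (1 .. 100) });
my $y = timeit($count, sub { my @t = map { $_ * 2 } 1 .. 100 });

## timediff($y, $x) subtracts the first set of times from the second:
my $diff = timediff($y, $x);
print "The second snippet took ", timestr($diff), " more\n";
```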

The Benchmark module has a few other features, but these are beyond this tutorial - if you are interested, check it out yourself: the module has embedded POD documentation (try perldoc Benchmark).

Benchmark Example

For a simple example of benchmarking, let's compare two different ways of sorting a list of words. One way will use the cmp operator, and one will use the <=> operator. Which one is faster for a simple list of words? We will use benchmarking to find out. For this example, we will create a random list of 1000 words with 6 letters each. Then we'll sort the list both ways and compare the results. Here is our complete code:

#!/usr/bin/perl
use Benchmark;

$count = shift || die "Need a count!\n";

## Create a dummy list of 1000 random 6 letter words
srand();
for (1..1000) {
    push(@words, chr(rand(26)+65) . chr(rand(26)+65) . chr(rand(26)+65) .
                 chr(rand(26)+65) . chr(rand(26)+65) . chr(rand(26)+65));
}

## Method number one - a numeric sort
sub One {
    @temp = sort {$a <=> $b} @words;
}

## Method number two - an alphabetic sort
sub Two {
    @temp = sort {$a cmp $b} @words;
}

## We'll test each one, with simple labels
timethese($count, {'Method One' => '&One', 'Method Two' => '&Two'});

exit;

Notice that we store the results of our sort into an unused variable, @temp, so that @words itself is never sorted, as we need to use it again.

Here is the result of running it with a count of 10:

Benchmark: timing 10 iterations of Method One, Method Two...
Method One:  0 secs ( 0.33 usr  0.00 sys =  0.33 cpu)
            (warning: too few iterations for a reliable count)
Method Two:  1 secs ( 0.48 usr  0.01 sys =  0.49 cpu)

The results give us four numbers for each piece of code. Notice that we also got a warning for the first one. The warning is only a guideline, but it is usually right - we need a higher count. Try to get the number of cpu seconds (the last number) up to at least 3 seconds for one of the measurements. In our example, let's try boosting the count to 150:

Benchmark: timing 150 iterations of Method One, Method Two...
Method One:  5 secs ( 4.89 usr  0.01 sys =  4.90 cpu)
Method Two:  8 secs ( 7.12 usr  0.01 sys =  7.13 cpu)

Much better! No warning, and some real times are generated. Let's look at each of the numbers. The first number is the elapsed time, or how many seconds the loops took, as measured with the time function. This is not a very reliable number: as you can see, with 10 loops, one of the results was 0 seconds. Generally, you can ignore this one, except as a rough guideline. In particular, a reading of '0' or '1' is almost useless. Aim for an elapsed time of at least 5 seconds for the best results.

The next three numbers come from the times function, which returns much more detailed information. The first two are the user and system time. Don't be surprised if the system time is often "0" or very low. These are not as important as the final value, the cpu time, which is what we are really interested in. This is the one you should use to make your comparisons. Try to get at least one of the numbers over 5 seconds - the higher the number, the more accurate your comparison will be. In this case, we can see that Method One, the <=> operator, is faster at 4.90 cpu seconds compared to the 7.13 seconds that cmp took.

Tips and Tricks

Here are some things to think about and watch out for:

  • Make sure your code works before you start looping it! This is often overlooked when you are in a hurry. Test it once with some results and then benchmark it.
  • Add the count to the command line. Something as simple as:
    $count = shift || die "Need a count!\n";
    keeps you from editing the code every time to try a new count value.
  • Beware of changes in your repeated loop. Don't change any variables that are used the next time the loop is run. In other words, make sure that when you benchmark a chunk of code, the first loop does exactly the same thing as the last.
  • Move everything out of the loop that you can. You want to only test what is important. Move things like opening file handles and initializing values out of the loop. You don't want to reopen your file 5000 times! Do it once, outside of the loop.
  • Minimize the test. Similar to the above, try to compare as few things as possible. A subroutine that slices, sorts, replaces, and does ten other things will not tell you how fast each of them is, only how they work together. Change one thing at a time when comparing two chunks of code.
  • Put the benchmark code at the top of your code. It's temporary, easy to find, and easy to remove once you are done testing.
  • Use subroutines to test your code. It keeps the Benchmark routines uncluttered, and it is easy to make changes to your subroutines. If the code is really simple, of course, you can just put the whole code into the argument for the Benchmark routine.
  • Start with a low count, and work your way up. It is often hard to tell exactly how long the code will take - so err on the low side. Start with 10, and then move up to 100, then a 1000, then perhaps 5000. You'll get a feel for it as you go. Aim for at least 5 seconds of elapsed time, and at least 3 seconds of cpu time. Complicated code and slow machines may take over a minute to run 100 loops, while very simple code and very fast machines may require counts in the millions!
  • Swap the order of your tests around, to make sure that one is not affecting the other inadvertently. The results should be the same.
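Several of these tips fit into one small sketch (the two minimum-finding methods are hypothetical examples of mine): the setup work is done once outside the timed code, the count comes from the command line, and each alternative lives in its own subroutine:

```perl
#!/usr/bin/perl
use Benchmark;

my $count = shift || die "Need a count!\n";

## Setup is done once, outside the timed code:
my @data = map { int rand 1000 } 1 .. 5000;

## Two hypothetical ways to find the smallest element:
sub by_sort {
    return (sort { $a <=> $b } @data)[0];
}

sub by_scan {
    my $min = $data[0];
    for (@data) { $min = $_ if $_ < $min }
    return $min;
}

timethese($count, { 'sort' => \&by_sort, 'scan' => \&by_scan });
```

Because each method is a named subroutine, swapping the order of the tests or changing one method is a one-line edit.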

Replies are listed 'Best First'.
A few more tips
by gryng (Hermit) on Jul 25, 2000 at 20:58 UTC
    Oh I like this tutorial. Here are two more things that can be useful:

    First, turnstep talks a lot about trying to get the number of iterations to equal about 5 seconds of cpu time. I agree with this value, however Benchmark will do this tuning for you! Simply provide -5 as the number of iterations and Benchmark will run the loop for at least 5 cpu seconds. For your final benchmark you may want to bring that number up to -10 or -20, just to make sure.
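Applied to the tutorial's sorting example, the negative count looks like this (a sketch; the coderef style is my choice, and the random word list mirrors the one in the tutorial):

```perl
use Benchmark;

## 1000 random 6-letter words, as in the tutorial's example:
my @words = map { join '', map { chr(65 + int rand 26) } 1 .. 6 } 1 .. 1000;

## A negative count means "run each piece for at least that many CPU seconds":
timethese(-5, {
    'Method One' => sub { my @t = sort { $a <=> $b } @words },
    'Method Two' => sub { my @t = sort { $a cmp $b } @words },
});
```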

    Also, differences of less than 5% should probably be ignored -- not because they are not happening, but rather because they may disappear on a different computer setup. And more importantly, you probably don't need to trade a 5% speed up for other considerations, such as code readability.

    Here is sample output using a negative value:

    Benchmark: running Method One, Method Two, each for at least 10 CPU seconds...
    Method One: 11 wallclock secs (10.10 usr +  0.01 sys = 10.11 CPU) @ 281.21/s (n=2843)
    Method Two: 11 wallclock secs (10.51 usr +  0.02 sys = 10.53 CPU) @ 78.82/s (n=830)

    Now, note that with this sort of run, you don't want to look at the CPU seconds used, rather you want to look at the last two numbers, the rate and the iteration count. These will tell you which is faster.

    Thanks again turnstep. Seeya,

RE: Benchmarking Your Code
by btrott (Parson) on Apr 24, 2000 at 22:36 UTC
    This is very nice. I thought I'd add a few tips of my own:
    • Caching is the enemy of benchmarking. Make sure that the code you're benchmarking doesn't do any caching of the results. This is particularly important if you're using code that other people have written (e.g., modules from CPAN) as part of the code that you're benchmarking.
    • Benchmarked code uses package global variables. This is extremely important to note, because if you use lexicals, your benchmark results will mean nothing, because you'll most likely be using undefined values, or values that you're not trying to test. So this goes along with turnstep's recommendation to make sure that your code works before you benchmark it: make sure that it works *while* you're benchmarking it. Most of the time, I use a loop count of 1 the first time I run a benchmark, then I print out the values within the code reference (or string) to make sure I've got everything right.
    • Don't intermix eval'd strings with code references, because, according to the Benchmark manpage, code references will show slower execution times than the equivalent eval'd strings.
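    The point about lexicals deserves a sketch (the word list and labels are mine). A string is eval'd inside the Benchmark package, so it cannot see variables you declared with my; a code reference is a closure, so it can:

```perl
use Benchmark;

my @words = qw(pear apple fig);    # a lexical (my) variable

timethese(10_000, {
    ## The string is eval'd inside Benchmark, so this @words is an
    ## empty package global: it sorts nothing and looks deceptively fast.
    'string'  => '@sorted = sort @words;',

    ## The code reference is a closure, so it really sees the lexical @words:
    'coderef' => sub { my @sorted = sort @words },
});
```

    This is exactly why a trial run with a count of 1, printing the values from inside the tested code, is worth the trouble.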
Re: Benchmarking Your Code
by bikeNomad (Priest) on Jul 18, 2001 at 19:04 UTC
    You should also mention cmpthese, which will do the math to compare two or more strategies (why reach for your calculator when you have a computer?). I always use it instead of timethese because it produces a superset of the results.
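    cmpthese is exported on request rather than by default. A sketch using the tutorial's two sorting methods (the random word list mirrors the tutorial's example):

```perl
use Benchmark qw(cmpthese);

my @words = map { join '', map { chr(65 + int rand 26) } 1 .. 6 } 1 .. 1000;

## cmpthese runs the same tests as timethese, then prints a table of
## each method's rate and the percentage difference between them:
cmpthese(-5, {
    'Method One' => sub { my @t = sort { $a <=> $b } @words },
    'Method Two' => sub { my @t = sort { $a cmp $b } @words },
});
```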
Re: Benchmarking Your Code
by Sartak (Hermit) on Jun 10, 2006 at 06:43 UTC

    Interestingly, you used the <=> operator to compare words. When a word ([a-zA-Z]+) is numified, it becomes a numeric 0. So the results you're seeing are probably from {0 <=> 0} being far faster than {'mountain' cmp 'mountable'}.

    Perhaps a better test would be the use of <=> and cmp on numbers?
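    Sartak's point is easy to check directly (a tiny sketch; the sample words are mine). Note that under use warnings the numeric comparison would also complain that its arguments "aren't numeric":

```perl
## Any purely alphabetic word numifies to 0, so <=> compares 0 to 0
## and does no real sorting work:
my $numeric = 'pear' <=> 'apple';    # 0 - the two words look "equal"
my $alpha   = 'pear' cmp 'apple';    # 1 - 'pear' really sorts after 'apple'
```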
