PerlMonks  

Re: Forking Benchmarks?

by Rhys (Pilgrim)
on Sep 04, 2004 at 10:37 UTC ( [id://388491] )


in reply to Forking Benchmarks?

I think it's worth mentioning in your POD that it's wisest to test code both with and without forking enabled, if your platform supports it. Enabling forking is a better test of each piece of code, but disabling forking is a better test of how they interact in a real script.
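A minimal sketch of that advice, using the core Benchmark module. The Benchmark::Forking enable/disable class methods are assumptions taken from that CPAN module's documented synopsis, and this sketch simply skips the forked run if the module isn't installed:

```perl
use strict;
use warnings;
use Benchmark qw( cmpthese );

# Two hypothetical implementations of the same task to compare.
my %tests = (
    map_version  => sub { my @x = map { $_ * 2 } 1 .. 1000; },
    loop_version => sub { my @x; push @x, $_ * 2 for 1 .. 1000; },
);

# Unforked run: all benchmarks share one process, as in a real script.
cmpthese( 10_000, \%tests );

# Forked run, if Benchmark::Forking (CPAN) is available; each benchmark
# then times in its own child process, isolated from the others.
if ( eval { require Benchmark::Forking; 1 } ) {
    Benchmark::Forking->enable;
    cmpthese( 10_000, \%tests );
}
```

Comparing the two result sets side by side is what surfaces the "yellow flag" differences discussed below.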

One might even go so far as to say that differences between the two sets of results should raise yellow flags. (Maybe expected, maybe not, but they are certainly where algorithm analysis should be focused.) The average scripter may need this short summary of your original point spelled out in the POD. ;-)

BTW, I love the title for this thread. Almost, but not quite, vulgar. :-D

Replies are listed 'Best First'.
Re^2: Forking Benchmarks?
by Aristotle (Chancellor) on Sep 04, 2004 at 15:50 UTC

    I don't see either point. You don't want your benchmarks to interact, and normally have to make sure they don't. Forking saves you that trouble. That also means yellow flags should be raised only if you wanted to use the non-forking benchmark as the baseline — but why? Sure, if you find differences and didn't expect any, it's worth investigating the source of the interaction — if it's not in your own benchmarked code, modules you pull in might have an issue you weren't aware of. But beyond that, provided with a means to entirely isolate benchmarks, I just don't see any reason to go to the trouble to make them "clean".

    Makeshifts last the longest.
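    The unwanted interaction Aristotle describes can be made concrete. In this hypothetical sketch (core Benchmark module only), two benchmarked snippets share a memoization cache, so whichever runs later in the same process gets an unfair head start; forking each benchmark into its own child process would erase that coupling:

```perl
use strict;
use warnings;
use Benchmark qw( timethese );

# A memoized Fibonacci whose cache is process-global: hidden shared state.
my %cache;
sub fib {
    my $n = shift;
    return $n if $n < 2;
    return $cache{$n} //= fib( $n - 1 ) + fib( $n - 2 );
}

# In a single process, 'cold' pays to refill the cache on every iteration,
# while 'warm' silently reuses whatever earlier benchmarks left behind.
timethese( 1000, {
    cold => sub { %cache = (); fib(20) },
    warm => sub { fib(20) },
} );
```

    Run unforked, 'warm' looks far faster than it would in a fresh process; run forked, each child starts with an empty cache and the coupling disappears.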

      In the example given, there are two possible scenarios.

      1) The intention is to test how each version of a piece of code handles a specific problem. In this case, you're exactly right.

      2) The intention is to test how each piece of code performs in a larger program. In this case, both the performance of the individual segments and the interactions among those segments in the real-world case are valuable, so you want both the 'isolated' and 'non-isolated' cases.

      In any event, I like the module. In case 1, it allows several very similar segments of code to be tested at once without interaction, regardless of whether the coder knows they would otherwise interact. (Another good habit, like 'use strict'.)
