Re^3: Producing a list of offsets efficiently

by tilly (Archbishop)
on May 29, 2005 at 18:32 UTC


in reply to Re^2: Producing a list of offsets efficiently
in thread Producing a list of offsets efficiently

The resizes and copies have an amortized constant cost per array element added. Put another way, pushing one element at a time averages out to an O(1) operation: because each reallocation reserves extra headroom proportional to the array's length, the total copying done over n pushes is bounded by a constant times n.

Re^4: Producing a list of offsets efficiently
by BrowserUk (Patriarch) on May 29, 2005 at 21:38 UTC

    Thoughts?

    #! perl -slw
    use strict;
    use Benchmark::Timer;

    my $T = new Benchmark::Timer;

    for( 1 .. 1000 ) {
        my @small = (1) x 100;
        my @large = (1) x 130900;

        $T->start( "small: add 100 by 1" );
        push @small, 1 for 1 .. 100;
        $T->stop( "small: add 100 by 1" );

        $T->start( "large: add 100 by 1" );
        push @large, 1 for 1 .. 100;
        $T->stop( "large: add 100 by 1" );

        $T->start( "small: add 100 by 100" );
        push @small, 1 .. 100;
        $T->stop( "small: add 100 by 100" );

        $T->start( "large: add 100 by 100" );
        push @large, 1 .. 100;
        $T->stop( "large: add 100 by 100" );
    }

    $T->report;

    __END__
    P:\test>461552.pl
    1000 trials of small: add 100 by 1 ( 94.947ms total), 94us/trial
    1000 trials of large: add 100 by 1 (112.226ms total), 112us/trial
    1000 trials of small: add 100 by 100 ( 16.009ms total), 16us/trial
    1000 trials of large: add 100 by 100 ( 15.977ms total), 15us/trial

    Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
    Lingua non convalesco, consenesco et abolesco. -- Rule 1 has a caveat! -- Who broke the cabal?
    "Science is about questioning the status quo. Questioning authority".
    The "good enough" maybe good enough for the now, and perfection maybe unobtainable, but that should not preclude us from striving for perfection, when time, circumstance or desire allow.
      I fail to see how you think this refutes what I said.

      When I said "amortized average O(1)" I wasn't saying anything about the size of the constant, merely that it is a constant.

      Certainly I wouldn't expect entering the push op to be free. Entering a for loop isn't free either. Doing both 100 times is more expensive than doing them once. Reallocation is also not free. Amortized constant is not free.

      I don't know whether you're reliably measuring the overhead due to reallocation. I guarantee you that the large array undergoes at most one reallocation. (When Perl reallocates, it reserves a buffer with an extra 1/5 of the length you need as headroom, and 1/5 of 130900 is large enough that we're NOT doing that twice.) From the figures I suspect that you are seeing that one reallocation, and I suspect that if you reversed the order of pushing by 1 and pushing by 100 you'd see an interesting change. (I can't run this since I don't have Benchmark::Timer, and I'm not going to bother installing it on a machine that is being booted from Knoppix.)
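      Taking the 1/5 rule at face value for your figures: the first push past 130,900 elements triggers a reallocation to roughly 130,901 + 130,901/5, about 157,000 slots, so the remaining 199 of the 200 elements you add per trial fit without any further reallocation.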

      Also note that when I say "amortized constant" I mean that if you build a large array by pushing one element at a time, the sum of the costs of reallocating comes out to (order of magnitude) a constant times the length of the array. However, the distribution of reallocation costs is very uneven: as time goes by, the odds of a given push triggering a reallocation go down, but when one happens it costs a lot more. Therefore a test like yours is the wrong way to test my assertion - you want to compare how long it takes to build up arrays of different lengths and see whether the times follow a linear pattern.
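      Something along these lines would do it. This is a minimal sketch using the core Time::HiRes rather than Benchmark::Timer; the exact numbers will vary by perl build, but if push is amortized O(1) the per-element cost should stay roughly flat as n grows:

          #! perl -slw
          use strict;
          use Time::HiRes qw( time );

          # Build arrays of increasing size by single pushes and report
          # the cost per element; amortized O(1) predicts a flat cost.
          for my $n ( map $_ * 100_000, 1 .. 8 ) {
              my $start = time;
              my @a;
              push @a, $_ for 1 .. $n;
              my $elapsed = time() - $start;
              printf "%8d elements: %.4fs total, %.3f us/element\n",
                  $n, $elapsed, $elapsed / $n * 1e6;
          }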

        I tried to phrase my reply so it could be read with as little controversy as possible. I really wanted to hear your thoughts and wisdom.

        I wasn't attempting to refute what you said, only to show that O(1) doesn't tell the whole story. "Amortized average O(1)" probably does, but only if you disregard the localised impact of the reallocations, which is what I was looking to avoid. The example I am working with is a 25 MB file producing 400,000 offsets. Building a 400,000-element array with individual pushes is not insignificant, and I'm hoping to support larger files. My current thinking is that building a smaller intermediate array and then adding it to the main array, or building an array of arrays, would avoid the larger reallocations and copies.

        The latter may also provide some additional benefits. By building an AoA where each element of the top-level array is an array of (say) 1000 offsets relative to a base position rather than absolute positions, then when the string is modified by insertions or deletions, adjusting the offsets within one sub-array plus the base positions of the others would be quicker than adjusting all the absolute positions. Storing small relative offsets would also require less RAM than storing absolute positions.
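        A minimal sketch of that adjustment (the chunk layout and sub name are hypothetical, and it assumes the chunks are sorted and non-overlapping): each chunk holds a base position plus offsets relative to it, so an insertion of $len bytes at $pos only rewrites the offsets of the one chunk containing $pos and bumps the bases of the chunks after it.

            # Each chunk is [ $base, \@relative_offsets ].
            sub adjust_for_insert {
                my( $chunks, $pos, $len ) = @_;
                for my $chunk ( @$chunks ) {
                    my( $base, $offsets ) = @$chunk;
                    if ( $base > $pos ) {
                        $chunk->[0] += $len;    # lies entirely after the insert
                    }
                    else {
                        # May straddle the insertion point: shift only the
                        # offsets at or after it. Chunks entirely before
                        # $pos are scanned but left unchanged.
                        $_ >= $pos - $base and $_ += $len for @$offsets;
                    }
                }
            }

        With 1000-offset chunks over 400,000 offsets, an insertion then rewrites at most one chunk's offsets plus roughly 400 bases, instead of up to 400,000 absolute positions.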

        To extract the greatest benefit from this, knowing at what points the reallocations will occur, and choosing the size of the nested arrays to avoid stepping over the next boundary, is important. I thought that the reallocations might go in powers of two--hence my choice of 130900 + 200, stepping over the 2**17 boundary--but this does not appear to be the case.
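        One way to find the boundaries empirically, rather than guessing at powers of two, is to watch the array's own footprint as it grows. A sketch, assuming Devel::Size is installed and that its size() reflects the allocated slot count rather than just the fill pointer:

            #! perl -slw
            use strict;
            use Devel::Size qw( size );    # CPAN; size() excludes the elements

            # The AV's own footprint only changes when its slot array is
            # reallocated, so each jump marks a growth boundary.
            my @a;
            my $last = size( \@a );
            for my $i ( 1 .. 200_000 ) {
                push @a, 0;
                my $now = size( \@a );
                if ( $now != $last ) {
                    printf "grew at %7d elements: %d -> %d bytes\n",
                        $i, $last, $now;
                    $last = $now;
                }
            }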

        Could you explain the "1/5th" thing for me in a little more detail? Or point me at the appropriate place to read about it?
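        (The growth policy itself is implemented in av_extend() in av.c in the perl source, for anyone chasing the details.) For the concrete 400,000-offset case there is also a way to sidestep the boundary question entirely: preallocate by assigning to $#array, then empty the array and fill it. A sketch; compute_offsets() is a hypothetical stand-in for the real offset scan, and it assumes that truncating via $#offsets = -1 keeps the allocated buffer:

            my @offsets;
            $#offsets = 400_000 - 1;    # one allocation for 400,000 slots
            $#offsets = -1;             # empty it; the buffer should remain
            push @offsets, $_ for compute_offsets();    # no growth reallocs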


