Re^5: Producing a list of offsets efficiently

by tilly (Archbishop)
on May 30, 2005 at 05:20 UTC ( [id://461646] )


in reply to Re^4: Producing a list of offsets efficiently
in thread Producing a list of offsets efficiently

I fail to see how you think this refutes what I said.

When I said "amortized average O(1)", I wasn't saying anything about the size of the constant, merely that it is a constant.

Certainly I wouldn't expect entering the push op to be free. Entering a for loop isn't free either. Doing both 100 times is more expensive than doing them once. Reallocation is also not free. Amortized constant is not free.

I don't know whether you're reliably measuring the overhead due to reallocating. I guarantee you that the large array has at most one reallocation. (When you reallocate, you reserve a buffer with about 1/5 of the length you need as extra space. 1/5 of 130900 is over 26,000 spare slots, which is more than enough that we're NOT doing that twice.) From the figures I suspect that you are, and I suspect that if you reversed pushing by 1 and pushing by 100 you'd see an interesting change. (I can't run this since I don't have Benchmark::Timer, and I'm not going to bother installing it on a machine that is booted from Knoppix.)

Also note that when I say "amortized constant" I mean that if you build a large array by pushing one element at a time, the sum of the costs of reallocating comes out to (order of magnitude) a constant times the length of the array. However, the distribution of reallocation costs is very uneven: as time goes by, the odds of any given push triggering a reallocation go down, but when one does happen it costs a lot more. Therefore a test like yours is the wrong way to test my assertion - you want to compare how long it takes to build up arrays of different lengths and see if the times follow a linear pattern.
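
For example, something along these lines would do it (an untested sketch using the core Time::HiRes module; the absolute numbers will vary by machine and Perl build, but the ns/push column should stay roughly flat if the amortized cost really is constant):

    #!/usr/bin/perl
    use strict;
    use warnings;
    use Time::HiRes qw(gettimeofday tv_interval);

    # Build arrays of increasing size by pushing one element at a time
    # and check whether the build time grows roughly linearly with size.
    for my $size (100_000, 200_000, 400_000, 800_000) {
        my $start = [gettimeofday];
        my @array;
        push @array, $_ for 1 .. $size;
        my $elapsed = tv_interval($start);
        printf "%8d elements: %.4f sec (%6.1f ns/push)\n",
            $size, $elapsed, 1e9 * $elapsed / $size;
    }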


Re^6: Producing a list of offsets efficiently
by BrowserUk (Patriarch) on May 30, 2005 at 06:45 UTC

    I tried to phrase my reply to leave as little room for controversy as possible. I really wanted to hear your thoughts and wisdom.

    I wasn't attempting to refute what you said, only to show that O(1) doesn't tell the whole story. "Amortized average O(1)" probably does, but only if you disregard the localised impact of reallocations, which is what I was looking to avoid. The example I am working with is a 25 MB file producing 400,000 offsets. Building a 400,000-element array with individual pushes is not insignificant, and I'm hoping to support larger files. My current thinking is that building a smaller intermediate array and then adding it to the main array, or building an array of arrays, would avoid the larger reallocations/copies.

    The latter may also provide some additional benefits. If I build an AoA where each element of the top-level array is an array of (say) 1000 offsets relative to a base rather than absolute positions, then when the string is modified by insertions or deletions I only have to adjust the offsets within the one affected sub-array plus the absolute base positions of the others, which would be quicker than adjusting all the absolute positions. It would also require less RAM to store offsets rather than absolute positions.

    To extract the greatest benefit from this, it is important to know at what points the reallocations will occur and to choose the size of the nested arrays to avoid stepping over the next boundary. I thought that the reallocations might go in powers of two--hence my choice of 130900 + 200, stepping over the 2**17 boundary--but this does not appear to be the case.

    Could you explain the "1/5th" thing for me in a little more detail? Or point me at the appropriate place to read about it?


    Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
    Lingua non convalesco, consenesco et abolesco. -- Rule 1 has a caveat! -- Who broke the cabal?
    "Science is about questioning the status quo. Questioning authority".
    The "good enough" maybe good enough for the now, and perfection maybe unobtainable, but that should not preclude us from striving for perfection, when time, circumstance or desire allow.
      To see the source of the 1/5 thing, look at the source of av.c. The newmax variable tracks Perl's buffering decisions quite precisely. That doesn't explain the consequences, though. To understand the consequences, consider what happens just after you've had to reallocate. At that point all elements have been reallocated at least once, 5/6 of them at least twice, 25/36 of them at least three times, and so on. The average number of reallocations per element is therefore 1 + 5/6 + (5/6)**2 + ..., which sums to 1/(1 - 5/6) = 6. (Right before a reallocation that average drops to 5.)
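
      Here's a quick simulation of that, as a sketch: it assumes the growth rule is essentially "allocate what's needed plus 1/5 extra", which is how I read av.c, rather than reproducing av.c exactly.

          #!/usr/bin/perl
          use strict;
          use warnings;

          # Simulate pushing $n elements one at a time under a "grow by
          # about 1/5" reallocation rule and count how many times each
          # element ends up being copied.  The average should land in
          # the 5-6 range described above.
          my $n      = 400_000;
          my $max    = 0;        # currently allocated slots
          my $copies = 0;        # total element copies due to reallocation

          for my $fill (1 .. $n) {
              if ($fill > $max) {
                  $copies += $fill - 1;           # existing elements get copied
                  $max = $fill + int($fill / 5);  # reserve ~1/5 extra space
              }
          }
          printf "average copies per element: %.2f\n", $copies / $n;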

      The point about amortized constant is critical. The convergence to that behaviour is quite fast. For instance if you break the 400,000 array into building 400 arrays of 1000 elements that you then push together, you'll find the same amortized constant coming into play both in building 1000 element arrays and in building the 400,000 element array. So instead of eliminating that overhead, you're now paying twice the overhead! (By separating arrays you've kept Perl from deciding to allocate big enough chunks.)
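
      For instance, a comparison along these lines would show it (a sketch using the core Benchmark module; the exact ratio will depend on your Perl build, but it should reveal whether the chunked version really pays the overhead twice):

          #!/usr/bin/perl
          use strict;
          use warnings;
          use Benchmark qw(cmpthese);

          my $total = 400_000;
          my $chunk = 1_000;

          cmpthese( -3, {
              # Push all 400,000 elements straight onto one array.
              direct => sub {
                  my @big;
                  push @big, $_ for 1 .. $total;
                  return scalar @big;
              },
              # Build 400 arrays of 1,000 elements, then push each onto
              # the big array, so both levels pay the reallocation cost.
              chunked => sub {
                  my @big;
                  for my $base ( 0 .. $total / $chunk - 1 ) {
                      my @small;
                      push @small, $base * $chunk + $_ for 1 .. $chunk;
                      push @big, @small;
                  }
                  return scalar @big;
              },
          } );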

      If you want to reduce that overhead, you can change the constant fairly easily by pre-extending the array at specified intervals. Assign by index and keep track of the size of the array. Any time you're about to pass the maximum size of the array, assign undef to a spot that is twice as far. That'll change the constant from 5 to 2, and the average number of reallocations per element from the 5-6 range down to the 2-3 range. (At the cost of a constant amount of logic per iteration. I don't know which constant will be bigger - this might be a loss.)
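
      A sketch of that pre-extension trick (illustrative only - the data and variable names are made up):

          #!/usr/bin/perl
          use strict;
          use warnings;

          # Stand-in data; in the real code these would be the offsets
          # found while scanning the file.
          my @found = map { $_ * 10 } 1 .. 100_000;

          my @offsets;
          my $next = 0;               # index of the next slot to fill
          my $max  = 8;               # how far we have pre-extended so far
          $offsets[$max] = undef;     # reserve the initial slots

          for my $offset (@found) {
              if ( $next >= $max ) {
                  $max *= 2;
                  $offsets[$max] = undef;       # pre-extend to twice the current size
              }
              $offsets[ $next++ ] = $offset;    # assign by index instead of push
          }
          $#offsets = $next - 1;      # trim the unused tail when finished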

      Overall I'm going to guess, though, that with 400,000 offsets your biggest source of overhead is that you have 400,000 Perl scalars. If performance is a problem it might save effort to have the data packed into a long string. You might even use C to build said string.

        Okay. Thanks for that.

        assign undef to a spot that is twice as far ....

        Is this better/preferable to assigning to $#array?

        For instance if you break the 400,000 array into building 400 arrays of 1000 elements that you then push together, you'll find the same amortized constant coming into play both in building 1000 element arrays and in building the 400,000 element array. So instead of eliminating that overhead, you're now paying twice the overhead!

        That's where the AoA idea came up. Effectively making my index a two-level affair and reducing the reallocations by only building (and probably preallocating) 400 arrays of 1000 elements rather than 1 of 400,000. The zeroth element of each of the 400 arrays would be an absolute value, but the other 999 would be relative to that base. Hence adjustments required to insert or delete affect (up to) 999 elements in the affected subarray + (up to) 400 absolute base values, rather than 400,000 absolute values each time.
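
        Roughly what I have in mind is sketched below (illustrative code, not lifted from my indexer; the sub names are made up):

            #!/usr/bin/perl
            use strict;
            use warnings;

            # Each chunk holds an absolute base position in element 0; the
            # remaining elements are offsets relative to that base.
            my @index;

            sub absolute_position {
                my ( $chunk, $slot ) = @_;
                return $index[$chunk][0] + $index[$chunk][$slot];
            }

            # After inserting $delta bytes at absolute position $pos, only the
            # relative offsets inside the one affected chunk and the bases of
            # the later chunks actually get adjusted.
            sub adjust_for_insert {
                my ( $pos, $delta ) = @_;
                for my $chunk ( 0 .. $#index ) {
                    my $base = $index[$chunk][0];
                    if ( $pos <= $base ) {
                        $index[$chunk][0] += $delta;    # whole chunk shifts
                    }
                    else {
                        for my $slot ( 1 .. $#{ $index[$chunk] } ) {
                            $index[$chunk][$slot] += $delta
                                if $base + $index[$chunk][$slot] >= $pos;
                        }
                    }
                }
            }

            # Tiny demo: two chunks of three offsets each.
            @index = ( [ 0, 0, 10, 20 ], [ 100, 0, 10, 20 ] );
            adjust_for_insert( 15, 5 );  # insert 5 bytes at absolute position 15
            print absolute_position( 0, 3 ), "\n";   # 25  - the offset at 20 moved
            print absolute_position( 1, 1 ), "\n";   # 105 - the later base moved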

        If performance is a problem it might save effort to have the data packed into a long string.

        And that's the final piece of the puzzle. Packing the relative values into strings reduces the number of scalars by three orders of magnitude. For most purposes, offsets (line lengths) can be packed into 8 bits, reducing memory consumption further. By wrapping an exception eval around the packing and looking for "Character in 'C' format wrapped in pack" errors, I can detect if a line goes over 255 chars, with the penalty of re-indexing to use 'n' if that happens. Ditto 'n' -> 'N'.
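
        Something like this sketch for the packing side (the names are made up; it relies on making the pack warning fatal inside the eval, and I haven't checked whether 'n' gives an analogous wrap warning, so that fallback checks the values directly):

            #!/usr/bin/perl
            use strict;
            use warnings;

            # Pack one chunk's relative offsets as unsigned bytes ('C'); if any
            # offset is too big for a byte, fall back to 16-bit 'n' or 32-bit 'N'.
            sub pack_offsets {
                my @offsets = @_;

                my $packed = eval {
                    use warnings FATAL => 'pack';   # make the wrap warning die
                    pack 'C*', @offsets;
                };
                return ( 'C*', $packed ) if defined $packed;
                die $@ unless $@ =~ /wrapped in pack/;   # re-throw anything else

                # Unverified whether 'n' warns the same way, so check explicitly.
                return ( 'N*', pack 'N*', @offsets )
                    if grep { $_ > 0xFFFF } @offsets;
                return ( 'n*', pack 'n*', @offsets );
            }

            my ( $fmt, $str ) = pack_offsets( 3, 200, 1000 );
            print "packed with '$fmt': ", length($str), " bytes\n";   # 'n*': 6 bytes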

        Moving to Inline::C or XS is an option if needed.


