PerlMonks  

Re^2: Faster creation of Arrays in XS?

by wollmers (Scribe)
on Jun 22, 2015 at 09:36 UTC ( [id://1131449] )


in reply to Re: Faster creation of Arrays in XS?
in thread Faster creation of Arrays in XS?

BrowserUK: unpack in your code above also creates a list, which seems to hit the same bottleneck; the benchmarks are in the same range.

I also tried List::Util::pairs(), which benchmarks a little faster. Maybe I can copy and inline that part of its C code.
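
Something like this, as a rough sketch (using the same flat x,y list as in pack.t below):

    use List::Util qw(pairs);

    my @diffs = map { $_, $_ } (0 .. 49);      # flat x,y list

    # pairs() groups the flat list into [x, y] pairs in a single XS call,
    # instead of looping over unpack/substr at the Perl level
    for my $pair ( pairs @diffs ) {
        my ( $x, $y ) = @$pair;
        # ... process one pair ...
    }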

What I will maybe do now is provide different formats: AoA[2][L], AoA[L][2], and two bitstrings (match-index). Bitstrings should be very fast, but not so convenient to process. Perl 5 does not have the functions lsb (index of the least significant bit) and msb, which Perl 6 has.
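
A rough Perl 5 emulation of those two (assuming a non-zero integer bitmask held in an IV) could look like this:

    # sketch: emulate Perl 6's lsb/msb for a non-zero integer bitmask in an IV
    sub lsb {                              # index of the least significant set bit
        my ($bits) = @_;
        my $i = 0;
        $i++ until ( $bits >> $i ) & 1;
        return $i;
    }

    sub msb {                              # index of the most significant set bit
        my ($bits) = @_;
        my $i = 0;
        $i++ while $bits >> ( $i + 1 );
        return $i;
    }

    print lsb( 0b101000 ), "\n";   # 3
    print msb( 0b101000 ), "\n";   # 5

These are only per-bit loops at the Perl level, though, so they would not be fast.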

Replies are listed 'Best First'.
Re^3: Faster creation of Arrays in XS?
by BrowserUk (Patriarch) on Jun 22, 2015 at 11:02 UTC
    Unpack in your above code also creates a list,

    Only a list of 2 elements? That's why I showed using substr to nibble the packed array in pairs.


    With the rise and rise of 'Social' network sites: 'Computers are making people easier to use everyday'
    Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
    "Science is about questioning the status quo. Questioning authority".
    In the absence of evidence, opinion is indistinguishable from prejudice.
    I'm with torvalds on this Agile (and TDD) debunked I told'em LLVM was the way to go. But did they listen!

      Yes, 2 elements 50 times.

#!perl
use 5.006;
# file: pack.t
use strict;
use warnings;
use Benchmark qw(:all);
use Data::Dumper;
use List::Util qw(pairs);

my @diffs = map { $_, $_; } (0..49);
#print '@diffs: ', Dumper(\@diffs), "\n";
my $packed0 = pack('V*', @diffs);
my $a = [];
#print 'len: ', length($packed0), "\n";

my $packed = $packed0;
while (length($packed)) {
    push @$a, [unpack('VV', substr($packed, 0, 8, ''))];
}
#print Dumper($a);

timethese( 50_000, {
    'unpack while' => sub {
        my $packed = $packed0;
        while (length($packed)) {
            my ($x, $y) = unpack('VV', substr($packed, 0, 8, ''));
        }
    },
    'unpack while single' => sub {
        my $packed = $packed0;
        while (length($packed)) {
            my $x = unpack('V', substr($packed, 0, 4, ''));
        }
    },
    'unpack while push' => sub {
        my $packed = $packed0;
        $a = [];
        while (length($packed)) {
            push @$a, [unpack('VV', substr($packed, 0, 8, ''))];
        }
    },
    'unpack for' => sub {
        for (my $i = 0; $i < length($packed0) - 1; $i += 8) {
            my ($x, $y) = unpack('VV', substr($packed0, $i, 8));
        }
    },
    'unpack for push' => sub {
        $a = [];
        for (my $i = 0; $i < length($packed0) - 1; $i += 8) {
            push @$a, [unpack('VV', substr($packed0, $i, 8))];
        }
    },
});

########
$ perl pack.t
Benchmark: timing 50000 iterations of unpack for, unpack for push, unpack while, unpack while push, unpack while single...
         unpack for:  1 wallclock secs ( 1.01 usr + 0.00 sys = 1.01 CPU) @ 49504.95/s (n=50000)
    unpack for push:  2 wallclock secs ( 1.84 usr + 0.00 sys = 1.84 CPU) @ 27173.91/s (n=50000)
       unpack while:  1 wallclock secs ( 0.84 usr + 0.00 sys = 0.84 CPU) @ 59523.81/s (n=50000)
  unpack while push:  2 wallclock secs ( 1.75 usr + 0.00 sys = 1.75 CPU) @ 28571.43/s (n=50000)
unpack while single:  1 wallclock secs ( 1.28 usr + 0.00 sys = 1.28 CPU) @ 39062.50/s (n=50000)

      With bitmaps it would be an array of 2 scalars (2 x 64-bit IVs, 53 bits used in the original test case).
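
      As a sketch of what I mean (made-up match pairs, just to show the representation):

          # one IV bitmask per sequence: bit $i set if position $i matched
          my @matches = ( [ 0, 0 ], [ 2, 3 ], [ 4, 4 ] );   # hypothetical (x, y) match pairs

          my ( $xbits, $ybits ) = ( 0, 0 );
          for my $pair ( @matches ) {
              $xbits |= 1 << $pair->[0];
              $ybits |= 1 << $pair->[1];
          }

          printf "%b %b\n", $xbits, $ybits;                 # 10101 11001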

        Yes, 2 elements 50 times.

        Yes, but it is still the fastest of all the methods you've benchmarked:

                 unpack for:  1 wallclock secs ( 1.01 usr + 0.00 sys = 1.01 CPU) @ 49504.95/s (n=50000)
            unpack for push:  2 wallclock secs ( 1.84 usr + 0.00 sys = 1.84 CPU) @ 27173.91/s (n=50000)
               unpack while:  1 wallclock secs ( 0.84 usr + 0.00 sys = 0.84 CPU) @ 59523.81/s (n=50000)   <<<< This is fastest
          unpack while push:  2 wallclock secs ( 1.75 usr + 0.00 sys = 1.75 CPU) @ 28571.43/s (n=50000)
        unpack while single:  1 wallclock secs ( 1.28 usr + 0.00 sys = 1.28 CPU) @ 39062.50/s (n=50000)

        If you used cmpthese() instead of timethese(), it would sort the tests for you and the best result would be obvious.
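
        For instance, a cut-down sketch of your pack.t with just two of the variants:

            use strict;
            use warnings;
            use Benchmark qw(cmpthese);

            my $packed0 = pack 'V*', map { $_, $_ } 0 .. 49;

            # cmpthese() prints a rate table sorted slowest-to-fastest,
            # so the winner is always the bottom row
            cmpthese( 50_000, {
                'unpack while' => sub {
                    my $packed = $packed0;
                    while ( length $packed ) {
                        my ( $x, $y ) = unpack 'VV', substr( $packed, 0, 8, '' );
                    }
                },
                'unpack for' => sub {
                    for ( my $i = 0; $i < length( $packed0 ) - 1; $i += 8 ) {
                        my ( $x, $y ) = unpack 'VV', substr( $packed0, $i, 8 );
                    }
                },
            } );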

        But the things you've benchmarked against make no sense. Why would you unpack two at a time only to push to an array? You might just as well have cut out the middle man and built the array to start with.

        Or if you really insist on building an array from the packed string, let perl do it for you:  my @array = unpack 'V*', $packed;
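
        And if it really has to end up as an AoA, group that flat list in a single pass rather than re-unpacking pair by pair (a sketch):

            # unpack everything once, then group the flat list into pairs
            my @flat = unpack 'V*', $packed;
            my @AoA;
            push @AoA, [ splice @flat, 0, 2 ] while @flat;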

        What you should be comparing is the time taken to construct and return an AoA and then access the elements of (use) that AoA, versus the time taken to construct and return the packed string and then access the elements of (use) that packed string.

        Throwing the cost of building another array into the timing makes no sense at all.

        With bitmaps it would be an array of 2 scalars (2 x 64-bit IVs, 53 bits used in the original test case).

        I seriously doubt it is cheaper to pack the information into a bitstring at the C level and then unpack it again at the Perl level, than it is to build a packed array of integers at the C level and then unpack them at the Perl level.

        I know from experience that accessing individual bits in Perl -- whether using per-bit calls to vec; or compound boolean expressions: ( $bits & (1 << $pos) ) >> $pos -- is far slower than unpacking integers from a packed array.
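
        To spell out those two per-bit idioms (a sketch; the bit patterns are arbitrary):

            my $bits = pack 'b*', '01101000';              # bitstring with bits 1, 2 and 4 set
            my $mask = 0b0010110;                          # integer bitmask with bits 1, 2 and 4 set
            my $pos  = 2;

            my $bit1 = vec( $bits, $pos, 1 );              # per-bit call to vec
            my $bit2 = ( $mask & ( 1 << $pos ) ) >> $pos;  # compound boolean expression

            print "$bit1 $bit2\n";                         # 1 1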

        And it would take a full end-to-end (perl->C->perl) benchmark of both methods to convince me otherwise.

        But it's your code. Good luck.



        This is what I mean by an end-to-end benchmark:

#! perl -slw
use strict;
use Config;
use Inline C => Config => BUILD_NOISY => 1;
use Inline C => <<'END_C', NAME => 'diffBench', CLEAN_AFTER_BUILD => 0;

SV *diffAoA( U32 n ) {
    AV *av = newAV();
    U32 i;
    for( i = 0; i < n/2; ++i ) {
        AV *av2 = newAV();
        av_push( av2, newSViv( i*2 ) );
        av_push( av2, newSViv( i*2+1 ) );
        av_push( av, (SV*)av2 );
    }
    return newRV_noinc( (SV*)av );
}

SV *diffPacked( U32 n ) {
    U32 *diffs = malloc( sizeof( U32 ) * n );
    SV *packed;
    U32 i;
    for( i = 0; i < n; ++i ) {
        diffs[ i ] = i;
    }
    packed = newSVpv( (char *)diffs, sizeof( U32 ) * n );
    free( diffs );
    return packed;
}

SV *diff2dString( U32 n ) {
    SV *diffs = newSVpv( "", 0 );
    U32 i;
    for( i = 0; i < n/2; ++i ) {
        sv_catpvf( diffs, "%u:%u ", i*2, i*2+1 );
    }
    return diffs;
}

void diffList( U32 n ) {
    inline_stack_vars;
    U32 i;
    inline_stack_reset;
    for( i = 0; i < n; ++i ) {
        inline_stack_push( sv_2mortal( newSViv( i ) ) );
    }
    inline_stack_done;
    inline_stack_return( n );
    return;
}
END_C

use Data::Dump qw[ pp ];
use Benchmark qw[ cmpthese ];

our $N //= 10;

cmpthese -1, {
    AoA => q[
        my $AoA = diffAoA( $N );
        # pp $AoA;
        for my $pair ( @{ $AoA } ) {
            my( $x, $y ) = @{ $pair };
        }
    ],
    packed => q[
        my $packed = diffPacked( $N );
        # pp $packed;
        while( length( $packed ) ) {
            my( $x, $y ) = unpack 'VV', substr( $packed, 0, 8, '' );
        }
    ],
    twoDel => q[
        my $string2d = diff2dString( $N );
        # pp $string2d;
        for my $pair ( split ' ', $string2d ) {
            my( $x, $y ) = split ':', $pair;
        }
    ],
    list => q[
        my @array = diffList( $N );
        # pp \@array;
        while( @array ) {
            my( $x, $y ) = ( shift @array, shift @array );
        }
    ],
};

__END__
C:\test>diffBench.pl -N=10
           Rate twoDel    AoA packed   list
twoDel  66743/s     --   -41%   -52%   -62%
AoA    113285/s    70%     --   -19%   -36%
packed 139015/s   108%    23%     --   -21%
list   175732/s   163%    55%    26%     --

C:\test>diffBench.pl -N=100
          Rate twoDel   AoA packed  list
twoDel  7440/s     --  -57%   -59%  -66%
AoA    17343/s   133%    --    -4%  -21%
packed 18033/s   142%    4%     --  -17%
list   21849/s   194%   26%    21%    --

C:\test>diffBench.pl -N=1000
         Rate twoDel   AoA packed  list
twoDel  704/s     --  -58%   -64%  -67%
AoA    1678/s   139%    --   -15%  -22%
packed 1965/s   179%   17%     --   -8%
list    2143/s   205%   28%     9%    --

C:\test>diffBench.pl -N=10000
         Rate twoDel   AoA packed  list
twoDel 67.0/s     --  -61%   -67%  -68%
AoA     173/s   158%    --   -16%  -18%
packed  205/s   205%   19%     --   -3%
list    212/s   216%   23%     3%    --

C:\test>diffBench.pl -N=100000
         Rate twoDel   AoA packed  list
twoDel 6.14/s     --  -63%   -69%  -70%
AoA    16.4/s   167%    --   -18%  -21%
packed 20.1/s   227%   22%     --   -3%
list   20.8/s   238%   26%     4%    --

C:\test>diffBench.pl -N=1000000
          Rate twoDel    AoA packed   list
twoDel 0.121/s     --   -93%   -94%   -94%
AoA     1.64/s  1251%     --   -17%   -22%
packed  1.97/s  1523%    20%     --    -7%
list    2.11/s  1636%    28%     7%     --

        It surprised me how badly the two-delimiters idea worked out, and how fast simply returning a list was.


