http://qs321.pair.com?node_id=1214671


in reply to Abusing Map

You could do:

@b = map{ $a[ $_-1 ] + $a[ $_ ] } 1 .. $#a;

For small arrays, it doesn't cost too much.

Update: corrected bounds as pointed out by AnomalousMonk
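As a quick sanity check (toy values of my own, not from the thread), the map version produces one pairwise sum per adjacent pair, so @b ends up one element shorter than @a:

```perl
#!/usr/bin/env perl

use strict;
use warnings;

my @a = ( 1, 2, 3, 4 );

# Each element of @b is the sum of two adjacent elements of @a,
# so @b is one element shorter than @a.
my @b = map { $a[ $_-1 ] + $a[ $_ ] } 1 .. $#a;

print "@b\n";    # 3 5 7
```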

But post-fix for is probably better:

$b[ $_-1 ] = $a[ $_-1 ] + $a[ $_ ] for 1 .. $#a;
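On the same toy data, the postfix-for version fills an identical @b without building an intermediate list the way map does:

```perl
#!/usr/bin/env perl

use strict;
use warnings;

my @a = ( 1, 2, 3, 4 );

# Assign each pairwise sum straight into @b; unlike map, no temporary
# result list is built and then copied.
my @b;
$b[ $_-1 ] = $a[ $_-1 ] + $a[ $_ ] for 1 .. $#a;

print "@b\n";    # 3 5 7
```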

With the rise and rise of 'Social' network sites: 'Computers are making people easier to use everyday'
Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
"Science is about questioning the status quo. Questioning authority". The enemy of (IT) success is complexity.
In the absence of evidence, opinion is indistinguishable from prejudice. Suck that fhit

Re^2: Abusing Map (Corrected second code block)
by ikegami (Patriarch) on May 18, 2018 at 08:11 UTC

    Given,

    my @b; $b[ $_-1 ] = $a[ $_-1 ] - $a[ $_ ] for 1 .. $#a;

    Micro-optimized:

    my @b; $b[ $_ ] = $a[ $_ ] - $a[ $_+1 ] for 0 .. $#a-1;

    Micro-optimized further:

    my @b = @a; $b[ $_ ] -= $b[ $_+1 ] for 0 .. $#b-1; pop @b;

    Well, I suspect those are faster. I didn't actually test.
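    A quick check (toy data, mine rather than ikegami's) that all three variants compute the same differences:

```perl
#!/usr/bin/env perl

use strict;
use warnings;

my @a = ( 10, 4, 7, 1 );

# Original: index runs 1 .. $#a, writing each difference to slot $_-1.
my @b1;
$b1[ $_-1 ] = $a[ $_-1 ] - $a[ $_ ] for 1 .. $#a;

# Micro-optimized: index runs 0 .. $#a-1, avoiding the repeated $_-1.
my @b2;
$b2[ $_ ] = $a[ $_ ] - $a[ $_+1 ] for 0 .. $#a-1;

# Further: copy @a, subtract in place, then drop the surplus last element.
my @b3 = @a;
$b3[ $_ ] -= $b3[ $_+1 ] for 0 .. $#b3-1;
pop @b3;

print "@b1 | @b2 | @b3\n";    # 6 -3 6 | 6 -3 6 | 6 -3 6
```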

      Well, I suspect those are faster. I didn't actually test.

      Update: Thanks to everyone who has been following along, especially Eily, who pointed out the conceptual error with the string evals. Here, then, is a hopefully correct version (including BrowserUK's optimised "while" solution) using anon subs, which produces somewhat more believable figures (Perl is fast, just not that fast :-)

      #!/usr/bin/env perl

      use strict;
      use warnings;

      use Benchmark 'cmpthese';

      for my $arrsize (1e5, 1e6, 1e7) {
          print "Source array has $arrsize elements\n";
          my @x;
          push @x, rand for 1 .. $arrsize;
          cmpthese( 1e1, {
              BUK   => sub { my @y; $y[ $_-1 ] = $x[ $_-1 ] - $x[ $_ ] for 1 .. $#x; },
              ike1  => sub { my @y; $y[ $_ ] = $x[ $_ ] - $x[ $_+1 ] for 0 .. $#x-1; },
              ike2  => sub { my @y = @x; $y[ $_ ] -= $y[ $_+1 ] for 0 .. $#y-1; pop @y; },
              while => sub { my @y; my $i = $#x; $y[ $i ] = $x[ $i ] + $x[ --$i ] while $i; },
          } );
          print '-' x 80 . "\n";
      }

      Producing this output:

      Source array has 100000 elements
      (warning: too few iterations for a reliable count)
      (warning: too few iterations for a reliable count)
      (warning: too few iterations for a reliable count)
      (warning: too few iterations for a reliable count)
              Rate   BUK  ike1 while  ike2
      BUK   32.3/s    --  -19%  -32%  -32%
      ike1  40.0/s   24%    --  -16%  -16%
      while 47.6/s   48%   19%    --   -0%
      ike2  47.6/s   48%   19%    0%    --
      --------------------------------------------------------------------------------
      Source array has 1000000 elements
              Rate   BUK  ike1  ike2 while
      BUK   2.79/s    --  -30%  -39%  -42%
      ike1  4.00/s   43%    --  -12%  -16%
      ike2  4.57/s   63%   14%    --   -5%
      while 4.78/s   71%   20%    5%    --
      --------------------------------------------------------------------------------
      Source array has 10000000 elements
            s/iter   BUK  ike1  ike2 while
      BUK     3.06    --  -18%  -28%  -31%
      ike1    2.49   23%    --  -11%  -16%
      ike2    2.21   38%   13%    --   -5%
      while   2.10   46%   19%    5%    --
      --------------------------------------------------------------------------------

      So the "while" approach does just win out in the end.
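      For readers puzzling over the embedded --$i in the "while" variant, here is a more explicit spelling of the same countdown idea on a toy array (my own sketch, written to sidestep the evaluation-order subtlety of modifying $i inside a single expression):

```perl
#!/usr/bin/env perl

use strict;
use warnings;

my @x = ( 1, 2, 3, 4 );

# Count down from the top of @x; each step pairs element $i with $i-1,
# and the loop stops once $i reaches 0.
my @y;
my $i = $#x;
while ( $i ) {
    $y[ $i - 1 ] = $x[ $i - 1 ] + $x[ $i ];
    --$i;
}

print "@y\n";    # 3 5 7
```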


      The rest of this post is all cobblers. Upon close inspection, I had not set up the arrays correctly: the cardinal sin of using @a and @b transliterated verbatim from the snippets posted. Mea culpa. See above for the updated script and figures.

      I was intrigued, so I did test.

      #!/usr/bin/env perl

      use strict;
      use warnings;

      use Benchmark 'cmpthese';

      my @source;
      for my $arrsize (1e6, 1e7, 1e8) {
          print "Source array has $arrsize elements\n";
          my @source;
          push @source, rand for 1 .. $arrsize;
          cmpthese( 1e8, {
              'BUK'  => ' my @b; $b[ $_-1 ] = $a[ $_-1 ] - $a[ $_ ] for 1 .. $#a; ',
              'ike1' => ' my @b; $b[ $_ ] = $a[ $_ ] - $a[ $_+1 ] for 0 .. $#a-1; ',
              'ike2' => ' my @b = @a; $b[ $_ ] -= $b[ $_+1 ] for 0 .. $#b-1; pop @b; '
          } );
          print '-' x 80 . "\n";
      }

      With these results:

      Source array has 1000000 elements
                Rate  ike2   BUK  ike1
      ike2 2347418/s    --  -25%  -27%
      BUK  3126954/s   33%    --   -3%
      ike1 3207184/s   37%    3%    --
      --------------------------------------------------------------------------------
      Source array has 10000000 elements
                Rate  ike2   BUK  ike1
      ike2 2226180/s    --  -28%  -31%
      BUK  3079766/s   38%    --   -4%
      ike1 3204101/s   44%    4%    --
      --------------------------------------------------------------------------------
      Source array has 100000000 elements
                Rate  ike2  ike1   BUK
      ike2 1886792/s    --  -28%  -30%
      ike1 2612330/s   38%    --   -3%
      BUK  2705628/s   43%    4%    --
      --------------------------------------------------------------------------------

      Here's the proper, working code:

      #!/usr/bin/env perl

      use strict;
      use warnings;

      use Benchmark 'cmpthese';

      for my $arrsize (1e4, 1e6, 1e8) {
          print "Source array has $arrsize elements\n";
          my @x;
          push @x, rand for 1 .. $arrsize;
          cmpthese( 1e7, {
              'BUK'  => ' my @y; $y[ $_-1 ] = $x[ $_-1 ] - $x[ $_ ] for 1 .. $#x; ',
              'ike1' => ' my @y; $y[ $_ ] = $x[ $_ ] - $x[ $_+1 ] for 0 .. $#x-1; ',
              'ike2' => ' my @y = @x; $y[ $_ ] -= $y[ $_+1 ] for 0 .. $#y-1; pop @y; '
          } );
          print '-' x 80 . "\n";
      }

      Giving these figures:

      Source array has 10000 elements
                Rate  ike2   BUK  ike1
      ike2 2309469/s    --  -24%  -26%
      BUK  3030303/s   31%    --   -3%
      ike1 3134796/s   36%    3%    --
      --------------------------------------------------------------------------------
      Source array has 1000000 elements
                Rate  ike2   BUK  ike1
      ike2 2267574/s    --  -28%  -30%
      BUK  3164557/s   40%    --   -2%
      ike1 3236246/s   43%    2%    --
      --------------------------------------------------------------------------------
      Source array has 100000000 elements
                Rate  ike2   BUK  ike1
      ike2 2277904/s    --  -23%  -28%
      BUK  2976190/s   31%    --   -7%
      ike1 3184713/s   40%    7%    --
      --------------------------------------------------------------------------------

      Oddly, this doesn't affect the qualitative results noticeably. ike2 (with the pop) is still consistently the slowest. The other two are roughly the same to within the detection limits of the test, though ike1 may be a smidge faster for large arrays.
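      The conceptual error with the string evals comes down to scoping: Benchmark compiles string-form tests with eval in a scope where the lexical source array is not visible, so the loop bounds come from the empty package array and the body runs zero times, hence the implausible millions-per-second rates. A minimal sketch of the trap (the sub name is mine, not Benchmark's):

```perl
#!/usr/bin/env perl

use strict;
use warnings;

# This sub is compiled BEFORE the lexical @x below exists, so a string
# eval inside it cannot see that lexical -- just as Benchmark's eval of
# a string-form test cannot. 'no strict' mimics the eval'd context.
sub eval_elsewhere {
    my ($code) = @_;
    return eval "no strict; $code";
}

my @x = ( 1, 2, 3 );

# Inside the eval, @x is the (empty) package array @main::x ...
my $seen = eval_elsewhere('scalar @x');
print "elements visible to the eval: $seen\n";    # 0

# ... so a loop like "for 1 .. $#x" iterates over 1 .. -1: never.
my $iterations = eval_elsewhere('my $n = 0; $n++ for 1 .. $#x; $n');
print "loop iterations in the eval: $iterations\n";    # 0
```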

        You could add the while version to the benchmark; it was the only one I promoted for efficiency.

