http://qs321.pair.com?node_id=1008288

perltux has asked for the wisdom of the Perl Monks concerning the following question:

Hi, I have an array and want to convert it into a hash where the array values become the keys and the array index numbers become the values of the hash.
The following code does this, but I wonder if there is a more elegant and/or efficient way of doing this?
(The content of the array could be anything, not necessarily the letter sequence I used in my example below)

my @array = qw(a b c d e f g h);
my %hash;
for (my $idx = 0; $idx < @array; $idx++) {
    $hash{$array[$idx]} = $idx;
}

Replies are listed 'Best First'.
Re: better array to hash conversion
by karlgoethebier (Abbot) on Dec 11, 2012 at 12:28 UTC

    Like this..

    use Data::Dumper;
    my @array = ('a' .. 'z');
    my %hash  = map { $array[$_] => $_ } 0 .. $#array;
    print Dumper(\%hash);
    __END__
    $VAR1 = {
              'w' => 22, 'r' => 17, 'a' => 0,  'x' => 23, 'd' => 3,
              'j' => 9,  'y' => 24, 'u' => 20, 'k' => 10, 'h' => 7,
              'g' => 6,  'f' => 5,  't' => 19, 'i' => 8,  'e' => 4,
              'n' => 13, 'v' => 21, 'm' => 12, 's' => 18, 'l' => 11,
              'c' => 2,  'p' => 15, 'q' => 16, 'b' => 1,  'z' => 25,
              'o' => 14
            };

    Update... I cheated this one ;-)

    @hash{@array} = 0..$#array;
    See also: map.

    Regards, Karl

    «The Crux of the Biscuit is the Apostrophe»

      Actually I do like your example with 'map' because it made me understand better how 'map' works (even though @hash{@array} = 0..$#array; is probably the better solution here).
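      To see that the map version and the hash-slice version build the same hash, here is a minimal self-contained sketch (the sample data is arbitrary):

```perl
use strict;
use warnings;

my @array = qw(a b c d e f g h);

# map: emit a (value => index) pair for each index, then
# assign the flat list of pairs to the hash
my %via_map = map { $array[$_] => $_ } 0 .. $#array;

# hash slice: the elements of @array name the keys, and the
# list of indices is assigned across them in one go
my %via_slice;
@via_slice{@array} = 0 .. $#array;

# both agree: every element maps to its original index
for my $k (keys %via_map) {
    die "mismatch at $k" unless $via_map{$k} == $via_slice{$k};
}
print "d => $via_map{d}\n";    # d => 3
```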
Re: better array to hash conversion
by ruzam (Curate) on Dec 11, 2012 at 14:06 UTC

    A little benchmarking can be helpful for the case of converting an array to a hash where the array values become keys and the array indices become values.

    I've benchmarked the OP's code as well as the variations presented in response; in addition, I added my own solution to the problem (variation3). The clear winner is variation1.

    my %hash;
    @hash{ @array } = 0 .. $#array;
    #!/usr/bin/env perl
    use strict;
    use warnings;
    use Benchmark qw(:all);

    my @array = qw(a b c d e f g h);

    sub original {
        my %hash;
        for (my $idx = 0; $idx < @array; $idx++) {
            $hash{$array[$idx]} = $idx;
        }
    }

    sub variation1 {
        my %hash;
        @hash{ @array } = 0 .. $#array;
    }

    sub variation2 {
        my %hash = map { $array[$_] => $_ } 0 .. $#array;
    }

    sub variation3 {
        my $idx = 0;
        my %hash = map { $_ => $idx++ } @array;
    }

    cmpthese(-10, {
        'original'   => sub { original()   },
        'variation1' => sub { variation1() },
        'variation2' => sub { variation2() },
        'variation3' => sub { variation3() },
    });
    results:
                   Rate variation2 variation3   original variation1
    variation2 142570/s         --       -15%       -35%       -49%
    variation3 168018/s        18%         --       -24%       -40%
    original   220185/s        54%        31%         --       -21%
    variation1 279147/s        96%        66%        27%         --

      As the array size grows, it doesn't take long for the OP's original to outpace variation1. It only requires an array of around 200,000 elements for that to happen, and the benefit mounts geometrically as the array size grows:

      #!/usr/bin/env perl
      use strict;
      use warnings;
      use Benchmark qw(:all);

      our @array = 'aaaa' .. 'lzzz';
      print "$#array\n";

      sub original {
          my %hash;
          for (my $idx = 0; $idx < @array; $idx++) {
              $hash{$array[$idx]} = $idx;
          }
      }

      sub variation1 {
          my %hash;
          @hash{ @array } = 0 .. $#array;
      }

      sub variation2 {
          my %hash = map { $array[$_] => $_ } 0 .. $#array;
      }

      sub variation3 {
          my $idx = 0;
          my %hash = map { $_ => $idx++ } @array;
      }

      sub variation4 {
          my $idx = 0;
          my %hash;
          $hash{ $_ } = $idx++ for @array;
      }

      cmpthese -5, {
          'original'   => \&original,
          'variation1' => \&variation1,
          'variation2' => \&variation2,
          'variation3' => \&variation3,
          'variation4' => \&variation4,
      };
      __END__
      C:\test>junk91
      210911
                   Rate variation2 variation3 variation1   original variation4
      variation2 2.08/s         --        -2%       -36%       -38%       -42%
      variation3 2.12/s         2%         --       -35%       -37%       -41%
      variation1 3.26/s        57%        54%         --        -3%        -9%
      original   3.37/s        62%        59%         3%         --        -6%
      variation4 3.57/s        72%        68%         9%         6%         --

      (I've added another variation that works better for large arrays.)


      With the rise and rise of 'Social' network sites: 'Computers are making people easier to use everyday'
      Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
      "Science is about questioning the status quo. Questioning authority".
      In the absence of evidence, opinion is indistinguishable from prejudice.

      RIP Neil Armstrong

        This is very interesting. Do you know why, and would you like to explain it?

        Thanks and best regards, Karl

        «The Crux of the Biscuit is the Apostrophe»

      This is premature optimisation gone mad! For speed to be a factor in this decision, the array would, for most purposes, need to contain on the order of 1 million entries.

      Sure, benchmarks are fun to write (although often hard to make meaningful), but the overwhelming criterion in this sort of coding decision is the clarity and maintainability of the code. On both counts, 'original' is way down the list. I'd go for variation1 or a for-modifier version of 'original', either of which is clear, succinct, and not particularly prone to coding errors.
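      The "for-modifier version of 'original'" would look something like this (a sketch using the OP's sample data):

```perl
use strict;
use warnings;

my @array = qw(a b c d e f g h);
my %hash;

# postfix for over the index range replaces the C-style loop
$hash{ $array[$_] } = $_ for 0 .. $#array;

print "c => $hash{c}\n";    # c => 2
```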

      True laziness is hard work
        This is premature optimisation gone mad! For speed to be a factor in this decision, the array would, for most purposes, need to contain on the order of 1 million entries.

        Actually, significant differences start at around 200,000; and as someone who regularly does similar processing with tens and even hundreds of millions of elements, knowing what works quickest whilst avoiding unnecessary memory growth is important.

        And unless you have some psychic insight into the OP's application, you have nothing on which to base your conclusions, so it is they that are "premature".


        With the rise and rise of 'Social' network sites: 'Computers are making people easier to use everyday'
        Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
        "Science is about questioning the status quo. Questioning authority".
        In the absence of evidence, opinion is indistinguishable from prejudice.

        RIP Neil Armstrong

      Many thanks for that. I find it very interesting that the 'for' loop is actually the second-fastest solution, faster than the solutions using 'map'.

        A for solution is generally faster than a map, as has been discussed in map versus for, amongst other places. Sometimes, however, it is more expressive to use map.

        A Monk aims to give answers to those who have none, and to learn from those who know more.
Re: better array to hash conversion
by clueless newbie (Curate) on Dec 11, 2012 at 12:23 UTC
    @hash{@array}=(0..$#hash);

    Should really read

    @hash{@array}=(0..$#array);

    Thanks for the catch, perltux!

      Shouldn't that be $#array rather than $#hash ?

        Absolutely!

Re: better array to hash conversion
by davido (Cardinal) on Dec 11, 2012 at 18:41 UTC

    Another way of looking at the problem:

    When you convert the array to a hash with keys as the array's values, and values as the array's indices, you pay the price for conversion once. Your subsequent lookups will be quite fast. But you do pay for it: the overhead of the hashing algorithm, combined with the O(n) time complexity of converting the entire array to a hash.

    On the other hand, if all you're interested in is an occasional search that yields an index, you could use List::MoreUtils' first_index function:

    use List::MoreUtils 'first_index';

    my @array = qw( a b c d e f g h );
    my $found_ix = first_index { $_ eq 'd' } @array;
    print "Found 'd' at $found_ix.\n";
    __END__
    output: Found 'd' at 3.

    This avoids the one-time overhead of generating hash keys for the entire structure, and the per-search overhead of hash lookups. But now every lookup will be an O(n) operation. If you're doing a lot of lookups this is a net loss. If you're doing few lookups, it could be a win, which would have to be verified via benchmarking.
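    To make the trade-off concrete, here is a core-only sketch (using grep over the index range in place of first_index, so nothing beyond core Perl is needed): the hash pays its O(n) cost once and then answers each query in roughly O(1), while a linear scan pays up to O(n) on every call.

```perl
use strict;
use warnings;

my @array = qw(a b c d e f g h);

# one-time O(n) conversion, then cheap repeated lookups
my %index_of;
@index_of{@array} = 0 .. $#array;
print "hash lookup: $index_of{f}\n";    # hash lookup: 5

# no up-front conversion, but every search scans the array;
# grep over the index range mimics first_index for this case
my ($ix) = grep { $array[$_] eq 'f' } 0 .. $#array;
print "linear scan: $ix\n";             # linear scan: 5
```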

    One nice thing about the first_index method is that its semantics are pretty clear. But if you're doing frequent lookups your original idea of using a hash lookup is good.


    Dave

      Thank you very much and best regards, Karl

      «The Crux of the Biscuit is the Apostrophe»

Re: better array to hash conversion
by Anonymous Monk on Dec 11, 2012 at 12:23 UTC

    The following code does this, but I wonder if there is a more elegant and/or efficient way of doing this?

    Really? What search terms did you use to look?

    Did you benchmark?

     my %hash; @hash{ @array } = 0 .. $#array;

      I did do searches for "array to hash conversion", but all the results were examples that use 'map' and did not seem to work for my specific conversion (the array values becoming the keys and the array index numbers becoming the values).

      Sorry for asking what might look like a dumb question to an expert, but my Perl skills are still limited.

      Anyway, thanks for your reply and thanks to everybody else who replied, too!

        Sorry for asking what might look like a dumb question to an expert, but my Perl skills are still limited.

        :) My intent was not to admonish; I was interested to know your search terms (you should share them if you searched), as this topic can be hard to find.

        I tried a bunch of variations; I even resorted to ?node_id=3989;BIT=benchmark%20hash%7B. The closest I found were Array to Hash and Using map to create a hash/bag from an array -- not your case exactly, but they seem to cover all the available syntax, so you could adapt them.

Re: better array to hash conversion
by Anonymous Monk on Dec 11, 2012 at 15:14 UTC

    While the hash slice method is the most idiomatic, here's one more variation, based on the OP's code -- just taking out the C-like for loop:

    $hash{ $array[$_] } = $_ for (0..$#array);

    And something that works only on 5.12 and later but is very easy to read:

    use 5.012;

    while (my ($idx, $val) = each(@array)) {
        $hash{$val} = $idx;
    }