PerlMonks
Re: Too Many IDs by kcott (Archbishop) on Jan 09, 2020 at 08:21 UTC ( [id://11111231] )
G'day The_Dj,

Although not stated, I'm assuming all id, sn, etc. values are unique. If that's not the case, neither your current solution nor my alternative suggestion will work properly.

Instead of recreating the entire hash multiple times, consider having a single hash with all the data, plus simple mappings of sn to id (extendable for future requirements). Here's a quick example:
Output: [the example code and its output were not preserved in this copy]
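A minimal sketch of the single-hash-plus-mapping approach described above; the record fields (id, sn, name) and the sample data are assumptions for illustration, not the original post's code:

```perl
#!/usr/bin/env perl
use strict;
use warnings;

# Hypothetical records; field names and values are invented for illustration.
my @records = (
    { id => 101, sn => 'SN-A', name => 'alpha' },
    { id => 102, sn => 'SN-B', name => 'beta'  },
    { id => 103, sn => 'SN-C', name => 'gamma' },
);

# Single data source: all records keyed by id.
my %dat_by_id = map +( $_->{id} => $_ ), @records;

# Lightweight mapping of sn to id; add further mappings as requirements grow.
my %map_sn_to_id = map +( $_->{sn} => $_->{id} ), @records;

# Accessor so callers don't hard-code the double hash lookup.
sub get_id_for_sn { my ($sn) = @_; return $map_sn_to_id{$sn} }

# Every lookup goes through a mapping to the one data source.
my $sn = 'SN-B';
print $dat_by_id{ get_id_for_sn($sn) }{name}, "\n";    # prints "beta"
```

Note the "map EXPR, LIST" form with the unary plus, "map +(...", which tells the parser the brace-free expression is a list, not a block.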
Having a single data source reduces the chance of errors and should make maintenance and debugging (if necessary) easier.

I see you've used "map BLOCK LIST", and I'm aware that's considered a Best Practice; however, "map EXPR, LIST" is faster and may make a difference, especially when you're dealing with millions of data elements. Use Benchmark to test. See map for more on these two forms, as well as an explanation of the unary plus, "map +(...", I used (if you're unfamiliar with that syntax).

I've only shown a barebones technique. For production use, I'd suggest setting up a series of functions, e.g. get_id_for_sn($sn), instead of continually hard-coding an equivalent $dat_by_id{$map_sn_to_id{$sn}}{id}.

— Ken
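To test the "map BLOCK" vs. "map EXPR" claim on your own data, a Benchmark sketch along these lines could be used; the data shape here is a hypothetical stand-in for your real records:

```perl
use strict;
use warnings;
use Benchmark qw(cmpthese);

# Hypothetical data set; substitute your real records.
my @records = map +( { id => $_, sn => "SN-$_" } ), 1 .. 10_000;

# Run each form for at least 1 CPU-second and print a comparison table.
cmpthese( -1, {
    'map BLOCK' => sub { my %h = map { $_->{sn} => $_->{id} } @records; },
    'map EXPR'  => sub { my %h = map +( $_->{sn} => $_->{id} ), @records; },
} );
```

Both forms build identical hashes; only their parsing and speed differ, so the comparison is purely about overhead.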
In Section: Seekers of Perl Wisdom