PerlMonks |
Re^3: System call doesn't work when there is a large amount of data in a hash
by 1nickt (Canon)
on May 01, 2020 at 01:11 UTC
Hi again, I'll just suggest once more that you let go of the idea that you must load all your data into an in-memory hash in order for your program to be fast. For one very fast approach, please look at mce_map_f in MCE::Map (also by the learned marioroy), which is designed specifically for optimized parallel processing of huge files.

(As an aside, have you profiled your code? I would think that Perl could load data from anywhere (file, database, whatever) faster than a shell call to an external analytical program would return ... or does your program not expect a response?)
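A minimal sketch of the mce_map_f approach (the file path, tab-separated layout, and worker count are assumptions for illustration, not your actual data):

```perl
use strict;
use warnings;
use MCE::Map;

# Hypothetical tuning; MCE picks sensible defaults if you omit this.
MCE::Map->init( max_workers => 4, chunk_size => 'auto' );

# mce_map_f reads the file in chunks across parallel workers, passing
# each line as $_ -- no single process holds the whole file in memory.
my @records = mce_map_f {
    chomp;
    my ( $key, $value ) = split /\t/;   # assumed tab-separated fields
    [ $key, $value ];
} '/path/to/huge_file.tsv';

MCE::Map->finish;

printf "parsed %d records\n", scalar @records;
```

The block inside mce_map_f runs in the workers, so keep it free of side effects on parent-process data structures.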
As for your finding that "parallelisation of the code after loading the hashes ... turned out slowing down the process or impossible because it would duplicate the hash" ... please see MCE::Shared::Hash. Hope this helps!
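A minimal sketch of a shared hash, assuming MCE::Hobo workers and a contrived counter workload -- the point is that the hash lives in MCE::Shared's server process, so the workers update one copy instead of each duplicating it:

```perl
use strict;
use warnings;
use MCE::Hobo;
use MCE::Shared;

# One shared hash, managed by the MCE::Shared server process.
my $counts = MCE::Shared->hash();

# Hypothetical workload: four workers incrementing a shared counter.
my @workers = map {
    MCE::Hobo->create( sub {
        $counts->incr('seen') for 1 .. 1000;
    } );
} 1 .. 4;

$_->join() for @workers;

print $counts->get('seen'), "\n";   # 4000
```

Each method call on the shared object is a round-trip to the server process, so batch work per call where you can rather than hammering it one key at a time.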
The way forward always starts with a minimal test.
In Section: Seekers of Perl Wisdom