PerlMonks
Re^5: Huge files manipulation by BrowserUk (Patriarch)
on Nov 10, 2008 at 17:36 UTC [id://722694]
"And that shows the weakness of your approach. It requires a priori knowledge about the keys.... You'd need to tune your program for different datasets."

That's not a weakness--it's a strength. It is very rare that we are manipulating truly unknown data. Using a tailored solution over a generic one is often the best optimisation you can make, especially as it only takes 18 seconds of CPU (~1 minute elapsed) to get the information needed to decide on a good strategy.
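The code block that originally followed here is missing from this copy. A minimal sketch of such a first pass, assuming newline-delimited records with a tab-separated key in the first field (both assumptions, not from the original), might look like:

```perl
use strict;
use warnings;

# Hypothetical first pass: tally records per leading key character so a
# partitioning strategy can be chosen before the real work on the huge
# file begins. Assumes tab-delimited records with the key first.
sub key_histogram {
    my ($fh) = @_;
    my %count;
    while ( my $line = <$fh> ) {
        my ($key) = split /\t/, $line, 2;
        next unless defined $key and length $key;
        $count{ substr $key, 0, 1 }++;    # bucket by first key character
    }
    return \%count;
}

# Usage:
#   open my $fh, '<', 'huge.dat' or die $!;
#   my $hist = key_histogram($fh);
#   printf "%s : %d\n", $_, $hist->{$_} for sort keys %$hist;
```

A single sequential read like this is cheap relative to the main job, which is the point being made about the 18 seconds of CPU.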
Sure, you could add code to the above to perform that as a first pass, and then use some bin-packing algorithm or other heuristic to try to determine an optimum strategy; but unless you are doing this dozens of times per day on different datasets, it isn't worth the effort. But 5 minutes versus 25 is worth it.

Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
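As a sketch of the kind of heuristic meant here (my illustration, not code from the thread): take the per-key counts from a first pass and assign keys, largest first, to whichever of a fixed number of passes is currently lightest. This is greedy load balancing rather than classical capacity-bounded bin packing, but it serves the same purpose of splitting the work evenly.

```perl
use strict;
use warnings;

# Hypothetical heuristic: greedy largest-first assignment of key buckets
# (from a first-pass histogram) to $nbins processing passes, keeping the
# record counts per pass roughly balanced.
sub pack_buckets {
    my ( $counts, $nbins ) = @_;    # { key => record_count }, pass count
    my @bins = map { { total => 0, keys => [] } } 1 .. $nbins;
    for my $key ( sort { $counts->{$b} <=> $counts->{$a} } keys %$counts ) {
        # place each bucket, biggest first, into the least-loaded bin
        my ($bin) = sort { $a->{total} <=> $b->{total} } @bins;
        $bin->{total} += $counts->{$key};
        push @{ $bin->{keys} }, $key;
    }
    return \@bins;
}
```

Whether this beats a hand-tuned split depends entirely on the dataset, which is the argument being made: for a one-off job, eyeballing the histogram is usually enough.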
"Science is about questioning the status quo. Questioning authority".
In the absence of evidence, opinion is indistinguishable from prejudice.
In Section Seekers of Perl Wisdom