Re^3: Huge files manipulation

by BrowserUk (Patriarch)
on Nov 10, 2008 at 16:05 UTC [id://722677]


in reply to Re^2: Huge files manipulation
in thread Huge files manipulation

    This is a standard trick.

Yes. And it requires 4 passes of the dataset, including two full sorts (sketched below).
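
The trick, roughly (a sketch, not JavaFan's exact code, and it assumes a Unix-style sort and tab-free data):

    # 1) tag every line with its line number
    perl -pe 's/^/$.\t/' huge.dat > tagged
    # 2) first full sort: order by content, line number breaking ties
    sort -t "$(printf '\t')" -k2 -k1,1n tagged > bycontent
    # 3) duplicates are now adjacent; keep only the first of each run
    perl -ne '(my $l = $_) =~ s/^\d+\t//; print if $l ne ($prev // q()); $prev = $l;' bycontent > firsts
    # 4) second full sort restores the original order; then strip the tags
    sort -k1,1n firsts | perl -pe 's/^\d+\t//' > deduped

That's the 4 passes (tag, sort, uniq, sort-and-strip), two of them full sorts.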

And it takes 5 or 6 times as long as the script above when I use 5 passes ('a-e', 'f-j', 'k-o', 'p-t', 'u-z') on the same dataset.
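
The script referred to isn't reproduced in this excerpt, but the range-partitioned multi-pass idea looks roughly like this (a sketch with illustrative file names, not the actual script):

    #!/usr/bin/perl
    use strict;
    use warnings;

    # Each pass dedups only the lines whose first character falls in
    # one range, so only that range's keys need fit in memory at once.
    # Lines are only ever dropped, never moved, so order is preserved.
    my @ranges = ( qr/^[a-e]/, qr/^[f-j]/, qr/^[k-o]/, qr/^[p-t]/, qr/^[u-z]/ );

    my $in = 'huge.dat';
    for my $pass ( 0 .. $#ranges ) {
        my $out = "pass$pass.tmp";
        open my $IN,  '<', $in  or die "$in: $!";
        open my $OUT, '>', $out or die "$out: $!";
        my %seen;
        while ( my $line = <$IN> ) {
            # drop the line only if it is in range AND already seen
            next if $line =~ $ranges[$pass] and $seen{$line}++;
            print {$OUT} $line;
        }
        close $IN;
        close $OUT;
        $in = $out;    # the next pass reads this pass's output
    }
    print "deduplicated output is in $in\n";

Only one range's keys are ever held in %seen at a time, which is what bounds the memory footprint; the cost is rereading the (shrinking) file once per range.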


Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
"Science is about questioning the status quo. Questioning authority".
In the absence of evidence, opinion is indistinguishable from prejudice.

Re^4: Huge files manipulation
by JavaFan (Canon) on Nov 10, 2008 at 16:48 UTC
    Yes. And it requires 4 passes of the dataset, including two full sorts.
    The number of passes isn't really relevant; the sorts maybe are. But the main reason I posted the one-liner was that you appeared to suggest the sort solution wouldn't work if you wanted to keep the original order.
    I use 5 passes ('a-e','f-j', 'k-o', 'p-t', 'u-z') on the same dataset.
    And that shows the weakness of your approach. It requires a priori knowledge of the keys. A bad choice of key ranges may lead to almost all the keys being handled in the same pass. You'd need to tune your program for different datasets.
      And that shows the weakness of your approach. It requires a priori knowledge of the keys.... You'd need to tune your program for different datasets.

      That's not a weakness; it's a strength. It is very rare that we are manipulating truly unknown data. Using a tailored solution over a generic one is often the best optimisation you can make.

      Especially as it only takes 18 seconds of CPU (~1 elapsed minute) to get the information needed to decide on a good strategy:

      [17:30:50.77] c:\test>perl -nle"$h{substr $_, 0, 1}++ } { print qq[$_ ; $h{$_}] for sort keys %h" huge.dat
      a ; 338309
      b ; 350183
      c ; 579121
      d ; 378275
      e ; 244480
      f ; 262343
      g ; 195069
      h ; 218473
      i ; 255346
      j ; 53779
      k ; 42300
      l ; 182454
      m ; 315040
      n ; 126363
      o ; 153509
      p ; 475042
      q ; 28539
      r ; 368981
      s ; 687237
      t ; 303949
      u ; 162953
      v ; 92308
      w ; 155841
      x ; 1669
      y ; 18143
      z ; 10294
      [17:32:42.65] c:\test>

      Sure, you could add code to perform that as a first pass, and then use some bin-packing algorithm or other heuristic to determine an optimum strategy (see the sketch below), but unless you are doing this dozens of times per day on different datasets, it isn't worth the effort. But 5 minutes versus 25 is worth it.
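
      For illustration only (my sketch, not code from the thread), such a heuristic can be a few lines: greedily pack the per-letter counts into five roughly equal bins, feeding it the output of the one-liner above:

          #!/usr/bin/perl
          use strict;
          use warnings;

          # Greedy bin packing: read "letter ; count" lines (as printed
          # by the histogram one-liner) and group the letters into $N
          # sets with roughly equal total line counts.
          my $N = 5;
          my %count;
          while (<>) {
              my ( $letter, $n ) = /^\s*(\S+) ; (\d+)/ or next;
              $count{$letter} = $n;
          }
          my @bins = map { { letters => [], total => 0 } } 1 .. $N;
          # largest counts first, each into the currently lightest bin
          for my $letter ( sort { $count{$b} <=> $count{$a} } keys %count ) {
              my ($lightest) = sort { $a->{total} <=> $b->{total} } @bins;
              push @{ $lightest->{letters} }, $letter;
              $lightest->{total} += $count{$letter};
          }
          printf "%s : %d lines\n", join( '', sort @{ $_->{letters} } ),
              $_->{total} for @bins;

      The bins it prints needn't be contiguous letter ranges, but nothing in the multi-pass approach requires that they be.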


      Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
      "Science is about questioning the status quo. Questioning authority".
      In the absence of evidence, opinion is indistinguishable from prejudice.
