Re^2: Huge files manipulation

by JavaFan (Canon) on Nov 10, 2008 at 15:50 UTC


in reply to Re: Huge files manipulation
in thread Huge files manipulation

One reason for not using the sort -u or uniq commands is that you may wish to retain the original ordering (minus the discards).

As said earlier in the thread, if you want to keep the ordering, just prepend the line number, sort and uniquify on the data, re-sort on the line number, and cut. Or, as a one-liner:

nl -s '|' file_with_dups | sort -k 2,8 -t '|' -u | sort -nb | cut -d '|' -f 2- > file_without_dups
This is a standard trick.
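
For anyone who would rather drive it from Perl than the shell, here is a rough sketch of the same decorate / sort-unique / re-sort / undecorate trick. It only mirrors the one-liner above: it assumes GNU sort is on the PATH, that the data never contains '|', and the file names are placeholders.

#!/usr/bin/perl
# Sketch: decorate with line numbers, sort-unique on the data,
# re-sort on the line number, then strip the decoration again.
use strict;
use warnings;

my ($in_file, $out_file) = ('file_with_dups', 'file_without_dups');

# decorate: prefix every line with its line number
open my $in,  '<', $in_file       or die "$in_file: $!";
open my $dec, '>', "$in_file.num" or die "$in_file.num: $!";
print {$dec} "$.|$_" while <$in>;
close $dec;

# unique on the data field, then restore the original order
system("sort -t'|' -k2 -u $in_file.num | sort -t'|' -k1,1n > $in_file.srt") == 0
    or die "sort pipeline failed: $?";

# undecorate: strip the line-number prefix
open my $srt, '<', "$in_file.srt" or die "$in_file.srt: $!";
open my $out, '>', $out_file      or die "$out_file: $!";
while (<$srt>) {
    s/^\d+\|//;
    print {$out} $_;
}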

Re^3: Huge files manipulation
by BrowserUk (Patriarch) on Nov 10, 2008 at 16:05 UTC
    This is a standard trick.

    Yes. And it requires 4 passes of the dataset, including two full sorts.

    And it takes 5 or 6 times as long as the script above, if I use 5 passes ('a-e', 'f-j', 'k-o', 'p-t', 'u-z') on the same dataset.
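
    Purely for illustration, a minimal sketch of that multi-pass idea (not the actual script from earlier in the thread): each pass deduplicates only the lines whose first character falls in the current range, so the %seen hash never holds more than that range's share of the keys, and a final pass writes the survivors in their original order. The ranges and file name are illustrative, %drop assumes the duplicates are the smaller part of the data, and lines starting outside a-z are simply never deduplicated in this sketch.

    #!/usr/bin/perl
    use strict;
    use warnings;

    my @ranges = ( ['a','e'], ['f','j'], ['k','o'], ['p','t'], ['u','z'] );
    my $file   = 'huge.dat';
    my %drop;                                # line numbers of later duplicates

    for my $r (@ranges) {
        my ($lo, $hi) = @$r;
        my %seen;                            # only this range's keys
        open my $in, '<', $file or die "$file: $!";
        while (my $line = <$in>) {
            my $c = lc substr $line, 0, 1;
            next if $c lt $lo or $c gt $hi;  # not this pass's share
            $drop{$.} = 1 if $seen{$line}++; # later occurrence: discard
        }
        close $in;
    }

    # final pass: emit the survivors in their original order
    open my $in, '<', $file or die "$file: $!";
    while (my $line = <$in>) {
        print $line unless $drop{$.};
    }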


    Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
    "Science is about questioning the status quo. Questioning authority".
    In the absence of evidence, opinion is indistinguishable from prejudice.
      Yes. And it requires 4 passes of the dataset, including two full sorts.
      The number of passes isn't really relevant - the sorts maybe. But the main reason I posted the one-liner was that you appeared to suggest that the sort solution wouldn't work if you wanted to keep the order.
      I use 5 passes ('a-e','f-j', 'k-o', 'p-t', 'u-z') on the same dataset.
      And that shows the weakness of your approach. It requires a priori knowledge about the keys. A bad choice of how to divide the keys may lead to almost all of them being handled in the same pass. You'd need to tune your program for different datasets.
        And that shows the weakness of your approach. It requires a priori knowledge about the keys.... You'd need to tune your program for different datasets.

        That's not a weakness--it's a strength. It is very rare that we are manipulating truly unknown data. Using a tailored solution over a generic one is often the best optimisation one can make.

        Especially as it only takes 18 seconds of CPU (~1 minute elapsed) to gather the information needed to decide on a good strategy:

        [17:30:50.77] c:\test>perl -nle"$h{substr $_, 0, 1}++ } { print qq[$_ ; $h{$_}] for sort keys %h" huge.dat
        a ; 338309
        b ; 350183
        c ; 579121
        d ; 378275
        e ; 244480
        f ; 262343
        g ; 195069
        h ; 218473
        i ; 255346
        j ; 53779
        k ; 42300
        l ; 182454
        m ; 315040
        n ; 126363
        o ; 153509
        p ; 475042
        q ; 28539
        r ; 368981
        s ; 687237
        t ; 303949
        u ; 162953
        v ; 92308
        w ; 155841
        x ; 1669
        y ; 18143
        z ; 10294
        [17:32:42.65] c:\test>

        Sure, you could add code to the above to perform that as a first pass, and then some bin-packing algorithm or other heuristic to try to determine an optimum strategy, but unless you are doing this dozens of times per day on different datasets, it isn't worth the effort. But 5 minutes versus 25 is worth it.
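
        For what it's worth, the heuristic needn't be fancy. Here is a sketch of that "first pass plus packing" idea: feed the per-letter counts from the one-liner above into a greedy packer that always adds the next-biggest letter to the lightest pass. The %count hash below is hypothetical input in the same shape as that histogram (only a few letters shown); the passes it produces need not be contiguous ranges, so the per-pass membership test becomes a hash lookup rather than a range comparison.

        #!/usr/bin/perl
        use strict;
        use warnings;

        my $passes = 5;
        my %count  = ( a => 338309, b => 350183, c => 579121, s => 687237, x => 1669 );  # ...fill in the rest from the run above

        my @bin = map { { letters => [], total => 0 } } 1 .. $passes;

        # biggest letters first, each one into the currently lightest pass
        for my $letter ( sort { $count{$b} <=> $count{$a} } keys %count ) {
            my ($lightest) = sort { $a->{total} <=> $b->{total} } @bin;
            push @{ $lightest->{letters} }, $letter;
            $lightest->{total} += $count{$letter};
        }

        printf "pass %d: %s (%d lines)\n",
            $_ + 1, join( '', sort @{ $bin[$_]{letters} } ), $bin[$_]{total}
            for 0 .. $#bin;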


        Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
        "Science is about questioning the status quo. Questioning authority".
        In the absence of evidence, opinion is indistinguishable from prejudice.
