Re^4: efficient perl code to count, rank

by haj (Vicar)
on Jul 18, 2021 at 22:52 UTC


in reply to Re^3: efficient perl code to count, rank
in thread efficient perl code to count, rank

Yeah, agreed on the database-can-read-CSV issue. That eliminates this overhead.

But then, the code example by tybalt89 (I had prepared something very similar to run benchmarks) doesn't swap, regardless of how big the dataset is. Time is more or less linear with the number of records. My (not very up-to-date) system processes about 20000 records per minute, which means I wouldn't stand a chance of processing 14M records in four hours. NYTProf shows that most of the time goes into preparing and printing the output file; even sending the output to an SSD doesn't help much.
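For reference, the skeleton of what I ran looks roughly like this (my own paraphrase with a placeholder file name, not tybalt89's exact code): stream the file line by line and accumulate per-column counts, so memory use stays flat and nothing swaps.

#!/usr/bin/perl
use strict;
use warnings;

my @counts;    # $counts[$col]{$value} = number of occurrences
open my $fh, '<', 'data.csv' or die "data.csv: $!";
while ( my $line = <$fh> ) {
    chomp $line;
    my @fields = split /,/, $line;
    $counts[$_]{ $fields[$_] }++ for 0 .. $#fields;
}
close $fh;

The profile was taken the usual way: run the script under perl -d:NYTProf, then render the report with nytprofhtml.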

I wonder what indexing you would apply to the problem at hand? If you can provide an example, I'd be happy to run it against my SQLite or Postgres server on the same system for comparison. I don't mind working with databases at all (how could I: I've worked as a product manager for database engines for some years). But in this case the suggestions to use a database (or MCE) all came with little concrete help for the OP and his program. tybalt89 and I found an actual performance issue which, when fixed, yields several orders of magnitude of acceleration. How much gain do you expect from switching to a database?
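For the record, here is the kind of database sketch I would expect to benchmark against, assuming DBD::SQLite with SQLite 3.25+ for window functions; the table layout, column names, index, and file name are made up for illustration:

#!/usr/bin/perl
use strict;
use warnings;
use DBI;

# Hypothetical setup: load the CSV into SQLite once, then let the
# engine do the ranking with a window function (needs SQLite >= 3.25).
my $dbh = DBI->connect( 'dbi:SQLite:dbname=rank.db', '', '',
    { RaiseError => 1, AutoCommit => 0 } );

$dbh->do('CREATE TABLE IF NOT EXISTS data (id INTEGER, col1 REAL)');
# ... bulk-INSERT the 14M records here ...
$dbh->commit;

# One index per ranked column; this is the part whose benefit
# I'd want to see measured against the pure-Perl version.
$dbh->do('CREATE INDEX IF NOT EXISTS idx_col1 ON data(col1)');

my $sth = $dbh->prepare(
    'SELECT id, col1, RANK() OVER (ORDER BY col1) AS rnk FROM data');
$sth->execute;
while ( my ( $id, $val, $rnk ) = $sth->fetchrow_array ) {
    print "$id,$val,$rnk\n";
}
$dbh->disconnect;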

How much familiarity with SQL and database functions do the database aficionados expect from the OP? Is this actually helping or is this saying "look how smart I am!"?

Also, when your management likes the output you just produced, they're going to ask for more and more analytics.
I can confirm that from my own experience. But then, management doesn't ask for a 260GB CSV file; they usually want "two or three slides". One of my most successful Perl programs fell into that category. The evaluation ran once per week for several years. It could have used a database, but it didn't. Actually, no one cared.

Replies are listed 'Best First'.
Re^5: efficient perl code to count, rank
by LanX (Saint) on Jul 18, 2021 at 23:17 UTC
    Just a reminder: you were the first to suggest that the OP needs more RAM, see Re: efficient perl code to count, rank

    Anything can be done with Perl, but search and sort operations that require Perl to keep all data in memory are usually more easily solved (read: out-of-the-box) with a DB.

    Otherwise they require re-implementing sophisticated algorithms to manually "swap" RAM and disk structures, which doesn't qualify as out-of-the-box for me.

    NB: But whether the OP really needs such operations at all is still unclear!

    We are still speculating about what exactly he wanted ranked/sorted. (As demonstrated, sorting 14M entries in RAM with Perl is feasible in under 2 min, but how does that scale with larger data?)
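    A quick way to verify that figure yourself, with synthetic random data instead of the OP's file (a sketch, nothing more):

    #!/usr/bin/perl
    use strict;
    use warnings;
    use Time::HiRes qw(time);

    # Generate 14M random numbers and time a plain numeric sort in RAM.
    my @data = map { rand } 1 .. 14_000_000;
    my $t0 = time;
    my @sorted = sort { $a <=> $b } @data;
    printf "sorted %d entries in %.1f s\n", scalar(@sorted), time - $t0;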

    Cheers Rolf
    (addicted to the Perl Programming Language :)
    Wikisyntax for the Monastery

      Yeah, the RAM problem is one which becomes immediately apparent when looking at the code: hence my wording "a first guess". As has already been written in this thread (and demonstrated by tybalt89's code), it can be eliminated by working through the file line by line, so with a small change in code the RAM problem no longer exists. Also, it has nothing to do with sorting; it's just the attempt to slurp a 62GB file into an array. In the followups to the article you quoted, "sorting" isn't even mentioned, because it is irrelevant.
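      In code terms, the small change is just this (schematic, with a placeholder file name):

      use strict;
      use warnings;

      open my $fh, '<', 'data.csv' or die "data.csv: $!";

      # The RAM problem: slurping reads the whole 62GB file at once.
      # my @lines = <$fh>;

      # The small change: read one record at a time; memory use stays flat.
      while ( my $line = <$fh> ) {
          chomp $line;
          # ... process one record ...
      }
      close $fh;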

      We are still speculating what exactly he wanted to be ranked/sorted.

      Looking at the code presented in the original posting should be considered an option. tybalt89 came up with the following explanation, which matches my own interpretation:

      You were doing the ranking sort for each column...
      I'm simply assuming that the OP's code performs the operation they want done, albeit inefficiently. In that code there is not one sort over 14M entries; there are a thousand sorts (one per column). The OP's code repeats these 1000 sorts 14M times, which is why it won't finish in time, even for small arrays.
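      To make the difference concrete, here is a schematic sketch with toy data (neither the OP's nor tybalt89's actual code): sort and rank each column exactly once, then the per-row work is a cheap lookup.

      #!/usr/bin/perl
      use strict;
      use warnings;

      my @rows = ( [ 3, 1 ], [ 1, 2 ], [ 2, 3 ] );    # toy data: rows of column values

      # What the OP's code effectively did: re-sort every column inside the
      # per-row loop, i.e. (number of rows) x (number of columns) sorts.
      # The fix: rank each column once, up front.
      my %rank_of;    # $rank_of{$col}{$value} = rank of $value in column $col
      for my $col ( 0 .. $#{ $rows[0] } ) {
          my @sorted = sort { $a <=> $b } map { $_->[$col] } @rows;
          $rank_of{$col}{ $sorted[$_] } //= $_ + 1 for 0 .. $#sorted;
      }

      # Emit each row's per-column ranks: one hash lookup per cell.
      for my $row (@rows) {
          print join( ',', map { $rank_of{$_}{ $row->[$_] } } 0 .. $#$row ), "\n";
      }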

      I hope that the Monks will eventually give tybalt89's article the ranking it deserves.
