Re^2: Best way to store/access large dataset?

by Speed_Freak (Sexton)
on Jun 22, 2018 at 15:20 UTC ( [id://1217210]=note )


in reply to Re: Best way to store/access large dataset?
in thread Best way to store/access large dataset?

EDIT: I realized that your response was to the title of the thread, so I should clarify that I was leaning towards the best way to read that database data into Perl to manipulate it, and whether it would need to be stored in a file instead of in memory.

The database that will soon be housing the data is MariaDB, and I think getting that data will be fairly easy. The interface will allow the user to select the items and categories of interest, which will then trigger the script to build the attribute list by applying a series of qualifiers in the SELECT statement. (I'm way oversimplifying, but the database isn't ready for me to even start figuring out what that's going to look like.)
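For what it's worth, this is roughly the kind of fetch I have in mind once the database exists, done with DBI/DBD::mysql. The table and column names are made-up placeholders, and the single WHERE condition stands in for whatever qualifier chain the user builds:

    #!/usr/bin/perl
    use strict;
    use warnings;
    use DBI;

    # Placeholder connection details -- the real database isn't built yet.
    my $dbh = DBI->connect(
        'DBI:mysql:database=attributes;host=localhost',
        'user', 'password',
        { RaiseError => 1, mysql_use_result => 1 },  # stream rows instead of slurping
    );

    # One WHERE condition standing in for the user-built qualifier chain.
    my $sth = $dbh->prepare(
        'SELECT item, category, attribute, value
           FROM measurements
          WHERE category = ? AND value > ?'
    );
    $sth->execute('some_category', 0.05);

    # Walk the result set one row at a time so it never has to fit in memory,
    # tallying counts for the later summary-table pass.
    my %summary;
    while ( my ($item, $category, $attribute, $value) = $sth->fetchrow_array ) {
        $summary{$category}{$attribute}++;
    }

    $sth->finish;
    $dbh->disconnect;

The mysql_use_result attribute is there so rows stream through the script rather than all being pulled into memory at once, which is part of what I'm trying to decide (file vs. memory).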

That initial pull of data will involve around 1.8 billion calculations if the qualifiers are relatively simple. The qualifiers are user-definable, so they could range from simple greater-than/less-than comparisons to various combinations of percentages of different values from the database.
Following that comes this script, which will ultimately perform an additional ~49 million calculations on the summary table to find the unique attributes (a chain of greater-than/less-than qualifiers based on the attribute count and category count for each attribute in each category).
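To make that second pass concrete, here is a rough standalone sketch of the kind of greater-than/less-than chain I mean. The %summary layout and the two threshold values are invented purely for illustration; the real qualifiers will be user-defined:

    use strict;
    use warnings;

    # Toy %summary in place of the real summary table, just to show the shape:
    # category => { attribute => count }.
    my %summary = (
        catA => { attr1 => 12, attr2 => 3 },
        catB => { attr1 => 15, attr3 => 20 },
        catC => { attr3 => 8 },
    );

    # Thresholds standing in for the user-defined qualifier chain.
    my $min_attr_count = 10;   # attribute must appear at least this often in a category
    my $max_categories = 2;    # and in no more than this many categories overall

    # Count how many categories each attribute appears in.
    my %category_spread;
    for my $category ( keys %summary ) {
        $category_spread{$_}++ for keys %{ $summary{$category} };
    }

    # Keep only the attributes that clear both qualifiers.
    my %unique;
    for my $category ( keys %summary ) {
        for my $attribute ( keys %{ $summary{$category} } ) {
            next unless $summary{$category}{$attribute} >= $min_attr_count;
            next unless $category_spread{$attribute}   <= $max_categories;
            push @{ $unique{$category} }, $attribute;
        }
    }

    for my $category ( sort keys %unique ) {
        print "$category: @{ $unique{$category} }\n";
    }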

While a spreadsheet can indeed handle this second lift, it takes quite a while and isn't automated. (All of my proof-of-concept work has been done in 64-bit Excel, which takes about 45 minutes to apply all of the calculations.)

I've had a colleague trying to tackle this in R as well, but he's having limited success due to the data size and R's memory usage. I know he is making headway, but it's not his primary task, and my limited knowledge of Perl is still a hundredfold greater than my nonexistent knowledge of R.

I may be wrong, but I expect the whole chain of scripts to take quite a bit of time, so I want to streamline as much as possible in anticipation of users stacking up query requests, with each request being unique.
