Re^3: Speed up file write taking weeks

by dave_the_m (Monsignor)
on Jul 01, 2019 at 10:55 UTC


in reply to Re^2: Speed up file write taking weeks
in thread Speed up file write taking weeks

Generating trillions of records from initial data containing millions of records, then deduplicating that back to millions of records, sounds like a really bad idea. If you gave us a better idea of what data you start with (how many rows and columns, how many unique keys, the average size of a record, etc.) and what final result you want, we might be able to come up with some suggestions.

Dave.

Re^4: Speed up file write taking weeks
by Sanjay (Sexton) on Jul 02, 2019 at 13:32 UTC

    Let us move from here to the original problem.

    Two tables are used as "input" to the SQL query:

    1. T_1_DAT ( t_1_id* , t_1_data_1 ) - 65 million rows - * = primary key

    2. T_2_KEY ( t_2_id@ , t_2_data_2 ) - 72 million rows - @ = part of primary key and an index on this field alone

    SQL

    insert into T_3_XYZ ( t_3_id , t_3_data_1 , t_3_data_2 ) select distinct t_1_id , t_1_data_1 , t_2_data_2 from T_1_DAT , T_2_KEY where t_1_id = t_2_id

    This works on databases that support a multi-column DISTINCT. 1.7 trillion rows are generated if the DISTINCT is removed; we are guessing around 100 million with it.

    Now that we are using the flat-file method, we are "joining" the two files, generating the 1.7 trillion rows of output in instalments. After this we will sort it and "coalesce" it - in stages, I guess!

      Well I can't comment on improving the SQL since that's not my expertise, but starting from flat files and using perl, I suggest:

      Sort both input files by key (either when exporting them from the DB or as a post-export step). Then get perl to read the first line from each file and extract its key. If the keys don't match, read another line from the file whose key sorts earlier. Once you have identical keys from both files, read in lines from both files while they still have the same key. At this point you will have extracted two lists of data values; from your original example, for the key 'D' they would be (x, y) and (m, n, o). De-duplicate each list individually (assuming that such duplicates are possible) using a pair of hashes. Then calculate the cartesian product of those two lists, using a third hash to de-duplicate. Finally, output to a third file a series of lines containing the same key and the various data pairs.

      Then go back to the step of reading new lines from each file.
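
      A minimal, untested sketch of that merge in Perl might look like the following. The tab-separated "key<TAB>value" record format, the file names t_1_dat.sorted, t_2_key.sorted and t_3_xyz.txt, and the assumption that both inputs are already sorted lexically by key are illustrative, not taken from the thread:

      #!/usr/bin/perl
      # Sorted-merge join of two key/value files, as described above.
      # Assumes one "key<TAB>value" record per line and both files already
      # sorted lexically by key (so 'lt' agrees with the sort order).
      use strict;
      use warnings;

      open my $fh1, '<', 't_1_dat.sorted' or die "t_1_dat.sorted: $!";
      open my $fh2, '<', 't_2_key.sorted' or die "t_2_key.sorted: $!";
      open my $out, '>', 't_3_xyz.txt'    or die "t_3_xyz.txt: $!";

      my ($k1, $v1) = read_rec($fh1);
      my ($k2, $v2) = read_rec($fh2);

      while (defined $k1 && defined $k2) {
          # Advance whichever file's key sorts earlier until the keys match.
          if    ($k1 lt $k2) { ($k1, $v1) = read_rec($fh1); next }
          elsif ($k2 lt $k1) { ($k2, $v2) = read_rec($fh2); next }

          my $key = $k1;

          # Collect every value for this key from each file, de-duplicating
          # each list with its own hash.
          my (%seen1, %seen2, @list1, @list2);
          while (defined $k1 && $k1 eq $key) {
              push @list1, $v1 unless $seen1{$v1}++;
              ($k1, $v1) = read_rec($fh1);
          }
          while (defined $k2 && $k2 eq $key) {
              push @list2, $v2 unless $seen2{$v2}++;
              ($k2, $v2) = read_rec($fh2);
          }

          # Cartesian product of the two (already unique) lists; the third
          # hash mirrors the de-duplication step described above.
          my %seen_pair;
          for my $d1 (@list1) {
              for my $d2 (@list2) {
                  print {$out} "$key\t$d1\t$d2\n" unless $seen_pair{"$d1\t$d2"}++;
              }
          }
      }

      # Read one line and split it into (key, value); returns an empty list at EOF.
      sub read_rec {
          my ($fh) = @_;
          my $line = <$fh>;
          return unless defined $line;
          chomp $line;
          return split /\t/, $line, 2;
      }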

      Dave.

      Does the performance improve if you change the query to use proper JOINs instead?

      insert into T_3_XYZ ( t_3_id , t_3_data_1 , t_3_data_2 ) select distinct t_1_id , t_1_data_1 , t_2_data_2 from T_1_DAT inner join T_2_KEY on t_1_id = t_2_id

      Ideally the query optimizer would recognize that the join condition in the original WHERE clause is equivalent to the INNER JOIN's ON clause, but maybe it doesn't?
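
      One untested way to check is to compare the execution plans of the two forms, for example via DBI. In the sketch below the driver name, credentials and the plain EXPLAIN prefix are placeholders; the exact syntax for viewing a plan varies by database (e.g. EXPLAIN PLAN FOR on Oracle):

      #!/usr/bin/perl
      # Print the optimizer's plan for both forms of the query so they can be
      # compared. DSN, credentials and EXPLAIN syntax are illustrative only.
      use strict;
      use warnings;
      use DBI;

      my $dbh = DBI->connect('dbi:SomeDriver:dbname=mydb', 'user', 'password',
                             { RaiseError => 1 });

      for my $sql (
          'select distinct t_1_id, t_1_data_1, t_2_data_2
             from T_1_DAT, T_2_KEY where t_1_id = t_2_id',
          'select distinct t_1_id, t_1_data_1, t_2_data_2
             from T_1_DAT inner join T_2_KEY on t_1_id = t_2_id',
      ) {
          print "--- plan for: $sql\n";
          my $sth = $dbh->prepare("EXPLAIN $sql");
          $sth->execute;
          while (my $row = $sth->fetchrow_arrayref) {
              print join("\t", map { defined $_ ? $_ : '' } @$row), "\n";
          }
      }

      $dbh->disconnect;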
