http://qs321.pair.com?node_id=1000854


in reply to Processing ~1 Trillion records

Obviously, the algorithm that you present is probably not the real one, but this simply looks to me like something that ought to benefit from sorting and/or grouping at the database level. Is there truly nothing that you can do in that query to produce aggregated results? Also: you still need a breakdown of the timing, even if you simply print the time-of-day to STDERR at the point at which the query-prepare is finished and at the point at which the first row of data is produced. I’ve got several terabytes of data storage on my computer right here, and even though it takes a while to move that much data around, and even though it’s not squirting through a large TCP/IP network, I don’t believe for a second that “16 days” can’t be very significantly improved upon.
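
For what it’s worth, a minimal sketch of that kind of instrumentation might look like this, assuming a DBI handle in $dbh and the big query in $sql (both names are placeholders for whatever your script actually uses):

    my $sth = $dbh->prepare($sql);
    print STDERR "prepare finished at ", scalar localtime, "\n";

    $sth->execute();
    print STDERR "execute finished at ", scalar localtime, "\n";

    my $seen_first_row = 0;
    while ( my @row = $sth->fetchrow_array ) {
        print STDERR "first row arrived at ", scalar localtime, "\n"
            unless $seen_first_row++;
        # ... existing per-row processing goes here ...
    }

Those three timestamps alone would tell you whether the time is going into the server, into the network transfer, or into the Perl-side processing.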

You should also, just to be sure, explain that query (since it doesn’t use the verb inner join), to make sure that it’s not doing something absolutely insane such as a Cartesian product at some point.   (What could produce a 16-day runtime?   Anything along those lines would.   And if there are no indexes, you have probably just found your problem; explain would confirm or deny it.)
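
On Oracle you can get that plan without leaving DBI. A rough sketch, reusing the $dbh and $sql placeholders from above (and assuming $sql carries no trailing semicolon or bind placeholders):

    # Ask Oracle to record the execution plan for the query ...
    $dbh->do("EXPLAIN PLAN FOR $sql");

    # ... then pull the formatted plan back out via DBMS_XPLAN.
    my $plan = $dbh->selectcol_arrayref(
        "SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY)"
    );
    print STDERR "$_\n" for @$plan;

A MERGE JOIN CARTESIAN step, or a full scan of the biggest table, in that output would be the smoking gun.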

Probably the number-one improvement would be any way whatsoever by which you can prevent all that data from being transmitted.   The second would be to avoid a massive hash that must accumulate before its contents can be dumped.   For instance, if the data were or could be indexed by what you call “marker,” then you could select distinct a list of those markers and process them one at a time, perhaps in parallel.   The script would no longer have to grind away for 16 days without producing anything, at the ever-present risk of producing nothing at all.   If meaningful, it might even be able to say, “I already have that file, and it looks like I don’t need to produce it again.”   (If the data were stored back in a table rather than a CSV, the server might be able to do that with the help of a join ... and the whole process might conceivably become a candidate for a stored procedure, or for a process running directly on the database server, thereby avoiding across-the-network I/O.)
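
A rough sketch of that per-marker loop, assuming a hypothetical table big_table with marker, sample and value columns (substitute your real table and column names):

    # Fetch the list of distinct markers once ...
    my $markers = $dbh->selectcol_arrayref(
        "SELECT DISTINCT marker FROM big_table"
    );

    # ... then handle one marker at a time, so each output file can be
    # finished and its in-memory data thrown away before the next one.
    my $sth = $dbh->prepare(
        "SELECT sample, value FROM big_table WHERE marker = ?"
    );
    for my $marker (@$markers) {
        next if -e "$marker.csv";    # skip files already produced on a previous run
        $sth->execute($marker);
        open my $out, '>', "$marker.csv" or die "open $marker.csv: $!";
        while ( my ($sample, $value) = $sth->fetchrow_array ) {
            print {$out} "$sample,$value\n";
        }
        close $out;
    }

This only pays off if marker is indexed; without an index, each of those per-marker queries would be yet another full scan.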

Replies are listed 'Best First'.
Re^2: Processing ~1 Trillion records
by aossama (Acolyte) on Oct 25, 2012 at 12:58 UTC
    Actually the algorithm presented is the real one I am using. I don't have access to the data at the database level. I am profiling the script right now and checking the bottlenecks. I am also trying to use an intermediate temporary database as Jenda suggested, and looking into Redis as well.

      Okay... after explaining the query to see how the DBMS is actually approaching it, I would check for indexes and then consider doing a select distinct query to retrieve all of the unique keys.   Then, issue a query for each marker in turn, possibly splitting that work out among processes, threads, or machines.   In this way, each file can be completely finished and its in-memory data disposed of before the next request.
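
      One way to split that work among processes is Parallel::ForkManager from CPAN. A hedged sketch, where @$markers is the distinct-marker list from above, write_marker_file() is a hypothetical sub that runs the per-marker query and writes the CSV, and $dsn/$user/$pass stand in for your real connection details:

          use DBI;
          use Parallel::ForkManager;

          my $pm = Parallel::ForkManager->new(4);   # four workers; tune to taste

          for my $marker (@$markers) {
              $pm->start and next;                  # parent: go on to the next marker
              # child: open a fresh connection, since a DBI handle should not
              # be shared across a fork
              my $dbh = DBI->connect($dsn, $user, $pass, { RaiseError => 1 });
              write_marker_file($dbh, $marker);
              $dbh->disconnect;
              $pm->finish;
          }
          $pm->wait_all_children;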

      Seemingly innocuous calls such as keys can be surprisingly expensive, as can sort, when there is known to be a prodigious number of keys involved.   Hence, I would measure before doing serious recoding.
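
      A quick way to measure just that, without restructuring anything, is to time those operations in isolation; a sketch, where %results stands in for whatever hash the script accumulates:

          use Benchmark qw(timethis);

          # how much wall-clock time goes into merely walking and sorting the hash?
          timethis( 5, sub { my @k = sort keys %results; } );

          # or profile the whole script instead:  perl -d:NYTProf script.pl

      Benchmark is core Perl; Devel::NYTProf comes from CPAN.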

      “16 days” is such an extreme runtime ... and this is an intuition-based comment ... that there will almost certainly turn out to be “one bugaboo above all others,” such that it is the first place, and quite probably the only place, that will require your attention.

      Why don't you have access at the database level?
      my $dbh = DBI->connect("dbi:Oracle:host=server.domain.com; sid=sid; port=1521", "username", "password");
      This contains all the relevant details to connect via sqlplus or SQL Developer. I don't mean any disrespect, but extracting such a large number of rows and processing them by hand (or with any other external tool) can only be several orders of magnitude slower than doing it in SQL, or at least PL/SQL. Could you explain the requirement, even in abstract form, here or on Oracle's forum? I'm sure people would come up with suggestions for an improved query.