Okay... after running EXPLAIN PLAN on the query to see how the DBMS is actually approaching it, I would check for indexes and then run a SELECT DISTINCT query to retrieve all of the unique keys. Then, issue a query for each marker in turn, possibly splitting that work among processes, threads, or machines. This way, each file can be completely finished and its in-memory data discarded before the next request.
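The approach above can be sketched in DBI. This is a minimal sketch, not the original poster's code: the table and column names (big_table, marker, payload) and the sub name are assumptions for illustration.

```perl
use strict;
use warnings;

# Hypothetical sketch of the distinct-keys-then-per-key strategy.
# Table and column names (big_table, marker, payload) are assumptions.
sub process_by_marker {
    my ($dbh) = @_;

    # 1. Fetch the unique keys once.
    my $markers = $dbh->selectcol_arrayref(
        'SELECT DISTINCT marker FROM big_table'
    );

    # 2. Query each marker separately via a placeholder, so only one
    #    marker's rows are ever in memory at a time.
    my $sth = $dbh->prepare(
        'SELECT marker, payload FROM big_table WHERE marker = ?'
    );
    for my $marker (@$markers) {
        $sth->execute($marker);
        while ( my $row = $sth->fetchrow_arrayref ) {
            # ... write this marker's rows to its file ...
        }
        # This marker's data goes out of scope here and is freed
        # before the next request is issued.
    }
    return scalar @$markers;
}
```

The per-marker loop is also the natural seam for parallelism: hand disjoint slices of `@$markers` to separate worker processes, each with its own connection.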
Seemingly innocuous calls such as keys can be surprisingly expensive, as can sort, when a prodigious number of keys is known to be involved. Hence, I would measure before doing any serious recoding.
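Measuring is cheap with the core Benchmark module. A toy example, with an arbitrary 100,000-key hash standing in for the real data:

```perl
use strict;
use warnings;
use Benchmark qw(timethese);

# Build a large hash, then time keys() alone versus keys() plus sort().
# The 100_000-key size is an arbitrary stand-in for the real data set.
my %h = map { $_ => 1 } 1 .. 100_000;

timethese( 5, {
    keys_only => sub { my @k = keys %h; },
    keys_sort => sub { my @k = sort keys %h; },
} );
```

Comparing the two timings shows how much of the cost is the sort itself; if sorted output is not actually required, dropping the sort is a free win.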
“6 days” is such an extreme runtime that (and this is an intuition-based comment) there will almost certainly turn out to be “one bugaboo above all others,” such that this is the first, and quite probably the only, place that will require your attention.
Why don't you have access at the database level?
my $dbh = DBI->connect("dbi:Oracle:host=server.domain.com;sid=sid;port=1521", "username", "password")
This contains all the relevant details to connect via sqlplus or SQL Developer. I don't mean any disrespect, but extracting such a large number of rows and processing them by hand (or with any other tool) can only be several orders of magnitude slower than doing it in SQL, or at least PL/SQL. Could you explain the requirement, even in abstract form, here or on Oracle's forums? I'm sure people would come up with suggestions for an improved query.
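For example, if the per-key processing reduces to an aggregate, one GROUP BY in SQL replaces millions of fetched rows. This is a hedged sketch, not the poster's actual requirement: the table and column names (big_table, marker, payload) are assumptions.

```perl
use strict;
use warnings;

# Hypothetical: push the per-key work into Oracle instead of Perl.
# Table and column names (big_table, marker, payload) are assumptions.
sub summarize_in_sql {
    my ($dbh) = @_;
    # One round trip returns one summary row per marker, instead of
    # every underlying row being fetched and reduced client-side.
    return $dbh->selectall_arrayref(q{
        SELECT   marker, COUNT(*) AS n, MIN(payload), MAX(payload)
        FROM     big_table
        GROUP BY marker
    });
}
```

Even when the real processing is more involved, PL/SQL can often keep it server-side; only the final, much smaller result needs to cross the network.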