Re^4: copying mysql table data to oracle table

by chacham (Prior)
on Aug 25, 2017 at 15:01 UTC [id://1198020]


in reply to Re^3: copying mysql table data to oracle table
in thread copying mysql table data to oracle table

Everybody knows that when a transaction fails, you restart it from the beginning. You don't just drop your change on the floor. That would be silly.

The point is, there's nothing you can do about it, since you haven't locked the table or records.

So you're imagining that we can put the entire update in one big blob of SQL and run it without checking for any error messages? That seems... optimistic. But at least we can agree that using a perl-based cache "to speed things up" is a Bad Idea, right?

If the statement is an insert where not exists, there should be no errors, unless something unrelated (like insufficient space) crops up, in which case you do not want to handle the error automatically.
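For illustration, a minimal sketch of what such a statement could look like from Perl with DBI, assuming hypothetical emp tables and a database link named mysql_link already configured on the Oracle side (neither is given in the thread):

    #!/usr/bin/perl
    use strict;
    use warnings;
    use DBI;

    # Hypothetical DSN and credentials; adjust for your instance.
    my $dbh = DBI->connect( 'dbi:Oracle:ORCL', 'scott', 'tiger',
        { RaiseError => 1, AutoCommit => 0 } );

    # Insert only the rows not already present in the target table.
    # emp@mysql_link assumes a database link to the MySQL source
    # (e.g. via a gateway) has been set up.
    $dbh->do(q{
        INSERT INTO emp (empno, ename, sal)
        SELECT s.empno, s.ename, s.sal
          FROM emp@mysql_link s
         WHERE NOT EXISTS (SELECT 1 FROM emp t WHERE t.empno = s.empno)
    });
    $dbh->commit;
    $dbh->disconnect;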

The cache I referred to is the database cache, where tables are often kept for future statements.

If you hold too many row locks on a table, doesn't Oracle automatically upgrade it to a table lock? And isn't holding a table lock for the duration of the import operation potentially disruptive to a busy database? Maybe we need more context than the OP has provided.

If you lock a table with LOCK TABLE, or records with SELECT FOR UPDATE, IIRC, no one else can touch those records. During a single transaction, however, other users read consistent data from the undo, so it's a bit different.
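Roughly, the two locking styles mentioned look like this from DBI (table and column names hypothetical):

    # Exclusive table lock: blocks other writers until commit or rollback.
    $dbh->do('LOCK TABLE emp IN EXCLUSIVE MODE');

    # Row locks instead: only the selected rows are locked against writers.
    my $sth = $dbh->prepare(
        'SELECT empno, sal FROM emp WHERE deptno = ? FOR UPDATE' );
    $sth->execute(10);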

We should not need more context, since this is a pretty standard import operation.


Replies are listed 'Best First'.
Re^5: copying mysql table data to oracle table
by Anonymous Monk on Aug 25, 2017 at 17:28 UTC
    The point is, there's nothing you can do about it, since you haven't locked the table or records.
    I don't understand your statement at all. If your transaction fails, you go back to the beginning, meaning you start a new transaction, fetch the record again, and decide what to do about it. Maybe the intervening update obviated the need for your change, and maybe it didn't. But the point is, that's what you can do about failed transactions. You just retry them.
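    A retry loop in that spirit might look something like this sketch (DBI, with a hypothetical emp update and an arbitrary retry cap; assumes $dbh was opened with AutoCommit => 0 and RaiseError => 1):

        my $empno = 7369;    # example key
        my $tries = 0;
        my $done  = 0;
        until ( $done or $tries++ >= 5 ) {
            eval {
                # Start fresh: re-fetch the current row under a row lock...
                my ($sal) = $dbh->selectrow_array(
                    'SELECT sal FROM emp WHERE empno = ? FOR UPDATE',
                    undef, $empno );
                # ...then decide whether the change is still needed and apply it.
                $dbh->do( 'UPDATE emp SET sal = ? WHERE empno = ?',
                    undef, $sal * 1.1, $empno );
                $dbh->commit;
                $done = 1;
            };
            if ($@) { $dbh->rollback }    # failed transaction: retry from the top
        }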
    The cache I referred to is the database cache, where tables are often kept for future statements.
    No, I'm talking about the OP's original idea of starting off with a "select * from emp" and throwing the whole thing in a big perl hash for later reference. Bad Idea, right? Right?
    If the statement is insert where not exists, there should be no errors...
    There could be any sort of column constraint violation, like an out-of-range value or a missing foreign key. You at least want to be able to tell the user, "here are the records that didn't get updated."
    We should not need more context...
    The context we need is: can we just lock the table and prevent anybody else from making any updates until we're done, or is that going to cheese too many people off?

      If your transaction fails, you go back to the beginning, meaning you start a new transaction, fetch the record again, and decide what to do about it. Maybe the intervening update obviated the need for your change, and maybe it didn't. But the point is, that's what you can do about failed transactions. You just retry them.

      Imagine for a moment that the script was (accidentally) executed twice, simultaneously. Both scripts lock each other out, and assuming there is no deadlock, both will keep retrying forever. I am not sure what the benefit of an explicit transaction is in this case anyway.

      throwing the whole thing in a big perl hash for later reference. Bad Idea, right?

      Oh, I wasn't referring to that at all. It would only be a bad idea because it causes a data transfer that can be easily avoided.

      You at least want to be able to tell the user, "here are the records that didn't get updated."

      The case here is a one-time data migration. Exception handling for the purpose of human-readable messages is not worth the effort, and the where clause should include all constraint checks anyway. If you need to know which records were not inserted, use the appropriate clause in MERGE, or simply do a where not exists() after the operation is finished, to see what didn't make it (see the sketch below).
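      For reference, the MERGE alternative might be sketched like this (table and column names hypothetical; Oracle's DML error logging clause, LOG ERRORS, is one way to capture rejected rows instead):

        $dbh->do(q{
            MERGE INTO emp t
            USING (SELECT empno, ename, sal FROM emp@mysql_link) s
               ON (t.empno = s.empno)
             WHEN NOT MATCHED THEN
                  INSERT (empno, ename, sal)
                  VALUES (s.empno, s.ename, s.sal)
        });

        # Afterwards, anything that still didn't make it can be listed:
        my $missing = $dbh->selectall_arrayref(q{
            SELECT s.empno
              FROM emp@mysql_link s
             WHERE NOT EXISTS (SELECT 1 FROM emp t WHERE t.empno = s.empno)
        });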

      can we just lock the table and prevent anybody else from making any updates until we're done, or is that going to cheese too many people off?

      Locking a table is the easiest way, no question; for the brute-force, non-merge approach, anyway. But it is bothersome to others, whose queries will wait (unless they specify NOWAIT), and it keeps every record in the table locked until the update is done. Considering that a MERGE can be done, though, there is really no reason to lock anything.
