DBI/MySQL/MariaDB clone tables across DB Connections

by LanX (Saint)
on Feb 15, 2023 at 11:01 UTC [id://11150394]

LanX has asked for the wisdom of the Perl Monks concerning the following question:

Hi

I'm doing complex insertion operations of hierarchical data between tables.

This works quite well, as long as the tables are available within the same DB connection.

Now I have the requirement to support these operations across different MariaDB servers with the same schema.

I already have a solution using an intermediary temporary table; I could:

  • SELECT * all -> Perl memory
  • switch connection
  • create a new temp table
  • Perl memory -> INSERT INTO
  • continue with the temp_transfer table
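
Roughly like this untested sketch ($dbh1/$dbh2 are already-connected DBI handles; the table and column names are placeholders for illustration):

    # Untested sketch of the steps above; names are placeholders.
    my $rows = $dbh1->selectall_arrayref(
        'SELECT col_a, col_b FROM temp_transfer'
    );                                             # SELECT * -> Perl memory

    # switch connection, create the temp table on the target
    $dbh2->do('CREATE TEMPORARY TABLE temp_transfer (col_a INT, col_b VARCHAR(255))');

    my $ins = $dbh2->prepare(
        'INSERT INTO temp_transfer (col_a, col_b) VALUES (?, ?)'
    );
    $ins->execute(@$_) for @$rows;                 # Perl memory -> INSERT INTO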
Before reinventing the wheel: is there a more generic way to clone a table across connections?

FWIW: dumping to a file is not an option; the transferred data is relatively small and I prefer the flexibility of having it in memory.

Cheers Rolf
(addicted to the 𐍀𐌴𐍂𐌻 Programming Language :)
Wikisyntax for the Monastery

Re: DBI/MySQL/MariaDB clone tables across DB Connections
by Corion (Patriarch) on Feb 15, 2023 at 11:40 UTC

    You could use the execute_for_fetch method of DBI, but I would go the route of dumping things to files and then importing from those files, since that allows better granularity in the sense that either a table is imported completely or not at all.
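
    A rough, untested sketch of the execute_for_fetch approach (handles, table and column names are placeholders):

        # $dbh_src/$dbh_dst are connected DBI handles; names are made up.
        my $src = $dbh_src->prepare('SELECT col_a, col_b FROM temp_transfer');
        $src->execute;

        my $dst = $dbh_dst->prepare(
            'INSERT INTO temp_transfer (col_a, col_b) VALUES (?, ?)'
        );

        my @tuple_status;
        # execute_for_fetch pulls one source row per call until the sub returns undef
        $dst->execute_for_fetch( sub { $src->fetchrow_arrayref }, \@tuple_status );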

      Thanks, I'll look into it. :)

      execute_for_fetch and execute_array were new to me...

      > I would go the route of dumping things to files and then importing from files

      It's a multi-user web application where users copy relatively small amounts of config data across servers.

      Those are "transported" in temp tables which can be easily checked and reproduced.

      So I'd rather avoid managing temp files per user on the web-server...

      Cheers Rolf
      (addicted to the 𐍀𐌴𐍂𐌻 Programming Language :)
      Wikisyntax for the Monastery

        I would go the route of dumping things to files and then importing from files

        This is the way I would usually go... but be aware that it is a great pain if you are using MariaDB System Versioned tables.

        mysqldump will dump the current data but not the historic data and recreating the historic data is a pain as I have found to my cost.

        I made a bad design decision to use 'email' as a primary key on a database where users can have multiple email addresses stored. I don't know what possessed me not to use a system-generated key, but I did. I only realised once there was data in the database - both current and historic. I could not find a way to add a field with AUTO_INCREMENT, so I ended up copying the current data to a new table, dropping the offending table, rebuilding it with a generated primary key and then reimporting the data. But, being system versioned, I lost all the historic data.

        This wasn't the end of the world as it was all test data. Recently, the production database has been built but not before carefully checking every table for similar traps. I found one - using phone numbers as a primary key!

        It should be possible to restore historical data to system versioned tables by setting the @@timestamp variable but I've not tried it yet. Once I have the production system up and working (launch date is next Monday), I will test writing a backup and restore utility that plays with @@timestamp to properly restore system versioned tables.
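
        Something along these lines is what I have in mind - an untested DBI sketch, with the table, columns and values made up for illustration:

            # Untested: replay one historic row with its original timestamp.
            $dbh->do(q{SET @@timestamp = UNIX_TIMESTAMP('2022-06-01 09:00:00')});
            $dbh->do(q{INSERT INTO versioned_table (id, email) VALUES (?, ?)},
                     undef, 42, 'old@example.com');
            $dbh->do(q{SET @@timestamp = DEFAULT});   # back to the normal clock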

      For the record:

      > I would go the route of dumping things to files and then importing from files,

      I just had to go the execute_for_fetch route for yet another project, and it's surprisingly fast, thanks for that! :)

      But ...

      > since that allows better granularity in the sense that either a table is imported completely or not at all.

      When dealing with MySQL or MariaDB this depends on the storage engine used, since execute_for_fetch returns an array of per-tuple statuses, including potential errors ...

      On InnoDB I can then decide to either ->commit or ->rollback and this works fine.

      But on MyISAM transactions and rollbacks are not supported.

      There I have to go the route of using intermediate temporary tables to be safe(r).
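
      A reduced, untested sketch of what I do on InnoDB ($src/$dst prepared as in the parent's sketch):

          my @tuple_status;
          $dst->execute_for_fetch( sub { $src->fetchrow_arrayref }, \@tuple_status );

          # DBI puts an arrayref [ $err, $errstr, $state ] in the slot of each failed tuple
          if ( grep { ref } @tuple_status ) {
              $dbh_dst->rollback;   # only effective on InnoDB
          }
          else {
              $dbh_dst->commit;
          }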

      Cheers Rolf
      (addicted to the 𐍀𐌴𐍂𐌻 Programming Language :)
      Wikisyntax for the Monastery

        But on MyISAM transactions and rollbacks are not supported

        They are also quite old and can be unstable. I never did find out why, but I had some intermittent data loss from MyISAM, so I am now using Aria tables instead, or InnoDB if I really need to enforce referential integrity (which is rare).

        The only downside I have found with the Aria storage engine is that MySQL Workbench doesn't recognise it, so it can't generate forward-engineered code. But that has the positive of improving my DDL familiarity, which had got very rusty!

Re: DBI/MySQL/MariaDB clone tables across DB Connections
by NERDVANA (Deacon) on Feb 17, 2023 at 06:11 UTC
    If I were doing this with DBIC, it would be as simple as making two connections, and then:
        my $txn = $db2->txn_scope_guard;
        $db2->resultset($table)->populate([ $db1->resultset($table)->hri->all ]);
        $txn->commit;
    assuming you can disable the foreign key checks on the temporary table. If you can't, you'd have to sort the rows so parents come before children, and then that gets more complicated if there is an auto-increment ID column.
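
    For what it's worth, on MySQL/MariaDB the FK checks can be switched off per session; an untested sketch via the underlying handle:

        # Session-only toggle; affects just this connection.
        $db2->storage->dbh_do(sub {
            my ($storage, $dbh) = @_;
            $dbh->do('SET FOREIGN_KEY_CHECKS = 0');
        });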
