Re^2: When to use forks, when to use threads ...?

by Krambambuli (Curate)
on Sep 05, 2008 at 08:03 UTC ( [id://709194] )


in reply to Re: When to use forks, when to use threads ...?
in thread When to use forks, when to use threads ...?

Care to expand on that?

Well, I'm not sure how to do that efficiently; it's a rather convoluted story. My baseline so far is that, for now and the near future, I will avoid trying to use threads to parallelize conditional insert-or-updates into Oracle.

The base problem is something like: 'Given a (rather big) amount of new data, add it to the existing tables that hold such data, inserting new records or updating where similar records are already there. Speed the process up so that it gets done as fast as possible.'

What I seem to have found so far is that Oracle's libclntsh.so, in conjunction with Perl threads, will lose 4 or 8 bytes on every thread switch. Which thread to use depends on the input record.
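
For reference, the conditional insert-or-update itself maps onto a single Oracle MERGE statement when driven from one connection in one thread. A minimal sketch, assuming a hypothetical table items(id, payload) (the names are illustrative, not the real schema):

    #!/usr/bin/perl
    use strict;
    use warnings;
    use DBI;

    # One connection, used from one thread only -- the pattern that
    # sidesteps the libclntsh.so thread issues entirely.
    my $dbh = DBI->connect( 'dbi:Oracle:MYDB', 'user', 'pass',
                            { RaiseError => 1, AutoCommit => 0 } );

    # Oracle's MERGE does the conditional insert-or-update in a
    # single statement per record.
    my $sth = $dbh->prepare( q{
        MERGE INTO items t
        USING (SELECT ? AS id, ? AS payload FROM dual) s
           ON (t.id = s.id)
        WHEN MATCHED THEN UPDATE SET t.payload = s.payload
        WHEN NOT MATCHED THEN INSERT (id, payload)
                              VALUES (s.id, s.payload)
    } );

    my @records;    # the new data, loaded elsewhere (elided here)
    for my $rec (@records) {
        $sth->execute( $rec->{id}, $rec->{payload} );
    }
    $dbh->commit;
    $dbh->disconnect;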


Krambambuli
---

Re^3: When to use forks, when to use threads ...?
by BrowserUk (Patriarch) on Sep 05, 2008 at 08:53 UTC

    For your specific problem I wouldn't use threads either.

    1. Whilst using DBI from a single thread (within a possibly multithreaded process) will not cause problems, using DB vendor C libraries from multiple threads is fraught with danger--regardless of whether you do so from Perl via DBI or from your own C program. Some DB vendor C libraries are not themselves thread-safe because they variously:
      • use process ids as keys to internal structures.

        You can imagine the problems this will cause if you try to run two or more threads concurrently accessing the same DB from the same process.

      • Allocate and deallocate prodigious amounts of heap memory--for the transport of the data--which is allocated by the client program and freed by the vendor library, or vice versa.

        Unless both the client program and the vendor libraries use the same underlying (C runtime) memory-management libraries--and those memory-management routines are thread-safe--memory leaks can occur.

        It is easy to see how problems can result: the client program allocates heap memory to hold the data it hands to the DB, and the DB vendor libraries free that memory once they've dispatched the data to the DB via a socket or pipe. Unless both the client program and the vendor libraries are built against exactly the same version of the underlying C runtime, that cross-boundary allocate/free is unsafe.

        E.g. if the client libraries are statically linked against (say) GCC CRT v2.9x but your client program (perl) is statically linked against GCC CRT v3.x, problems can arise. The same goes for MSVCRT7 versus MSVCRT8, for example.

    2. More importantly, multiprocessing large-volume inserts into a single DB will quite likely slow things down, regardless of whether you are using forks or threads!

      Think about what is happening at the DB server when you have multiple clients doing concurrent inserts or updates to the same tables and indexes. Regardless of what mechanisms the DB uses for locking or synchronisation, there is bound to be contention between the work being done by the server threads on behalf of those concurrent clients.

      And if you have indexes and foreign keys etc., then those contentions compound rapidly. Add transactions into the mix and things get much slower very fast.

      For mass updates, using the vendor's bulk-insertion tool from a single process--preferably on the same box as the DB, and via named pipes rather than sockets where available--will always win hands down over trying to multiprocess the same updates. Always.

      For best speed, lock the table for the duration of the insert. If possible, drop all the indexes, perform the insertion and then re-build them.

      If dropping the indexes is not possible (as you've mentioned elsewhere), then consider inserting the data into a non-indexed auxiliary table first, and then using an SQL statement that runs wholly internal to the DB to apply the updates to the main table from that auxiliary table; again, lock both for the duration. (A sketch of this follows the list.)

      Finally, bend the ear of, or employ, a good (read: expensive) DBA to set up your bulk insertion and update processing for you. A few days of a good DBA's time spent setting you up properly can save you money over and over again. There is no substitute for the skills of a good DBA. Pick one with at least 5 years of experience with the specific RDBMS you are using; more than in most other programming fields, vendor-specific knowledge is of prime importance for a DBA.
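
    A minimal sketch of that staging-table approach, assuming a hypothetical main table items(id, payload) and a same-shaped, non-indexed staging table items_stage (all names illustrative). The load uses DBI's execute_array batching, and the merge then runs wholly inside the DB:

        #!/usr/bin/perl
        use strict;
        use warnings;
        use DBI;

        my $dbh = DBI->connect( 'dbi:Oracle:MYDB', 'user', 'pass',
                                { RaiseError => 1, AutoCommit => 0 } );

        # 1) Bulk-load into the non-indexed staging table; execute_array
        #    ships the rows in batches rather than one round trip per row.
        my (@ids, @payloads);   # filled from your input data, elided here
        my $ins = $dbh->prepare(
            'INSERT INTO items_stage (id, payload) VALUES (?, ?)' );
        $ins->execute_array( {}, \@ids, \@payloads );

        # 2) Lock the main table for the duration, as suggested above.
        $dbh->do('LOCK TABLE items IN EXCLUSIVE MODE');

        # 3) One set-based statement, internal to the DB, applies the
        #    conditional insert-or-update against the main table.
        $dbh->do( q{
            MERGE INTO items t
            USING items_stage s
               ON (t.id = s.id)
            WHEN MATCHED THEN UPDATE SET t.payload = s.payload
            WHEN NOT MATCHED THEN INSERT (id, payload)
                                  VALUES (s.id, s.payload)
        } );
        $dbh->commit;

        # TRUNCATE is DDL in Oracle and commits implicitly, so do it last.
        $dbh->do('TRUNCATE TABLE items_stage');
        $dbh->disconnect;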


    Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
    "Science is about questioning the status quo. Questioning authority".
    In the absence of evidence, opinion is indistinguishable from prejudice.
