PerlMonks |
Re: Re: Sharing database handle
by graff (Chancellor) on Apr 20, 2002 at 00:40 UTC ( [id://160713] )
Another option may be for the child processes not to write to the database at all (especially in the logging case) and for the parent to dump the data into the db at regular intervals.
Well, this may be an option, but it's probably not a good idea. You'd hate to have a bunch of inserts or updates queued in the parent and then have that process go down before it can finish the queue, or to find that the connection had dropped for whatever reason since the last time a queue was processed.

Has anyone (esp. Marcello) seen data on how much a bunch of child connections costs in terms of load? My own (anecdotal, not systematic) experience has been that the number of connections matters less than what those connections are doing. If the SQL demands of any one process are heavy, having more than a couple going at once will hurt no matter how you arrange it; if it's easy stuff, then the number of concurrent requests is not much of an issue.
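To make the per-child-connection design concrete, here is a minimal sketch using Python's sqlite3 and os.fork() as a stand-in for the Perl/DBI setup under discussion (the table and column names are made up for illustration). Each forked child opens its own handle and commits its rows immediately, so nothing sits queued in the parent where a crash could lose it.

```python
import os
import sqlite3
import tempfile

def child_worker(db_path, worker_id, rows=10):
    # Each child connects for itself: a handle inherited across fork()
    # must not be used by both parent and child.
    conn = sqlite3.connect(db_path, timeout=30)
    for i in range(rows):
        conn.execute(
            "INSERT INTO log_entries (worker, seq) VALUES (?, ?)",
            (worker_id, i),
        )
        conn.commit()  # commit per row, as a logger would
    conn.close()

def run_demo(db_path, n_children=4):
    # Parent creates the table, then closes its handle BEFORE forking.
    conn = sqlite3.connect(db_path)
    conn.execute("CREATE TABLE log_entries (worker INTEGER, seq INTEGER)")
    conn.commit()
    conn.close()

    pids = []
    for w in range(n_children):
        pid = os.fork()
        if pid == 0:            # child: write rows, then exit
            child_worker(db_path, w)
            os._exit(0)
        pids.append(pid)        # parent: remember child pids
    for pid in pids:
        os.waitpid(pid, 0)      # reap all children

    conn = sqlite3.connect(db_path)
    (count,) = conn.execute("SELECT COUNT(*) FROM log_entries").fetchone()
    conn.close()
    return count

if __name__ == "__main__":
    with tempfile.TemporaryDirectory() as d:
        print(run_demo(os.path.join(d, "demo.db")))  # prints 40
```

Note the ordering: the parent closes its own connection before forking, and each child opens a fresh one afterwards. The same discipline applies with DBI, where a $dbh inherited across fork must not be shared between parent and child.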
In Section: Seekers of Perl Wisdom