PerlMonks
Re^2: Multiple write locking for BerkeleyDB

by dino (Sexton)
on Apr 23, 2008 at 20:47 UTC ( [id://682494] )


in reply to Re: Multiple write locking for BerkeleyDB
in thread Multiple write locking for BerkeleyDB

Well, I used DBI and a syntax similar to the above ("INSERT ... ON DUPLICATE KEY UPDATE") to deal with non-existent keys (in this case, IPs).
I'm a pretty inexperienced SQLer, so it's very possible that it's not optimal, but the rate was very poor (30k/min) compared to the Berkeley alternatives.
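For the record, the statement dino describes is MySQL's INSERT ... ON DUPLICATE KEY UPDATE. Here is a minimal, self-contained sketch of the same idea using Python's bundled sqlite3 (whose ON CONFLICT ... DO UPDATE upsert is the closest SQLite analogue), with table and column names borrowed from the table format given later in the thread; the MySQL original is shown in a comment:

```python
import sqlite3

# In-memory stand-in for the MySQL counters table discussed in the thread.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE counters (ip TEXT PRIMARY KEY, count_in INTEGER)")

def bump(ip):
    # MySQL equivalent:
    #   INSERT INTO counters (ip, count_in) VALUES (?, 1)
    #   ON DUPLICATE KEY UPDATE count_in = count_in + 1
    conn.execute(
        "INSERT INTO counters (ip, count_in) VALUES (?, 1) "
        "ON CONFLICT(ip) DO UPDATE SET count_in = count_in + 1",
        (ip,),
    )

bump("10.0.0.1")  # key absent: the INSERT branch runs, count_in = 1
bump("10.0.0.1")  # key present: the UPDATE branch runs, count_in = 2
print(conn.execute(
    "SELECT count_in FROM counters WHERE ip = ?", ("10.0.0.1",)
).fetchone()[0])  # prints 2
```

One round trip per packet either way; the question in the thread is whether the server can keep up with the write rate, not whether the statement works.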

Replies are listed 'Best First'.
Re^3: Multiple write locking for BerkeleyDB
by samtregar (Abbot) on Apr 23, 2008 at 21:07 UTC
    Try pre-loading your counters with 0s and just using UPDATE. I bet it's faster than INSERT ... ON DUPLICATE KEY UPDATE. Assuming you have reasonable hardware, 30k/min (500/s) seems pretty poor. But I guess it depends on how many counters there are and how many concurrent connections are trying to write at once.

    -sam

      Thanks for your input. I'm not sure I can pre-load the counters with zeros, as I don't know what the keys will be until I read the tcpdump input. The table format is currently:
      ip|count_in|count_out|count_cross
      But this is starting to get off topic, and I should read up more on MySQL. (I had hoped that there was a guide out there that talked about how to drive BerkeleyDB in full "let's do manual locking in Perl" mode, and that my search-fu was lacking.)
        Try it for your benchmarks. If it turns out to be significantly faster, you can probably figure out a protocol that lets you use it nearly all the time, falling back to something slower only when you need to. For example, if you use the ip as the primary key and rows are never deleted, you can:
        1. Try the update. If one row is affected, you are done.
        2. Otherwise, the row does not exist. Try an insert. If it succeeds, you are done.
        3. Otherwise, if it failed with a duplicate key, somebody else just inserted it. Retry the update.
        4. If the update fails again, something is wrong, so give up.
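The four steps above might be sketched as follows; a minimal illustration using Python's bundled sqlite3 in place of MySQL/DBI (the `bump` function name is made up, and the table/column names are taken from the format given earlier in the thread):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE counters (ip TEXT PRIMARY KEY, count_in INTEGER)")

def bump(ip):
    # Step 1: try the UPDATE; rowcount tells us whether a row was touched.
    cur = conn.execute(
        "UPDATE counters SET count_in = count_in + 1 WHERE ip = ?", (ip,))
    if cur.rowcount == 1:
        return
    # Step 2: the row does not exist yet; try to INSERT it.
    try:
        conn.execute(
            "INSERT INTO counters (ip, count_in) VALUES (?, 1)", (ip,))
        return
    except sqlite3.IntegrityError:
        # Step 3: a duplicate-key failure means someone else inserted the row
        # between our UPDATE and INSERT, so retry the UPDATE.
        cur = conn.execute(
            "UPDATE counters SET count_in = count_in + 1 WHERE ip = ?", (ip,))
        if cur.rowcount != 1:
            # Step 4: still no row affected; something is genuinely wrong.
            raise RuntimeError("could not upsert counter for " + ip)

bump("10.0.0.1")  # first call falls through to the INSERT path
bump("10.0.0.1")  # second call succeeds on the plain UPDATE
```

Since rows are never deleted, the INSERT path runs at most once per key, so in steady state nearly every packet costs a single UPDATE.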
Re^3: Multiple write locking for BerkeleyDB
by perrin (Chancellor) on Apr 23, 2008 at 20:53 UTC
    For simple things like this, using local sockets with MySQL makes a big difference. It will never be as fast as BerkeleyDB, but it should be very fast.