Thanks for your tip. I'm not sure I can preload the counters with zero, as I don't know what the keys will be until I read the tcpdump input. The table format is currently:
ip|count_in|count_out|count_cross
But this is starting to get off topic, and I should read up more on MySQL.
(I had hoped that there was a guide out there that talked about how to drive BerkeleyDB in full "let's do manual locking in Perl" mode, and that my search fu was lacking.)
Try it for your benchmarks. If it turns out to be significantly faster, you can probably figure out a protocol that lets you use it nearly all the time, falling back to something slower only when you need to. For example, if you use the IP as the primary key and rows are never deleted, you can:
- Try the update. If one row is affected, you are done.
- Otherwise, the row does not exist. Try an insert. If it succeeds, you are done.
- Otherwise, if it failed with a duplicate-key error, somebody else just inserted it. Retry the update.
- If the update fails again, something is wrong, so give up.
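The four steps above can be sketched as follows. This is a minimal illustration, not the poster's actual code: it uses Python's stdlib sqlite3 in place of MySQL/DBI so it runs self-contained, and the table name `counters` and function name `bump` are made up. The table layout follows the ip|count_in|count_out|count_cross format from the first post, with `ip` as the primary key.

```python
import sqlite3

# Stand-in for the MySQL table; ip is the primary key, so a second
# INSERT for the same ip raises a duplicate-key (IntegrityError) error.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE counters ("
    "  ip TEXT PRIMARY KEY,"
    "  count_in INTEGER NOT NULL,"
    "  count_out INTEGER NOT NULL,"
    "  count_cross INTEGER NOT NULL)")

def bump(conn, ip, column):
    # Whitelist the column name, since it cannot be a placeholder.
    assert column in ("count_in", "count_out", "count_cross")
    update = f"UPDATE counters SET {column} = {column} + 1 WHERE ip = ?"
    # 1. Try the update. If one row is affected, we are done.
    if conn.execute(update, (ip,)).rowcount == 1:
        return
    # 2. The row does not exist: try an insert with this counter at 1.
    counts = {"count_in": 0, "count_out": 0, "count_cross": 0, column: 1}
    try:
        conn.execute(
            "INSERT INTO counters VALUES (?, ?, ?, ?)",
            (ip, counts["count_in"], counts["count_out"], counts["count_cross"]))
        return  # insert succeeded: done
    except sqlite3.IntegrityError:
        pass  # 3. Duplicate key: somebody else just inserted it.
    # Retry the update; 4. if it fails again, something is wrong, give up.
    if conn.execute(update, (ip,)).rowcount != 1:
        raise RuntimeError(f"cannot update counter for {ip}")

bump(conn, "10.0.0.1", "count_in")
bump(conn, "10.0.0.1", "count_in")
bump(conn, "10.0.0.2", "count_out")
```

Note that rows are only ever inserted, never deleted, which is what makes the retry safe: once the duplicate-key error tells you the row exists, the second update can only fail if something is genuinely wrong.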
I did a second run with just updates, using placeholders and separate tables for in/out/cross. The rate went up, but only to about 45k/min. The server is running other MySQL jobs, so that might be the reason. Thanks for your suggestions anyway.