Sync'd writes (and write-through) can be slow because they bypass the write-back cache, but so are most journalled file-systems.
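For illustration, the usual way to get a sync'd write from user code is to force the data out of the OS write-back cache yourself. A minimal Python sketch (the function name is illustrative); the durability is exactly what costs the time:

```python
import os

def durable_write(path, data):
    """Write data and force it to stable storage before returning.

    The fsync defeats the OS write-back cache: the call blocks until
    the disk acknowledges the data, which is why sync'd writes are
    slow compared to ordinary buffered writes.
    """
    with open(path, 'wb') as f:
        f.write(data)
        f.flush()                # push Python's buffer to the OS
        os.fsync(f.fileno())     # push the OS cache to the disk
```

A journalling file-system pays a similar price on every metadata update, which is why the comparison above holds.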
When part of what you are trying to 'commit' has nothing to do with the database (i.e., transitioning a server to a new state), then you are still not atomic, as in BrowserUk's example.
Agreed. That example only works if repeating the performed-but-unlogged command is effectively a no-op.
Mind you, breaking processing up into steps such that any given step can be repeated two or more times without affecting the overall result is something of a black art in itself. The basic steps are:

a) Don't discard the source data for a given step until the output data for that step has been successfully processed by the next step.

b) Discard any source data for this step that is 'incomplete'. Sentinel values are useful for this.

c) Once the input data for this step--i.e. the output of the previous step--has been successfully processed, delete the associated input data to the previous step.

Of course, in critical systems, 'delete' is probably spelt 'move to archive'.
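The steps above can be sketched as a file-based pipeline stage, where a `.part` suffix serves as the incompleteness sentinel. This is only one way to apply the recipe; the directory layout and function names here are my own illustration, not anything from the thread:

```python
import os
import shutil

def run_step(in_dir, out_dir, archive_dir, transform):
    """Process every complete file in in_dir into out_dir.

    Re-runnable: crashing and restarting redoes only unfinished work,
    because inputs survive until their outputs are safely in place.
    """
    os.makedirs(out_dir, exist_ok=True)
    os.makedirs(archive_dir, exist_ok=True)
    for name in sorted(os.listdir(in_dir)):
        # b) discard (here: skip) incomplete input -- the producer only
        #    renames '<name>.part' to '<name>' once the file is complete,
        #    so a lingering '.part' name is the incompleteness sentinel.
        if name.endswith('.part'):
            continue
        src = os.path.join(in_dir, name)
        dst = os.path.join(out_dir, name)
        # a) write this step's output under a sentinel name first; the
        #    source data is untouched until the output is durable.
        tmp = dst + '.part'
        with open(src) as f_in, open(tmp, 'w') as f_out:
            f_out.write(transform(f_in.read()))
            f_out.flush()
            os.fsync(f_out.fileno())
        os.replace(tmp, dst)  # atomic rename on POSIX
        # c) only now retire this step's input -- and in a critical
        #    system, 'delete' is spelt 'move to archive'.
        shutil.move(src, os.path.join(archive_dir, name))
```

Running the step twice is harmless: the second pass finds an empty input directory and does nothing, which is the repeat-without-effect property being described.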
Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
"Science is about questioning the status quo. Questioning authority".
In the absence of evidence, opinion is indistinguishable from prejudice.