Re: Proof of concept: File::Index (consistency)
by BrowserUk (Patriarch) on May 12, 2006 at 20:03 UTC
Also, when the data file has been modified and has grown, check the consistency of the last dozen index entries. If they are consistent, index only the newly appended records rather than reindexing the whole file.
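The incremental scheme above can be sketched as follows (in Python for brevity, since the thread is about a Perl module; the function names and the representation of the index as a list of byte offsets of line starts are illustrative assumptions, not File::Index's actual API):

```python
def tail_is_consistent(path, offsets, check=12):
    """Check the last `check` index entries: each stored offset should
    sit immediately after a newline (or be offset 0)."""
    with open(path, "rb") as fh:
        for off in offsets[-check:]:
            if off == 0:
                continue
            fh.seek(off - 1)
            if fh.read(1) != b"\n":
                return False
    return True

def extend_index(path, offsets):
    """Append offsets for records added since the last indexing run,
    starting from the last already-indexed record."""
    with open(path, "rb") as fh:
        fh.seek(offsets[-1])
        fh.readline()            # skip the last record we already indexed
        pos = fh.tell()
        for line in fh:          # index only the newly appended records
            offsets.append(pos)
            pos += len(line)
    return offsets
```

If `tail_is_consistent` passes, `extend_index` does work proportional to the appended data only, rather than to the whole file.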
Actually, checking around log(n) (where n is the number of lines) randomly selected lines spread across the file for consistency is probably (probabilistically) sufficient to detect changes, given that any one-byte change in the length of any line will invalidate the offsets of all subsequent records until an exactly opposite change in length re-syncs them.
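A minimal sketch of that probabilistic check (again in Python, with the same assumed list-of-offsets representation; the choice of log2 and the boundary test are illustrative):

```python
import math
import random

def index_still_valid(path, offsets, rng=random):
    """Sample roughly log2(n) stored offsets at random and confirm each
    still lands on a record boundary (preceding byte is a newline)."""
    n = len(offsets)
    k = max(1, int(math.log2(n))) if n else 0
    sample = rng.sample(offsets, min(k, n))
    with open(path, "rb") as fh:
        for off in sample:
            if off == 0:
                continue
            fh.seek(off - 1)
            if fh.read(1) != b"\n":
                # a length change upstream has shifted this record
                return False
    return True
```

Because a single length change shifts every subsequent offset, any sampled offset downstream of the change detects it; sampling across the whole file makes an undetected change unlikely.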
Ignoring the possibility of deliberate tampering, do any of the maths wizards feel like calculating the odds of a randomly rewritten file consistently matching more than, say, log(n) randomly chosen index offsets from the previous contents?
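As a rough back-of-envelope, not the rigorous answer the question asks for: if line lengths are roughly independent with mean L, the chance that an arbitrary rewritten file happens to have a record boundary at one remembered offset is about 1/L, so matching k independently chosen offsets is about (1/L)^k. Under those (assumed) conditions:

```python
import math

def odds_of_coincidental_match(avg_line_len, n_lines):
    """Rough estimate: P(all ~log2(n) sampled offsets match by chance)
    is about (1/L)^k, treating the offsets as independent."""
    k = max(1, int(math.log2(n_lines)))
    return (1.0 / avg_line_len) ** k

# e.g. a million-line file with ~80-byte lines gives k = 19 samples,
# and odds on the order of 1e-36 -- negligible in practice.
```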
Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
Lingua non convalesco, consenesco et abolesco. ("In the language I do not grow stronger; I grow old and fade away.") -- Rule 1 has a caveat! -- Who broke the cabal?
"Science is about questioning the status quo. Questioning authority".
In the absence of evidence, opinion is indistinguishable from prejudice.