Re^6: avoiding a race (place snark here)
by tye (Sage) on Sep 29, 2010 at 20:53 UTC
And even in a full directory and on a loaded system, that time is going to be measured--assuming you can actually measure it at all--in low milliseconds at most.
No, not at all. Opening the file requires finding the file, which means traversing the (possibly long) directory contents yet again (and thus contending for all of the same mutexes again as well). With NTFS or a newer Linux file system (with the proper options enabled), the directory won't be stored as a simple flat list, so the performance probably isn't as easily pathological. A few months ago I again ran into a directory with way too many files in it, and it took many seconds, even minutes, to open a file (or to remove one). I haven't tried to replicate the problem on a more modern filesystem to see how well it scales, but I suspect there are plenty of file systems left in the world that were built without hash/tree directories.
And then only if the time-stamp resolution of the file-system is sufficient to actually discern the difference, which is unlikely.
And there you have your broken analysis, again. If X and Y fail to find 'file1' and then both create it, then the fact that the timestamp is not changed by whichever attempt is second has no bearing on the fact that both X and Y will then go on to send an e-mail. (Or, you can remove the race.)
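The way to remove the race is to stop separating "check whether file1 exists" from "create file1": ask the kernel to do both as one atomic operation with O_CREAT|O_EXCL, so exactly one of X and Y wins and only the winner sends the e-mail. Here is a minimal sketch in Python (in Perl the equivalent is sysopen with O_CREAT|O_EXCL from Fcntl); the function name try_create_exclusive and the file path are mine, for illustration:

```python
import os
import tempfile

def try_create_exclusive(path):
    """Atomically create `path`.

    Returns True if this process created the file, False if it already
    existed. O_CREAT|O_EXCL makes the existence check and the creation a
    single atomic operation in the kernel, so no other process can slip
    in between the check and the create.
    """
    try:
        fd = os.open(path, os.O_CREAT | os.O_EXCL | os.O_WRONLY, 0o644)
    except FileExistsError:
        return False
    os.close(fd)
    return True

# Whichever of X and Y gets here first wins; the other sees False
# and must not send the e-mail.
d = tempfile.mkdtemp()
marker = os.path.join(d, "file1")
print(try_create_exclusive(marker))  # True: we created it
print(try_create_exclusive(marker))  # False: it already existed
```

Note that this only works because the kernel serializes the two open() calls; comparing timestamps after the fact, as discussed above, can never recover which caller "won".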