I agree, though I was less concerned with the behaviour of system utilities. I can always adapt those using aliases or bat files or whatever.
And I'm rarely in favour of "do you really want to do what you have just asked me to do" prompts, especially the accursed pop-up variety.
But when it came to Perl overriding the standard behaviour of a usually safe (on my OS) API to provide compatibility with a potentially destructive (and IMO, questionable) behaviour of a.n.other OS, let's just say it didn't comply with my idea of 'least surprise'.
Perhaps the criticisms of "who reads the documentation anyway" are valid here. I never read the docs for rename simply because I didn't think I needed to. I just cannot see the circumstance where rename failing because the target file existed would ever be a burden. If I know that the file might exist and that I want to overwrite it, then I just attempt to delete it first.
I realise that this would be non-atomic. That there is a chance in a multi-tasking system that another process could re-create the deleted file between the delete and the rename, and the rename would then again fail. But so what? I cannot conceive of any circumstance where the unix behaviour would be the "right thing" in this situation.
I'm not sure if the unix destructive rename is atomic at the syscall level or not, but there are two possibilities:
- The delete/rename is atomic.
If true, then once my application has renamed the file, the other application that was trying to create the file will either:
- Succeed and overwrite my newly renamed data.
- Fail, if it bothered to use a deny-share open mode.
- The delete/rename is not atomic.
If true, the other app could potentially re-create the deleted file prior to the rename. What then? Does the rename fail?
I realise that well-written apps that use sensible choices of share flags and/or file permissions can work around this, but it still seems a strange choice of default behaviour.