PerlMonks
unique file ID
by baxy77bax (Deacon)
on Apr 30, 2013 at 10:59 UTC
baxy77bax has asked for the wisdom of the Perl Monks concerning the following question:
Hi, I was wondering if anyone has a better solution...

Problem: Say I have a program that writes some temporary information to disk (in a local directory) which is later removed. Now say I wish to run that program twice in the same directory, because that is more practical than creating a new directory and re-running the program there. I am under a Unix environment. If I run the program twice at the same time, my temporary files get overwritten and mixed up, which leads to bad computation, inconsistent results, scandal (for which I am to blame, since I didn't make the program "idiot-proof", though I state strictly in the instructions not to do this; but OK, it is my fault), and finally to threats concerning my job.

Solutions considered so far: One solution would be to generate unique file names using random characters. However, this does not guarantee that there will never be a collision, which means one first has to check whether such a file already exists before deciding whether the generated name is usable. A second solution would be to keep a registry file where all filenames connected to the program are stored. The program would look up the last name in that file (say it is a number), increment it to generate a new file name, record the new name in the file, use it, and delete the entry when done. But to make this "idiot-proof" I would not rely on this scheme either, since collisions can occur when multiple runs have different execution times and are not started simultaneously (I would probably still need to check whether such a file already exists, or something similar).

So my question is: does anyone have a better solution to this problem?

Cheers,
baxy
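One way to sidestep the check-then-create race entirely is the core module File::Temp: it creates the file atomically with O_CREAT|O_EXCL, so two concurrent runs can never be handed the same name, and no registry file is needed. A minimal sketch, assuming the scratch files may live in the current directory (the `myprog_` prefix is illustrative, not from the original post):

```perl
use strict;
use warnings;
use File::Temp qw(tempfile);

# tempfile() opens the file with O_CREAT|O_EXCL, so the returned name
# is guaranteed unique even across concurrent processes.
# UNLINK => 1 removes the file automatically at program exit.
my ($fh, $filename) = tempfile(
    "myprog_XXXXXX",   # template: the X's are replaced with random chars
    DIR    => '.',     # keep the scratch file in the local directory
    UNLINK => 1,
);

print $fh "temporary data\n";
print "working file: $filename\n";
```

Embedding the process ID (`$$`) in the name is a common lighter-weight trick, but File::Temp is preferable because the uniqueness is enforced by the atomic open, not by a separate existence check.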
Back to Seekers of Perl Wisdom