Re^3: Perl Program to efficiently process 500000 small files in a Directory (AIX)
by BrowserUk (Pope) on Mar 17, 2018 at 22:24 UTC
Then multi-(threading/processing) your problem will not help (much). Adding contention between reading and writing will probably slow things down.
An alternative strategy: separate the reading and writing.
The first pass reads the files, extracts the relevant field, and constructs a hash mapping each original path/filename to its new path/filename.
The second pass reads the filenames again using opendir. That should give you the filenames in whatever order the filesystem considers its native ordering: that might be alphabetically sorted, or it might be ordered by creation date. Whatever it is, it should be the fastest way to access the on-disk directory structure.
Rationale: separating the reading and writing removes contention at the hardware level; renaming in the same order the OS gives you the names reduces inode/FAT32/HPFS cache misses.
Moving (renaming) a file does not cause any (file) data to be duplicated; it is simply a change to a field within the filesystem's directory structure. Making that change in the same order the filesystem gives you the names ensures that the modification is made immediately after the inode (or equivalent) is read, and therefore is still in cache, saving a re-read/cache miss; that should be the fastest approach. The filesystem's LRU cache is optimised for this case.
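A minimal sketch of the two-pass approach. The directory name (demo_files), the sample-file setup, the field-extraction regex, and the new-name scheme are all assumptions for illustration; substitute whatever your real files look like.

```perl
use strict;
use warnings;
use File::Path qw(make_path);

my $dir = 'demo_files';    # hypothetical directory for this sketch
make_path($dir);

# Create a few sample "small files" so the sketch is runnable as-is.
for my $i ( 1 .. 3 ) {
    open my $fh, '>', "$dir/file$i.txt" or die "open: $!";
    print {$fh} "Field: id$i\n";    # assumed file format
    close $fh;
}

# Pass 1: read each file, extract the relevant field, and build a hash
# mapping old path to new path. No writes to the directory happen here.
my %rename;
opendir my $dh, $dir or die "opendir $dir: $!";
while ( defined( my $name = readdir $dh ) ) {
    next unless -f "$dir/$name";
    open my $fh, '<', "$dir/$name" or die "open $dir/$name: $!";
    local $/;                                     # slurp; files are small
    my ($field) = <$fh> =~ /^Field:\s*(\S+)/m;    # assumed field layout
    close $fh;
    $rename{"$dir/$name"} = "$dir/$field-$name" if defined $field;
}
closedir $dh;

# Pass 2: walk the directory again in the filesystem's native order and
# rename each file. rename() only rewrites the directory entry; no file
# data is copied.
opendir $dh, $dir or die "opendir $dir: $!";
while ( defined( my $name = readdir $dh ) ) {
    my $old = "$dir/$name";
    next unless exists $rename{$old};
    rename $old, $rename{$old} or warn "rename $old: $!";
}
closedir $dh;
```

Keeping the two phases apart also makes it easy to dry-run: print the contents of %rename between the passes and eyeball the mapping before committing to any renames.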
With the rise and rise of 'Social' network sites: 'Computers are making people easier to use everyday'
Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
"Science is about questioning the status quo. Questioning authority". The enemy of (IT) success is complexity.
In the absence of evidence, opinion is indistinguishable from prejudice. Suck that fhit