http://qs321.pair.com?node_id=1043872


in reply to Splitting up a filesystem into 'bite sized' chunks

Maybe I should adopt the principle of writing every single terse comment that I am prone to as a splendiferous, loquacious paragraph, or three, in a vain attempt to forestall the “down-vote demons.” I dunno. But, wrapped up in the terse comment “NFS is a monster” is a very valid point: NFS is a network file-system that does not (unlike, say, Microsoft’s famous system) pretend to be otherwise.

With NFS, filesystems can be unfathomably large and network transports can be slow, and NFS will still work. All that having been said, however, your (Perl-implemented) algorithms must match that reality. You must, for example, come up with a plausible strategy for “splitting up a filesystem into bite-sized chunks,” whatever that strategy might be, that assumes both that you cannot immediately ascertain how many files or directories are in any particular area of that filesystem, and that you cannot obtain such a count in a timely fashion. Instead of an algorithm, therefore, you are obliged to make use of a heuristic.
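
(To make that concrete, here is a minimal Perl sketch of one such heuristic: stream the tree with the core File::Find module and cut a chunk every N files, so that no up-front count of the filesystem is ever needed. The chunk size of 1000 and the print-only “hand-off” are illustrative assumptions, not a prescription.)

    #!/usr/bin/perl
    use strict;
    use warnings;
    use File::Find;

    my $CHUNK_SIZE = 1000;            # assumed threshold -- tune for your environment
    my $root       = shift @ARGV // '.';

    my @chunk;

    sub flush_chunk {
        return unless @chunk;
        # Hand the batch off to a worker here; this sketch just reports it.
        printf "chunk of %d files (first: %s)\n", scalar(@chunk), $chunk[0];
        @chunk = ();
    }

    # File::Find streams names to the callback one at a time, so we never
    # need a total count -- we simply cut a chunk whenever the running
    # tally reaches the threshold.
    find(sub {
        return unless -f;             # files only; directories are traversal, not payload
        push @chunk, $File::Find::name;
        flush_chunk() if @chunk >= $CHUNK_SIZE;
    }, $root);

    flush_chunk();                    # emit whatever is left over

Because the walk is incremental, memory stays bounded by one chunk no matter how unfathomably large the filesystem is, which is exactly the property the heuristic needs.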

Re^2: Splitting up a filesystem into 'bite sized' chunks
by Preceptor (Deacon) on Jul 12, 2013 at 22:23 UTC

    NFS does have its limitations. One of these is the transport layer: you can do 10G multi-channel Ethernet, but you still pay a price in connection latency. A large storage environment gives you a lot of spindles and controllers, but that only helps if you can go wide on your I/O.
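
    (In that vein, a hedged sketch of “going wide”: fan previously-cut chunk lists out across forked workers with Parallel::ForkManager, a CPAN module assumed to be installed, so that per-request NFS latency overlaps across the pool. The worker count of 8 and the stat-only loop are placeholder assumptions.)

        #!/usr/bin/perl
        use strict;
        use warnings;
        use Parallel::ForkManager;

        my $WORKERS = 8;                    # assumed pool size -- wide only pays off
        my $pm      = Parallel::ForkManager->new($WORKERS);   # while spindles sit idle

        # Each argument names a file listing one chunk of paths, as produced
        # by a chunking pass such as the one sketched above.
        for my $chunk_file (@ARGV) {
            $pm->start and next;            # parent forks a worker and moves on
            open my $fh, '<', $chunk_file or die "open $chunk_file: $!";
            while (my $path = <$fh>) {
                chomp $path;
                my @st = stat $path;        # placeholder per-file work
            }
            close $fh;
            $pm->finish;                    # worker exits
        }
        $pm->wait_all_children;             # reap every worker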