Sorry, but I don't see a clear explanation yet. Yes, I understand that (in the given example) for slurps the whole file while while reads line-by-line; that much is clear. But why should while necessarily be faster?
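For reference, here's a minimal sketch of the two idioms under discussion (the filename is just a placeholder):

```perl
use strict;
use warnings;

my $file = 'input.txt';    # placeholder filename

# for evaluates <$fh> in list context: every line is read into a
# temporary list before the loop body runs even once.
open my $fh, '<', $file or die "open: $!";
for my $line (<$fh>) {
    print $line;
}
close $fh;

# while evaluates <$fh> in scalar context: one line is read per
# iteration, so only a single line is held in memory at a time.
open $fh, '<', $file or die "open: $!";
while ( my $line = <$fh> ) {
    print $line;
}
close $fh;
```

The memory behavior differs for certain; whether the speed does is exactly the open question.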
I smell the possibility that certain files or certain hardware could run faster with for. That may not actually be the case; I'm only saying that, a priori, there's nothing in slurp vs. readline to convince me that while must be faster. I can easily picture a situation in which individual file accesses are slower, due to contention for the disk, a stingy cache, or something else. It looks as though steve is pointing that way.
Rather than start an argument, though, I'd just like to say that it would be very nice to see some actual benchmarking with various inputs: large files with short lines, large files with long lines, short files, odd-shaped files. I don't have the software-testing experience to write the script, but if someone capable were to offer one, I'd run it and post the results. If a few different Monks did this on different platforms, particularly under various other loads, we'd have an objective basis for these claims.
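To get the ball rolling, here's a rough sketch of such a harness using the core Benchmark module. The line count and line length are arbitrary knobs; anyone running it would want to vary them to cover the large/short-line cases above.

```perl
use strict;
use warnings;
use Benchmark qw(cmpthese);
use File::Temp qw(tempfile);

# Generate a throwaway test file; tweak these to model different shapes
# (many short lines, few long lines, etc.).
my $lines    = 100_000;
my $line_len = 80;

my ( $fh, $file ) = tempfile( UNLINK => 1 );
print {$fh} ( 'x' x $line_len ), "\n" for 1 .. $lines;
close $fh;

# Run each variant for at least 3 CPU seconds and compare rates.
cmpthese( -3, {
    for_slurp => sub {
        open my $in, '<', $file or die "open: $!";
        my $n = 0;
        $n++ for <$in>;       # list context: whole file read up front
        close $in;
    },
    while_line => sub {
        open my $in, '<', $file or die "open: $!";
        my $n = 0;
        $n++ while <$in>;     # scalar context: one line at a time
        close $in;
    },
} );
```

This only measures one file shape per run, so it would need to be run repeatedly with different parameters (and on different platforms, under different loads) to support any general claim.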