Everything you say is true. I would add that sometimes the memory burden can come from unexpected things. I had a recent experience where one programmer set up an ongoing web-harvest process that, over the span of several months, populated two directories on one disk volume with over 2 million files.
I don't actually know how long the file names were (they may have been 24 characters or longer), but each of the two unix directory files (the directory file itself, not a data file) was over 100 MB.
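For anyone curious, that bloat is easy to see directly: on the unix filesystems I've used, a plain stat() on a directory reports the size of the directory file itself, not the data it contains. A quick Python sketch (the path here is made up):

    import os

    # Hypothetical path standing in for one of the harvest directories.
    harvest_dir = "/harvest/incoming"

    # stat() on a directory reports the size of the directory file itself,
    # i.e. the on-disk list of entries, not the files it points to.
    size_mb = os.stat(harvest_dir).st_size / (1024 * 1024)
    print(f"{harvest_dir}: directory file is {size_mb:.1f} MB")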
One surprising fact I learned from that situation is that the standard unix "find" utility (the freebsd version, at least -- I don't know about other flavors) grew an enormous memory footprint when traversing either of those directories. And since "find" is typically used in activities like backups, this became a problem: the "daily" backup for that one disk was taking more than 24 hours to finish, in large part because the backup server was page-faulting.
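I never dug into exactly why "find" balloons like that, but anything that reads the full entry list into memory before processing it is going to hurt with millions of names in one directory. Purely as an illustration of the alternative, here's a rough Python sketch of a traversal that streams entries one at a time instead of slurping the whole list (again, the path is made up, and this is not what "find" itself does internally):

    import os

    def stream_walk(top):
        """Yield file paths one at a time; os.scandir streams directory
        entries rather than building the full multi-million-name list."""
        stack = [top]
        while stack:
            current = stack.pop()
            with os.scandir(current) as it:
                for entry in it:
                    if entry.is_dir(follow_symlinks=False):
                        stack.append(entry.path)
                    else:
                        yield entry.path

    # Count files without ever holding all of the names in memory at once.
    total = sum(1 for _ in stream_walk("/harvest/incoming"))
    print(total)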
(That collection process was retooled to write to a different disk volume, which is not backed up, and it now creates a daily tar file on a separate volume that is backed up -- just one new file per day instead of many thousands.)
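In case it's useful to anyone, the daily-tar step is roughly the following (a Python sketch; the paths and naming scheme are placeholders of mine, not the actual script):

    import datetime
    import os
    import tarfile

    HARVEST_DIR = "/harvest/incoming"   # not backed up (hypothetical path)
    ARCHIVE_DIR = "/archive/harvest"    # backed up (hypothetical path)

    def roll_daily_tarball():
        """Pack everything collected so far into one dated tar file on the
        backed-up volume, so the backup only ever sees one new file per day."""
        stamp = datetime.date.today().isoformat()
        tar_path = os.path.join(ARCHIVE_DIR, f"harvest-{stamp}.tar.gz")
        packed = []
        with tarfile.open(tar_path, "w:gz") as tar:
            with os.scandir(HARVEST_DIR) as it:
                for entry in it:
                    if entry.is_file():
                        tar.add(entry.path, arcname=entry.name)
                        packed.append(entry.path)
        # Remove the originals only after the archive is written and closed,
        # so a failed run doesn't lose anything.
        for path in packed:
            os.unlink(path)

    if __name__ == "__main__":
        roll_daily_tarball()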