It is mind-boggling to me to see many processes running near 100% CPU utilization even when I/O is involved. The OP stated they are running on a machine with 500 GiB of memory. The beauty of it all is that the operating system reads the 100 GiB NR.fasta file into the file-system cache (on Linux, the entire file, given the available memory). Subsequent scans are then served from cache. Workers do not make an extra copy because they read through a memory-mapped I/O handle, which means one can run with many workers. The limiting factor is the memory subsystem: how fast it can deliver data.
Well, I enjoyed writing the demonstrations, including one with chunking.
Lots of love, Mario
Re^4: Problem in RAM usage while threading the program
by karlgoethebier (Abbot) on Dec 22, 2019 at 13:07 UTC