By the way, to manage the huge disk resources needed, we close the file every 100 million records, compress it & delete the "until-then" file created.
I suspect that is taking quite a bit of the time. Have you considered writing directly into a zip archive? Some of the available modules have low-level methods that let you add data in chunks,
IO::Compress::Zip for instance. They don't have to work on files on disk. See the sketch below.
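Here is a minimal sketch of what I mean, using the object interface of IO::Compress::Zip. The record source (next_record) and the file names are placeholders for whatever your script actually does; the idea is that records are compressed as they are written, and every N records you start a new member inside the same archive instead of closing, compressing and deleting a separate plain file.

    #!/usr/bin/perl
    use strict;
    use warnings;
    use IO::Compress::Zip qw($ZipError);

    # Demo record source -- replace with however your records are produced.
    my @demo = map { "record $_" } 1 .. 10;
    sub next_record { shift @demo }

    my $records_per_member = 4;    # in your case, 100 million

    # Open the archive once; data is compressed as it is written,
    # so no uncompressed intermediate file ever hits the disk.
    my $zip = IO::Compress::Zip->new( 'records.zip', Name => 'records_000.txt' )
        or die "zip open failed: $ZipError\n";

    my ( $count, $member ) = ( 0, 0 );

    while ( defined( my $record = next_record() ) ) {
        $zip->print( $record, "\n" );

        # Start a new member inside the same archive instead of
        # closing, compressing and deleting a separate file.
        if ( ++$count % $records_per_member == 0 ) {
            $member++;
            $zip->newStream( Name => sprintf( 'records_%03d.txt', $member ) )
                or die "newStream failed: $ZipError\n";
        }
    }

    $zip->close() or die "close failed: $ZipError\n";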
Also, there is of course the easy option of throwing money at the problem, here in the form of a faster HDD.
holli
You can lead your users to water, but alas, you cannot drown them.