PerlMonks
If the files are on the same physical device (as BrowserUk asked and you replied yes), then you can eliminate the cost of moving them by ensuring you're using a version of 'move' that doesn't do a physical copy, just a logical rename. If they're on a different device, you're stuck on that point.

Are the files individually written as atomic chunks throughout the day and never touched again until you process them nightly? If so, consider this: you can process 80k files per hour, but you're acquiring only 21k per hour. That gives you a surplus of 59k per hour; to put it another way, each hour's worth of files takes you about 16 minutes to process (21k / 80k per hour is roughly a quarter of an hour). So, could you run your script as a cron job that fires once per hour for 16 minutes? Or once per half-hour for 8 minutes? Or once per quarter-hour for 4 minutes?

In such cases I also suggest proactively stopping the process after 150% of its expected time slot and logging that it didn't finish. The next cron run will pick up where it left off, but you'd want to know if you're getting bogged down over time.

If you take this approach, you'll need to deal with file locking, both to ensure you're processing complete files and to ensure that a given file is handled by only a single running instance of your processing script. Even if the writing process doesn't lock, you'd still want to, so your own processes don't stumble over each other if one happens to run a little long. If the writing process doesn't lock, you could also simply skip any file less than five minutes old, to be sure the writer is done with it (this assumes the writing process spits out a file, closes it, and then leaves it alone from that point forward).

Dave

In reply to Re: Perl Program to efficiently process 500000 small files in a Directory (AIX) by davido
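The "logical move" point above is easy to verify for yourself: within a single filesystem, mv (like Perl's CORE::rename, which File::Copy::move tries first) issues rename(2), so no data is copied and the file's inode number is unchanged. A throwaway sketch using a scratch directory:

```shell
# Demonstrate that mv within one filesystem is a rename, not a copy:
# the file's inode number is unchanged after the move.
dir=$(mktemp -d)              # scratch directory (all on one filesystem)
echo data > "$dir/a"
ls -i "$dir/a"                # note the inode number
mkdir "$dir/done"
mv "$dir/a" "$dir/done/a"     # same filesystem: rename(2), no data copied
ls -i "$dir/done/a"           # same inode number as before
rm -rf "$dir"
```

A cross-device mv, by contrast, must copy every byte and then unlink the source, which is exactly the cost you want to avoid at this file count.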
|
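The hourly cron with a 150% cap and single-instance locking can be wired up with standard tools. This is only a sketch with hypothetical paths, and it assumes util-linux flock(1) and GNU coreutils timeout(1) are available; stock AIX ships neither, so there you'd reach for the AIX Toolbox packages or implement the same lock and alarm inside the Perl script itself:

```shell
# Hypothetical crontab entry: run at the top of every hour.
# flock -n skips this run if the previous one still holds the lock;
# timeout 1440 enforces the 150% cap (16 min * 1.5 = 24 min = 1440 s);
# logger records an overrun so you can spot a growing backlog.
0 * * * * /usr/bin/flock -n /var/run/procfiles.lock -c '/usr/bin/timeout 1440 /usr/local/bin/process_files.pl || /usr/bin/logger -t procfiles "overran time slot or failed"'
```

With flock -n, an overlapping run exits immediately rather than queueing, which is what you want here: the next hourly firing will catch up on whatever was left behind.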
|
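For the "skip anything less than five minutes old" rule, find(1) can build the work list before your script runs. Note the hedges: the directory path is made up, and -mmin is a GNU find extension (stock AIX find only has day-granularity -mtime), so on AIX you'd do the equivalent age test in Perl instead:

```shell
# List files last modified more than 5 minutes ago; anything newer is
# presumed still open by the writer and left for the next run.
# Assumes GNU find's -mmin; in the Perl script itself the same test is
#   next if -M $file < 5 / (24 * 60);   # -M is the file's age in days
find /data/incoming -type f -mmin +5 -print
```

Doing the test in Perl with -M has the added benefit of one fewer process per run, which matters little here but keeps the whole job in one place.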