http://qs321.pair.com?node_id=971949

live4tech has asked for the wisdom of the Perl Monks concerning the following question:

I have a few very large text files (~50 MB each): millions of rows, 5 columns of numbers separated by spaces. The first row is a 2-column 'header'. Each row ends in a \r\n, and every other row is a \r\n on its own (a blank separator line between data rows). My task was to do something quick and dirty to cut these large files into smaller files, each with 300,000 rows. I have been learning Perl to deal with just such tasks (I am still working through the Camel book in my 'spare time'), so I tried the following code:

    my $pre     = $ARGV[0];
    my $linenum = 0;
    my $filenum = 0;

    open FILEOUT, '>', $pre."-".$filenum;

    while (<>) {
        if ($linenum <= 300000) {
            if (/^\r\n$/) {
                # skip the linefeed carriage return lines,
                # do not increment line counter or print line to file
            }
            else {
                print FILEOUT $_;
                $linenum++;
            }
        }
        elsif ($linenum > 300000) {
            if (/^\r\n$/) {
                # skip the linefeed carriage return lines,
                # do not increment line counter or print line to file
            }
            else {
                $linenum = 0;   # reset line counter every 300,000 lines
                $filenum++;     # increment file counter every 300,000 lines
                                # and open new file handle
                open FILEOUT, '>', $pre."-".$filenum;
                print FILEOUT $_;
            }
        }
    }
    close FILEOUT;
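In case it matters, I run it with the input file as the single argument (the script and file names here are just placeholders):

    perl splitter.pl hugefile1.txt

so $ARGV[0] serves both as the output prefix and as the file that <> reads, and the pieces come out as hugefile1.txt-0, hugefile1.txt-1, and so on.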

This worked great when I ran it on each file, one at a time; except that the new files had 299,701 or 299,702 rows instead of 300,000. I cannot understand how this could happen with the above code! It's really been sand in my shorts, but I bet it is something simple, something a good monk could pick up on! THANKS!
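One thing I thought of checking (but have not done yet) is whether every 'blank' row really is exactly \r\n and every data row really ends in \r\n, since the splitter only skips lines matching /^\r\n$/. A rough counting sketch like this, run over one of the original files, should show whether that assumption holds (again, the script and file names are placeholders):

    #!/usr/bin/perl
    use strict;
    use warnings;

    # Rough sanity check (just a sketch): classify every line of one input
    # file, to see whether the separator rows are exactly "\r\n" and the
    # data rows really end in "\r\n".  Run as:  perl checklines.pl hugefile1.txt
    my ($crlf_blank, $other_blank, $crlf_data, $other_data) = (0, 0, 0, 0);

    while (<>) {
        if    (/^\r\n$/)  { $crlf_blank++ }   # separator row exactly as the splitter expects
        elsif (/^\s*$/)   { $other_blank++ }  # whitespace-only row the splitter would NOT skip
        elsif (/\r\n$/)   { $crlf_data++ }    # data row ending in CRLF
        else              { $other_data++ }   # data row with some other line ending
    }

    print "blank rows (exactly CRLF)  : $crlf_blank\n";
    print "blank rows (anything else) : $other_blank\n";
    print "data rows  (CRLF ending)   : $crlf_data\n";
    print "data rows  (other ending)  : $other_data\n";

If either of the "anything else" / "other ending" counts came out non-zero, that might explain a few hundred rows ending up somewhere I did not expect; but maybe the answer is something else entirely.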