patric:
As you mention later, you *could* just reopen the input file inside your loop, but then you'd scan through the file once per output file. Alternatively, you could open all your file handles at the beginning, or read the data into an array or hash and rescan it from memory. But I usually prefer to do it another way. Why? A big file can take a significant amount of time to scan repeatedly, holding it all in memory can exceed your memory limits, and opening the file handles up front requires you to know every possible file name in advance.
What I do is open the output files as I need them. Suppose you had a function get_file_handle that would always give you the correct file handle to output the line to. Then your main loop would simplify to the following (after trimming out some unused variables & such):
#!/usr/bin/perl
use strict;
use warnings;
open(my $IN, '<', 'input.txt') or die "cannot open input file: $!";
while (my $line = <$IN>) {
    my (undef, undef, undef, $four, undef) = split("\t", $line);
    if ($four =~ m/S(\d+)GM/) {
        my $F = get_file_handle($1);
        print $F $line;
    }
}
So all we need is that function. It turns out to be surprisingly simple:
my %FHList; # Holds the file handles we've opened so far

sub get_file_handle {
    my $key = shift;
    if (!exists $FHList{$key}) {
        open $FHList{$key}, '>', "output_$key.txt"
            or die "cannot open output_$key.txt: $!";
    }
    return $FHList{$key};
}
As you can see, we just store our file handles in a hash. If the key (00001, 00012, etc.) is one the function has never seen before, it opens a new output file and tucks the handle away in the hash. Either way, it returns the file handle stored in the hash.
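One small habit of mine, not something the script strictly needs: close the cached handles when you're done. Perl closes them at exit anyway, but an explicit close reports any buffered write errors (a full disk, say). A minimal standalone sketch, with %FHList seeded by hand where the real script would have get_file_handle() fill it:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Stand-in for the hash that get_file_handle() populates in the
# real script; seeded here so the sketch runs on its own.
my %FHList;
open $FHList{'00001'}, '>', 'output_00001.txt' or die $!;
print { $FHList{'00001'} } "demo line\n";

# Close every cached handle so buffered write errors surface now
# rather than being silently dropped at program exit.
for my $key (keys %FHList) {
    close $FHList{$key} or warn "error closing output_$key.txt: $!";
}
```

The block-style `print { $FHList{'00001'} } ...` is just how you print to a handle stored inside a hash without copying it to a scalar first.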
...
roboticus