Perfect! I have been looking for just such a solution this morning and appreciate your post. Here is one I just put together quickly
to chunk a given FASTA file (unaligned sequences) into N files, for submitting multiple smaller jobs on a cluster:
#!/usr/bin/perl -w
use strict;

my $num_ch = $ARGV[0] or die "must pass the # of chunks $!";
open(FH, "<$ARGV[1]") or die "must pass the file name to make chunks of $!";

# Read the file one FASTA record at a time by using '>' as the
# input record separator.
my @lines = ();
$/ = ">";
while (<FH>) {
    chomp;
    next if /^$/;            # skip the empty record before the first '>'
    push @lines, ">$_";      # restore the '>' stripped by chomp
}
close FH;

my $num_rec = scalar(@lines);
print "number of records : $num_rec\n";
#die "Chunks exceed records!!" unless ($num_rec >= $num_ch);

# Write one chunk file named <n>_chunk.fa from an array ref of records.
sub write_em {
    my $output = shift() . "_chunk.fa";
    my $ar_ref = shift;
    open(QRTR, ">$output") or die "Cannot open $output : $!";
    print QRTR @$ar_ref;
    close(QRTR) or die "cannot close $output : $!";
}

my $cnt = 0;
# Records per chunk, rounded up so nothing is left over.
my $rng = int($num_rec / $num_ch) + ($num_rec % $num_ch ? 1 : 0);
for (1 .. $num_ch) {
    # The last chunk takes whatever records remain.
    write_em($_, [@lines[$cnt .. ($_ == $num_ch ? $#lines : $cnt + $rng - 1)]]);
    $cnt += $rng;
}
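Usage would be along the lines of perl chunk_fasta.pl 4 myseqs.fa (script and file names there are just placeholders). As a rough sanity check after a run, something like the little sketch below should confirm that the chunks together hold the same number of records that the script reported, assuming the output files follow the <n>_chunk.fa naming above:

#!/usr/bin/perl -w
use strict;

# Rough sanity check: count FASTA header lines in each *_chunk.fa file
# and make sure the total matches the record count reported above.
my $total = 0;
for my $chunk (glob "*_chunk.fa") {
    open my $fh, "<", $chunk or die "Cannot open $chunk : $!";
    my $n = grep { /^>/ } <$fh>;   # one '>' header line per record
    close $fh;
    print "$chunk : $n records\n";
    $total += $n;
}
print "total : $total records\n";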
Feel free to comment or to point out where I have gone astray - this just scratched a specific itch ...