I recently posted a question on how I could pipe many files into a single perl script, and received answers such as "use @ARGV", which worked great for small projects. The command line input worked just like I needed it to. The problem, however, is that it turns out I actually have far more files to deal with than I thought, on the order of 100,000 at a time.
Due to the size of the list in the command line, even when I use a wildcard, I get error output like this:
-bash: /usr/bin/perl: Argument list too long
Thus I have a problem: my input list is too long to pass on the command line. What other solutions do I have? Is using @ARGV still a good idea, or do I need to look at another avenue?
UPDATE!!!:
I got my code to work! Thanks especially to Corion for recommending tye's sort function. Basically, this is how it played out:
my $pattern = '...';           # your glob pattern for the input files
my @files   = glob $pattern;   # globbing inside perl avoids the shell's ARG_MAX limit

my @sorted = @files[
    map { unpack "N", substr($_, -4) }
    sort
    map {
        my $key = $files[$_];
        $key =~ s[(\d+)][ pack "N", $1 ]ge;   # pack digit runs so they sort numerically
        $key . pack "N", $_
    } 0 .. $#files
];

@ARGV = @sorted;
while (<>) {
    # do my function
    if (eof(ARGV)) {
        # do end-of-file cleanup (runs at the end of each input file)
    }
}
Using this format, I was still able to use the <> operator while reading from a sorted @ARGV, so my output came out like this:
file1
file2
....
file10
file11
This is exactly what I wanted. Thanks for all the help!
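As a side note, the same natural ordering can be had from sort's version-sort flag (available in GNU coreutils and recent BSD sort), which is handy for sanity-checking the Perl output from the shell:

```shell
# sort -V orders embedded numbers numerically, so file10 sorts
# after file9 rather than right after file1.
printf '%s\n' file10 file2 file1 file11 | sort -V
```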