jperlq has asked for the wisdom of the Perl Monks concerning the following question:

I am looking for a more efficient means of getting data from a directory of tab-separated files into a hash. The directory is full of files in the format: key\tvalue0\tvalue1\tvalue2\nkey\tvalue0\tvalue1\n ... The individual files are small (~10 lines), but the directory can be fairly large (10,000 files). I am sure there is a better way to do it than the way I have been, so I come to the monks to find the best way. Here is what I have been doing:
my $dir = "/Path/to/a/data/directory";
my %hash;
my @ls = `ls $dir`;
foreach (@ls) {
    chomp;
    next if /\~$/ || !$_;
    my $file = $_;
    my $info = `cat $dir/$_`;
    my (@lines) = split(/\n/, $info);
    for (@lines) {
        s/^\s+//;
        next if /^\#/ || !$_;
        my ($key, @values) = split(/\t/);
        $hash{$file}{$key} = [@values];
    }
}
If there is a better way (glob or readdir?), please give an example of how I should adapt the code.
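For reference, here is a minimal sketch of the kind of adaptation being asked about: the same loop rewritten in pure Perl with opendir/readdir and lexical filehandles instead of shelling out to `ls` and `cat`, which avoids spawning two processes per file. The subroutine name `load_dir` is hypothetical; the data structure it builds ($hash{$file}{$key} = \@values) matches the original code.

```perl
use strict;
use warnings;

# Read every regular file in $dir (skipping editor backups ending in ~)
# and build the same structure as the original: $hash{$file}{$key} = \@values.
sub load_dir {
    my ($dir) = @_;
    my %hash;
    opendir my $dh, $dir or die "Cannot opendir $dir: $!";
    for my $file (readdir $dh) {
        # Skip backup files and anything that is not a plain file
        # ('.' and '..' are directories, so -f excludes them too).
        next if $file =~ /~$/ || !-f "$dir/$file";
        open my $fh, '<', "$dir/$file" or die "Cannot open $dir/$file: $!";
        while (my $line = <$fh>) {
            chomp $line;
            $line =~ s/^\s+//;                 # strip leading whitespace
            next if $line =~ /^#/ || $line eq ''; # skip comments and blanks
            my ($key, @values) = split /\t/, $line;
            $hash{$file}{$key} = \@values;
        }
        close $fh;
    }
    closedir $dh;
    return \%hash;
}
```

Usage would be along the lines of `my $data = load_dir("/Path/to/a/data/directory");`, after which `$data->{$file}{$key}` holds the array reference of values for that key.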