[12:47:29.37] c:\test>junk4 ACGT34.dat >nul
[12:50:49.45] c:\test>
[12:53:00.73] c:\test>sort ACGT34.dat /O nul
[12:53:09.51] c:\test>
And that is just a million 34-char strings, not "2GB". Admittedly, most of the time is spent reading, packing and unpacking the data, rather than in your fine sort routine, which takes less than half a second to do its work.
Actually, part of the problem is building up that big scalar piecemeal. If you watch the memory usage as that while loop repeatedly expands $packed, you'll see something like this. Those transient spikes in the memory usage are where Perl/CRT has to go to the OS to grab an ever larger chunk of ram into which to copy the slowly expanding scalar, freeing the old chunk once the copy is done. That constant allocation, reallocation and copying really hampers the in-memory approach.
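The trick for avoiding that is to allocate the scalar once, up front. A minimal sketch of the idea in isolation (the sizes and the "payload" string here are purely illustrative): open an in-memory file over the scalar, seek past the size you expect to need, write a single byte so the allocation happens, then write your real data from the start and chop off the unused tail:

```perl
#!/usr/bin/perl
use strict;
use warnings;

my $buf = '';
open my $ram, '>', \$buf or die "open in-memory file: $!";

# Seeking past end-of-"file" and writing one byte forces a single
# allocation; the gap is NUL-filled by PerlIO::scalar.
seek $ram, 1000, 0;        # illustrative size; the real code uses 10e6
print $ram chr(0);
print length($buf), "\n";  # prints 1001

# Subsequent writes overwrite in place, reusing that allocation.
seek $ram, 0, 0;
print $ram "payload";
close $ram;

# Trim the unused tail back down to what was actually written.
chop $buf while length($buf) > length("payload");
print "$buf\n";            # prints payload
```

The scalar grows exactly once, rather than being reallocated and copied over and over as it expands.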
You can knock around 30 seconds off the 3.5 minutes by preallocating the memory. In this version of your code, I use a ram file to allocate a chunk bigger than required, populate it by writing to the ram file, and then truncate the string to its final size using chop, which avoids the seesaw effect on the memory allocation:
#!/usr/bin/perl
use strict;
use warnings;
use Sort::Packed qw(sort_packed);

my $packed = '';
open RAM, '>', \$packed or die "open in-memory file: $!";
seek RAM, 10e6, 0;          # preallocate ~10MB up front
print RAM chr(0);
seek RAM, 0, 0;

my $len;
my %val = (A => 0, C => 1, G => 2, T => 3);
my %rev = reverse %val;

sub compress {
    my @data = split //, scalar reverse shift;
    my $out = '';
    for my $i (0 .. $#data) {
        my $bits = $val{$data[$i]};
        defined $bits or die "bad data";
        vec($out, $i, 2) = $bits;
    }
    scalar reverse $out;
}

sub decompress {
    my $data = scalar reverse shift;
    my $len  = shift;
    my $out  = '';
    for my $i (0 .. $len - 1) {
        $out .= $rev{vec($data, $i, 2)};
    }
    scalar reverse $out;
}

while (<>) {
    chomp;
    ($len ||= length) == length or die "bad data";
    print RAM compress $_;
}
close RAM;

my $bytes = int(($len * 2 + 7) / 8);

# trim the overallocated tail back to the $. records actually written
chop $packed while length($packed) > $. * $bytes;

my $n = length($packed) / $bytes;
sort_packed "C$bytes" => $packed;

for my $i (0 .. $n - 1) {
    print decompress(substr($packed, $i * $bytes, $bytes), $len), "\n";
}
__END__
[13:50:49.29] c:\test>junk4 ACGT34.dat >nul
[13:53:44.53] c:\test>
The result is that the overall time taken is reduced to just under 3 minutes, which leaves the bulk of the time spent packing and unpacking the data. And I cannot see any easy way of speeding that up.
Maybe Perl needs a pack template for dealing with genomic data? Trouble is, there are several variations. As well as ACGT, they also use forms which contain 'N', 'X' and a few other characters for different situations.
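For what it's worth, the extended alphabets can be handled with the same vec scheme, but since vec only accepts power-of-two bit widths, six symbols force you up from 2 bits to 4 bits per character. A hypothetical sketch (the symbol values here are my own assumption, not any standard encoding), using the same double-reverse as the 2-bit version; the wasted bit per symbol is exactly where a dedicated pack template could win back density:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Hypothetical 4-bits-per-symbol variant that also covers N and X.
# vec cannot pack 3-bit elements, so 6 symbols cost a full nibble each.
my %val = (A => 0, C => 1, G => 2, T => 3, N => 4, X => 5);
my %rev = reverse %val;

sub compress4 {
    my @data = split //, scalar reverse shift;
    my $out = '';
    for my $i (0 .. $#data) {
        defined $val{$data[$i]} or die "bad data";
        vec($out, $i, 4) = $val{$data[$i]};
    }
    scalar reverse $out;
}

sub decompress4 {
    my $data = scalar reverse shift;
    my $len  = shift;
    join '', reverse map { $rev{vec($data, $_, 4)} } 0 .. $len - 1;
}

print decompress4(compress4('ACGTNX'), 6), "\n";   # prints ACGTNX
```

Records would then be int(($len * 4 + 7) / 8) bytes wide for sort_packed, rather than int(($len * 2 + 7) / 8).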
Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
"Science is about questioning the status quo. Questioning authority".
In the absence of evidence, opinion is indistinguishable from prejudice.