
The problem with that is it takes far longer (3 1/2 minutes vs. 10 seconds) than using the system sort utility:

[12:47:29.37] c:\test>junk4 ACGT34.dat >nul
[12:50:49.45] c:\test>
[12:53:00.73] c:\test>sort ACGT34.dat /O nul
[12:53:09.51] c:\test>

And that is just a million 34-char strings, not "2GB". Admittedly, most of the time is spent reading, packing and unpacking the data, rather than in your fine sort routine which takes less than half a second to do its work.

Actually, part of the problem is building up that big scalar piecemeal. If you watch the memory usage as that while loop repeatedly expands $packed, you'll see something like this. Those transient spikes in the memory usage are where Perl/CRT has to go to the OS to grab an ever larger chunk of RAM into which to copy the slowly expanding scalar, freeing the old chunk once it has been copied over. That constant allocation, reallocation and copying really hampers the in-memory approach.
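The two allocation patterns can be sketched in isolation (illustrative only; the variable names and Time::HiRes timings are my own, actual numbers will vary with your perl build and CRT, and modern perls amortize piecemeal growth better than older ones):

```perl
use strict;
use warnings;
use Time::HiRes qw(time);

my $chunk = 'x' x 9;          # stands in for one 9-byte packed record
my $count = 100_000;

# Piecemeal growth: the scalar may be repeatedly reallocated and copied.
my $t0 = time;
my $grow = '';
$grow .= $chunk for 1 .. $count;
printf "piecemeal:    %.3fs\n", time - $t0;

# Preallocation via an in-memory file: seek past the expected size,
# write one byte to force the allocation, then fill from the start.
$t0 = time;
my $pre = '';
open my $ram, '>', \$pre or die $!;
seek $ram, $count * 9, 0;
print $ram chr(0);
seek $ram, 0, 0;
print $ram $chunk for 1 .. $count;
close $ram;
printf "preallocated: %.3fs\n", time - $t0;
```

Note that the preallocated string ends up one byte longer than needed (the chr(0) marker past the end), which is why the final size has to be trimmed afterwards.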

You can knock around 30 seconds off the 3.5 minutes by preallocating the memory. In this version of your code, I use a RAM file to allocate a chunk bigger than required, populate it by writing to the RAM file, and then truncate the string to its final size using chop, which avoids the seesaw effect on the memory allocation:

#!/usr/bin/perl
use strict;
use warnings;
use Sort::Packed qw(sort_packed);

my $packed = '';
open RAM, '>', \$packed;
seek RAM, 10e6, 0;
print RAM chr(0);
seek RAM, 0, 0;

my ($len);
my %val = (A => 0, C => 1, G => 2, T => 3);
my %rev = reverse %val;

sub compress {
    my @data = split //, scalar(reverse shift);
    my $out = '';
    for (my $i = 0; $i < @data; $i++) {
        my $bits = $val{$data[$i]};
        defined $bits or die "bad data";
        vec($out, $i, 2) = $bits;
    }
    scalar reverse $out;
}

sub decompress {
    my $data = reverse shift;
    my $len = shift;
    my $out;
    for (my $i = 0; $i < $len; $i++) {
        $out .= $rev{vec($data, $i, 2)};
    }
    scalar reverse $out;
}

while (<>) {
    chomp;
    ($len ||= length) == length or die "bad data";
    print RAM compress $_;
}
close RAM;

chop $packed while length($packed) > $. * 9;

my $bytes = int(($len * 2 + 7) / 8);
my $n = length($packed) / $bytes;

sort_packed "C$bytes" => $packed;

for (my $i = 0; $i < $n; $i++) {
    print decompress(substr($packed, $i * $bytes, $bytes), $len), "\n";
}
__END__
[13:50:49.29] c:\test>junk4 ACGT34.dat >nul
[13:53:44.53] c:\test>

The result is that the overall time taken is reduced to just under 3 minutes, which leaves the bulk of the time spent packing and unpacking the data. And I cannot see any easy way of speeding that up.
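One speculative way to shave the per-base vec() overhead (not something from the code above) is to precompute a 256-entry lookup table mapping every 4-base block to its packed byte, so each record is compressed with a handful of hash lookups instead of one vec() call per base. The names compress4/decompress4 are hypothetical, the input length is assumed to be a multiple of 4, and the byte-order reversing that sort_packed's bytewise comparison relies on is omitted here:

```perl
use strict;
use warnings;

my @bases = qw(A C G T);
my (%enc, %dec);
# Build lookup tables: every 4-base block <-> one byte, high bits first,
# so a byte compares in the same order as the block it encodes.
for my $i (0 .. 255) {
    my $block = join '', map { $bases[ ($i >> $_) & 3 ] } (6, 4, 2, 0);
    $enc{$block} = chr $i;
    $dec{ chr $i } = $block;
}

sub compress4 {
    my $s = shift;                      # length must be a multiple of 4
    join '', map { $enc{$_} } unpack '(A4)*', $s;
}

sub decompress4 {
    my $p = shift;
    join '', map { $dec{$_} } split //, $p;
}

my $seq = 'ACGTACGTTTGA';
print decompress4( compress4($seq) ) eq $seq ? "round-trip ok\n" : "mismatch\n";
```

Whether the hash lookups actually beat vec() for 34-char records would need benchmarking; the win, if any, comes from doing 9 lookups per record instead of 34 vec() writes.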

Maybe Perl needs a pack template for dealing with genomic data? The trouble is, there are several variations: as well as ACGT, they also use forms containing 'N', 'X' and a few other characters for different situations.
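As a rough sketch of what an extended alphabet costs: six symbols would need 3 bits each, but vec() only supports power-of-two widths, so a codec for such data falls back to one nibble per symbol, i.e. 2 symbols per byte instead of 4. The symbol set and subroutine names here are illustrative assumptions, and the byte-order reversal needed for bytewise sorting is again omitted:

```perl
use strict;
use warnings;

# Hypothetical 6-symbol alphabet; 3 bits would suffice, but vec()
# only takes widths of 1, 2, 4, 8, ..., so each symbol gets a nibble.
my @sym = qw(A C G N T X);
my %val; @val{@sym} = 0 .. $#sym;
my %rev = reverse %val;

sub compress4bit {
    my @s = split //, shift;
    my $out = '';
    for my $i (0 .. $#s) {
        defined $val{ $s[$i] } or die "bad symbol $s[$i]";
        vec($out, $i, 4) = $val{ $s[$i] };
    }
    $out;
}

sub decompress4bit {
    my ($data, $len) = @_;
    join '', map { $rev{ vec($data, $_, 4) } } 0 .. $len - 1;
}

my $seq = 'ACGNTXGA';
print decompress4bit( compress4bit($seq), length $seq ) eq $seq
    ? "round-trip ok\n" : "mismatch\n";
```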


Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
"Science is about questioning the status quo. Questioning authority".
In the absence of evidence, opinion is indistinguishable from prejudice.

In reply to Re^2: Sorting Gigabytes of Strings Without Storing Them by BrowserUk
in thread Sorting Gigabytes of Strings Without Storing Them by neversaint
