Re: Parsing .2bit DNA files

by blokhead (Monsignor)
on Mar 06, 2008 at 05:49 UTC ( [id://672371] )


in reply to Parsing .2bit DNA files

One thing that strikes me as odd/inefficient is that you explicitly convert to strings of ASCII '0' and '1' and then convert those to A, C, G, T. It seems more direct to convert a byte (4 DNA bases) at a time. Here is a cute way to do that, at the expense of having a lookup table for all 256 values:
my @CONV = glob( "{T,C,A,G}" x 4 );
my $dna  = join "", @CONV[ unpack "C*", $raw ];
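For instance, here is a quick sanity check of the table ordering (a minimal sketch, assuming the usual .2bit packing with the first base in the two most significant bits and T=0, C=1, A=2, G=3; adjust if your file differs):

my @CONV = glob( "{T,C,A,G}" x 4 );
print $CONV[0x1B], "\n";   # 0x1B = 0b00_01_10_11  ->  "TCAG"
print $CONV[0xFF], "\n";   # 0xFF = 0b11_11_11_11  ->  "GGGG"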
On my system, the glob-based lookup gives the same output as your code. I don't know if it's better, but it is shorter, and it can conveniently use an array instead of a hash. You could also experiment with different trade-offs on lookup table size:
## takes 16 bits (= 8 bases = unsigned short) at a time
my @CONV = glob( "{T,C,A,G}" x 8 );
my $dna  = join "", @CONV[ unpack "S*", $raw ];
For some reason, I had byte-order issues doing this. Of course, you must also be careful that $raw is padded to a multiple of 16 bits!
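One way to satisfy the padding caveat (a small sketch; it appends a zero byte when the packed data has an odd length, so the decoded tail has to be trimmed off afterwards):

# pad $raw to an even number of bytes before the 16-bit unpack
$raw .= "\0" if length($raw) % 2;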

Another cute trick I can think of: you can do some bit-twiddling to implement the M-blocks (which apparently lowercase a range of characters). In ASCII, you can toggle the case of an alphabetic character by bitwise-XOR'ing it with the space character (0x20). So I think you can rewrite:

substr($dna, $_, $mblock{$_}, lc(substr($dna, $_, $mblock{$_})))
as
substr($dna, $_, $mblock{$_}) ^= (" " x $mblock{$_});
Alternatively, you could use %mblock to generate a long mask of chr(0)'s and chr(32)'s that you can XOR with the entire $dna. Again, probably not a big deal but certainly higher cute-value.
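For concreteness, a sketch of the whole-string mask version (assuming, as in your code, that %mblock maps each block's start offset to its length):

my $mask = "\0" x length($dna);
while ( my ($start, $len) = each %mblock ) {
    substr($mask, $start, $len) = " " x $len;
}
$dna ^= $mask;   # XOR with 0x20 toggles ASCII letter case; XOR with 0x00 is a no-op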

Of course, you could always fix the M and N blocks on the fly, as you unpack them from $raw, but that would require some more work. Since I'm typing one-handed these days and it takes me forever, I think I'll pass on playing with some code that does that! ;)

blokhead

Re^2: Parsing .2bit DNA files
by bart (Canon) on Mar 06, 2008 at 11:48 UTC
    Your idea of looking up the meanings of the sequences byte by byte is brilliant. I do have some doubts about using glob for it... but it even appears to do the right thing with ActivePerl on Windows. Still, I wonder whether that is not just pure luck.

    A reliable way to do it would be to generate a list of integers, in this case from 0 to 255, and convert each one to a string in base 4 (admittedly, I don't know the best way to do that in Perl; one attempt is sketched below). As a second step, I'd convert the digits '0' .. '3' to the letters, for example with

    tr/0123/TCAG/
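
    Something like this minimal sketch would do the two-step build (my own attempt; the digit order assumes the first base sits in the byte's high bits, as in blokhead's glob version):

    my @CONV;
    for my $byte (0 .. 255) {
        my $quats = join '', map { ($byte >> $_) & 3 } 6, 4, 2, 0;   # base-4 digits, high bits first
        $quats =~ tr/0123/TCAG/;
        push @CONV, $quats;
    }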

    Anyway, you say

    at the expense of having a lookup table for all 256 values
    WTF? What expense is that? A few k of memory? Seriously, if the proper way to generate the array of meanings is too expensive, I'd just generate it once at startup, and store it in memory.

    You could also experiment with different trade-offs on lookup table sizes:
    Yes, but in that case the lookup table gets much bigger: 64k entries of 8 letters each, that is 512k of text plus the overhead of the array. Ouch. I don't think it will be much faster, so I don't think it's worth it.
    For some reason, I had byte-order issues doing this.
    Of course you have. You used a machine-dependent byte ordering. You should use either 'n' or 'v' as the basic unpack template (probably 'n', for big-endian); conveniently, both of those are defined to be unsigned 16-bit integers.
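
    If you do try the 16-bit table anyway, something like this should be host-independent (a sketch; it assumes $raw has already been padded to an even number of bytes):

    my @CONV = glob( "{T,C,A,G}" x 8 );              # 65536 entries of 8 bases each
    my $dna  = join "", @CONV[ unpack "n*", $raw ];  # 'n' = unsigned 16-bit, big-endian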

    I have some doubts about using unpack "C*", $raw to convert the whole byte sequence into numbers in one go. Ouch. That sequence can be millions or even billions of bytes long, and that makes for a very long list. I think it's better to convert the $raw string either in chunks of, say, a few k each (the trade-off is loop count vs. memory usage per iteration),

    use constant CHUNKSIZE => 2048;
    my $dna = '';
    for (my $offset = 0; $offset < length($raw); $offset += CHUNKSIZE) {
        $dna .= join '', @CONV[ unpack 'C*', substr $raw, $offset, CHUNKSIZE ];
    }
    or maybe even byte by byte with s///:
    s/(.)/$CONV[ord $1]/sge
    but I doubt this will be the fastest way. It will be as memory-cheap as possible, that is true.

    Finally: don't forget to cut off the junk at the end of the sequence, so that its length matches the number of bases expected according to the record header.
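
    For example (a sketch; $dna_size here is a hypothetical variable holding the sequence length read from the record header):

    $dna = substr($dna, 0, $dna_size);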
