Re^2: Parsing .2bit DNA files

by bart (Canon)
on Mar 06, 2008 at 11:48 UTC


in reply to Re: Parsing .2bit DNA files
in thread Parsing .2bit DNA files

Your idea of looking up the meanings of the sequences byte by byte is brilliant. I do have some doubts about using glob for it... but it even appears to do the right thing with ActivePerl on Windows. Still, I'm wondering whether that isn't just pure luck.

A reliable way to do it would be to generate a list of integers, in this case from 0 to 255, and convert each to a string of base-4 digits (admittedly, I don't know the best way to do that in Perl). As a second step, I'd convert the digits '0' .. '3' to the letters, for example with

tr/0123/TCAG/
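
For instance, here is one way I'd build that table; a sketch only, using bit shifts rather than a textbook base-4 conversion, and assuming (as the .2bit format specifies) that the first base sits in the most significant two bits of each byte. @CONV is a made-up name for the table:

    my @CONV;
    for my $byte (0 .. 255) {
        # the four 2-bit fields, most significant first, as base-4 digits
        my $quad = join '', map { ($byte >> $_) & 3 } 6, 4, 2, 0;
        $quad =~ tr/0123/TCAG/;    # 0=T, 1=C, 2=A, 3=G, as in the format
        push @CONV, $quad;
    }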

Anyway, you say:

    at the expense of having a lookup table for all 256 values

WTF? What expense is that? A few k of memory? Seriously, if the proper way to generate the array of meanings is too expensive, I'd just generate it once at startup, and store it in memory.

    You could also experiment with different trade-offs on lookup table sizes:

Yes, but in that case, the lookup table gets much bigger: 64k entries of 8 letters each, that is 512k of text plus the overhead of the array. Ouch. I don't think it will be much faster, so I don't think it's worth it.
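
If you did want to try it anyway, a sketch of that 16-bit variant, built on top of the byte table (@CONV16 is a made-up name):

    my @CONV16;
    for my $hi (0 .. 255) {
        for my $lo (0 .. 255) {
            # with a big-endian 'n*' unpack, the high byte comes first
            push @CONV16, $CONV[$hi] . $CONV[$lo];
        }
    }
    # then, per (even-length) chunk:
    # $dna .= join '', @CONV16[ unpack 'n*', $chunk ];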

    For some reason, I had byte-order issues doing this.

Of course you did. You used a machine-dependent byte ordering. You should use 'N' or 'V' as the basic unpack template for the 32-bit fields in the file (probably 'N', for big-endian), which conveniently produce unsigned integers, too.
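
For instance, a sketch of picking the right template at run time from the .2bit signature (0x1A412743 according to the UCSC spec; $fh is a made-up filehandle opened on the file):

    read($fh, my $sig_raw, 4) == 4 or die "short read";
    my $T;    # 32-bit unpack template for the rest of the file
    if    (unpack('N', $sig_raw) == 0x1A412743) { $T = 'N' }    # big-endian
    elsif (unpack('V', $sig_raw) == 0x1A412743) { $T = 'V' }    # little-endian
    else  { die "not a .2bit file" }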

I have some doubts about using unpack "C*", $raw to convert the byte sequence into numbers. Ouch. That sequence can be millions or even billions of bytes long, and that is a very long list. I think it's better to convert the $raw string either in short sequences of, say, a few k each (the compromise is in loop count vs. memory usage per loop),

    my $dna = '';
    use constant CHUNKSIZE => 2048;
    for (my $offset = 0; $offset < length($raw); $offset += CHUNKSIZE) {
        $dna .= join '', @CONV[ unpack 'C*', substr $raw, $offset, CHUNKSIZE ];
    }
or maybe even byte by byte with s///:

    s/(.)/$CONV[ord $1]/sge

but I doubt this will be the fastest way. It is as memory-cheap as possible, that much is true.

Finally: don't forget to cut off the junk at the end of the sequence, so that its length matches the number of bases the record header says to expect.
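
A one-line sketch, assuming $dna_size is a made-up variable holding the base count (dnaSize) read from the record header:

    # drop the padding bases decoded from the final partial byte
    substr($dna, $dna_size) = '' if length($dna) > $dna_size;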
