PerlMonks
Re^2: Efficient way to handle huge number of records?
by Anonymous Monk on Dec 11, 2011 at 10:22 UTC ( id://942921 )
OK, first of all, thanks to you all for answering me. Let me explain my task thoroughly.

I have a large database of records, as I wrote before, in the form of: etc. The NAME is usually short (50-60 chars), but the SEQUENCE can run from 100 chars up to maybe 2-3000 chars, as you also pointed out.

What my script does (or what I want it to do) is this: the user supplies an unknown query input, and I run a program called BLAST, which tells me that this input had 1, 2, 3 ... 50 ... 60 ... 200 hits in my database, so I know the NAME of each hit. My problem is that each time I then need to look up the database file, retrieve the respective entries, and create a new file (let's say an output file) containing the NAME and SEQUENCE of every hit, for further processing.

My ultimate goal is to build a webserver, so this database file will be accessed quite frequently. That is why I am asking whether a real database or a simple local search through a hash (at least, that is what I can think of) is more recommended.
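To make the question concrete, here is a minimal sketch of the hash approach described above. It assumes FASTA-style records (a ">NAME" header line followed by one or more sequence lines); the subroutine names and filenames are hypothetical. load_index() slurps the database once into a hash keyed by NAME, and write_hits() pulls out the records BLAST reported and writes them to the output file:

```perl
use strict;
use warnings;

# Read the whole database into a hash: NAME => SEQUENCE.
# Assumes each record starts with a ">NAME" header line.
sub load_index {
    my ($db_file) = @_;
    my %seq;
    open my $in, '<', $db_file or die "Cannot open $db_file: $!";
    my $name;
    while ( my $line = <$in> ) {
        chomp $line;
        if ( $line =~ /^>(\S+)/ ) {
            $name = $1;                # start of a new record
        }
        elsif ( defined $name ) {
            $seq{$name} .= $line;      # append sequence lines
        }
    }
    close $in;
    return \%seq;
}

# Write only the records whose NAMEs BLAST reported as hits.
sub write_hits {
    my ( $index, $out_file, @hits ) = @_;
    open my $out, '>', $out_file or die "Cannot open $out_file: $!";
    for my $hit (@hits) {
        if ( exists $index->{$hit} ) {
            print {$out} ">$hit\n$index->{$hit}\n";
        }
        else {
            warn "No entry for $hit in the database\n";
        }
    }
    close $out;
}
```

The trade-off the question raises is exactly here: load_index() re-reads the whole file on every run, which is fine for a one-off script but wasteful under a webserver. A persistent on-disk index (e.g. tie-ing the hash to a DBM file, or a real database) would let each request look up only the hits it needs.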
In Section: Seekers of Perl Wisdom