PerlMonks
OK, I am just following up after reading the responses so far. I don't think I need to go with an SQL-server type of solution. My data will only be read from and written to by this one script, and it should never get larger than what I specified in my initial post (yeah, yeah, famous last words, I know).

I do want to be able to grab information from the database without the wait that goes along with opening a 40MB database. Remember, this will be running on win32, and a 40MB flat file does have a noticeable wait time when opening and saving it back out. While I should be able to open the whole file/database once and do everything I need from memory, I will be grabbing one piece of information from a row and doing calculations with other data from the same column. Those calculations could involve data from 800-900 rows at a time. Then I will need to get a different piece of information and start all over again. With the right calculations and results, I could see my physical memory getting pushed to the limit in no time.

I see that I wasn't all that clear originally: I do have some experience with SQL databases. Not enough to say I could do that work full time, but enough to maintain and fix things in a crunch. What I have no experience with is using Perl to access databases. I have read up and can see easily enough how to do it; I just have no clue whether one approach works better than another.

Thank you to all who have answered so far, and especially to monktim for the pointers.

In reply to Re: Selecting the right database for perl
by TacoVendor
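On the "no clue whether one approach works better than another" point: a common fit for a single-script, single-user setup like this is DBI with DBD::SQLite, which keeps everything in one file but lets you query just the rows and columns you need instead of reading the whole 40MB into memory. A minimal sketch is below; the table name `readings` and column `value` are made up for illustration, and it uses an in-memory database so it runs standalone.

```perl
use strict;
use warnings;
use DBI;  # DBD::SQLite installed from CPAN; no server process needed on win32

# Hypothetical schema for illustration only.
my $dbh = DBI->connect("dbi:SQLite:dbname=:memory:", "", "",
                       { RaiseError => 1, AutoCommit => 1 });

$dbh->do("CREATE TABLE readings (id INTEGER PRIMARY KEY, value REAL)");

my $ins = $dbh->prepare("INSERT INTO readings (id, value) VALUES (?, ?)");
$ins->execute($_, $_ * 2) for 1 .. 10;

# Grab just the one column over the rows you need; the rest of the
# database stays on disk instead of being slurped into memory.
my $values = $dbh->selectcol_arrayref(
    "SELECT value FROM readings WHERE id BETWEEN ? AND ?", undef, 1, 5);

my $sum = 0;
$sum += $_ for @$values;
print "sum of first five values: $sum\n";

$dbh->disconnect;
```

For a real on-disk database you would swap `:memory:` for a filename (e.g. `dbi:SQLite:dbname=mydata.db`); the 800-900 row calculations then become a `WHERE` clause per pass rather than a full file load.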