The way I see it, there might be a design problem behind your question. I wonder what you are going to do with 60 million records on the client side ...
I don't think you are going to show the records to a user: at a rate of 50 records per page, it would take 1,200,000 pages, and no user is willing to go through that; besides, even reading one page per second, the task would last more than 13 days (the arithmetic is sketched below). :) So I can see only two reasons for this behavior:
- you are dumping the whole dataset to load it somewhere else;
- you need the records to perform some calculation on the client side.
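Just to back up those numbers, here is a back-of-the-envelope check in Perl, using the figures from the paragraph above:

    use strict;
    use warnings;

    my $records  = 60_000_000;
    my $per_page = 50;
    my $pages    = $records / $per_page;       # 1,200,000 pages
    my $days     = $pages / (60 * 60 * 24);    # seconds -> days at 1 page/sec
    printf "%d pages, about %.1f days of non-stop reading\n", $pages, $days;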
The second case, though, is open for comments. Retrieving and sending 60 million records is going to take the database a long time. Is there any way you could do all, or at least part, of the calculation within the database itself? Any database server is capable of some fairly complex calculations that you could exploit before bringing the records to the client. If your calculation ends up with, say, 100,000 records, the burden on the database is going to be a lot lighter than shipping the whole dataset.

If, for any reason, the DBMS is not able to do the entire calculation for you, you could at least try to reduce the number of records to fetch by reviewing your algorithm, considering both the server and the client sides.

Could you tell us more about the nature of your calculation? We might be able to give you some better advice. HTH
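As a minimal DBI sketch of the idea -- the sales table, its region and amount columns, and the MySQL DSN are all made up for illustration, so adjust them for your schema and driver:

    use strict;
    use warnings;
    use DBI;

    # Hypothetical connection parameters -- replace with your own.
    my $dbh = DBI->connect( 'dbi:mysql:database=test', 'user', 'password',
        { RaiseError => 1 } );

    # Instead of fetching all 60 million rows and summing on the client:
    #   my $sth = $dbh->prepare('SELECT region, amount FROM sales');
    # let the server aggregate, and ship back only the totals:
    my $sth = $dbh->prepare(
        'SELECT region, SUM(amount) AS total
           FROM sales
          GROUP BY region'
    );
    $sth->execute;
    while ( my ( $region, $total ) = $sth->fetchrow_array ) {
        print "$region: $total\n";
    }
    $dbh->disconnect;

Even when a single GROUP BY cannot express the whole calculation, a WHERE clause that filters on the server side has the same effect of shrinking the result set before it crosses the wire.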
In reply to Re: Big database queries by gmax