<em><blockquote> For each distinct observation number, there may be 1-7 individual 1024x1024 px CCD detectors involved in the observation ... the entire dataset is over 500 observation numbers long...</blockquote></em>
<P>
If I understand you correctly, you have a list of 500 data sets, where each set comprises 1 to 7 matrices of 1K x 1K values (bytes? ints?); your test run of 13 "observation numbers" yielded 51 query executions, so maybe there was an average of about 3 matrices per observation? (13 queries to get the list of detectors for each observation, plus 13 * 3 queries to get the matrices for three detectors each, would come to 52 queries -- am I on the right track here?)
<P>
You might want to consider separating the data fetching from the actual computation, especially if you plan to be chewing over this particular set of 500 observations for a while (e.g. trying different statistical summaries, grouping or sorting things in different ways, etc).
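<P>
For example, the "fetch once" pass could be as simple as this. It's just a rough sketch -- the DSN, the ccd_frames table, and the obs_id/det_id/pixels columns are all made-up names, and I'm assuming each detector's matrix comes back as a single blob per row; adjust it to your actual schema.
<code>
#!/usr/bin/perl
use strict;
use warnings;
use DBI;

# Placeholder connection info and schema -- substitute your own.
my $dbh = DBI->connect( 'dbi:Pg:dbname=archive', 'user', 'pass',
                        { RaiseError => 1 } );

# Some drivers truncate long fields unless you raise this;
# leave room for a full 1Kx1K frame per fetch.
$dbh->{LongReadLen} = 8 * 1024 * 1024;

mkdir 'data' unless -d 'data';

my $obs_list = $dbh->selectcol_arrayref(
    'SELECT DISTINCT obs_id FROM ccd_frames' );

my $sth = $dbh->prepare(
    'SELECT det_id, pixels FROM ccd_frames WHERE obs_id = ?' );

for my $obs ( @$obs_list ) {
    $sth->execute( $obs );
    while ( my ( $det, $blob ) = $sth->fetchrow_array ) {
        # one raw file per observation/detector pair
        my $file = "data/${obs}_${det}.raw";
        open my $fh, '>', $file or die "can't write $file: $!";
        binmode $fh;
        print {$fh} $blob;
        close $fh;
    }
}
$dbh->disconnect;
</code>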
<P>
Local disk I/O on whatever machine is running the perl script will be a lot faster, and will put a lot less wear and tear on the DB server, than (re)fetching ... what is it? 500 * (3 or 4) * 1K * 1K * (whatever byte count per CCD pixel), so somewhere between 1.5 GB and 4 GB, or something like that?
<P>
Try storing all those matrices as local data files, and do that fetch just once. Then write a perl script to do stuff with the local data files. You'll have a lot more flexibility in playing with different strategies for computation and organization that way, and completion time for any given approach will be faster.
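<P>
The second script then never touches the database at all -- it just walks the local files. Something like this (again just a sketch, assuming the files written by the fetch pass above, and guessing at 16-bit little-endian pixels; change the unpack template to match whatever your CCDs actually deliver):
<code>
#!/usr/bin/perl
use strict;
use warnings;

for my $file ( glob 'data/*.raw' ) {
    open my $fh, '<', $file or die "can't read $file: $!";
    binmode $fh;
    local $/;                         # slurp the whole frame at once
    my @pix = unpack 'v*', <$fh>;     # 'v*' = 16-bit little-endian; adjust as needed
    close $fh;
    next unless @pix;

    my $sum = 0;
    $sum += $_ for @pix;
    printf "%s: %d pixels, mean = %.2f\n", $file, scalar @pix, $sum / @pix;
}
</code>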