Re: A profiling surprise ...

by graff (Chancellor)
on May 23, 2008 at 21:59 UTC ( id://688231 )


in reply to A profiling surprise ...

For each distinct observation number, there may be 1-7 individual 1024x1024 px CCD detectors involved in the observation ... the entire dataset is over 500 observation numbers long...

If I understand you correctly, you have a list of 500 data sets, where each set comprises 1 to 7 matrices of 1K x 1K values (bytes? ints?); your test run of 13 "observation numbers" yielded 51 query executions, so maybe there was an average of about 3 matrices per observation? (13 queries to get the lists of detectors per observation, plus 13 * 3 queries to get the matrices for three detectors, would come to 52 queries -- am I on the right track here?)

You might want to consider separating the data fetching from the actual computation, especially if you plan to be chewing over this particular set of 500 observations for a while (e.g. trying different statistical summaries, grouping or sorting things in different ways, etc).

Local disk i/o on whatever machine is running the perl script will be a lot faster, and will cause a lot less wear-and-tear on the DB server, than (re)fetching ... what is it? 500 * (3 or 4) * 1K * 1K * (whatever byte count per CCD pixel), so somewhere between 1.5 GB and 4 GB, or something like that?
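A quick back-of-envelope check of that range (the per-observation CCD counts and bytes-per-pixel here are guesses, not known values):

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Assumed bounds: 3-4 CCDs per observation, 1-2 bytes per pixel.
my $obs    = 500;
my $pixels = 1024 * 1024;

my $low  = $obs * 3 * $pixels * 1;   # 3 CCDs, 1 byte per pixel
my $high = $obs * 4 * $pixels * 2;   # 4 CCDs, 2 bytes per pixel

printf "low:  %.1f GB\n", $low  / 1e9;   # ~1.6 GB
printf "high: %.1f GB\n", $high / 1e9;   # ~4.2 GB
```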

Try storing all those matrices as local data files, and do that fetch just once. Then write a perl script to do stuff with the local data files. You'll have a lot more flexibility in terms of playing with different strategies for computation and organization that way, and completion time for any given approach will be faster.
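A minimal sketch of the "fetch once, then read locally" idea, using Storable for the cache files. The hit data and the file-naming scheme are invented for illustration:

```perl
#!/usr/bin/perl
# Sketch: after the one-time DB fetch, cache each hit list to a local
# file with Storable; later analysis scripts read the cache instead of
# hitting the DB again. File naming here is made up for illustration.
use strict;
use warnings;
use Storable qw(nstore retrieve);

# Pretend this arrayref of [x, y] hits just came back from the DB:
my $hits = [ [ 12, 34 ], [ 56, 78 ], [ 90, 11 ] ];

my $file = 'obs00123_ccd3.dat';
nstore( $hits, $file );              # one-time write after the fetch

my $cached = retrieve($file);        # later runs read from local disk
print scalar(@$cached), " hits on this CCD\n";   # prints "3 hits ..."
unlink $file;
```

Storable's `nstore` writes in network byte order, so the cache files stay portable if the analysis later moves to a different machine.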

Re^2: A profiling surprise ...
by chexmix (Hermit) on May 24, 2008 at 14:48 UTC
    "If I understand you correctly, you have a list of 500 data sets, where each set comprises 1 to 7 matrices of 1K x 1K values (bytes? ints?); your test run of 13 "observation numbers" yielded 51 query executions, so maybe there was an average of about 3 matrices per observation? (13 queries to get the lists of detectors per observation, plus 13 * 3 queries to get the matrices for three detectors, would come to 52 queries -- am I on the right track here?)"

    Sortakinda. The main diff is that each chip/matrix/CCD might only have a few 'hits' on it. So what my two DB queries do is:

    1. for each observation number in the list, return the chips/CCDs that actually have hits;
    2. for each chip/CCD involved in that observation number, return the x and y value of each hit on the chip/CCD.

    Hope that makes more sense. The crunching that follows iterates over the result sets ... but is decoupled from the DB calls.
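The two-level query pattern described above might look something like this in DBI terms. An in-memory SQLite database stands in for the real one so the sketch is self-contained; the table and column names (detections, obs_id, ccd_id, x, y) are invented and the real schema will differ:

```perl
#!/usr/bin/perl
# Sketch of the two nested queries: (1) which CCDs have hits for an
# observation, (2) the (x, y) of every hit on each such CCD.
# Schema and data are invented for illustration.
use strict;
use warnings;
use DBI;

my $dbh = DBI->connect( 'dbi:SQLite:dbname=:memory:', '', '',
                        { RaiseError => 1 } );

$dbh->do('CREATE TABLE detections (obs_id INT, ccd_id INT, x INT, y INT)');
$dbh->do('INSERT INTO detections VALUES (101, 3, 10, 20)');
$dbh->do('INSERT INTO detections VALUES (101, 3, 11, 21)');
$dbh->do('INSERT INTO detections VALUES (101, 5, 99, 40)');

# Query 1: which CCDs actually have hits for this observation?
my $ccds_sth = $dbh->prepare(
    'SELECT DISTINCT ccd_id FROM detections WHERE obs_id = ?');

# Query 2: the (x, y) of every hit on one CCD of one observation.
my $hits_sth = $dbh->prepare(
    'SELECT x, y FROM detections WHERE obs_id = ? AND ccd_id = ?');

for my $obs (101) {                    # stand-in observation number
    $ccds_sth->execute($obs);
    while ( my ($ccd) = $ccds_sth->fetchrow_array ) {
        $hits_sth->execute( $obs, $ccd );
        my $hits = $hits_sth->fetchall_arrayref;   # [ [x, y], ... ]
        printf "obs %d ccd %d: %d hits\n", $obs, $ccd, scalar @$hits;
        # ... the crunching happens later, decoupled from the DB calls
    }
}
$dbh->disconnect;
```

Preparing both statements once and re-executing them with placeholders keeps the per-observation cost down to bind-and-fetch, which matters when the loop runs 500 times.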
