Two things:
To deal with the first point... Rose::DB::Object is supposed to be a good performer for when you really need one. I know nothing more about it.
I think that Class::DBI's multitude of helpers, triggers, accessors, metadata and whatnot is just too high latency for you. If you are inserting data plain and simple, and you've already done the form validation or whatever, then is there any reason (since you're working on performance) not to prepare an insert statement once, and then just execute the statement handle on the values directly, without going through Class::DBI's abstractions? This may be a smart move, since Class::DBI is not designed for efficient aggregate operations, but for convenient use of single objects and their relations.

Class::DBI also has the concept of essential columns, which could help you a lot with your fetches. Instead of running one query just to check whether the object exists (what's your primary key column, btw?), and then running a separate query for each field you access, it would fetch all the essential columns in a single query to begin with. The only downside is that if you never end up using those columns, the fetch was for nothing (wasting memory, sending some unnecessary data over the socket to Oracle, and making Oracle fetch the data without need). The benefit is that if you do need them, the number of queries is reduced by an order of magnitude, and that usually means a huge speed increase.

Class::DBI::Sweet has prefetching by means of joins, which is a bit like making a relationship essential. This helps considerably with fetching latency. You didn't mention relationships, but it could help if you do have them. These two together may substantially cut down the number of round trips your app makes to the database.
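A minimal sketch of both ideas - the table, column names and row source are made up for illustration, not taken from your app:

    # Declare the commonly used columns as Essential, so one SELECT
    # fetches them all instead of one query per accessor.
    package My::Record;
    use base 'Class::DBI';
    __PACKAGE__->table('records');                      # hypothetical table
    __PACKAGE__->columns( Primary   => 'id' );
    __PACKAGE__->columns( Essential => qw(id name value created) );

    # For the bulk inserts, bypass Class::DBI entirely: prepare once,
    # execute many times on the same statement handle.
    use DBI;
    my ( $dsn, $user, $pass ) = @config{qw(dsn user pass)};
    my $dbh = DBI->connect( $dsn, $user, $pass, { RaiseError => 1 } );
    my $sth = $dbh->prepare(
        'INSERT INTO records (name, value, created) VALUES (?, ?, SYSDATE)'
    );
    $sth->execute( $_->{name}, $_->{value} ) for @validated_rows;

The point is that the per-row cost drops to a single execute() with bound values, with none of Class::DBI's trigger and accessor machinery in the loop.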
Using transactions might help make inserts and updates faster (or slower) by changing the way data is synced to disk. CDBI's docs have a nice snippet for doing this easily. When AutoCommit is on, I think DBI effectively starts and finishes a transaction for every single operation, and that can be hard on the DB.
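In plain DBI terms (assuming the $dbh, $sth and @rows are yours - this isn't the CDBI snippet, just the underlying idea), batching the work into one transaction looks like:

    # With a transaction open, the DB syncs to disk once at commit
    # time instead of once per INSERT.
    $dbh->begin_work;    # AutoCommit off until commit/rollback
    eval {
        $sth->execute(@$_) for @rows;
        $dbh->commit;
    };
    if ($@) {
        $dbh->rollback;  # leave nothing half-done on failure
        die $@;
    }

Whether this is a net win depends on your DB's sync behaviour, which is why it can also come out slower - measure it.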
Class::DBI's autoupdate may be causing you grief - try turning it off.

As for the second part: nearly 15% of the app's time is profiled as purely startup time. If you add in the way Catalyst lazily loads some objects, the way Template Toolkit compiles templates (did you give it the COMPILE_DIR option, btw?), and so on and so forth, I think you could safely double that. My point is that if so much time (relatively) is just startup, you are not running the benchmark late enough in the program's life.

I suspect that '_prepare' is taking very long for the initial queries (parse, validate against the schema, create the sth, etc.), and that cached queries are actually taking maybe 5% of those 8%. I would look into it. Maybe dprof can let you see the max time, and maybe group the calls into time scales (under 0.0001s, under 0.001s, under 0.01s, etc.) - if you have many fast calls and a few slow calls, that means you just need to run the benchmark longer.

Furthermore, IMHO 'prepare' is being called way too much if all you are doing is two operations - a select and an insert - on the same table, a number of times. Using your own SQL should solve this problem, as I mentioned above.
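The bucketing idea can be sketched like this - the call times below are made up; in practice you'd feed in the per-call timings pulled from your profiler's output:

    use strict;
    use warnings;
    use POSIX qw(ceil);

    # Group call durations (seconds) into decade buckets, so a few
    # slow outliers stand out from many fast calls.
    my @times = ( 0.00002, 0.00004, 0.00003, 0.0008, 0.12 );  # hypothetical
    my %bucket;
    for my $t (@times) {
        my $exp = ceil( log($t) / log(10) );   # -4 means "under 1e-4 s"
        $bucket{ sprintf '< 1e%d s', $exp }++;
    }
    printf "%-10s %d calls\n", $_, $bucket{$_} for sort keys %bucket;

If the histogram shows a handful of calls in the slow buckets and everything else in the fast ones, the slow ones are your first-time prepares, and a longer benchmark run will dilute them.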
-nuffin
zz zZ Z Z #!perl

In reply to Re: Help prevent a ModPerl application from replacement by Java
by nothingmuch