My short answer is "gee, I don't know", but since no one else
has answered yet, I'll throw some thoughts out:
- Have you looked into MySQL replication? You might be able to run duplicates of the database locally on your webservers. (Note: I've heard that MySQL replication is flaky, but then I'm a PostgreSQL snob -- nearly everything related to MySQL seems a little flaky to me.)
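For what it's worth, basic replication setup is mostly a my.cnf affair. A minimal sketch, assuming one master and read-only slaves on each webserver -- the server IDs, hostname, and credentials are placeholders, and you'd still need to seed the slave with a snapshot of the master's data first:

```ini
# master's my.cnf
[mysqld]
server-id = 1
log-bin            # slaves replay the master's binary log

# each webserver's (slave's) my.cnf -- unique server-id per slave
[mysqld]
server-id       = 2
master-host     = db-master.example.com   # placeholder
master-user     = repl                    # placeholder
master-password = secret                  # placeholder
```

Your application would then read from localhost but send all writes to the master, since replication is one-way.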
- Have you looked for MySQL administration tricks to speed up the database? Some years back, Tim Perdue tried to benchmark MySQL against PostgreSQL and came to the conclusion that, despite its reputation, MySQL wasn't particularly fast. The MySQL camp's response was that he just didn't know what he was doing, that the database wasn't tuned right, and so on. That would seem to indicate there are tuning secrets the insiders know -- so what are they?
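The usual starting point I've seen mentioned is bumping the buffer sizes in my.cnf, which ship very small by default. A sketch only -- the numbers below are guesses you'd tune against your available RAM and verify with SHOW STATUS, not recommendations:

```ini
[mysqld]
key_buffer_size  = 128M   # index blocks for MyISAM tables; the big win
table_cache      = 512    # open table handles
sort_buffer_size = 4M     # per-connection, so don't go wild
```

MySQL also ships sample configs (my-large.cnf and friends) that are a reasonable baseline to crib from.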
- Maybe you need to think about how you're using Class::DBI. I've been looking at it recently, and my first take is that it makes OOP persistence easy to do at the expense of crippling the database access. It used to be considered a rank newbie mistake to issue multiple selects instead of a single query with a table join, but Class::DBI encourages you to do just that. You might want to look into writing some custom queries to dodge around Class::DBI's bottlenecks.
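Class::DBI does give you an escape hatch for this: set_sql() installs a custom query as a sql_* method on your class. A sketch, assuming hypothetical cd/artist tables and connection details:

```perl
package Music::CD;
use base 'Class::DBI';

__PACKAGE__->set_db('Main', 'dbi:mysql:music', 'user', 'pass');  # placeholders
__PACKAGE__->table('cd');
__PACKAGE__->columns(All => qw/cdid title artist/);

# One JOIN instead of one SELECT per row to pick up artist names:
__PACKAGE__->set_sql(with_artist => qq{
    SELECT cd.cdid, cd.title, artist.name
    FROM   cd, artist
    WHERE  cd.artist = artist.id
});

# Elsewhere: sql_with_artist() hands back a prepared statement handle
my $sth = Music::CD->sql_with_artist;
$sth->execute;
while ( my ($cdid, $title, $artist) = $sth->fetchrow_array ) {
    # work with the joined rows directly
}
```

You lose the blessed objects for rows fetched this way, but for a hot path that's often a trade worth making.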
- Your ideas about using Cache::Cache or Class::DBI::Cacheable also sound reasonable, but I don't know what to say about your update problem. The usual approach I see is to have the cache expire after a fixed period of time, but if you can't live with stale data for any substantial length of time, that obviously doesn't work for you.
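One thought on the update problem: Cache::Cache does let you remove() a key explicitly, so instead of relying on expiry alone you could invalidate in your write path. A sketch -- the namespace, key, and expensive_db_query() are all hypothetical stand-ins for your own code:

```perl
use Cache::FileCache;

my $cache = Cache::FileCache->new({
    namespace          => 'FrontPage',   # hypothetical
    default_expires_in => 300,           # fallback TTL, seconds
});

# Read path: serve from cache, fall back to the database
my $data = $cache->get('headlines');
unless ( defined $data ) {
    $data = expensive_db_query();        # your Class::DBI or custom SQL call
    $cache->set('headlines', $data);
}

# Write path: blow away the cached copy so the next read is fresh
sub update_headlines {
    # ... do the database update ...
    $cache->remove('headlines');
}
```

That only works cleanly if all your updates go through code you control, and with per-webserver caches you'd need to invalidate on every box -- which may be exactly the problem you're describing.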
Hopefully someone else will have something more definite for you (though I predict you're going to hear some advice about doing profiling to make sure you know where the bottlenecks are).