PerlMonks 
Re^4: Time series normalization by 0xbeef (Hermit)
on Jul 16, 2009 at 18:37 UTC ( #780809 )
I am using your method (apart from formatting differences to suit GD::Graph) for the daily graphs, but the graphs I am referring to here are long-term trends per managed system. Each managed system could easily contain 20 logical partitions, and a 3-month trend works out to roughly 3500 to 5000 values per LPAR. Using the "select just the 100% common times" method takes about 30-odd seconds to produce such a graph for nearly 20 members of a managed system, and even that result took quite a bit of SQLite3 tuning.

The really big problem with simply inserting undefs is the number of samples. If even one server were set to gather stats at a short interval, every other server running at a different interval would have to include extra empty values. I would therefore be inclined to discard the times for which fewer than x% of hosts have values, but is this the best solution? Since I have the number of samples per data series, is there no way to fit each data series between a start and end time using some sort of approximation or mathematical transform?

Niel
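The approximation idea at the end of the question can be done without discarding any times: resample every series onto one common time grid by linear interpolation. Below is a minimal sketch in plain Perl; `interp_series` is an illustrative helper (not part of GD::Graph or the original code), and it assumes each series is an arrayref of [epoch, value] pairs sorted by time, queried against an ascending grid of target times.

```perl
use strict;
use warnings;

# Linearly interpolate a sorted series of [time, value] pairs onto a
# sorted grid of target times. Times outside the series' range are
# clamped to the first/last observed value.
sub interp_series {
    my ($samples, $grid) = @_;
    my @out;
    my $i = 0;                          # index into @$samples, reused across grid points
    for my $t (@$grid) {
        if ($t <= $samples->[0][0])  { push @out, $samples->[0][1];  next }
        if ($t >= $samples->[-1][0]) { push @out, $samples->[-1][1]; next }
        # advance until samples [$i] and [$i+1] bracket $t
        $i++ while $samples->[$i + 1][0] < $t;
        my ($t0, $v0) = @{ $samples->[$i] };
        my ($t1, $v1) = @{ $samples->[$i + 1] };
        push @out, $v0 + ($v1 - $v0) * ($t - $t0) / ($t1 - $t0);
    }
    return \@out;
}

# Example: a series sampled at 0, 10, 20 resampled onto a finer grid.
my $series = [ [0, 0], [10, 10], [20, 40] ];
my $vals   = interp_series($series, [0, 5, 15, 20]);
print "@$vals\n";   # -> 0 5 25 40
```

With this approach each LPAR's series can be resampled onto the same start-to-end grid regardless of its native interval, so no "100% common times" filter (and none of the extra undefs) is needed; the trade-off is that interpolated points are estimates, not measurements.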
In Section: Seekers of Perl Wisdom

