PerlMonks
difficulty with full covariance matrices (math)
by etherC (Initiate) on Feb 21, 2006 at 07:36 UTC ( [id://531637] )
etherC has asked for the wisdom of the Perl Monks concerning the following question:
First of all, thanks so much to those of you who helped me a year back when I was first starting to learn Perl.
Now I've landed an internship where I can finally use it, though at the moment I'm deep in a huge speech recognition engine, adapting it to process full covariance matrices rather than just their diagonal elements. This required rewriting the underlying math of the system to work with full-covariance data everywhere diagonal data was being used, and I've done that. The new option is selectable by a command-line switch.

All the tests I've run suggest the new code works, at least in the test environment. If I give the test program a full covariance matrix whose off-diagonal entries are all 0 (i.e. only the diagonal is populated), it gives exactly the same result as when given an array of just those diagonal elements processed by the old math, which makes me think the new math is correct. Hand calculations with simple full covariance matrices (this time with nonzero off-diagonal entries) also match what the program spits out.

The problem is, when I give the real program diagonal data that's been converted into full-covariance form (once again, 0's everywhere off the diagonal), the speech recognition accuracy drops by 20%. I've spent a ton of time debugging and so far have found absolutely nothing strange.

Has anyone ever encountered such a situation? What finally worked for you? If you haven't run into this before, does anything jump out at you when you hear about it? Any input would be greatly appreciated. Thanks :)

~etherC~
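One sanity check worth automating: for a diagonal covariance, the generic full-covariance Gaussian log-likelihood must agree with the diagonal-only formula to machine precision, since log det Σ reduces to the sum of log variances and the quadratic form decouples. The sketch below is illustrative (all names are my own, not from the poster's engine) and hardcodes the 2x2 case for brevity:

```perl
#!/usr/bin/perl
use strict;
use warnings;

my $PI = 4 * atan2(1, 1);

# Diagonal path: sum of independent 1-D Gaussian log-densities.
sub loglik_diag {
    my ($x, $mu, $var) = @_;
    my $ll = 0;
    for my $i (0 .. $#$x) {
        my $d = $x->[$i] - $mu->[$i];
        $ll += -0.5 * (log(2 * $PI * $var->[$i]) + $d * $d / $var->[$i]);
    }
    return $ll;
}

# Full-covariance path: -0.5 * (k*log(2*pi) + log det S + z' * inv(S) * z),
# using the closed-form 2x2 determinant and inverse.
sub loglik_full {
    my ($x, $mu, $S) = @_;
    my ($a, $b, $c, $d) = ($S->[0][0], $S->[0][1], $S->[1][0], $S->[1][1]);
    my $det = $a * $d - $b * $c;
    my @inv = ([  $d / $det, -$b / $det ],
               [ -$c / $det,  $a / $det ]);
    my @z = ($x->[0] - $mu->[0], $x->[1] - $mu->[1]);
    my $q = 0;
    for my $i (0, 1) {
        for my $j (0, 1) {
            $q += $z[$i] * $inv[$i][$j] * $z[$j];
        }
    }
    return -0.5 * (2 * log(2 * $PI) + log($det) + $q);
}

my @x   = (1.2, -0.7);                       # observation
my @mu  = (0.5,  0.3);                       # mean
my @var = (2.0,  0.5);                       # diagonal variances
my $S   = [ [ $var[0], 0 ], [ 0, $var[1] ] ]; # same variances, full form

my $d = loglik_diag(\@x, \@mu, \@var);
my $f = loglik_full(\@x, \@mu, $S);
printf "diag: %.10f  full: %.10f\n", $d, $f;
die "paths disagree\n" if abs($d - $f) > 1e-9;
```

If the two paths agree at the frame level in a harness like this but accuracy still drops in the real system, the divergence is probably elsewhere (e.g. pruning thresholds, score scaling, or a flooring step applied only on the diagonal path), not in the likelihood math itself.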
Back to Seekers of Perl Wisdom