The more I work with Perl test suites, the more I dislike the is_same( $got, $want ) methodology.
I've toyed with writing Perl tools so my modules could instead use the methodology of t/feature.t (a Perl script), t/feature.ok (the expected output), and "perl t/feature.t | diff t/feature.ok -".
Then often "fixing" the test suite is as simple as "perl t/feature.t > t/feature.ok" (once you've verified that the changes are correct).
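The shape of such a test script might look like the sketch below. The module under test here is a trivial uc() stand-in, and the case list is invented for illustration; the point is that the script just prints everything of interest, and the harness compares the whole output against t/feature.ok.

```perl
#!/usr/bin/perl
# t/feature.t -- a minimal sketch of this style of test script.
# (The cases and the uc() "feature" are illustrative stand-ins.)
use strict;
use warnings;

my $report = '';
for my $in ( 'simple', 'with spaces', '' ) {
    # Instead of is( $got, $want ), just record what the code produces.
    my $out = length($in) ? uc($in) : '(empty)';
    $report .= "input=<$in> output=<$out>\n";
}
print $report;
```

Running "perl t/feature.t | diff t/feature.ok -" reports any drift, and "perl t/feature.t > t/feature.ok" re-approves the output once you've inspected the diff.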
Even better is to use not plain 'diff' but something that knows how to ignore or transform the variant parts of the output (something I'll probably write up in more detail at some later date; a combination of simple 'quoting', simplistic templates, and simple reverse templating).
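A crude first approximation of that idea is a filter that rewrites the variant parts to fixed tokens before diffing. The particular patterns below (timestamps, PIDs, hex addresses) are illustrative assumptions, not the quoting/templating scheme mentioned above; real output would dictate its own set.

```perl
#!/usr/bin/perl
# normalize.pl -- mask variant parts of test output before diffing.
# (The patterns here are examples only; adjust to your own output.)
use strict;
use warnings;

sub normalize {
    my( $line ) = @_;
    $line =~ s/\b\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\b/<TIMESTAMP>/g;
    $line =~ s/\bpid \d+\b/pid <PID>/g;
    $line =~ s/\b0x[0-9a-fA-F]+\b/<ADDR>/g;
    return $line;
}

print normalize($_) while <>;
```

Then the check becomes "perl t/feature.t | perl normalize.pl | diff t/feature.ok -", with t/feature.ok stored in the already-normalized form.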
Update: BTW, this "easy to 'approve' new UT output" feature isn't the only reason I prefer this style of UT validation. It also means that when a test fails, someone can simply send you the output from "t/feature.t", and you've probably got all of the information you'd otherwise be hunting for in the debugger (if you could even get to a debugger in an environment where the problem is reproduced) or trying to obtain by adding "debugging" prints. By using 'diff' to validate the test, you've already figured out what is important to display.
It also encourages you to give all of your inner workings "dump to text" modes, which is often very handy in other phases of maintenance, or even when adding new features. Sure, sometimes Data::Dumper or similar is enough, but custom dumping usually cuts closer to the heart of the situation, and so is valuable even alongside a general-purpose dumper.
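As a sketch of what such a "dump to text" mode might look like: the class, its fields, and the output format below are all invented for illustration, but the idea is to print only what matters for diagnosis, in a stable, diff-friendly order, instead of whatever a general-purpose dumper happens to emit.

```perl
#!/usr/bin/perl
# A hypothetical class with a custom as_text() dump method.
use strict;
use warnings;

package Queue::Stats;

sub new {
    my( $class, %args ) = @_;
    return bless { pending => [], done => 0, %args }, $class;
}

# Dump only the fields that matter for diagnosis, in a fixed order,
# so the output diffs cleanly from run to run.
sub as_text {
    my( $self ) = @_;
    return sprintf "pending=%d done=%d first=%s\n",
        scalar @{ $self->{pending} },
        $self->{done},
        @{ $self->{pending} } ? $self->{pending}[0] : '(none)';
}

package main;

my $q = Queue::Stats->new( pending => [ 'job-a', 'job-b' ], done => 3 );
print $q->as_text();
```

A t/feature.t built this way just calls as_text() at interesting points, and the same method earns its keep later when you're debugging by hand.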