Re: Nested evals - are they evil?
by eyepopslikeamosquito (Archbishop) on Jul 31, 2007 at 08:11 UTC
|
As convincingly argued in Perl Best Practices, Chapter 13 (e.g. first guideline is "Throw exceptions instead of returning special values or setting flags"), generally you should embrace exception handling.
However, having each method simply catch and rethrow exceptions does seem evil ... or at least pointless. Maybe there was a historical reason for this?
Is there technical documentation for this system describing why this is being done and, more importantly, describing a rational and consistent error handling policy for the system as a whole?
Generally, throwing should be done when a method detects an error that it cannot resolve itself and catching should be done where there is sufficient knowledge/context to sensibly handle/report the error. Catching just to rethrow seems warranted only at "system boundaries" (e.g. translating a low level error message into a higher-level one); if a method isn't going to handle, translate, or intentionally ignore the error itself, it should simply let it propagate upwards to a caller who can.
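A minimal sketch of that boundary pattern (all names here are illustrative, not from the poster's system): the low-level routine just throws, and only the boundary routine catches, translating the message before rethrowing.

```perl
use strict;
use warnings;

# Low level: this routine can't resolve the error itself, so it throws.
sub read_config {
    my ($path) = @_;
    open my $fh, '<', $path
        or die "open $path: $!\n";
    local $/;                        # slurp mode
    return <$fh>;
}

# System boundary: catch, translate the low-level error into a
# higher-level one, and rethrow. Everything in between just propagates.
sub load_settings {
    my ($path) = @_;
    my $text = eval { read_config($path) };
    die "Configuration unavailable: $@" if $@;
    return $text;
}
```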
Re: Nested evals - are they evil?
by ikegami (Patriarch) on Jul 31, 2007 at 00:05 UTC
|
I could see this being a way of adding a stack trace if die was used (Copy failed: Unable to save document: Unable to create directory: Insufficient permissions), but confess already does that (albeit using more technical text). Any idea why this was done? Could this be a left-over from when die was used instead of confess?
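For comparison, a small sketch (with hypothetical sub names chosen to mirror that message) showing that confess alone reproduces the whole call chain, with no intermediate evals required:

```perl
use strict;
use warnings;
use Carp;

# One confess at the point of failure records every caller frame.
sub create_directory { confess "Insufficient permissions" }
sub save_document    { create_directory() }
sub copy_document    { save_document() }

eval { copy_document() };
print $@;   # message plus a "called at ... line N" entry per frame
```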
Re: Nested evals - are they evil?
by snoopy (Curate) on Jul 31, 2007 at 00:47 UTC
|
Is all the use of eval/checking as gratuitous as the above - just calling confess to die with a backtrace?
You might be able to install Carp's confess function as your fatal error handler. Then you can eliminate the evals and checking code.
#!/usr/bin/perl
#
# Die with backtrace
#
use Carp;
$SIG{__DIE__} = \&Carp::confess;

callMethod();

sub callMethod {
    # blah blah
    someMethod(0);
}

sub someMethod {
    1/shift;
}
Alternatively, you can set up local handlers and eliminate evals, on a case-by-case basis:
sub callMethod {
    local $SIG{__DIE__} = sub { confess("error calling someMethod: $_[0]") };
    someMethod(0);
}
Re: Nested evals - are they evil?
by perrin (Chancellor) on Jul 31, 2007 at 02:12 UTC
|
My approach to exceptions is that I never try to catch them unless I think I can do something to fix what went wrong and continue. I wrap the program at the top level to catch exceptions that nothing else has caught and print stack traces. There's no reason to have more than one eval block if all they're going to do is print a stack trace.
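A sketch of that arrangement, assuming Carp's confess is used to attach the trace at the die site (by the time a top-level catch runs, the stack has already unwound, so the handler has to capture it earlier):

```perl
use strict;
use warnings;
use Carp;

# Promote every die to a die-with-stack-trace, then catch exactly once.
$SIG{__DIE__} = \&Carp::confess;

sub run        { inner_work() }                  # illustrative names
sub inner_work { die "something went wrong" }    # no eval anywhere below

eval { run() };
print STDERR "Uncaught exception: $@" if $@;     # full trace via confess
```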
Re: Nested evals - are they evil?
by ysth (Canon) on Jul 31, 2007 at 00:55 UTC
|
Leave well enough alone?
If you really want to fix things, investigate all the possible causes of exceptions and decide ahead of time how you want to handle them. Then stop and consider whether that really provides any benefit over how things are now.
|
I'm leaving them for now, but since this is a daemon, I think (though I have yet to benchmark it) that removing the evals will speed things up a little. Or a lot. If a lot, removing them suddenly becomes essential.
I was just wondering right now if philosophically it was better to work towards removing them.
Hmmmm.
|
You think that block evals are a performance penalty? I've never heard that. Have you tested that theory at all?
|
Re: Nested evals - are they evil? (sample benchmarks)
by cLive ;-) (Prior) on Jul 31, 2007 at 04:49 UTC
|
OK, I just made up a quick test case:
use strict;
use warnings;
use Benchmark 'cmpthese';

cmpthese(10000, {
    'eval'   => sub { eval_code() },
    'noeval' => sub { no_eval_code() },
});

sub eval_code {
    my $x = 0;
    for (1..1000) {
        eval {
            $x += 1;
        };
    }
}

sub no_eval_code {
    my $x = 0;
    for (1..1000) {
        $x += 1;
    }
}
And, unless I've missed something blindingly obvious, in this trivial case there is quite an overhead with eval:
Rate eval noeval
eval 1495/s -- -62%
noeval 3953/s 164% --
But let's tweak that for a real life example. Here's some code where it "dies" one time in ten:
use strict;
use warnings;
use Benchmark 'cmpthese';

cmpthese(1000000, {
    'eval'   => sub { eval_code() },
    'noeval' => sub { no_eval_code() },
});

sub eval_code {
    my $x = 0;
    for (reverse 0..9) {
        eval {
            my $x = 100/$_;
        };
        if ($@) {
            error_sub();
        }
    }
}

sub no_eval_code {
    for (reverse 0..9) {
        if ($_ == 0) {
            error_sub();
        }
        else {
            my $x = 100/$_;
        }
    }
}

sub error_sub { }    # stub so the benchmark is runnable
and here's the benchmark results:
Rate eval noeval
eval 67797/s -- -39%
noeval 111111/s 64% --
Well, the no-eval version is still significantly faster. So have I proved eval slows things down, or is this a case-specific example that someone can find a counter-example for? I've been bitten by benchmarks on code snippets before, so I'm still wary of this result. Hmmmmm....
|
use strict;
use warnings;
use Benchmark 'cmpthese';

cmpthese(-1, {
    'eval'   => \&eval_code,
    'noeval' => \&no_eval_code,
});

sub eval_code {
    my $x = 0;
    for (reverse 0..9) {
        eval {
            some_more_code($_);
        };
        if ($@) {
            error_sub();
        }
    }
}

sub no_eval_code {
    for (reverse 0..9) {
        if ($_ == 0) {
            error_sub();
        }
        else {
            some_more_code($_);
        }
    }
}

sub some_more_code { }
sub error_sub { }
Prints:
Rate eval noeval
eval 185563/s -- -7%
noeval 198639/s 7% --
In words: the processing cost of an unexercised eval is essentially nothing.
DWIM is Perl's answer to Gödel
|
Hmmm, well, I stripped it down further:
use strict;
use warnings;
use Benchmark 'cmpthese';

cmpthese(-1, {
    'eval'   => \&eval_code,
    'noeval' => \&no_eval_code,
});

sub eval_code {
    eval {
        some_more_code();
    };
    if ($@) {
        error_sub();
    }
}

sub no_eval_code {
    if (0) {
        error_sub();
    }
    else {
        some_more_code();
    }
}

sub some_more_code { }
sub error_sub { }
And got this (I think the effectively empty eval is meaningless - see here where I ran it 4 times):
Rate eval noeval
eval 1298354/s -- -37%
noeval 2066450/s 59% --
Rate eval noeval
eval 1316991/s -- -38%
noeval 2123851/s 61% --
Rate eval noeval
eval 1217925/s -- -21%
noeval 1534287/s 26% --
Rate eval noeval
eval 1298354/s -- -41%
noeval 2205537/s 70% --
I have a feeling an empty eval gets optimized away at compile time (though I am most probably wrong :). How about if I add in an increment to ensure each eval actually has to be eval'd, i.e.
use strict;
use warnings;
use Benchmark 'cmpthese';

my $x;

cmpthese(-1, {
    'eval'   => \&eval_code,
    'noeval' => \&no_eval_code,
});

sub eval_code {
    $x = 1;
    eval {
        some_more_code();
    };
    if ($@) {
        error_sub();
    }
}

sub no_eval_code {
    if ($x == 0) {
        error_sub();
    }
    else {
        $x = 0;
        some_more_code();
    }
}

sub some_more_code { $x++ }
sub error_sub { }
I get this:
Rate eval noeval
eval 989400/s -- -23%
noeval 1286220/s 30% --
Rate eval noeval
eval 1092266/s -- -21%
noeval 1379705/s 26% --
Rate eval noeval
eval 1069351/s -- -19%
noeval 1327443/s 24% --
Rate eval noeval
eval 1071850/s -- -25%
noeval 1435550/s 34% --
Rate eval noeval
eval 1071850/s -- -26%
noeval 1456355/s 36% --
Either way, I still haven't been able to create a counter example that shows eval to be faster.
But anyway, considering the number of runs a second, I have a feeling my energies are probably better spent focusing on other parts of the code as far as optimizations are concerned :)
|
I would expect eval to be a little slower, but not that much. Maybe this is because of the extra scope you're adding here. Try changing the no_eval_code to this, to see if the extra scoping is free or not:
sub no_eval_code {
    for (reverse 0..9) {
        if ($_ == 0) {
            error_sub();
        }
        else {
            {
                my $x = 100/$_;
            }
        }
    }
}
|
Rate eval noeval
eval 88768/s -- -39%
noeval 146161/s 65% --
Re: Nested evals - are they evil? (not evil enough)
by grinder (Bishop) on Jul 31, 2007 at 11:03 UTC
|
eval {
    my ($dev, $inode) = stat($dir) or die "stat $dir\n";
    eval {
        opendir D, $dir or die "opendir $dir\n";
        eval {
            my $e = readdir(D) or die "readdir $dir\n";
            eval {
                open my $in, '<', $e or die "open $e\n";
                # etc etc
            }
        }
    }
}
Especially if there was lots of other code intertwined within those blocks, sprinkled with lots of conditionals on, and assignments to, $@, to confuse the unwary. Now that's what I would call evil nested evals.
Your puny evilness cannot hope to compare with this evilness (which is what I was expecting to find when I read the title of this node). Your code is naught but exception handling, which is not so much evil as merely vile :)
• another intruder with the mooring in the heart of the Perl
Re: Nested evals - are they evil?
by radiantmatrix (Parson) on Jul 31, 2007 at 15:27 UTC
|
This isn't what I'd call "nesting", and yes it is a little redundant. But, it's also harmless.
I'd guess that someone developed callMethod and did some cargo-cult error handling. There's not much point in catching an exception only to throw it again, but perhaps the coder planned on adding some additional handling code.
I'd further guess that later on, some developer (maybe the same one, maybe not) called callMethod, knowing only that it threw an error, and so caught it, and made the same cargo-cult mistake.
If all those various catching blocks (the if ($@) bits) are only re-throwing the error, then they are kind of pointless and redundant. They could probably be removed.
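To make the distinction concrete, here is a hypothetical reconstruction of the pattern being described (names invented for illustration). The two callers behave identically, so the eval in the first buys nothing:

```perl
use strict;
use warnings;
use Carp;

sub risky { confess "boom" }    # stand-in for the real worker method

# The redundant pattern: catch, then immediately rethrow unchanged.
sub call_method_verbose {
    eval { risky() };
    if ($@) {
        die $@;                 # rethrow with no added value
    }
}

# Equivalent and simpler: just let the exception propagate.
sub call_method_simple {
    risky();
}
```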
On the other hand, catching an exception and doing something useful about it (writing additional detail to the logs, asking the user what to do, retrying, etc.) is a good pattern, esp. for user-interactive programs. Always do the least surprising thing -- that is, whatever will least surprise your user.
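One sketch of doing something useful with a caught exception: log each failure and retry a few times before giving up (fetch_with_retry and the failing closure are illustrative, not from the original code):

```perl
use strict;
use warnings;

# Catch-and-retry: a case where the eval genuinely earns its keep.
sub fetch_with_retry {
    my ($fetch, $max_tries) = @_;
    for my $try (1 .. $max_tries) {
        my $result = eval { $fetch->() };
        return $result unless $@;
        warn "attempt $try failed: $@";                        # useful: log it
        die "giving up after $max_tries tries: $@" if $try == $max_tries;
    }
}

# Example: fails twice, then succeeds on the third attempt.
my $calls = 0;
my $value = fetch_with_retry(sub {
    $calls++;
    die "transient error\n" if $calls < 3;
    return "payload";
}, 5);
```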
<–radiant.matrix–>
Ramblings and references
The Code that can be seen is not the true Code
I haven't found a problem yet that can't be solved by a well-placed trebuchet