http://qs321.pair.com?node_id=1168499


in reply to Re^5: Variables are automatically rounded off in perl (audiences)
in thread Variables are automatically rounded off in perl

The notion that one grain of sand on a beach is not important, but one hundred are, is dubious to say the least.

My argument was actually that one grain of sand on a huge beach is almost never important. I made no statement about how many grains of sand add up to some greater level of importance. And I don't hear you refuting the claim that one grain of sand on the beach is almost never important (and so people dealing with such situations should expect to have to do a tiny bit of extra work). I also doubt anybody will argue with the assertion that the first significant bit is almost always important. There is, of course, no clear "line" with the bits on one side clearly important and those on the other side clearly not. Yet, in a 'double', we have at least one bit that almost always matters and at least one bit that almost never matters.
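
To put a rough size on those grains, here is a quick sketch (assuming the common 64-bit IEEE 754 'double' NV with its 53-bit mantissa):

    printf "%.17g\n", 1.0 + 2**-53;   # prints 1: a grain half the size of the last bit simply vanishes
    printf "%.17g\n", 1.0 + 2**-52;   # prints 1.0000000000000002: the last bit, barely hanging on
    printf "%.3g\n",  2**-52;         # ~2.22e-16, the relative size of that last grain near 1.0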

First: it's not about print really. It's about stringification.

Sure, I was talking about the default stringification which I sometimes referred to as 'print' as a shortcut.

Perl scalars can get pPOK from "action at a distance"; and stringification is currently lossy.

Certainly stringification can happen for subtle reasons. You seem to be implying that you can get a loss of precision due to "action at a distance". No, stringification is not lossy in that way. What is lossy is taking the default stringification and then converting that back to a number. Stringifying a Perl scalar does not cause any loss of precision in that scalar (the original numeric value remains alongside the stringification).
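
A minimal sketch of that distinction (assuming a Perl built with 64-bit IEEE doubles and the usual ~15-significant-digit default stringification):

    my $x = 0.1 + 0.2;        # not exactly 0.3 as a binary double
    my $s = "$x";             # default stringification: "0.3"
    printf "%.17g\n", $x;     # 0.30000000000000004; $x itself lost nothing by being stringified
    print 0 + $s == $x ? "round-trip kept the value\n"
                       : "round-trip lost precision\n";   # converting the string back is the lossy step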

Frankly, this sounds like you are discouraging perl use for scientific work

Perhaps you jumped to that conclusion because you thought that Perl's default stringification could cause numeric values to lose precision due to "action at a distance"? I certainly was not arguing that Perl is inappropriate for scientific work. I was making the point that just pasting the default stringification output from one set of calculations as input to another set of calculations can be inappropriate in scientific work. But then, my experience is that scientists are aware of this. Though, most scientists are calculating how many significant bits they can claim from their calculations, and those are almost always quite a bit fewer than 15 anyway (even 15 digits of accuracy in the measurements going into the calculations is almost unheard of in science, in my experience).

Someone scientifically minded will of course understand the caveats of floating point, but at the same time expect roundtrip accuracy.

What?! You think scientists are prone to take digit strings output from one calculation and paste them in for further calculations without realizing that some precision is lost? My experience is the opposite of that. Though, my experience is also that 15 digits of precision is so far above the significant digits in most scientific calculations that "15 vs 17 significant digits" is something that will often be ignored by a scientist.

- tye        

Re^7: Variables are automatically rounded off in perl (scientists)
by oiskuu (Hermit) on Jul 25, 2016 at 21:08 UTC

    Re: pPOK. I was thinking about things like this, though that's probably a bug in the module. In any case, bad design and bugs compound to make one hellish landscape. It's the difference between "things just work" and "things just don't work".

    Re: one grain of sand. I think I made it clear in the update that one-to-one mapping i.e. identity is important, even if the magnitude isn't. I don't hear you refuting that.

    Basically, there are two desirable properties to have:

    1. exact calculations, free of accumulating errors ("grains of sand"). Perl NV aka double cannot guarantee that, period. Feed complainers => -Mbigrat.
    2. round-tripping conversions. That can be guaranteed! And should be. "0.1" -> 0.1 -> "0.1" is about the shortest round trip; no truncation is necessary. (Both properties are sketched below.)
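
    A quick sketch of both (assuming a 64-bit IEEE 754 double NV; the bigrat output format and exact digits may vary by version and platform):

        # Property 1: exact rational arithmetic, no accumulating error (at some speed cost).
        #   perl -Mbigrat -le 'print 0.1 + 0.2'    # an exact rational such as 3/10

        # Property 2: a round-trippable stringification. 17 significant digits always
        # suffice to reproduce an IEEE 754 double exactly (if not always in shortest form).
        my $x    = 0.1 + 0.2;
        my $full = sprintf "%.17g", $x;            # "0.30000000000000004"
        print 0 + $full == $x ? "17 digits round-trip\n" : "17 digits lose precision\n";
        print 0 + "$x"  == $x ? "default round-trips\n"  : "default loses precision\n";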

    Finally, Re: cut and paste computing. Absolutely! Take a printed paper and run those numbers. Repeatability is the cornerstone. If you say you run statistics on IEEE doubles, but your data does not compute, someone will be upset.

      You link to two examples where stringification happened for subtle reasons. Neither of those appears to be a situation where loss of precision would have been caused.

      I think I made it clear in the update that one-to-one mapping i.e. identity is important

      Sorry, no, one-to-one mapping and identity are not very useful with floating point values. That is why people soon realize that '==' is pretty much useless with floating point values.

      Consider that you have $foo and $bar where "$foo" eq "$bar". What are the odds that $foo and $bar represent truly distinct values vs. that they represent what should be the same value that are only different because they were calculated via different means? The odds are astronomically in favor of the 2 values actually being inaccurate representations of the same value. So the current stringification is more likely to provide accurate identification than a full-accuracy stringification.
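
      For instance (a minimal sketch, assuming 64-bit doubles and the usual ~15-significant-digit default stringification):

          my $foo = 0.1 + 0.2;       # "the same" value, computed one way
          my $bar = 0.3;             # and arrived at directly
          print $foo == $bar     ? "==: same\n" : "==: different\n";   # different doubles
          print "$foo" eq "$bar" ? "eq: same\n" : "eq: different\n";   # same string, "0.3"
          printf "%.17g vs %.17g\n", $foo, $bar;                       # the two nearby representations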

      I can't say I've ever seen memoization done based on a floating point input. Memoization makes sense when you are going to have a fairly small number of different input sets, where the same input values are likely to repeat in subsequent calculations. That seems pretty unlikely with floating point values. Trying to think of situations where such might legitimately be done, I mostly come up with cases where one of the inputs is technically floating point but should be restricted to a small set of possible values: a case where the slightly inaccurate stringification would actually be a benefit.
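
      For illustration, a minimal hand-rolled sketch (the function names are made up for this example; it assumes the ~15-significant-digit default stringification):

          my %cache;
          sub expensive_calc { my ($x) = @_; return $x ** 2 }   # hypothetical stand-in for the real work
          sub cached_calc {
              my ($x) = @_;
              # Keyed on the default stringification, so nearly-equal inputs that
              # "should" be the same value collapse into a single cache entry.
              $cache{"$x"} //= expensive_calc($x);
          }
          print cached_calc(0.1 + 0.2), "\n";   # computes and caches under key "0.3"
          print cached_calc(0.3), "\n";         # cache hit thanks to the short stringification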

      Similarly, say you have a $val that current Perl stringifies to only a few digits but whose full-precision dump in base ten would require 17 digits. What are the odds that this value was obtained by combining values that only had a few digits, vs. that it was obtained from calculations involving values nowhere near values having only a few digits, vs. that it is something like sqrt(0.8)*sqrt(1.8), where odd values were combined but the result really should be just 1.2, not 1.200000000000002? The middle case is extremely unlikely.
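
      And indeed (a small sketch; assuming 64-bit doubles, the exact trailing digits may differ by platform):

          my $val = sqrt(0.8) * sqrt(1.8);   # mathematically exactly 1.2
          print "$val\n";                    # default stringification: 1.2
          printf "%.17g\n", $val;            # full precision: typically 1.2 plus a last-bit error
          print $val == 1.2 ? "exactly 1.2\n" : "not exactly 1.2\n";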

      - tye        

        When I choose doubles for whatever the task is (say computing pretty fractals), I expect the full precision of the underlying apparatus. Granted, stringification is not very expedient in this case, and hence all the more cause for nasty surprises.

        As for what the odds are that truncation accidentally leads to the correct result? Sorry, but no, hell no: I do not need anybody guessing (poorly) what the result might have been. I just need the calculations done as programmed.


        Edit. I'll add this little anecdote about astronomical odds.

        Two friends are idly looking out the window; the street is all but empty save for occasional traveler passing. One says, "hey, I'll bet a penny for your watch that the next one hundred passers-by are all men!" His mathematically astute friend replies, "taken! You'd better have that coin ready, because the odds for that are, quite, a-stronomical!" Very soon after, the first person slyly remarks, "What do you think, isn't that a soldiers' marching song we hear approaching?"