Determining the minimum representable increment/decrement possible?

by BrowserUk (Patriarch)
on Jun 15, 2016 at 14:14 UTC

BrowserUk has asked for the wisdom of the Perl Monks concerning the following question:

Given an arbitrary floating point value (eg. 1.9041105342991877e+258 or -8.2727285363069939e-293), how can I determine the smallest representable value that can be added to, or subtracted from, that FP value and cause it to actually change?

To demonstrate the problem:

$fp1 = 1.9041105342991877e+258;;

$change = $fp1 - 1.9041105342991876e+258;     ## note change from 7 to 6 in the last digit of the number being subtracted
printf "% 25.17g\n% 25.17g\n", $fp1, $change;;
 1.9041105342991886e+258
                        0                     ## no change

$change = $fp1 - 1.9041105342991875e+258;     ## note change from 7 to 5 in last digit
printf "% 25.17g\n% 25.17g\n", $fp1, $change;;
 1.9041105342991886e+258
                        0                     ## no change

$change = $fp1 - 1.9041105342991874e+258;     ## note change from 7 to 4 in last digit
printf "% 25.17g\n% 25.17g\n", $fp1, $change;;
 1.9041105342991886e+258
  2.1337646185215534e+242                     ## A change!  ### 1 ###

## Now try the increment instead of decrement
$change = $fp1 - 1.9041105342991878e+258;     ## note change from 7 to 8 in last digit
printf "% 25.17g\n% 25.17g\n", $fp1, $change;;
 1.9041105342991886e+258
 -2.1337646185215534e+242                     ## A change straight away!  ### 2 ###

##### Note that whilst the sign of the change at ### 1 ### & ### 2 ### is different, the absolute value is the same.

So a restatement of the problem might be: given 1.9041105342991877e+258, find the smallest number that can be both added to and subtracted from that number and cause it to change value; i.e. 2.1337646185215534e+242.
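(For the curious, a quick way to get a feel for this empirically is a halving probe along the lines of the sketch below. It only tries deltas of the form abs($x)/2**$k, so it brackets the answer rather than pinning down the exact minimum; the variable names are just illustrative.)

my $x = 1.9041105342991877e+258;
my $d = abs($x);
$d /= 2 while $x + $d/2 != $x && $x - $d/2 != $x;   ## halve until a further halving no longer changes $x in both directions
printf "%.17g\n", $d;                               ## lands near 2.1e+242 for this value (cf. the delta above)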


But the problem gets more interesting.

Now I try to work with the second value in the (eg.) above: -8.2727285363069939e-293

$fp2 = -8.2727285363069939e-293;;
printf "% 25.17g\n", $fp2;;
 -7.9999999999999948e-293    ### WTF? ###

Please note: that number (-8.2727285363069939e-293) was output from a perl program, though it is hard to reproduce as there were random numbers involved.

My only explanation for why it comes out differently to what was entered is that -8.2727285363069939e-293 must be a "denormal" number, which can be produced as a result of calculations and will be displayed, but when you initialise a variable with that value it gets normalised; hence what comes out is different to what went in.
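(For anyone wanting to check that theory on their own build: whether a value is actually stored as a denormal can be tested by looking at the exponent field directly. A rough sketch, assuming a 64-bit IEEE 754 double NV and a perl new enough for pack's '<' modifier; the sub name is purely illustrative.)

sub is_denormal {
    my $bits = unpack 'b64', pack 'd<', $_[0];   ## LSB-first bit string of the double
    my $exp  = substr $bits, 52, 11;             ## the 11 exponent bits
    my $frac = substr $bits,  0, 52;             ## the 52 fraction bits
    return $exp !~ /1/ && $frac =~ /1/;          ## all-zero exponent, non-zero fraction
}
print is_denormal(1e-310) ? "denormal\n" : "normal\n";   ## 1e-310 is below ~2.2e-308, so denormal
print is_denormal(1e-300) ? "denormal\n" : "normal\n";   ## normal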

All of which makes the problem I'm looking for a solution to -- that of finding the minimal representable change -- more complicated.

At this point someone is going to point me at this. Please don't, I've read it.

And someone is going to ask what this is for, which I could answer, but it would take us way off topic and doesn't change the posed problem.


With the rise and rise of 'Social' network sites: 'Computers are making people easier to use everyday'
Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
"Science is about questioning the status quo. Questioning authority". I knew I was on the right track :)
In the absence of evidence, opinion is indistinguishable from prejudice. Not understood.

Re: Determining the minimum representable increment/decrement possible?
by davido (Cardinal) on Jun 15, 2016 at 15:52 UTC

    This post is based on the following theory:

    <theory>Since Perl's floating point precision is constrained by what can be stored in an NV, the minimal change to a floating point value that Perl should be able to accommodate would be obtained by twiddling the least significant bit in an NV.</theory>

    Here's an example (which may be completely bogus) of what I'm getting at. Unfortunately my understanding of Perl's internal representation of a float within an NV is limited, so this approach is at best just intended to explore the possibility.

    use strict;
    use warnings;

    my $fp = 1/2;    # m/2^n may be represented perfectly in base 2.
    printf "%25.32g\n", $fp;

    my $p = pack 'F', $fp;
    my $b = unpack 'b*', $p;
    print "$b\n";

    $b =~ s/(.)/$1 ? 0 : 1/e;
    print "$b\n";

    my $np = pack 'b*', $b;
    my $nfp = unpack 'F', $np;
    printf "%25.32g\n", $nfp;

    The output:

                          0.5
    0000000000000000000000000000000000000000000000000000011111111100
    1000000000000000000000000000000000000000000000000000011111111100
    0.50000000000000011102230246251565

    So it would appear that the least significant change that can be represented in an NV that was holding the decimal floating point value of 0.5 would be 0.00000000000000011102230246251565.

    In my example, I started with a value that I knew could be represented precisely in native base-2 format, and essentially ended up with another number that can be stored precisely in native base-2. It just happens to be a number that is rather ugly in base 10.
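    As a quick sanity check of that delta (nothing the code above doesn't already show, just the gap expressed directly): the next representable double above 0.5 should differ from it by exactly 2**-53.

    printf "%.17g\n", 2 ** -53;                                  ## 1.1102230246251565e-16
    printf "%.17g\n", 0.50000000000000011102230246251565 - 0.5;  ## the same value, recovered from the two printed doubles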

    Apologies if this is getting off into the wrong field of weeds, or if my example inadequately implements the theory I put forth, but the topic seemed interesting and I thought I'd post my exploration.


    Dave

      Following up on the excellent answer davido came up with, and using the code he gave, 1.9041105342991877e+258 gives the binary value 0100001110111101100010001110100001001000001011011111000110101110

      Going the other way, the binary string just obtained is displayed as 1.90411053429919e+258

      Increasing the least significant bits to see when we see a change:

      0100001110111101100010001110100001001000001011011111000110101110 - still 1.90411053429919e+258
      1100001110111101100010001110100001001000001011011111000110101110 - still 1.90411053429919e+258
      1110001110111101100010001110100001001000001011011111000110101110 - still 1.90411053429919e+258
      1111001110111101100010001110100001001000001011011111000110101110 - still 1.90411053429919e+258
      1111101110111101100010001110100001001000001011011111000110101110 - still 1.90411053429919e+258
      1111111110111101100010001110100001001000001011011111000110101110 - finally 1.9041105342992e+258

      Why did we have to go in 6 bits from the end to see a change in the floating point representation?

      There are 53 bits in the mantissa, and if the last 6 don't show, that means that there are only 47 being used for the display in floating point. log(2^47)/log(10) gives us about 14 significant digits. This is what the answers above seem to indicate.
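      (That back-of-the-envelope figure is easy to check, e.g.:)

      perl -le "print log(2**47)/log(10)"   ## ~14.15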

      I think that the answer is going to depend on how many digits the machine displays for a given IEEE754 value. For my machine, I need to change the value by roughly about 10^-14 from its original value to see a change.

      P.S. I hope I transcribed everything correctly here. As always, I have to use another machine to run the perl.

        Excuse me while I clear out my cranial flatus. I was not forcing the maximum precision; I was using the default. If I had used printf "%25.32g" as davido did, I would have gotten slightly different results:

        0100001110111101100010001110100001001000001011011111000110101110 - 1.9041105342991886e+258
        1100001110111101100010001110100001001000001011011111000110101110 - 1.9041105342991888e+258

        It would appear that in this case, 1.9041105342991887e+258 is not possible to store. Using all 53 bits of mantissa, I still get about 15.95 digits instead of the 14. Each case will be different and require converting to binary, twiddling the bits, and seeing what you get.

      But it still comes down to the definition of floating point. There are fifteen or sixteen decimal digits maximum in Perl's floating point implementation, so stagnation will occur at either the sixteenth or seventeenth significant digit, irrespective of the exponent. Having said that, there is a limit to the absolute value of the exponent before Perl cries 'Inf'. Curiously, it will also cry Inf when approaching zero, which seems a bit flaky to me when it could just zero the thing.

      Update: Inf as in infinity. The ambiguity being a nod to a famous line from an old film, 'Carry on Cleo', where Caesar cries: 'Infamy, infamy, they've all got it in fer me'.

      One world, one people

      Is your perl compiled to use long doubles for NV?

      I ask because my perl is using normal doubles and I get different results to you:

      printf "% 25.32g\n", unpack 'F', pack 'b*', '1000000000000000000000000 +000000000000000000000000000011111111100';; 0.50000000000000011

      (In general I think you've hit on at least part of the solution to my quest; though it is complicated by the production of denormals, which I'm still trying to wrap my head around; so I'll get back to you once I've wrapped my brain around them.)



        I think this is due to differences in the C library. You're using a Windows build, right? I think this has been unified in 5.22.

Re: Determining the minimum representable increment/decrement possible?
by Marshall (Canon) on Jun 15, 2016 at 15:40 UTC
    You have one heck of a complicated question!
    There are additional complications with the Intel FPU in terms of reproducible results. The FPU uses 80 bits internally for its calculations, so it's using more bits for intermediate calculations than went in. You may find this article of interest: Intel FPU Precision. As opposed to your commonly quoted link above, this one is more like: "What most computer scientists probably don't need to know about floating point"! However, some of this may be required for your question. I hope this "how the guts work" article helps with at least the "what the hardware is doing" part.
      The FPU uses 80 bits

      That's true of the old 8087 FPUs (and later on-chip compatible units), but (almost*) no compiler newer than, say, 7 or 8 years still uses the old 80-bit fpr0 .. fpr8 registers and associated instructions.

      My cpu is ~10 years old, and it has MMX (which only uses the lower 64 bits of the 87-style registers) and SSE, SSE2, SSE3, SSSE3, which use the lower 64 bits of the 128-bit xmm register set.

      (*The D compiler provides an 80-bit fp native type.)


Re: Determining the minimum representable increment/decrement possible?
by pryrt (Abbot) on Jun 15, 2016 at 22:14 UTC

    Basically, to determine the smallest delta for a given floating-point value, decompose that value's ieee representation into sign, fraction, and exponent. My perl's NV is double-precision (53-bit precision): from MSB to LSB, a sign bit, 11 exponent bits, and 52 fractional bits (plus an implied 1), giving sign*(1+fract_bits/2^52)*(2**exp). For double precision (which my perl's NV is), the smallest change would be 2**(-52+exp). So you should just need to know the exp for the current representation (not forgetting to subtract 1023 from the 11-bit exponent number). If your NV is more precise (lucky you), just adjust it appropriately.
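    Something along those lines, for example (a rough sketch only: it assumes a 64-bit double NV, a normal value (not denormal, Inf or NaN), and a perl new enough for pack's '<' modifier; the sub name is just for illustration):

    sub ulp_of {
        my $bits    = unpack 'b64', pack 'd<', $_[0];      ## LSB-first bit string of the double
        my $expbits = reverse substr $bits, 52, 11;        ## the 11-bit biased exponent, MSB first
        my $biased  = oct "0b$expbits";
        return 2 ** ($biased - 1023 - 52);                 ## i.e. 2**(exp - 52) once the bias is removed
    }
    printf "%.17g\n", ulp_of(1.9041105342991877e+258);     ## 2.1337646185215534e+242 - the delta seen in the OP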

    What I'm unsure about is whether, if you take a denormal number and add a small fraction, it re-normalizes the number before doing the addition, or whether it's limited by its existing denormal exponent.

    The last time floating point came up (in Integers sometimes turn into Reals after substraction), I started work on a module that will expand an ieee754 double-precision float into sign*(1+fract)*2^exp, based on the internal representation. Unfortunately, that module isn't ready for prime-time. But I'll still link you to an Expand.pm development copy, along with debug.pl (which will eventually become my .t file(s), but for now shows you how I currently use my functions). This may or may not help you delve deeper into the problem. Right now it's focused on 53-bit precision (and hampered by the fact that I want it to work on a machine at $work that is limited to perl 5.6, so I cannot use the > modifier for pack/unpack), but the same ideas should work for you...

    update: change urls to a more "permanent" location

      Basically, to determine the smallest delta for a given floating-point value, decompose that value's ieee representation into sign, fraction, and exponent. My perl's NV is double-precision (53-bit precision): from MSB to LSB, a sign bit, 11 exponent bits, and 52 fractional bits (plus an implied 1), giving sign*(1+fract_bits/2^52)*(2**exp). For double precision (which my perl's NV is), the smallest change would be 2**(-52+exp). So you should just need to know the exp for the current representation (not forgetting to subtract 1023 from the 11-bit exponent number). If your NV is more precise (lucky you), just adjust it appropriately.

      That's a nice insight. Thank you. If I need to go this route, that is almost certainly the way to do it.

      I'm not certain of this yet, but I think the question in my OP may not actually be required. The problem appears to be -- I'm still struggling with x64 assembler to try and confirm this -- that I'm encountering a lot of denormal numbers; and if I can avoid them, the underlying problem that my OP was an attempt to solve goes away.

      And the problem with avoiding denormal numbers is that the code generating them is a Newton-Raphson iteration converging on zero: exactly where denormals raise their ugly heads. The solution may be (maybe?) to add a constant (say 1 or 2) to both sides of the equations and converge to that number instead. (Not sure about that, but it might work :)

      I started work on a module that will expand an ieee754 double-precision float into sign*(1+fract)*2^exp, based on the internal representation.

      You might find Exploring IEEE754 floating point bit patterns. (and, as you're working with 5.6, the version at Re^2: Exploring IEEE754 floating point bit patterns.) interesting or even useful.



        Wow! Your convergence criteria are a lot more stringent than mine ever have been, if you're trying to get below a 1-bit shift for a 53-bit precision number. (In anything I've tried to converge, I've never cared about less than ~1ppm.)

        Your offset technique to avoid the denormals sounds promising.

        And thanks for the links; I'll keep them bookmarked for when I get a chance to work on my module more.

        I'm sure you've moved beyond this... but for future searchers:

        While trying to implement some ULP-manipulating routines in my Data::IEEE754::Tools development (which I used to call Expand.pm), I was googling to fix a bug and was led to Data::Float, which actually has most of the tools I was trying to implement in my module. Using that CPAN module, I was able to write a sub which finds the ULP for any finite value (normal or denormal); it returns INF or NAN when one of those is passed.

        use warnings;
        use strict;
        use feature qw/say/;
        use Data::Float qw/pow2 float_parts float_hex significand_bits
            min_normal min_normal_exp min_finite min_finite_exp max_finite
            pos_zero neg_zero pos_infinity neg_infinity nan
            float_is_zero float_is_finite/;

        sub find_ulp($) {
            my $nv = shift;
            my $vh = float_hex($nv);
            if(!float_is_finite($nv)) {   # INF or NAN
                say "ulp($nv:$vh) = $nv:$vh";
                return $nv;
            }
            my $e = float_is_zero($nv)
                ? (min_finite_exp)
                : ( (float_parts($nv))[1] - significand_bits );
            my $uv = pow2($e);
            my $uh = float_hex($uv);
            say "ulp($nv:$vh) = $uv:$uh";
            return $uv;
        }

        find_ulp($_) foreach (2.0, 1.0, 0.5, 0.16, max_finite, min_normal,
            min_normal/2., min_finite, pos_zero, neg_zero, pos_infinity,
            neg_infinity, nan);

        __END__
        ulp(2:+0x1.0000000000000p+1) = 4.44089209850063e-016:+0x1.0000000000000p-51
        ulp(1:+0x1.0000000000000p+0) = 2.22044604925031e-016:+0x1.0000000000000p-52
        ulp(0.5:+0x1.0000000000000p-1) = 1.11022302462516e-016:+0x1.0000000000000p-53
        ulp(0.16:+0x1.47ae147ae147bp-3) = 2.77555756156289e-017:+0x1.0000000000000p-55
        ulp(1.79769313486232e+308:+0x1.fffffffffffffp+1023) = 1.99584030953472e+292:+0x1.0000000000000p+971
        ulp(2.2250738585072e-308:+0x1.0000000000000p-1022) = 4.94065645841247e-324:+0x0.0000000000001p-1022
        ulp(1.1125369292536e-308:+0x0.8000000000000p-1022) = 4.94065645841247e-324:+0x0.0000000000001p-1022
        ulp(4.94065645841247e-324:+0x0.0000000000001p-1022) = 4.94065645841247e-324:+0x0.0000000000001p-1022
        ulp(0:+0.0) = 4.94065645841247e-324:+0x0.0000000000001p-1022
        ulp(0:-0.0) = 4.94065645841247e-324:+0x0.0000000000001p-1022
        ulp(Inf:+inf) = Inf:+inf
        ulp(-Inf:-inf) = -Inf:-inf
        ulp(NaN:nan) = NaN:nan
Re: Determining the minimum representable increment/decrement possible?
by syphilis (Archbishop) on Jun 16, 2016 at 13:18 UTC
    For an Inline::C solution, the following works ok for me with gcc on Windows:
    use strict;
    use warnings;

    use Inline C => Config =>
        BUILD_NOISY => 1,
        ;

    use Inline C => <<'EOC';

    double nxtafter(double in, double dir) {
        return nextafter(in, dir);
    }

    EOC

    my $pos_inf = (99 ** 99) ** 99;
    my $neg_inf = -$pos_inf;

    my $next_down = nxtafter(1.9041105342991877e+258, $neg_inf);
    printf "%.16e\n", $next_down;

    my $next_up = nxtafter(-8.2727285363069939e-293, $pos_inf);
    printf "%.16e\n", $next_up;

    __END__
    Outputs:
    1.9041105342991875e+258
    -8.2727285363069927e-293
    With the latest MS compiler that I have (14.00.40310.41) I need to replace "nextafter" with "_nextafter" in order to get the script to compile.
    But it then produces incorrect results anyway:
    1.9041105342991884e+258 -7.9999999999999937e-293
    Perhaps a more recent MS compiler fares better.

    Cheers,
    Rob

      I'm probably being thick here, but where does nextafter() come from?


        ... where does nextafter() come from?

        As pryrt noted, it's in math.h - which perl.h already (indirectly) includes.

        With my MS compiler, I found matches for the string "nextafter" in:
        C:\_64\Platform_SDK\Include\crt\float.h
        C:\_64\Platform_SDK\Include\crt\fpieee.h
        C:\_64\Platform_SDK\Include\crt\math.h
        C:\_64\Platform_SDK\src\crt\float.h
        C:\_64\Platform_SDK\src\crt\fpieee.h
        Cheers,
        Rob

        it looks like it's in math.h -- I didn't know about it until syphilis's post, either... I love how much I learn here. :-)

        For completeness, nextafter() is specified as part of math.h in C99 (section 7.12.11.3 in the N1256 draft).
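        (Worth noting, though I'm going from memory here: perls from 5.22 onwards also expose it directly as POSIX::nextafter, which avoids the Inline::C step entirely. A minimal sketch, assuming such a perl:)

        use POSIX qw(nextafter);   ## believed to be in core POSIX from perl 5.22 onwards
        printf "%.16e\n", nextafter(1.9041105342991877e+258, -9**9**9);   ## next representable value towards -Inf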

Re: Determining the minimum representable increment/decrement possible?
by syphilis (Archbishop) on Jun 16, 2016 at 09:34 UTC
    Firstly, don't assume that perl will assign either of those values correctly. At around those precisions, it's not uncommon for perl to assign values that are off by (up to) a few ULPs.

    I would do it with Math::MPFR (though the real credit goes to the mpfr library):
    use strict;
    use warnings;
    use Math::MPFR qw(:mpfr);

    # Set $prec and $out_prec appropriately
    my $prec = 53;     # NV has 53-bit significand
    # my $prec = 64;   # NV has 64-bit significand
    # my $prec = 106;  # NV has 106-bit significand
    # my $prec = 113;  # NV has 113-bit significand

    Rmpfr_set_default_prec($prec);

    my $str1 = '1.9041105342991877e+258';
    my $str2 = '-8.2727285363069939e-293';

    my $obj1 = Math::MPFR->new($str1);
    my $obj2 = Math::MPFR->new($str2);

    Rmpfr_nextbelow($obj1);
    Rmpfr_nextabove($obj2);

    print "$obj1\n$obj2\n";

    # To convert the objects to NVs:
    my $nv1 = Rmpfr_get_NV($obj1, MPFR_RNDN);
    my $nv2 = Rmpfr_get_NV($obj2, MPFR_RNDN);

    my $out_prec = 17;    # 53-bit significand
    # my $out_prec = 21;  # 64-bit significand
    # my $out_prec = 33;  # 106-bit significand
    # my $out_prec = 36;  # 113-bit significand

    $out_prec--;

    printf "%.${out_prec}e\n%.${out_prec}e\n", $nv1, $nv2;

    __END__
    Outputs:
    1.9041105342991875e258
    -8.2727285363069927e-293
    1.9041105342991875e+258
    -8.2727285363069927e-293
    To put it glibly, we just need to increase/decrease the value by one ULP.

    Cheers,
    Rob
      At around those precisions ...

      I meant "At around those exponents... ".

      I would have made this correction as an update to the original post .... but the option to make that update is seemingly currently unavailable.

      UPDATE: hippo has just pointed me to the thread where this recent change (which occurred while I was away) was discussed.
      ++ to BrowserUk for pushing for the option to re-instate the update box to its original location and ++ to jdporter for enabling that option.

      Cheers,
      Rob
Re: Determining the minimum representable increment/decrement possible?
by syphilis (Archbishop) on Jun 17, 2016 at 12:38 UTC
    $fp2 = -8.2727285363069939e-293;;
    printf "% 25.17g\n", $fp2;;
     -7.9999999999999948e-293    ### WTF? ###


    Hadn't noticed this before - a perl that produces that result is simply brainfucked.
    I can reproduce that garbage on a perl-5.18.0 that I built with my Platform_SDK compiler on Windows 7:
    C:\>perl -le "printf '%.16e', -8.2727285363069939e-293;" -7.9999999999999948e-293
    Yet, according to the very same perl, the 2 values are entirely different:
    C:\>perl -le "print scalar reverse unpack 'h*', pack 'd<', -7.99999999 +99999948e-293;" 83465a792c83e3b6 C:\>perl -le "print scalar reverse unpack 'h*', pack 'd<', -8.27272853 +63069939e-293;" 83498bf832dfdfab
    Unless I'm mistaken, this is not perl's fault. (But I am mistaken - see UPDATE and UPDATE2.) Perls built (from the same source) with gcc on the same machine do not suffer this problem.
    I would not rely on the accuracy of *any* floating point result that such a perl produced.

    UPDATE: After some double-checking I have found that the mingw-w64 gcc-4.7.x x64 compilers built perl-5.18.x with exactly the same problem as the Platform_SDK compiler build.
    The problem went away with perl-5.20.0 onwards, but I don't know if that was because of changes to the perl source. I used later versions of gcc to build 5.20.0 onwards - and maybe that's what fixed the issue.

    UPDATE2: The problem with 5.18.0 is simply that perl assigns the wrong value to the NV - instead of assigning the hex format 834a6aec8f941351, it assigns 83498bf832dfdfab, which by my calculation is off by 245,141,108,700,070 ULPs. (Such inaccuracies are usually quite small, and I was thrown by the large size of this one.)
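    (For anyone who wants to reproduce that count: since both values are finite, non-zero and share the same sign, treating the two bit patterns as 64-bit integers and subtracting gives the distance in ULPs directly. A sketch, assuming a perl with 64-bit integers:)

    my $correct  = hex '834a6aec8f941351';   ## the bit pattern that should have been assigned
    my $assigned = hex '83498bf832dfdfab';   ## the bit pattern perl actually assigned
    print $correct - $assigned, "\n";        ## 245141108700070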

    Cheers,
    Rob

      Indeed. The previous post was done using 5.10.1; this is 5.22:

      C:\Program Files>\Perl22\bin\perl.exe \perl22\bin\p1.pl
      printf "% 25.17g\n", -8.2727285363069939e-293;;
       -8.2727285363069883e-293    ## manually realigned to highlight the difference.
      [0]{} Perl>

      Of more importance is the source of the numbers that apparently cannot be represented by 64-bit floating point. At least in perl.

      They are produced & output by the C++ code I'm optimising. I originally thought that the C++ math was producing denormals and C++ was outputting them unnormalised, but that does not seem to be the case:

      print join ' ', unpack 'a1 a11 a52', scalar reverse unpack 'b64', pack 'd', $_
          for -8.2727285363069939e-293, -8.2727285363069883e-293;;
      1 00000110100 1010011010101110110010001111100101000001001101001100
      1 00000110100 1010011010101110110010001111100101000001001101000111

      Even the latest perl seems just to be getting the conversion wrong!? Both numbers are representable as 64-bit IEEE 754 numbers, so why Perl should silently convert one to the other is something of a mystery. Basically, Perl's (or the underlying CRT's) input routine just seems to be broken.

      Update: The 5.22 perl I'm using is built with the same compiler and libraries as the C++ code; so this seems to be a PerlIO issue rather than a CRT issue.


        printf "% 25.17g\n", -8.2727285363069939e-293;;
        -8.2727285363069883e-293


        Yeah, I get the same with mingw-built x64 5.22.0.
        This is the perl bug I alluded to in my first post that mis-assigns values by up to a few ULPs.
        If you check the hex format (assigned by perl) of -8.2727285363069939e-293 you'll find:
        C:\_32>perl -le "print scalar reverse unpack 'h*', pack 'd<', -8.2727285363069939e-293;"
        834a6aec8f94134c
        However, the correct hex format of -8.2727285363069939e-293 is 834a6aec8f941351.
        834a6aec8f94134c does in fact correspond to -8.2727285363069883e-293 - so perl is doing the conversion from internal form to decimal string correctly. It's just the initial conversion from decimal string to internal form that was incorrect.

        Sadly, no-one seems interested in fixing this - though I think that's because of the degree of difficulty rather than actual "disinterest".
        If this level of inaccuracy bothers you (as it does me) then one solution is to assign using POSIX::strtod:
        C:\_32>perl -MPOSIX -le "print scalar reverse unpack 'h*', pack 'd<', POSIX::strtod('-8.2727285363069939e-293');"
        834a6aec8f941351

        C:\_32>perl -MPOSIX -le "printf '% 25.17g', POSIX::strtod('-8.2727285363069939e-293');"
         -8.2727285363069939e-293
        This way you put your faith in the C compiler and that seems to be a safer bet.
        (That's probably good enough, though I'd rather put my faith in the mpfr library.)

        Cheers,
        Rob
Re: Determining the minimum representable increment/decrement possible?
by BillKSmith (Monsignor) on Jun 15, 2016 at 18:43 UTC

    As is often the case, your difficult problem is really two problems. '1.9041105342991877e+258' is a floating point value to a mathematician, but to perl it is a string. When you convert that string to your computer's floating point and back to a string, the resulting string may not be exactly the same. This has nothing to do with addition. The addition introduces another problem: when the value that you add is sufficiently small, you do not change the floating point representation. If it is just large enough to change the representation, you may not be able to tell after converting back to a string. Sorry, the interaction of these two problems is much too complicated for me.

    A pure perl issue can complicate things some more. A scalar has a string and a numeric value. Perl does an excellent job of choosing the right one, but in special cases, it can surprise us. I doubt that this is a concern for any of your examples.

    Bill
      '1.9041105342991877e+258' is a floating point value to a mathematician, but to perl it is a string.

      Sorry, but that is a red-herring:

      printf "% 25.17g\n", 1.9041105342991877e+258;; 1.9041105342991886e+258

Re: Determining the minimum representable increment/decrement possible?
by ikegami (Patriarch) on Jun 15, 2016 at 20:20 UTC

    How portable do you need this to be in terms of Perl version and architecture?
