octave-maintainers

floating point precision control


From: Mike Miller
Subject: floating point precision control
Date: Sun, 10 Jul 2016 12:48:23 -0700
User-agent: Mutt/1.6.0 (2016-04-01)

jwe, all,

There have been a few related, but separately patched, problems with
floating point precision on different systems. I think we may need to
take a step back and decide what the right approach should be on all
systems.

tl;dr: should we just enforce double extended (long double) precision on
all x86 systems all the time, or should we limit when we change it from
the system default?

Links to related issues:

https://savannah.gnu.org/bugs/?40607
https://savannah.gnu.org/bugs/?48319
https://savannah.gnu.org/bugs/?48418

Related changesets (keywords "fpu" and "long double"):

http://hg.savannah.gnu.org/hgweb/octave/rev/9439f3b5c5fa
http://hg.savannah.gnu.org/hgweb/octave/rev/824c05a6d3ec
http://hg.savannah.gnu.org/hgweb/octave/rev/ac9fd5010620
http://hg.savannah.gnu.org/hgweb/octave/rev/79653c5b6147
http://hg.savannah.gnu.org/hgweb/octave/rev/0cd39f7f2409
http://hg.savannah.gnu.org/hgweb/octave/rev/6f0290863d50
http://hg.savannah.gnu.org/hgweb/octave/rev/67a5cb9cd941
http://hg.savannah.gnu.org/hgweb/octave/rev/d18c63a45070
http://hg.savannah.gnu.org/hgweb/octave/rev/a5a99a830c8c
http://hg.savannah.gnu.org/hgweb/octave/rev/79ee6df71b51

Also relevant:

https://gcc.gnu.org/wiki/FloatingPointMath

Full disclosure: I don't fully understand all of the different
combinations of what the compiler, the runtime (glibc or mingw), and the
operating system try to configure by default; I'm just learning as I go.

The first issue I was aware of was the JVM changing the FP control word
on GNU/Linux 32-bit systems. When Java was not loaded, Octave happily
used double extended precision by default. I added calls to reset the FP
control word to enforce double extended precision after every JVM call
returns control back to Octave. The working assumption here was that we
always want to use double extended precision regardless of CPU type.

Then in #40607 it was reported that certain int64 operations were losing
precision on Windows 32-bit systems. You made several improvements; the
overall effect is that, for certain 64-bit integer operations, the FP
control mode is set to double extended precision, the operation is
performed, and then the FP control is restored to whatever it was
before. The assumption here seemed to be that we should respect the
runtime's default setting and only switch to double extended for the
operations that need it.

Now in #48418 it was reported that certain floating point comparisons
were failing on Windows 32-bit systems by a few epsilon, but started
passing after any Java function was called. Naturally I guessed that
this is related to the FP control being set to double extended after
Java calls. It seemed to me that programs running in 32-bit mode on
Windows don't use double extended precision by default, but that maybe
Octave should, so I pushed a fix to enable it unconditionally when
Octave starts up.

So now it appears that we have a mish-mash of patches all touching the
FP control word in slightly different ways. When Octave starts, it is
set to double extended precision mode unconditionally. Any time we add
or multiply a 64-bit integer with a double, we set it to double extended
precision mode and then back to what it was (redundant now?). And any
time we call a Java function, we check the value and set it back to
double extended precision mode if it changed.

We still have a couple of possibly related bugs: the same tests that
were failing on Windows with a 32-bit Octave also fail with a 64-bit
Octave, and in that case the FP control word setting doesn't seem to
have any effect at all (bugs #48364 and #48365).

So should we do the bare minimum to ensure that the FP control word is
always set to double extended? Or should we only change it for certain
specific computations, and adjust failing tests accordingly where we
know the default precision on some systems may be lower than the tests
were previously written to assume?

Thanks for reading,

-- 
mike


