octave-maintainers

Re: odd behaviour with 2x2 matrix floating-point multiplication


From: Jordi Gutiérrez Hermoso
Subject: Re: odd behaviour with 2x2 matrix floating-point multiplication
Date: Tue, 16 Dec 2014 11:05:09 -0500

On Tue, 2014-12-16 at 06:37 +0100, Marco Atzeri wrote:
> additional, for both blas's using the 64 bit versions:
>
> ii =  10000
> count_err =
> 
>     0
>     0
> 
> ans =
> 
>     0
>     0

This seems like a dead giveaway that one of the BLASes is using
shorter x86 registers, or is perhaps splitting 64-bit doubles to fit
into 32-bit registers. The x87 FPU uses 80 bits for its internal
registers, but values in memory are stored as 64-bit doubles, so
precision can be lost when values move back and forth between
registers and memory.

What does the following C++ program output when you compile it with
32-bit and 64-bit MinGW?

    #include <iostream>

    int main(){
      // Halve eps until adding it to 1 no longer changes the result.
      double eps = 1.0;
      while (1 + eps > 1) {
        eps /= 2;
      }
      // The last eps that still changed the result is 2*eps.
      std::cout << "Machine epsilon is " << 2*eps << std::endl;
    }

Try also with and without -O2 in each case. If in some configuration
your result differs from 2.22045e-16, then in that configuration
you're probably using the 80-bit x87 registers. Amazingly, using a
register that can hold a wider float can actually result in a loss of
precision!

HOWEVER, Jo, if you are relying on comparing a floating-point
computation to exact zeros, YOU STILL HAVE A BUG. The "solution" is
not to compile your BLAS to avoid the 80-bit x86 registers, because
that can have other undesirable effects, such as slower computation.

- Jordi G. H.





