Re: tolerance in binopdf.m


From: Marco Atzeri
Subject: Re: tolerance in binopdf.m
Date: Wed, 21 Sep 2011 17:01:03 +0200
User-agent: Mozilla/5.0 (Windows NT 5.1; rv:6.0.2) Gecko/20110902 Thunderbird/6.0.2

On 9/21/2011 3:07 PM, Ben Abbott wrote:

On Sep 21, 2011, at 1:59 AM, Jordi Gutiérrez Hermoso wrote:

Just for fun, I asked someone to run this program on a Macintosh:

    #include <stdio.h>
    #include <math.h>

    int main()
    {
      printf("%0.100f\n", lgamma(3.0));
    }

It turns out it does run, i.e., the Mac does have an lgamma
implementation. However, the exact value it prints there is
390207173010335/2^49, while on my Debian system the same program
prints 31216573840826795/2^52. It seems that somewhere along the way,
3 bits of precision were lost on the Macintosh.
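
(For reference, here is a minimal sketch of a program that prints the
bits directly instead of a 100-digit decimal; everything in it is
standard C99 (%a, frexp, ldexp), and lgamma(3.0) should be
log(2!) = log 2:)

    #include <stdio.h>
    #include <math.h>

    /* Print lgamma(3.0) bit-exactly: once in C99 hex-float form, and
       once as an integer numerator over a power of two, so results
       from different machines can be compared directly.  The numerator
       may differ from the lowest-terms form by factors of two when the
       trailing mantissa bits are zero.  Link with -lm on most systems. */
    int main(void)
    {
      double x = lgamma(3.0);  /* log(2!) = log 2 ~ 0.6931471805599453 */
      int e;
      double m = frexp(x, &e); /* x == m * 2^e, with 0.5 <= m < 1 */

      printf("%a\n", x);                              /* exact hex bits */
      printf("%.0f / 2^%d\n", ldexp(m, 53), 53 - e);  /* numerator/2^k */
      return 0;
    }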

So I don't think there's anything we can do other than increase the
tolerance by 3 bits to account for this. We already relax tolerances
slightly in other tests for other systems.
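
(For scale: 3 bits is a factor of 2^3 = 8 in the worst-case relative
error, so a test that passed at a tolerance of eps ~ 2.2e-16 would
need roughly 8*eps ~ 1.8e-15 to absorb the Macintosh result.)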

HTH,
- Jordi G. H.

Thanks for all the quick replies. From config.log I see ...

| #define HAVE_LGAMMA 1
| #define HAVE_LGAMMAF 1
| #define HAVE_LGAMMA_R 1
| #define HAVE_LGAMMAF_R 1

Thus, it looks to me as if Apple has a different implementation of lgamma (?).
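
For what it's worth, my reading of those macros (not anything
Octave-specific) is that the _R variants are the reentrant form, which
returns the sign of gamma through a pointer instead of the global
signgam. A small sketch, assuming the usual extension signature:

    #include <stdio.h>
    #include <math.h>

    /* lgamma() stores the sign of gamma(x) in the global signgam;
       lgamma_r() is the reentrant variant that returns it through a
       pointer.  lgamma_r is a common extension, not ISO C; on glibc
       it may need -D_DEFAULT_SOURCE, so declare it explicitly here. */
    extern double lgamma_r(double, int *);

    int main(void)
    {
      int sign;
      double v = lgamma_r(-2.5, &sign);  /* gamma(-2.5) is negative */
      printf("log|gamma(-2.5)| = %g, sign = %d\n", v, sign);
      return 0;
    }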

As this is not a bug in Octave, I'm inclined to add a tolerance for Mac OS X.

However, I'm curious about what Apple did.

I'm using Xcode 4.1. The sources are below.

        http://www.opensource.apple.com/

Does anyone have an idea of where to look to find the sources for lgamma?

Ben


Could it be here?

http://opensource.apple.com/source/Libm/
http://opensource.apple.com/source/Libm/Libm-2026/Source/Intel/xmm_erfgamma.c

Regards
Marco


