On Sep 21, 2011, at 1:59 AM, Jordi Gutiérrez Hermoso wrote:
Just for fun, I asked someone to run this program on a Macintosh:
#include <stdio.h>
#include <math.h>

int main (void)
{
  printf ("%0.100f\n", lgamma (3.0));
  return 0;
}
It turns out it does run, i.e. the Mac does have an lgamma implementation.
However, the exact value it outputs is 390207173010335/2^49, while on
my Debian system the exact value is 31216573840826795/2^52. The Mac
result can be written with a denominator of 2^49, i.e. with 3 fewer bits
after the binary point than the Debian result, so it seems that somewhere
along the way 3 bits of precision were lost on the Macintosh.
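For reference (this sketch is not part of the original message): since
lgamma(3.0) should equal log(2!) = log(2), one way to see the difference
without decoding 100 decimal digits is to print the result in C99
hex-float form and measure its error in ulps against log(2.0). A minimal
sketch, assuming a C99 compiler (link with -lm on Linux):

#include <stdio.h>
#include <math.h>

int main (void)
{
  double got  = lgamma (3.0);                       /* should be log(2!) = log(2) */
  double want = log (2.0);                          /* correctly rounded reference */
  double ulp  = nextafter (want, INFINITY) - want;  /* spacing of doubles near want */

  /* %a prints the exact bits as a hexadecimal fraction, so trailing zero
     bits (the suspected lost precision) are easy to see. */
  printf ("lgamma(3.0) = %a\n", got);
  printf ("log(2.0)    = %a\n", want);
  printf ("difference  = %.1f ulps\n", fabs (got - want) / ulp);
  return 0;
}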
So I don't think there's anything we can do other than increase the test
tolerance by 3 bits (a factor of 2^3 = 8) to account for this. We already
loosen the tolerance slightly on other systems for other tests.
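The actual change would go into Octave's test suite rather than into C
code, but as a rough sketch of what "increase the tolerance by 3 bits"
means numerically, the allowed relative error is simply widened by a
factor of 2^3 = 8 (the constants here are illustrative assumptions, not
the real test):

#include <assert.h>
#include <float.h>
#include <math.h>

int main (void)
{
  double got  = lgamma (3.0);
  double want = log (2.0);
  /* Widen the allowed relative error by a factor of 2^3 = 8, so a result
     whose 3 low-order bits differ from the reference still passes. */
  double tol  = 8 * DBL_EPSILON * fabs (want);
  assert (fabs (got - want) <= tol);
  return 0;
}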
HTH,
- Jordi G. H.
Thanks for all the quick replies. From config.log I see ...
| #define HAVE_LGAMMA 1
| #define HAVE_LGAMMAF 1
| #define HAVE_LGAMMA_R 1
| #define HAVE_LGAMMAF_R 1
Thus, it looks to me as if Apple has a different implementation of lgamma (?).
As this is not a bug in Octave, I'm inclined to add a tolerance for MacOS.
However, I'm curious about what Apple did.
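For context on the configure results above: lgamma and lgammaf report the
sign of Gamma(x) through the global variable signgam, while the _r
variants are the reentrant versions that return the sign through a
pointer. A minimal sketch (the _GNU_SOURCE define is an assumption,
needed on glibc to expose lgamma_r and harmless on MacOS):

#define _GNU_SOURCE   /* assumption: exposes lgamma_r on glibc; harmless on MacOS */
#include <math.h>
#include <stdio.h>

int main (void)
{
  int sign = 0;
  /* lgamma/lgammaf report the sign of Gamma(x) via the global signgam;
     lgamma_r/lgammaf_r are the reentrant variants that return it here. */
  double v = lgamma_r (-2.5, &sign);
  printf ("log|Gamma(-2.5)| = %g, sign = %d\n", v, sign);
  return 0;
}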
I'm using Xcode 4.1. Apple publishes its sources at the site below.
http://www.opensource.apple.com/
Does anyone have an idea of where to look to find the sources for lgamma?
Ben