[Axiom-developer] [#167 Infinite Floats Domain] Float, Real, RealClosure


From: wyscc
Subject: [Axiom-developer] [#167 Infinite Floats Domain] Float, Real, RealClosure
Date: Tue, 14 Jun 2005 19:30:31 -0500

Changes 
http://page.axiom-developer.org/zope/mathaction/167InfiniteFloatsDomain/diff
--


--
From William Sit, Tuesday June 14 20:06:00 -4:00
Subject: Float, Real, RealClosure

Tim wrote:

>This raises the same kind of implementation issue that indefinite computation 
>raises except that indefinites are symbolic and infinite precision floats are 
>not.

But that is a very big difference. For example, Axiom using 'FLOAT' can 
simulate arbitrary precision with automatic recomputation right now. All one 
has to do is, instead of storing the value of an identifier in a fixed floating 
point representation, store it as a function (or macro) that computes that value. 
This is exactly the idea used by Michael Stoll. In Mathematica, this means using 
'SetDelayed' instead of 'Set'; in Axiom, it means using '==' instead of ':='.  
The defining expression (function) is stored with the identifier, but no 
composition is actually stored. Instead, when evaluation time comes, the values 
of the identifiers involved are recursively computed (the actual implementation 
may be more clever and efficient, but this discussion is on the conceptual 
level). Evaluation is always possible as long as all the identifiers at the 
leaves can be evaluated.
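The contrast between ':=' and '==' can be seen directly (a minimal sketch; the 
particular constant and bit counts are mine):

\begin{axiom}
bits(20)$Float
a := sqrt(2)::Float     -- ':=' stores the value, frozen at 20 bits
b == sqrt(2)::Float     -- '==' stores the defining expression
bits(60)$Float
a                       -- still the old 20-bit value
b                       -- recomputed at 60 bits
\end{axiom}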

However, in my view, this still is not infinite precision, in the sense that 
$2^{-34} + 2^{35}$ will not compute exactly because the system does not 
*automatically* increase precision in 'FLOAT' (it won't, even using macros).  
But one *can* compute it by *manually* increasing the precision (here a 70-bit 
mantissa is needed, to span $2^{35}$ down to $2^{-34}$; in general the required 
precision is by no means obvious). Compare this with 'FRAC INT', where there is 
no need to manually increase the length of the integers in numerators or 
denominators. Floating point systems are designed to compute only significant 
digits.
(More comments on implementing infinite precision below.)

\begin{axiom}
)clear all
bits(68)$Float          -- 68 bits: not enough for 2^35 + 2^(-34)
x == 2^(-34)::Float
y == 2^(35)::Float
z == y+x
x
y
z                       -- the 2^(-34) part is lost at 68 bits
bits(200)$Float
z                       -- recomputed via the macros, now exact
bits(68)$Float
2^(-34)+2^35            -- no Float coercion: exact in FRAC INT
\end{axiom}

In contrast, evaluation is not always possible for indefinites (a topic to be 
discussed separately). It is difficult to evaluate indefinites, since evaluation 
results in the recursive composition of functions (the technique involved bears 
similarity to code optimization by a compiler, but unfortunately, we do not want 
just the code, but a mathematically readable expression). The indefinites *are* 
the functions themselves, *not* their functional values. The *output form* of 
an indefinite is either the defining expression (typically useless, since no 
computation is performed), or recursively composed (typically a mess), or 
recursively analyzed into cases (proviso-attached expressions, where the 
provisos are extremely computationally demanding to simplify). When an 
indefinite is involved in a loop control construct, it is difficult to 
"simplify". None of these difficulties are present in infinite precision 
'FLOAT', however it may be implemented.

Bill Page wrote:

> So %pi already has this kind of "closure" built-in.
> Is it really possible to do this more generally for
> all possible computations with real numbers?

Yes.

\begin{axiom}
bits(68)$Float
x == sin(1/sqrt(2)::Float)
x
bits(100)$Float
x
\end{axiom}

> But surely there is an isomorphism between the domain of
> **infinite precision** floating point numbers and the domain
> of rationals, no?

If by such a domain you mean a floating point representation where the 
mantissa has infinite precision (not arbitrary precision), then you might as 
well use infinite sequences of rational numbers in decimal (or binary) 
expansion; in theory you could then represent $\sqrt{2}$ and perform exact 
arithmetic like $1/3 + 1/5$. Since the set of rational numbers is dense in the 
set of real numbers, and 'FRAC INT' is the field of rational numbers, rational 
number sequences are better for approximating real numbers, and of course they 
would be exact on rationals (unlike a floating point system, FPS).
In that case the isomorphism would be with the field of real numbers, not the 
rationals.
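As a small illustration of the contrast (a sketch; nothing here beyond the 
interpreter's defaults):

\begin{axiom}
1/3 + 1/5                  -- exact in FRAC INT: 8/15
bits(10)$Float
(1/3 + 1/5)::Float         -- only a 10-bit approximation of 8/15
mantissa %                 -- the recurring binary expansion is truncated
\end{axiom}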

But this is of course not what floating point systems are. In an FPS, you may be 
capturing more and more rational numbers as the precision is increased, but the 
gaps between two consecutive 'FLOAT' numbers remain large at higher exponents 
(the exponent amplifies the gaps). Also, not every rational number has a finite 
decimal (or binary) expansion (1/3 would not be representable in an arbitrary 
precision FPS with base 10). So any rational with a recurring (non-terminating) 
fractional expansion in the base will not be representable, and there cannot be 
a bijection (not to mention an isomorphism, since floating point operations do 
not respect even the associative law).
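Both phenomena are visible in 'FLOAT' itself (a sketch at 10 bits; the specific 
numbers are mine):

\begin{axiom}
bits(10)$Float
t := (1/3)::Float
mantissa t                 -- the recurring expansion 0.010101... is cut off
exponent t
u := (2^20 + 1)::Float
mantissa u                 -- the +1 is absorbed: the gap near 2^20 is 2^11
exponent u
\end{axiom}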

Another way of stating the difference is that an IPFPS (infinite precision FPS) 
requires the limiting value of the mantissa (and an associated exponent to 
locate the fractional point), which does not exist in an APFPS (arbitrary 
precision FPS, which I view as the infinite union of $k$-precision FPS over all 
positive $k$; your view may be different). There are problems with simulating 
an IPFPS by automatically adjusting the precision to yield exact results on 
floating point computation. The resulting system still cannot perform exact 
arithmetic on all rational (therefore, real) numbers, due to truncation and 
round off. Whether the error term tends to zero would depend on the particular 
computation involved (the problem may be ill-conditioned). We would also need 
algorithms to automatically adjust the precision, worry about efficiency 
because of repeated recalculation at higher precision, and perhaps worry about 
termination. There is also an intrinsic problem in the representation.
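In symbols (my own formalization of the union view, with binary base and the 
normalization 'FLOAT' uses):

$$F_k = \{\, m \cdot 2^{e} : m, e \in \mathbb{Z},\ 2^{k-1} \le |m| < 2^k \,\} \cup \{0\}, \qquad \mathrm{APFPS} = \bigcup_{k \ge 1} F_k.$$

Every element of this union has a finite mantissa; an IPFPS number would be a 
limit $\lim_{k\to\infty} m_k 2^{e_k}$, which in general lies outside every $F_k$.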

Consider the problem of extending the precision of two FP numbers that have 
identical FP representations at 10-bit precision. To avoid truncation errors due 
to binary-to-decimal conversion, we will work with the internal representation 
of 'FLOAT' only.

\begin{axiom}
z == sqrt(3)::Float        -- a macro: recomputed at the current precision
bits(10)$Float
mz := mantissa z
ez := exponent z
z10 := mz*2^ez             -- the exact rational value of the 10-bit approximation
z10float == z10::Float     -- a second macro with the same 10-bit value
mantissa z10float
exponent z10float
mantissa(z-z10float)       -- 0: identical at 10 bits
\end{axiom}

So here we have two identical 10-bit floating point numbers, both defined as 
macros. If we extend the precision, they would no longer be the same.

\begin{axiom}
bits(20)$Float
mantissa z
mantissa z10float
\end{axiom}

This raises some questions. It shows that the current representation of 
'FLOAT' is not equipped to handle "infinite precision" floating point 
computation, whatever that may be, and indeed not even arbitrary precision, 
since extending the precision of a number depends not just on the number but 
on the function used to define it. An accompanying question is how to test 
equality. Similarly, if we are to find the squares of 'z' and 'z10float', what 
precisions should be set for the proposed "infinite precision" FPS?
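For instance, equality of the two macros above flips with the precision (a 
sketch using 'zero?' from 'Float', continuing the session above):

\begin{axiom}
bits(10)$Float
zero?(z - z10float)        -- true: indistinguishable at 10 bits
bits(20)$Float
zero?(z - z10float)        -- false: they separate at 20 bits
\end{axiom}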

There should be a lot of similarities between IPFP and power series, because we 
should treat IPFP numbers as infinite sequences, not floats. In power series, 
we start off with the infinite, compute with the infinite (lazy evaluation), 
but display finitely. In 'FLOAT', we start with the finite and extend, and 
there is no unique way to do so. In using 'FLOAT' to simulate IPFP, we start 
with the infinite when we use macros, compute either finitely (':=') or 
infinitely ('==', the equivalent of lazy evaluation), and display finitely; but 
the objects are not correctly stored in the data representation. Just as we 
don't simulate power series using polynomials, we shouldn't simulate IPFP using 
an FPS. This *is* Axiom, after all.
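Compare the power series behavior (a sketch; 'series' and the ')set streams 
calculate' setting are standard interpreter facilities):

\begin{axiom}
s := series(sin x, x = 0)  -- an infinite object, displayed finitely
)set streams calculate 15
s                          -- more terms computed lazily, on demand
\end{axiom}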



> Maybe these **computable reals** are something else? Isn't
> it related to the RealClosure as already implemented in
> Axiom?

An APFPS is not the same as RealClosure.

For some background material, see Lang: Algebra. Here I recall a few basic 
definitions. A field $K$ is a *real field* if $-1$ is not a sum of squares in 
$K$. A field $K$ is *real closed* if $K$ is real and any algebraic extension 
of $K$ that is real must be $K$ itself. Every real closed field has a unique 
ordering. The *real closure* of a real field $K$ is an algebraic extension of 
$K$ that is real closed. Every real field $K$ has a real closure $L$, and if 
$K$ is ordered, $L$ is unique (up to a unique isomorphism). In $L$, every 
positive element is a square, and the algebraic closure of $L$ is $L[\sqrt{-1}]$. 
For example, the real closure of $\mathbb{Q}$ with its usual ordering is the 
field of real algebraic numbers.

RealClosure implements computation in a real closure of a base field. In 
particular, it does not compute with any element that is transcendental over 
the base field. RealClosure provides two modes of computation: one in terms of 
minimal polynomials, the other in terms of an interval containing the 
particular root, used for approximation. Its relation with 'FLOAT', I believe, 
would be in solving polynomial equations numerically.
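For example (a sketch; I am assuming the 'sqrt' exported by the 'RECLOS' 
constructor):

\begin{axiom}
Ran := RECLOS(FRAC INT)
a := sqrt(2)$Ran
b := sqrt(3)$Ran
(a + b)^2                  -- exact: 5 + 2*sqrt(6), no rounding anywhere
(a + b)^2 > 9              -- comparisons are decided exactly
\end{axiom}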

--
forwarded from http://page.axiom-developer.org/zope/mathaction/address@hidden



