On 24.09.2014 20:06, Daniel J Sebald wrote:
On 09/24/2014 12:29 PM, Oliver Heimlich wrote:
Hello Jo, hello Dan,
On 24.09.2014 17:47, Daniel J Sebald wrote:
Would there be any utility in attempting to "integerize" the range, if possible, and thereby eliminate the accumulation of errors? For example, [-2:0.1:0] is equivalent to [-20:1:0]*.1, so if the limits turn out to be factorable, then internally the range could be represented slightly differently from what the user types.
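The benefit of the scaled-integer form can be illustrated in Python, whose double-precision floats follow the same IEEE 754 semantics as Octave's (an illustration, not part of the original mail):

```python
# Accumulating the step 0.1 drifts, because every partial sum is rounded:
acc = 0.0
for _ in range(10):
    acc += 0.1
print(acc == 1.0)        # False: acc ends up as 0.9999999999999999

# Computing each element as integer * step rounds only once per element:
print(10 * 0.1 == 1.0)   # True
```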
You try to make the colon operator context sensitive and require the use of decimal arithmetic. This is not a good idea. It would be a very special behaviour that conforms neither to IEEE 754 nor to what an experienced user would expect. It would create even more unpredictable cases.
I suggest that you (or any other user) prefer the linspace function over the colon operator, and use the colon operator carefully with the binary floating point context in mind. For example, instead of 0.1 you can use the numbers 0.125 (=2^-3) or 0.09375 (=2^-4 + 2^-5), which produce far smaller representational errors.
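That these steps are exact while 0.1 is not can be checked with Python's fractions module, which converts a double to the exact rational it stores (again an illustration, not from the original mail):

```python
from fractions import Fraction

# 0.125 and 0.09375 convert to binary floating point without any error:
print(Fraction(0.125)   == Fraction(1, 8))    # True
print(Fraction(0.09375) == Fraction(3, 32))   # True: 2^-4 + 2^-5 = 3/32

# 0.1 does not: the stored double is slightly larger than 1/10.
print(Fraction(0.1) == Fraction(1, 10))       # False
```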
That's true, but for binary numbers. My point was that no matter the number representation system, the underlying arithmetic logic unit (ALU) should have mathematical consistency. That is, if the ALU carries out an operation, the result should be the equivalent of what is expected in mathematics, number representation aside. I'm wondering if there is consistency in hardware architecture. It may not matter that "0.1" (which actually equals 0.10000000000xxx) doesn't equal 1/10, so long as the following is true:
If you want to have mathematical consistency, which means no difference between numbers and their internal representation, then (at least) two problems arise: (1) You want to do decimal arithmetic and not binary arithmetic. (2) You want to have infinite precision for your (intermediate) results.
The first is possible; there are decimal arithmetic toolboxes/libraries out there. They are usually slower than binary arithmetic (by a factor of about 4). The second is a general problem, since you will soon run into infinite continued fractions and finite memory boundaries. However, computer algebra systems can do a good job here.
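Both options are available in Python's standard library, which makes for a compact illustration of the trade-off (my example, not from the original mail):

```python
from decimal import Decimal
from fractions import Fraction

# Decimal arithmetic: 0.1 is exact when stored in base 10,
# so the classic binary rounding surprise disappears.
print(Decimal('0.1') * 3 == Decimal('0.3'))    # True

# Rational arithmetic gives exact results for +, -, *, /,
# at the cost of numerators and denominators that keep growing.
print(Fraction(1, 10) * 3 == Fraction(3, 10))  # True
```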
IEEE 754 is a very popular standard and is implemented both in software and hardware. As long as you are fine with binary floating point arithmetic of finite precision (mostly 64 bit) you will see consistency amongst all standard compliant systems and get decent performance.
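This consistency is observable: the double closest to 0.1 has the same bit pattern on every IEEE 754 compliant system. Python can display it exactly (an illustration of mine, not from the mail):

```python
# The exact binary representation of the double nearest to 0.1:
print((0.1).hex())        # 0x1.999999999999ap-4

# The same value printed in decimal shows the representational error:
print('%.20f' % 0.1)      # 0.10000000000000000555
```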
octave-cli:1> rem(-2,.1) == 0
ans = 1
See the definition of rem(x,y): x - y .* fix (x ./ y)
The division results in exactly -20. The relative error of 0.1 is too small to show up and is lost in the rounding. Then you multiply 0.1 by -20; again, the result is rounded and you get exactly -2, so the remainder is 0.
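These two rounding steps can be replayed in Python, which uses the same IEEE 754 doubles (fix() corresponds to math.trunc here; my illustration, not from the mail):

```python
import math

x, y = -2.0, 0.1

# Step 1: the division rounds to exactly -20.
print(x / y == -20.0)               # True

# Step 2: 0.1 * (-20) rounds back to exactly -2, so rem(-2, 0.1) is 0.
print(x - y * math.trunc(x / y))    # 0.0

# By contrast, math.fmod computes the remainder without intermediate
# rounding and therefore does see the representational error in 0.1:
print(math.fmod(x, y) == 0.0)       # False
```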
octave-cli:2> rem(0,.1) == 0
ans = 1
Any inaccuracy is lost when you divide 0 by anything.
Try it with some numbers that we know cannot be represented exactly in base 10 or base 2:
octave-cli:2> (pi/pi) == 1.0
ans = 1
This is because x/x == 1 for any finite, nonzero x (for 0, Inf, or NaN you get NaN instead). I do not have to emphasize that both πs are equal.
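The identity holds because the true quotient, 1, is exactly representable, and IEEE 754 returns exact results unchanged. A quick Python check, including the special-value exception (my illustration; note that Python raises ZeroDivisionError for 0.0/0.0 instead of returning NaN):

```python
import math

# x / x is exactly 1 for any finite, nonzero double:
print(all(x / x == 1.0 for x in (math.pi, 1e-300, 7.0)))   # True

# Inf / Inf is NaN, not 1:
print(math.isnan(math.inf / math.inf))                     # True
```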
octave-cli:3> ((50*pi)/pi) == 50.0
ans = 1
octave-cli:4> ((pi*pi)/pi) == pi
ans = 1
Both 50 and the π constant are binary floating point numbers, so the results may be exact. Additionally, the π constant's very last binary digits are zero, so there is some protection against rounding errors. Try the following:
octave:1> x = pi + eps * 2;
octave:2> x * 50 / 50 == x
ans = 0
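The same experiment can be run in Python (my illustration). eps is 2^-52 there as well, and since the spacing of doubles near π is 2^-51, adding 2*eps lands exactly one ulp above the double π, where the trailing-zero protection is gone:

```python
import sys, math

eps = sys.float_info.epsilon           # 2^-52, as in Octave

# One ulp above the double pi: the round trip lands one ulp off.
x = math.pi + 2 * eps
print(x * 50 / 50 == x)                # False

# The double pi itself survives the same round trip.
print(math.pi * 50 / 50 == math.pi)    # True
```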