

From: Dan Sebald
Subject: [Octave-bug-tracker] [bug #53683] numerical output format for float values with magnitude greater than precision is wrong
Date: Tue, 17 Apr 2018 21:09:28 -0400 (EDT)
User-agent: Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:55.0) Gecko/20100101 Firefox/55.0

Follow-up Comment #1, bug #53683 (project octave):

Here's a changeset that will address this bug, but the fix may not be 100%
accurate as to what the intended behavior is.  It produces


octave:1> for i=-2:8
>   10^i / 3
> endfor
ans =  0.0033333
ans =  0.033333
ans =  0.33333
ans =  3.3333
ans =  33.333
ans =  333.33
ans =  3333.3
ans =   33333
ans =  333333
ans = 3333333
ans = 33333333


which shows how the right-side digits are eventually discarded.  Note that in
the changeset I chose ld and rd to reflect the scientific or engineering
understanding of "precision", not the computer/C interpretation.
Here's the formula change:


       if (digits > 0)
         {
-          ld = digits;
-          rd = (prec > digits ? prec - digits : prec);
+          ld = (prec > digits ? digits : prec);
+          rd = (prec > digits ? prec - digits : 0);
         }


where my expectation was that ld limited to prec (i.e., 5) would create

ans = 33333000

rather than

ans = 33333333

as an example; but obviously it doesn't do that.  In some sense, what I
intended could be a desirable behavior for those who want to present data in a
paper or something and limit the precision to, say, three digits without
having to do the rounding manually.  Doing anything manually opens the
door for mistakes, so why not let the computer do the rounding?

Nonetheless, I suspect that the ld precision is interpreted in the C sense at
the following point in the code:


template <typename T>
std::ostream&
operator << (std::ostream& os, const pr_formatted_float<T>& pff)
{
  octave::preserve_stream_state stream_state (os);

  float_format real_fmt = pff.m_ff;

  if (real_fmt.fw >= 0)
    os << std::setw (real_fmt.fw);

  if (real_fmt.prec >= 0)
    os << std::setprecision (real_fmt.prec);

  os.flags (static_cast<std::ios::fmtflags>
            (real_fmt.fmt | real_fmt.up | real_fmt.sp));

  os << pff.m_val;

  return os;
}


So, there are a few questions, the primary one being what the output should
do in terms of "precision".  It might be good to create a set of tests with
perhaps fifty examples covering all the various formats: output each result
to a string and compare against the expected text.  If the attached change is
small enough and fixes an obvious bug, then apply it as is, but in the long
run this may need a more extensive review.

(file #43963)
    _______________________________________________________

Additional Item Attachment:

File name: octave-precision_right_side_digits-djs2018apr17.patch  Size: 1 KB


    _______________________________________________________

Reply to this item at:

  <http://savannah.gnu.org/bugs/?53683>

_______________________________________________
  Message sent via/by Savannah
  http://savannah.gnu.org/



