From: Dan Sebald
Subject: [Octave-bug-tracker] [bug #54405] octave_idx_type index integer overflow math check doesn't work correctly
Date: Sun, 29 Jul 2018 14:52:40 -0400 (EDT)
User-agent: Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:55.0) Gecko/20100101 Firefox/55.0

URL:
  <http://savannah.gnu.org/bugs/?54405>

                 Summary: octave_idx_type index integer overflow math check doesn't work correctly
                 Project: GNU Octave
            Submitted by: sebald
            Submitted on: Sun 29 Jul 2018 06:52:38 PM UTC
                Category: Interpreter
                Severity: 3 - Normal
                Priority: 5 - Normal
              Item Group: Incorrect Result
                  Status: None
             Assigned to: None
         Originator Name: 
        Originator Email: 
             Open/Closed: Open
         Discussion Lock: Any
                 Release: dev
        Operating System: Any

    _______________________________________________________

Details:

The octave_idx_type is checked somewhere internally (I think) to make sure
that a user doesn't specify some matrix dimension for which the overall number
of elements is greater than

std::numeric_limits<octave_idx_type>::max ()

E.g., say the number of rows NR is less than ::max() and the number of
columns NC is less than ::max(), but using x(:) creates a vector with NR * NC
elements, a count that exceeds ::max() in ordinary algebra and therefore
overflows in 32-bit or 64-bit integer math.
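
For illustration, here is a standalone sketch of that failure mode (my own
example, not Octave code; it assumes octave_idx_type is a 64-bit signed
integer, as on a 64-bit-index build):


#include <cstdint>
#include <iostream>
#include <limits>

int main ()
{
  typedef std::int64_t octave_idx_type;

  // Each dimension individually fits in the index type ...
  octave_idx_type nr = 3074457345618258432;  // about max () / 3
  octave_idx_type nc = 3074457345618258432;

  // ... but mathematically nr * nc is far above max ().  Signed
  // overflow is undefined behavior in C++; on typical two's-complement
  // hardware the product wraps to a negative value (the trace later in
  // this report shows -8198552921648660480 for these inputs).
  octave_idx_type total = nr * nc;

  std::cout << std::numeric_limits<octave_idx_type>::max () << "\n"
            << total << "\n";
  return 0;
}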

However, it doesn't appear that this check is quite correct.  In reference to
Bug #54100

https://savannah.gnu.org/bugs/?func=detailitem&item_id=54100

I put an overflow test in for the scenario where the user specifies a matrix
size for which the octave_idx_type element count would overflow.  Although
that test seems to catch the situation and call the error() function, it is
the more general error that appears.  Here's the output, where I've printed
some additional info using std::cerr:


octave:10> fid = fopen("zeros1000by61.dat","r"); tic; xt = fread (fid, [9223372036854775807/3, 9223372036854775807/3], 'char'); toc; fclose(fid);
OIT MAX: 9223372036854775807
nr: 3074457345618258432  nc: 3074457345618258432
(std::numeric_limits<octave_idx_type>::max () / nr): 3
nr * nc: -8198552921648660480
input_buf_size: 1048576
error: out of memory or dimension too large for Octave's index type


To reiterate, I confirmed that the following overflow check I put in place
does trigger:


    // Check for overflow.
    if (nr > 0 && nc > (std::numeric_limits<octave_idx_type>::max () / nr))
      error ("fread: dimension too large for Octave's index type");


but the following more general error from libinterp/parse-tree/pt-eval.cc is
appearing:


        catch (const std::bad_alloc&)
          {
            // FIXME: We want to use error_with_id here so that give users
            // control over this error message but error_with_id will
            // require some memory allocations.  Is there anything we can
            // do to make those more likely to succeed?

            error_with_id ("Octave:bad-alloc",
                           "out of memory or dimension too large for Octave's index type");


This could just be a consequence of the try-catch block, i.e., there are two
errors, so the broader error message is the one displayed.

This is an ancillary issue, but it seems to me the interpreter could be more
specific about whether it is an "out of memory" error or a "dimension too
large" error.  The "dimension too large" condition could be checked in a
constructor, so that std::bad_alloc is reserved for a genuine system
allocation failure.
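
A hypothetical sketch of that separation (the helper name and placement are
my own, not Octave's actual internals):


// Hypothetical helper, not current Octave code: reject an impossible
// element count before any allocation is attempted.
octave_idx_type checked_numel (octave_idx_type nr, octave_idx_type nc)
{
  if (nr < 0 || nc < 0
      || (nr > 0
          && nc > std::numeric_limits<octave_idx_type>::max () / nr))
    error ("dimension too large for Octave's index type");

  return nr * nc;  // provably in range at this point
}


With something like that in place, a catch (const std::bad_alloc&) handler
could report plain "out of memory" without the "or dimension too large"
qualifier.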

Anyway, the bigger issue is that the overflow check doesn't seem quite right
for a large size that fits in the unsigned 64-bit range but not in the signed
octave_idx_type range, such as in the following example:


octave:9> fid = fopen("zeros1000by61.dat","r"); tic; xt = fread (fid, [9223372036854775807, 9223372036854775807], 'char'); toc; fclose(fid);
OIT MAX: 9223372036854775807
nr: -9223372036854775808  nc: -9223372036854775808
(std::numeric_limits<octave_idx_type>::max () / nr): 0
nr * nc: 0
Elapsed time is 0.000128984 seconds.
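
A likely explanation for the negative nr (an assumption on my part, but
consistent with the trace): the literal 9223372036854775807 is parsed as a
double, and the nearest representable double is 2^63 = 9223372036854775808,
which no longer fits in a signed 64-bit octave_idx_type.  A standalone
demonstration:


#include <cstdint>
#include <cstdio>

int main ()
{
  // 9223372036854775807 is not representable as a double; the literal
  // rounds up to 2^63 = 9223372036854775808.
  double d = 9223372036854775807.0;
  std::printf ("%.0f\n", d);            // 9223372036854775808

  // Converting an out-of-range double to a signed integer is undefined
  // behavior in C++; x86's conversion instruction yields INT64_MIN,
  // which matches the nr printed in the trace above.
  std::int64_t i = static_cast<std::int64_t> (d);
  std::printf ("%lld\n", static_cast<long long> (i));
  return 0;
}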


I put in the octave_idx_type limit for the sizes:

std::numeric_limits<octave_idx_type>::max ()

and the overflow checks are not triggered; neither the one I added nor the
existing, more general check catches it.  I tried subtracting one or two from
this value, and the same situation arises.  Keep in mind that what is printed
by "std::cerr << nr", where nr is an octave_idx_type, is not necessarily the
same as how the compiler treats the overflow check.  So the printed minimum
of signed 64-bit, i.e., -9223372036854775808, could be misleading.
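
If nr really is negative by the time the check runs, that would also explain
why nothing triggers: the nr > 0 guard in my check is false, so the division
test is short-circuited and no error is raised.  Here is a sketch of a
hardened check (my own suggestion; __builtin_mul_overflow assumes GCC or
Clang) that rejects negative sizes first and detects overflow without
relying on the wrapped product:


    // Reject negative dimensions before the overflow test.
    if (nr < 0 || nc < 0)
      error ("fread: invalid (negative) dimension");

    // Detect multiplicative overflow directly; this does not depend on
    // the result of an overflowing signed multiply, which is undefined
    // behavior and may be optimized away.
    octave_idx_type total;
    if (__builtin_mul_overflow (nr, nc, &total))
      error ("fread: dimension too large for Octave's index type");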

In summary, I'm not quite sure what is happening, but the overflow check
doesn't seem quite right.

Indirectly related issues are:

https://savannah.gnu.org/bugs/?func=detailitem&item_id=54100
https://savannah.gnu.org/bugs/?func=detailitem&item_id=40812
https://savannah.gnu.org/bugs/?func=detailitem&item_id=47175






    _______________________________________________________

Reply to this item at:

  <http://savannah.gnu.org/bugs/?54405>

_______________________________________________
  Message sent via Savannah
  https://savannah.gnu.org/



