
bug#8794: cons_to_long fixes; making 64-bit EMACS_INT the default


From: Eli Zaretskii
Subject: bug#8794: cons_to_long fixes; making 64-bit EMACS_INT the default
Date: Fri, 03 Jun 2011 13:52:50 +0300

> Date: Fri, 03 Jun 2011 01:43:36 -0700
> From: Paul Eggert <eggert@cs.ucla.edu>
> 
> I found several problems in the Emacs code that converts
> large C integers to Emacs conses-of-integers and back again.
> I wrote some code to fix them systematically, and found that
> it was simpler and more reliable if I could assume that EMACS_INT
> was 64-bit even on 32-bit hosts.  So here's a patch to do all that.

I don't think we agreed to make that the only configuration on 32-bit
machines.  Did we?

> +using that data type.  For most machines, the maximum buffer size
> +enforced by the data types is @math{2^61 - 2} bytes, or about 2 EiB.
> +For some older machines, the maximum is @math{2^29 - 2} bytes, or
> +about 512 MiB.  Buffer sizes are also limited by the size of Emacs's
> +virtual memory.

Can 32-bit hosts really support buffers and strings larger than 2GB,
even if EMACS_INT is a 64-bit type?  I thought the largest object on a
32-bit machine cannot exceed 2GB due to pointer arithmetic, which
will wrap around after that.  What am I missing?

>  Emacs cannot visit files that are larger than the maximum Emacs buffer
> -size, which is around 512 megabytes on 32-bit machines
> +size, which is around 512 MiB on 32-bit machines and 2 EiB on 64-bit machines
>  (@pxref{Buffers}).  If you try, Emacs will display an error message
>  saying that the maximum buffer size has been exceeded.

This seems to contradict what you said about buffers, doesn't it?

> === modified file 'src/data.c'
> --- src/data.c        2011-05-31 14:57:53 +0000
> +++ src/data.c        2011-06-02 07:38:44 +0000
> @@ -23,8 +23,6 @@
> [...]
> +  else if (FLOATP (c))
> +    {
> +      double d = XFLOAT_DATA (c);
> +      if (0 <= d
> +       && d < (max == UINTMAX_MAX ? (double) UINTMAX_MAX + 1 : max + 1))
> +     {
> +       val = d;
> +       valid = 1;
> +     }
> +    }
> +  else if (CONSP (c))
> +    {
> +      Lisp_Object top = XCAR (c);
> +      Lisp_Object bot = XCDR (c);
> +      if (CONSP (bot))
> +     bot = XCAR (bot);
> +      if (NATNUMP (top) && XFASTINT (top) <= UINTMAX_MAX >> 16 && NATNUMP (bot))
> +     {
> +       uintmax_t utop = XFASTINT (top);
> +       val = (utop << 16) | XFASTINT (bot);
> +       valid = 1;
> +     }
> +    }

The *_MAX macros need limits.h, but I don't see it being included by
data.c.  Did I miss something?

> +#define INTEGER_TO_CONS(i)                                       \
> +  (! FIXNUM_OVERFLOW_P (i)                                       \
> +   ? make_number (i)                                             \
> +   : ! ((FIXNUM_OVERFLOW_P (INTMAX_MIN >> 16)                            \
> +      || FIXNUM_OVERFLOW_P (UINTMAX_MAX >> 16))                  \
> +     && FIXNUM_OVERFLOW_P ((i) >> 16))                           \
> +   ? Fcons (make_number ((i) >> 16), make_number ((i) & 0xffff))    \
> +   : make_float (i))

Same here (this is from lisp.h).  But since every C file includes
lisp.h, it looks like we need to include limits.h in lisp.h.
