Re: [bug-gnulib] proposed simplification rewrite of stdint module


From: Bruno Haible
Subject: Re: [bug-gnulib] proposed simplification rewrite of stdint module
Date: Thu, 29 Jun 2006 15:07:58 +0200
User-agent: KMail/1.9.1

Hi Paul,

> Here's a proposed rewrite of the stdint module that attempts to
> simplify the module

Great job! Overall, I like it.

> The basic idea is to use macros instead of typedefs.

That simplifies some of the #ifs; the rest of the simplification comes from
assuming current hardware with 16-, 32-, or 64-bit registers.
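
To illustrate what this means for readers who haven't looked at the patch
(my own sketch, not a quote from it):

    /* typedef-based style: the underlying type must be chosen by configure */
    typedef int int32_t;

    /* macro-based style: the header can choose by itself, and the choice
       stays testable later with #ifdef / #undef */
    #define int32_t int
    #define uint32_t unsigned int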

> #if @HAVE_STDINT_H@
> # if defined __sgi && ! defined __c99 && __STDC_VERSION__ < 199901L

Where does the __STDC_VERSION__ test come from? AFAIK, the SGI <stdint.h>
spews the warning if and only if __c99 is not defined. If you use a different
test here, it's possible that a "gcc --std=c99" on that platform (which does
not define __c99) still ends up with the bad <stdint.h>, no?
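
I.e. I would expect the condition to be just

    # if defined __sgi && ! defined __c99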

> #define _STDINT_MAX(signed, bits, zero) \

Clever!

>   ((signed) \
>    ? ~ _STDINT_MIN (signed, bits, zero) \
>    : (((zero) + 2) << ((bits) ? (bits) - 1 : 0)) - 1)

The last line causes an integer overflow, which some compilers may rightfully
complain about. I think it will produce fewer warnings when written as

     : ((((zero) + 1) << ((bits) ? (bits) - 1 : 0)) - 1) * 2 + 1)
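
To make the difference concrete, take uint32_t as the target type, i.e.
bits = 32 and zero = (uint32_t) 0 (a worked illustration, not part of the
patch):

    /* original:   ((zero) + 2) << 31   is 0x100000000, which wraps to 0;
                   then  0 - 1          is 0xffffffff
                   -- the right value, but reached only through wraparound */
    /* rewritten:  ((zero) + 1) << 31   is 0x80000000
                   ... - 1              is 0x7fffffff
                   ... * 2              is 0xfffffffe
                   ... + 1              is 0xffffffff
                   -- no intermediate result leaves the range of uint32_t  */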

> #if LONG_MAX >> 31 >> 31 == 1

Nice trick to avoid an autoconf test! :-)
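
For readers who wonder why the shift is split in two: a single
LONG_MAX >> 62 could exceed the width of the type in which an old
preprocessor evaluates #if expressions, whereas two shifts of 31 are
always in range; the comparison yields 1 exactly when 'long' has 63
value bits, i.e. is a 64-bit type. In isolation the trick reads like
this (the macro name is only illustrative, not from the patch):

    #include <limits.h>

    #if LONG_MAX >> 31 >> 31 == 1
    # define _GL_LONG_IS_64BIT 1   /* illustrative name */
    #else
    # define _GL_LONG_IS_64BIT 0
    #endif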

> #define int_fast8_t int8_t
> #define uint_fast8_t uint8_t
> #define int_fast16_t int16_t
> #define uint_fast16_t uint16_t
> #define int_fast32_t int32_t
> #define uint_fast32_t uint32_t

I don't agree with these choices. On SPARC, using 16-bit types for
computations or as loop counters can really kill performance, because gcc
inserts two shift instructions between arithmetic operations; and on 64-bit
Alpha you similarly get extra 'zap' instructions when passing 32-bit words
as function arguments.

In summary, on 32-bit platforms nowadays, 32-bit types are the safest choice
if you want speed. On 64-bit platforms it's likely the 64-bit types, but
32-bit types are usually well supported there too.

The only CPU I know of for which a 16-bit loop counter is faster than a
32-bit loop counter is m68k; but that is not a CPU of "today" any more.
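
Concretely, the kind of mapping this argues for on a 32-bit platform would
be something like (illustration only; as said above, a 64-bit platform may
well prefer the 64-bit types):

    #define int_fast8_t   int32_t
    #define uint_fast8_t  uint32_t
    #define int_fast16_t  int32_t
    #define uint_fast16_t uint32_t
    #define int_fast32_t  int32_t
    #define uint_fast32_t uint32_t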

> #if @HAVE_LONG_LONG_INT@
> #define intmax_t long long int
> #define uintmax_t unsigned long long int
> #else
> #define intmax_t long int
> #define uintmax_t unsigned long int
> #endif

This lacks a case for _MSC_VER. Why not use a "#ifdef int64_t"
here?
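
That is, something along these lines (a sketch; it relies on int64_t being
defined as a macro earlier in this same header, which is exactly what the
macro-instead-of-typedef approach makes possible):

    #ifdef int64_t
    # define intmax_t int64_t
    # define uintmax_t uint64_t
    #else
    # define intmax_t long int
    # define uintmax_t unsigned long int
    #endif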

> #ifdef INT64_C
> # define INTMAX_C(x)   INT64_C(x)
> # define UINTMAX_C(x)  UINT64_C(x)

I think it's safer if this uses the same #if condition as the
definition of uintmax_t.
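
In other words, whatever condition ends up guarding intmax_t should guard
these as well; with the patch as posted that would read roughly:

    #if @HAVE_LONG_LONG_INT@
    # define INTMAX_C(x)   x##LL
    # define UINTMAX_C(x)  x##ULL
    #else
    # define INTMAX_C(x)   x##L
    # define UINTMAX_C(x)  x##UL
    #endif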

> #ifndef WCHAR_MIN
> #ifndef WCHAR_MAX
> #ifndef WINT_MIN
> #ifndef WINT_MAX

Why #ifndef here, and #undef for the others? A week ago I saw a BSD system
where wchar_t is a signed 32-bit type and yet WCHAR_MAX = 0x7fffffff and
WCHAR_MIN = 0 (!!). If you want an ISO C99 compliant <stdint.h>, you
need to #undef the original values.
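
I.e. the safe pattern is (a sketch; I'm only illustrating the #undef, the
replacement values are whatever the header computes for wchar_t):

    #undef WCHAR_MIN
    #undef WCHAR_MAX
    /* ... followed by the #defines of the computed values ... */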

>   dnl Check for long long int.
>   AC_REQUIRE([AC_TYPE_LONG_LONG_INT])

There are compilers which accept the "long long int" syntax but for
which "long long int" is a 32-bit type. IMO this test also needs to
check that "long long int" is wider than 32 bits, or wider than "long int".
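
A compile-time sketch of the extra condition, in plain C (the autoconf test
would wrap something equivalent in AC_COMPILE_IFELSE):

    /* This fails to compile when "long long int" has no more than 32 bits:
       the shifted value then wraps to 0 and the array size becomes -1.  */
    typedef int check_long_long_is_wide
      [((unsigned long long int) 1 << 31 << 31 != 0) ? 1 : -1];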

>   if test $ac_cv_header_inttypes_h = yes; then
>   if test $ac_cv_header_sys_types_h = yes; then
>   if test $ac_cv_header_stdint_h = yes; then

Can you please add comments explaining where these variable values come from?
It's not obvious.

>        if eval test \"\$gl_cv_type_${gltype}_signed\" = yes; then

I got some errors when trying to use 'eval' in this way. (Sorry, I don't
remember the details.) I ended up using

    eval result=\$gl_cv_type_${gltype}_signed
    if test "$result" = yes; then

Simpler, less convoluted.

>   gl_cv_type_ptrdiff_t_signed=yes
>   gl_INTEGER_TYPE_SUFFIX([ptrdiff_t sig_atomic_t size_t wchar_t wint_t],
>     [gl_STDINT_INCLUDES])

For completeness, this should also set

    gl_cv_type_size_t_signed=no

Finally, two testsuite updates: Where it uses
    _STDINT_H_HAVE_INT64
it now should say
    _STDINT_H_HAVE_INT64 || defined int64_t
and similarly
    _STDINT_H_HAVE_UINT64
should become
    _STDINT_H_HAVE_UINT64 || defined uint64_t
And it now uses HAVE_WCHAR_T and HAVE_WINT_T, therefore modules/stdint-tests
should use m4/wchar_t.m4 and m4/wint_t.m4 and invoke these macros.
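
Sketch of what that means for modules/stdint-tests (assuming the usual
macro names from those two files, gt_TYPE_WCHAR_T and gt_TYPE_WINT_T):

    Files:
    m4/wchar_t.m4
    m4/wint_t.m4

    configure.ac:
    gt_TYPE_WCHAR_T
    gt_TYPE_WINT_T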

> #if @HAVE_INTTYPES_H@
>   /* In OpenBSD 3.8, <inttypes.h> includes <machine/types.h>, which defines
>      int{8,16,32,64}_t, uint{8,16,32,64}_t and __BIT_TYPES_DEFINED__.
>      <inttypes.h> also defines intptr_t and uintptr_t.  */
> # include <inttypes.h>
> #endif

In a few weeks, after gettext-0.15 is released, I'll also submit a rewrite
of the 'inttypes' module. The substitute <inttypes.h> includes <stdint.h>,
therefore at that moment we will have to change the
   # include <inttypes.h>
back to
   # include @FULL_PATH_INTTYPES_H@
again, to avoid circular #includes. (I already have this rewrite but I'm
holding it back until gettext-0.15 is released; otherwise we would get
conflicts between gnulib-tool and gettextize.)

Bruno



