
Re: [avr-gcc-list] char to int promotion in bitwise operators


From: Ruud Vlaming
Subject: Re: [avr-gcc-list] char to int promotion in bitwise operators
Date: Fri, 21 Aug 2009 22:00:56 +0200
User-agent: KMail/1.9.1

I think it is about time the compiler was extended with the ability
to work with 8-bit integers in a native way (just as it was extended
to read the 0b101010 format). If gcc can do arithmetic on integers
16, 32 and 64 bits wide, why not on 8, the mother of all integers?
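
To make that concrete: even the 8-bit types from <stdint.h> do not
escape the C integer promotions, so an expression on two uint8_t
values already has type int. A small illustration (names are mine,
just a sketch):

------------------
#include <stdint.h>

uint8_t a = 0x0F;
uint8_t b = 0xF0;

/* The integer promotions turn both operands into int before the '&',
   so (a & b) has type int -- 16 bits on the AVR without -mint8. */
int width_of_and = sizeof(a & b);   /* 2 with avr-gcc, not 1 */
------------------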

A second-best option would be a set of macros handling all the
operators, though that would not improve readability. And of course
we would first need a complete list of all affected operators and
circumstances.
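
Just to sketch the idea (the macro name and the cast-based approach
are only an illustration, and as Andrew's examples below show, the
casts alone may not be enough to convince gcc):

------------------
/* Attempt to force an 8-bit AND by casting the operands and the
   result back to unsigned char. */
#define AND8(a, b)  ((unsigned char)((unsigned char)(a) & (unsigned char)(b)))

if (AND8(flags, FLAG_A)) ...
------------------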

Until then I will stick with -mint8. It is not ideal, but at least
the people who came up with it understood the needs of
microcontroller programmers.
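
For the record, it is just a command-line switch, e.g. (the file name
is only an example):

------------------
avr-gcc -mmcu=atmega8 -Os -mint8 -c foo.c
------------------

with the caveat Andrew already mentions that avr-libc does not
officially support it.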

Ruud.



On Friday 21 August 2009 00:27, Andrew Zabolotny wrote:
> Hello!
> 
> I've been looking at the compiled .S file for my library and have
> noticed that the code is substantially larger than it could be,
> because for every check of bit flags, like in this example:
> 
> ------------------
> #define FLAG_A 0x01
> #define FLAG_B 0x02
> 
> ...
> 
> if (flags & FLAG_A) ...
> if (flags & FLAG_B) ...
> ------------------
> 
> gcc extends both 'flags' and FLAG_A to a 16-bit integer type and
> then does a 16-bit 'and' and compare. I recalled seeing something
> about this in the avr-libc documentation, and indeed I found the section:
> 
> 11.21 Why does the compiler compile an 8-bit operation that uses
>       bitwise operators into a 16-bit operation in assembly?
> 
> However, the solution given in that section does not help here. If I
> modify those ifs to look like:
> 
> ------------------
> if (((unsigned char)flags) & ((unsigned char)FLAG_A)) ...
> ------------------
> 
> gcc still expands both operands to int first :-( Even the
> following cumbersome example results in everything being expanded
> (and even compared!) as 16-bit integers:
> 
> ------------------
> if ((unsigned char)(((unsigned char)a) & ((unsigned char)FLAG)) !=
> (unsigned char)0) ...
> ------------------
> 
> Is there a way to force the compiler to use 8-bit integers for bitwise
> operators? The only way I have found so far is the -mint8 switch, but
> the avr-libc docs say that this option is not really supported by the library.
> 
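
P.S. For anyone who wants to look at the generated code themselves,
here is a minimal test case along the lines of Andrew's example (the
file name and function name are mine). Compiling it with
avr-gcc -mmcu=atmega8 -Os -S flags8.c and reading flags8.s shows
whether the 'and' is done in 8 or 16 bits:

------------------
/* flags8.c */
#include <stdint.h>

#define FLAG_A 0x01
#define FLAG_B 0x02

uint8_t handle(uint8_t flags)
{
    uint8_t n = 0;
    if (flags & FLAG_A)      /* the kind of check Andrew describes */
        n++;
    if (flags & FLAG_B)
        n += 2;
    return n;
}
------------------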



