Re: [avr-gcc-list] Re: Optimisation of bit set/clear in uint32_t


From: Alex Wenger
Subject: Re: [avr-gcc-list] Re: Optimisation of bit set/clear in uint32_t
Date: Wed, 22 Apr 2009 09:51:59 +0200
User-agent: Thunderbird 2.0.0.21 (Windows/20090302)

Hi,

>>> That's way too much code... It's fairly obvious where the optimisations
>>> should be, although I can see that some have been done already.
>>>
>> I can't see much possibility for improving the above code (except by  
>> removing the push and zeroing of r1).  You asked for "status" to be a  
>> volatile 32-bit int, so that's what you got.  The code below is  
>> semantically different, and thus compiles differently.
>>
> I don't believe it is semantically different, which is one reason I
> raised this. I am using the version below in existing code and it
> behaves correctly.
> 
> Loading 4 registers and then storing back 3 that are unchanged makes no
> sense at all.
> 
> Where volatile comes in here is that the optimisation shouldn't use any
> previous loads or modify register values more than once without
> reloading/storing etc. Here, the value is loaded once, one byte is
> changed and then all 4 are stored. That's wasteful.

But that is exactly what you demand with volatile: no read and no write
can be removed. On some hardware, writing a byte triggers a special
reaction, so it would be semantically different if the compiler removed
the write.
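
For reference, the code under discussion presumably looks something like
the following (the original snippet is not quoted in this excerpt, so the
declaration and the bit number are assumptions):

#include <stdint.h>

volatile uint32_t status;      // assumed: one volatile 32-bit flag word

void set_bit17(void)
{
  // Because status is a volatile 32-bit object, the compiler must load
  // all four bytes and store all four bytes back, even though only one
  // byte changes: the load-4/store-4 sequence complained about above.
  status |= (uint32_t)1 << 17;
}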

Maybe you can change it to use something like this (not tested):

#include <stdint.h>   // uint32_t, uint8_t
#include <avr/io.h>   // _BV()

// The same storage, viewed either as one 32-bit word or as four bytes.
union status_union
{
  volatile uint32_t s;
  volatile uint8_t  b[4];
} status;


Then you can operate on single volatile bytes with:

// Set bit 17 in status: avr-gcc is little-endian, so bit 17 lives in
// byte 2 (bits 16..23), at bit position 1 within that byte.
status.b[2] |= _BV(1);

without writing the other bytes. And you can still use status.s for
32-bit access.
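
If the bit number varies, a pair of helper macros can compute the byte
index and the bit within that byte (also untested; they rely on the
little-endian layout of the union above):

// Map a 0-based bit number of the 32-bit word onto the byte array of
// the union above (b[0] is the least significant byte).
#define STATUS_SET(bit)   (status.b[(bit) / 8] |=  _BV((bit) % 8))
#define STATUS_CLEAR(bit) (status.b[(bit) / 8] &= ~_BV((bit) % 8))
#define STATUS_TEST(bit)  (status.b[(bit) / 8] &   _BV((bit) % 8))

// STATUS_SET(17) expands to status.b[2] |= _BV(1): only one byte is
// touched, and the division folds away for constant bit numbers.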

Maybe it would be easier to use 4 separate status bytes; most of the
time it is better to think more 8-bitish when you write AVR code.
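
A rough sketch of that alternative (untested; the variable names are
made up for illustration):

#include <stdint.h>
#include <avr/io.h>              // _BV()

// Four independent volatile status bytes instead of one 32-bit word;
// every update is then a single-byte read-modify-write.
volatile uint8_t status_lo;      // former bits  0..7
volatile uint8_t status_mid1;    // former bits  8..15
volatile uint8_t status_mid2;    // former bits 16..23
volatile uint8_t status_hi;      // former bits 24..31

void flag_bit17(void)
{
  status_mid2 |= _BV(1);         // former bit 17 of the 32-bit word
}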

-Alex



