From: Paul Brook
Subject: Re: GCC optimizes integer overflow: bug or feature?
Date: Fri, 22 Dec 2006 02:57:14 +0000
User-agent: KMail/1.9.5

On Friday 22 December 2006 02:06, Robert Dewar wrote:
> Paul Brook wrote:
> > On Friday 22 December 2006 00:58, Denis Vlasenko wrote:
> >> On Tuesday 19 December 2006 23:39, Denis Vlasenko wrote:
> >>> There are a lot of 100.00% safe optimizations which gcc
> >>> can do. Value range propagation for bitwise operations, for one
> >>
> >> Or this, absolutely typical C code. i386 arch can compare
> >> 16 bits at a time here (luckily, no alignment worries on this arch):
> >>
> >> int f(char *p)
> >> {
> >>     if (p[0] == 1 && p[1] == 2) return 1;
> >>     return 0;
> >> }
> >
> > Definitely not 100% safe. p may point to memory that is sensitive to the
> > access width and/or number of accesses. (ie. memory mapped IO).
>
> A program that depends on this is plain wrong. There is no guarantee
> that memory references are as they appear in the program.  For a 
> non-volatile variable, any such optimization is valid. For instance,
> if flow analysis can prove that p[0] is already 1, then there
> is no need to repeat the read.
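
(For concreteness, the combined access Denis describes would look roughly
like the sketch below on a little-endian target such as i386. The name f16
is made up, and memcpy stands in for the single unaligned 16-bit load the
compiler would actually emit:)

#include <string.h>

int f16(const char *p)
{
    unsigned short v;
    memcpy(&v, p, sizeof v);  /* one 16-bit load instead of two byte loads */
    return v == 0x0201;       /* little-endian: p[0] == 1 is the low byte,
                                 p[1] == 2 the high byte */
}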

Who says the optimisation is valid? The language standard?

The example was given as something that's 100% safe to optimize. I'm 
disagreeing with that assertion. The use I describe isn't that unlikely if 
the code was written by someone with poor knowledge of C.
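
(For reference, the defined way to make the width and number of accesses
matter is volatile, which is presumably the exception Robert has in mind. A
minimal sketch, assuming byte-wide device registers; f_mmio is a made-up
name:)

int f_mmio(const volatile unsigned char *p)
{
    /* volatile obliges the compiler to perform each byte access as
       written; the two loads may not be merged into one 16-bit load. */
    if (p[0] == 1 && p[1] == 2) return 1;
    return 0;
}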

My point is that it's not that hard to invent plausible code that "breaks" 
when pretty much any transformation is applied. We have to decide how close 
to the standard we want to fly.

"Optimization should never change the behavior of any program accepted by the 
compiler" is not a useful constraint for an optimizing compiler. If program 
behavior includes the ability to debug the program, then I'd go so far as 
to say this should be the definition of -O0.

"Optimization should never change the behavior of a valid program" is useful 
definition because it forces you to define what constitutes a valid program.


There's actually a much better reason why the transformation is not safe. 
Consider a data structure where a terminator byte (some value other than 1, 
say 0) marks the end of the object. Under normal circumstances the 
short-circuiting of the && operator prevents anything bad from happening: 
when p[0] is the terminator, p[1] is never read. If you merge the two 
accesses, you've just read past the end of the object, and all kinds of bad 
things may happen (e.g. a segfault).
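
(A minimal sketch of that failure mode, assuming a Linux-style system where
MAP_ANONYMOUS is available; all names here are made up for illustration.
Put a one-byte object at the very end of a mapped page with the next page
unmapped: as written, f never touches p[1], but a merged 16-bit load
spanning the page boundary would fault.)

#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

static int f(const char *p)
{
    if (p[0] == 1 && p[1] == 2) return 1;  /* p[1] read only when p[0] == 1 */
    return 0;
}

int main(void)
{
    long page = sysconf(_SC_PAGESIZE);
    /* Map two pages, then unmap the second, so any access past the
       first page faults. */
    char *buf = mmap(NULL, 2 * page, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (buf == MAP_FAILED) return 1;
    munmap(buf + page, page);

    char *p = buf + page - 1;  /* object's single byte: last mapped byte */
    *p = 0;                    /* terminator, not 1, so p[1] is never read */
    printf("%d\n", f(p));      /* prints 0; a merged 16-bit load here
                                  would cross into the unmapped page */
    return 0;
}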

Paul

P.S. I think I'm repeating myself now, so this is the last time I intend to 
comment on this thread.



