From: Michael S. Tsirkin
Subject: Re: [Qemu-devel] [Qemu-ppc] [PATCH 3/9] pci: Make bounds checks on config space accesses actually work
Date: Fri, 13 Jan 2012 03:23:37 +0200
User-agent: Mutt/1.5.21 (2010-09-15)

On Fri, Jan 13, 2012 at 11:26:12AM +1100, David Gibson wrote:
> On Thu, Jan 12, 2012 at 03:32:32PM +0200, Michael S. Tsirkin wrote:
> > On Thu, Jan 12, 2012 at 04:46:22PM +1100, David Gibson wrote:
> > > The pci_host_config_{read,write}_common() functions perform PCI config
> > > accesses.  They take a limit parameter which they appear to be supposed
> > > to bounds check against, however the bounds checking logic, such as it is,
> > > is completely broken.
> > > 
> > > Currently, it takes the minimum of the supplied length and the remaining
> > > space in the region and passes that as the length to the underlying
> > > config_{read,write} function pointer.  This means that accesses which
> > > partially overrun the region will be silently truncated - which makes
> > > little sense.
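
For reference, the read side of the helper looks roughly like this (a
simplified sketch; in the real code config_read is a function pointer
on PCIDevice, shown here as a plain prototype):

    #include <stdint.h>

    #define MIN(a, b) ((a) < (b) ? (a) : (b))

    typedef struct PCIDevice PCIDevice;   /* stand-in for the QEMU type */
    uint32_t config_read(PCIDevice *dev, uint32_t addr, uint32_t len);

    /* Clamps the length to the space left in the region, so an access
     * that partially overlaps [0, limit) is silently truncated rather
     * than rejected. */
    uint32_t pci_host_config_read_common(PCIDevice *pci_dev, uint32_t addr,
                                         uint32_t limit, uint32_t len)
    {
        return config_read(pci_dev, addr, MIN(len, limit - addr));
    }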
> > 
> > Why does it make little sense? Makes sense to me.
> 
> Well, for starters a partial overrun would have to be an unaligned
> config space access, which is not supported by PCI.  The behaviour if
> you try is undefined on most bridges and unlikely to be a partial
> read/write (ignoring the low addr bits would be more likely).

Yes, bus-level cycles have dword granularity in PCI.
But look e.g. at our express implementation.
Config is memory-mapped, so we simply map it as memory into the guest.
If you do a large read, what happens on real hardware?
I'm not sure, but it looks possible that it will get split
and multiple dword transactions generated on the bus.
OTOH our implementation passes the read on as-is, so
the callback can get a multi-dword access.
I'll try to play with some real systems to see what they do.
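
Purely as a sketch of the splitting I have in mind (illustrative only,
not how our code does it; assumes a dword-aligned addr and size <= 8):

    #include <stdint.h>

    /* Decompose a large memory-mapped config read into aligned dword
     * transactions.  dword_read stands in for whatever issues a single
     * config cycle on the bus. */
    uint64_t mmcfg_read_split(uint32_t addr, unsigned size,
                              uint32_t (*dword_read)(uint32_t addr))
    {
        uint64_t val = 0;
        unsigned done;

        for (done = 0; done < size; done += 4) {
            val |= (uint64_t)dword_read(addr + done) << (done * 8);
        }
        return val;
    }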

> Even if a bridge somehow did a partial access, it's still wrong for
> reads, since the overrunning bits would typically return 0xff not
> 0x00, and so you'd need to 0xff pad the result.
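
If truncation were kept for reads, the padding would have to look
something like this hypothetical helper (assumes the low bytes of val
correspond to the lower addresses):

    #include <stdint.h>

    /* Pad the bytes of a truncated read with 0xff: a master-aborted
     * read returns all-ones, not zeroes.  done_len is how many bytes
     * were actually read, req_len how many were requested. */
    uint32_t pad_partial_read(uint32_t val, uint32_t done_len,
                              uint32_t req_len)
    {
        if (done_len < req_len) {
            val |= (uint32_t)(~0ULL << (done_len * 8));
        }
        return val;
    }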

Or maybe an error needs to be generated. We really don't know,
it's up to the caller.

> There's just no point doing anything other than simply failing partial
> overruns.

There's no point in random code churn either.

> > >  Accesses which entirely overrun the region will *not*
> > > be blocked (an exploitable bug), because in that case (limit - addr)
> > > will be negative and so the unsigned MIN will always return len
> > > instead.  Even if signed arithmetic were used, the config_{read,write}
> > > callback wouldn't know what to do with a negative len parameter.
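
The underflow is easy to demonstrate in isolation (standalone snippet
using the same MIN macro shape; it prints 4, i.e. the full length):

    #include <stdint.h>
    #include <stdio.h>

    #define MIN(a, b) ((a) < (b) ? (a) : (b))

    int main(void)
    {
        uint32_t limit = 0x100;   /* size of the config region */
        uint32_t addr  = 0x200;   /* entirely beyond the region */
        uint32_t len   = 4;

        /* limit - addr wraps to 0xffffff00, so MIN() picks len and
         * the out-of-range access goes through unchecked. */
        printf("MIN(len, limit - addr) = %u\n", MIN(len, limit - addr));
        return 0;
    }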
> > 
> > The assumption was callers never pass in such values.
> 
> So, callers are to treat this function, which takes a limit parameter,
> as having the semantics of "checks a pointless edge case, but not the
> obvious type of overrun".  You think that's a sensible semantic for a
> general helper function?  Seriously?

I don't mind adding an extra check at this level.  But the comment
would need to be reworded from 'fix a bug' to 'pseries wants to pass
out-of-range values, so let's check this'.
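
Such a check could be as simple as the following sketch (illustrative,
not the actual patch; config_read again stands in for the function
pointer, and returning all-ones mirrors master-abort behaviour):

    #include <stdint.h>

    typedef struct PCIDevice PCIDevice;   /* stand-in for the QEMU type */
    uint32_t config_read(PCIDevice *dev, uint32_t addr, uint32_t len);

    /* Reject any access that is not entirely inside [0, limit). */
    uint32_t pci_host_config_read_common(PCIDevice *pci_dev, uint32_t addr,
                                         uint32_t limit, uint32_t len)
    {
        if (addr >= limit || len > limit - addr) {
            return ~0x0;        /* read as all-ones, like a master abort */
        }
        return config_read(pci_dev, addr, len);
    }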

> > Could you please give an example how this exploitable bug
> > can get triggered?
> 
> Ah, yes, it's not actually exploitable in the current code, since the
> only callers mask the address down.  It becomes exploitable if someone
> writes a new bridge which doesn't go via the existing accessors and
> assumes that the function which looks like it bounds checks actually
> does bounds check (which is what I'm about to do for the pseries PCI
> code).
> 
> Factoring the bounds checking into pci_host_config_{read,write} just
> makes more sense.
> 
> -- 
> David Gibson                  | I'll have my music baroque, and my code
> david AT gibson.dropbear.id.au | minimalist, thank you.  NOT _the_ _other_
>                               | _way_ _around_!
> http://www.ozlabs.org/~dgibson


