
Re: [Qemu-devel] [PATCH] Memory API bugfix - abolish addrrange_end()


From: David Gibson
Subject: Re: [Qemu-devel] [PATCH] Memory API bugfix - abolish addrrange_end()
Date: Mon, 17 Oct 2011 16:31:53 +1100
User-agent: Mutt/1.5.21 (2010-09-15)

On Sun, Oct 16, 2011 at 02:35:37PM +0200, Avi Kivity wrote:
> On 10/16/2011 01:40 PM, David Gibson wrote:
> > > Let me see if I can work up a synthetic int128 type.
> >
> > So.. you think replacing every single basic arithmetic operation with
> > calls to implement the synthetic type, _and_ imposing the resulting
> > overhead, is _less_ ugly than some slightly fiddly re-ordering of
> > operations?  Seriously?
> >
> 
> In terms of how the code looks, it's seriously more ugly (see the
> patches I sent out).  Conceptually it's cleaner, since we're not dodging
> the issue that we need to deal with a full 64-bit domain.
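
[As an illustrative aside: the type and helper names below are assumed for
the sketch, not taken from the patches Avi refers to.  A synthetic 128-bit
type for this purpose only needs two 64-bit halves plus the handful of
operations that range arithmetic uses.]

#include <stdint.h>
#include <stdbool.h>

/* Sketch of a synthetic 128-bit integer: unsigned low half, signed
 * high half, compared lexicographically. */
typedef struct Int128 {
    uint64_t lo;
    int64_t  hi;
} Int128;

static inline Int128 int128_make64(uint64_t a)
{
    return (Int128) { .lo = a, .hi = 0 };
}

static inline Int128 int128_add(Int128 a, Int128 b)
{
    uint64_t lo = a.lo + b.lo;
    /* carry into the high half when the unsigned low sum wraps */
    int64_t hi = (int64_t)((uint64_t)a.hi + (uint64_t)b.hi + (lo < a.lo));
    return (Int128) { .lo = lo, .hi = hi };
}

static inline bool int128_ge(Int128 a, Int128 b)
{
    return a.hi > b.hi || (a.hi == b.hi && a.lo >= b.lo);
}

[Each operation costs a couple of extra adds and compares; that is the
overhead being weighed against re-ordering the 64-bit calculations.]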

We don't have to dodge that issue.  I know how to remove the
requirement for intermediate negative values; I just haven't written
up a patch yet.  With that we can change to uint64 and cover the full
64-bit range.  In fact, I think I can make it so that size==0
represents size=2^64, and even handle the full 64-bit, inclusive range
properly.
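
[A sketch of that representation, with assumed names rather than code from
an actual patch: as long as ranges never wrap, size==0 can stand in for
2^64, because the inclusive last address base + size - 1 still comes out
right under ordinary wrapping uint64 arithmetic.]

#include <stdint.h>
#include <stdbool.h>

/* size == 0 is taken to mean 2^64; ranges are assumed non-wrapping,
 * so the full 64-bit space is { .base = 0, .size = 0 }. */
typedef struct AddrRange {
    uint64_t base;
    uint64_t size;
} AddrRange;

/* Inclusive last address.  With base == 0 and size == 0 (i.e. 2^64)
 * this wraps to UINT64_MAX, which is exactly the last byte. */
static inline uint64_t addrrange_last(AddrRange r)
{
    return r.base + r.size - 1;
}

static inline bool addrrange_intersects(AddrRange a, AddrRange b)
{
    return a.base <= addrrange_last(b) && b.base <= addrrange_last(a);
}

/* Only valid when addrrange_intersects(a, b) holds. */
static inline AddrRange addrrange_intersection(AddrRange a, AddrRange b)
{
    uint64_t base = a.base > b.base ? a.base : b.base;
    uint64_t last = addrrange_last(a) < addrrange_last(b)
                  ? addrrange_last(a) : addrrange_last(b);
    /* last - base + 1 wraps back to 0 when the intersection covers
     * the whole 64-bit space, which again encodes size = 2^64. */
    return (AddrRange) { .base = base, .size = last - base + 1 };
}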

> But my main concern is maintainability.  The 64-bit blanket is too
> short; if we keep pulling it in various directions we'll just expose
> ourselves in new ways.

Nonsense: dealing with full X-bit range calculations in X-bit types is
a fairly standard problem.  The kernel does it in VMA handling, for
one.  It just requires thinking about the overflow cases.
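
[For instance, a generic illustration of the re-ordering trick, not code
taken from the kernel or from this patch: a containment test over the full
64-bit range can be written so that no intermediate sum overflows, by
comparing offsets instead of end addresses.]

#include <stdint.h>
#include <stdbool.h>

/* Does [addr, addr + len) lie entirely within [base, base + size)?
 * addr >= base keeps addr - base from underflowing, len <= size keeps
 * size - len from underflowing, and the final comparison is the
 * overflow-free equivalent of addr + len <= base + size. */
static inline bool range_covers(uint64_t base, uint64_t size,
                                uint64_t addr, uint64_t len)
{
    return addr >= base &&
           len <= size &&
           addr - base <= size - len;
}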

> The overhead is negligible.  This code comes nowhere near any fast path.

-- 
David Gibson                    | I'll have my music baroque, and my code
david AT gibson.dropbear.id.au  | minimalist, thank you.  NOT _the_ _other_
                                | _way_ _around_!
http://www.ozlabs.org/~dgibson


