From: Benjamin Herrenschmidt
Subject: Re: [Qemu-ppc] [Qemu-devel] [PATCH] ppc: Stop dumping state on all exceptions in linux-user
Date: Sun, 07 Aug 2016 10:50:16 +1000

On Sat, 2016-08-06 at 15:23 +0530, Richard Henderson wrote:
> On 08/03/2016 05:09 PM, Benjamin Herrenschmidt wrote:
> > 
> > As far as user-with-softmmu goes, I'm not too sure... softmmu significantly
> > increases the overhead of loads and stores. Maybe after we add 128-bit
> > integers to TCG to alleviate that a bit? :-)
> 
> It wouldn't be mandatory, but there are certain bugs we can't fix without it. 
> The big issues to be fixed with softmmu are
> 
> (1) Host page size > guest page size.
> 
> E.g. there are many programs (i386, sparc, etc., all with 4k pages) that you
> can't even load, much less run, on a ppc64 host using a 64k page size.
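
To make the failure concrete, a minimal sketch, not qemu's loader code: a
segment linked for 4k pages can only be mmap()ed straight from the file if its
vaddr and file offset agree modulo the *host* page size, which they usually
don't on a 64k host. The segment values below are made up, a typical
4k-aligned i386 data segment:

#include <stdio.h>
#include <stdint.h>
#include <unistd.h>

static int directly_mappable(uint64_t p_vaddr, uint64_t p_offset, long host_page)
{
    /* mmap() needs vaddr and file offset congruent modulo the host page
     * size; p_align == 0x1000 only guarantees congruence modulo 4k. */
    return ((p_vaddr ^ p_offset) & (host_page - 1)) == 0;
}

int main(void)
{
    uint64_t p_vaddr = 0x0804a000, p_offset = 0x1000;

    printf("4k host:   %s\n",
           directly_mappable(p_vaddr, p_offset, 0x1000) ? "ok" : "no");
    printf("64k host:  %s\n",
           directly_mappable(p_vaddr, p_offset, 0x10000) ? "ok" : "no");
    printf("this host: %s\n",
           directly_mappable(p_vaddr, p_offset, sysconf(_SC_PAGESIZE)) ? "ok" : "no");
    return 0;
}

(And even when copying instead of mapping, two 4k guest pages with different
protections can end up sharing one 64k host page, which mprotect() cannot
express.)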

Can't we advertise the host page size to the guest? Or are there too many
compiled-in assumptions?
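
(For what it's worth, qemu's linux-user loader already fills in AT_PAGESZ in
the guest's auxv if I remember right, so a guest that asks at run time could
in principle be told whatever we like. A minimal sketch of the distinction,
with the constant below standing in for the compiled-in assumption:)

#include <stdio.h>
#include <sys/auxv.h>
#include <unistd.h>

#define ASSUMED_PAGE_SIZE 4096   /* the kind of baked-in assumption at issue */

int main(void)
{
    unsigned long auxv_page = getauxval(AT_PAGESZ);  /* what the loader advertises */
    long runtime_page = sysconf(_SC_PAGESIZE);

    printf("AT_PAGESZ=%lu sysconf=%ld compiled-in=%d\n",
           auxv_page, runtime_page, ASSUMED_PAGE_SIZE);

    /* A program computing page masks from the constant is wrong on a host
     * whose page size differs; one using auxv/sysconf would adapt. */
    return auxv_page == ASSUMED_PAGE_SIZE ? 0 : 1;
}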

> (2) Host virtual address space bits != guest virtual address space bits
> 
> My alpha emulation has run into this.  A real Alpha guest has a 44-bit
> address space, but an x86_64 host has a 48-bit address space.  The x86_64
> kernel cannot be persuaded to reliably map memory below (1ul << 44), so I
> have to pretend that Alpha has a 48-bit address space.  (Indeed, I set this
> to 63 bits, so that it works for even wider VA, like on ppc64 and sparc64.)

You can't just set a no-access VMA covering the top of the address space? Are
Alpha programs relying on the fact that they won't get addresses above 2^44?
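
Something like the following is what I have in mind: a toy sketch with an
assumed 47-bit usable x86_64 user VA ceiling, not what qemu-alpha does today:

#include <stdio.h>
#include <sys/mman.h>

#define GUEST_VA_LIMIT  (1UL << 44)   /* what a real Alpha guest expects */
#define HOST_VA_CEILING (1UL << 47)   /* assumption: usable user VA on x86_64 */

int main(void)
{
    size_t len = HOST_VA_CEILING - GUEST_VA_LIMIT;

    /* Without MAP_FIXED the address is only a hint; with MAP_FIXED we would
     * clobber whatever already lives up there (stack, vdso).  That tension
     * is presumably part of why this is less simple than it sounds. */
    void *p = mmap((void *)GUEST_VA_LIMIT, len, PROT_NONE,
                   MAP_PRIVATE | MAP_ANONYMOUS | MAP_NORESERVE, -1, 0);

    if (p == MAP_FAILED) {
        perror("mmap");
        return 1;
    }
    if (p != (void *)GUEST_VA_LIMIT) {
        printf("kernel ignored the hint, reserved %p instead\n", p);
    } else {
        printf("reserved [%p, %p)\n", p, (char *)p + len);
    }
    return 0;
}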

> More theoretically, if the guest uses high bits for some purpose (e.g. ia64
> segmentation in the top 3 bits), and the host doesn't have a full 64-bit
> virtual address space, then we cannot even map the program, since we cannot
> set bits 61-63 to non-zero values.

I see. We could definitely have the option then.
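
(For reference, the property softmmu would buy us here, as a toy sketch rather
than qemu's actual TLB code: the guest address is only ever used as a lookup
key, so bits 61-63 never have to be representable in a host pointer.)

#include <stdint.h>
#include <stdlib.h>

#define GUEST_PAGE_BITS 12
#define GUEST_PAGE_SIZE (1u << GUEST_PAGE_BITS)

/* Hypothetical, simplified translation entry; a linked list stands in for
 * the real software TLB. */
typedef struct GuestPage {
    uint64_t guest_page;        /* full 64-bit guest page number, tag bits and all */
    uint8_t *host_mem;          /* backing store, wherever the host put it */
    struct GuestPage *next;
} GuestPage;

static GuestPage *pages;

static uint8_t *guest_to_host(uint64_t guest_addr)
{
    uint64_t gpn = guest_addr >> GUEST_PAGE_BITS;

    for (GuestPage *p = pages; p; p = p->next) {
        if (p->guest_page == gpn) {
            return p->host_mem + (guest_addr & (GUEST_PAGE_SIZE - 1));
        }
    }
    /* A miss would fault into the guest MMU model instead of returning NULL. */
    return NULL;
}

int main(void)
{
    /* Map one guest page whose address has ia64-style tag bits set. */
    GuestPage pg = { 0xe000000000001000ULL >> GUEST_PAGE_BITS,
                     calloc(1, GUEST_PAGE_SIZE), NULL };
    pages = &pg;

    uint8_t *host = guest_to_host(0xe000000000001234ULL);
    return host ? 0 : 1;
}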

Cheers,
Ben.
