From: David Gibson
Subject: Re: [Qemu-ppc] [Qemu-devel] [PATCH v3 0/4] target-ppc: Add FWNMI support in qemu for powerKVM guests
Date: Thu, 3 Sep 2015 15:05:21 +1000
User-agent: Mutt/1.5.23 (2014-03-12)

On Thu, Sep 03, 2015 at 01:24:21PM +1000, Sam Bobroff wrote:
> On Thu, Sep 03, 2015 at 09:53:20AM +1000, David Gibson wrote:
> > On Wed, Sep 02, 2015 at 04:34:01PM +1000, Sam Bobroff wrote:
> > > On Tue, Sep 01, 2015 at 04:37:51PM +0530, Aravinda Prasad wrote:
> > > > 
> > > > 
> > > > On Monday 10 August 2015 09:35 AM, Sam Bobroff wrote:
> > > > > On Sun, Aug 09, 2015 at 03:53:02PM +0200, Alexander Graf wrote:
> > > > >>
> > > > >>
> > > > >> On 07.08.15 05:37, Sam Bobroff wrote:
> > [snip]
> > > > >>> (c) Assemble it (as above) but include it directly in the QEMU
> > > > >>> binary by objcopying it in or hexdumping into a C string or
> > > > >>> something similar. This seems fairly neat but I'm not sure how
> > > > >>> people would feel about including "binaries" into QEMU this way.
> > > > >>> Although it would take some work in the build system, it seems
> > > > >>> like a fairly neat solution to me.
> > > > >>
> > > > >> We tried to move away from code as hex arrays in QEMU to make it
> > > > >> easier for people to patch things when they want to. But then
> > > > >> again if we're talking 3 instructions it might not be the worst
> > > > >> option.
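
(For concreteness, the hex-array form could look something like the
sketch below.  The hcall number, its name and the array name are all
placeholders rather than anything from the actual patch, and the
guest's r3 would still need to be preserved somewhere first, per the
discussion further down:

    /* Hypothetical trampoline embedded directly in QEMU as opcodes;
     * the assembly mnemonics are in the comments. */
    #define H_REPORT_MC_ERR 0x134           /* placeholder hcall number */

    static const uint32_t fwnmi_trampoline[] = {
        0x38600000 | H_REPORT_MC_ERR,       /* li  r3, H_REPORT_MC_ERR */
        0x44000022,                         /* sc  1 -- hypercall to QEMU */
    };
)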
> > > > > 
> > > > > Sounds sensible.
> > > > > 
> > > > > So, in summary, it sounds like a decent approach would be:
> > > > > * store the guest's handlers in QEMU's spapr structure,
> > > > > * simplify the trampolines down to a single, non-returning, hcall,
> > > > 
> > > > However, other instructions such as saving r3 and re-trying hcall are
> > > > still required.
> > > 
> > > Ah yes, that's true. I was thinking that the retrying could happen
> > > inside the hcall but it can't.
> > 
> > Sorry, I may have missed something here.  What does the code in the
> > vector need to retry?
> 
> It's due to having to handle simultaneous machine checks and there
> being a single shared buffer for reporting the error. PAPR isn't very
> specific but here is what it says (from section 7.3.14):
> 
> Multiple processors of the same OS image may experience fatal events
> at, or about, the same time. The first processor to enter the machine
> check handling firmware reports the fatal error. Subsequent processors
> serialize waiting for the first processor to issue the
> ibm,nmi-interlock call. These subsequent processors report “fatal error
> previously reported”. If, after the firmware makes a Machine Check call
> back, and before the OS issues the ibm,nmi-interlock call, the same
> processor that is currently holding the storage containing the error
> log structure receives another Machine Check NMI, the firmware has no
> choice but to declare the condition fatal, log the result and execute
> the partition’s reboot policy.
> 
> So it needs to retry setting up the error buffer until it succeeds.
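
(In C terms, then, the guest-side vector amounts to a loop like the one
below; H_REPORT_MC_ERR, H_BUSY and the hcall() wrapper are invented
stand-ins for whatever the real interface ends up being:

    /* Conceptual guest-side logic: keep re-issuing the hcall until the
     * shared error buffer can be claimed.  The real thing would be a
     * few instructions of assembly at the 0x200 vector, not C. */
    for (;;) {
        long rc = hcall(H_REPORT_MC_ERR);   /* placeholder wrapper */
        if (rc != H_BUSY)
            break;      /* anything but "buffer busy": stop retrying */
    }
)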

Hm.. so why can't the hypervisor code do the retrying?
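That is, the check could sit in a QEMU hcall handler; a minimal sketch,
with invented names (h_report_mc_error, mc_in_progress), leaving open
whether it waits internally or hands a busy status back to the guest:

    /* Hypothetical QEMU hcall handler: serialize concurrent machine
     * checks in the hypervisor instead of in the guest vector. */
    static target_ulong h_report_mc_error(PowerPCCPU *cpu,
                                          sPAPRMachineState *spapr,
                                          target_ulong opcode,
                                          target_ulong *args)
    {
        if (spapr->mc_in_progress) {
            /* Shared error buffer is held by another vCPU.  Either
             * park this vCPU until ibm,nmi-interlock releases it
             * (retry inside the hypervisor) or return H_BUSY and
             * leave the retry to the guest vector. */
            return H_BUSY;
        }
        spapr->mc_in_progress = true;
        /* ... fill in the error log and redirect this vCPU to the
         * guest's registered FWNMI handler ... */
        return H_SUCCESS;
    }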

> > Also, it looks like the vector will need at least one scratch register
> > (for the hcall number, if nothing else).  Does PAPR specify what SPRGs
> > the vector can clobber?  Obviously it can't be anything the guest
> > kernel uses.
> 
> PAPR only says SPRGs 0 to 3 are for software use, but the kernel (see
> arch/powerpc/include/asm/reg.h) defines SPRG2 as an exception scratch register
> so it should be the right one to use here.

Uh.. no.  If 0..3 are for software (i.e. OS) use, then this needs to
use a different one, since it's being used as a firmware resource
here.  Linux might treat SPRG2 as scratch, but another OS would be
within its rights to use it for something persistent.

Although, as paulus points out, sc 1 will clobber SRR0/1 anyway, and
if we use a special illegal instruction, then you no longer need a
scratch register.
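
With a reserved opcode the registered vector really could shrink to a
single word; the value below is purely illustrative, not an assigned
encoding:

    /* A word with primary opcode 0 is an invalid instruction, so it
     * traps out of the guest without touching SRR0/1 or any GPR;
     * QEMU/KVM would recognize this specific value on the emulation
     * assist path.  The value is illustrative only. */
    static const uint32_t fwnmi_vector[] = {
        0x000000e0,     /* illegal instruction -> trap to hypervisor */
    };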

> > Btw, does anyone know what happens with the VPA (and dispatch trace
> > log and so forth) on kexec() - it could be subject to the same stale
> > address problem, and rewriting vectors won't save us there.
> 
> I asked Michael Ellerman this one and he thinks kexec probably frees and
> re-allocates the VPA.

Ok.  So the question is: if an explicit deregister is good enough for
the VPA, is it also good enough for the FWNMI vector?  In that case,
doing it with just a qemu exit, and not bouncing through guest space,
is back on the table.

I guess that's still problematic because there are existing guests
that assume a kexec() will magically wipe the fwnmi vectors away.

-- 
David Gibson                    | I'll have my music baroque, and my code
david AT gibson.dropbear.id.au  | minimalist, thank you.  NOT _the_ _other_
                                | _way_ _around_!
http://www.ozlabs.org/~dgibson


