
Re: [Qemu-discuss] How do -icount flags work in QEMU TCG


From: Arnabjyoti Kalita
Subject: Re: [Qemu-discuss] How do -icount flags work in QEMU TCG
Date: Thu, 22 Mar 2018 18:34:46 -0400

From what I can see in the logs, it is quite hard to tell why this
occurs. I am afraid I have to disagree with your point 2: if it were
an MMU page fault, one of the TCG blocks would already have started
executing the page fault handlers, which I do not yet see in the TCG
execution flow, and a page fault in the kernel would be dangerous
anyway. (I am not aware of any other scenarios of MMU faults in the
guest, though.)
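
For clarity, this is the simplified model of a guest MMU fault that I
have in mind (only a sketch of my understanding, not QEMU's actual
softmmu code; guest_walk_succeeds and handle_tlb_miss are made-up
names). My disagreement is with the branch that enters the guest's
fault handler; the transparent-refill branch would resume on the memory
access insn without any handler appearing in the trace:

/* Simplified model, not QEMU's softmmu: on a TLB miss the emulator
 * walks the guest page tables itself.  If the walk succeeds, the TLB
 * is refilled and the faulting insn is restarted -- the guest never
 * runs its page fault handler.  Only a failed walk injects a #PF. */
#include <stdbool.h>
#include <stdio.h>

typedef unsigned long vaddr;

/* hypothetical stand-in for the guest page table walk */
static bool guest_walk_succeeds(vaddr addr)
{
    return addr != 0;            /* pretend only page 0 is unmapped */
}

/* returns true if the access can be restarted transparently */
static bool handle_tlb_miss(vaddr addr)
{
    if (guest_walk_succeeds(addr)) {
        /* refill the TLB, exit the TB, re-enter at the faulting
         * insn: "execution resuming on the memory access insn" */
        return true;
    }
    /* only here would the guest's page fault handler run */
    return false;
}

int main(void)
{
    printf("access to 0x1000: restarted silently = %d\n",
           handle_tlb_miss(0x1000));
    printf("access to 0x0:    guest #PF raised   = %d\n",
           !handle_tlb_miss(0x0));
    return 0;
}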

It is somewhat likely that the -icount budget ran out; much more likely
is that one of the loads/stores was to an emulated device, as you
explained. At least in the translation phase, though, the icount values
correctly count the number of instructions in the TCG block, even for
cases like the ones I described previously. If something goes wrong in
the final host-code execution phase, a jump into the middle of the TB
could happen. A simplified model of the budget check is sketched below.
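
This is only a toy model under my own assumptions (ICountModel and
run_tb are names I made up), not QEMU's actual generated code:

/* Toy model of the icount budget check at TB entry: each TB knows
 * its insn count, and if the remaining budget is smaller than that
 * count, the block is effectively split -- the remainder restarts
 * as a new, shorter TB beginning in the middle of the old one. */
#include <stdio.h>

typedef struct {
    int budget;                  /* instructions left in this slice */
} ICountModel;

/* returns the number of insns of a tb_insns-long TB that run */
static int run_tb(ICountModel *ic, int tb_insns)
{
    if (ic->budget >= tb_insns) {
        ic->budget -= tb_insns;
        return tb_insns;         /* the whole block executes */
    }
    int ran = ic->budget;        /* the budget runs out partway... */
    ic->budget = 0;
    return ran;  /* ...and the rest becomes a TB starting mid-block */
}

int main(void)
{
    ICountModel ic = { .budget = 6 };
    printf("8-insn TB: ran %d insns\n", run_tb(&ic, 8));  /* 6 */
    return 0;
}

With the TB cache disabled on my side, every such split would show up
as a fresh translation, which could explain some of the repeated IN:
blocks.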

I see this pattern quite irregularly across other Translation Blocks as
well (not many times, but they are scattered around).

I will have to take this irregularity into account when using -icount
to analyze the execution flow.
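
For reference, traces like the one quoted below come from an invocation
along these lines (illustrative only; the shift value is a placeholder
and the machine/disk options are elided):

qemu-system-x86_64 -icount shift=0 -d in_asm,exec -D qemu.log [...]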

On Thu, Mar 22, 2018 at 7:20 AM, Peter Maydell <address@hidden>
wrote:

> On 21 March 2018 at 20:22, Arnabjyoti Kalita <address@hidden>
> wrote:
> > I see that in the trace file, some of the TCG blocks seem to be
> > translated more than once -
> >
> > (NOTE: I am not using the TB Cache/Hash Table and have managed to
> > disable it in the QEMU code)
> >
> > IN:
> > 0xffffffff81061fd0:             nopl     (%rax, %rax)
> > 0xffffffff81061fd5:             pushq    %rbp
> > 0xffffffff81061fd6:             movq     0x10a00fb(%rip), %rax
> > 0xffffffff81061fdd:             movq     %rsp, %rbp
> > 0xffffffff81061fe0:             movl     0xf0(%rax), %eax
> > 0xffffffff81061fe6:             movl     %eax, %eax
> > 0xffffffff81061fe8:             popq     %rbp
> > 0xffffffff81061fe9:             retq
> >
> > ----------------
> > IN:
> > 0xffffffff81061fe0:             movl     0xf0(%rax), %eax
> > 0xffffffff81061fe6:             movl     %eax, %eax
> > 0xffffffff81061fe8:             popq     %rbp
> > 0xffffffff81061fe9:             retq
> >
> > ----------------
> > IN:
> > 0xffffffff81061fe0:             movl     0xf0(%rax), %eax
> >
> >
> > ----------------
> > IN:
> > 0xffffffff81061fe6:             movl     %eax, %eax
> > 0xffffffff81061fe8:             popq     %rbp
> > 0xffffffff81061fe9:             retq
> >
> > The above example shows one TCG block that has been translated 4 times.
> > Does this mean the execution gets interrupted in between? At least
> > the translation looked to be complete.
>
> We only ever enter TBs from the top, so if code jumps to or restarts
> at a PC value partway through an existing TB then we will treat
> it as a new TB starting at whatever that PC value is. That's what's
> happened here.
>
> I can't say from the logs how this has happened, but possibilities:
>  * the icount count ran out partway through a second or later
>    reexecution of the block and so we effectively split it in two
>  * we took a guest MMU fault on one of the memory accesses,
>    which was handled and resulted in execution resuming on the
>    memory access insn
>  * one of the loads or stores was to an emulated device. icount
>    only ever does IO on the final insn of a TB. We translate code
>    assuming loads/stores aren't IO, and then if at runtime we
>    find that one is to an IO device, we stop execution of the TB
>    there, and generate a stub TB with just the IO insn. (I think
>    that's how it works, anyway.)
>  * the guest code execution flow jumped into an insn that happened
>    to be in the middle of this TB
>
> thanks
> -- PMM
>
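
To make sure I understand the emulated-device case, this is the toy
model I am now working with (purely my own sketch of the behaviour you
describe, not QEMU's actual retranslation code; the is_io table is
invented):

/* Toy model of "icount only ever does IO on the final insn of a
 * TB": execute the TB translated on the assumption that nothing is
 * IO; if a memory access turns out to hit an emulated device, stop
 * there and let a stub TB carry just the IO insn. */
#include <stdbool.h>
#include <stdio.h>

#define TB_LEN 4

/* per-insn flag: does this insn access an emulated device? */
static const bool is_io[TB_LEN] = { false, false, true, false };

int main(void)
{
    int stop = TB_LEN;
    for (int i = 0; i < TB_LEN; i++) {
        if (is_io[i]) {
            stop = i;            /* abandon the TB at the IO insn */
            break;
        }
    }
    printf("original TB executed insns [0, %d)\n", stop);
    if (stop < TB_LEN) {
        /* retranslate: a stub TB containing just the IO insn, so
         * the IO happens as the final insn of a TB */
        printf("stub TB executes insn %d (the IO access) alone\n", stop);
        printf("next TB starts at insn %d\n", stop + 1);
    }
    return 0;
}

If this model is right, the single-insn block at 0xffffffff81061fe0 in
my trace could be exactly such a stub.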

