qemu-devel

Re: [Qemu-devel] basic block tracing question


From: Peter Maydell
Subject: Re: [Qemu-devel] basic block tracing question
Date: Wed, 16 Mar 2016 20:52:38 +0000

On 16 March 2016 at 20:28, Tim Newsham <address@hidden> wrote:
> Hi,  I would like to create an accurate trace of basic blocks that get
> executed.  I'm interested in a trace of what a CPU would execute, and not
> for the purposes of studying qemu itself.
>
> I'm currently emitting trace data from cpu_tb_exec
> https://github.com/qemu/qemu/blob/master/cpu-exec.c#L136
> by printing out the env->eip (x86_64 only).  This seems to be roughly
> the right place -- there's already cpu tracing in this function.
> I do notice that some basic blocks get printed twice here though, and
> I tracked it down to basic blocks being rescheduled if execution returns
> with TB_EXIT_* flags set
> https://github.com/qemu/qemu/blob/master/cpu-exec.c#L163
> So I capture the PC before execution and, after execution, only emit it
> if the block was not rescheduled.  This gets rid of the duplicate edges in
> the trace, but there is still one problem left that I don't understand!

If you only emit tracing information after the TB has executed and
returned then you will miss the case where we execute half a TB
and take an exception (eg load/store that page faulted, or system call),
because in that case we'll longjmp() out of the generated code. That's
one of the reasons why the tracing we have in upstream traces before
TB execution.

> Sometimes, when running the same program twice in a situation that
> should give the exact same trace, I see differences:
>
>  exec ffffffff8100450a
>  exec ffffffff81091130
> -exec ffffffff812f2930
> + basic block ffffff812f2930 returned with flag 3, setting pc to ffffffff812f285d
> +exec ffffffff812f285d
>  exec ffffffff812f293d
>  exec ffffffff81091142
>
> In this case the basic block wasn't merely restarted.  The PC was updated
> to a different value after the next_tb had the TB_EXIT_REQUESTED flag set.
> The particular basic block in question at ffffffff812f2930 ends with a callq
> to 0xffffffff812f2850 and then falls through to 0xffffffff812f293d.  So I
> would expect to see the "..2930" and "..293d" in the trace, but not the
> "..285d", unless it was just continuing mid-basic block after the exit?

Firstly, are you running with -d nochain to disable QEMU's chaining
of TBs? (If not, then when we chain TBs together you'll only get
exec tracing for the first one, which is a good way to get confused.
The default tracing will tell you when we chain TBs together so you
can sort of unconfuse yourself, but it's easier to just turn it off
if you care about the TB logging.)
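A command line along these lines gives per-TB exec logging with chaining disabled (the `-d` items are the ones named above; the kernel path and machine options are placeholders for your own setup):

```shell
# Log every executed TB, with TB chaining disabled so no exec
# entries are skipped; write the log to qemu-exec.log.
qemu-system-x86_64 -d exec,nochain -D qemu-exec.log \
    -kernel /path/to/bzImage -nographic
```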

> What exactly is going on here?  What is the purpose of TB_EXIT_REQUESTED
> here?

TB_EXIT_REQUESTED means "something asynchronous to execution requested
that we stop executing code". Usually this means "pending interrupt",
though some other things can cause it too. At the start of every TB
we check a flag to see if we need to stop; if the flag is set then
we drop out of generated code with the TB_EXIT_REQUESTED status
(and the main loop then takes care of identifying pending interrupts
or whatever it was that needed our attention.)

If you haven't disabled chaining of TBs, then we might drop out
before executing a chained TB; in this case we need to fix up
the CPU state to correctly represent the fact that we executed
the first TB in the chain but not the second one (or whatever).
This requires (among other things) setting the PC to the guest
address of the start of the TB we didn't execute.

(We may also exit mid-TB if icount is enabled and we're doing
exact instruction counting; in that case if we've said "execute
50 instructions" then we have to stop in the middle of a TB
when we hit the 50 instruction mark. icount isn't the default
though so unless your QEMU command line is enabling it then you
won't be hitting that; this is flag 2 (TB_EXIT_ICOUNT_EXPIRED).)

thanks
-- PMM


