qemu-devel

Re: [Qemu-devel] [PATCH 2/2] [RFC] time: refactor QEMU timer to use GHRTimer


From: Edgar E. Iglesias
Subject: Re: [Qemu-devel] [PATCH 2/2] [RFC] time: refactor QEMU timer to use GHRTimer
Date: Tue, 23 Aug 2011 11:07:02 +0200
User-agent: Mutt/1.5.21 (2010-09-15)

On Tue, Aug 23, 2011 at 10:12:05AM +0200, Paolo Bonzini wrote:
> On 08/22/2011 10:28 PM, Jan Kiszka wrote:
> >   - QEMU_CLOCK_VIRTUAL: Without -icount, same as above, but stops when
> >     the guest is stopped. The offset that compensates for stopped
> >     time is based on the TSC, not sure why. With -icount, things get
> >     more complicated, Paolo had some nice explanations for the details.
> 
> The TSC is actually out of the picture.  However, it is easy to get 
> confused, because the same code handles stopping both the vm_clock and 
> the TSC when the guest is paused.  cpu_get_ticks handles the TSC, 
> cpu_get_clock handles the clock:
> 
> Removing some "uninteresting" details, you have:
> 
> /* return the host CPU cycle counter and handle stop/restart */
> int64_t cpu_get_ticks(void)
> {
>      if (!vm_running) {
>          return timers_state.cpu_ticks_offset;
>      } else {
>          int64_t ticks;
>          ticks = cpu_get_real_ticks();
>          return ticks + timers_state.cpu_ticks_offset;
>      }
> }
> 
> /* return the host CPU monotonic timer and handle stop/restart */
> static int64_t cpu_get_clock(void)
> {
>      int64_t ti;
>      if (!vm_running) {
>          return timers_state.cpu_clock_offset;
>      } else {
>          ti = get_clock();
>          return ti + timers_state.cpu_clock_offset;
>      }
> }
> 
> which use the same algorithm but with different base clocks (TSC vs. 
> CLOCK_MONOTONIC).
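The stop/restart bookkeeping that makes the offset in the two functions above work can be spelled out with a small, self-contained sketch. This models QEMU's pause/resume handling (cpu_disable_ticks/cpu_enable_ticks in the real tree); the names fake_host_clock, cpu_disable_clock and cpu_enable_clock here are simplified stand-ins, not the actual QEMU API:

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical stand-in for the host clock (CLOCK_MONOTONIC in QEMU). */
static int64_t fake_host_clock;
static int64_t get_clock(void) { return fake_host_clock; }

static bool vm_running;
static int64_t cpu_clock_offset;

/* Same shape as cpu_get_clock above: a frozen value while stopped,
 * host clock plus offset while running. */
static int64_t cpu_get_clock(void)
{
    if (!vm_running) {
        return cpu_clock_offset;   /* value captured at stop time */
    }
    return get_clock() + cpu_clock_offset;
}

/* On stop, fold the current reading into the offset, freezing the clock. */
static void cpu_disable_clock(void)
{
    cpu_clock_offset = cpu_get_clock();
    vm_running = false;
}

/* On restart, subtract the new host reading, so the time spent stopped
 * never shows up in cpu_get_clock. */
static void cpu_enable_clock(void)
{
    cpu_clock_offset -= get_clock();
    vm_running = true;
}
```

The same pattern applies to the TSC path, just with cpu_get_real_ticks() as the base clock instead of get_clock().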
> 
> With -icount, things get indeed more complicated.  I'll cover only the 
> iothread case since all my attempts at understanding the non-iothread 
> case failed. :)  The "-icount N" vm_clock has nanosecond resolution just 
> like the normal vm_clock, but those are not real nanoseconds.  While the 
> CPU is running, each instruction increments the vm_clock by 2^N 
> nanoseconds (yes, this is completely bollocks for SMP. :).  When the CPU 
> is not running, instead, the vm_clock follows CLOCK_MONOTONIC; the 
> rationale is there in qemu-timer.c.
> 
> "-icount auto" is the same, except that we try to keep vm_clock roughly 
> the same as CLOCK_MONOTONIC by oscillating the clock frequency between 
> "-icount N" values that are more representative of the actual execution 
> frequency.
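The "-icount N" accounting while the CPU is executing reduces to a shift: N instructions-to-nanoseconds scaling with each instruction worth 2^N ns. A minimal sketch, with icount_shift and icount_to_ns as assumed names for illustration (the real code keeps an equivalent shift in qemu-timer.c, and "-icount auto" varies it at runtime):

```c
#include <stdint.h>

/* The N from "-icount N": each executed instruction accounts for
 * 2^N virtual nanoseconds.  Fixed here; "-icount auto" would adjust
 * this value to track CLOCK_MONOTONIC. */
static int icount_shift = 3;   /* e.g. "-icount 3": 8 ns per instruction */

/* Virtual nanoseconds contributed by a run of executed instructions. */
static int64_t icount_to_ns(int64_t insns)
{
    return insns << icount_shift;
}
```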
> 
> On top of this, the VM has to do two things in icount mode. The first is 
> to stop execution at the next vm_clock deadline, which means breaking 
> translation blocks after executing the appropriate number of 
> instructions; this is quite obvious.  The second is to stop execution of 
> a TB after any MMIO instruction, in order to recalculate deadlines if 
> necessary.  The latter is responsible for most of the icount black magic 
> spread all over the tree.  However, it's not that bad: long term, it 
> sounds at least plausible to reuse this machinery to run the CPU threads 
> outside the iothread lock (and only take it when doing MMIO, just like 
> KVM does).
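The first point above, stopping at the next vm_clock deadline, amounts to converting the time remaining until the deadline into an instruction budget for the CPU loop; the TB is then broken once the budget runs out. A sketch under the same 2^N-ns-per-instruction assumption, with icount_budget as a hypothetical name (the real tree does an equivalent round-up when preparing the count):

```c
#include <stdint.h>

static int icount_shift = 3;   /* "-icount 3": one instruction == 8 ns */

/* How many instructions may run before the next vm_clock deadline.
 * Rounded up so the final instruction carries the clock to (or just
 * past) the deadline and the pending timer actually fires. */
static int64_t icount_budget(int64_t now_ns, int64_t deadline_ns)
{
    int64_t delta = deadline_ns - now_ns;
    if (delta <= 0) {
        return 0;   /* deadline already due: run no instructions */
    }
    return (delta + (1 << icount_shift) - 1) >> icount_shift;
}
```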

Interesting idea :)

Cheers
