
Re: [Qemu-devel] [PATCH] kvmclock: clarify usage of cpu_clean_all_dirty


From: Marcelo Tosatti
Subject: Re: [Qemu-devel] [PATCH] kvmclock: clarify usage of cpu_clean_all_dirty
Date: Tue, 16 Sep 2014 15:10:21 -0300
User-agent: Mutt/1.5.21 (2010-09-15)

On Tue, Sep 16, 2014 at 06:22:15PM +0200, Paolo Bonzini wrote:
> Il 16/09/2014 18:07, Marcelo Tosatti ha scritto:
> >> > The cpu_synchronize_all_states() call in kvmclock_vm_state_change() is
> >> > needed to make env->tsc up to date with the value on the source, right?
> > It's there to make sure the pair
> > 
> > env->tsc, s->clock = data.clock
> > 
> > are sampled at points close in time.
> 
> Ok.  But why are they not close in time?
> 
> Could we have the opposite situation where env->tsc is loaded a long
> time _after_ s->clock, and something breaks?
> 
> Also, there is no reason to do kvmclock_current_nsec() during normal
> execution of the VM.  Is the s->clock sent by the source ever good
> across migration, and could the source send kvmclock_current_nsec()
> instead of whatever KVM_GET_CLOCK returns?

guest clock read = pvclock.system_timestamp + (rdtsc() - pvclock.tsc)

The host kernel updates pvclock.system_timestamp in certain situations,
such as guest initialization. With the master clock scheme,
pvclock.system_timestamp is only updated at guest initialization.

If the TSC runs faster than the host system clock, you cannot do
the following on the destination:

pvclock.system_timestamp = ioctl(KVM_GET_CLOCK)
pvclock.tsc = rdtsc()

guest clock read = pvclock.system_timestampOLD + (rdtsc() - pvclock.tsc)

Because the effective clock the guest was reading was not
pvclock.system_timestamp but the TSC, which runs at a higher
frequency. If you do that, the guest clock goes backward.

Q: could the source send kvmclock_current_nsec() 
instead of whatever KVM_GET_CLOCK returns?

Well, no, because there are other users of KVM_GET_CLOCK, such as the
Hyper-V clock.

> I don't understand this code very well, but it seems to me that the
> migration handling and VM state change handler are mixed up...

Again, suggestions are welcome.

> 
> Paolo
> 
> >> > But if the synchronize_all_states+clean_all_dirty pair runs on the
> >> > source, why is the cpu_synchronize_all_states() call in
> >> > qemu_savevm_state_complete() not enough?  It runs even later than
> >> > kvmclock_vm_state_change.
> > Because of the "pair of time values" explanation above.
> 


