From: Dor Laor
Subject: Re: [Qemu-devel] [RESEND][PATCH 0/3] Fix guest time drift under heavy load.
Date: Wed, 05 Nov 2008 14:45:05 +0200
User-agent: Thunderbird 2.0.0.16 (X11/20080723)
Gleb Natapov wrote:
> It is the same issue, just another scenario.
>
> On Fri, Oct 31, 2008 at 02:17:19PM -0500, Anthony Liguori wrote:
>> Gleb Natapov wrote:
>>> Qemu device emulation for timers might be inaccurate and cause
>>> coalescing of several IRQs into one. It happens when the load on the
>>> host is high and the guest does not manage to ack the previous IRQ.
>>> The problem can be reproduced by copying a big file, or many small
>>> ones, inside a Windows guest. When you do that, the guest clock
>>> starts to lag behind the host one. The first patch in the series
>>> changes the qemu_irq subsystem to return IRQ delivery status
>>> information. If a device is notified that IRQs were lost, it can
>>> regenerate them as needed. The following two patches add IRQ
>>> regeneration to the PIC and RTC devices.
>>
>> I don't think any of the problems raised when this was initially
>> posted [have been addressed].
>
> So? I raise them now. Have you tried the suggested scenario, and were
> you able to reproduce the problem?

Gleb, can you please provide more details:

>> Further, I don't think that always playing catch-up with interrupts
>> is the best course of action.
>
> Agree. Playing catch-up with interrupts is not always the best course
> of action. But sometimes there is no other choice.
>
>> As I've said repeatedly in the past, any sort of time drift fix needs
>> to have a lot of repeatable data posted with it. How much does this
>> improve things with Windows?
>
> The time drift is eliminated. If there is a spike in load, time may
> slow down, but after that it catches up (this happens only under very
> high load, though).

- What's the host's kernel version exactly (including whether high-res
  timers and dynticks are configured)?
- What's the Windows version? Is it standard HAL (PIT), or ACPI (RTC),
  or both?
- The detailed scenario you use (example: I copied the entire
  c:/windows directory, etc.)
- Without the patch, what is the time drift after x seconds on the
  host?
- With the patch, is there a drift? Is there increased CPU consumption,
  etc.?

Btw: I ack the whole thing, including the problem, the scenario and the
solution.
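The mechanism the cover letter describes for patch 1/3 — the IRQ injection path reporting delivery status so a timer device can count and later regenerate coalesced ticks — could be sketched roughly as follows. This is a simplified toy model with hypothetical names, not the actual qemu_irq code:

```python
# Toy model (hypothetical names, not QEMU code): raising an IRQ reports
# whether it was delivered or coalesced with a still-pending one, and
# the timer device replays one owed tick on each guest ack.

DELIVERED, COALESCED = "delivered", "coalesced"

class TimerIRQ:
    def __init__(self):
        self.pending = False    # previous IRQ not yet acked by the guest
        self.coalesced = 0      # ticks lost to coalescing, owed to guest

    def raise_irq(self):
        if self.pending:
            self.coalesced += 1     # guest never saw this tick
            return COALESCED
        self.pending = True
        return DELIVERED

    def ack(self):
        self.pending = False
        if self.coalesced:          # regenerate one lost tick per ack
            self.coalesced -= 1
            self.raise_irq()

t = TimerIRQ()
assert t.raise_irq() == DELIVERED   # tick 1 reaches the guest
assert t.raise_irq() == COALESCED   # guest busy: tick merged
assert t.raise_irq() == COALESCED
assert t.coalesced == 2

t.ack()                             # each ack re-injects one owed tick
assert t.pending and t.coalesced == 1
t.ack()
assert t.pending and t.coalesced == 0
```

This also illustrates why the guest clock catches up only gradually after a load spike: lost ticks are repaid one per ack, not all at once.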
The first '1/3' was not received by my mailer. It will probably also
drift with clock=pit on the guest kernel cmdline.

>> How does having a high resolution timer in the host affect the
>> problem to begin with?
>
> My test machine has a relatively recent kernel that uses high
> resolution timers for timekeeping. Also, the problem is that the guest
> does not get enough time to process the injected interrupt; how can a
> high-res timer help here?
>
>> How do Linux guests behave with this?
>
> Linux guests don't use the PIT or RTC for timekeeping. They are
> completely unaffected by these patches.
>
>> Even the Windows PV spec calls out three separate approaches to
>> dealing with missed interrupts and provides an interface for the host
>> to query the guest as to which one should be used. I don't think any
>> solution that uses a single technique is going to be correct.
>
> That is what I found in the Microsoft docs:
>
>     If a virtual processor is unavailable for a sufficiently long
>     period of time, a full timer period may be missed. In this case,
>     the hypervisor uses one of two techniques. The first technique
>     involves timer period modulation, in effect shortening the period
>     until the timer "catches up". If a significant number of timer
>     signals have been missed, the hypervisor may be unable to
>     compensate by using period modulation. In this case, some timer
>     expiration signals may be skipped completely. For timers that are
>     marked as lazy, the hypervisor uses a second technique for dealing
>     with the situation in which a virtual processor is unavailable for
>     a long period of time. In this case, the timer signal is deferred
>     until this virtual processor is available. If it doesn't become
>     available until shortly before the next timer is due to expire, it
>     is skipped entirely.
>
> The first technique is what I am trying to introduce with this patch
> series.
>
> --
>         Gleb.
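The "timer period modulation" technique from the quoted Microsoft text — shortening the tick period until the guest catches up — can be sketched like this. The halving policy, the 1 ms floor, and all numbers here are illustrative assumptions, not anything specified by the docs or the patches:

```python
# Illustrative sketch of timer period modulation: while ticks are owed
# to the guest, run the timer at a shortened period (here: half the
# nominal period, with a 1 ms floor); return to the nominal period once
# the guest has caught up. The policy and numbers are assumptions.

def next_period(nominal_ms, ticks_behind):
    """Next timer period: nominal when on time, shortened while
    ticks are still owed to the guest."""
    if ticks_behind == 0:
        return nominal_ms
    return max(nominal_ms // 2, 1)

# A guest that fell 3 ticks behind gets faster ticks until caught up.
nominal = 10          # nominal tick period, ms
behind = 3            # ticks missed while the vCPU was unavailable
periods = []
while behind:
    periods.append(next_period(nominal, behind))
    behind -= 1       # each delivered tick repays one missed one
periods.append(next_period(nominal, behind))

assert periods == [5, 5, 5, 10]   # shortened until caught up, then nominal
```

The second (lazy) technique from the quoted text — deferring or skipping the signal entirely — is deliberately not modeled here, since the patch series only implements the first.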