
Re: [Qemu-devel] [PATCH v2 3/5] hpet 'driftfix': add fields to HPETTimer and VMStateDescription


From: Glauber Costa
Subject: Re: [Qemu-devel] [PATCH v2 3/5] hpet 'driftfix': add fields to HPETTimer and VMStateDescription
Date: Mon, 11 Apr 2011 10:57:48 -0300

On Mon, 2011-04-11 at 08:47 -0500, Anthony Liguori wrote:
> On 04/11/2011 08:39 AM, Glauber Costa wrote:
> > On Mon, 2011-04-11 at 08:10 -0500, Anthony Liguori wrote:
> >> On 04/11/2011 04:08 AM, Avi Kivity wrote:
> >>> On 04/11/2011 12:06 PM, Ulrich Obergfell wrote:
> >>>>>> vmstate_hpet_timer = {
> >>>>>>     VMSTATE_UINT64(fsb, HPETTimer),
> >>>>>>     VMSTATE_UINT64(period, HPETTimer),
> >>>>>>     VMSTATE_UINT8(wrap_flag, HPETTimer),
> >>>>>> +   VMSTATE_UINT64_V(saved_period, HPETTimer, 3),
> >>>>>> +   VMSTATE_UINT64_V(ticks_not_accounted, HPETTimer, 3),
> >>>>>> +   VMSTATE_UINT32_V(irqs_to_inject, HPETTimer, 3),
> >>>>>> +   VMSTATE_UINT32_V(irq_rate, HPETTimer, 3),
> >>>>>> +   VMSTATE_UINT32_V(divisor, HPETTimer, 3),
> >>>>> We ought to be able to use a subsection keyed off of whether any
> >>>>> ticks are currently accumulated, no?
> >>>>
> >>>> Anthony,
> >>>>
> >>>> I'm not sure if I understand your question correctly. Are you suggesting
> >>>> to migrate the driftfix-related state conditionally, i.e. only if there
> >>>> are any ticks accumulated in 'ticks_not_accounted' and 'irqs_to_inject'?
> >>>>
> >>>> The size of the driftfix-related state is 28 bytes per timer and we have
> >>>> 32 timers per HPETState, i.e. 896 additional bytes per HPETState. With a
> >>>> maximum number of 8 HPET blocks (HPETState), this amounts to 7168 bytes.
> >>>> Hence, unconditional migration of the driftfix-related state should not
> >>>> cause significant additional overhead.
> >>>>
> >>> It's not about overhead.
> >>>
> >>>> Maybe I missed something. Could you please explain what benefit you see
> >>>> in using a subsection?
> >>> In the common case of there being no drift, you can migrate from a
> >>> qemu that supports driftfix to a qemu that doesn't.
> >>>
> >> Right, subsections are a trick.  The idea is that when you introduce
> >> new state for a device model that is not always going to be set, then
> >> at migration time you detect whether the state is set; if it is not,
> >> instead of sending empty versions of that state (e.g. missed_ticks=0)
> >> you just don't send the new state at all.
> >>
> >> This means that you can migrate to an older version of QEMU provided the
> >> migration would work correctly.
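
For context, the trick described above maps onto the VMState API of this
period roughly as follows: a subsection is a nested VMStateDescription
paired with a .needed callback that is consulted at save time. The sketch
below is illustrative only; the function and subsection names are
hypothetical, not taken from the actual patch.

/* Illustrative sketch of the subsection trick (names hypothetical). */
static bool hpet_driftfix_needed(void *opaque)
{
    HPETTimer *t = opaque;

    /* Send the subsection only when there is drift state worth sending;
     * in the common case the stream then stays readable by an older
     * QEMU that knows nothing about driftfix. */
    return t->ticks_not_accounted != 0 || t->irqs_to_inject != 0;
}

static const VMStateDescription vmstate_hpet_driftfix = {
    .name = "hpet_timer/driftfix",
    .version_id = 1,
    .minimum_version_id = 1,
    .fields = (VMStateField[]) {
        VMSTATE_UINT64(saved_period, HPETTimer),
        VMSTATE_UINT64(ticks_not_accounted, HPETTimer),
        VMSTATE_UINT32(irqs_to_inject, HPETTimer),
        VMSTATE_UINT32(irq_rate, HPETTimer),
        VMSTATE_UINT32(divisor, HPETTimer),
        VMSTATE_END_OF_LIST()
    }
};

/* Referenced from vmstate_hpet_timer via its .subsections array: */
static const VMStateSubsection hpet_timer_subsections[] = {
    {
        .vmsd = &vmstate_hpet_driftfix,
        .needed = hpet_driftfix_needed,
    },
    { /* end of list */ }
};
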
> > Using subsections and testing for the hpet option being disabled vs.
> > enabled is fine. But checking for the existence of drift, as you
> > suggested (or at least as I understood you), is very tricky: the drift
> > is expected to change many times during the guest's lifetime, and would
> > make our migration predictability something Heisenberg would be proud of.
> 
> Is this true?  I would expect it to be very tied to workloads.  For idle
> workloads, you should never have accumulated missed ticks, whereas with
> heavy workloads you always will.
> 
> Is that not correct?
Yes, it is, but we lose a lot of reliability by tying migration to the
workload. Given that we still have to start qemu the same way on both
sides, we end up with a situation in which migration is possible at time
t and not at time t+1.

I'd rather have the subsection sent at all times whenever the driftfix
option is enabled.
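
In that scheme the same .needed callback keys off the configuration
rather than the amount of accumulated drift, along these lines (the
'driftfix' field is an assumption, standing in for whatever property
enables the feature):

/* Alternative .needed: gate on the option, not the workload, so the
 * answer is stable for the whole lifetime of the guest. */
static bool hpet_driftfix_needed(void *opaque)
{
    HPETTimer *t = opaque;

    /* 't->state->driftfix' stands for the per-HPETState property that
     * enables drift compensation; the exact field name is hypothetical. */
    return t->state->driftfix != 0;
}

Whether a stream carries the subsection then depends only on how qemu was
started on both sides, never on the instantaneous workload.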



