Re: [Qemu-devel] [PATCH v2 3/8] fdc: Introduce fdctrl->phase


From: Dr. David Alan Gilbert
Subject: Re: [Qemu-devel] [PATCH v2 3/8] fdc: Introduce fdctrl->phase
Date: Fri, 29 May 2015 11:34:34 +0100
User-agent: Mutt/1.5.23 (2014-03-12)

* Kevin Wolf (address@hidden) wrote:
> On 29.05.2015 at 11:38, Dr. David Alan Gilbert wrote:
> > * Kevin Wolf (address@hidden) wrote:
> > > On 29.05.2015 at 10:33, Dr. David Alan Gilbert wrote:
> > > > * Markus Armbruster (address@hidden) wrote:
> > > > > "Dr. David Alan Gilbert" <address@hidden> writes:
> > > > > 
> > > > > > * John Snow (address@hidden) wrote:
> > > > > >> 
> > > > > >> 
> > > > > >> On 05/21/2015 09:19 AM, Kevin Wolf wrote:
> > > > > >> > The floppy controller spec describes three different
> > > > > >> > controller phases, which are currently not explicitly
> > > > > >> > modelled in our emulation.  Instead, each phase is
> > > > > >> > represented by a combination of flags in registers.
> > > > > >> > 
> > > > > >> > This patch makes explicit in which phase the controller
> > > > > >> > currently is.
> > > > > >> > 
> > > > > >> > Signed-off-by: Kevin Wolf <address@hidden>
> > > > 
> > > > <snip>
> > > > 
> > > > > >> > +static bool fdc_phase_needed(void *opaque)
> > > > > >> > +{
> > > > > >> > +    FDCtrl *fdctrl = opaque;
> > > > > >> > +
> > > > > >> > +    return reconstruct_phase(fdctrl) != fdctrl->phase;
> > > > > >> > +}
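
(For context, a .needed callback like the one quoted above is typically
wired into a VMState subsection along the following lines.  This is only
a sketch, written in the style where .needed lives in the
VMStateDescription itself; the subsection name and the field are
illustrative assumptions, not necessarily what the actual patch adds.)

    /* Sketch: a state-dependent subsection.  It is only included in the
     * migration stream when the phase cannot be reconstructed from the
     * existing register state, i.e. when fdc_phase_needed() returns true. */
    static const VMStateDescription vmstate_fdc_phase = {
        .name = "fdc/phase",
        .version_id = 1,
        .minimum_version_id = 1,
        .needed = fdc_phase_needed,
        .fields = (VMStateField[]) {
            VMSTATE_UINT8(phase, FDCtrl),   /* assumes phase is a uint8_t */
            VMSTATE_END_OF_LIST()
        }
    };

Migration to an older QEMU that doesn't know the subsection then only
fails when the controller happens to be in a phase that the old code
would reconstruct incorrectly, which is what the rest of the thread is
about.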
> > > > > >
> > > > > > OK, that is one way of doing it which should work, as long
> > > > > > as they match most of the time.
> > > > > >
> > > > > > My preference for subsections is normally to make them just
> > > > > > conditional on machine type; that way backwards migration just
> > > > > > works.  However, if the result of the backwards migration would
> > > > > > be data corruption (which, if I understand what you're saying, it
> > > > > > could be in this case) then it's arguably better to fail migration
> > > > > > in those cases.
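
(The machine-type-conditional approach described here usually means
gating the subsection on a device property that the compat properties of
older machine types turn off, rather than on runtime state.  The sketch
below illustrates that pattern; the migrate_phase flag is hypothetical
and not part of the patch under review.)

    /* Sketch: a machine-type-gated .needed callback.  Old machine types
     * would set the (hypothetical) migrate_phase flag to false via their
     * compat properties, so the subsection is never sent and backwards
     * migration keeps working; new machine types leave it enabled. */
    static bool fdc_phase_needed_compat(void *opaque)
    {
        FDCtrl *fdctrl = opaque;

        /* Gate on configuration, not on the controller's runtime state. */
        return fdctrl->migrate_phase;
    }

The trade-off being debated is exactly this: the compat-property variant
never breaks backwards migration, but it also never fixes the bug for
guests running on existing machine types.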
> > > > > 
> > > > > This isn't a matter of preference.
> > > > > 
> > > > > Subsections were designed to solve a problem we only have because
> > > > > we're not perfect: device model bugs, namely forgetting to migrate
> > > > > a bit of state, or failing to model it in the first place.
> > > > > 
> > > > > Subsections tied to machine types solve an entirely different problem:
> > > > > the old version of the device is just fine, including migration, but
> > > > > we now want a new and improved variant, which needs some more state
> > > > > migrated.  Putting that piece of state into a subsection tied to the
> > > > > new machine type avoids code duplication.
> > > > > 
> > > > > But what do you do for device model bugs serious enough to need fixing
> > > > > even for existing machine types, when the fix needs additional state
> > > > > migrated?  Subsections tied to machine types are *not* a solution
> > > > > there.  That's why this isn't about preference!  It's about having a
> > > > > bug to fix or not.
> > > > 
> > > > My problem is that people keep adding subsections to fix minor device
> > > > bugs because they think breaking migration is irrelevant.  If the bug
> > > > being fixed causes data corruption, then OK, I agree it is probably
> > > > better to break migration; otherwise you need to make a decision about
> > > > whether fixing the device bug during migration is better or worse than
> > > > having the migration fail.
> > > > Some people say 'Migration failure is OK - people just try it again';
> > > > sorry, no, that's useless in environments where the VM has 100s of GB
> > > > of RAM and takes hours to migrate, only for it then to fail, when it
> > > > was urgent that it was migrated off that source host.
> > > 
> > > If this is a problem in practice, it sounds a lot like we need to
> > > improve our handling of that situation in general. Why do we abort
> > > the whole migration when serialising the state fails? Can't we abort
> > > just the completion and switch back to the live state, and then retry a
> > > bit later?
> > 
> > It's not the serialisation that fails, it's the deserialisation on the
> > destination, and our migration stream is currently one-way, so we have
> > no way of signalling back to the source to 'just try that again'.
> > So the migration fails, the source carries on running, and we can only
> > try again from the beginning.
> 
> True, forgot about that.
> 
> So what if the management tool queried the destination for supported
> subsections and passed that as an (optional) argument to the migration
> command on the source?
> 
> Then the source would have full control and could complete migration
> only when no unsupported subsections are needed.

I tried doing something similar, where I had an optional parameter on
the source which was the version of the destination qemu; people really
hated that.

> > > Most, if not all, of the subsections you mentioned that fix minor bugs
> > > fix some state while the device is actively used, and shortly after we
> > > should be back in a state where the subsection isn't needed.
> > 
> > I think once any (non-iterative) device state has been sent we can't
> > necessarily abandon and try again; I'm not confident that the devices
> > would work if we sent one set of state and then tried to reload it later.
> 
> Why would serialising the device state change any device state? Don't
> you already continue running the source after sending non-iterative
> device state when migration fails on the destination?

It's the destination I'm worried about here, not the source; let's say
you have two devices, 'a' and 'b'.  'a' gets serialised, but then 'b'
finds it has to wait, so we return to running the source and sending
pages across.  However, the destination has already loaded the 'a'
device state, so when we serialise again we send a new 'a' device
state; I'm worried that the destination's 'a' state loader would get
upset trying to load the same state twice without somehow resetting it.

> > I think that would need a further pass that went around all the devices
> > and asked 'is it OK to migrate now?', and that would happen before any
> > of the non-iterative devices were migrated.
> 
> Yes, another pass is an option, too.
> 
> > > These are exactly the "fails 1 in 100 migrations" cases you talked about,
> > > and they would be practically fixed if you made a few more attempts to
> > > complete the migration before you let it fail (or perhaps even retry
> > > indefinitely, as long as the user doesn't cancel migration).
> > 
> > This causes a problem if the safe state you believe it's OK to migrate
> > in never happens on some new OS version, because then it will retry
> > indefinitely.  You'd know you didn't need to wait on the newer machine
> > type, but you could still use this trick on the older machine type.
> > (Of course, if you knew you were going to a new QEMU that could
> > always accept the new section then you wouldn't worry about this -
> > but people don't like sending qemu versions.)
> 
> Is retrying indefinitely a worse problem than failing migration, though?
> Anyway, this was just a thought.  You could just as well give up after a
> number of attempts and simply increase that number from the current 1 to
> maybe 10 or so.  (Potentially even configurable, with 'indefinitely'
> being one option.)

No, it's probably better than failing the migration, but what I'm worried
about is the case where, as with your code, the destination has the new
version of qemu and could cope with the new subsections, so there is no
need to wait.

Dave

> 
> Kevin
--
Dr. David Alan Gilbert / address@hidden / Manchester, UK


