Re: [Qemu-devel] CPUID feature bits not saved with migration


From: Jamie Lokier
Subject: Re: [Qemu-devel] CPUID feature bits not saved with migration
Date: Wed, 22 Jul 2009 15:46:07 +0100
User-agent: Mutt/1.5.13 (2006-08-11)

Andre Przywara wrote:
> (Sorry for the late reply, I had some mail troubles)
> 
> Jamie Lokier wrote:
> >Andre Przywara wrote:
> >>Jamie Lokier wrote:
> >>>Anthony Liguori wrote:
> >>>>It's unclear what to do about -cpu host.  If we did migrate cpuid 
> >>>>values, then -cpu would effectively be ignored after an incoming 
> >>>>migration.
> >>>The new host might not support all the cpuid features of the old host,
> >>>whether by -cpu host or explicit cpuid.  What happens then?
> >>If you plan to migrate, you should think of this in advance.
> >
> >In my experience so far, for small sites, you don't plan migration until
> >2-3 years after starting the VM, because it's only then that you realise
> >your host hardware is quite old, buy a replacement to consolidate, and
> >find you are still running a VM that you didn't know would still be
> >mission critical years later.
>
> That is one use-case for live migration. Another would be a migration
> pool with lots of machines, each running some VMs. If one host is heavily
> loaded, you can migrate to a less loaded one. Think of a hosting provider
> or a cloud-like environment.

I realise that, and I was objecting to the apparent assumption that a
pool type environment is the only use-case for migration.

> >At least, that's been my experience so far.  I've "cold migrated" a
> >few VMs, and in some cases from non-virtual machines to VMs.  None of
> >these could be planned when the guest was first installed, especially
> >the ones where it wasn't realised the guest would outlive the host
> >hardware.
> Fortunately it seems that newer CPUs only _add_ CPUID bits, so this
> should not be a problem.

Again, not my experience.  They only add CPUID bits when you buy CPUs
from the same manufacturer.  OK, there are only three manufacturers, but
they are different.

> >I have to say, unfortunately hot migration has never been an option
> >because the version of KVM running on the new host is invariably
> >incompatible with the KVM running on the old host.

> So far I have only seen problems like this if the target host KVM
> version is older than the source one. Some of these issues could be
> overcome by putting a translator application between source and target,
> but I am not sure whether the effort is worth the results.
> What kind of issues do you see? Are you migrating from newer KVMs to
> older ones?

I've never migrated to an older KVM.  Or to be honest, to a newer one.
I've tried loadvm of a previous savevm, and that didn't work from an
older KVM to a newer one.

Since then I think I've understood from discussion on this list that
cross-version migration (loadvm or migrate) in either direction is not
supported, is not worth supporting, and should not be expected to work.

> >But if guest configuration is ever included in the saved state for
> >migration, migration will be really easy.  I hope it's just as easy to do
> >"cold migration".
> Agreed. We should have a savevm section transferring the guest config.

I'm glad it's not just me :-)

> >Async: Do we save RAM state across reboots normally?  I know of OSes
> >which expect at least some part of RAM to survive reboots, so killing
> >the VM and restarting on another host would change the behaviour,
> >compared with rebooting locally; that's not transparent migration,
> >it's a subtle, unexpected behaviour change.  Unfortunately doing the
> >right thing involves savevm, which pauses the guest for a long time.
> >The pause comes from saving and loading RAM, something which migration
> >handles well.
> Have you seen any real life problems with this? What are these OSes?

I've wanted to periodically snapshot a live server while it was
running, as a sort of backup against screwups, and I tried using
savevm.

It was running Windows Server 2003 at the time; now it's Server 2008,
and the host is Ubuntu Linux 8.10 with KVM built from source.  But I
don't think any of those versions are relevant.

It was unacceptable to invoke "savevm" periodically, because it took
some 20 seconds or more (I don't remember exactly) with the VM paused;
that was a live server.  I'm guessing that must have been the time to
save the RAM contents.

From that I concluded that savevm+loadvm would take >40 seconds of
downtime to transfer across hosts.  Which isn't awful, but using the
migration facility would clearly be much less downtime.

> >There's also the small matter of migration having a totally different
> >interface compared with savevm right now, with savevm requiring a
> >dummy qcow2 disk while migration transfers control across a network
> >with no temporary file.
> You are right, that is pretty unfortunate. I worked around this
> limitation by using the exec: prefix with migrate to let a shell script
> dump the migration stream to disk; with the same trick you can reload
> the state again. That worked pretty well for me in the past.

That's nice; I like that.  Maybe savevm could just use it ;-)
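
For reference, roughly how that trick looks (monitor syntax as I
understand it; the gzip compression and file path are just examples):

    # on the source, from the monitor: pause the guest so RAM isn't
    # re-sent while it changes, then stream the state through gzip to a file
    (qemu) stop
    (qemu) migrate "exec:gzip -c > /tmp/vm-state.gz"

    # later, start the target with the same machine configuration and
    # feed the saved stream back in
    qemu-system-x86_64 <same options as the source> \
        -incoming "exec:gzip -dc /tmp/vm-state.gz"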

> BTW, do you know of any x86 machines which really allow physical CPU 
> hotplugging?

I have the impression the ES7000 does, but that's based on mailing
list postings which never made it completely clear.

-- Jamie



