
Re: [Qemu-devel] vm state save/restore question


From: Alexander Graf
Subject: Re: [Qemu-devel] vm state save/restore question
Date: Wed, 20 Jun 2012 00:27:06 +0200

On 19.06.2012, at 23:51, Benjamin Herrenschmidt wrote:

> On Tue, 2012-06-19 at 23:48 +0200, Alexander Graf wrote:
>>> We could keep track manually maybe using some kind of dirty bitmap of
>>> changes to the hash table but that would add overhead to things like
>>> H_ENTER.
>> 
>> Only during migration, right?
> 
> True. It will be an "interesting" user/kernel API though ... I'll give it
> more thought.

Well, all we need are two userspace pointers in an ENABLE_CAP call, and maybe a 
DISABLE_CAP call to disable the syncing again.

void *htab;
u8 *htab_dirty;

ENABLE_CAP(KVM_PPC_SYNC_HTAB, htab, htab_dirty);

which would then make all of the current GVA->GPA entries visible through the 
htab pointer. That view is always current: H_ENTER and friends update it in 
parallel with the GVA->HPA htab. We don't have to keep H_ENTER super fast 
during migration, so we can easily go to virtual mode for that one. Any time 
an entry changes, the dirty bitmap gets updated.
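A minimal sketch of the bookkeeping this implies. The names and sizes here (HTAB_ENTRIES, htab_write_entry) are made up for illustration, not the actual KVM code; the point is only that any path that modifies an entry, H_ENTER and friends included, also sets the matching bit in htab_dirty:

```c
#include <stdint.h>

#define HTAB_ENTRIES 2048            /* hypothetical table size */

struct hpte {                        /* simplified 16-byte hash PTE */
    uint64_t v;                      /* valid/AVPN word */
    uint64_t r;                      /* RPN/permissions word */
};

static struct hpte htab[HTAB_ENTRIES];
static uint8_t htab_dirty[HTAB_ENTRIES / 8];   /* one bit per entry */

/* Write an entry and mark it dirty, as H_ENTER and friends would have
 * to do while syncing is enabled. */
static void htab_write_entry(unsigned idx, uint64_t v, uint64_t r)
{
    htab[idx].v = v;
    htab[idx].r = r;
    htab_dirty[idx / 8] |= 1u << (idx % 8);
}
```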

Usually, migration ends in killing the VM. But we shouldn't rely on that. 
Instead, we should provide an API to stop the synced mode again. Maybe

  ENABLE_CAP(KVM_PPC_SYNC_HTAB, NULL, NULL);

:)
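From the QEMU side, that could look like the sketch below. The struct layout and the KVM_ENABLE_CAP ioctl number are copied locally so the snippet is self-contained; the KVM_PPC_SYNC_HTAB capability number is made up, since the thread only proposes the capability:

```c
#include <stdint.h>
#include <string.h>
#include <sys/ioctl.h>

/* Simplified local copy of struct kvm_enable_cap and the ioctl number
 * from <linux/kvm.h>. */
struct kvm_enable_cap {
    uint32_t cap;
    uint32_t flags;
    uint64_t args[4];
    uint8_t  pad[64];
};

#define KVMIO             0xAE
#define KVM_ENABLE_CAP    _IOW(KVMIO, 0xa3, struct kvm_enable_cap)
#define KVM_PPC_SYNC_HTAB 0x1000   /* hypothetical capability number */

/* Enable the proposed HTAB sync; passing NULL, NULL turns it off again. */
static int htab_sync_set(int vm_fd, void *htab, uint8_t *htab_dirty)
{
    struct kvm_enable_cap cap;

    memset(&cap, 0, sizeof(cap));
    cap.cap = KVM_PPC_SYNC_HTAB;
    cap.args[0] = (uintptr_t)htab;
    cap.args[1] = (uintptr_t)htab_dirty;

    return ioctl(vm_fd, KVM_ENABLE_CAP, &cap);
}
```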

> I need to understand better how to do that vs. qemu save/restore though.
> I.e., that means we can't just save the hash table in bulk and reload it;
> we'd have to save bits of it at a time or something like that, no? Or do
> we save it once, then save the diff at the end?

The best way would be to throw it into the same bucket as RAM. At the end of 
the day, it really is no different. It'd then be synced during every iteration 
of the migration.
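Treating it like RAM would mean running something like the loop below once per migration iteration. This is a sketch under the naming assumptions from above (HTAB_ENTRIES and the send callback are placeholders for however QEMU would put entries on the wire): walk the kernel-maintained dirty bitmap, send each dirty entry, clear its bit, and report how many entries went out so the caller can decide when the table has converged.

```c
#include <stddef.h>
#include <stdint.h>

#define HTAB_ENTRIES 2048   /* hypothetical table size */

struct hpte {               /* simplified 16-byte hash PTE */
    uint64_t v, r;
};

/* One migration pass over the HTAB, mirroring the RAM dirty-log loop. */
static size_t htab_sync_iteration(struct hpte *htab, uint8_t *dirty,
                                  void (*send_hpte)(unsigned idx,
                                                    const struct hpte *e))
{
    size_t sent = 0;

    for (unsigned idx = 0; idx < HTAB_ENTRIES; idx++) {
        if (dirty[idx / 8] & (1u << (idx % 8))) {
            if (send_hpte)
                send_hpte(idx, &htab[idx]);   /* ship the entry */
            dirty[idx / 8] &= ~(1u << (idx % 8));
            sent++;
        }
    }
    return sent;
}
```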


Alex



