
Re: [Qemu-devel] [PATCH v4 0/6] save/restore on Xen


From: Anthony Liguori
Subject: Re: [Qemu-devel] [PATCH v4 0/6] save/restore on Xen
Date: Mon, 23 Jan 2012 10:54:15 -0600
User-agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.9.2.23) Gecko/20110922 Lightning/1.0b2 Thunderbird/3.1.15

On 01/23/2012 10:46 AM, Stefano Stabellini wrote:
On Mon, 23 Jan 2012, Anthony Liguori wrote:
On 01/23/2012 04:47 AM, Stefano Stabellini wrote:
On Fri, 20 Jan 2012, Jan Kiszka wrote:
On 2012-01-20 18:20, Stefano Stabellini wrote:
Hi all,
this is the fourth version of the Xen save/restore patch series.
We have been discussing this issue for quite a while on #qemu and
qemu-devel:


http://marc.info/?l=qemu-devel&m=132346828427314&w=2
http://marc.info/?l=qemu-devel&m=132377734605464&w=2


A few different approaches were proposed to achieve the goal
of a working save/restore with upstream Qemu on Xen; however, after
prototyping some of them I came up with yet another solution, one that I
think leads to the best results with the least amount of code
duplication and ugliness.
I am far from saying that this patch series is an example of elegance and
simplicity, but it is closer to acceptable than anything else I have seen
so far.

What's new is that Qemu is going to keep track of its own physmap on
xenstore, so that Xen can be fully aware of the changes Qemu makes to
the guest's memory map at any time.
This is all handled internally by Xen, or by the Xen support code in
Qemu, and can be used to solve our save/restore framebuffer problem.
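
As an illustration of the idea (a minimal sketch, not the exact code from
the series; the path layout, the function name and the xen_domid global
are assumptions here):

    #include <stdio.h>
    #include <string.h>
    #include <inttypes.h>
    #include <xenstore.h>

    extern int xen_domid;    /* domid of the guest we serve */

    /* Record one physmap mapping (guest start address -> backing ram
     * offset) under a per-domain xenstore path, so that the toolstack
     * can replay Qemu's memory-map changes on the restore side. */
    static int xen_record_physmap(struct xs_handle *xs,
                                  uint64_t start_addr,
                                  uint64_t phys_offset)
    {
        char path[128], value[17];

        snprintf(path, sizeof(path),
                 "/local/domain/0/device-model/%d/physmap/%"PRIx64
                 "/start_addr", xen_domid, start_addr);
        snprintf(value, sizeof(value), "%"PRIx64, phys_offset);

        /* xs_write() returns false on failure */
        if (!xs_write(xs, XBT_NULL, path, value, strlen(value))) {
            return -1;
        }
        return 0;
    }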

From the Qemu common code POV, we still need to avoid saving the guest's
ram when running on Xen, and we need to avoid resetting the videoram on
restore (the latter is a benefit to the generic Qemu case too, because it
saves a few cpu cycles).
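
For the first half of that, a sketch of what the common-code change could
look like, assuming the usual xen_enabled() predicate and the 1.0-era
"ram" registration in vl.c (the exact register_savevm_live() signature
varies across versions):

    /* Only register the RAM save/load handlers when Qemu itself owns
     * guest memory; on Xen the toolstack saves and restores RAM, so
     * savevm must not touch it. */
    if (!xen_enabled()) {
        register_savevm_live(NULL, "ram", 0, 4, NULL, ram_save_live,
                             NULL, ram_load, NULL);
    }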

For my understanding: refraining from the memset is required because the
already restored vram would otherwise be overwritten?

Yep

Or what is the ordering
of init, RAM restore, and initial device reset now?

RAM restore (done by Xen)
physmap rebuild (done by xen_hvm_init in qemu)
pc_init()
qemu_system_reset()
load_vmstate()

That's your problem.  You don't want to do load_vmstate().  You just want to
load the device model, not RAM.

True


Why not introduce new Xen-specific commands like I suggested on IRC?

Introducing a Xen-specific command is not an issue, but I didn't want to
duplicate all the functionality currently in savevm.c.

The code is fairly reusable since live migration and savevm use the same internal bits. I think you would just need another version of qemu_loadvm_state(). That function is only a hundred lines or so, so you shouldn't be duplicating much at all.
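
A hypothetical sketch of such a variant, as a new savevm.c function: it
keeps the section framing of qemu_loadvm_state() but only accepts full,
non-RAM sections (the function name and the is_ram flag are illustrative,
see the "device" flag discussed below):

    static int qemu_load_device_state(QEMUFile *f)
    {
        uint8_t section_type;

        if (qemu_get_be32(f) != QEMU_VM_FILE_MAGIC ||
            qemu_get_be32(f) != QEMU_VM_FILE_VERSION) {
            return -EINVAL;
        }

        while ((section_type = qemu_get_byte(f)) != QEMU_VM_EOF) {
            uint32_t instance_id, version_id;
            char idstr[256];
            int len, ret;
            SaveStateEntry *se;

            /* a device-only stream has no live (START/PART/END)
             * sections, only full ones */
            if (section_type != QEMU_VM_SECTION_FULL) {
                return -EINVAL;
            }
            qemu_get_be32(f);                 /* section id, unused */
            len = qemu_get_byte(f);
            qemu_get_buffer(f, (uint8_t *)idstr, len);
            idstr[len] = 0;
            instance_id = qemu_get_be32(f);
            version_id = qemu_get_be32(f);

            se = find_se(idstr, instance_id); /* savevm.c's lookup */
            if (se == NULL || se->is_ram) {
                return -EINVAL;  /* Xen restored guest RAM already */
            }
            ret = vmstate_load(f, se, version_id);
            if (ret < 0) {
                return ret;
            }
        }
        return 0;
    }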

You should have a separate load_device_state() function and mark anything that
is RAM as RAM when doing savevm registration.  Better yet, mark devices as
devices since that's what you really care about.

I dropped this approach because I thought it would cause too much code
duplication.

Then you're doing it wrong :-)

But even if there is, just refactor out the common code.

However, following your suggestion, if I add a generic "device" flag in
SaveStateEntry and implement a generic qemu_save_device_state in
savevm.c, I believe that the code duplication would be small.
And patch #1 could go away.

Yup.
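
A sketch of that save side, assuming an is_ram flag in SaveStateEntry set
at registration time; it reuses the full-section framing that
qemu_savevm_state_complete() already emits:

    int qemu_save_device_state(QEMUFile *f)
    {
        SaveStateEntry *se;

        qemu_put_be32(f, QEMU_VM_FILE_MAGIC);
        qemu_put_be32(f, QEMU_VM_FILE_VERSION);

        cpu_synchronize_all_states();

        QTAILQ_FOREACH(se, &savevm_handlers, entry) {
            int len;

            /* skip anything marked as RAM: Xen saves that itself */
            if (se->is_ram) {
                continue;
            }
            if (se->save_state == NULL && se->vmsd == NULL) {
                continue;
            }

            /* full section header: type, id, name, instance, version */
            qemu_put_byte(f, QEMU_VM_SECTION_FULL);
            qemu_put_be32(f, se->section_id);
            len = strlen(se->idstr);
            qemu_put_byte(f, len);
            qemu_put_buffer(f, (uint8_t *)se->idstr, len);
            qemu_put_be32(f, se->instance_id);
            qemu_put_be32(f, se->version_id);

            vmstate_save(f, se);
        }

        qemu_put_byte(f, QEMU_VM_EOF);
        return 0;
    }

A Xen-specific monitor command could then call this instead of the full
savevm path, leaving RAM entirely to the toolstack.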



However, the issue of patch #4, "do not reset videoram on resume", still
remains: no matter what parameter I pass to Qemu, if qemu_system_reset
is called on resume the videoram is going to be overwritten with 0xff.

The memset(0xff) looks dubious to me. My guess is that this could be moved to vgabios-cirrus, which would solve your problem.

In this regard, don't you think it would be advantageous to Qemu in
general not to reset the videoram on resume? It can be pretty large, so
the memset is a significant waste of cycles.

It claims to fix a real bug. Moving the memset to vgabios would do what you want in a more robust way, I think.
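
For context, the memset under discussion is the one in cirrus_reset() in
hw/cirrus_vga.c, roughly:

    static void cirrus_reset(void *opaque)
    {
        CirrusVGAState *s = opaque;

        /* ... register state reset elided ... */

        /* the line at issue: every system reset, including the one
         * that precedes loading device state on the restore path,
         * fills the whole video RAM with 0xff */
        memset(s->vga.vram_ptr, 0xff, s->real_vram_size);
    }

If the clear moved into the VGA BIOS POST path instead, the guest-visible
behaviour would be preserved while leaving restored vram contents alone.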

Regards,

Anthony Liguori


