qemu-devel
Re: [Qemu-devel] [PATCH 1/4] savevm: refactor qemu_loadvm_state().


From: Yoshiaki Tamura
Subject: Re: [Qemu-devel] [PATCH 1/4] savevm: refactor qemu_loadvm_state().
Date: Wed, 16 Jun 2010 16:46:52 +0900
User-agent: Mozilla/5.0 (Windows; U; Windows NT 5.1; ja; rv:1.9.1.9) Gecko/20100317 Thunderbird/3.0.4

Anthony Liguori wrote:
On 06/14/2010 09:01 PM, Yoshiaki Tamura wrote:
Anthony Liguori wrote:
On 06/03/2010 02:22 AM, Yoshiaki Tamura wrote:
Split qemu_loadvm_state(), and introduce
qemu_loadvm_state_{begin,iterate,complete,async}.
qemu_loadvm_state_async() is a function to handle a single incoming
section.

Signed-off-by: Yoshiaki Tamura <address@hidden>
---
savevm.c | 206 +++++++++++++++++++++++++++++++++++++++++++-------------------
sysemu.h |   2 +
2 files changed, 146 insertions(+), 62 deletions(-)

diff --git a/savevm.c b/savevm.c
index dc20390..aa4f98c 100644
--- a/savevm.c
+++ b/savevm.c
@@ -1005,6 +1005,8 @@ typedef struct SaveStateEntry {

static QTAILQ_HEAD(savevm_handlers, SaveStateEntry) savevm_handlers =
QTAILQ_HEAD_INITIALIZER(savevm_handlers);
+static QLIST_HEAD(, LoadStateEntry) loadvm_handlers =
+ QLIST_HEAD_INITIALIZER(loadvm_handlers);
static int global_section_id;

static int calculate_new_instance_id(const char *idstr)
@@ -1460,14 +1462,9 @@ typedef struct LoadStateEntry {
int version_id;
} LoadStateEntry;

-int qemu_loadvm_state(QEMUFile *f)
+int qemu_loadvm_state_begin(QEMUFile *f)
{
- QLIST_HEAD(, LoadStateEntry) loadvm_handlers =
- QLIST_HEAD_INITIALIZER(loadvm_handlers);
- LoadStateEntry *le, *new_le;
- uint8_t section_type;
unsigned int v;
- int ret;

v = qemu_get_be32(f);
if (v != QEMU_VM_FILE_MAGIC)
@@ -1481,73 +1478,157 @@ int qemu_loadvm_state(QEMUFile *f)
if (v != QEMU_VM_FILE_VERSION)
return -ENOTSUP;

- while ((section_type = qemu_get_byte(f)) != QEMU_VM_EOF) {
- uint32_t instance_id, version_id, section_id;
- SaveStateEntry *se;
- char idstr[257];
- int len;
+ return 0;
+}

- switch (section_type) {
- case QEMU_VM_SECTION_START:
- case QEMU_VM_SECTION_FULL:
- /* Read section start */
- section_id = qemu_get_be32(f);
- len = qemu_get_byte(f);
- qemu_get_buffer(f, (uint8_t *)idstr, len);
- idstr[len] = 0;
- instance_id = qemu_get_be32(f);
- version_id = qemu_get_be32(f);
-
- /* Find savevm section */
- se = find_se(idstr, instance_id);
- if (se == NULL) {
- fprintf(stderr, "Unknown savevm section or instance '%s' %d\n",
- idstr, instance_id);
- ret = -EINVAL;
- goto out;
- }
+static int qemu_loadvm_state_iterate(QEMUFile *f)
+{
+ LoadStateEntry *le;
+ uint32_t section_id;
+ int ret;

- /* Validate version */
- if (version_id > se->version_id) {
- fprintf(stderr, "savevm: unsupported version %d for '%s' v%d\n",
- version_id, idstr, se->version_id);
- ret = -EINVAL;
- goto out;
- }
+ section_id = qemu_get_be32(f);

But we're still blocking in qemu_get_be32() so I don't see how this
makes it async.

In that sense, it's not completely async, but it's still better than the original
while loop, which doesn't accept any commands on the incoming side.

What's confusing me is that I don't understand how it's accepting commands on
the incoming side.

By my reading of the code, you set a callback on read, and in the read
callback you invoke iterate(). When iterate() completes, you fall back to
the main loop. This sort of works, but only because falling back to the main
loop after each iteration lets all handlers run (including the monitor's
handlers).

But there are some serious problems with this approach. When iterate()
completes, you've not necessarily exhausted the QEMUFile's buffer. If
the buffer is holding the full contents of the final stage of migration,
then there won't be any more data available to read from the socket, which
means that when you drop to the main loop, the async path will never run
again. You could probably reproduce this problem easily by making the
QEMUFile buffer very large. I think you're getting lucky because the final
stage is probably larger than 32k in most circumstances.

At the very least, you should use a bottom half instead of
qemu_set_fd_handler2. It's still not async, and I'm not sure whether it's
a good enough solution.

Thank you for pointing out the problem.
So instead of waiting for data with select(), which is event driven, a bottom half would at least give us a chance to check the QEMUFile's buffer at a regular interval. This should fix the problem you mentioned above. However, if the network connection is lost, as you can imagine, we'll still be blocked in fill_buffer(), I guess.

What I eventually need is a mechanism to prevent blocking during migration, so using a bottom half instead of qemu_set_fd_handler2 isn't a perfect solution for me. Introducing a header into the savevm format that describes the data to be received, and selecting until the incoming side has received the expected amount, could be a solution. But changing the savevm format just for this doesn't seem good.

I believe some other people need an async incoming migration framework too. (Juan?)
What I would like to ask is: should I keep working to improve this approach?
If this approach isn't good enough to make other people happy, I would like to drop this work for now and discuss a better solution in the future.

Thanks,

Yoshi


Regards,

Anthony Liguori

To make things completely async, I think we should solve the locking
issue and add a thread for incoming migration, but I don't think
that's your opinion.

Did I misunderstand your comment on the previous approach?

Thanks,

Yoshi


Regards,

Anthony Liguori
