Re: [RFC PATCH v2 7/9] migration/multifd: Isolate ram pages packet data


From: Maciej S. Szmigiero
Subject: Re: [RFC PATCH v2 7/9] migration/multifd: Isolate ram pages packet data
Date: Wed, 24 Jul 2024 11:30:16 +0200
User-agent: Mozilla Thunderbird

On 22.07.2024 23:06, Peter Xu wrote:
> On Mon, Jul 22, 2024 at 05:34:44PM -0300, Fabiano Rosas wrote:
>> Peter Xu <peterx@redhat.com> writes:
>>
>>> On Mon, Jul 22, 2024 at 02:59:12PM -0300, Fabiano Rosas wrote:
>>>> While we cannot yet disentangle the multifd packet from page data, we
>>>> can make the code a bit cleaner by setting the page-related fields in
>>>> a separate function.
>>>>
>>>> Signed-off-by: Fabiano Rosas <farosas@suse.de>
>>>> ---
>>>>  migration/multifd.c | 97 ++++++++++++++++++++++++++++-----------------
>>>>  1 file changed, 61 insertions(+), 36 deletions(-)
>>>>
>>>> diff --git a/migration/multifd.c b/migration/multifd.c
>>>> index fcdb12e04f..d25b8658b2 100644
>>>> --- a/migration/multifd.c
>>>> +++ b/migration/multifd.c
>>>> @@ -409,65 +409,62 @@ static int multifd_recv_initial_packet(QIOChannel *c, Error **errp)
>>>>      return msg.id;
>>>>  }
>>>> -void multifd_send_fill_packet(MultiFDSendParams *p)
>>>> +static void multifd_ram_fill_packet(MultiFDSendParams *p)
>>>>  {
>>>>      MultiFDPacket_t *packet = p->packet;
>>>>      MultiFDPages_t *pages = &p->data->u.ram;
>>>> -    uint64_t packet_num;
>>>>      uint32_t zero_num = pages->num - pages->normal_num;
>>>> -    int i;
>>>>
>>>> -    packet->flags = cpu_to_be32(p->flags);
>>>>      packet->pages_alloc = cpu_to_be32(pages->allocated);
>>>>      packet->normal_pages = cpu_to_be32(pages->normal_num);
>>>>      packet->zero_pages = cpu_to_be32(zero_num);
>>>> -    packet->next_packet_size = cpu_to_be32(p->next_packet_size);

>>> Definitely good intention, but I had a feeling that this will need to be
>>> reorganized again when Maciej reworks on top, due to the fact that
>>> next_packet_size will be a ram-private field, simply because it's defined
>>> after pages_alloc and normal_pages...
>>>
>>> E.g., see:
>>>
>>> https://lore.kernel.org/r/41dedaf2c9abebb5e45f88c052daa26320715a92.1718717584.git.maciej.szmigiero@oracle.com
>>>
>>> Where the new MultiFDPacketHdr_t cannot include next_packet_size (even if
>>> VFIO will need that too..).
>>
>> Isn't it just a matter of setting next_packet_size in the other path as
>> well?

> @Maciej, can you comment?


>>> typedef struct {
>>>     uint32_t magic;
>>>     uint32_t version;
>>>     uint32_t flags;
>>> } __attribute__((packed)) MultiFDPacketHdr_t;
>>>
>>> So _maybe_ it's easier we drop this patch and leave that part to Maciej to
>>> identify which is common and which is ram/vfio specific.  No strong
>>> opinions here.
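For illustration only, a rough sketch of the kind of split being discussed might look as follows; the MultiFDPacketRAM_t name and the exact field set are hypothetical rather than taken from either series, with next_packet_size kept next to the page fields after the common header above:

typedef struct {
    MultiFDPacketHdr_t hdr;

    uint32_t pages_alloc;
    uint32_t normal_pages;
    /* ram-private in this sketch; a device state packet would carry its own */
    uint32_t next_packet_size;
    uint64_t packet_num;
    uint32_t zero_pages;

    char ramblock[256];
    uint64_t offset[];
} __attribute__((packed)) MultiFDPacketRAM_t;

The open question is only whether next_packet_size belongs in the common header or in each payload-specific part; the sketch assumes the latter.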


>> I could drop it if that's preferable. However, patch 8/9 is absolutely
>> necessary so we can remove this madness of having to clear
>> MultiFDPages_t fields at the multifd_send_thread() top level. It took me
>> a whole day to figure that one out and that bug has been manifesting
>> ever since I started working on multifd.
>>
>> I'm not sure how we'll do that without this patch. Maybe it's better if I
>> fix in this one whatever you guys think needs fixing.

> You can set next_packet_size only in the ram path, or you can keep this patch
> as is and let Maciej rework it on top.
>
> I'm fine either way.
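Concretely, the first option would amount to something like the sketch below, derived from the hunk quoted above rather than from actual code in either series: the generic multifd_send_fill_packet() stops touching next_packet_size and the ram helper fills it in itself.

static void multifd_ram_fill_packet(MultiFDSendParams *p)
{
    MultiFDPacket_t *packet = p->packet;
    MultiFDPages_t *pages = &p->data->u.ram;
    uint32_t zero_num = pages->num - pages->normal_num;

    packet->pages_alloc = cpu_to_be32(pages->allocated);
    packet->normal_pages = cpu_to_be32(pages->normal_num);
    packet->zero_pages = cpu_to_be32(zero_num);
    /* ram-specific in this sketch: the generic fill no longer sets it */
    packet->next_packet_size = cpu_to_be32(p->next_packet_size);
}

A device state path would then set its own size field instead of relying on the common code.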


I need to split out the packet header parsing ("unfilling") anyway from the
RAM / device state parsing (to detect the packet type before reading the
rest of it into the appropriate data structure), so either way is fine with
me too.
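As a rough sketch of that two-stage parse (the function name and error messages below are made up for illustration), the receive side would read and validate only the fixed-size header first, and only then read the ram- or device-state-specific remainder into the matching structure:

static int multifd_recv_unfill_packet_hdr(MultiFDRecvParams *p,
                                          const MultiFDPacketHdr_t *hdr,
                                          Error **errp)
{
    /* Validate only the common part; the payload is read afterwards. */
    if (be32_to_cpu(hdr->magic) != MULTIFD_MAGIC) {
        error_setg(errp, "multifd: packet magic %x, expected %x",
                   be32_to_cpu(hdr->magic), MULTIFD_MAGIC);
        return -1;
    }

    if (be32_to_cpu(hdr->version) != MULTIFD_VERSION) {
        error_setg(errp, "multifd: packet version %u, expected %u",
                   be32_to_cpu(hdr->version), MULTIFD_VERSION);
        return -1;
    }

    p->flags = be32_to_cpu(hdr->flags);

    /* The caller then reads the ram- or device-state-specific remainder
     * into the matching structure, depending on the packet type. */
    return 0;
}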

I think if it's easier for Fabiano to work on top of this patch, then he
should just keep it.

Thanks,
Maciej



