On Thu, Jun 27, 2024 at 11:27:08AM +0800, Wang, Lei wrote:
Or graphically:
1) client fills the active slot with data. Channels point to nothing
at this point:
[a] <-- active slot
[][][][] <-- free slots, one per-channel
[][][][] <-- channels' p->data pointers
2) multifd_send() swaps the pointers inside the client slot. Channels
still point to nothing:
[]
[a][][][]
[][][][]
3) multifd_send() finds an idle channel and updates its pointer:
It seems the action "finds an idle channel" is in step 2 rather than step 3,
which means the free slot is selected based on the id of the channel found, am I
understanding correctly?
I think you're right.
Actually I also feel like the description here is ambiguous, even though I
think I get what Fabiano wanted to say.
Selecting the free slot should really be the first part of steps 2+3.
What Fabiano wanted to suggest here is that we move the free buffer array
from the multifd channels into the callers, so the caller can pass in
whatever data it wants to send.
So I think it may be cleaner to write it like this in code (note: I didn't
really change the code, just the ordering and comments):
===8<===
@@ -710,15 +710,11 @@ static bool multifd_send(MultiFDSlots *slots)
*/
active_slot = slots->active;
slots->active = slots->free[p->id];
- p->data = active_slot;
-
- /*
- * By the next time we arrive here, the channel will certainly
- * have consumed the active slot. Put it back on the free list
- * now.
- */
slots->free[p->id] = active_slot;
+ /* Assign the current active slot to the chosen thread */
+ p->data = active_slot;
===8<===
The comment I removed is slightly misleading to me too, because right now
active_slot contains data that hasn't yet been delivered to multifd. So
we're "putting it back on the free list" not because it's free, but because
we know it won't get used until the multifd send thread consumes it
(before that, the thread will be busy, and upcoming send()s won't pick
this buffer).
And then when I'm looking at this again, I think maybe it's a slight
overkill, and maybe we can still keep the "opaque data" managed by multifd.
One reason is that I don't expect the "opaque data" payload to keep
growing at all: it should really be either RAM or device state, as I
commented elsewhere in a relevant thread; after all, this is a threading
model only for migration purposes, to move vmstates..