From: Orit Wasserman
Subject: Re: [Qemu-devel] [RFC PATCH RDMA support v2: 5/6] connection-setup code between client/server
Date: Tue, 19 Feb 2013 18:39:26 +0200
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:17.0) Gecko/20130110 Thunderbird/17.0.2

On 02/19/2013 05:41 PM, Michael R. Hines wrote:
> On 02/18/2013 03:24 AM, Orit Wasserman wrote:
>> Hi Michael, The guest device state is quite small (~100K, probably less),
>> especially when compared to the guest memory, and we are already pinning
>> the guest memory for RDMA anyway. I was actually wondering about the
>> memory pinning: do we pin all guest memory pages as migration starts, or
>> on demand?
> 
> The patch supports both methods. There's a function called 
> "rdma_server_prepare()" which optionally pins all the memory in advance.
> 
> We prefer on-demand pinning, of course, because we want to preserve the 
> ability to do ballooning and the occasional madvise() calls.
> 
> The patch defaults to pinning everything right now for performance 
> evaluation... later I'll make sure to switch that off once we've converged 
> on a solution for the actual RDMA transfer itself.
Do you have any results on the performance cost of on-demand pinning?
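
For reference, the "pinning" discussed here amounts to a verbs memory
registration. A minimal sketch of the pin-everything-in-advance case could
look like the following; the context struct and helper name are made up for
illustration (this is not the patch's rdma_server_prepare()), and only the
ibv_reg_mr() call is real libibverbs API:

#include <infiniband/verbs.h>

/* Illustrative only; not the patch's actual code. */
struct rdma_pin_context {
    struct ibv_pd *pd;           /* protection domain from ibv_alloc_pd() */
    struct ibv_mr *guest_ram_mr; /* registration covering all guest RAM */
};

static int pin_all_guest_ram(struct rdma_pin_context *ctx,
                             void *ram_base, size_t ram_size)
{
    /* Registering the whole region pins every page up front and makes it
     * remotely accessible; ballooning and madvise(MADV_DONTNEED) no longer
     * help for this range until it is deregistered with ibv_dereg_mr(). */
    ctx->guest_ram_mr = ibv_reg_mr(ctx->pd, ram_base, ram_size,
                                   IBV_ACCESS_LOCAL_WRITE |
                                   IBV_ACCESS_REMOTE_WRITE |
                                   IBV_ACCESS_REMOTE_READ);
    return ctx->guest_ram_mr ? 0 : -1;
}
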
> 
>> For the guest memory pages, sending the pages directly without QemuFile 
>> (which does buffering) is better; I would suggest implementing a 
>> QemuRDMAFile for this. It will have a new API for the memory pages (zero 
>> copy), so instead of using qemu_put_buffer we will call qemu_rdma_buffer, 
>> or it can reimplement qemu_put_buffer (you need to add an offset). As for 
>> the device state, which is sent in the last phase and is small, you can 
>> modify the current implementation. (Well, Paolo sent patches that are 
>> changing this, but I think buffering is still an option.) The current 
>> migration implementation copies the device state into a buffer and then 
>> sends the data from the buffer (QemuBufferedFile). You only need to pin 
>> this buffer and RDMA it after all the device state has been written into 
>> it. Regards, Orit 
> I like it... can you help me understand: how different is this design from 
> the "QEMUFileOps" design that Paolo suggested?
> 
> Or is it basically the same? ...reimplementing qemu_put_buffer/get_buffer() 
> for RDMA purposes...
> 
Yes, it is basically the same :).
You will need a QEMUFileRDMA to store the RDMA context (look at 
QEMUFileSocket for an example) and other RDMA-specific parameters.
You will need the rdma_file_ops (QEMUFileOps) to implement sending guest 
pages directly and pinning the buffer that contains the device state.
You won't need to change qemu_put_buffer for the device state; instead, 
implement an rdma_put_buffer op that pins the buffer and sends it.
As for the guest memory pages, you have two options:
- update qemu_put_buffer/get_buffer, or
- add a new QEMUFileOps just for RDMA and call it instead of qemu_put_buffer 
  for guest pages.
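
For illustration, a rough (untested) sketch of that structure, modelled on
QEMUFileSocket, might look like the following. QEMUFileOps and
qemu_fopen_ops() are the existing QEMUFile hooks in the current tree; the
protection-domain field, the work-request posting and the completion
handling are only indicated in comments:

/* Sketch only: assumes the QEMUFile internals of the current tree
 * ("migration/qemu-file.h") plus libibverbs (<infiniband/verbs.h>). */

typedef struct QEMUFileRDMA {
    struct ibv_pd *pd;   /* protection domain of the RDMA connection */
    void *rdma;          /* rest of the RDMA context (cm_id, qp, cq, ...) */
    QEMUFile *file;
} QEMUFileRDMA;

static int rdma_put_buffer(void *opaque, const uint8_t *buf,
                           int64_t pos, int size)
{
    QEMUFileRDMA *r = opaque;
    struct ibv_mr *mr;

    /* Pin (register) the buffer that holds the device state. */
    mr = ibv_reg_mr(r->pd, (void *)buf, size, IBV_ACCESS_LOCAL_WRITE);
    if (!mr) {
        return -1;
    }

    /* Post a send work request for the registered buffer and wait for its
     * completion (ibv_post_send() + CQ polling, omitted here). */

    ibv_dereg_mr(mr);
    return size;
}

static const QEMUFileOps rdma_write_ops = {
    .put_buffer = rdma_put_buffer,
    /* .close, .get_fd, ... as needed */
};

static QEMUFile *qemu_fopen_rdma(QEMUFileRDMA *r)
{
    r->file = qemu_fopen_ops(r, &rdma_write_ops);
    return r->file;
}
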

Cheers,
Orit
> - Michael
> 



