
From: Gonglei (Arei)
Subject: RE: [PATCH-for-9.1 v2 2/3] migration: Remove RDMA protocol handling
Date: Tue, 7 May 2024 01:50:43 +0000

Hello,

> -----Original Message-----
> From: Peter Xu [mailto:peterx@redhat.com]
> Sent: Monday, May 6, 2024 11:18 PM
> To: Gonglei (Arei) <arei.gonglei@huawei.com>
> Cc: Daniel P. Berrangé <berrange@redhat.com>; Markus Armbruster
> <armbru@redhat.com>; Michael Galaxy <mgalaxy@akamai.com>; Yu Zhang
> <yu.zhang@ionos.com>; Zhijian Li (Fujitsu) <lizhijian@fujitsu.com>; Jinpu Wang
> <jinpu.wang@ionos.com>; Elmar Gerdes <elmar.gerdes@ionos.com>;
> qemu-devel@nongnu.org; Yuval Shaia <yuval.shaia.ml@gmail.com>; Kevin Wolf
> <kwolf@redhat.com>; Prasanna Kumar Kalever
> <prasanna.kalever@redhat.com>; Cornelia Huck <cohuck@redhat.com>;
> Michael Roth <michael.roth@amd.com>; Prasanna Kumar Kalever
> <prasanna4324@gmail.com>; integration@gluster.org; Paolo Bonzini
> <pbonzini@redhat.com>; qemu-block@nongnu.org; devel@lists.libvirt.org;
> Hanna Reitz <hreitz@redhat.com>; Michael S. Tsirkin <mst@redhat.com>;
> Thomas Huth <thuth@redhat.com>; Eric Blake <eblake@redhat.com>; Song
> Gao <gaosong@loongson.cn>; Marc-André Lureau
> <marcandre.lureau@redhat.com>; Alex Bennée <alex.bennee@linaro.org>;
> Wainer dos Santos Moschetta <wainersm@redhat.com>; Beraldo Leal
> <bleal@redhat.com>; Pannengyuan <pannengyuan@huawei.com>;
> Xiexiangyou <xiexiangyou@huawei.com>
> Subject: Re: [PATCH-for-9.1 v2 2/3] migration: Remove RDMA protocol handling
> 
> On Mon, May 06, 2024 at 02:06:28AM +0000, Gonglei (Arei) wrote:
> > Hi, Peter
> 
> Hey, Lei,
> 
> Happy to see you around again after years.
> 
Haha, me too.

> > RDMA offers high bandwidth, low latency (on a non-blocking, lossless
> > network), and direct remote memory access that bypasses the CPU (as you
> > know, CPU resources are expensive for cloud vendors, which is one of
> > the reasons we introduced offload cards), none of which TCP provides.
> 
> Isn't using offload cards just another cost, vs. preparing more CPU
> resources?
> 
A converged software-and-hardware offload architecture is the way to go for
all cloud vendors (considering the overall benefits in performance, cost,
security, and speed of innovation); it's not just a matter of adding the
resources of a DPU card.
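
To make the CPU-bypass point above concrete, here is a minimal sketch of a
one-sided RDMA write using libibverbs (setup of the device, protection
domain, queue pair, and memory registration is omitted; the function and
variable names are just illustrative, not taken from our patchset):

#include <infiniband/verbs.h>
#include <stdint.h>
#include <stdio.h>

/* Post a one-sided RDMA write: the remote CPU runs no code at all; the
 * remote NIC places the data directly into pre-registered memory. */
static int rdma_write_example(struct ibv_qp *qp, struct ibv_cq *cq,
                              struct ibv_mr *mr, void *local_buf,
                              uint32_t len, uint64_t remote_addr,
                              uint32_t rkey)
{
    struct ibv_sge sge = {
        .addr   = (uintptr_t)local_buf,
        .length = len,
        .lkey   = mr->lkey,
    };
    struct ibv_send_wr wr = {
        .opcode     = IBV_WR_RDMA_WRITE,  /* one-sided: no receiver involved */
        .sg_list    = &sge,
        .num_sge    = 1,
        .send_flags = IBV_SEND_SIGNALED,  /* request a completion entry */
        .wr.rdma.remote_addr = remote_addr,
        .wr.rdma.rkey        = rkey,
    };
    struct ibv_send_wr *bad_wr = NULL;

    if (ibv_post_send(qp, &wr, &bad_wr)) {
        perror("ibv_post_send");
        return -1;
    }

    /* Busy-poll the completion queue for brevity; a real migration
     * backend would use event-driven completion channels instead. */
    struct ibv_wc wc;
    int n;
    do {
        n = ibv_poll_cq(cq, 1, &wc);
    } while (n == 0);

    return (n < 0 || wc.status != IBV_WC_SUCCESS) ? -1 : 0;
}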

> > It is very useful in scenarios where fast live migration is needed
> > (extremely short interruption and migration durations). To this end,
> > we have also developed RDMA support for multifd.
> 
> Will any of you upstream that work?  I'm curious how intrusive it would be
> to add it to multifd; if it can keep to only 5 exported functions, like
> rdma.h does right now, it'll be pretty nice.  We also want to make sure it
> works with arbitrarily sized loads and buffers, e.g. VFIO is considering
> adding I/O loads to multifd channels too.
> 

In fact, we sent the patchset to the community in 2021. Please see:
https://lore.kernel.org/all/20210203185906.GT2950@work-vm/T/
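
If we rework it on top of the current multifd code, the exported surface
could stay about as small as you describe. Purely as a sketch (these names
are hypothetical, not from the 2021 patchset or from today's rdma.h):

/* Hypothetical multifd-RDMA interface, kept to a handful of entry points
 * in the spirit of today's rdma.h.  All names below are illustrative. */

#ifndef MULTIFD_RDMA_H
#define MULTIFD_RDMA_H

#include <stddef.h>

typedef struct MultiFDRDMAChannel MultiFDRDMAChannel;

/* Connect one multifd channel over RDMA (outgoing side). */
MultiFDRDMAChannel *multifd_rdma_connect(const char *host, int port,
                                         char **errp);

/* Accept one incoming multifd RDMA channel (destination side). */
MultiFDRDMAChannel *multifd_rdma_accept(int listen_fd, char **errp);

/* Register a buffer so the NIC can read/write it directly; must cope
 * with arbitrarily sized buffers (e.g. future VFIO I/O loads). */
int multifd_rdma_register(MultiFDRDMAChannel *c, void *buf, size_t len);

/* Send a registered buffer as a one-sided RDMA write. */
int multifd_rdma_write(MultiFDRDMAChannel *c, void *buf, size_t len);

/* Tear down the channel and release HCA resources. */
void multifd_rdma_close(MultiFDRDMAChannel *c);

#endif /* MULTIFD_RDMA_H */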


> One thing to note is that the question here is not just a pure performance
> comparison between RDMA and NICs.  It's about helping us make a decision
> on whether to drop RDMA; IOW, even if RDMA performs well, the community
> still has the right to drop it if nobody can actively work on and maintain
> it.  It's just that if NICs can perform as well, that's more of a reason
> to drop it, unless companies can help to provide good support and work
> together.
> 

We are happy to provide the necessary review and maintenance work for RDMA
if the community needs it.

CC'ing Chuan Zheng.


Regards,
-Gonglei

