Re: [PATCH 0/4] Multiple interface support on top of Multi-FD


From: manish.mishra
Subject: Re: [PATCH 0/4] Multiple interface support on top of Multi-FD
Date: Fri, 10 Jun 2022 17:58:31 +0530
User-agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:91.0) Gecko/20100101 Thunderbird/91.10.0


On 09/06/22 9:17 pm, Daniel P. Berrangé wrote:
On Thu, Jun 09, 2022 at 07:33:01AM +0000, Het Gala wrote:
As of now, the multi-FD feature supports connections over the default network
only. This patchset is a QEMU-side implementation providing multiple-interface
support for multi-FD. This enables us to fully utilize dedicated or multiple
NICs in cases where bonding of NICs is not possible.


Introduction
-------------
The multi-FD QEMU implementation currently supports connections only on the
default network. This deprives us of advantages like:
- Separating VM live migration traffic from the default network.

Hi Daniel,

I totally understand your concern about this approach increasing complexity
inside QEMU when similar things can be done with NIC teaming. But we think
this approach provides much more flexibility to the user in a few cases, such
as the following.

1. We checked our customer data: almost all of the hosts had multiple NICs,
   but LACP support in their setups was very rare. So for those cases this
   approach can help utilise multiple NICs, as teaming is not possible there.

2. We have seen requests recently to separate out the traffic of storage, VM
   network and migration over different vswitches, each backed by one or more
   NICs, as this gives better predictability and assurance. So hosts with
   multiple IPs/vswitches can be a very common environment. In this kind of
   environment this approach gives per-VM or per-migration flexibility: for a
   critical VM we can still use bandwidth from every available
   vswitch/interface, while a normal VM can keep live migration only on
   dedicated NICs, without changing the complete host network topology.

   Eventually we want it to be something like [<ip-pair>, <multiFD-channels>,
   <bandwidth_control>], to provide bandwidth control per interface.

3. We mentioned a dedicated NIC as a use case; agreed with you that it can be
   done without this approach too.

Perhaps I'm misunderstanding your intent here, but AFAIK it
has been possible to separate VM migration traffic from general
host network traffic essentially forever.

If you have two NICs with IP addresses on different subnets,
then the kernel will pick which NIC to use automatically
based on the IP address of the target matching the kernel
routing table entries.

Management apps have long used this ability in order to
control which NIC migration traffic flows over.
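
For example (a minimal sketch, using a hypothetical destination address
192.0.2.10 that sits on the dedicated migration subnet), pointing the
existing 'migrate' command at the dedicated NIC's address is enough today:

{ "execute": "migrate",
  "arguments": { "uri": "tcp:192.0.2.10:4446" } }

The kernel routing table matches 192.0.2.10 to the interface on that
subnet, so the migration stream stays off the default network without any
QEMU-side changes.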

- Fully utilize all NICs’ capacity in cases where creating a LACP bond (Link
   Aggregation Control Protocol) is not supported.
Can you elaborate on scenarios in which it is impossible to use LACP
bonding at the kernel level?

Yes, as mentioned above, LACP support was rare in customer setups.

Multi-interface with Multi-FD
-----------------------------
Multiple-interface support over basic multi-FD has been implemented in the
patches. Advantages of this implementation are:
- Able to separate live migration traffic from the default network interface
   by creating multiFD channels on the IP addresses of multiple non-default
   interfaces.
- Can optimize the number of multi-FD channels on a particular interface
   depending upon the network bandwidth limit of that interface.
Manually assigning individual channels to different NICs is a pretty
inefficient way to optimize traffic. It feels like you could easily get
into a situation where one NIC ends up idle while the other is busy,
especially if the traffic patterns are different. For example, with
post-copy there's an extra channel for OOB async page requests, and
it's far from clear that manually picking NICs per channel upfront is
going to work for that. The kernel can continually and dynamically balance
load on the fly and so do much better than any static mapping QEMU
tries to apply, especially if there are multiple distinct QEMUs
competing for bandwidth.

Yes, Daniel, the current solution is only for pre-copy. multiFD is not yet
supported with postcopy, but in the future we can extend it for postcopy
channels too.

Implementation
--------------

Earlier the 'migrate' qmp command:
{ "execute": "migrate", "arguments": { "uri": "tcp:0:4446" } }

Modified qmp command:
{ "execute": "migrate",
              "arguments": { "uri": "tcp:0:4446", "multi-fd-uri-list": [ {
              "source-uri": "tcp::6900", "destination-uri": "tcp:0:4480",
              "multifd-channels": 4}, { "source-uri": "tcp:10.0.0.0: ",
              "destination-uri": "tcp:11.0.0.0:7789",
              "multifd-channels": 5} ] } }
------------------------------------------------------------------------------

Earlier the 'migrate-incoming' qmp command:
{ "execute": "migrate-incoming", "arguments": { "uri": "tcp::4446" } }

Modified 'migrate-incoming' qmp command:
{ "execute": "migrate-incoming",
             "arguments": {"uri": "tcp::6789",
             "multi-fd-uri-list" : [ {"destination-uri" : "tcp::6900",
             "multifd-channels": 4}, {"destination-uri" : "tcp:11.0.0.0:7789",
             "multifd-channels": 5} ] } }
------------------------------------------------------------------------------
These examples pretty nicely illustrate my concern with this
proposal. It makes QEMU's configuration of migration massively
more complicated, while duplicating functionality the kernel can
provide via NIC teaming, but without the ability to rebalance it
on the fly as the kernel would.

Yes, agreed Daniel, this raises complexity, but we will make sure that it does
not change/impact anything existing, and we will provide the new options as
optional. A few of these things may not be taken care of currently, as this was
posted to get some early feedback, but we will definitely address that.


With regards,
Daniel


