Re: [PATCH 0/4] Multiple interface support on top of Multi-FD


From: Daniel P. Berrangé
Subject: Re: [PATCH 0/4] Multiple interface support on top of Multi-FD
Date: Thu, 16 Jun 2022 09:27:01 +0100
User-agent: Mutt/2.2.1 (2022-02-19)

On Wed, Jun 15, 2022 at 05:43:28PM +0100, Daniel P. Berrangé wrote:
> On Fri, Jun 10, 2022 at 05:58:31PM +0530, manish.mishra wrote:
> > 
> > On 09/06/22 9:17 pm, Daniel P. Berrangé wrote:
> > > On Thu, Jun 09, 2022 at 07:33:01AM +0000, Het Gala wrote:
> > > > As of now, the multi-FD feature supports connection over the default
> > > > network only. This patchset series is a QEMU-side implementation of
> > > > multiple-interface support for multi-FD. This enables us to fully
> > > > utilize dedicated or multiple NICs in case bonding of NICs is not
> > > > possible.
> > > > 
> > > > 
> > > > Introduction
> > > > ------------
> > > > The current multi-FD QEMU implementation supports connection only on
> > > > the default network. This forbids us from advantages like:
> > > > - Separating VM live migration traffic from the default network.
> > 
> > Hi Daniel,
> > 
> > I totally understand your concern that this approach increases
> > complexity inside QEMU, when similar things can be done with NIC
> > teaming. But we thought this approach provides much more flexibility
> > to the user in a few cases, like:
> > 
> > 1. We checked our customer data: almost all of the hosts had multiple
> >    NICs, but LACP support in their setups was very rare. So for those
> >    cases this approach can help utilise multiple NICs, as teaming is
> >    not possible there.
> 
> AFAIK,  LACP is not required in order to do link aggregation with Linux.
> Traditional Linux bonding has no special NIC hardware or switch requirements,
> so LACP is merely a "nice to have" in order to simplify some aspects.
> 
> IOW, migration with traffic spread across multiple NICs is already
> possible AFAICT.
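
For illustration, traditional bonding of this sort needs only iproute2
and no LACP-capable switch. A minimal sketch, assuming root privileges,
with eth0/eth1 and the address as placeholders:

  # Sketch: aggregate two NICs with a traditional Linux bond (no LACP).
  import subprocess

  def run(cmd):
      subprocess.run(cmd.split(), check=True)

  run("ip link add bond0 type bond mode balance-rr")  # needs no switch support
  for nic in ("eth0", "eth1"):
      run(f"ip link set {nic} down")            # slaves must be down to enslave
      run(f"ip link set {nic} master bond0")
  run("ip link set bond0 up")
  run("ip addr add 192.0.2.10/24 dev bond0")    # placeholder address (RFC 5737)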
> 
> I can understand that some people may not have actually configured
> bonding on their hosts, but it is not unreasonable to request that
> they do so, if they want to take advantage of aggregated bandwidth.
> 
> It has the further benefit that it will be fault tolerant. With
> this proposal, if any single NIC has a problem, the whole migration
> will get stuck. With kernel-level bonding, if any single NIC has
> a problem, it'll get offlined by the kernel and migration will
> continue to work across the remaining active NICs.
> 
> > 2. We have seen requests recently to separate out traffic for storage,
> >    VM network, and migration over different vswitches, which can be
> >    backed by 1 or more NICs, as this gives better predictability and
> >    assurance. So hosts with multiple IPs/vswitches can be a very common
> >    environment. In this kind of environment this approach gives per-VM
> >    or per-migration flexibility: for a critical VM we can still use
> >    bandwidth from all available vswitches/interfaces, but for a normal
> >    VM we can keep live migration only on dedicated NICs, without
> >    changing the complete host network topology.
> > 
> >    Finally, we want it to be something like [<ip-pair>,
> >    <multiFD-channels>, <bandwidth_control>], to provide
> >    bandwidth_control per interface.
> 
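As a hypothetical sketch only (none of these field names come from the
actual patchset), the per-interface tuple described above might look like:

  # Hypothetical per-interface migration parameters; field names are
  # invented here for illustration, not taken from the patchset.
  multifd_interfaces = [
      # source IP, destination IP, multi-FD channels, bandwidth cap
      {"src": "10.0.0.1", "dst": "10.0.0.2", "channels": 4, "max_mbps": 800},
      {"src": "10.0.1.1", "dst": "10.0.1.2", "channels": 2, "max_mbps": 400},
  ]
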
> Again, it is already possible to separate migration traffic from storage
> traffic, and from other network traffic. The target IP given will influence
> which NIC is used based on the routing table, and I know this is already
> done widely with OpenStack deployments.

Actually, I should clarify: this is only practical if the two NICs are
using different IP subnets, otherwise routing rules are not viable.
So the ability to set the source IP would be needed to select between
a pair of NICs on the same IP subnet.

Previous usage I've seen has always set up fully distinct IP subnets
for generic vs storage vs migration network traffic.
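
To make the source-IP selection concrete: binding a socket to a local
address before connect() is the usual way to pick one of two NICs that
share a subnet. A minimal sketch with placeholder addresses and port:

  # Sketch: choose the outgoing NIC by binding the source IP first.
  import socket

  def connect_via(src_ip, dst_ip, dst_port):
      s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
      s.bind((src_ip, 0))           # 0 = ephemeral port; src_ip picks the NIC
      s.connect((dst_ip, dst_port))
      return s

  # Two NICs on the same subnet: only the bind() distinguishes them.
  # (On Linux, policy routing via "ip rule add from <src> ..." may also
  # be needed so the egress interface actually matches the source IP.)
  chan_a = connect_via("192.0.2.10", "192.0.2.100", 4444)
  chan_b = connect_via("192.0.2.11", "192.0.2.100", 4444)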

With regards,
Daniel
-- 
|: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org         -o-            https://fstop138.berrange.com :|
|: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|



