Re: [Qemu-devel] [PATCH 00/16] Multifd v4
From: Dr. David Alan Gilbert
Subject: Re: [Qemu-devel] [PATCH 00/16] Multifd v4
Date: Tue, 14 Mar 2017 12:22:23 +0000
User-agent: Mutt/1.7.1 (2016-10-04)
* Daniel P. Berrange (address@hidden) wrote:
> On Tue, Mar 14, 2017 at 10:21:43AM +0000, Dr. David Alan Gilbert wrote:
> > * Juan Quintela (address@hidden) wrote:
> > > Hi
> > >
> > > This is the 4th version of multifd. Changes:
> > > - XBZRLE doesn't need to be checked for
> > > - Documentation and defaults are consistent
> > > - split socketArgs
> > > - use iovec instead of creating something similar.
> > > - We now use the exported target page size (another HACK removal)
> > > - created qio_channel_{writev,readv}_all functions; the _full() name
> > >   was already taken.
> > >   They do the same as the plain functions without _all(), but if a
> > >   call returns because it would block, they redo it.
> > > - it is checkpatch.pl clean now.
> > >
> > > Please comment, Juan.
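As a rough illustration of that _all() behaviour (a sketch only, not the
series' code - qio_channel_writev(), QIO_CHANNEL_ERR_BLOCK and
qio_channel_wait() are QEMU's existing QIOChannel API, but this loop is just
what the changelog describes):

    #include "io/channel.h"

    /* Sketch: keep calling the plain writev until the whole iovec
     * has been sent, redoing the call whenever it would block. */
    static int writev_all_sketch(QIOChannel *ioc, struct iovec *iov,
                                 size_t niov, Error **errp)
    {
        while (niov > 0) {
            ssize_t len = qio_channel_writev(ioc, iov, niov, errp);

            if (len == QIO_CHANNEL_ERR_BLOCK) {
                qio_channel_wait(ioc, G_IO_OUT); /* wait, then redo */
                continue;
            }
            if (len < 0) {
                return -1;
            }
            /* Skip the bytes already written, retry the remainder. */
            while (len > 0) {
                if ((size_t)len >= iov->iov_len) {
                    len -= iov->iov_len;
                    iov++;
                    niov--;
                } else {
                    iov->iov_base = (char *)iov->iov_base + len;
                    iov->iov_len -= len;
                    len = 0;
                }
            }
        }
        return 0;
    }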
> >
> > High-level things:
> > a) I think you probably need to do some bandwidth measurements to show
> >    that multifd is managing to have some benefit - it would be good
> >    for the cover letter.
>
> Presumably this would be a building block to solving the latency problems
> with post-copy, by reserving one channel for transferring the out-of-band
> pages required by target host page faults.
Right, it's on my list to look at; there are some interesting questions about
how the main fd carrying the headers interacts, and also about what happens
to the pages immediately after the requested page. For example, let's say
we're currently streaming at address 'S' and a postcopy request (P) comes in;
what we currently have on one FD is:
  S,S+1....S+n,P,P+1,P+2,P+n
Note that when a request comes in we flip location, so we start sending
background pages from P+1 on the assumption that they'll be wanted soon.
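In code terms the flip is just resetting the background cursor when a request
arrives; a minimal sketch, where MigState, queue_page_urgent and
background_cursor are hypothetical names, not the actual migration code:

    /* Hypothetical names - a sketch of the flip, not migration code. */
    static void on_postcopy_request(MigState *s, unsigned long p)
    {
        queue_page_urgent(s, p);      /* P itself jumps the queue  */
        s->background_cursor = p + 1; /* background streaming now  */
                                      /* continues from P+1        */
    }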
With 3 FDs this would initially go out as:
S S+3 P+1 P+4
S+1 S+4 P+2 ..
S+2 P P+3 ..
Now if we had a spare FD for postcopy we'd do:
S S+3 P+1 P+4
S+1 S+4 P+2 ..
S+2 S+5 P+3 ..
- P - -
So 'P' got there quickly - but P+1 is stuck behind the S's; is that what
we want?
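That queueing falls out of the dispatch rule; a minimal sketch, assuming
hypothetical names (MultiFDState, nchannels and next are illustrative only):

    /* Sketch of the reserved-channel dispatch above - hypothetical
     * names.  Requested pages go on the spare FD; background pages
     * round-robin across the rest, which is why P+1..P+3 end up
     * queued behind the S's. */
    static int pick_channel(MultiFDState *s, bool is_request)
    {
        if (is_request) {
            return s->nchannels - 1;  /* the reserved spare FD */
        }
        s->next = (s->next + 1) % (s->nchannels - 1);
        return s->next;               /* round-robin the rest  */
    }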
An interesting alternative would be to switch which fd we keep free:
S S+3 - - -
S+1 S+4 P+2 P+4
S+2 S+5 P+3 P+5
- P P+1 P+6
So depending on your buffering, P+1 might also now be pretty fast; but that's
starting to get into heuristics about guessing how much you should put on
your previously low-queued fd.
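One way to phrase that heuristic: send the urgent pages to whichever channel
currently has the least queued data, and let the channel you keep free rotate
over time. A sketch with made-up names (a per-channel 'queued' backlog
counter is an assumption):

    /* Sketch of the alternative - hypothetical names.  Pick the
     * channel with the smallest backlog for P and its followers,
     * so they drain fastest; which fd stays 'free' then shifts
     * over time rather than being fixed. */
    static int pick_low_queue_channel(MultiFDState *s)
    {
        int best = 0;
        for (int i = 1; i < s->nchannels; i++) {
            if (s->channels[i].queued < s->channels[best].queued) {
                best = i;
            }
        }
        return best;
    }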
Dave
> Regards,
> Daniel
> --
> |: http://berrange.com -o- http://www.flickr.com/photos/dberrange/ :|
> |: http://libvirt.org -o- http://virt-manager.org :|
> |: http://entangle-photo.org -o- http://search.cpan.org/~danberr/ :|
--
Dr. David Alan Gilbert / address@hidden / Manchester, UK