Re: [Qemu-devel] [PATCH 07/16] migration: Create x-multifd-group parameter


From: Daniel P. Berrange
Subject: Re: [Qemu-devel] [PATCH 07/16] migration: Create x-multifd-group parameter
Date: Mon, 13 Mar 2017 17:12:14 +0000
User-agent: Mutt/1.7.1 (2016-10-04)

On Mon, Mar 13, 2017 at 05:49:59PM +0100, Juan Quintela wrote:
> "Daniel P. Berrange" <address@hidden> wrote:
> > On Mon, Mar 13, 2017 at 01:44:25PM +0100, Juan Quintela wrote:
> >> Indicates how many pages we are going to send in each batch to a multifd
> >> thread.
> >
> >
> >> diff --git a/qapi-schema.json b/qapi-schema.json
> >> index b7cb26d..33a6267 100644
> >> --- a/qapi-schema.json
> >> +++ b/qapi-schema.json
> >> @@ -988,6 +988,9 @@
> >>  # @x-multifd-threads: Number of threads used to migrate data in parallel
> >>  #                     The default value is 2 (since 2.9)
> >>  #
> >> +# @x-multifd-group: Number of pages sent together to a thread
> >> +#                     The default value is 16 (since 2.9)
> >> +#
> >>  # Since: 2.4
> >>  ##
> >>  { 'enum': 'MigrationParameter',
> >> @@ -995,7 +998,7 @@
> >>             'cpu-throttle-initial', 'cpu-throttle-increment',
> >>             'tls-creds', 'tls-hostname', 'max-bandwidth',
> >>             'downtime-limit', 'x-checkpoint-delay',
> >> -           'x-multifd-threads'] }
> >> +           'x-multifd-threads', 'x-multifd-group'] }
> >>  
> >>  ##
> >>  # @migrate-set-parameters:
> >> @@ -1062,6 +1065,9 @@
> >>  # @x-multifd-threads: Number of threads used to migrate data in parallel
> >>  #                     The default value is 2 (since 2.9)
> >>  #
> >> +# @x-multifd-group: Number of pages sent together in a batch
> >> +#                     The default value is 16 (since 2.9)
> >> +#
> >
> > How is this parameter supposed to be used ? Or to put it another way,
> > what are the benefits / effects of changing it from its default
> > value and can an application usefully decide what value to set ? I'm
> > loath to see us expose another "black magic" parameter where you can't
> > easily determine what values to set without predicting future guest
> > workloads.
> 
> We have multiple threads, and we can hand each thread the pages it
> needs to send one by one, two by two, or n by n.  The bigger the
> number, the less locking to do, and therefore less contention.  But if
> it is too big, we could end up with too little distribution of work
> across the threads.  The reason to add this parameter is that if we
> send page by page, we end up spending too much time on locking.

The question is how is an application like OpenStack / oVirt supposed to
know what the right number of pages is to get the right tradeoff between
lock contention & distribution ? Lock contention may well change over
time as the QEMU impl is improved, so the right answer for setting this
parameter might vary depending on QEMU version.  IMHO, you should just
pick a sensible default value and not expose this to applications.

Regards,
Daniel
-- 
|: http://berrange.com      -o-    http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org              -o-             http://virt-manager.org :|
|: http://entangle-photo.org       -o-    http://search.cpan.org/~danberr/ :|


