Re: [Qemu-devel] [PATCH 11/17] migration: Create thread infrastructure for multifd send side


From: Dr. David Alan Gilbert
Subject: Re: [Qemu-devel] [PATCH 11/17] migration: Create thread infrastructure for multifd send side
Date: Tue, 14 Feb 2017 11:58:58 +0000
User-agent: Mutt/1.7.1 (2016-10-04)

* Juan Quintela (address@hidden) wrote:
> "Dr. David Alan Gilbert" <address@hidden> wrote:
> > * Juan Quintela (address@hidden) wrote:
> >> We make the locking and the transfer of information specific, even if we
> >> are still transmitting things through the main thread.
> >> 
> >> Signed-off-by: Juan Quintela <address@hidden>
> >> ---
> >>  migration/ram.c | 53 ++++++++++++++++++++++++++++++++++++++++++++++++++++-
> >>  1 file changed, 52 insertions(+), 1 deletion(-)
> >> 
> >> diff --git a/migration/ram.c b/migration/ram.c
> >> index c71929e..9d7bc64 100644
> >> --- a/migration/ram.c
> >> +++ b/migration/ram.c
> >> @@ -392,17 +392,25 @@ void migrate_compress_threads_create(void)
> >>  /* Multiple fd's */
> >> 
> >>  struct MultiFDSendParams {
> >> +    /* not changed */
> >>      QemuThread thread;
> >>      QIOChannel *c;
> >>      QemuCond cond;
> >>      QemuMutex mutex;
> >> +    /* protected by param mutex */
> >>      bool quit;
> >>      bool started;
> >> +    uint8_t *address;
> >> +    /* protected by multifd mutex */
> >> +    bool done;
> >>  };
> >>  typedef struct MultiFDSendParams MultiFDSendParams;
> >> 
> >>  static MultiFDSendParams *multifd_send;
> >> 
> >> +QemuMutex multifd_send_mutex;
> >> +QemuCond multifd_send_cond;
> >> +
> >>  static void *multifd_send_thread(void *opaque)
> >>  {
> >>      MultiFDSendParams *params = opaque;
> >> @@ -416,7 +424,17 @@ static void *multifd_send_thread(void *opaque)
> >> 
> >>      qemu_mutex_lock(&params->mutex);
> >>      while (!params->quit){
> >> -        qemu_cond_wait(&params->cond, &params->mutex);
> >> +        if (params->address) {
> >> +            params->address = 0;
> >
> > This confused me (I wondered what happens to the 1st block) but
> > I see in the next patch this gets replaced by something more complex;
> > so I suggest just using params->dummy and commented it's about
> > to get replaced.
> 
> If you prefer, but I wanted to minimize the change in the next patch;
> otherwise I also have to change the places where I check the value of
> address.
> 

OK, perhaps just adding a comment to say it's going to go in the
next patch would work.

> >> +    qemu_mutex_unlock(&multifd_send_mutex);
> >> +    qemu_mutex_lock(&multifd_send[i].mutex);
> >
> > Having a 'multifd_send_mutex' and a
> >          'multifd_send[i].mutex'
> > is pretty confusing!
> 
> For a different reason, I have moved all the
> 
>   multifd_send[i]. to "p->"
> 
> Better?

Maybe!

> >
> >> +    multifd_send[i].address = address;
> >> +    qemu_cond_signal(&multifd_send[i].cond);
> >> +    qemu_mutex_unlock(&multifd_send[i].mutex);
> >> +
> >> +    return 0;
> >> +}
> >> +
> >>  struct MultiFDRecvParams {
> >>      QemuThread thread;
> >>      QIOChannel *c;
> >> @@ -1015,6 +1065,7 @@ static int ram_multifd_page(QEMUFile *f, PageSearchStatus *pss,
> >>          *bytes_transferred +=
> >>              save_page_header(f, block, offset | RAM_SAVE_FLAG_MULTIFD_PAGE);
> >>          qemu_put_buffer(f, p, TARGET_PAGE_SIZE);
> >> +        multifd_send_page(p);
> >>          *bytes_transferred += TARGET_PAGE_SIZE;
> >>          pages = 1;
> >>          acct_info.norm_pages++;
> >> -- 
> >> 2.9.3
> >
> > I think I'm pretty OK with this; but we'll see what it looks like
> > after you think about Paolo's suggestion; it does feel like it should
> > be possible to do the locking etc simpler; I just don't know how.
> 
> Locking can be simpler, but the problem is speed :-(
> Paolo's suggestion has helped.
> Our lame measurement of bandwidth hasn't :-(

Are you sure that your performance problems are anything to do with locking?

Dave

> Later, Juan.
--
Dr. David Alan Gilbert / address@hidden / Manchester, UK


