From: Juan Quintela
Subject: Re: [Qemu-devel] [PATCH v5 11/17] migration: Really use multiple pages at a time
Date: Tue, 08 Aug 2017 14:11:37 +0200
User-agent: Gnus/5.13 (Gnus v5.13) Emacs/25.2 (gnu/linux)

"Dr. David Alan Gilbert" <address@hidden> wrote:
> * Juan Quintela (address@hidden) wrote:
>> We now send several pages at a time, each time we wake up a thread.
>> 
>> Signed-off-by: Juan Quintela <address@hidden>
>> 
>> --
>> 
>> Use iovecs instead of creating the equivalent.
>> ---
>>  migration/ram.c | 46 ++++++++++++++++++++++++++++++++++++++++------
>>  1 file changed, 40 insertions(+), 6 deletions(-)
>> 
>> diff --git a/migration/ram.c b/migration/ram.c
>> index 2bf3fa7..90e1bcb 100644
>> --- a/migration/ram.c
>> +++ b/migration/ram.c
>> @@ -362,6 +362,13 @@ static void compress_threads_save_setup(void)
>>  
>>  /* Multiple fd's */
>>  
>> +
>> +typedef struct {
>> +    int num;
>> +    int size;
>
> size_t ?

Done.
>
>> +    struct iovec *iov;
>> +} multifd_pages_t;
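
For reference, with the size_t change agreed above the struct would read
roughly like this (a sketch only, not the respun patch):

    typedef struct {
        int num;             /* pages currently queued in this batch */
        size_t size;         /* batch capacity, from migrate_multifd_group() */
        struct iovec *iov;   /* one iovec entry per queued page */
    } multifd_pages_t;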
>> +
>>  struct MultiFDSendParams {
>>      /* not changed */
>>      uint8_t id;
>> @@ -371,7 +378,7 @@ struct MultiFDSendParams {
>>      QemuMutex mutex;
>>      /* protected by param mutex */
>>      bool quit;
>> -    uint8_t *address;
>> +    multifd_pages_t pages;
>>      /* protected by multifd mutex */
>>      bool done;
>>  };
>> @@ -459,8 +466,8 @@ static void *multifd_send_thread(void *opaque)
>>              qemu_mutex_unlock(&p->mutex);
>>              break;
>>          }
>> -        if (p->address) {
>> -            p->address = 0;
>> +        if (p->pages.num) {
>> +            p->pages.num = 0;
>>              qemu_mutex_unlock(&p->mutex);
>>              qemu_mutex_lock(&multifd_send_state->mutex);
>>              p->done = true;
>> @@ -475,6 +482,13 @@ static void *multifd_send_thread(void *opaque)
>>      return NULL;
>>  }
>>  
>> +static void multifd_init_group(multifd_pages_t *pages)
>> +{
>> +    pages->num = 0;
>> +    pages->size = migrate_multifd_group();
>> +    pages->iov = g_malloc0(pages->size * sizeof(struct iovec));
>
> Does that get freed anywhere?

Ooops.  Now it does.
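
For illustration, roughly the kind of teardown that implies, assuming it is
called from the multifd save cleanup path when the send threads go away
(names here are a sketch, not the respun patch):

    static void multifd_pages_clear(multifd_pages_t *pages)
    {
        pages->num = 0;
        pages->size = 0;
        /* release what multifd_init_group() allocated */
        g_free(pages->iov);
        pages->iov = NULL;
    }

Something like that needs to run for each channel's p->pages, and for the
static batch kept in multifd_send_page().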

>> +}
>> +
>>  int multifd_save_setup(void)
>>  {
>>      int thread_count;
>> @@ -498,7 +512,7 @@ int multifd_save_setup(void)
>>          p->quit = false;
>>          p->id = i;
>>          p->done = true;
>> -        p->address = 0;
>> +        multifd_init_group(&p->pages);
>>          p->c = socket_send_channel_create();
>>          if (!p->c) {
>>              error_report("Error creating a send channel");
>> @@ -515,8 +529,23 @@ int multifd_save_setup(void)
>>  
>>  static int multifd_send_page(uint8_t *address)
>>  {
>> -    int i;
>> +    int i, j;
>>      MultiFDSendParams *p = NULL; /* make happy gcc */
>> +    static multifd_pages_t pages;
>> +    static bool once;
>> +
>> +    if (!once) {
>> +        multifd_init_group(&pages);
>> +        once = true;
>> +    }
>> +
>> +    pages.iov[pages.num].iov_base = address;
>> +    pages.iov[pages.num].iov_len = TARGET_PAGE_SIZE;
>> +    pages.num++;
>> +
>> +    if (pages.num < (pages.size - 1)) {
>> +        return UINT16_MAX;
>
> That's a very odd magic constant to return.
> What's your intention?
>
>> +    }
>>  
>>      qemu_sem_wait(&multifd_send_state->sem);
>>      qemu_mutex_lock(&multifd_send_state->mutex);
>> @@ -530,7 +559,12 @@ static int multifd_send_page(uint8_t *address)
>>      }
>>      qemu_mutex_unlock(&multifd_send_state->mutex);
>>      qemu_mutex_lock(&p->mutex);
>> -    p->address = address;
>> +    p->pages.num = pages.num;
>> +    for (j = 0; j < pages.size; j++) {
>> +        p->pages.iov[j].iov_base = pages.iov[j].iov_base;
>> +        p->pages.iov[j].iov_len = pages.iov[j].iov_len;
>> +    }
>
> It would seem more logical to update p->pages.num last
>
> This is also a little odd in that iov_len is never really used,
> it's always TARGET_PAGE_SIZE.

Changed to iov_copy() following Peter's suggestion.  And iov_len is used by
the qio send functions, so it carries the right value there.
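
For illustration, this is the kind of thing that replaces the loop, using
iov_copy() from include/qemu/iov.h (the exact code in the respun patch may
differ):

    qemu_mutex_lock(&p->mutex);
    /* iov_copy() copies the iovec descriptors, not the page data,
     * which is all the open-coded loop above was doing */
    iov_copy(p->pages.iov, p->pages.size,
             pages.iov, pages.num,
             0, pages.num * TARGET_PAGE_SIZE);
    /* publish the count only after the iovecs are in place,
     * per the review comment above */
    p->pages.num = pages.num;
    pages.num = 0;
    qemu_mutex_unlock(&p->mutex);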

>> +    pages.num = 0;
>>      qemu_mutex_unlock(&p->mutex);
>>      qemu_sem_post(&p->sem);
>
> What makes sure that any final chunk of pages that was less
> than the group size is sent at the end?

See the last_page boolean in a following patch; this was the wrong place to
handle it.
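
For illustration only (the actual following patch is authoritative): the idea
is a last_page flag on multifd_send_page() so the caller can force out a
partial batch at the end of the iteration, roughly:

    /* inside multifd_send_page(uint8_t *address, bool last_page) */
    pages.iov[pages.num].iov_base = address;
    pages.iov[pages.num].iov_len = TARGET_PAGE_SIZE;
    pages.num++;

    if (!last_page && pages.num < (pages.size - 1)) {
        /* batch not full and more pages coming: keep accumulating */
        return UINT16_MAX;
    }
    /* otherwise fall through and hand the (possibly partial) batch
     * to a free channel as above */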

Thanks, Juan.


