Re: [Qemu-devel] [PATCH 10/17] migration: create ram_multifd_page
From: Juan Quintela
Subject: Re: [Qemu-devel] [PATCH 10/17] migration: create ram_multifd_page
Date: Mon, 13 Feb 2017 17:36:03 +0100
User-agent: Gnus/5.13 (Gnus v5.13) Emacs/25.1 (gnu/linux)
"Dr. David Alan Gilbert" <address@hidden> wrote:
> * Juan Quintela (address@hidden) wrote:
>> The function doesn't use multifd yet, but we have simplified
>> ram_save_page: the xbzrle and RDMA handling is gone. We have added a
>> new counter and a new flag for this type of page.
>> +static int ram_multifd_page(QEMUFile *f, PageSearchStatus *pss,
>> +                            bool last_stage, uint64_t *bytes_transferred)
>> +{
>> +    int pages;
>> +    uint8_t *p;
>> +    RAMBlock *block = pss->block;
>> +    ram_addr_t offset = pss->offset;
>> +
>> +    p = block->host + offset;
>> +
>> +    if (block == last_sent_block) {
>> +        offset |= RAM_SAVE_FLAG_CONTINUE;
>> +    }
>> +    pages = save_zero_page(f, block, offset, p, bytes_transferred);
>> +    if (pages == -1) {
>> +        *bytes_transferred +=
>> +            save_page_header(f, block, offset | RAM_SAVE_FLAG_MULTIFD_PAGE);
>> +        qemu_put_buffer(f, p, TARGET_PAGE_SIZE);
>> +        *bytes_transferred += TARGET_PAGE_SIZE;
>> +        pages = 1;
>> +        acct_info.norm_pages++;
>> +        acct_info.multifd_pages++;
>> +    }
>> +
>> +    return pages;
>> +}
>> +
>>  static int do_compress_ram_page(QEMUFile *f, RAMBlock *block,
>>                                  ram_addr_t offset)
>>  {
>> @@ -1427,6 +1461,8 @@ static int ram_save_target_page(MigrationState *ms, QEMUFile *f,
>>              res = ram_save_compressed_page(f, pss,
>>                                             last_stage,
>>                                             bytes_transferred);
>> +    } else if (migrate_use_multifd()) {
>> +        res = ram_multifd_page(f, pss, last_stage, bytes_transferred);
>
> I'm curious whether it's best to pick the destination fd at this level
> or one level higher; for example, would it be good to keep all the
> components of a host page or huge page together on the same fd? If so,
> it would be best to pick the fd at ram_save_host_page level.
My plan here is to change the migration code so it can be called with
bigger sizes, not page by page; then the problem solves itself?
Later, Juan.