From: Xiao Guangrong
Subject: Re: [Qemu-devel] [PATCH 1/8] migration: stop compressing page in migration thread
Date: Tue, 27 Mar 2018 23:24:12 +0800
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:52.0) Gecko/20100101 Thunderbird/52.6.0



On 03/28/2018 11:01 AM, Wang, Wei W wrote:
On Tuesday, March 13, 2018 3:58 PM, Xiao Guangrong wrote:

As compression is heavy work, do not do it in the migration thread; instead, post the page out as a normal page

Signed-off-by: Xiao Guangrong <address@hidden>


Hi Guangrong,

Dave asked me to help review your patch, so I will just drop my two cents
wherever possible, and hope that it is helpful for your work.

Thank you both for the kind help with this work. :)



---
  migration/ram.c | 32 ++++++++++++++++----------------
  1 file changed, 16 insertions(+), 16 deletions(-)

diff --git a/migration/ram.c b/migration/ram.c
index 7266351fd0..615693f180 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -1132,7 +1132,7 @@ static int ram_save_compressed_page(RAMState *rs, PageSearchStatus *pss,
      int pages = -1;
      uint64_t bytes_xmit = 0;
      uint8_t *p;
-    int ret, blen;
+    int ret;
      RAMBlock *block = pss->block;
      ram_addr_t offset = pss->page << TARGET_PAGE_BITS;

@@ -1162,23 +1162,23 @@ static int ram_save_compressed_page(RAMState *rs, PageSearchStatus *pss,
          if (block != rs->last_sent_block) {
              flush_compressed_data(rs);
              pages = save_zero_page(rs, block, offset);
-            if (pages == -1) {
-                /* Make sure the first page is sent out before other pages */
-                bytes_xmit = save_page_header(rs, rs->f, block, offset |
-                                              RAM_SAVE_FLAG_COMPRESS_PAGE);
-                blen = qemu_put_compression_data(rs->f, p, TARGET_PAGE_SIZE,
-                                                 migrate_compress_level());
-                if (blen > 0) {
-                    ram_counters.transferred += bytes_xmit + blen;
-                    ram_counters.normal++;
-                    pages = 1;
-                } else {
-                    qemu_file_set_error(rs->f, blen);
-                    error_report("compressed data failed!");
-                }
-            }
              if (pages > 0) {
                  ram_release_pages(block->idstr, offset, pages);
+            } else {
+                /*
+                 * Make sure the first page is sent out before other pages.
+                 *
+                 * we post it as normal page as compression will take much
+                 * CPU resource.
+                 */
+                ram_counters.transferred += save_page_header(rs, rs->f, block,
+                                                offset | RAM_SAVE_FLAG_PAGE);
+                qemu_put_buffer_async(rs->f, p, TARGET_PAGE_SIZE,
+                                      migrate_release_ram() &
+                                      migration_in_postcopy());
+                ram_counters.transferred += TARGET_PAGE_SIZE;
+                ram_counters.normal++;
+                pages = 1;
              }
          } else {
              pages = save_zero_page(rs, block, offset);
--

I agree that this patch is an improvement over the current implementation, so
let me just pile mine on here:
Reviewed-by: Wei Wang <address@hidden>

Thanks.



If you are interested in something more aggressive, I can share an alternative 
approach, which I think would be better. Please see below.

Actually, we can use the multi-threaded compression for the first page as well, 
without blocking the migration thread's progress. The advantage is that we still 
get the compression benefit for the first page while keeping the migration thread 
unblocked - the page is handed to a compression thread and compressed 
asynchronously to the migration thread's execution.


Yes, it is a good point.

The main obstacle to achieving the above is that we need to make sure the 
first page of each block is sent first in the multi-threaded environment. We 
can tweak the current implementation to achieve that, which is not hard:

For example, we can add a new flag to RAMBlock - bool first_page_added. Each 
compression thread then needs to:
1) check whether the page it is handling is the first page of its block;
2) if it is the first page, set block->first_page_added after sending the page;
3) if it is not the first page, wait to send the page until 
block->first_page_added is set (a rough sketch follows below).
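
For illustration only, here is a minimal stand-alone sketch of that ordering 
rule. It is not QEMU code: RAMBlock below is a stub, first_page_sent plays the 
role of the proposed first_page_added flag, compress_and_send_page() is a 
placeholder for the real compress-and-send path, and C11 atomics stand in for 
QEMU's own synchronization primitives.

#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

typedef struct RAMBlock {
    const char *idstr;
    atomic_bool first_page_sent;    /* plays the role of "first_page_added" */
} RAMBlock;

/* Stand-in for "compress the page and put it on the wire". */
static void compress_and_send_page(RAMBlock *block, unsigned long page)
{
    printf("block %s: page %lu sent\n", block->idstr, page);
}

/*
 * What each compression thread would run for one page.  Step 1) of the
 * scheme above is modelled here by the is_first_page argument.
 */
static void send_page_in_order(RAMBlock *block, unsigned long page,
                               bool is_first_page)
{
    if (!is_first_page) {
        /* 3) later pages wait until the block's first page has gone out */
        while (!atomic_load(&block->first_page_sent)) {
            /* busy-wait for the demo; real code would block, not spin */
        }
    }

    compress_and_send_page(block, page);

    if (is_first_page) {
        /* 2) let the other compression threads of this block proceed */
        atomic_store(&block->first_page_sent, true);
    }
}

int main(void)
{
    RAMBlock blk = { .idstr = "pc.ram" };

    atomic_init(&blk.first_page_sent, false);

    /* single-threaded demo of the ordering: first page, then a later one */
    send_page_in_order(&blk, 0, true);
    send_page_in_order(&blk, 1, false);
    return 0;
}

The only ordering constraint is on a block's first page; all other pages of 
the block stay fully parallel once that flag is set.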


So this introduces another barrier, which hurts parallelism...

Hmm, this point needs more careful consideration; let me think it over 
after this work.

Thank you.



