From: Wang, Wei W
Subject: Re: [Qemu-devel] [PATCH v2 2/3] migration: use the free page reporting feature from balloon
Date: Mon, 26 Feb 2018 09:22:59 +0000

On Monday, February 26, 2018 1:07 PM, Wei Wang wrote:
> On 02/09/2018 07:50 PM, Dr. David Alan Gilbert wrote:
> > * Wei Wang (address@hidden) wrote:
> >> Use the free page reporting feature from the balloon device to clear
> >> the bits corresponding to guest free pages from the dirty bitmap, so
> >> that the free memory is not sent.
> >>
> >> Signed-off-by: Wei Wang <address@hidden>
> >> CC: Michael S. Tsirkin <address@hidden>
> >> CC: Juan Quintela <address@hidden>
> >> ---
> >>   migration/ram.c | 24 ++++++++++++++++++++----
> >>   1 file changed, 20 insertions(+), 4 deletions(-)
> >>
> >> diff --git a/migration/ram.c b/migration/ram.c
> >> index d6f462c..4fe16d2 100644
> >> --- a/migration/ram.c
> >> +++ b/migration/ram.c
> >> @@ -49,6 +49,7 @@
> >>   #include "qemu/rcu_queue.h"
> >>   #include "migration/colo.h"
> >>   #include "migration/block.h"
> >> +#include "sysemu/balloon.h"
> >>
> >>   /***********************************************************/
> >>   /* ram save/restore */
> >> @@ -206,6 +207,10 @@ struct RAMState {
> >>       uint32_t last_version;
> >>       /* We are in the first round */
> >>       bool ram_bulk_stage;
> >> +    /* The feature, skipping the transfer of free pages, is supported */
> >> +    bool free_page_support;
> >> +    /* Skip the transfer of free pages in the bulk stage */
> >> +    bool free_page_done;
> >>       /* How many times we have dirty too many pages */
> >>       int dirty_rate_high_cnt;
> >>       /* these variables are used for bitmap sync */
> >> @@ -773,7 +778,7 @@ unsigned long migration_bitmap_find_dirty(RAMState *rs, RAMBlock *rb,
> >>       unsigned long *bitmap = rb->bmap;
> >>       unsigned long next;
> >>
> >> -    if (rs->ram_bulk_stage && start > 0) {
> >> +    if (rs->ram_bulk_stage && start > 0 && !rs->free_page_support) {
> >>           next = start + 1;
> >>       } else {
> >>           next = find_next_bit(bitmap, size, start);
> >> @@ -1653,6 +1658,8 @@ static void ram_state_reset(RAMState *rs)
> >>       rs->last_page = 0;
> >>       rs->last_version = ram_list.version;
> >>       rs->ram_bulk_stage = true;
> >> +    rs->free_page_support = balloon_free_page_support();
> >> +    rs->free_page_done = false;
> >>   }
> >>
> >>   #define MAX_WAIT 50 /* ms, half buffered_file limit */
> >> @@ -2135,7 +2142,7 @@ static int ram_state_init(RAMState **rsp)
> >>       return 0;
> >>   }
> >>
> >> -static void ram_list_init_bitmaps(void)
> >> +static void ram_list_init_bitmaps(RAMState *rs)
> >>   {
> >>       RAMBlock *block;
> >>       unsigned long pages;
> >> @@ -2145,7 +2152,11 @@ static void ram_list_init_bitmaps(void)
> >>           QLIST_FOREACH_RCU(block, &ram_list.blocks, next) {
> >>               pages = block->max_length >> TARGET_PAGE_BITS;
> >>               block->bmap = bitmap_new(pages);
> >> -            bitmap_set(block->bmap, 0, pages);
> >> +            if (rs->free_page_support) {
> >> +                bitmap_set(block->bmap, 1, pages);
> > I don't understand how it makes sense to do that here; ignoring
> > anything else, it means that migration_dirty_pages is wrong, which could
> > end up with migration finishing before all real pages are sent.
> >
> 
> The bulk stage treats all the pages as dirty pages, so we set all the bits to
> "1". This is needed by this optimization feature because the free pages
> reported from the guest can then be cleared directly from the bitmap (we don't
> need any more bitmaps to record free pages).
> 
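
For reference, here is a minimal sketch (not the code from this series) of what
"cleared directly from the bitmap" means in practice: when the guest reports a
free page range, the corresponding dirty-bitmap bits are dropped and the dirty
page accounting is adjusted. The helper name and the direct access to the
dirty-page counter below are illustrative assumptions, not the interface the
patches add:

#include "qemu/osdep.h"
#include "qemu/bitops.h"

/*
 * Illustrative sketch only: drop guest-reported free pages from the dirty
 * bitmap so they are never queued for transfer, while keeping the dirty-page
 * counter consistent so migration completion is still judged on the pages
 * that really remain dirty.
 */
static void clear_free_page_hint(unsigned long *dirty_bmap,
                                 unsigned long start_page,
                                 unsigned long npages,
                                 uint64_t *migration_dirty_pages)
{
    unsigned long page;

    for (page = start_page; page < start_page + npages; page++) {
        /* Only account for bits that were actually set. */
        if (test_and_clear_bit(page, dirty_bmap)) {
            (*migration_dirty_pages)--;
        }
    }
}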

Sorry, I misunderstood the bitmap_set API (I thought it was used to set all the
bits to 1 or 0), so the above change isn't actually needed. Btw, this doesn't
affect the results I reported.
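
For anyone reading along, the semantics in question (QEMU's
include/qemu/bitmap.h): bitmap_set() takes a start bit and a count, not the
value the bits are set to, and bitmap_new() returns an all-zero bitmap. A small
illustration, not part of the patch:

#include "qemu/osdep.h"
#include "qemu/bitmap.h"

/* Allocate a per-block dirty bitmap with every page initially marked dirty. */
static unsigned long *all_dirty_bitmap(long pages)
{
    unsigned long *bmap = bitmap_new(pages);    /* freshly allocated, all bits 0 */

    bitmap_set(bmap, 0, pages);                 /* set bits [0, pages) to 1 */
    /*
     * bitmap_set(bmap, 1, pages) would instead set the range [1, pages + 1),
     * i.e. skip bit 0 and run one bit past the end of the bitmap.
     */
    return bmap;
}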

Best,
Wei