From: Peter Xu
Subject: Re: [Qemu-devel] [PATCH 37/51] ram: Move compression_switch to RAMState
Date: Fri, 31 Mar 2017 11:04:19 +0800
User-agent: Mutt/1.5.24 (2015-08-30)

On Thu, Mar 30, 2017 at 06:30:36PM +0200, Juan Quintela wrote:
> Peter Xu <address@hidden> wrote:
> > On Thu, Mar 23, 2017 at 09:45:30PM +0100, Juan Quintela wrote:
> >> Rename it to preffer_xbzrle, which is a more descriptive name.
> >
> > s/preffer/prefer/?
> >        
> >> 
> >> Signed-off-by: Juan Quintela <address@hidden>
> >> ---
> >>  migration/ram.c | 9 +++++----
> >>  1 file changed, 5 insertions(+), 4 deletions(-)
> >> 
> >> diff --git a/migration/ram.c b/migration/ram.c
> >> index 6a39704..591cf89 100644
> >> --- a/migration/ram.c
> >> +++ b/migration/ram.c
> >> @@ -217,6 +217,9 @@ struct RAMState {
> >>      uint64_t dirty_pages_rate;
> >>      /* Count of requests incoming from destination */
> >>      uint64_t postcopy_requests;
> >> +    /* Should we move to xbzrle after the 1st round
> >> +       of compression */
> >> +    bool preffer_xbzrle;
> >>      /* protects modification of the bitmap */
> >>      QemuMutex bitmap_mutex;
> >>      /* Ram Bitmap protected by RCU */
> >> @@ -335,7 +338,6 @@ static QemuCond comp_done_cond;
> >>  /* The empty QEMUFileOps will be used by file in CompressParam */
> >>  static const QEMUFileOps empty_ops = { };
> >>  
> >> -static bool compression_switch;
> >>  static DecompressParam *decomp_param;
> >>  static QemuThread *decompress_threads;
> >>  static QemuMutex decomp_done_lock;
> >> @@ -419,7 +421,6 @@ void migrate_compress_threads_create(void)
> >>      if (!migrate_use_compression()) {
> >>          return;
> >>      }
> >> -    compression_switch = true;
> >>      thread_count = migrate_compress_threads();
> >>      compress_threads = g_new0(QemuThread, thread_count);
> >>      comp_param = g_new0(CompressParam, thread_count);
> >> @@ -1091,7 +1092,7 @@ static bool find_dirty_block(RAMState *rs, PageSearchStatus *pss,
> >>                   * point. In theory, xbzrle can do better than compression.
> >>                   */
> >>                  flush_compressed_data(rs);
> >> -                compression_switch = false;
> >> +                rs->preffer_xbzrle = true;
> >>              }
> >>          }
> >>          /* Didn't find anything this time, but try again on the new block */
> >> @@ -1323,7 +1324,7 @@ static int ram_save_target_page(RAMState *rs, MigrationState *ms,
> >>      /* Check the pages is dirty and if it is send it */
> >>      if (migration_bitmap_clear_dirty(rs, dirty_ram_abs)) {
> >>          unsigned long *unsentmap;
> >> -        if (compression_switch && migrate_use_compression()) {
> >> +        if (!rs->preffer_xbzrle && migrate_use_compression()) {
> >
> > IIUC this prefer_xbzrle can be dynamically calculated by existing
> > states:
> >
> > static inline bool ram_compression_active(RAMState *rs)
> > {
> >     /*
> >      * If xbzrle is on, stop using the data compression after first
> >      * round of migration even if compression is enabled. In theory,
> >      * xbzrle can do better than compression.
> >      */
> >     return migrate_use_compression() &&
> >            (rs->ram_bulk_stage || !migrate_use_xbzrle());
> > }
> >
> > Then this line can be written as:
> >
> >     if (ram_compression_active(rs)) {
> >
> > And if so, we can get rid of prefer_xbzrle, right?
> >
> > Having prefer_xbzrle should be slightly faster though, since it'll at
> > least cache the above calculation.
> >
> > Thanks,
> 
> You arrived at the same conclusion as Dave.  As it was only used once,
> I didn't create the extra function.

I would still slightly prefer an extra function (which I expect would
be inlined anyway) for readability, and in case we use it somewhere
else in the future there will be no need to think it through twice.

But I'm okay without it as well. :)
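
For the record, the call site in ram_save_target_page() would then read
something like this (a sketch only; ram_compression_active() is the
helper quoted above, everything else comes from the hunk in the patch):

    if (migration_bitmap_clear_dirty(rs, dirty_ram_abs)) {
        unsigned long *unsentmap;

        /* Compression stays active during the bulk stage, or when
         * xbzrle is not enabled to take over after the first round. */
        if (ram_compression_active(rs)) {
            /* compression path, as in the original branch
             * (body not shown in the quoted hunk) */
        }
    }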

> 
> I stole your comment verbatim O:-)

My pleasure!

-- peterx


