Re: [Qemu-devel] [PATCH 30/51] ram: Move src_page_req* to RAMState


From: Dr. David Alan Gilbert
Subject: Re: [Qemu-devel] [PATCH 30/51] ram: Move src_page_req* to RAMState
Date: Wed, 5 Apr 2017 11:27:50 +0100
User-agent: Mutt/1.8.0 (2017-02-23)

* Peter Xu (address@hidden) wrote:
> On Fri, Mar 31, 2017 at 04:25:56PM +0100, Dr. David Alan Gilbert wrote:
> > * Peter Xu (address@hidden) wrote:
> > > On Thu, Mar 23, 2017 at 09:45:23PM +0100, Juan Quintela wrote:
> > > > This are the last postcopy fields still at MigrationState.  Once there
> > > 
> > > s/This/These/
> > > 
> > > > Move MigrationSrcPageRequest to ram.c and remove MigrationState
> > > > parameters where appropriate.
> > > > 
> > > > Signed-off-by: Juan Quintela <address@hidden>
> > > 
> > > Reviewed-by: Peter Xu <address@hidden>
> > > 
> > > One question below though...
> > > 
> > > [...]
> > > 
> > > > @@ -1191,19 +1204,18 @@ static bool get_queued_page(RAMState *rs, MigrationState *ms,
> > > >   *
> > > >   * It should be empty at the end anyway, but in error cases there may
> > > >   * be some left.
> > > > - *
> > > > - * @ms: current migration state
> > > >   */
> > > > -void flush_page_queue(MigrationState *ms)
> > > > +void flush_page_queue(void)
> > > >  {
> > > > -    struct MigrationSrcPageRequest *mspr, *next_mspr;
> > > > +    struct RAMSrcPageRequest *mspr, *next_mspr;
> > > > +    RAMState *rs = &ram_state;
> > > >      /* This queue generally should be empty - but in the case of a failed
> > > >       * migration might have some droppings in.
> > > >       */
> > > >      rcu_read_lock();
> > > 
> > > Could I ask why we are taking the RCU read lock rather than the mutex
> > > here?
> > 
> > It's a good question whether we need anything at all.
> > flush_page_queue is called only from migrate_fd_cleanup.
> > migrate_fd_cleanup is called either from a backhalf, which I think has the bql,
> > or from a failure path in migrate_fd_connect.
> > migrate_fd_connect is called from migration_channel_connect and rdma_start_outgoing_migration
> > which I think both end up at monitor commands so also in the bql.
> > 
> > So I think we can probably just lose the rcu_read_lock/unlock.
> 
> Thanks for the confirmation.
> 
> (ps: even if we are not holding the bql, we should not need this
>  rcu_read_lock, right? My understanding is: if we want to protect
>  src_page_requests, we need the mutex, not the rcu lock; and for
>  the memory_region_unref(), since we already hold the reference, it
>  looks like we don't need any kind of locking either)

Right; I guess the memory_region_unref might cause memory regions
to be cleaned up in that loop without the rcu locks, but I don't think
it's a problem even if they are cleaned up.
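
A minimal sketch of the mutex-only draining described above, purely for
illustration (it assumes RAMState carries a src_page_req_mutex next to
src_page_requests; that field name is an assumption here, not something
quoted from the patch):

/* Hypothetical sketch, not the actual patch: drain the request queue
 * under the queue mutex instead of the RCU read lock. */
void flush_page_queue(void)
{
    struct RAMSrcPageRequest *mspr, *next_mspr;
    RAMState *rs = &ram_state;

    qemu_mutex_lock(&rs->src_page_req_mutex);   /* assumed field */
    QSIMPLEQ_FOREACH_SAFE(mspr, &rs->src_page_requests, next_req, next_mspr) {
        /* Each queued request already holds a reference on the RAMBlock's
         * MemoryRegion, so dropping it needs no extra locking. */
        memory_region_unref(mspr->rb->mr);
        QSIMPLEQ_REMOVE_HEAD(&rs->src_page_requests, next_req);
        g_free(mspr);
    }
    qemu_mutex_unlock(&rs->src_page_req_mutex);
}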

Dave

> > 
> > Dave
> > 
> > > 
> > > > -    QSIMPLEQ_FOREACH_SAFE(mspr, &ms->src_page_requests, next_req, next_mspr) {
> > > > +    QSIMPLEQ_FOREACH_SAFE(mspr, &rs->src_page_requests, next_req, next_mspr) {
> > > >          memory_region_unref(mspr->rb->mr);
> > > > -        QSIMPLEQ_REMOVE_HEAD(&ms->src_page_requests, next_req);
> > > > +        QSIMPLEQ_REMOVE_HEAD(&rs->src_page_requests, next_req);
> > > >          g_free(mspr);
> > > >      }
> > > >      rcu_read_unlock();
> > > 
> > > Thanks,
> > > 
> > > -- peterx
> > --
> > Dr. David Alan Gilbert / address@hidden / Manchester, UK
> 
> -- peterx
--
Dr. David Alan Gilbert / address@hidden / Manchester, UK


