From: liu ping fan
Subject: Re: [Qemu-devel] [PATCH 1/2] Vring: vring's listener's priority should higher than kvm
Date: Fri, 10 May 2013 17:04:43 +0800

On Fri, May 10, 2013 at 3:12 PM, Stefan Hajnoczi <address@hidden> wrote:
> On Fri, May 10, 2013 at 02:03:34PM +0800, liu ping fan wrote:
>> On Thu, May 9, 2013 at 11:26 PM, Stefan Hajnoczi <address@hidden> wrote:
>> > On Thu, May 09, 2013 at 05:00:20PM +0800, liu ping fan wrote:
>> >> On Thu, May 9, 2013 at 4:30 PM, Stefan Hajnoczi <address@hidden> wrote:
>> >> > On Thu, May 09, 2013 at 08:40:21AM +0800, Liu Ping Fan wrote:
>> >> >> From: Liu Ping Fan <address@hidden>
>> >> >>
>> >> >> Host threads which handle the vring should have a higher
>> >> >> MemoryListener priority than kvm. With the current code, take the
>> >> >> following scenario: kvm_region_add() runs before vhost_region_add();
>> >> >> then, in the guest, the vring's desc[i] can refer to addressX in a
>> >> >> new region that the guest already knows about. But vhost does not
>> >> >> know about this new region yet, and the vring handler will fail.
>> >> >
>> >> > Is there a concrete scenario where this happens?
>> >> >
>> >> > I can think of situations like the ioeventfd being readable before
>> >> > vhost/hostmem is populated.  But I don't see how that's related to the
>> >> > priority of kvm_region_add().
>> >> >
>> >> For kvm, i.e. in the guest, vring_desc.addr can point to a chunk of
>> >> data in the newly added memory, and the guest then kicks vhost. vhost
>> >> has not added this new region yet, so its local lookup table cannot
>> >> translate the new address, and the vring handler will fail.  If
>> >> vhost's priority is higher than kvm's, it will learn about the new
>> >> region earlier than kvm does.
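
To make the failure mode concrete, here is a rough sketch of the kind
of guest-physical-to-host lookup the vring handler has to do over the
regions it has been told about, and how it comes up empty for a region
whose region_add has not been delivered yet. This is not vhost's actual
code; the struct and function names are made up for the example.

/* Illustrative only: a cut-down model of a vhost-style lookup that
 * fails when a region has not been passed to the backend yet. */
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

struct mem_region {
    uint64_t guest_phys_addr;
    uint64_t size;
    void    *host_addr;
};

/* Regions the backend has seen via region_add so far. */
static struct mem_region known_regions[8];
static unsigned nr_known_regions;

/* Translate a guest-physical address (e.g. vring_desc.addr) into a
 * host pointer.  Returns NULL if the address falls into a region the
 * backend has not been told about yet. */
static void *gpa_to_hva(uint64_t gpa)
{
    for (unsigned i = 0; i < nr_known_regions; i++) {
        struct mem_region *r = &known_regions[i];
        if (gpa >= r->guest_phys_addr &&
            gpa < r->guest_phys_addr + r->size) {
            return (char *)r->host_addr + (gpa - r->guest_phys_addr);
        }
    }
    return NULL; /* region hot-plugged, but region_add not delivered yet */
}

int main(void)
{
    static char old_ram[0x1000];

    known_regions[nr_known_regions++] =
        (struct mem_region){ 0x0, sizeof(old_ram), old_ram };

    /* The guest already sees a hot-plugged region at 0x40000000 and
     * places a descriptor there, but this table was never updated. */
    uint64_t desc_addr = 0x40000000;
    void *hva = gpa_to_hva(desc_addr);

    printf("desc addr 0x%" PRIx64 " -> %p\n", desc_addr, hva);
    return 0; /* prints (nil): the vring handler has to bail out */
}
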
>> >
>> > Isn't the real solution to ensure that the memory API is up-to-date
>> > before we notify the guest of memory hotplug?
>> >
>> No, it is not.
>>
>> > I still don't see a kvm vs vhost race.  I see a guest vs vhost race
>> > which priority doesn't fix.
>> >
>> Yes, you are right.
>> The priority should be vhost > guest and kvm > guest, so vhost == kvm
>> is OK. But can it be higher, and why was 10 chosen rather than zero?
>>
>> If the dependency only lies between the MemoryListeners and the guest,
>> not between the listeners themselves, then is the priority meaningless?
>> I think we should be sure about this, because if we convert the core
>> listener to RCU style, we will definitely break the sequence of
>> region_add/del, i.e. both add and del will come after kvm.
>
> Okay, so now we're left with the question "what are the ordering
> dependencies between memory listeners?".
>
> I poked around with git-blame(1) but didn't find an explanation.  The
> best I can come up with is that the core listeners in exec.c update
> QEMU's guest RAM and I/O port mappings so that kvm/vhost/xen can query
> them.  Therefore the exec.c listeners have priority 0 or 1.
>
I think
  1. vhost has no relation to the exec.c listeners, right?
  2. kvm has a relation to the exec.c listeners, but as discussed, the
real dependency is between the guest and the exec.c listeners. So kvm
just needs to be ready before the guest, and this is not enforced by
"priority".
  3. xen? I do not know, but I guess it is like kvm.

So can we ignore the priority? It seems redundant since we rely on
other mechanisms.
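
For reference, a minimal model of what the priority field does order
between listeners within one transaction, assuming region_add is
dispatched in ascending priority and region_del in descending, which is
how memory.c's dispatch reads. This is not memory.c itself; the names
below are made up, and the priorities follow the values mentioned in
this thread (exec.c core around 0/1, kvm and vhost at 10).

/* Simplified model of what the priority field orders; illustrative
 * names and values only. */
#include <stdio.h>

typedef struct Listener {
    const char *name;
    int priority;
} Listener;

int main(void)
{
    /* Kept sorted by ascending priority, like the real listener list. */
    Listener listeners[] = {
        { "core (exec.c)", 1  },
        { "kvm",           10 },
        { "vhost",         10 },
    };
    int n = sizeof(listeners) / sizeof(listeners[0]);

    /* region_add: walk forward, low priority first, so the core
     * listener's bookkeeping is in place before kvm/vhost run. */
    for (int i = 0; i < n; i++) {
        printf("region_add -> %s (priority %d)\n",
               listeners[i].name, listeners[i].priority);
    }

    /* region_del: walk in reverse, high priority first, so kvm/vhost
     * drop the region before the core listener forgets about it. */
    for (int i = n - 1; i >= 0; i--) {
        printf("region_del -> %s (priority %d)\n",
               listeners[i].name, listeners[i].priority);
    }
    return 0;
}

So priority does at least fix the order in which listeners see one and
the same transaction; the open question is whether anything besides the
exec.c listeners really depends on that.
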
> BTW the commit that introduced priorities is:
>
>   commit 72e22d2fe17b85e56b4f0c437c61c6e2de97b308
>   Author: Avi Kivity <address@hidden>
>   Date:   Wed Feb 8 15:05:50 2012 +0200
>
>       memory: switch memory listeners to a QTAILQ
>
> Stefan
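
For completeness, a compilable sketch of the priority-sorted
registration that commit introduced, with <sys/queue.h> TAILQ standing
in for QEMU's QTAILQ and a made-up listener_register() in place of the
real memory_listener_register() in memory.c. The detail that matters
for the patch at the top of this thread: listeners with equal priority
(e.g. kvm and vhost if both sit at 10) are simply kept in registration
order.

/* Model only: sorted insert by priority, not QEMU's actual code. */
#include <stdio.h>
#include <sys/queue.h>

typedef struct MemoryListener {
    const char *name;
    unsigned priority;
    TAILQ_ENTRY(MemoryListener) link;
} MemoryListener;

TAILQ_HEAD(listener_list, MemoryListener);
static struct listener_list memory_listeners =
    TAILQ_HEAD_INITIALIZER(memory_listeners);

/* Insert before the first listener with a strictly higher priority;
 * listeners of equal priority therefore keep registration order. */
static void listener_register(MemoryListener *listener)
{
    MemoryListener *other;

    TAILQ_FOREACH(other, &memory_listeners, link) {
        if (listener->priority < other->priority) {
            TAILQ_INSERT_BEFORE(other, listener, link);
            return;
        }
    }
    TAILQ_INSERT_TAIL(&memory_listeners, listener, link);
}

int main(void)
{
    MemoryListener core  = { "core (exec.c)", 1  };
    MemoryListener kvm   = { "kvm",           10 };
    MemoryListener vhost = { "vhost",         10 };
    MemoryListener *l;

    /* kvm happens to register before vhost here; with equal priority
     * that alone decides which one is called first. */
    listener_register(&kvm);
    listener_register(&vhost);
    listener_register(&core);

    TAILQ_FOREACH(l, &memory_listeners, link) {
        printf("priority %2u: %s\n", l->priority, l->name);
    }
    return 0;
}
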


