From: Thibaut Collet
Subject: Re: [Qemu-devel] [PATCH v3 2/2] vhost user: Add RARP injection for legacy guest
Date: Tue, 16 Jun 2015 10:16:32 +0200

For a live migration, my understanding is that there is a suspend/resume operation:
- The VM image is iteratively copied from the old host to the new one
(pages modified by the running VM may be copied several times).
- As soon as only a few pages remain to copy, the VM is suspended on
the old host, the last pages are copied, and the VM is resumed on the
new host. Migration is therefore not totally transparent to the guest,
which sees a small period of unavailability.
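
A minimal sketch of that pre-copy loop (hypothetical helper names; this
is not QEMU's actual migration code):

#include <stddef.h>

#define STOP_COPY_THRESHOLD 64            /* pages left; illustrative */

extern size_t dirty_page_count(void);     /* assumed hypervisor helpers */
extern void copy_dirty_pages_to_dest(void);
extern void suspend_vm_on_source(void);
extern void resume_vm_on_dest(void);

static void precopy_migrate(void)
{
    /* Copy while the VM keeps running; pages the guest re-dirties
     * are copied again on a later pass. */
    while (dirty_page_count() > STOP_COPY_THRESHOLD)
        copy_dirty_pages_to_dest();

    /* Few pages left: stop the VM, copy the remainder, and resume
     * on the destination. This is the short unavailability window. */
    suspend_vm_on_source();
    copy_dirty_pages_to_dest();
    resume_vm_on_dest();
}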



On Tue, Jun 16, 2015 at 10:05 AM, Jason Wang <address@hidden> wrote:
>
>
> On 06/16/2015 03:24 PM, Thibaut Collet wrote:
>> If my understanding is correct, on a resume operation we have the
>> following callback trace:
>> 1. virtio_pci_restore, which calls the restore callback of every
>> virtio device
>> 2. virtnet_restore, which calls try_fill_recv for each virtual queue
>> 3. try_fill_recv, which kicks the virtual queue (through
>> virtqueue_kick)
>
> Yes, but this happens only after a PM resume, not after migration.
> Migration is totally transparent to the guest.
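
For reference, the guest-side path described above looks roughly like
this (paraphrased from drivers/net/virtio_net.c; struct definitions and
error handling omitted, so this is a sketch, not the literal kernel
source):

static bool try_fill_recv(struct virtnet_info *vi,
                          struct receive_queue *rq, gfp_t gfp)
{
    bool oom;

    do {
        /* Post one rx buffer to the ring. */
        oom = add_recvbuf_small(vi, rq, gfp) < 0;
    } while (rq->vq->num_free && !oom);

    virtqueue_kick(rq->vq);    /* <-- the kick the backend observes */
    return !oom;
}

static int virtnet_restore(struct virtio_device *vdev)
{
    struct virtnet_info *vi = vdev->priv;
    int i;

    /* Refill (and therefore kick) every receive queue on resume. */
    for (i = 0; i < vi->max_queue_pairs; i++)
        try_fill_recv(vi, &vi->rq[i], GFP_KERNEL);

    return 0;
}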
>
>>
>>
>> On Tue, Jun 16, 2015 at 7:29 AM, Jason Wang <address@hidden> wrote:
>>>
>>> On 06/15/2015 08:12 PM, Thibaut Collet wrote:
>>>> After a resume operation the guest always kicks the backend for each
>>>> virtual queue.
>>>> A live migration does a suspend operation on the old host and a resume
>>>> operation on the new host, so the backend gets a kick after migration.
>>>>
>>>> I have checked this point with a legacy guest (Red Hat 6.5 with kernel
>>>> version 2.6.32-431.29.2) and the kick occurs after migration or
>>>> resume.
>>>>
>>>> Jason, do you have an example of a legacy guest that will not kick the
>>>> virtual queue after a resume?
>>> I must be missing something, but migration should be transparent to the
>>> guest. Could you show me the code where the guest does the kick after
>>> migration?
>>>
>>>> On Mon, Jun 15, 2015 at 10:44 AM, Michael S. Tsirkin <address@hidden> wrote:
>>>>> On Mon, Jun 15, 2015 at 03:43:13PM +0800, Jason Wang wrote:
>>>>>> On 06/12/2015 10:28 PM, Michael S. Tsirkin wrote:
>>>>>>> On Fri, Jun 12, 2015 at 03:55:33PM +0800, Jason Wang wrote:
>>>>>>>> On 06/11/2015 08:13 PM, Michael S. Tsirkin wrote:
>>>>>>>>> On Thu, Jun 11, 2015 at 02:10:48PM +0200, Thibaut Collet wrote:
>>>>>>>>>> I am not sure I understand your remark:
>>>>>>>>>>
>>>>>>>>>>> It needs to be sent when the backend is activated by a guest kick
>>>>>>>>>>> (in case of virtio 1, it's possible to use DRIVER_OK for this).
>>>>>>>>>>> This does not happen while the VM is still running on the source.
>>>>>>>>>> Could you confirm that the RARP can be sent by the backend when
>>>>>>>>>> the VHOST_USER_SET_VRING_KICK message is received?
>>>>>>>>> No - the time to send packets is when you start processing
>>>>>>>>> the rings.
>>>>>>>>>
>>>>>>>>> And the time to do that is when you detect a kick on
>>>>>>>>> an eventfd, not when said fd is set.
>>>>>>>>>
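
In backend terms, this amounts to something like the following
(illustrative user-space loop; send_rarp() and process_vring() are
hypothetical helpers, not a vhost-user library API):

#include <poll.h>
#include <stdbool.h>
#include <stdint.h>
#include <unistd.h>

extern void send_rarp(void);       /* hypothetical helpers */
extern void process_vring(void);

/* Inject the RARP on the first kick observed on the eventfd, i.e. when
 * the guest actually starts driving the ring, not earlier when
 * VHOST_USER_SET_VRING_KICK merely hands the fd over. */
void backend_poll_loop(int kickfd)
{
    bool rarp_sent = false;
    struct pollfd pfd = { .fd = kickfd, .events = POLLIN };
    uint64_t n;

    for (;;) {
        if (poll(&pfd, 1, -1) <= 0)
            continue;
        if (read(kickfd, &n, sizeof(n)) != sizeof(n))
            continue;              /* consume the kick counter */
        if (!rarp_sent) {
            send_rarp();
            rarp_sent = true;
        }
        process_vring();
    }
}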
>>>>>>>> Probably not. What if the guest is only receiving?
>>>>>>> Clarification: the kick can be on any of the VQs.
>>>>>>> In your example, guest kicks after adding receive buffers.
>>>>>> Yes, but a refill only happens when we are short of receive buffers.
>>>>>> It is not guaranteed to happen just after migration; we may still
>>>>>> have enough rx buffers for the device to receive into.
>>>>> I think we also kick the backend after migration, do we not?
>>>>> Further, DRIVER_OK can be used as a signal to start the backend too.
>>>>>
>>>>>>>> In this case, you
>>>>>>>> won't detect any kick if you don't send the RARP first.
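
For context, the announce packet under discussion is a self-addressed
RARP frame (compare qemu_announce_self in QEMU). A hand-built version,
with field layout per RFC 903, looks roughly like this:

#include <stdint.h>
#include <string.h>

/* Build the 42-byte RARP announce frame; mac is the guest NIC address.
 * Opcode 3 is "request reverse". */
static size_t build_rarp(uint8_t *buf, const uint8_t mac[6])
{
    memset(buf, 0xff, 6);            /* dst: broadcast           */
    memcpy(buf + 6, mac, 6);         /* src: guest MAC           */
    buf[12] = 0x80; buf[13] = 0x35;  /* ethertype: RARP (0x8035) */
    buf[14] = 0;    buf[15] = 1;     /* htype: Ethernet          */
    buf[16] = 0x08; buf[17] = 0x00;  /* ptype: IPv4              */
    buf[18] = 6;                     /* hlen                     */
    buf[19] = 4;                     /* plen                     */
    buf[20] = 0;    buf[21] = 3;     /* op: request reverse      */
    memcpy(buf + 22, mac, 6);        /* sender hw addr           */
    memset(buf + 28, 0, 4);          /* sender proto addr        */
    memcpy(buf + 32, mac, 6);        /* target hw addr           */
    memset(buf + 38, 0, 4);          /* target proto addr        */
    return 42;
}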


