Re: [Qemu-devel] [PATCH v3 2/2] vhost user: Add RARP injection for legacy guest


From: Michael S. Tsirkin
Subject: Re: [Qemu-devel] [PATCH v3 2/2] vhost user: Add RARP injection for legacy guest
Date: Wed, 24 Jun 2015 13:05:09 +0200

On Wed, Jun 24, 2015 at 04:31:15PM +0800, Jason Wang wrote:
> 
> 
> On 06/23/2015 01:49 PM, Michael S. Tsirkin wrote:
> > On Tue, Jun 23, 2015 at 10:12:17AM +0800, Jason Wang wrote:
> >> > 
> >> > 
> >> > On 06/18/2015 11:16 PM, Thibaut Collet wrote:
> >>> > > On Tue, Jun 16, 2015 at 10:05 AM, Jason Wang <address@hidden> wrote:
> >>>> > >>
> >>>> > >> On 06/16/2015 03:24 PM, Thibaut Collet wrote:
> >>>>> > >>> If my understanding is correct, on a resume operation we have the
> >>>>> > >>> following callback trace:
> >>>>> > >>> 1. virtio_pci_restore, which calls the restore callback of every
> >>>>> > >>> virtio device
> >>>>> > >>> 2. virtnet_restore, which calls the try_fill_recv function for
> >>>>> > >>> each virtual queue
> >>>>> > >>> 3. try_fill_recv, which kicks the virtual queue (through the
> >>>>> > >>> virtqueue_kick function)
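
To make the trace above easier to follow, here is a minimal, self-contained
stub in C that just mirrors the call order. The function names match the
guest's virtio_pci/virtio_net drivers, but the bodies are illustrative
stand-ins, not the actual kernel code:

  /* Illustrative stand-in for the guest-side resume path (not kernel code). */
  #include <stdio.h>

  static void virtqueue_kick(int qidx)
  {
      /* In the real driver this notifies the host (ioeventfd / notify
       * register), which is the "kick" a vhost-user backend could observe. */
      printf("kick virtqueue %d\n", qidx);
  }

  static void try_fill_recv(int qidx)
  {
      /* Step 3: refill the receive queue with buffers, then kick it. */
      printf("refill receive queue %d\n", qidx);
      virtqueue_kick(qidx);
  }

  static void virtnet_restore(int nr_rx_queues)
  {
      /* Step 2: the net driver's restore callback refills every RX queue. */
      for (int i = 0; i < nr_rx_queues; i++)
          try_fill_recv(i);
  }

  static void virtio_pci_restore(void)
  {
      /* Step 1: the PCI layer runs each virtio device's restore callback. */
      virtnet_restore(2);
  }

  int main(void)
  {
      virtio_pci_restore();   /* entry point on PM resume */
      return 0;
  }
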
> >>>> > >> Yes, but this happens only after a PM resume, not after migration.
> >>>> > >> Migration is totally transparent to the guest.
> >>>> > >>
> >>> > > Hi Jason,
> >>> > >
> >>> > > After a deeper look at the QEMU migration code, a resume event is
> >>> > > always sent when the live migration finishes.
> >>> > > On a live migration we have the following callback trace:
> >>> > > 1. The VM on the new host is set to the RUN_STATE_INMIGRATE state,
> >>> > > the autostart boolean is set to 1, and the
> >>> > > qemu_start_incoming_migration function is called (see the main
> >>> > > function of vl.c)
> >>> > > .....
> >>> > > 2. call of the process_incoming_migration function in
> >>> > > migration/migration.c, whatever the transport used for the live
> >>> > > migration (tcp:, fd:, unix:, exec: ...)
> >>> > > 3. call of the process_incoming_migration_co function in
> >>> > > migration/migration.c
> >>> > > 4. call of the vm_start function in vl.c (otherwise the migrated VM
> >>> > > stays in the paused state; the autostart boolean is set to 1 by the
> >>> > > main function in vl.c)
> >>> > > 5. vm_start sets the VM to the RUN_STATE_RUNNING state
> >>> > > 6. call of the qapi_event_send_resume function, which sends a resume
> >>> > > event to the VM
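
The same kind of stub for the destination-side path listed above: the real
functions live in vl.c and migration/migration.c, and the bodies below are
only placeholders that show the call order:

  /* Illustrative stand-in for the destination-side migration path
   * (not QEMU code). */
  #include <stdio.h>

  static int autostart = 1;   /* the autostart boolean described in step 1 */

  static void qapi_event_send_resume(void)
  {
      /* Step 6: emits a RESUME event. */
      printf("event: RESUME\n");
  }

  static void vm_start(void)
  {
      /* Steps 4-5: leave RUN_STATE_INMIGRATE and enter RUN_STATE_RUNNING. */
      printf("runstate: RUN_STATE_RUNNING\n");
      qapi_event_send_resume();
  }

  static void process_incoming_migration_co(void)
  {
      /* Step 3: load the incoming device state, then start the VM. */
      if (autostart)
          vm_start();
  }

  static void process_incoming_migration(void)
  {
      /* Step 2: common entry point for tcp:, fd:, unix:, exec: ... transports. */
      process_incoming_migration_co();
  }

  int main(void)
  {
      /* Step 1: qemu_start_incoming_migration() eventually lands here. */
      process_incoming_migration();
      return 0;
  }
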
> >> > 
> >> > AFAIK, this function sends the resume event to the QEMU monitor, not to
> >> > the VM.
> >> > 
> >>> > >
> >>> > > So when a live migration ends:
> >>> > > 1. a resume event is sent to the guest
> >>> > > 2. on reception of this resume event, the virtual queues are kicked
> >>> > > by the guest
> >>> > > 3. the vhost-user backend catches this kick and can emit a RARP to a
> >>> > > guest that does not support GUEST_ANNOUNCE
> >>> > >
> >>> > > This solution, like the solution based on detecting the DRIVER_OK
> >>> > > status suggested by Michael, allows the backend to send the RARP to
> >>> > > a legacy guest without involving QEMU or adding an ioctl to
> >>> > > vhost-user.
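
To make step 3 concrete, here is a rough, self-contained sketch of how a
backend could react to such a kick. The helper names (inject_to_guest) and
the rarp_pending flag are hypothetical, not part of any existing API; only
the RARP announce frame layout follows the usual convention:

  /* Hypothetical sketch: a vhost-user backend that injects a RARP announce
   * for a legacy guest (no VIRTIO_NET_F_GUEST_ANNOUNCE) on the first kick it
   * sees after a migration. Helpers are made up for illustration. */
  #include <stdbool.h>
  #include <stdint.h>
  #include <stdio.h>
  #include <string.h>
  #include <sys/eventfd.h>
  #include <unistd.h>

  static size_t build_rarp(const uint8_t mac[6], uint8_t *buf)
  {
      uint8_t *p = buf;

      memset(p, 0xff, 6);  p += 6;        /* dst: broadcast                 */
      memcpy(p, mac, 6);   p += 6;        /* src: guest MAC                 */
      *p++ = 0x80; *p++ = 0x35;           /* EtherType: RARP (0x8035)       */
      *p++ = 0x00; *p++ = 0x01;           /* hardware type: Ethernet        */
      *p++ = 0x08; *p++ = 0x00;           /* protocol type: IPv4            */
      *p++ = 6;    *p++ = 4;              /* hw / protocol address lengths  */
      *p++ = 0x00; *p++ = 0x03;           /* opcode: request reverse        */
      memcpy(p, mac, 6);   p += 6;        /* sender hardware address        */
      memset(p, 0, 4);     p += 4;        /* sender protocol address        */
      memcpy(p, mac, 6);   p += 6;        /* target hardware address        */
      memset(p, 0, 4);     p += 4;        /* target protocol address        */

      return (size_t)(p - buf);           /* 42 bytes on the wire           */
  }

  /* Stand-in for placing a frame on the guest's receive virtqueue. */
  static void inject_to_guest(const uint8_t *frame, size_t len)
  {
      printf("injecting %zu-byte RARP (EtherType %02x%02x) into the RX queue\n",
             len, frame[12], frame[13]);
  }

  /* Called when the kick eventfd fires, i.e. the guest ran virtqueue_kick(). */
  static void handle_kick(int kick_fd, bool *rarp_pending, const uint8_t mac[6])
  {
      uint64_t cnt;

      if (read(kick_fd, &cnt, sizeof(cnt)) != sizeof(cnt))
          return;

      if (*rarp_pending) {
          uint8_t frame[42];
          size_t len = build_rarp(mac, frame);
          inject_to_guest(frame, len);
          *rarp_pending = false;
      }
      /* ... normal datapath processing of the queue would follow ... */
  }

  int main(void)
  {
      const uint8_t mac[6] = { 0x52, 0x54, 0x00, 0x12, 0x34, 0x56 };
      bool rarp_pending = true;           /* set once migration has completed */
      uint64_t one = 1;
      int kick_fd = eventfd(0, 0);

      write(kick_fd, &one, sizeof(one));  /* simulate the guest's kick */
      handle_kick(kick_fd, &rarp_pending, mac);
      close(kick_fd);
      return 0;
  }
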
> >> > 
> >> > A question here is: does the vhost-user code pass the status to the
> >> > backend? If not, how can a userspace backend detect DRIVER_OK?
> > Sorry, I must have been unclear.
> > The vhost core calls VHOST_NET_SET_BACKEND on DRIVER_OK.
> > Unfortunately vhost-user currently translates it to VHOST_USER_NONE.
> 
> Looks like VHOST_NET_SET_BACKEND is only used for the tap backend.
> 
> > As a workaround, I think kicking the ioeventfds once you get
> > VHOST_NET_SET_BACKEND will work.
> 
> Maybe just an eventfd_set() in vhost_net_start(). But is this
> "workaround" elegant enough to be documented? Would it be better to do
> this explicitly with a new feature?
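
A rough sketch of what that could look like, assuming the existing QEMU
helpers virtio_get_queue(), virtio_queue_get_host_notifier() and
event_notifier_set(); the wrapper name and its exact call site are
assumptions, not an actual patch (it only compiles inside the QEMU tree):

  #include "qemu/osdep.h"
  #include "hw/virtio/virtio.h"

  /* Assumed helper: poke each queue's host notifier once after a vhost-user
   * backend has been started, so the backend sees a kick even though the
   * guest, unaware of the migration, never sends one itself. */
  static void vhost_user_fake_kick(VirtIODevice *vdev, int nvqs)
  {
      int i;

      for (i = 0; i < nvqs; i++) {
          VirtQueue *vq = virtio_get_queue(vdev, i);

          /* Signal the ioeventfd exactly as a real guest kick would. */
          event_notifier_set(virtio_queue_get_host_notifier(vq));
      }
  }
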

If you are going to do this anyway, there are a couple of other changes
we should make; in particular, we need to decide what to do with the
control vq.

-- 
MST


