From: Eugenio Perez Martin
Subject: Re: [PATCH for-9.0 v3] vdpa-dev: Fix initialisation order to restore VDUSE compatibility
Date: Mon, 18 Mar 2024 20:27:31 +0100

On Mon, Mar 18, 2024 at 10:02 AM Michael S. Tsirkin <mst@redhat.com> wrote:
>
> On Mon, Mar 18, 2024 at 12:31:26PM +0800, Jason Wang wrote:
> > On Fri, Mar 15, 2024 at 11:59 PM Kevin Wolf <kwolf@redhat.com> wrote:
> > >
> > > VDUSE requires that virtqueues are first enabled before the DRIVER_OK
> > > status flag is set; with the current API of the kernel module, it is
> > > impossible to enable the opposite order in our block export code because
> > > userspace is not notified when a virtqueue is enabled.
> > >
> > > This requirement also matches the normal initialisation order as done by
> > > the generic vhost code in QEMU. However, commit 6c482547 accidentally
> > > changed the order for vdpa-dev and broke access to VDUSE devices with
> > > this.
> > >
> > > This changes vdpa-dev to use the normal order again and use the standard
> > > vhost callback .vhost_set_vring_enable for this. VDUSE devices can be
> > > used with vdpa-dev again after this fix.
> > >
> > > vhost_net intentionally avoided enabling the vrings for vdpa and does
> > > this manually later while it does enable them for other vhost backends.
> > > Reflect this in the vhost_net code and return early for vdpa, so that
> > > the behaviour doesn't change for this device.
> > >
> > > Cc: qemu-stable@nongnu.org
> > > Fixes: 6c4825476a4351530bcac17abab72295b75ffe98
> > > Signed-off-by: Kevin Wolf <kwolf@redhat.com>
> > > ---
> > > v2:
> > > - Actually make use of the @enable parameter
> > > - Change vhost_net to preserve the current behaviour
> > >
> > > v3:
> > > - Updated trace point [Stefano]
> > > - Fixed typo in comment [Stefano]
> > >
> > >  hw/net/vhost_net.c     | 10 ++++++++++
> > >  hw/virtio/vdpa-dev.c   |  5 +----
> > >  hw/virtio/vhost-vdpa.c | 29 ++++++++++++++++++++++++++---
> > >  hw/virtio/vhost.c      |  8 +++++++-
> > >  hw/virtio/trace-events |  2 +-
> > >  5 files changed, 45 insertions(+), 9 deletions(-)
> > >
> > > diff --git a/hw/net/vhost_net.c b/hw/net/vhost_net.c
> > > index e8e1661646..fd1a93701a 100644
> > > --- a/hw/net/vhost_net.c
> > > +++ b/hw/net/vhost_net.c
> > > @@ -541,6 +541,16 @@ int vhost_set_vring_enable(NetClientState *nc, int enable)
> > >      VHostNetState *net = get_vhost_net(nc);
> > >      const VhostOps *vhost_ops = net->dev.vhost_ops;
> > >
> > > +    /*
> > > +     * vhost-vdpa network devices need to enable dataplane virtqueues after
> > > +     * DRIVER_OK, so they can recover device state before starting dataplane.
> > > +     * Because of that, we don't enable virtqueues here and leave it to
> > > +     * net/vhost-vdpa.c.
> > > +     */
> > > +    if (nc->info->type == NET_CLIENT_DRIVER_VHOST_VDPA) {
> > > +        return 0;
> > > +    }
> >
> > I think we need some inputs from Eugenio, this is only needed for
> > shadow virtqueue during live migration but not other cases.
> >
> > Thanks
>
>
> Yes I think we had a backend flag for this, right? Eugenio can you
> comment please?
>

We have the VHOST_BACKEND_F_ENABLE_AFTER_DRIVER_OK backend flag,
right. If the backend does not offer it, it is better to enable all
the queues here and add a migration blocker in net/vhost-vdpa.c.

So the check should be:
nc->info->type == VHOST_VDPA && (backend_features &
VHOST_BACKEND_F_ENABLE_AFTER_DRIVER_OK).
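
A minimal sketch of how that check could look in hw/net/vhost_net.c (just to
illustrate the idea; the exact field used to read the acked backend features,
here net->dev.backend_features, is an assumption and not the final patch):

    /* Sketch only: gate the early return on the backend feature. */
    if (nc->info->type == NET_CLIENT_DRIVER_VHOST_VDPA &&
        (net->dev.backend_features &
         BIT_ULL(VHOST_BACKEND_F_ENABLE_AFTER_DRIVER_OK))) {
        /* The device can enable vrings after DRIVER_OK, so leave it to
         * net/vhost-vdpa.c. */
        return 0;
    }
    /* Otherwise fall through and enable the vrings here, like for the
     * other vhost backends. */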

I can manage to add the migration blocker on top of this patch.
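
Something along these lines in net/vhost-vdpa.c, as a rough sketch (the state
field names and the error message are placeholders, not the final code):

    if (!(s->backend_features &
          BIT_ULL(VHOST_BACKEND_F_ENABLE_AFTER_DRIVER_OK))) {
        /* Sketch: block migration when the backend cannot enable vrings
         * after DRIVER_OK, since the SVQ-based restore path relies on
         * that ordering. */
        error_setg(&s->migration_blocker,
                   "vhost-vdpa: backend lacks "
                   "VHOST_BACKEND_F_ENABLE_AFTER_DRIVER_OK");
        migrate_add_blocker(&s->migration_blocker, errp);
    }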

Thanks!



