qemu-stable

Re: [PATCH] vdpa: set old virtio status at cvq isolation probing end


From: Jason Wang
Subject: Re: [PATCH] vdpa: set old virtio status at cvq isolation probing end
Date: Mon, 31 Jul 2023 16:42:12 +0800

On Mon, Jul 31, 2023 at 4:05 PM Eugenio Perez Martin
<eperezma@redhat.com> wrote:
>
> On Mon, Jul 31, 2023 at 8:36 AM Jason Wang <jasowang@redhat.com> wrote:
> >
> > On Wed, Jul 26, 2023 at 2:27 PM Eugenio Perez Martin
> > <eperezma@redhat.com> wrote:
> > >
> > > On Wed, Jul 26, 2023 at 4:07 AM Jason Wang <jasowang@redhat.com> wrote:
> > > >
> > > > On Wed, Jul 26, 2023 at 2:21 AM Eugenio Pérez <eperezma@redhat.com> 
> > > > wrote:
> > > > >
> > > > > The device already has a virtio status set by vhost_vdpa_init by the
> > > > > time vhost_vdpa_probe_cvq_isolation is called. vhost_vdpa_init sets
> > > > > S_ACKNOWLEDGE and S_DRIVER, so it is invalid to just reset it.
> > > > >
> > > > > It is invalid to start the device after that, but all devices seem
> > > > > to be fine with it.  Fix qemu so it follows the virtio start
> > > > > procedure.
> > > > >
> > > > > Fixes: 152128d64697 ("vdpa: move CVQ isolation check to net_init_vhost_vdpa")
> > > > > Reported-by: Dragos Tatulea <dtatulea@nvidia.com>
> > > > > Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
> > > > > ---
> > > > >  net/vhost-vdpa.c | 2 ++
> > > > >  1 file changed, 2 insertions(+)
> > > > >
> > > > > diff --git a/net/vhost-vdpa.c b/net/vhost-vdpa.c
> > > > > index 9795306742..d7e2b714b4 100644
> > > > > --- a/net/vhost-vdpa.c
> > > > > +++ b/net/vhost-vdpa.c
> > > > > @@ -1333,6 +1333,8 @@ static int vhost_vdpa_probe_cvq_isolation(int device_fd, uint64_t features,
> > > > >  out:
> > > > >      status = 0;
> > > > >      ioctl(device_fd, VHOST_VDPA_SET_STATUS, &status);
> > > > > +    status = VIRTIO_CONFIG_S_ACKNOWLEDGE | VIRTIO_CONFIG_S_DRIVER;
> > > > > +    ioctl(device_fd, VHOST_VDPA_SET_STATUS, &status);
> > > >
> > > > So if we fail after FEATURES_OK, this basically clears that bit. Spec
> > > > doesn't say it can or not, I wonder if a reset is better?
> > > >
> > >
> > > I don't follow this, the reset is just above the added code, isn't it?
> >
> > I meant for error path:
> >
> > E.g:
> >     uint8_t status = VIRTIO_CONFIG_S_ACKNOWLEDGE |
> >                      VIRTIO_CONFIG_S_DRIVER |
> >                      VIRTIO_CONFIG_S_FEATURES_OK;
> > ...
> >     r = ioctl(device_fd, VHOST_VDPA_SET_STATUS, &status);
> > ....
> >         if (cvq_group != -ENOTSUP) {
> >             r = cvq_group;
> >             goto out;
> >         }
> >
> > out:
> >     status = VIRTIO_CONFIG_S_ACKNOWLEDGE | VIRTIO_CONFIG_S_DRIVER;
> >     ioctl(device_fd, VHOST_VDPA_SET_STATUS, &status);
> >
> > We're basically clearing FEATURES_OK?
> >
>
> Yes, it is the state that the previous function (vhost_vdpa_init) set. We
> need to leave it that way, whether or not the backend supports cvq
> isolation, and also in the case of an error. Not doing so makes
> vhost_dev_start (and vhost_vdpa_set_features) set the features before
> setting VIRTIO_CONFIG_S_ACKNOWLEDGE | VIRTIO_CONFIG_S_DRIVER.
> Otherwise, the guest can (and does) access the config space before
> _S_ACKNOWLEDGE | _S_DRIVER are set.

I'm not sure whether the spec allows it or not (I meant clearing
FEATURES_OK without a reset). Or maybe we need a reset here?

Thanks

>
>
> > >
> > > > Btw, spec requires a read of status after setting FEATURES_OK, this
> > > > seems to be missed in the current code.
> > > >
> > >
> > > I'm ok with that, but this patch does not touch that part.
> > >
> > > To fix this properly we should:
> > > - Expose vhost_vdpa_set_dev_features_fd as we did in previous versions
> > > of the series that added vhost_vdpa_probe_cvq_isolation [1].
> > > - Get the status after vhost_vdpa_add_status, so both the vhost start
> > > code and this function follow the standard properly.
> > >
> > > Is it ok to do these on top of this patch?
> >
> > Fine.
> >
> > Thanks
> >
> > >
> > > Thanks!
> > >
> > > [1] 
> > > https://lore.kernel.org/qemu-devel/20230509154435.1410162-4-eperezma@redhat.com/
> > >
> > >
> > > > Thanks
> > > >
> > > > >      return r;
> > > > >  }
> > > > >
> > > > > --
> > > > > 2.39.3
> > > > >
> > > >
> > >
> >
>
