qemu-devel
Re: [PATCH v2 3/4] vdpa: handle VIRTIO_NET_CTRL_ANNOUNCE in vhost_vdpa_net_handle_ctrl_avail


From: Jason Wang
Subject: Re: [PATCH v2 3/4] vdpa: handle VIRTIO_NET_CTRL_ANNOUNCE in vhost_vdpa_net_handle_ctrl_avail
Date: Mon, 5 Dec 2022 12:27:15 +0800

On Thu, Dec 1, 2022 at 5:29 PM Eugenio Perez Martin <eperezma@redhat.com> wrote:
>
> On Thu, Dec 1, 2022 at 9:39 AM Jason Wang <jasowang@redhat.com> wrote:
> >
> > On Wed, Nov 30, 2022 at 3:07 PM Eugenio Perez Martin
> > <eperezma@redhat.com> wrote:
> > >
> > > On Wed, Nov 30, 2022 at 8:02 AM Jason Wang <jasowang@redhat.com> wrote:
> > > >
> > > > On Fri, Nov 25, 2022 at 1:33 AM Eugenio Pérez <eperezma@redhat.com> 
> > > > wrote:
> > > > >
> > > > > Since this capability is emulated by qemu, the shadowed CVQ cannot
> > > > > forward it to the device. Process the command entirely within qemu.
> > > > >
> > > > > Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
> > > > > ---
> > > > >  net/vhost-vdpa.c | 15 ++++++++++++---
> > > > >  1 file changed, 12 insertions(+), 3 deletions(-)
> > > > >
> > > > > diff --git a/net/vhost-vdpa.c b/net/vhost-vdpa.c
> > > > > index 2b4b85d8f8..8172aa8449 100644
> > > > > --- a/net/vhost-vdpa.c
> > > > > +++ b/net/vhost-vdpa.c
> > > > > @@ -489,9 +489,18 @@ static int vhost_vdpa_net_handle_ctrl_avail(VhostShadowVirtqueue *svq,
> > > > >      out.iov_len = iov_to_buf(elem->out_sg, elem->out_num, 0,
> > > > >                               s->cvq_cmd_out_buffer,
> > > > >                               vhost_vdpa_net_cvq_cmd_len());
> > > > > -    dev_written = vhost_vdpa_net_cvq_add(s, out.iov_len, sizeof(status));
> > > > > -    if (unlikely(dev_written < 0)) {
> > > > > -        goto out;
> > > > > +    if (*(uint8_t *)s->cvq_cmd_out_buffer == VIRTIO_NET_CTRL_ANNOUNCE) {
> > > > > +        /*
> > > > > +         * Guest announce capability is emulated by qemu, so dont forward to
> > > >
> > > > s/dont/don't/
> > > >
> > >
> > > I'll correct it, thanks!
> > >
> > > > > +         * the device.
> > > > > +         */
> > > > > +        dev_written = sizeof(status);
> > > > > +        *s->status = VIRTIO_NET_OK;
> > > >
> > > > I wonder if we should avoid negotiating ANNOUNCE with vDPA parents if
> > > > we do this?
> > > >
> > >
> > > I can re-check, but the next patch should avoid it.
> >
> > Kind of, it makes sure the guest can always see _F_ANNOUNCE. But does it
> > prevent _F_ANNOUNCE from being negotiated?
> >
>
> It should go like this:
> * vhost_net_ack_features calls vhost_ack_features with feature_bits =
> vdpa_feature_bits and features = the guest-acked features.
> vhost_ack_features stores in hdev->acked_features only the features
> that satisfy features & bit_mask, so it will not store _F_ANNOUNCE.
> * vhost_vdpa_set_features is called from vhost_dev_set_features with
> features = dev->acked_features. Both functions can add features by
> themselves (VHOST_F_LOG_ALL, VIRTIO_F_IOMMU_PLATFORM), but not
> _F_ANNOUNCE.
>
> Still untested.

Ok.
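
To make the masking step concrete, it is roughly the following (a
simplified, self-contained sketch modeled on vhost_ack_features() in
hw/virtio/vhost.c; the *_sketch names are placeholders and the real code
differs in detail). A bit that is not listed in the backend's feature_bits
table, which per your explanation is the case for _F_ANNOUNCE in
vdpa_feature_bits, never lands in acked_features and so is never sent to
the parent:

#include <stdint.h>

/* Sentinel terminating a feature_bits table (0xff in qemu). */
#define VHOST_INVALID_FEATURE_BIT ((int)0xff)

struct vhost_dev_sketch {
    uint64_t acked_features;
};

static void ack_features_sketch(struct vhost_dev_sketch *hdev,
                                const int *feature_bits, uint64_t features)
{
    /* Keep only the bits present in both the table and the guest-acked set. */
    for (const int *bit = feature_bits;
         *bit != VHOST_INVALID_FEATURE_BIT; bit++) {
        uint64_t bit_mask = 1ULL << *bit;
        if (features & bit_mask) {
            hdev->acked_features |= bit_mask;
        }
    }
}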

>
> > > Even if
> > > negotiated, the parent should never set the announce status bit, since
> > > we never tell the device that it is a destination device.
> >
> > That's the point: do we have such a guarantee? Or, I wonder, is there
> > any parent that supports _F_ANNOUNCE? If yes, how is it supposed to
> > work?
> >
>
> At the moment it cannot work, since there is no support for config
> interrupts from the device. Even with config interrupt support,
> something external to qemu would have to make the device set the status
> bit, since the current migration protocol makes no distinction between
> being a migration destination and starting the device from scratch.
> Unless it sets the bit maliciously or by mistake.
>
> Just for completeness, the current method works without needing vdpa
> device config interrupt support, because the capability is 100% emulated
> in qemu, which can inject config interrupts itself.
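
To spell out that emulated path, it is roughly the two halves below (a
rough, self-contained sketch, not the actual hw/net/virtio-net.c code; the
*_sketch names are placeholders, the constants come from the virtio spec):
qemu sets the announce bit in the emulated device's config status and
injects the config interrupt itself, and later completes the guest's
ANNOUNCE ack on the control virtqueue, which is the command the patch
handles in the shadow CVQ instead of forwarding it.

#include <stdint.h>

#define VIRTIO_NET_S_ANNOUNCE        2   /* "announce needed" config status bit */
#define VIRTIO_NET_CTRL_ANNOUNCE     3   /* control command class */
#define VIRTIO_NET_CTRL_ANNOUNCE_ACK 0   /* guest acks the announce */
#define VIRTIO_NET_OK                0
#define VIRTIO_NET_ERR               1

struct virtio_net_sketch {
    uint16_t status;   /* mirrored into the virtio-net config space */
};

/* Migration destination: set the bit and notify the guest through a config
 * interrupt injected by qemu on the emulated device; no vDPA parent config
 * interrupt is involved. */
static void announce_start_sketch(struct virtio_net_sketch *n)
{
    n->status |= VIRTIO_NET_S_ANNOUNCE;
    /* virtio_notify_config(vdev) in qemu proper. */
}

/* Guest replies on the control virtqueue; qemu clears the bit and returns
 * VIRTIO_NET_OK without involving the vDPA device. */
static uint8_t announce_ack_sketch(struct virtio_net_sketch *n,
                                   uint8_t ctrl_class, uint8_t ctrl_cmd)
{
    if (ctrl_class == VIRTIO_NET_CTRL_ANNOUNCE &&
        ctrl_cmd == VIRTIO_NET_CTRL_ANNOUNCE_ACK) {
        n->status &= ~VIRTIO_NET_S_ANNOUNCE;
        return VIRTIO_NET_OK;
    }
    return VIRTIO_NET_ERR;
}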

Ok, rethinking this feature, I think I can find one use case for
_F_ANNOUNCE: the migration is done entirely through the vDPA device
(DPU) itself.

I think we can go forward and revisit this issue in the future.

Thanks

>
> Thanks!
>



