From: Eugenio Perez Martin
Subject: Re: [RFC PATCH v8 03/21] vdpa: control virtqueue support on shadow virtqueue
Date: Wed, 8 Jun 2022 18:38:20 +0200

On Tue, Jun 7, 2022 at 8:05 AM Jason Wang <jasowang@redhat.com> wrote:
>
>
> On 2022/5/20 03:12, Eugenio Pérez wrote:
> > Introduce the control virtqueue support for vDPA shadow virtqueue. This
> > is needed for advanced networking features like multiqueue.
> >
> > To demonstrate command handling, VIRTIO_NET_F_CTRL_MACADDR and
> > VIRTIO_NET_CTRL_MQ are implemented. If the vDPA device is started with
> > SVQ support and the virtio-net driver changes the MAC or the number of
> > queues, the virtio-net device model will be updated with the new values.
> >
> > Other CVQ commands could be added here straightforwardly, but they
> > have not been tested.
> >
> > Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
> > ---
> >   net/vhost-vdpa.c | 44 ++++++++++++++++++++++++++++++++++++++++++++
> >   1 file changed, 44 insertions(+)
> >
> > diff --git a/net/vhost-vdpa.c b/net/vhost-vdpa.c
> > index df1e69ee72..ef12fc284c 100644
> > --- a/net/vhost-vdpa.c
> > +++ b/net/vhost-vdpa.c
> > @@ -11,6 +11,7 @@
> >
> >   #include "qemu/osdep.h"
> >   #include "clients.h"
> > +#include "hw/virtio/virtio-net.h"
> >   #include "net/vhost_net.h"
> >   #include "net/vhost-vdpa.h"
> >   #include "hw/virtio/vhost-vdpa.h"
> > @@ -187,6 +188,46 @@ static NetClientInfo net_vhost_vdpa_info = {
> >           .check_peer_type = vhost_vdpa_check_peer_type,
> >   };
> >
> > +static void vhost_vdpa_net_handle_ctrl(VirtIODevice *vdev,
> > +                                       const VirtQueueElement *elem)
> > +{
> > +    struct virtio_net_ctrl_hdr ctrl;
> > +    virtio_net_ctrl_ack status = VIRTIO_NET_ERR;
> > +    size_t s;
> > +    struct iovec in = {
> > +        .iov_base = &status,
> > +        .iov_len = sizeof(status),
> > +    };
> > +
> > +    s = iov_to_buf(elem->out_sg, elem->out_num, 0, &ctrl, sizeof(ctrl.class));
> > +    if (s != sizeof(ctrl.class)) {
> > +        return;
> > +    }
> > +
> > +    switch (ctrl.class) {
> > +    case VIRTIO_NET_CTRL_MAC_ADDR_SET:
> > +    case VIRTIO_NET_CTRL_MQ:
> > +        break;
> > +    default:
> > +        return;
> > +    };
>
>
> I think we can probably remove the whitelist here since it is expected
> to work for any kind of command?
>

SVQ is expected to inject virtio device status at startup (specifically,
at live migration destination startup), and that injection code is
specific to each command, so the whitelist only admits the commands that
have actually been implemented and tested.
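
For example (a rough sketch only, with a hypothetical
vhost_vdpa_net_load_mac() helper that is not in this series), restoring
the MAC at the destination means rebuilding the whole CVQ command from
the device model state, which only command-specific code can do:

static void vhost_vdpa_net_load_mac(const VirtIONet *n, struct iovec out[2])
{
    /* Control header selecting the MAC_ADDR_SET command. */
    static struct virtio_net_ctrl_hdr hdr = {
        .class = VIRTIO_NET_CTRL_MAC,
        .cmd = VIRTIO_NET_CTRL_MAC_ADDR_SET,
    };

    /* First out descriptor: the command header. */
    out[0].iov_base = &hdr;
    out[0].iov_len = sizeof(hdr);

    /* Second out descriptor: the payload, rebuilt from the model. */
    out[1].iov_base = (void *)n->mac;
    out[1].iov_len = sizeof(n->mac);

    /* The caller would then inject these buffers through SVQ. */
}

An MQ command would need a different payload (struct virtio_net_ctrl_mq)
built the same way, so each command class has to be handled explicitly.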

Thanks!

> Thanks
>
>
> > +
> > +    s = iov_to_buf(elem->in_sg, elem->in_num, 0, &status, sizeof(status));
> > +    if (s != sizeof(status) || status != VIRTIO_NET_OK) {
> > +        return;
> > +    }
> > +
> > +    status = VIRTIO_NET_ERR;
> > +    virtio_net_handle_ctrl_iov(vdev, &in, 1, elem->out_sg, elem->out_num);
> > +    if (status != VIRTIO_NET_OK) {
> > +        error_report("Bad CVQ processing in model");
> > +    }
> > +}
> > +
> > +static const VhostShadowVirtqueueOps vhost_vdpa_net_svq_ops = {
> > +    .used_elem_handler = vhost_vdpa_net_handle_ctrl,
> > +};
> > +
> >   static NetClientState *net_vhost_vdpa_init(NetClientState *peer,
> >                                              const char *device,
> >                                              const char *name,
> > @@ -211,6 +252,9 @@ static NetClientState *net_vhost_vdpa_init(NetClientState *peer,
> >
> >       s->vhost_vdpa.device_fd = vdpa_device_fd;
> >       s->vhost_vdpa.index = queue_pair_index;
> > +    if (!is_datapath) {
> > +        s->vhost_vdpa.shadow_vq_ops = &vhost_vdpa_net_svq_ops;
> > +    }
> > +    ret = vhost_vdpa_add(nc, (void *)&s->vhost_vdpa, queue_pair_index, nvqs);
> >       if (ret) {
> >           qemu_del_net_client(nc);
>



