Re: [RFC v2 1/3] vhost: Introduce packed vq and add buffer elements


From: Eugenio Perez Martin
Subject: Re: [RFC v2 1/3] vhost: Introduce packed vq and add buffer elements
Date: Mon, 29 Jul 2024 10:21:27 +0200

On Sun, Jul 28, 2024 at 7:37 PM Sahil <icegambit91@gmail.com> wrote:
>
> Hi,
>
> On Friday, July 26, 2024 7:18:28 PM GMT+5:30 Eugenio Perez Martin wrote:
> > On Fri, Jul 26, 2024 at 11:58 AM Sahil Siddiq <icegambit91@gmail.com> wrote:
> > > This is the first patch in a series to add support for packed
> > > virtqueues in vhost_shadow_virtqueue. This patch implements the
> > > insertion of available buffers in the descriptor area. It takes
> > > into account descriptor chains, but does not consider indirect
> > > descriptors.
> > >
> > > Signed-off-by: Sahil Siddiq <sahilcdq@proton.me>
> > > ---
> > > Changes v1 -> v2:
> > > * Split commit from RFC v1 into two commits.
> > > * vhost-shadow-virtqueue.c
> > >
> > >   (vhost_svq_add_packed):
> > >   - Merge with "vhost_svq_vring_write_descs_packed()"
> > >   - Remove "num == 0" check
> > >
> > >  hw/virtio/vhost-shadow-virtqueue.c | 93 +++++++++++++++++++++++++++++-
> > >  1 file changed, 92 insertions(+), 1 deletion(-)
> > >
> > > diff --git a/hw/virtio/vhost-shadow-virtqueue.c b/hw/virtio/vhost-shadow-virtqueue.c
> > > index fc5f408f77..c7b7e0c477 100644
> > > --- a/hw/virtio/vhost-shadow-virtqueue.c
> > > +++ b/hw/virtio/vhost-shadow-virtqueue.c
> > > @@ -217,6 +217,91 @@ static bool vhost_svq_add_split(VhostShadowVirtqueue *svq,
> > >      return true;
> > >
> > >  }
> > >
> > > +static bool vhost_svq_add_packed(VhostShadowVirtqueue *svq,
> > > +                                const struct iovec *out_sg, size_t out_num,
> > > +                                const struct iovec *in_sg, size_t in_num,
> > > +                                unsigned *head)
> > > +{
> > > +    bool ok;
> > > +    uint16_t head_flags = 0;
> > > +    g_autofree hwaddr *sgs = g_new(hwaddr, out_num + in_num);
> > > +
> > > +    *head = svq->vring_packed.next_avail_idx;
> > > +
> > > +    /* We need some descriptors here */
> > > +    if (unlikely(!out_num && !in_num)) {
> > > +        qemu_log_mask(LOG_GUEST_ERROR,
> > > +                      "Guest provided element with no descriptors");
> > > +        return false;
> > > +    }
> > > +
> > > +    uint16_t id, curr, i;
> > > +    unsigned n;
> > > +    struct vring_packed_desc *descs = svq->vring_packed.vring.desc;
> > > +
> > > +    i = *head;
> > > +    id = svq->free_head;
> > > +    curr = id;
> > > +
> > > +    size_t num = out_num + in_num;
> > > +
> > > +    ok = vhost_svq_translate_addr(svq, sgs, out_sg, out_num);
> > > +    if (unlikely(!ok)) {
> > > +        return false;
> > > +    }
> > > +
> > > +    ok = vhost_svq_translate_addr(svq, sgs + out_num, in_sg, in_num);
> > > +    if (unlikely(!ok)) {
> > > +        return false;
> > > +    }
> > > +
> >
> > (sorry I missed this from the RFC v1) I think all of the above should
> > be in the caller, shouldn't it? It is duplicated with split.
>
> I don't think this will be straightforward. While they perform the same
> logical step in both cases, their implementation is a little different.
> For example, the "sgs" pointer is created a little differently in both cases.

Do you mean because of MAX() vs in_num+out_num? It is ok to convert both
to the latter.

> The parameters to "vhost_svq_translate_addr" are also a little different.
> I think if they are moved to the caller, they will be in both
> "svq->is_packed" branches (in "vhost_svq_add").
>

I don't see any difference apart from calling it with the in and out sgs
separately versus calling it once for the whole array. Am I missing something?
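
Purely as an illustration, a rough sketch of what hoisting the translation
into vhost_svq_add() could look like (this is not the posted patch; it
assumes both helpers are reworked to take a pre-translated "sgs" array,
reuses the existing QEMU helpers, and omits the rest of the function's
bookkeeping):

int vhost_svq_add(VhostShadowVirtqueue *svq, const struct iovec *out_sg,
                  size_t out_num, const struct iovec *in_sg, size_t in_num,
                  VirtQueueElement *elem)
{
    unsigned qemu_head;
    size_t num = out_num + in_num;   /* same allocation size for both layouts */
    g_autofree hwaddr *sgs = g_new(hwaddr, num);
    bool ok;

    if (unlikely(num == 0)) {
        qemu_log_mask(LOG_GUEST_ERROR,
                      "Guest provided element with no descriptors");
        return -EINVAL;
    }

    /* Translate guest addresses once, in the caller */
    ok = vhost_svq_translate_addr(svq, sgs, out_sg, out_num) &&
         vhost_svq_translate_addr(svq, sgs + out_num, in_sg, in_num);
    if (unlikely(!ok)) {
        return -EINVAL;
    }

    /* The helpers would now only write descriptors, no translation inside */
    if (svq->is_packed) {
        ok = vhost_svq_add_packed(svq, sgs, out_sg, out_num,
                                  in_sg, in_num, &qemu_head);
    } else {
        ok = vhost_svq_add_split(svq, sgs, out_sg, out_num,
                                 in_sg, in_num, &qemu_head);
    }

    /* Remaining bookkeeping (tracking elem, kicking the device) omitted */
    return ok ? 0 : -EINVAL;
}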

> > Also, declarations should be at the beginning of blocks per QEMU
> > coding style [1].
>
> Sorry, I missed this. I'll rectify this.
>

No worries!

You can run scripts/checkpatch.pl in the QEMU tree on the next series; it
should catch many of these small issues.
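
As a concrete illustration of the style point (a sketch only, with the
function body elided; not necessarily how the next revision will look),
the declarations in vhost_svq_add_packed() would all move to the top of
the block:

static bool vhost_svq_add_packed(VhostShadowVirtqueue *svq,
                                 const struct iovec *out_sg, size_t out_num,
                                 const struct iovec *in_sg, size_t in_num,
                                 unsigned *head)
{
    /* All declarations at the beginning of the block, per QEMU coding style */
    bool ok;
    uint16_t head_flags = 0;
    uint16_t id, curr, i;
    unsigned n;
    size_t num = out_num + in_num;
    struct vring_packed_desc *descs = svq->vring_packed.vring.desc;
    g_autofree hwaddr *sgs = g_new(hwaddr, out_num + in_num);

    /* ... function body as in the patch above, unchanged ... */
}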

Thanks!



