qemu-devel


From: Jason Wang
Subject: Re: [PATCH for 8.0 v8 06/12] vdpa: extract vhost_vdpa_svq_allocate_iova_tree
Date: Mon, 5 Dec 2022 12:24:13 +0800

On Thu, Dec 1, 2022 at 5:50 PM Eugenio Perez Martin <eperezma@redhat.com> wrote:
>
> On Thu, Dec 1, 2022 at 9:45 AM Jason Wang <jasowang@redhat.com> wrote:
> >
> > On Wed, Nov 30, 2022 at 3:40 PM Eugenio Perez Martin
> > <eperezma@redhat.com> wrote:
> > >
> > > On Wed, Nov 30, 2022 at 7:43 AM Jason Wang <jasowang@redhat.com> wrote:
> > > >
> > > > On Thu, Nov 24, 2022 at 11:52 PM Eugenio Pérez <eperezma@redhat.com> 
> > > > wrote:
> > > > >
> > > > > It can be allocated either if all virtqueues must be shadowed or if
> > > > > vdpa-net detects it can shadow only cvq.
> > > > >
> > > > > Extract it into its own function so we can reuse it.
> > > > >
> > > > > Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
> > > > > ---
> > > > >  net/vhost-vdpa.c | 29 +++++++++++++++++------------
> > > > >  1 file changed, 17 insertions(+), 12 deletions(-)
> > > > >
> > > > > diff --git a/net/vhost-vdpa.c b/net/vhost-vdpa.c
> > > > > index 88e0eec5fa..9ee3bc4cd3 100644
> > > > > --- a/net/vhost-vdpa.c
> > > > > +++ b/net/vhost-vdpa.c
> > > > > @@ -240,6 +240,22 @@ static NetClientInfo net_vhost_vdpa_info = {
> > > > >          .check_peer_type = vhost_vdpa_check_peer_type,
> > > > >  };
> > > > >
> > > > > +static int vhost_vdpa_get_iova_range(int fd,
> > > > > +                                     struct vhost_vdpa_iova_range 
> > > > > *iova_range)
> > > > > +{
> > > > > +    int ret = ioctl(fd, VHOST_VDPA_GET_IOVA_RANGE, iova_range);
> > > > > +
> > > > > +    return ret < 0 ? -errno : 0;
> > > > > +}
> > > >
> > > > I don't get why this needs to be moved to net specific code.
> > > >
> > >
> > > It was already in net; this patch just extracts it into its own function.
> >
> > Ok, there's a similar function in vhost-vdpa.c:
> >
> > static void vhost_vdpa_get_iova_range(struct vhost_vdpa *v)
> > {
> >     int ret = vhost_vdpa_call(v->dev, VHOST_VDPA_GET_IOVA_RANGE,
> >                               &v->iova_range);
> >     if (ret != 0) {
> >         v->iova_range.first = 0;
> >         v->iova_range.last = UINT64_MAX;
> >     }
> >
> >     trace_vhost_vdpa_get_iova_range(v->dev, v->iova_range.first,
> >                                     v->iova_range.last);
> > }
> >
> > I think we can reuse that.
> >
>
> That's right, but I'd do the reverse: I would store iova_min and iova_max
> in VhostVDPAState and set them on the vhost_vdpa at
> net_vhost_vdpa_init. That way, we only have one ioctl call at the
> beginning instead of (#vq pairs + cvq) calls each time the
> device starts. I can send it in a new change if that looks ok to you.
>
> There are a few functions like that we could reuse in net/. Getting the
> features and the backend features are two other examples. Even if we
> don't cache them, since device initialization mandates the read, we
> could reduce code duplication that way.
>
> However, they take a vhost_dev or vhost_vdpa instead of the file
> descriptor directly. Not a big deal, but it's an extra step.
>
> What do you think?

I'm fine with this.

Thanks

>
> Thanks!
>



