From: Marc-André Lureau
Subject: Re: [Qemu-devel] [virtio-comment] Re: [PATCH] *** Vhost-pci RFC v2 ***
Date: Thu, 01 Sep 2016 13:05:22 +0000

Hi

On Thu, Sep 1, 2016 at 4:13 PM Wei Wang <address@hidden> wrote:

> On 09/01/2016 04:49 PM, Marc-André Lureau wrote:
> > Hi
> >
> > On Thu, Sep 1, 2016 at 12:19 PM Wei Wang <address@hidden> wrote:
> >
> >     On 08/31/2016 08:30 PM, Marc-André Lureau wrote:
> >
> >>     - If it could be made not pci-specific, a better name for the
> >>     device could be simply "driver": the driver of a virtio device.
> >>     Or the "slave" in vhost-user terminology - consumer of virtq. I
> >>     think you prefer to call it "backend" in general, but I find it
> >>     more confusing.
> >
> >     Not really. A virtio device has its own driver (e.g. a virtio-net
> >     driver for a virtio-net device). A vhost-pci device plays the role
> >     of a backend (just like vhost_net or vhost-user) for a virtio
> >     device. If we use the "device/driver" naming convention, the
> >     vhost-pci device is part of the "device". But I actually prefer
> >     "frontend/backend" :) If we check QEMU's
> >     docs/specs/vhost-user.txt, it also uses "backend" in its
> >     descriptions.
> >
> >
> > Yes, but it uses "backend" freely, without any definition, and possibly
> > to name different things. (At least "slave" is defined as the consumer
> > of virtqueues, but I think some people don't like to use that word.)
> >
>
> I think most people know the concept of backend/frontend; that's
> probably why it usually isn't explicitly explained in a
> doc. If you guys don't have an objection, I suggest we use it in the
> discussion :)  The goal here is to get the design finalized first. When
> it comes to the final spec wording phase, we can decide which
> term is more appropriate.
>

"backend" is too broad for me. Instead I would stick to something closer to
what we want to name and define. If it's the consumer of virtqueues, then
why not call it that.


> > Have you thought about making the device not PCI-specific? I don't
> > know much about mmio devices or s/390, but if devices can hotplug
> > their own memory (I believe mmio can), then it should be possible to
> > define a sufficiently generic device.
>
> Not yet. I think the main difference would be the way to map the
> frontend VM's memory (in our case, we use a BAR). Other things should be
> generic.
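>
> To make that concrete, a minimal sketch of the PCI-specific part as I
> picture it (the BAR index, struct, and field names below are
> illustrative only, not part of the RFC):
>
>     #include <linux/io.h>
>     #include <linux/pci.h>
>
>     struct vhost_pci_dev {                /* hypothetical driver state */
>             void __iomem *frontend_base;  /* frontend VM memory */
>             resource_size_t frontend_len;
>     };
>
>     /* Map the frontend VM's memory, exposed through a BAR, exactly
>      * once at setup time. Everything above this call could stay
>      * transport-independent. */
>     static int vhost_pci_map_frontend(struct pci_dev *pdev,
>                                       struct vhost_pci_dev *vp)
>     {
>             vp->frontend_len  = pci_resource_len(pdev, 2);
>             vp->frontend_base = pci_iomap(pdev, 2, 0); /* whole BAR */
>             return vp->frontend_base ? 0 : -ENOMEM;
>     }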
>

I hope some more knowledgeable people will chime in.


>
> >
> >>     - Why is it required or beneficial to support multiple "frontend"
> >>     devices over the same "vhost-pci" device? It could simplify
> >>     things if it were a single device. If necessary, that could also
> >>     be interesting as a vhost-user extension.
> >
> >     We call it "multiple backend functionalities" (e.g. vhost-pci-net,
> >     vhost-pci-scsi, ...). A vhost-pci driver contains multiple such
> >     backend functionalities, because in this way they can reuse
> >     (share) the same memory mapping. To be more precise, a vhost-pci
> >     device supplies the memory of a frontend VM, and all the backend
> >     functionalities need to access the same frontend VM memory, so we
> >     consolidate them into one vhost-pci driver to use one vhost-pci
> >     device.
> >
> >
> > That's what I imagined. Do you have a use case for that?
>
> Currently, we only have the network use cases. I think we can design it
> that way (multiple backend functionalities), which is more generic (not
> just limited to network usage). When implementing it, we can first have
> the network backend functionality (i.e. vhost-pci-net) implemented. In
> the future, if people are interested in other backend functionalities, I
> think it should be easy to add them.
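>
> Structurally, I imagine something like the rough sketch below (the type
> and field names are made up, extending the hypothetical vhost_pci_dev
> from my earlier sketch with a funcs list):
>
>     #include <linux/list.h>
>     #include <linux/types.h>
>
>     /* One backend functionality (vhost-pci-net, vhost-pci-scsi, ...).
>      * All of them share the single frontend memory mapping held by
>      * the vhost_pci_dev they are attached to. */
>     struct vhost_pci_func {
>             struct list_head node;  /* linked into vhost_pci_dev->funcs */
>             u32 device_type;        /* virtio ID: 1 = net, 8 = scsi, ... */
>             /* per-functionality state (virtqueues, ...) */
>     };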
>

My question is not about the support of various kinds of devices (that is
clearly a worthy goal to me) but about supporting several frontend/provider
devices simultaneously on the same vhost-pci device: is this required or
beneficial? I think it would simplify things if it were 1-1 instead; I would
like to understand why you propose a different design.


>
> >
> > Given that it's in a VM (no caching issues?), how is it a problem to
> > map the same memory multiple times? Is there a memory limit?
> >
>
> I need to explain this a little bit more :)  - the backend VM doesn't
> need to map the same memory multiple times. It maps the entire memory of
> a frontend VM using a vhost-pci device (it's a one-time mapping that
> happens at the setup phase). Those backend functionalities reside in
> the same vhost-pci driver, so the BAR is ioremap()-ed only once, by the
> vhost-pci driver. The backend functionalities are not all created
> in the driver probe() function. A backend functionality is created when
> the vhost-pci driver receives a controlq message asking to create one
> (the message indicates the type - net, scsi, console etc.).
>
> I haven't seen any caching issues so far.
>
> IIRC, the memory mapping has a limit (512GB or 1TB), but that should be
> enough (a guest usually has a much smaller memory size).
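>
> In code terms, the controlq handler could look roughly like this (the
> message layout and the create helpers are hypothetical, for
> illustration only):
>
>     struct vhost_pci_ctrlq_msg {
>             __le32 op;           /* e.g. a CREATE_FUNC request */
>             __le32 device_type;  /* virtio ID: 1 = net, 3 = console,
>                                     8 = scsi */
>     };
>
>     void vhost_pci_net_create(struct vhost_pci_dev *vp);  /* hypothetical */
>     void vhost_pci_scsi_create(struct vhost_pci_dev *vp); /* hypothetical */
>
>     static void vhost_pci_handle_ctrlq(struct vhost_pci_dev *vp,
>                                        struct vhost_pci_ctrlq_msg *msg)
>     {
>             switch (le32_to_cpu(msg->device_type)) {
>             case 1: /* net: reuses the already-mapped vp->frontend_base */
>                     vhost_pci_net_create(vp);
>                     break;
>             case 8: /* scsi */
>                     vhost_pci_scsi_create(vp);
>                     break;
>             default:
>                     break;  /* unknown type: ignored in this sketch */
>             }
>     }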
>
> >>     - no interrupt support, I suppose you mainly looked at poll-based
> >>     net devices
> >
> >     Yes. But I think it's also possible to add interrupt support.
> >     For example, we can use ioeventfd (or a hypercall) to inject
> >     interrupts into the frontend VM after transmitting packets.
> >
> > I guess it would be a good idea to have this in the spec from the
> > beginning, not as an afterthought.
>
> OK, will add it.
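>
> Roughly, what I have in mind is a doorbell register on the vhost-pci
> device: the backend driver writes to it, QEMU catches the write with an
> ioeventfd and forwards it to the frontend VM as an interrupt (e.g. via
> irqfd). A minimal sketch, with a made-up register offset and a
> hypothetical doorbell_base mapping on the driver state:
>
>     #define VHOST_PCI_DOORBELL_OFF 0x0  /* illustrative offset */
>
>     /* Kick the frontend VM for virtqueue vq_idx, e.g. after
>      * transmitting a batch of packets. */
>     static void vhost_pci_kick_frontend(struct vhost_pci_dev *vp,
>                                         u16 vq_idx)
>     {
>             iowrite16(vq_idx, vp->doorbell_base + VHOST_PCI_DOORBELL_OFF);
>     }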
>
>
thanks
-- 
Marc-André Lureau

