Re: [Qemu-devel] [PATCH 6/7] vhost-user: add multiple queue support

From: Yuanhan Liu
Subject: Re: [Qemu-devel] [PATCH 6/7] vhost-user: add multiple queue support
Date: Tue, 15 Sep 2015 10:15:13 +0800
User-agent: Mutt/1.5.21 (2010-09-15)

On Mon, Sep 14, 2015 at 06:00:41PM +0800, Jason Wang wrote:
> 
> 
> On 09/08/2015 03:38 PM, Yuanhan Liu wrote:
> > From: Ouyang Changchun <address@hidden>
> >
> > This patch is initially based on a patch from Nikolay Nikolaev.
> >
> > Here is the latest version for adding vhost-user multiple queue support,
> > by creating a nc and vhost_net pair for each queue.
> >
> > What differs from the last version is that this patch addresses two major
> > concerns from Michael and fixes one hidden bug.
> >
> > - Concern #1: no feedback when the backend can't support the number of
> >   requested queues (given by the queues=# option).
> >
> >   Here we address this issue by querying the VHOST_USER_PROTOCOL_F_MQ
> >   protocol feature first. If it is not set, the backend doesn't support
> >   the mq feature, and we let max_queues be 1. Otherwise, we send another
> >   message, VHOST_USER_GET_QUEUE_NUM, to get the max_queues the backend
> >   supports (a rough sketch of both concerns follows this list).
> >
> >   At the vhost-user initialization stage (net_vhost_user_start), we then
> >   initialize one queue first, which, in the meantime, also gets us
> >   max_queues. We then do a simple comparison: if requested_queues >
> >   max_queues, we exit (I guess it's safe to exit here, as the VM is not
> >   running yet).
> >
> > - Concern #2: some messages are sent more times than necessary.
> >
> >   We came to an agreement with Michael that we could categorize vhost-user
> >   messages into 2 types: non-vring specific messages, which should
> >   be sent only once, and vring specific messages, which should be sent
> >   per queue.
> >
> >   Here I introduced a helper function, vhost_user_one_time_request(),
> >   which lists the following messages as non-vring specific messages:
> >
> >         VHOST_USER_GET_FEATURES
> >         VHOST_USER_SET_FEATURES
> >         VHOST_USER_GET_PROTOCOL_FEATURES
> >         VHOST_USER_SET_PROTOCOL_FEATURES
> >         VHOST_USER_SET_OWNER
> >         VHOST_USER_RESET_DEVICE
> >         VHOST_USER_SET_MEM_TABLE
> >         VHOST_USER_GET_QUEUE_NUM
> >
> >   For the above messages, we simply ignore any attempt to send them after
> >   the first time (see the sketch after this list).
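> >
> >   To make these two concerns concrete, here is a rough sketch of both.
> >   For Concern #1 (the helper vhost_user_get_u64() and the exact message
> >   plumbing are illustrative assumptions, not the patch code itself):
> >
> >     static int vhost_user_probe_max_queues(struct vhost_dev *dev)
> >     {
> >         uint64_t protocol_features;
> >         uint64_t max_queues = 1;    /* default when mq is unsupported */
> >
> >         /* illustrative helper: send a request, read a u64 reply */
> >         if (vhost_user_get_u64(dev, VHOST_USER_GET_PROTOCOL_FEATURES,
> >                                &protocol_features) < 0) {
> >             return -1;
> >         }
> >
> >         /* only ask for the queue count if the backend offers mq */
> >         if (protocol_features & (1ULL << VHOST_USER_PROTOCOL_F_MQ)) {
> >             if (vhost_user_get_u64(dev, VHOST_USER_GET_QUEUE_NUM,
> >                                    &max_queues) < 0) {
> >                 return -1;
> >             }
> >         }
> >
> >         return (int)max_queues;
> >     }
> >
> >   And for Concern #2, the helper is essentially a switch over the request
> >   type, with the send path skipping matches on all but the first queue
> >   pair:
> >
> >     static bool vhost_user_one_time_request(VhostUserRequest request)
> >     {
> >         switch (request) {
> >         case VHOST_USER_GET_FEATURES:
> >         case VHOST_USER_SET_FEATURES:
> >         case VHOST_USER_GET_PROTOCOL_FEATURES:
> >         case VHOST_USER_SET_PROTOCOL_FEATURES:
> >         case VHOST_USER_SET_OWNER:
> >         case VHOST_USER_RESET_DEVICE:
> >         case VHOST_USER_SET_MEM_TABLE:
> >         case VHOST_USER_GET_QUEUE_NUM:
> >             return true;
> >         default:
> >             return false;
> >         }
> >     }
> >
> >     /* in the send path: only the first device (vq_index == 0) sends
> >      * the one-time requests; later queue pairs skip them */
> >     if (dev->vq_index != 0 && vhost_user_one_time_request(request)) {
> >         return 0;
> >     }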
> >
> > I also observed a hidden bug in the last version: we register the char dev
> > event handler N times, which is unnecessary as well as buggy. A later
> > registration overwrites the former one, as qemu_chr_add_handlers() does
> > not chain handlers but instead replaces the old one. So, in theory,
> > invoking qemu_chr_add_handlers() N times will not end up with the handler
> > being called N times.
> >
> > However, the reason the handler is nevertheless invoked N times is that
> > we start the backend (the connection server) first, so by the time
> > net_vhost_user_init() is executed, the connection is already established,
> > and qemu_chr_add_handlers() then invokes the handler immediately, which
> > looks just like invoking the handler (net_vhost_user_event) directly from
> > net_vhost_user_init().
> >
> > The solution I came up with is to make VhostUserState an upper-level
> > structure that includes N nc and vhost_net pairs:
> >
> >    struct VhostUserNetPeer {
> >        NetClientState *nc;
> >        VHostNetState  *vhost_net;
> >    };
> >
> >    typedef struct VhostUserState {
> >        CharDriverState *chr;
> >        bool running;
> >        int queues;
> >        struct VhostUserNetPeer peers[];
> >    } VhostUserState;
> 
> Haven't looked at previous versions, but alternatively, can you do this by:
> 
> - keep current VhostUserState
> - allocate multiple VhostUserState
> - register the char handler only on VhostUserState[0]
> - enumerate all the others in the handler via qemu_find_net_clients_except()?

That does sound workable and much better! I will give it a try soon. Thanks!
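
For reference, that enumeration could look roughly like this (the
qemu_find_net_clients_except() signature follows the net layer of this
era; vhost_user_start()/vhost_user_stop() stand in for the per-nc
start/stop paths):

    static void net_vhost_user_event(void *opaque, int event)
    {
        const char *name = opaque;  /* the -netdev id */
        NetClientState *ncs[MAX_QUEUE_NUM];
        int queues, i;

        /* All ncs of this netdev share one chardev, so a single
         * handler registration can drive every queue. */
        queues = qemu_find_net_clients_except(name, ncs,
                                              NET_CLIENT_OPTIONS_KIND_NIC,
                                              MAX_QUEUE_NUM);
        for (i = 0; i < queues; i++) {
            VhostUserState *s = DO_UPCAST(VhostUserState, nc, ncs[i]);

            if (event == CHR_EVENT_OPENED) {
                vhost_user_start(s);
            } else {
                vhost_user_stop(s);
            }
        }
    }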

> 
> >
> > Signed-off-by: Nikolay Nikolaev <address@hidden>
> > Signed-off-by: Changchun Ouyang <address@hidden>
> > Signed-off-by: Yuanhan Liu <address@hidden>
> > ---
...
> >      case VHOST_USER_GET_VRING_BASE:
> >          memcpy(&msg.state, arg, sizeof(struct vhost_vring_state));
> > +        msg.addr.index += dev->vq_index;
> 
> Let's not do this silently here. Since we've done a subtraction in
> vhost_virtqueue_start(), I think we could introduce a new callback and
> do the correct calculation there.

Good point, I will try it as well.
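
For the record, one possible shape for that callback (the names here are
guesses, not a settled API): each backend gets a vhost_get_vq_index hook
in vhost_ops, and vhost_virtqueue_start() calls it instead of open-coding
the +/- dev->vq_index arithmetic.

    /* vhost-user sketch: address rings with the absolute virtqueue
     * index, so just validate and pass it through */
    static int vhost_user_get_vq_index(struct vhost_dev *dev, int idx)
    {
        assert(idx >= dev->vq_index && idx < dev->vq_index + dev->nvqs);
        return idx;
    }

    /* vhost-kernel sketch: the ioctl interface wants an index relative
     * to this vhost_dev, so subtract vq_index here instead */
    static int vhost_kernel_get_vq_index(struct vhost_dev *dev, int idx)
    {
        assert(idx >= dev->vq_index && idx < dev->vq_index + dev->nvqs);
        return idx - dev->vq_index;
    }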

> > -    s->vhost_net = vhost_net_init(&options);
> > +    options.net_backend = s->peers[i].nc;
> > +    vhost_net = s->peers[i++].vhost_net = vhost_net_init(&options);
> > +
> > +    max_queues = vhost_net_get_max_queues(vhost_net);
> > +    if (s->queues >= max_queues) {
> > +        error_report("you are asking more queues than supported: %d\n",
> > +                     max_queues);
> > +        return -1;
> 
> Memory leak here?

Yeah, it is. But does it matter? We will call exit() soon, which will
reclaim everything in the end. Or is it still needed even so?

Either way, I can do that.
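
Concretely, something like this on the error path would plug the leak
(vhost_net_cleanup() being the usual teardown helper, and using '>' to
match the cover letter's requested_queues > max_queues check):

    max_queues = vhost_net_get_max_queues(vhost_net);
    if (s->queues > max_queues) {
        error_report("you are asking more queues than supported: %d",
                     max_queues);
        vhost_net_cleanup(vhost_net);  /* free the vhost_net we just made */
        return -1;
    }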

> 
> > +    }
> > +
> > +    for (; i < s->queues; i++) {
> > +        options.net_backend = s->peers[i].nc;
> > +
> > +        s->peers[i].vhost_net = vhost_net_init(&options);
> > +        if (!s->peers[i].vhost_net)
> > +            return -1;
> > +    }
> 
> Maybe you can initialize all the vhost_net instances in one loop, and
> check max_queues afterwards, to simplify the code.

Okay.
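
I.e., something along these lines (sketch only; the error unwinding is
left to a label):

    /* create a vhost_net per queue first ... */
    for (i = 0; i < s->queues; i++) {
        options.net_backend = s->peers[i].nc;
        s->peers[i].vhost_net = vhost_net_init(&options);
        if (!s->peers[i].vhost_net) {
            goto err;
        }
    }

    /* ... then validate the request once, against what queue 0 reported */
    max_queues = vhost_net_get_max_queues(s->peers[0].vhost_net);
    if (s->queues > max_queues) {
        error_report("you are asking more queues than supported: %d",
                     max_queues);
        goto err;
    }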

> >
...
> >  static int net_vhost_user_init(NetClientState *peer, const char *device,
> > -                               const char *name, CharDriverState *chr)
> > +                               const char *name, VhostUserState *s)
> >  {
> >      NetClientState *nc;
> > -    VhostUserState *s;
> > +    CharDriverState *chr = s->chr;
> > +    int i;
> >  
> > -    nc = qemu_new_net_client(&net_vhost_user_info, peer, device, name);
> > +    for (i = 0; i < s->queues; i++) {
> > +        nc = qemu_new_net_client(&net_vhost_user_info, peer, device, name);
> >  
> > -    snprintf(nc->info_str, sizeof(nc->info_str), "vhost-user to %s",
> > -             chr->label);
> > +        snprintf(nc->info_str, sizeof(nc->info_str), "vhost-user%d to %s",
> > +                 i, chr->label);
> >  
> > -    s = DO_UPCAST(VhostUserState, nc, nc);
> > +        /* We don't provide a receive callback */
> > +        nc->receive_disabled = 1;
> >  
> > -    /* We don't provide a receive callback */
> > -    s->nc.receive_disabled = 1;
> > -    s->chr = chr;
> > -    nc->queue_index = 0;
> > +        nc->queue_index = i;
> > +        nc->opaque      = s;
> >  
> > -    qemu_chr_add_handlers(s->chr, NULL, NULL, net_vhost_user_event, s);
> > +        s->peers[i].nc = nc;
> 
> So "peers" is actually confusing. Better to rename it to "ncs" or
> something else.

Yeah, that's why I renamed it to "pairs" in a later version. But anyway, I
guess it's not needed anymore with qemu_find_net_clients_except(), right? :)


Thanks!

        --yliu


