
Re: [Qemu-devel] [PATCH 0/9] introduce virtio net dataplane


From: liu ping fan
Subject: Re: [Qemu-devel] [PATCH 0/9] introduce virtio net dataplane
Date: Wed, 27 Feb 2013 17:37:05 +0800

On Tue, Feb 26, 2013 at 1:53 AM, Paolo Bonzini <address@hidden> wrote:
> On 25/02/2013 18:35, mdroth wrote:
>>> > Moving more of the os_host_main_loop_wait to AioContext would be
>>> > possible (timers are on the todo list, in fact), but we should only do
>>> > it as need arises.
>> Were you planning on hanging another GSource off of AioContext to handle
>> timers?
>
> No, I'm planning to merge qemu-timer.c into it.  I don't want to turn
> AioContext into a bad copy of the glib main loop.  AioContext should
> keep the concepts of the QEMU block layer.
>
>> We could consolidate qemu_set_fd_handler()/qemu_aio_set_fd_handler() on
>> POSIX by teaching the current GSource about fd_read_poll functions, and on
>> Windows qemu_set_fd_handler() would tie into a winsock-specific GSource
>> that we register with an AioContext. Might be able to do similar with
>> GSources for slirp and the qemu_add_wait_object() stuff.
>
> Consolidating qemu_set_fd_handler and slirp to GSources is a good thing
> to do, indeed.  The assumption here is that qemu_set_fd_handler and
> slirp do not need special locking or priority constructs, while
> everything else can use AioContext.
>
> AioContext could also grow other implementations that use
> epoll/kqueue/whatever.
>
>> Yup, don't mean to get ahead of things, my main interest is just in how
>> we might deal with the interaction between NetClients and virtio-net
>> dataplane threads without introducing ad-hoc, dataplane-specific
>> mechanisms. If there were a general way for a NIC to tell its NetClient
>> peer "hey, I have my own thread/main loop, here's my {Aio,*}Context, register
>> your handlers there instead", I think this series would look a lot more
>> realistic as a default, or at least be less risky to merge.
>
> Yes, I see the point.
>
> The main blocker to this series seems to be hubs, because they interact
> with multiple NetClients and thus could span multiple AioContexts.
> Adding proper locking there is going to be interesting. :)
>
I think we can attach the hub ports to different AioContexts and
dispatch packets on the corresponding one.
What about:

net_hub_receive()
{
    for each hub port {
        if (dstport->aio == srcport->aio) {
            qemu_send_packet();
        } else {
            qemu_net_queue_append(dstport);
            /* signal dstport's eventfd; its handler calls nc->receive() */
        }
    }
}

Regards,
Pingfan
> But otherwise, I don't think we would have many hacks, if any.  Unlike
> the block layer, networking is quite self-contained and there isn't much
> magic to interact with it from the monitor; and for the block layer we
> already have an idea of how to deal with concurrency.
>
> Paolo
>
>> But the
>> right way to do that seems to tie into the discussion around making
>> other aio sources more GMainContext/AioContext-ish.
>
>


