
Re: [Qemu-devel] [PATCH v2] net: Flush queued packets when guest resumes


From: Michael S. Tsirkin
Subject: Re: [Qemu-devel] [PATCH v2] net: Flush queued packets when guest resumes
Date: Tue, 7 Jul 2015 12:19:27 +0300

On Tue, Jul 07, 2015 at 05:09:09PM +0800, Fam Zheng wrote:
> On Tue, 07/07 11:13, Michael S. Tsirkin wrote:
> > On Tue, Jul 07, 2015 at 09:21:07AM +0800, Fam Zheng wrote:
> > > Since commit 6e99c63 "net/socket: Drop net_socket_can_send" and friends,
> > > net queues need to be explicitly flushed after qemu_can_send_packet()
> > > returns false, because the netdev side will disable the polling of fd.
> > > 
> > > This fixes the case of "cont" after "stop" (or migration).
> > > 
> > > Signed-off-by: Fam Zheng <address@hidden>
> > 
> > Note virtio has its own handler which must be used to
> > flush packets - this one might run too early or too late.
> 
> Which handler do you mean? I don't think virtio-net handles resume now. (If it
> does, we probably should drop it together with this change, since it's needed
> by all NICs.)
> 
> Fam

virtio_vmstate_change

It's all far from trivial. I suspect this whack-a-mole approach of
spreading purges here and there will only create more bugs.

Why would we ever need to process network packets when
VM is not running? I don't see any point to it.
How about we simply stop the jobs that process network traffic on
vm stop and restart them on vm start?



> > 
> > > ---
> > > 
> > > v2: Unify with VM stop handler. (Stefan)
> > > ---
> > >  net/net.c | 19 ++++++++++++-------
> > >  1 file changed, 12 insertions(+), 7 deletions(-)
> > > 
> > > diff --git a/net/net.c b/net/net.c
> > > index 6ff7fec..28a5597 100644
> > > --- a/net/net.c
> > > +++ b/net/net.c
> > > @@ -1257,14 +1257,19 @@ void qmp_set_link(const char *name, bool up, Error **errp)
> > >  static void net_vm_change_state_handler(void *opaque, int running,
> > >                                          RunState state)
> > >  {
> > > -    /* Complete all queued packets, to guarantee we don't modify
> > > -     * state later when VM is not running.
> > > -     */
> > > -    if (!running) {
> > > -        NetClientState *nc;
> > > -        NetClientState *tmp;
> > > +    NetClientState *nc;
> > > +    NetClientState *tmp;
> > >  
> > > -        QTAILQ_FOREACH_SAFE(nc, &net_clients, next, tmp) {
> > > +    QTAILQ_FOREACH_SAFE(nc, &net_clients, next, tmp) {
> > > +        if (running) {
> > > +            /* Flush queued packets and wake up backends. */
> > > +            if (nc->peer && qemu_can_send_packet(nc)) {
> > > +                qemu_flush_queued_packets(nc->peer);
> > > +            }
> > > +        } else {
> > > +            /* Complete all queued packets, to guarantee we don't modify
> > > +             * state later when VM is not running.
> > > +             */
> > >              qemu_flush_or_purge_queued_packets(nc, true);
> > >          }
> > >      }
> > > -- 
> > > 2.4.3


