
From: Liu Yuan
Subject: Re: [Qemu-devel] 100% CPU when sockfd is half-closed and unexpected behavior for qemu_co_send()
Date: Mon, 14 Jan 2013 17:29:01 +0800
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:17.0) Gecko/20130106 Thunderbird/17.0.2

On 01/14/2013 05:09 PM, Paolo Bonzini wrote:
>> Another unexpected behavior is that qemu_co_send() will send data
>> successfully for the half-closed situation, even the other end is
>> completely down. I think the *expected* behavior is that we get notified
>> by a HUP and close the affected sockfd, then qemu_co_send() will not
>> send any data, then the caller of qemu_co_send() can handle error case.
> 
> qemu_co_send() should get an EPIPE or similar error.  The first time it
> will report a partial send, the second time it will report the error
> directly to the caller.
> 
> Please check if this isn't a bug in the Sheepdog driver.

I don't think so. I used netstat to confirm that the connection is in the 
CLOSE_WAIT state, and I added a printf in qemu_co_send(): it did indeed send 
successfully. This is backed by the Linux kernel source code:

static ssize_t do_tcp_sendpages(struct sock *sk, struct page *page, int offset,
                                size_t size, int flags)
{
        ....
        /* Wait for a connection to finish. One exception is TCP Fast Open
         * (passive side) where data is allowed to be sent before a connection
         * is fully established.
         */
        if (((1 << sk->sk_state) & ~(TCPF_ESTABLISHED | TCPF_CLOSE_WAIT)) &&
            !tcp_passive_fastopen(sk)) {
                if ((err = sk_stream_wait_connect(sk, &timeo)) != 0)
                        goto out_err;
        }
        ....
}

which puts the data in the socket buffer and returns success while the socket 
is in the CLOSE_WAIT state. I don't see any means in the Sheepdog driver code 
to get a HUP notification for a connection that has actually been cut off.
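
For what it's worth, this is easy to reproduce outside of QEMU. Below is a
minimal sketch (plain C over loopback, not QEMU or Sheepdog code, names and
timings are just for illustration): after the peer closes its end, the socket
sits in CLOSE_WAIT, the first send() still succeeds because it only has to
reach the local send buffer, and only a later send() fails once the RST has
come back. poll() with POLLRDHUP is one way to notice the half-close before
attempting to send at all.

/* Hypothetical demonstration, not driver code. */
#define _GNU_SOURCE
#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <poll.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
        struct sockaddr_in addr = { .sin_family = AF_INET };
        addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);

        int lsock = socket(AF_INET, SOCK_STREAM, 0);
        bind(lsock, (struct sockaddr *)&addr, sizeof(addr));
        listen(lsock, 1);

        socklen_t len = sizeof(addr);
        getsockname(lsock, (struct sockaddr *)&addr, &len); /* ephemeral port */

        int client = socket(AF_INET, SOCK_STREAM, 0);
        connect(client, (struct sockaddr *)&addr, sizeof(addr));
        int server = accept(lsock, NULL, NULL);

        close(server);        /* peer closes: client side enters CLOSE_WAIT */
        usleep(100 * 1000);   /* let the FIN arrive */

        /* The half-close is already visible on the read side via poll() */
        struct pollfd pfd = { .fd = client, .events = POLLIN | POLLRDHUP };
        poll(&pfd, 1, 0);
        printf("POLLRDHUP=%d POLLHUP=%d\n",
               !!(pfd.revents & POLLRDHUP), !!(pfd.revents & POLLHUP));

        /* First send() succeeds: data only has to reach the send buffer */
        ssize_t n = send(client, "ping", 4, MSG_NOSIGNAL);
        printf("first send: %zd\n", n);

        /* Peer answers with RST, so a subsequent send() fails with EPIPE */
        usleep(100 * 1000);
        n = send(client, "ping", 4, MSG_NOSIGNAL);
        printf("second send: %zd (%s)\n", n, n < 0 ? strerror(errno) : "ok");

        close(client);
        close(lsock);
        return 0;
}

On my reading, watching the read side for EOF/RDHUP along these lines is what
would let the caller see the failure on the first qemu_co_send() rather than
only on the second.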

Thanks,
Yuan


