Re: [Qemu-devel] [PATCH] virtio-serial: fix segfault on disconnect


From: Stefan Hajnoczi
Subject: Re: [Qemu-devel] [PATCH] virtio-serial: fix segfault on disconnect
Date: Fri, 2 Jun 2017 14:30:17 +0100

On Fri, Jun 2, 2017 at 11:13 AM, Pankaj Gupta <address@hidden> wrote:
>> Since commit d4c19cdeeb2f1e474bc426a6da261f1d7346eb5b ("virtio-serial:
>> add missing virtio_detach_element() call") the following commands may
>> cause QEMU to segfault:
>>
>>   $ qemu -M accel=kvm -cpu host -m 1G \
>>          -drive if=virtio,file=test.img,format=raw \
>>          -device virtio-serial-pci,id=virtio-serial0 \
>>          -chardev socket,id=channel1,path=/tmp/chardev.sock,server,nowait \
>>          -device virtserialport,chardev=channel1,bus=virtio-serial0.0,id=port1
>>   $ nc -U /tmp/chardev.sock
>>   ^C
>>
>>   (guest)$ cat /dev/zero >/dev/vport0p1
>>
>> The segfault is non-deterministic: if the event loop notices the socket
>> has been closed then there is no crash.  The disconnect has to happen
>> right before QEMU attempts to write data to the socket.
>>
>> The backtrace is as follows:
>>
>>   Thread 1 "qemu-system-x86" received signal SIGSEGV, Segmentation fault.
>>   0x00005555557e0698 in do_flush_queued_data (port=0x5555582cedf0,
>>   vq=0x7fffcc854290, vdev=0x55555807b1d0) at hw/char/virtio-serial-bus.c:180
>>   180           for (i = port->iov_idx; i < port->elem->out_num; i++) {
>>   #1  0x000055555580d363 in virtio_queue_notify_vq (vq=0x7fffcc854290) at
>>   hw/virtio/virtio.c:1524
>>   #2  0x000055555580d363 in virtio_queue_host_notifier_read
>>   (n=0x7fffcc8542f8) at hw/virtio/virtio.c:2430
>>   #3  0x0000555555b3482c in aio_dispatch_handlers
>>   (address@hidden) at util/aio-posix.c:399
>>   #4  0x0000555555b350d8 in aio_dispatch (ctx=0x5555566b8c80) at
>>   util/aio-posix.c:430
>>   #5  0x0000555555b3212e in aio_ctx_dispatch (source=<optimized out>,
>>   callback=<optimized out>, user_data=<optimized out>) at util/async.c:261
>>   #6  0x00007fffde71de52 in g_main_context_dispatch () at
>>   /lib64/libglib-2.0.so.0
>>   #7  0x0000555555b34353 in glib_pollfds_poll () at util/main-loop.c:213
>>   #8  0x0000555555b34353 in os_host_main_loop_wait (timeout=<optimized out>)
>>   at util/main-loop.c:261
>>   #9  0x0000555555b34353 in main_loop_wait (nonblocking=<optimized out>) at
>>   util/main-loop.c:517
>>   #10 0x0000555555773207 in main_loop () at vl.c:1917
>>   #11 0x0000555555773207 in main (argc=<optimized out>, argv=<optimized out>,
>>   envp=<optimized out>) at vl.c:4751
>>
>> The do_flush_queued_data() function does not anticipate chardev close
>> events during vsc->have_data().  It expects port->elem to remain
>> non-NULL for the duration of its for loop.
>
> Just thinking: if there is still data to flush, should we close/free the port?
> Or does it get closed automatically?
>
> Or am I missing something here?

virtio_serial_close() was already called by
virtio-console.c:chr_event().  Both port->elem and the entire output
virtqueue have been discarded.  No further data will be transferred
and do_flush_queued_data() doesn't need to do anything.
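
If it helps to visualize it, here is a rough sketch (not the actual
patch, and simplified from hw/char/virtio-serial-bus.c) of the kind of
guard the flush loop needs, assuming it only has to notice that
port->elem was discarded by a close event raised from inside
vsc->have_data():

  /* Sketch only -- re-check port->elem after every have_data() call,
   * because a chardev close event raised from inside it ends up in
   * virtio_serial_close(), which frees the element and drains the
   * output virtqueue. */
  while (!port->throttled && port->elem) {
      for (i = port->iov_idx; i < port->elem->out_num; i++) {
          ssize_t ret = vsc->have_data(port,
                                       port->elem->out_sg[i].iov_base,
                                       port->elem->out_sg[i].iov_len);
          if (!port->elem) {
              /* Disconnected during have_data(); nothing left to flush. */
              return;
          }
          /* ... throttling and partial-write bookkeeping elided ... */
          (void)ret;
      }
      /* ... push the used element and pop the next one elided ... */
  }

With port->elem gone the loop simply returns, which matches the
"doesn't need to do anything" behaviour described above.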

Stefan


