From: klim
Subject: Re: [Qemu-devel] [PATCH v2] vnc: fix segfault in closed connection handling
Date: Wed, 7 Feb 2018 11:42:16 +0300
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:52.0) Gecko/20100101 Thunderbird/52.5.0

On 01/31/2018 07:44 PM, Daniel P. Berrangé wrote:
On Wed, Jan 31, 2018 at 07:25:21PM +0300, Klim Kireev wrote:
On one of our clients' nodes, a segmentation fault occurred due to an
attempt to read from a closed ioc. Corresponding backtrace:

0  object_get_class (address@hidden)
1  qio_channel_readv_full (ioc=0x0, iov=0x7ffe55277180 ...
2  qio_channel_read (ioc=<optimized out> ...
3  vnc_client_read_buf (address@hidden, ...
4  vnc_client_read_plain (vs=0x55625f3c6000)
5  vnc_client_read (vs=0x55625f3c6000)
6  vnc_client_io (ioc=<optimized out>, condition=G_IO_IN, ...
7  g_main_dispatch (context=0x556251568a50)
8  g_main_context_dispatch (address@hidden)
9  glib_pollfds_poll ()
10 os_host_main_loop_wait (timeout=<optimized out>)
11 main_loop_wait (address@hidden)
12 main_loop () at vl.c:1909
13 main (argc=<optimized out>, argv=<optimized out>, ...

Having analyzed the coredump, I found that ioc_tag is reset in
vnc_disconnect_start and ioc is cleaned up in vnc_disconnect_finish.
Between these two events the ioc_tag can be set again, so after
vnc_disconnect_finish the handler runs on the already-freed ioc,
which leads to the segmentation fault.
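
To illustrate the window (a simplified sketch, not the actual QEMU code;
only the relevant lines of each function are shown):

    /* vnc_disconnect_start(): the watch is dropped, ioc stays alive */
    vs->disconnecting = TRUE;
    if (vs->ioc_tag) {
        g_source_remove(vs->ioc_tag);
        vs->ioc_tag = 0;
    }

    /* ...meanwhile another code path re-arms the watch... */
    vs->ioc_tag = qio_channel_add_watch(
        vs->ioc, G_IO_IN, vnc_client_io, vs, NULL);

    /* vnc_disconnect_finish(): the channel is released */
    object_unref(OBJECT(vs->ioc));
    vs->ioc = NULL;

    /* the re-armed watch still fires: vnc_client_io() runs against the
     * dead channel and crashes in object_get_class() */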

The patch checks vs->disconnecting at the places where we call
qio_channel_add_watch() to prevent such an occurrence.

Signed-off-by: Klim Kireev <address@hidden>
---
Changelog:
v2: Attach the backtrace

v3: Change checks

  ui/vnc.c | 18 ++++++++++++++----
  1 file changed, 14 insertions(+), 4 deletions(-)

diff --git a/ui/vnc.c b/ui/vnc.c
index 33b087221f..708204fa7e 100644
--- a/ui/vnc.c
+++ b/ui/vnc.c
@@ -1407,13 +1407,19 @@ static void vnc_client_write_locked(VncState *vs)
      } else
  #endif /* CONFIG_VNC_SASL */
      {
-        vnc_client_write_plain(vs);
+        if (vs->disconnecting == FALSE) {
+            vnc_client_write_plain(vs);
+        } else {
+            if (vs->ioc_tag != 0) {
+                g_source_remove(vs->ioc_tag);
+                vs->ioc_tag = 0;
+            }
+        }
      }
  }
I'm not sure it is safe to only do the check in the else {} branch
of this code. If this code is reachable, then I think the SASL
branch will hit the same crash too. I think we probably need
to push the checks up a level or two in the caller stack...

static void vnc_client_write(VncState *vs)
  {
-
      vnc_lock_output(vs);
      if (vs->output.offset) {
          vnc_client_write_locked(vs);
@@ -1421,8 +1427,12 @@ static void vnc_client_write(VncState *vs)
          if (vs->ioc_tag) {
              g_source_remove(vs->ioc_tag);
          }
-        vs->ioc_tag = qio_channel_add_watch(
-            vs->ioc, G_IO_IN, vnc_client_io, vs, NULL);
+        if (vs->disconnecting == FALSE) {
+            vs->ioc_tag = qio_channel_add_watch(
+                vs->ioc, G_IO_IN, vnc_client_io, vs, NULL);
+        } else {
+            vs->ioc_tag = 0;
+        }
      }
      vnc_unlock_output(vs);
  }
...I think perhaps we should do the check in the vnc_client_io()
method, and also in the vnc_flush() method.

I think we also need to put a check in the vnc_jobs_consume_buffer()
method, which can be triggered from a bottom-half.
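
For illustration, the kind of guard being suggested might look like this
in vnc_client_io() (a sketch only, assuming the usual QIOChannelFunc
watch-callback signature; not part of the posted patch):

    static gboolean vnc_client_io(QIOChannel *ioc, GIOCondition condition,
                                  void *opaque)
    {
        VncState *vs = opaque;

        if (vs->disconnecting) {
            /* teardown has already started: drop this watch rather
             * than touching a channel that may be going away */
            vs->ioc_tag = 0;
            return FALSE;
        }
        ...
    }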

Thank you for your advice.
In both places there is already a vs->ioc != NULL check, so we don't need
to add one there.
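
For reference, the shape of the existing guard being referred to
(a simplified sketch, assumed rather than quoted from the tree):

    /* e.g. in vnc_jobs_consume_buffer(): the watch is only re-armed
     * while the channel is still present */
    if (vs->ioc != NULL && buffer_empty(&vs->output)) {
        if (vs->ioc_tag) {
            g_source_remove(vs->ioc_tag);
        }
        vs->ioc_tag = qio_channel_add_watch(
            vs->ioc, G_IO_IN | G_IO_OUT, vnc_client_io, vs, NULL);
    }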


Regards,
Daniel