Re: [Qemu-devel] [PATCH v3] vnc: fix segfault in closed connection handling


From: Klim Kireev
Subject: Re: [Qemu-devel] [PATCH v3] vnc: fix segfault in closed connection handling
Date: Wed, 14 Feb 2018 17:43:19 +0300
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:52.0) Gecko/20100101 Thunderbird/52.5.0

ping

On 02/07/2018 12:48 PM, Klim Kireev wrote:
On one of our clients' nodes, a segmentation fault occurred due to
an attempt to read from a closed ioc. Corresponding backtrace:

0  object_get_class (address@hidden)
1  qio_channel_readv_full (ioc=0x0, iov=0x7ffe55277180 ...
2  qio_channel_read (ioc=<optimized out> ...
3  vnc_client_read_buf (address@hidden, ...
4  vnc_client_read_plain (vs=0x55625f3c6000)
5  vnc_client_read (vs=0x55625f3c6000)
6  vnc_client_io (ioc=<optimized out>, condition=G_IO_IN, ...
7  g_main_dispatch (context=0x556251568a50)
8  g_main_context_dispatch (address@hidden)
9  glib_pollfds_poll ()
10 os_host_main_loop_wait (timeout=<optimized out>)
11 main_loop_wait (address@hidden)
12 main_loop () at vl.c:1909
13 main (argc=<optimized out>, argv=<optimized out>, ...

Having analyzed the coredump, I found that the reason is that
ioc_tag is reset in vnc_disconnect_start while ioc is cleaned up
in vnc_disconnect_finish. Between these two events the ioc_tag
can be set again, so after vnc_disconnect_finish the handler
runs with a freed ioc, which leads to the segmentation fault.
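
To illustrate the ordering, here is a minimal self-contained GLib
sketch. FakeVncState, fake_client_io, disconnect_start and
disconnect_finish are hypothetical stand-ins for VncState,
vnc_client_io, vnc_disconnect_start and vnc_disconnect_finish, and
g_idle_add stands in for qio_channel_add_watch; this shows the hazard,
it is not QEMU code:

    #include <glib.h>
    #include <stdio.h>

    /* Hypothetical stand-in for VncState; a sketch, not QEMU code. */
    typedef struct {
        char *ioc;            /* stands in for vs->ioc (QIOChannel *) */
        guint ioc_tag;        /* GSource id of the I/O watch */
        gboolean disconnecting;
    } FakeVncState;

    static gboolean fake_client_io(gpointer opaque)
    {
        FakeVncState *vs = opaque;
        if (vs->ioc == NULL) {
            /* In QEMU this is where qio_channel_readv_full(ioc=0x0, ...)
             * dereferences the dead channel and segfaults. */
            fprintf(stderr, "handler ran after disconnect_finish\n");
            return G_SOURCE_REMOVE;
        }
        printf("reading from %s\n", vs->ioc);
        return G_SOURCE_CONTINUE;
    }

    static void disconnect_start(FakeVncState *vs)
    {
        vs->disconnecting = TRUE;
        if (vs->ioc_tag) {
            g_source_remove(vs->ioc_tag);
            vs->ioc_tag = 0;
        }
    }

    static void disconnect_finish(FakeVncState *vs)
    {
        g_free(vs->ioc);
        vs->ioc = NULL;
    }

    int main(void)
    {
        FakeVncState vs = { .ioc = g_strdup("client-channel") };

        disconnect_start(&vs);
        /* BUG: another code path re-arms the watch between start and
         * finish without checking vs.disconnecting. */
        vs.ioc_tag = g_idle_add(fake_client_io, &vs);
        disconnect_finish(&vs);

        g_main_context_iteration(NULL, FALSE); /* fires on dead ioc */
        return 0;
    }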

The patch checks vs->disconnecting at the places where we call
qio_channel_add_watch and removes the handler if disconnecting == TRUE,
to prevent such an occurrence.
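
In terms of the sketch above, the guard amounts to the following
(hypothetical code; the actual change is in the diff below):

    /* Re-arm the watch only while the connection is still alive. */
    if (vs->ioc_tag) {
        g_source_remove(vs->ioc_tag);
        vs->ioc_tag = 0;
    }
    if (!vs->disconnecting) {
        vs->ioc_tag = g_idle_add(fake_client_io, vs);
    }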

Signed-off-by: Klim Kireev <address@hidden>
---
Changelog:
v2: Attach the backtrace

v3: Change checks

  ui/vnc-jobs.c |  6 ++++--
  ui/vnc.c      | 15 ++++++++++++++-
  2 files changed, 18 insertions(+), 3 deletions(-)

diff --git a/ui/vnc-jobs.c b/ui/vnc-jobs.c
index e326679dd0..868dddef4b 100644
--- a/ui/vnc-jobs.c
+++ b/ui/vnc-jobs.c
@@ -148,8 +148,10 @@ void vnc_jobs_consume_buffer(VncState *vs)
              if (vs->ioc_tag) {
                  g_source_remove(vs->ioc_tag);
              }
-            vs->ioc_tag = qio_channel_add_watch(
-                vs->ioc, G_IO_IN | G_IO_OUT, vnc_client_io, vs, NULL);
+            if (vs->disconnecting == FALSE) {
+                vs->ioc_tag = qio_channel_add_watch(
+                    vs->ioc, G_IO_IN | G_IO_OUT, vnc_client_io, vs, NULL);
+            }
          }
          buffer_move(&vs->output, &vs->jobs_buffer);
diff --git a/ui/vnc.c b/ui/vnc.c
index 93731accb6..67ccc8160f 100644
--- a/ui/vnc.c
+++ b/ui/vnc.c
@@ -1536,12 +1536,19 @@ gboolean vnc_client_io(QIOChannel *ioc G_GNUC_UNUSED,
      VncState *vs = opaque;
      if (condition & G_IO_IN) {
          if (vnc_client_read(vs) < 0) {
-            return TRUE;
+            goto end;
          }
      }
      if (condition & G_IO_OUT) {
          vnc_client_write(vs);
      }
+end:
+    if (vs->disconnecting) {
+        if (vs->ioc_tag != 0) {
+            g_source_remove(vs->ioc_tag);
+        }
+        vs->ioc_tag = 0;
+    }
      return TRUE;
  }
@@ -1630,6 +1637,12 @@ void vnc_flush(VncState *vs)
      if (vs->ioc != NULL && vs->output.offset) {
          vnc_client_write_locked(vs);
      }
+    if (vs->disconnecting) {
+        if (vs->ioc_tag != 0) {
+            g_source_remove(vs->ioc_tag);
+        }
+        vs->ioc_tag = 0;
+    }
      vnc_unlock_output(vs);
  }




