Re: [Qemu-devel] Re: [RFC][PATCH v5 09/21] virtagent: add va.getdmesg RPC


From: Michael Roth
Subject: Re: [Qemu-devel] Re: [RFC][PATCH v5 09/21] virtagent: add va.getdmesg RPC
Date: Tue, 07 Dec 2010 11:32:04 -0600
User-agent: Mozilla/5.0 (X11; U; Linux i686 (x86_64); en-US; rv:1.9.2.12) Gecko/20101027 Thunderbird/3.1.6

On 12/07/2010 08:37 AM, Jes Sorensen wrote:
On 12/03/10 19:03, Michael Roth wrote:
Add RPC to view guest dmesg output.

Signed-off-by: Michael Roth <address@hidden>
---
  virtagent-server.c |   46 ++++++++++++++++++++++++++++++++++++++++++++++
  1 files changed, 46 insertions(+), 0 deletions(-)

diff --git a/virtagent-server.c b/virtagent-server.c
index a430b58..aac8f70 100644
--- a/virtagent-server.c
+++ b/virtagent-server.c
@@ -83,6 +83,50 @@ EXIT_CLOSE_BAD:
      return result;
  }

+/* va_getdmesg(): return dmesg output
+ * rpc return values:
+ *   - dmesg output as a string
+ */
+static xmlrpc_value *va_getdmesg(xmlrpc_env *env,
+                              xmlrpc_value *param,
+                              void *user_data)
+{
+    char *dmesg_buf = NULL, cmd[256];
+    int ret;
+    xmlrpc_value *result = NULL;
+    FILE *pipe;
+
+    SLOG("va_getdmesg()");
+
+    dmesg_buf = qemu_mallocz(VA_DMESG_LEN + 2048);
+    sprintf(cmd, "dmesg -s %d", VA_DMESG_LEN);

What happens if the guest's dmesg buffer is larger than your hardcoded
value?

It'll end up getting truncated by the fread() later:

ret = fread(dmesg_buf, sizeof(char), VA_DMESG_LEN, pipe);

That's where the dmesg -s VA_DMESG_LEN comes into play; it should size things such that we can buffer up to the end of the dmesg output.
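
(For anyone reading along without the full patch: the tail of the function isn't quoted above, but the pattern being discussed is roughly the following. This is just a sketch; read_dmesg, the plain calloc/free in place of qemu_mallocz, the VA_DMESG_LEN value, and the error handling are illustrative, not the patch's actual code.)

/* Sketch of the popen/fread pattern described above. Names, the
 * buffer size, and error handling here are illustrative; the actual
 * patch uses qemu_mallocz() and the agent's own error paths. */
#include <stdio.h>
#include <stdlib.h>

#define VA_DMESG_LEN 16384  /* assumed value; the patch defines this elsewhere */

static char *read_dmesg(void)
{
    char cmd[256];
    char *buf = calloc(1, VA_DMESG_LEN + 1);
    FILE *pipe;
    size_t ret;

    if (!buf) {
        return NULL;
    }

    /* -s sizes dmesg's internal buffer so its output fits in ours */
    snprintf(cmd, sizeof(cmd), "dmesg -s %d", VA_DMESG_LEN);
    pipe = popen(cmd, "r");
    if (!pipe) {
        free(buf);
        return NULL;
    }

    /* anything beyond VA_DMESG_LEN is silently truncated here */
    ret = fread(buf, 1, VA_DMESG_LEN, pipe);
    buf[ret] = '\0';
    pclose(pipe);
    return buf;
}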

This param is kind of quirky though: the size doesn't seem to have an effect for anything below 4KB, but if we stick with VA_DMESG_LEN >= 4KB this should cover us, unless it's distro-specific. It shouldn't blow anything up, at least.
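
If someone wants to check whether the sub-4KB behavior is distro-specific, a standalone test along these lines (hypothetical, not part of the patch) would show how many bytes dmesg -s actually emits for a few sizes:

/* Standalone check, not part of the patch: report how many bytes
 * "dmesg -s <size>" actually produces for a few buffer sizes. */
#include <stdio.h>

static size_t dmesg_bytes(int size)
{
    char cmd[64], tmp[4096];
    size_t total = 0, n;
    FILE *pipe;

    snprintf(cmd, sizeof(cmd), "dmesg -s %d", size);
    pipe = popen(cmd, "r");
    if (!pipe) {
        return 0;
    }
    while ((n = fread(tmp, 1, sizeof(tmp), pipe)) > 0) {
        total += n;
    }
    pclose(pipe);
    return total;
}

int main(void)
{
    int sizes[] = { 1024, 4096, 16384 };
    int i;

    for (i = 0; i < 3; i++) {
        printf("dmesg -s %d -> %zu bytes\n", sizes[i], dmesg_bytes(sizes[i]));
    }
    return 0;
}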


Jes