qemu-devel

Re: [Qemu-devel] QEMU patch to allow VM introspection via libvmi


From: Valerio Aimale
Subject: Re: [Qemu-devel] QEMU patch to allow VM introspection via libvmi
Date: Fri, 16 Oct 2015 08:30:47 -0600
User-agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.10; rv:38.0) Gecko/20100101 Thunderbird/38.3.0

On 10/16/15 2:15 AM, Markus Armbruster wrote:
address@hidden writes:

All-

I've produced a patch for the current QEMU HEAD, for libvmi to
introspect QEMU/KVM VMs.

Libvmi has patches for the old qemu-kvm fork, inside its source tree:
https://github.com/libvmi/libvmi/tree/master/tools/qemu-kvm-patch

This patch adds an hmp and a qmp command, "pmemaccess". When the
command is invoked with a string argument (a filename), it will open
a UNIX socket and spawn a listening thread.

The client writes binary commands to the socket, in the form of a C
structure:

struct request {
      uint8_t type;   // 0 quit, 1 read, 2 write, ... rest reserved
      uint64_t address;   // address to read from OR write to
      uint64_t length;    // number of bytes to read OR write
};

The client receives as a response either (length+1) bytes, if it is a
read operation, or 1 byte if it is a write operation.

The last byte of a read operation response indicates success (1
success, 0 failure). The single byte returned for a write operation
indicates the same (1 success, 0 failure).
So, if you ask to read 1 MiB, and it fails, you get back 1 MiB of
garbage followed by the "it failed" byte?
Markus, that appears to be the case. However, I did not write the communication protocol between libvmi and QEMU. I'm assuming that the person who wrote the protocol did not want to over-complicate things.

https://github.com/libvmi/libvmi/blob/master/libvmi/driver/kvm/kvm.c

I'm thinking he assumed reads would be small in size, and that the price of reading garbage was less than the price of writing a more complicated protocol. I can see his point; confronted with the same problem, I might have done the same.

The socket API was written by the libvmi author, and it works with the
current libvmi version. The libvmi client-side implementation is at:

https://github.com/libvmi/libvmi/blob/master/libvmi/driver/kvm/kvm.c

As many use KVM VMs for introspection, malware and security analysis,
it might be worth thinking about making pmemaccess a permanent
hmp/qmp command, as opposed to having to produce a patch at each QEMU
point release.
Related existing commands: memsave, pmemsave, dump-guest-memory.

Can you explain why these won't do for your use case?
For people who do security analysis there are two use cases: static and dynamic analysis. With memsave, pmemsave and dump-guest-memory one can do static analysis, i.e., snapshot a VM and see what was happening at that point in time.
Dynamic analysis requires being able to 'introspect' a VM while it's running.

If you take a snapshot of two people exchanging a glass of water, and you happen to take it at the very moment both persons have their hands on the glass, it's hard to tell who passed the glass to whom. If you have a movie of the same scene, it's obvious who's the giver and who's the receiver. Same use case.

More to the point, there's a host of C and Python frameworks to dynamically analyze VMs: Volatility, Rekall, DRAKVUF, etc. They all build on top of libvmi. I did not want to reinvent the wheel.

Mind you, 99.9% of the people who do dynamic VM analysis use Xen. They contend that Xen has better introspection support. In my case, I did not want to bother with dedicating a full server to be a Xen domain 0. I just wanted to do a quick test by standing up a QEMU/KVM VM on a server already in use for other purposes.



Also, the pmemsave command's QAPI should be changed to be usable with
64-bit VMs

in qapi-schema.json

from

---
{ 'command': 'pmemsave',
   'data': {'val': 'int', 'size': 'int', 'filename': 'str'} }
---

to

---
{ 'command': 'pmemsave',
   'data': {'val': 'int64', 'size': 'int64', 'filename': 'str'} }
---
In the QAPI schema, 'int' is actually an alias for 'int64'.  Yes, that's
confusing.
I think it's confusing for the HMP parser too. If you have a VM with 8 GB of RAM and want to snapshot the whole physical memory, via HMP over telnet this is what happens:

$ telnet localhost 1234
Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.
QEMU 2.4.0.1 monitor - type 'help' for more information
(qemu) help pmemsave
pmemsave addr size file -- save to disk physical memory dump starting at 'addr' of size 'size'
(qemu) pmemsave 0 8589934591 "/tmp/memorydump"
'pmemsave' has failed: integer is for 32-bit values
Try "help pmemsave" for more information
(qemu) quit

With the changes I suggested, the command succeeds:

$ telnet localhost 1234
Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.
QEMU 2.4.0.1 monitor - type 'help' for more information
(qemu) help pmemsave
pmemsave addr size file -- save to disk physical memory dump starting at 'addr' of size 'size'
(qemu) pmemsave 0 8589934591 "/tmp/memorydump"
(qemu) quit

However, I just noticed that the dump is only about 4 GB in size (4294967295 bytes, i.e. 2^32 - 1), so there might be more changes needed to snapshot all physical memory of a 64-bit VM. I did not investigate any further.

ls -l /tmp/memorydump
-rw-rw-r-- 1 libvirt-qemu kvm 4294967295 Oct 16 08:04 /tmp/memorydump

hmp-commands.hx and qmp-commands.hx should be edited accordingly. I
did not make the above pmemsave changes part of my patch.

Let me know if you have any questions,

Valerio



