From: Markus Armbruster
Subject: Re: [Qemu-devel] QEMU patch to allow VM introspection via libvmi
Date: Mon, 26 Oct 2015 10:09:27 +0100
User-agent: Gnus/5.13 (Gnus v5.13) Emacs/24.5 (gnu/linux)

Valerio Aimale <address@hidden> writes:

> On 10/23/15 12:55 PM, Eduardo Habkost wrote:
>> On Thu, Oct 22, 2015 at 03:51:28PM -0600, Valerio Aimale wrote:
>>> On 10/22/15 3:47 PM, Eduardo Habkost wrote:
>>>> On Thu, Oct 22, 2015 at 01:57:13PM -0600, Valerio Aimale wrote:
>>>>> On 10/22/15 1:12 PM, Eduardo Habkost wrote:
>>>>>> On Wed, Oct 21, 2015 at 12:54:23PM +0200, Markus Armbruster wrote:
>>>>>>> Valerio Aimale <address@hidden> writes:
>>>>>> [...]
>>>>>>>> There's also a similar patch, floating around the internet, that uses
>>>>>>>> shared memory, instead of sockets, as inter-process communication
>>>>>>>> between libvmi and QEMU. I've never used that.
>>>>>>> By the time you built a working IPC mechanism on top of shared memory,
>>>>>>> you're often no better off than with AF_LOCAL sockets.
>>>>>>>
>>>>>>> Crazy idea: can we allocate guest memory in a way that supports sharing
>>>>>>> it with another process?  Eduardo, can -mem-path do such wild things?
>>>>>> It can't today, but just because it creates a temporary file inside
>>>>>> mem-path and unlinks it immediately after opening a file descriptor. We
>>>>>> could make memory-backend-file also accept a full filename as argument,
>>>>>> or add a mechanism to let QEMU send the open file descriptor to a QMP
>>>>>> client.
>>>>>>
>>>>> Eduardo, would my "artisanal" idea of creating an mmap'ed image
>>>>> of the guest
>>>>> memory footprint work, augmented by Eric's suggestion of having the qmp
>>>>> client pass the filename?
>>>> The code below doesn't make sense to me.
>>> Ok. What I am trying to do is to create a mmapped() memory area of the guest
>>> physical memory that can be shared between QEMU and an external process,
>>> such that the external process can read arbitrary locations of the qemu
>>> guest physical memory.
>>> In short, I'm using mmap MAP_SHARED to share the guest memory area with a
>>> process that is external to QEMU
>>>
>>> does it make better sense now?
>> I think you are confused about what mmap() does. It will create a new
>> mapping into the process address space, containing the data from an
>> existing file, not the other way around.
>>
> Eduardo, I think it would be a common rule of politeness not to pass
> any judgement on a person that you don't know, but for some texts in a
> mailing list. I think I understand how mmap() works, and very well.
>
> Participating in this discussion has been a struggle for me. For the
> good of the libvmi users, I have been trying to ignore the judgements,
> the comments and so on. But, alas, I throw my hands up in the air, and
> I surrender.

I'm sorry we exceeded your tolerance for frustration.  This mailing list
can be tough.  We try to be welcoming (believe it or not), but we too
often fail (okay, that part is easily believable).

To be honest, I had difficulties understanding your explanation, and
ended up guessing.  I figure Eduardo did the same, and guessed
incorrectly.  There but for the grace of God go I.

> I think libvmi can live, as it has for the past years, by patching the
> QEMU source tree on an as-needed basis, and keeping the patch in the
> libvmi source tree, without disturbing any further the QEMU community.

I'm sure libvmi can continue to require a patched QEMU, but I'm equally
sure getting its needs satisfied out of the box would be better for all.
To get that done, we need to understand the problem, and map the
solution space.

So let me try to summarize the thread, and what I've learned from it so
far.  Valerio, if you could correct misunderstandings, I'd be much
obliged.

LibVMI is a C library with Python bindings that makes it easy to monitor
the low-level details of a running virtual machine by viewing its
memory, trapping on hardware events, and accessing the vCPU registers.
This is called virtual machine introspection.
[Direct quote from http://libvmi.com/]

For that purpose, LibVMI needs (among other things) sufficiently fast
means to read and write guest memory.

Its existing solution is a non-upstream patch that adds a new, simple
protocol for reading and writing physical guest memory over TCP, and
monitor commands to start this service.

This thread is in reply to Valerio's attempt to upstream this patch.
Good move.

The usual questions for feature requests apply:

1. Is this a use case we want to serve?

   Unreserved yes.  Supporting virtual machine introspection with LibVMI
   makes sense.

2. Can it be served by existing guest introspection interfaces?

   Not quite clear, yet.

   LibVMI interacts with QEMU/KVM virtual machines via libvirt.  We want
   to be able to start and stop introspecting running virtual machines
   managed by libvirt.  Rules out solutions that require QEMU to be
   started with special command line options.  We really want QMP
   commands.  HMP commands would work, but HMP is not a stable
   interface.  It's fine for prototyping, of course.

   Interfaces discussed include the following monitor commands:

   * x, xp

     There's overhead for encoding / decoding.  LibVMI developers
     apparently have found it too slow.  They measured 90ms, which is
     very slow indeed.  Turns out LibVMI goes through virsh every time!
     I suspect the overhead of starting virsh dwarfs everything else
     several times over.  Therefore, we don't have a clear idea of this
     method's real overhead yet.

     Transferring large blocks through the monitor connection can be
     problematic.  The monitor is a control plane; bulk data should go
     through a data plane.  Not sure whether this is an issue for
     LibVMI's usage.

     x and xp are only in HMP.

     They cover only reading.

   * memsave, pmemsave

     Similar to x, xp, except here we use the file system as the data
     plane.  Additionally, we trade encoding / decoding overhead for
     temporary file handling overhead.  (A rough QMP sketch of this
     route follows after this list.)

     Cover only reading.

   * dump-guest-memory

     More powerful (and more complex) than memsave, pmemsave, but geared
     towards a different use case: taking a crash dump of a running VM
     for static analysis.

     Covers only reading.

     Dumps in ELF or compressed kdump format.

     Supports dump to file descriptor, which could perhaps be used to
     avoid temporary files.

   * gdbserver

     This starts a server for the GDB Remote Serial Protocol on a TCP
     port.  This is for debugging the *guest*, not QEMU.  Can do much
     more than read and write memory.

     README.rst in the LibVMI source tree suggests

     - LibVMI can already use this interface, but it's apparently slower
       than using its non-upstream patch.  How much?

     - It requires the user to configure his VMs to be started with QEMU
       command line option -s, which means you can't introspect a
       running VM you didn't prepare for it from the start.  Easy enough
       to improve: use the monitor command!

     gdbserver is only in HMP, probably because it's viewed as a
     debugging tool for human use.  LibVMI would be a programmatic user,
     so adding gdbserver to QMP could be justified.  (A minimal sketch
     of the wire protocol follows below.)
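
   To make the gdbserver option concrete, here's a rough, untested
   sketch of a programmatic read over the GDB Remote Serial Protocol,
   assuming a stub has already been started (e.g. with the HMP command
   "gdbserver tcp::1234").  Host, port and the address below are
   placeholders, and note that the 'm' packet reads memory as seen by
   the currently selected vCPU:

     import socket

     def rsp_packet(payload):
         # RSP frames a packet as $<payload>#<checksum>, where the
         # checksum is the modulo-256 sum of the payload bytes, sent as
         # two hex digits.
         csum = sum(payload.encode()) % 256
         return b"$" + payload.encode() + b"#" + ("%02x" % csum).encode()

     def rsp_read(sock):
         # Collect bytes until we have one complete $...#xx reply; a
         # leading '+' ack from the stub is skipped.
         buf = b""
         while True:
             chunk = sock.recv(4096)
             if not chunk:
                 raise EOFError("connection closed")
             buf += chunk
             start = buf.find(b"$")
             end = buf.find(b"#", start)
             if start != -1 and end != -1 and len(buf) >= end + 3:
                 return buf[start + 1:end].decode()

     def read_guest_memory(host, port, addr, length):
         with socket.create_connection((host, port)) as s:
             # 'm addr,length' asks the stub for <length> bytes at
             # <addr>; the reply is the data hex-encoded, or Exx on
             # error.
             s.sendall(rsp_packet("m%x,%x" % (addr, length)))
             reply = rsp_read(s)
             s.sendall(b"+")                    # ack the reply
             if reply.startswith("E"):
                 raise IOError("stub returned " + reply)
             return bytes.fromhex(reply)

     print(read_guest_memory("127.0.0.1", 1234, 0x1000, 64).hex())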
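
   Similarly, for the memsave/pmemsave route (and to illustrate why the
   90ms figure above probably says more about spawning virsh than about
   the monitor itself), here's a rough, untested sketch of a persistent
   QMP connection driving pmemsave.  The socket path, address and
   temporary file are placeholders; it assumes QEMU was started with
   something like "-qmp unix:/tmp/qmp.sock,server,nowait":

     import json, socket

     class QMPClient:
         # Tiny QMP client: one persistent connection, blocking commands.
         def __init__(self, path):
             self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
             self.sock.connect(path)
             self.rfile = self.sock.makefile("rb")
             json.loads(self.rfile.readline())   # QMP greeting banner
             self.command("qmp_capabilities")    # leave negotiation mode

         def command(self, name, **args):
             req = {"execute": name, "arguments": args}
             self.sock.sendall(json.dumps(req).encode() + b"\n")
             while True:
                 resp = json.loads(self.rfile.readline())
                 if "event" in resp:              # skip async events
                     continue
                 if "error" in resp:
                     raise RuntimeError(resp["error"])
                 return resp["return"]

     def read_phys(qmp, paddr, size, tmpfile="/tmp/pmem.bin"):
         # pmemsave dumps <size> bytes of guest physical memory starting
         # at <paddr> into <tmpfile>; we read the file back.
         qmp.command("pmemsave", val=paddr, size=size, filename=tmpfile)
         with open(tmpfile, "rb") as f:
             return f.read()

     qmp = QMPClient("/tmp/qmp.sock")
     print(read_phys(qmp, 0x1000, 64).hex())

   The same pattern works for memsave (guest virtual addresses) and for
   dump-guest-memory; the point is merely that the connection is set up
   once, not once per access.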

3. Assuming existing interfaces won't do, what could a new one look like?

   * Special purpose TCP protocol, QMP command to start/stop the service

     This is the existing non-upstream solution.

     We can quibble over the protocol, e.g. its weird handling of read
     errors, but that's detail.

   * Share guest memory with LibVMI somehow

     Still in the "crazy idea" stage.  Fish the memory out of
     /proc/$PID/mem?  Create a file QEMU and LibVMI can mmap?

     Writing has the same synchronization problems as with
     multi-threaded TCG.  These are being addressed, but I have no idea
     how the solutions will translate.  (A rough sketch of the mmap
     route follows below.)
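
   To make the file-backed variant a little more concrete, here's a
   rough, untested sketch.  It assumes a hypothetical setup where guest
   RAM is backed by a shared file (which today means starting QEMU with
   the right options, i.e. exactly the limitation we want to avoid),
   and it ignores the synchronization issues above.  Paths, sizes and
   addresses are placeholders:

     import mmap

     # Hypothetical setup: the VM's RAM is a shared, file-backed
     # memory backend, e.g.
     #   qemu-system-x86_64 -m 4096 \
     #     -object memory-backend-file,id=ram0,size=4G,mem-path=/dev/shm/guest-ram,share=on \
     #     -numa node,memdev=ram0 ...
     # With share=on, guest stores become visible to any process that
     # maps the same file.

     RAM_PATH = "/dev/shm/guest-ram"          # placeholder path

     def read_phys(paddr, length):
         # Map the backing file read-only and copy <length> bytes at
         # guest-physical address <paddr>.  This assumes file offset ==
         # guest physical address, which only holds within the region
         # backed by this file (not, e.g., across the PCI hole); real
         # code would need the guest memory map to translate addresses.
         with open(RAM_PATH, "rb") as f:
             with mmap.mmap(f.fileno(), 0, prot=mmap.PROT_READ) as m:
                 return m[paddr:paddr + length]

     print(read_phys(0x1000, 64).hex())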

I'm afraid this got a bit long.  Thanks for reading this far.

I hope we can work together and find a solution that satisfies LibVMI.


