
Re: [Qemu-devel] QEMU patch to allow VM introspection via libvmi


From: Markus Armbruster
Subject: Re: [Qemu-devel] QEMU patch to allow VM introspection via libvmi
Date: Tue, 27 Oct 2015 16:00:00 +0100
User-agent: Gnus/5.13 (Gnus v5.13) Emacs/24.5 (gnu/linux)

Valerio Aimale <address@hidden> writes:

> On 10/26/15 11:52 AM, Eduardo Habkost wrote:
>>> I was trying to advocate the use of a shared mmap'ed region. The sharing
>>> would be two-way (RW for both) between the QEMU virtualizer and the libvmi
>>> process. I envision that there could be a QEMU command-line argument, such
>>> as "--mmap-guest-memory <filename>". Understand that Eric feels strongly the
>>> libvmi client should own the file name - I have not forgotten that. When
>>> that command-line argument is given, as part of guest initialization,
>>> QEMU creates a file of size equal to the size of the guest memory, containing
>>> all zeros, mmaps that file to the guest memory with PROT_READ|PROT_WRITE
>>> and MAP_FILE|MAP_SHARED, then starts the guest.
>> This is basically what memory-backend-file (and the legacy -mem-path
>> option) already does today, but it unlinks the file just after opening
>> it. We can change it to accept a full filename and/or an option to make
>> it not unlink the file after opening it.
>>
>> I don't remember if memory-backend-file is usable without -numa, but we
>> could make it possible somehow.
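(For reference, memory-backend-file is normally paired with -numa today,
along the lines of the following; the file path and sizes here are
placeholders, not a recommendation:)

```shell
qemu-system-x86_64 \
    -m 4G \
    -object memory-backend-file,id=mem0,size=4G,mem-path=/dev/shm/guest-mem,share=on \
    -numa node,memdev=mem0
```

With share=on the backing file is mapped MAP_SHARED, which is the
behavior being discussed here.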
> Eduardo, I did try this approach. It takes a two-line change in exec.c:
> comment out the unlink, and make sure MAP_SHARED is used when
> -mem-path and -mem-prealloc are given. It works beautifully, and
> libvmi accesses are fast. However, the VM slows down to a crawl,
> obviously, because each RAM access by the VM triggers a page fault on
> the mmapped file. I don't think a crawling VM is desirable, so
> this approach goes out the door.

Uh, I don't understand why "each RAM access by the VM triggers a page
fault".  Can you show us the patch you used?

> I think we're back at estimating the speed of other approaches as
> discussed previously:
>
> - via UNIX socket as per existing patch
> - via xp, parsing the human-readable xp output
> - via an xp-like command that returns memory contents baseXX-encoded
> in a JSON string
> - via shared memory as per existing code and patch
>
> Any other?

Yes, the existing alternative LibVMI method via gdbserver should be
included in the comparison.

Naturally, any approach that does the actual work via QMP will be dog
slow as long as LibVMI launches virsh for each QMP command.  Fixable, as
Eric pointed out: use the libvirt API and link against libvirt.so
instead.  No idea how much work that'll be.


