Re: [Qemu-devel] QEMU patch to allow VM introspection via libvmi


From: Markus Armbruster
Subject: Re: [Qemu-devel] QEMU patch to allow VM introspection via libvmi
Date: Tue, 27 Oct 2015 17:11:49 +0100
User-agent: Gnus/5.13 (Gnus v5.13) Emacs/24.5 (gnu/linux)

Valerio Aimale <address@hidden> writes:

> On 10/27/15 9:00 AM, Markus Armbruster wrote:
>> Valerio Aimale <address@hidden> writes:
>>
>>> On 10/26/15 11:52 AM, Eduardo Habkost wrote:
>>>>
>>>> I was trying to advocate the use of a shared mmap'ed region. The sharing
>>>> would be two-way (RW for both) between the QEMU virtualizer and the libvmi
>>>> process. I envision that there could be a QEMU command line argument, such
>>>> as "--mmap-guest-memory <filename>". I understand that Eric feels strongly
>>>> that the libvmi client should own the file name - I have not forgotten
>>>> that. When that command line argument is given, as part of the guest
>>>> initialization, QEMU creates a file of size equal to the size of the
>>>> guest memory, containing all zeros, mmaps that file over the guest memory
>>>> with PROT_READ|PROT_WRITE and MAP_FILE|MAP_SHARED, then starts the guest.
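A minimal sketch of the scheme described above, assuming plain POSIX calls
(the function name and error handling are placeholders, not proposed QEMU
code):

    #include <fcntl.h>
    #include <stdlib.h>
    #include <sys/mman.h>
    #include <unistd.h>

    /* Create (or open) a zero-filled file of guest-RAM size and map it
     * shared, so QEMU's writes land in the file and an introspection
     * client mapping the same file sees them. */
    void *map_guest_ram(const char *path, size_t ram_size)
    {
        int fd = open(path, O_RDWR | O_CREAT, 0600);
        if (fd < 0) {
            return NULL;
        }
        if (ftruncate(fd, ram_size) < 0) {  /* sparse; reads as zeros */
            close(fd);
            return NULL;
        }
        /* MAP_FILE is a no-op on Linux; MAP_SHARED is what matters. */
        void *ram = mmap(NULL, ram_size, PROT_READ | PROT_WRITE,
                         MAP_SHARED, fd, 0);
        close(fd);  /* the mapping stays valid after close */
        return ram == MAP_FAILED ? NULL : ram;
    }

A libvmi-style client would open the same file and mmap it the same way
(PROT_READ|PROT_WRITE, MAP_SHARED) to get the two-way view of guest memory.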
>>>> This is basically what memory-backend-file (and the legacy -mem-path
>>>> option) already does today, but it unlinks the file just after opening
>>>> it. We can change it to accept a full filename and/or an option to make
>>>> it not unlink the file after opening it.
>>>>
>>>> I don't remember if memory-backend-file is usable without -numa, but we
>>>> could make it possible somehow.
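For concreteness, a hedged example of how that could look on the command
line, assuming the change described above (mem-path naming a regular file
that is kept rather than unlinked; the path and sizes are placeholders):

    qemu-system-x86_64 -m 2G \
        -object memory-backend-file,id=mem0,size=2G,mem-path=/var/run/guest0.ram,share=on \
        -numa node,memdev=mem0 \
        ...

share=on is what requests a MAP_SHARED mapping, so the introspection
client's view of the backing file stays coherent with guest RAM.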
>>> Eduardo, I did try this approach. It takes a two-line change in exec.c:
>>> commenting out the unlink, and making sure MAP_SHARED is used when
>>> -mem-path and -mem-prealloc are given. It works beautifully, and
>>> libvmi accesses are fast. However, the VM is slowed down to a crawl,
>>> obviously, because each RAM access by the VM triggers a page fault on
>>> the mmapped file. I don't think having a crawling VM is desirable, so
>>> this approach goes out the door.
>> Uh, I don't understand why "each RAM access by the VM triggers a page
>> fault".  Can you show us the patch you used?
> Sorry, that explanation was too brief. Every time the guest flips a byte
> in physical RAM, I think that triggers a page write to the mmap'ed file.
> My understanding is that, with MAP_SHARED, each write to RAM triggers a
> file write, hence the slowness. These are the simple changes I made to
> test it, as a proof of concept.

Ah, that actually makes sense.  Thanks!

[...]
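The patch itself is elided above. Purely as an illustration of the two
changes Valerio describes (a hypothetical sketch, not his actual patch),
against a QEMU-2.4-era file_ram_alloc() in exec.c they might look roughly
like this:

    --- a/exec.c
    +++ b/exec.c
    @@ static void *file_ram_alloc(RAMBlock *block, ...)
    -    unlink(filename);
    +    /* unlink(filename);  keep the backing file visible to libvmi */
    @@
    -    area = mmap(0, memory, PROT_READ | PROT_WRITE,
    -                (block->flags & RAM_SHARED ? MAP_SHARED : MAP_PRIVATE),
    -                fd, 0);
    +    area = mmap(0, memory, PROT_READ | PROT_WRITE,
    +                MAP_SHARED, /* force sharing so external writes land */
    +                fd, 0);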


