qemu-devel

Re: [Qemu-devel] [PATCH 1/2] Dump: introduce a Filesystem in Userspace


From: Nan Li
Subject: Re: [Qemu-devel] [PATCH 1/2] Dump: introduce a Filesystem in Userspace
Date: Tue, 10 May 2016 05:26:02 -0600

>>> On 5/10/2016 at 5:42 PM, Petr Tesarik <address@hidden> wrote:
> On Tue, 10 May 2016 09:48:48 +0100
> "Daniel P. Berrange" <address@hidden> wrote:
> 
>> On Tue, May 10, 2016 at 07:59:41AM +0200, Petr Tesarik wrote:
>> > On Mon, 9 May 2016 09:52:28 -0600
>> > Eric Blake <address@hidden> wrote:
>> > 
>> > > On 05/07/2016 05:32 PM, Nan Li wrote:
>> > > > When running the command "dump-guest-memory", we usually need a
>> > > > large amount of storage to save the dump file to disk. Writing the
>> > > > file to a hard disk not only takes a lot of time, it also consumes
>> > > > the host's limited storage. In order to reduce the saving time and
>> > > > make it convenient for users to dump guest memory, we introduce a
>> > > > Filesystem in Userspace (FUSE) to keep the dump file in RAM. It is
>> > > > selectable at configure time and adds a build dependency on the
>> > > > "fuse-devel" package. It doesn't change the way guest memory is
>> > > > dumped.
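
(For context: the dump command itself is unchanged; the dump is simply
directed at a file on the FUSE mount, e.g. over QMP. The mount point below
is only illustrative, not a path defined by the patch.)

    { "execute": "dump-guest-memory",
      "arguments": { "paging": false,
                     "protocol": "file:/tmp/qemu-dump/vmcore" } }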
>> > > 
>> > > Why introduce FUSE? Can we reuse NBD instead?
>> > 
>> > Let me answer this one, because it's me who came up with the idea,
>> > although I wasn't involved in the actual implementation.
>> > 
>> > The idea is to get something more like Linux's /proc/kcore, but for a
>> > QEMU guest. So, yes, the same idea could be implemented as a standalone
>> > application which talks to QEMU using the gdb remote protocol and
>> > exposes the data in a structured form through a FUSE filesystem.
>> > 
>> > However, the performance of such a solution cannot get even close to
>> > that of exposing the data directly from QEMU. Maybe it's still the best
>> > way to start the project...
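
Just to illustrate the external approach described above: QEMU's built-in
gdbstub already lets an outside tool read guest memory over the gdb remote
protocol, roughly like this (the port, file name and address range are only
examples):

    # start QEMU with the gdbstub listening on tcp::1234 (other args omitted)
    qemu-system-x86_64 -s [...]

    (gdb) target remote localhost:1234
    (gdb) dump binary memory guest-ram.bin 0x0 0x40000000

But every page is copied over the protocol, which is exactly the performance
problem mentioned here.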
>> 
>> IIUC, the performance penalty will be related to the copying of guest
>> RAM. All the other supplementary information you want (register state
>> etc.) is low volume, so copying it over the QMP monitor command or via
>> libvirt monitor command passthrough should not be performance critical.
> 
> Agreed. Even if the number of guest CPUs ever rises to the order of
> thousands, the additional impact is negligible.
> 
>> So if we want to have an external program provide a /proc/kcore-like
>> service via FUSE, the problem we need to solve here is a mechanism
>> for providing efficient access to QEMU memory.
> 
> Indeed. This is the main reason for tinkering with QEMU sources at all.
> 
>> I think this can be done quite simply by having QEMU guest RAM exposed
>> via tmpfs or hugetlbfs as appropriate. This is the approach already used
>> for the vhost-user network backend, which runs in an external process and
>> likewise needs copy-free access to guest RAM pages.
> 
> Ha! We didn't realize this is an option. We can certainly have a look
> at implementing a generic mechanism for mapping QEMU guest RAM from
> another process on the host. And yes, this would address any
> performance concerns nicely.
> 

Agreed. It sounds like a good option. I will try to investigate it.
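
For reference, my understanding of the setup described here is a shared,
file-backed memory backend, along these lines (sizes and paths are only
examples):

    qemu-system-x86_64 \
        -m 4G \
        -object memory-backend-file,id=mem0,size=4G,mem-path=/dev/hugepages,share=on \
        -numa node,memdev=mem0 \
        [...]

With share=on, another process on the host can mmap() the same backing file
and read guest RAM without copying it through QEMU.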

>> Obviously this requires that users start QEMU in this particular setup
>> for RAM, but I don't think that's a particularly onerous requirement
>> as any non-trivial management application will already know how to do
>> this.
> 
> Agreed. This is not an issue. Our main target would be libvirt, which
> adds quite a bit of infrastructure already. ;-)
> 
> Thanks for your thoughts!
> 
> Petr T

Thanks very much for all your thoughts.

Nan Li



