From: Avi Kivity
Subject: Re: [Qemu-devel] Re: [PATCH][v2] Align file accesses with cache=off (O_DIRECT)
Date: Wed, 21 May 2008 19:44:14 +0300
User-agent: Thunderbird 2.0.0.14 (X11/20080501)

Jamie Lokier wrote:
> Avi Kivity wrote:
>> Here's a summary of the use cases I saw so far:
>>
>> - casual use, no critical data: write-back cache
>>
>> - backing file shared among many guests: read-only, cached
>>
>> - desktop system, but don't lose my data: O_SYNC
>>   (significant resources on the host)
>>
>> - dedicated virtualization engine: O_DIRECT
>>   (most host resources assigned to guests)
>
> Sounds alright, but on _my_ desktop system (a laptop), I would use
> O_DIRECT.
>
> There isn't enough RAM in my system to be happy duplicating data in
> guests and hosts at the same time.  VMs are quite demanding on RAM.


Sure, if you're low on resources, and aren't rebooting often, that's the right thing to do.

> However, if you find a way to map host cached pages into the guest
> without copying - so sharing the RAM - that would be excellent.  It
> can be done in principle, by remapping pages to satisfy IDE/SCSI DMA
> requests.  I don't know if it would be fast enough.  Perhaps it would
> work better in KVM than QEMU.

Sounds like a memory management nightmare. With mmu notifiers (or plain qemu), though, it can be done. Have the backing file also contain an area for guest RAM. Use a nonlinear mapping to map this area as guest memory. If the guest issues a properly-aligned read, call remap_file_pages() for that page, and write-protect it. When you get a protection violation (as the guest writes to that memory), copy it to the RAM area and remap it again.

I don't think remap_file_pages() supports different protections in a single VMA; that could kill the whole idea.

--
error compiling committee.c: too many arguments to function
