
From: Avi Kivity
Subject: [Qemu-devel] Re: KVM call agenda for Apr 27
Date: Tue, 27 Apr 2010 11:14:59 +0300
User-agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.9.1.9) Gecko/20100330 Fedora/3.0.4-1.fc12 Thunderbird/3.0.4

On 04/27/2010 01:36 AM, Anthony Liguori wrote:

> A few comments:

> 1) The problem was not the block watermark itself but generating a notification when the watermark threshold is crossed. It's a heuristic and should be implemented based on polling block stats.

Polling for an event that never happens is bad engineering. At what frequency do you poll? Poll too often and you burn CPU on stats that never change; poll too rarely and you notice the threshold only after it has been crossed. You're forcing the user into a lose-lose tradeoff.
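
To make the lose-lose concrete, here is a minimal sketch (hypothetical code, nothing that exists in qemu or libvirt) of the loop a management app is forced into when all it has is polled stats:

#include <stdint.h>
#include <stdio.h>
#include <unistd.h>

/* stand-in for querying allocated bytes from block stats; a real
 * client would issue a monitor query here (hypothetical stub) */
static uint64_t query_allocated_bytes(void)
{
    static uint64_t allocated;
    return allocated += 64 * 1024 * 1024;   /* simulate guest writes */
}

int main(void)
{
    const uint64_t threshold = 1ULL << 30;  /* grow the volume at 1 GB */
    const unsigned interval = 5;            /* the lose-lose knob */

    for (;;) {
        if (query_allocated_bytes() >= threshold) {
            printf("watermark reached, grow the LV now\n");
            return 0;
        }
        /* small interval: wasted wakeups for an event that may never
         * come; large interval: the threshold is noticed too late */
        sleep(interval);
    }
}

Whatever interval you pick is wrong for somebody. An event fires once, exactly when it's needed.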

> Otherwise, we'll be adding tons of events to qemu that we'll struggle to maintain.

That's not a valid reason to reject a user requirement. We may argue that the requirement is bogus, or that the suggested implementation is wrong and point in a different direction, but rejecting it because we might have to add more code in the future due to other requirements is ... well, I can't find a word for it.


> 2) A block plugin doesn't solve the problem if it's just at the BlockDriverState level, because it can't interact with qcow2.

Why not? We have a layered model: guest -> qcow2 -> plugin (sends event) -> raw-posix. We just need to insert the plugin at the appropriate layer.
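
To be concrete about the shape of it, here is a toy model of the layering; this is not the real BlockDriver API, just an illustration:

#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

typedef struct Layer Layer;
struct Layer {
    int (*write)(Layer *l, uint64_t off, const void *buf, size_t len);
    Layer *below;           /* next driver down the stack */
    uint64_t watermark;     /* filter only: threshold to report */
    uint64_t high_off;      /* filter only: highest offset written */
};

/* stacked as: guest -> qcow2 -> watermark filter -> raw-posix */
static int watermark_write(Layer *l, uint64_t off, const void *buf,
                           size_t len)
{
    if (off + len > l->high_off) {
        l->high_off = off + len;
        if (l->high_off >= l->watermark) {
            fprintf(stderr, "event: watermark %llu crossed\n",
                    (unsigned long long)l->watermark);
        }
    }
    return l->below->write(l->below, off, buf, len);  /* pass through */
}

Below qcow2 the filter sees writes to the image file itself, and qcow2 allocates new clusters at end-of-file, so the highest offset written is exactly the watermark. Neither qcow2 above nor raw-posix below needs to know the filter is there.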


> 3) For general block plugins, it's probably better to tackle userspace block devices. We have CUSE and FUSE already; a BUSE is the logical conclusion.

We also have an nbd client.
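
The data plane such a userspace device would have to implement is small. Here is a sketch of the classic (old-style) nbd serve loop against a flat backing file; negotiation is omitted, error handling is trimmed to nothing, and the wire details should be checked against the protocol doc:

#include <endian.h>
#include <stdint.h>
#include <string.h>
#include <unistd.h>

#define NBD_REQUEST_MAGIC 0x25609513
#define NBD_REPLY_MAGIC   0x67446698
#define NBD_CMD_READ      0
#define NBD_CMD_WRITE     1

struct nbd_request {            /* all fields big-endian on the wire */
    uint32_t magic;
    uint32_t type;
    char     handle[8];
    uint64_t from;
    uint32_t len;
} __attribute__((packed));

struct nbd_reply {
    uint32_t magic;
    uint32_t error;
    char     handle[8];
} __attribute__((packed));

/* serve one request from 'sock' against the flat file 'backing_fd' */
static int serve_one(int sock, int backing_fd)
{
    struct nbd_request req;
    struct nbd_reply reply = { .magic = htobe32(NBD_REPLY_MAGIC) };
    static char buf[1024 * 1024];

    if (read(sock, &req, sizeof(req)) != sizeof(req) ||
        be32toh(req.magic) != NBD_REQUEST_MAGIC)
        return -1;
    memcpy(reply.handle, req.handle, sizeof(reply.handle));

    uint64_t from = be64toh(req.from);
    uint32_t len  = be32toh(req.len);
    if (len > sizeof(buf))
        return -1;                      /* sketch: no chunking */

    switch (be32toh(req.type)) {
    case NBD_CMD_READ:
        pread(backing_fd, buf, len, from);
        write(sock, &reply, sizeof(reply));
        write(sock, buf, len);          /* reply header, then data */
        return 0;
    case NBD_CMD_WRITE:
        read(sock, buf, len);           /* sketch: assumes full read */
        pwrite(backing_fd, buf, len, from);
        write(sock, &reply, sizeof(reply));
        return 0;
    default:
        return -1;
    }
}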

Here's another option: an nbd-like protocol that remotes all BlockDriver operations, except read and write, over a unix domain socket. The open operation returns an fd (SCM_RIGHTS strikes again) that is then used directly for read and write. This can be used to implement snapshots over LVM, for example.
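
The fd-passing itself is plain SCM_RIGHTS plumbing. A client-side sketch (the framing around it is made up; only the cmsg mechanics are the standard API):

#include <string.h>
#include <sys/socket.h>
#include <sys/uio.h>

/* receive one byte of payload plus the fd the server attached to it;
 * returns the fd, or -1 */
static int recv_open_fd(int sock)
{
    char byte;
    struct iovec iov = { .iov_base = &byte, .iov_len = 1 };
    union {                             /* aligned cmsg buffer */
        struct cmsghdr align;
        char buf[CMSG_SPACE(sizeof(int))];
    } u;
    struct msghdr msg = {
        .msg_iov        = &iov,
        .msg_iovlen     = 1,
        .msg_control    = u.buf,
        .msg_controllen = sizeof(u.buf),
    };
    struct cmsghdr *cmsg;
    int fd = -1;

    if (recvmsg(sock, &msg, 0) <= 0)
        return -1;

    for (cmsg = CMSG_FIRSTHDR(&msg); cmsg; cmsg = CMSG_NXTHDR(&msg, cmsg)) {
        if (cmsg->cmsg_level == SOL_SOCKET &&
            cmsg->cmsg_type == SCM_RIGHTS) {
            memcpy(&fd, CMSG_DATA(cmsg), sizeof(fd));
            break;
        }
    }
    return fd;      /* subsequent read()/write() go straight here */
}

Everything slow (open, snapshot, getlength) stays on the control socket; the data path is just read and write on the returned fd, so the fast path costs nothing extra.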

--
Do not meddle in the internals of kernels, for they are subtle and quick to 
panic.




