
[Qemu-devel] Re: A new direction for vmchannel?


From: Anthony Liguori
Subject: [Qemu-devel] Re: A new direction for vmchannel?
Date: Sat, 24 Jan 2009 11:52:06 -0600
User-agent: Thunderbird 2.0.0.19 (X11/20090105)

Daniel P. Berrange wrote:
On Fri, Jan 23, 2009 at 08:45:33AM -0600, Anthony Liguori wrote:
The userspace configuration aspects of the current implementation of vmchannel are pretty annoying. Moreover, we would like to make use of something like vmchannel in a kernel driver and I fear that it's going to be difficult to do that.

So here's an alternative proposal.

Around 2.6.27ish, Eric and I added 9p over virtio support to v9fs. This is all upstream. We backported the v9fs modules all the way back to 2.6.18. I have a 9p client and server library and patches available for QEMU. We were using this for a file system pass through but we could also use it as a synthetic file system in the guest (like sysfs).

The guest would just have to mount a directory in a well known location, and then you could get vmchannel like semantics by just opening a file read/write. Better yet though would be if we actually exposed vmchannel as 9p so that management applications could implement sysfs-like hierarchies.
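
For concreteness, here is a minimal sketch of that guest side, assuming a hypothetical mount tag "vmchannel", mount point /var/lib/vmchannel, and channel file "control" (only the mount(2)/open(2)/read(2)/write(2) usage itself is standard):

/* Sketch only: mount tag, mount point and file name are made up for
 * illustration; the mount point must already exist and this needs root. */
#include <stdio.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/mount.h>

int main(void)
{
    /* Equivalent to: mount -t 9p -o trans=virtio vmchannel /var/lib/vmchannel */
    if (mount("vmchannel", "/var/lib/vmchannel", "9p", 0, "trans=virtio") < 0) {
        perror("mount");
        return 1;
    }

    /* A channel is just a file under the mount point, opened read/write. */
    int fd = open("/var/lib/vmchannel/control", O_RDWR);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    const char *req = "hello host\n";
    write(fd, req, strlen(req));

    char buf[256];
    ssize_t n = read(fd, buf, sizeof(buf));
    if (n > 0)
        fwrite(buf, 1, n, stdout);

    close(fd);
    return 0;
}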

I think there could be a great deal of utility in something like this. For portability to Windows (if an app cared), it would have to access the mount point through a library of some sort. We would need a Windows virtio-9p driver that exposed the 9p session down to userspace. We could then use our 9p client library in the portability library for Windows.

Virtually all of the code is available for this today, the kernel bits are already upstream, there's a reasonable story for Windows, and there's very little that the guest can do to get in the way of things.

The only thing that could potentially be an issue is SELinux. I assume you'd have to write an SELinux policy for the guest application anyway, though, so it shouldn't be a problem.

For use cases where you are exposing metadata from the host to the guest
this would be a very convenient approach indeed. As asked elsewhere in this
thread, my main thought would be about how well it suits an application that
wants a generic stream-based connection between host & guest? Efficient integration into a poll(2)-based event loop would be key to that.

You mean for a very large number of files (determining which property has changed?).

The way you would do this today, without special inotify support, is to have a special file in the hierarchy called "change-notify". You write a delimited list of files to it, and whenever one of those files becomes readable, "change-notify" itself becomes readable (returning a delimited list of the files that have changed since the last read).

This way, you get a single file you can select(2) on that covers a very large number of files. That said, it would be nice to add proper inotify support to v9fs too.
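
As a sketch of how a guest application might use that change-notify scheme, assuming a hypothetical path and a newline-delimited format (only the poll(2)/read(2)/write(2) usage is standard):

/* Sketch only: the "change-notify" file name, path and delimiter format
 * are assumptions for illustration. */
#include <stdio.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>
#include <poll.h>

int main(void)
{
    int fd = open("/var/lib/vmchannel/change-notify", O_RDWR);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    /* Register interest in a delimited list of files. */
    const char *watch = "status\nconfig\n";
    if (write(fd, watch, strlen(watch)) < 0) {
        perror("write");
        return 1;
    }

    /* One fd to poll on, no matter how many files are being watched. */
    struct pollfd pfd = { .fd = fd, .events = POLLIN };
    while (poll(&pfd, 1, -1) > 0) {
        char buf[4096];
        ssize_t n = read(fd, buf, sizeof(buf) - 1);
        if (n <= 0)
            break;
        buf[n] = '\0';
        /* buf now holds the list of files changed since the last read. */
        printf("changed:\n%s", buf);
    }

    close(fd);
    return 0;
}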

Regular files don't ordinarily offer that kind of ability, and it's not clear whether FIFOs would be provided for in v9fs between host/guest?

I'm going to put together a patch this weekend and I'll include a streaming example. Basically, you just ignore the file offset and read/write to the file to your heart's content.
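
Until that patch is out, a minimal sketch of what such a streaming client could look like, assuming a hypothetical path "/var/lib/vmchannel/stream" and a request/response exchange with the host side:

/* Sketch only: the path and the "ping" protocol are made up; the point is
 * that the file offset is ignored and the fd is used like a pipe. */
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/var/lib/vmchannel/stream", O_RDWR);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    for (int i = 0; i < 3; i++) {
        char req[64], resp[256];

        /* Write a request; the offset is irrelevant to the 9p server. */
        int len = snprintf(req, sizeof(req), "ping %d\n", i);
        if (write(fd, req, len) < 0)
            break;

        /* Read whatever the host side sends back. */
        ssize_t n = read(fd, resp, sizeof(resp));
        if (n <= 0)
            break;
        fwrite(resp, 1, n, stdout);
    }

    close(fd);
    return 0;
}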

Regards,

Anthony Liguori

In any case, if we have a usable 9p backend for QEMU, I don't see why we shouldn't integrate it into QEMU, regardless of whether it serves the more general vmchannel use cases. Sharing filesystems is an interesting idea in its own right, after all.
I also really don't like the guest deployment / configuration complexity that
accompanies the NIC-device-based vmchannel approach. There are just far too many things that can go wrong with it wrt the guest OS & apps using networking. IMHO, the core motivation of vmchannel is to have a secure guest <-> host data transport that can be relied upon from the moment guest userspace appears, preferably with zero guest admin configuration requirements and no need for authentication keys to establish guest identity. UNIX domain sockets are a great example of this ideal, providing a reliable data stream for localhost before the network makes any appearance,
with builtin client authentication via SCM_CREDS.
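
As a reminder of what that ideal looks like in practice, a minimal sketch of the host-side check on a connected AF_UNIX socket, using SO_PEERCRED (the Linux counterpart of the SCM_CREDS mechanism); connfd is assumed to be an already-accepted connection:

/* Sketch only: connfd is assumed to come from accept(2) on an AF_UNIX
 * listening socket; no keys or client-side configuration are involved. */
#define _GNU_SOURCE
#include <stdio.h>
#include <sys/socket.h>

int print_peer_credentials(int connfd)
{
    struct ucred cred;
    socklen_t len = sizeof(cred);

    if (getsockopt(connfd, SOL_SOCKET, SO_PEERCRED, &cred, &len) < 0) {
        perror("getsockopt(SO_PEERCRED)");
        return -1;
    }

    printf("peer pid=%d uid=%d gid=%d\n",
           (int)cred.pid, (int)cred.uid, (int)cred.gid);
    return 0;
}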

Regards,
Daniel




