Re: [Qemu-devel] Block I/O optimizations


From: Stefan Hajnoczi
Subject: Re: [Qemu-devel] Block I/O optimizations
Date: Tue, 26 Feb 2013 17:45:30 +0100
User-agent: Mutt/1.5.21 (2010-09-15)

On Mon, Feb 25, 2013 at 08:45:58PM +0200, Abel Gordon wrote:
> 
> 
> Stefan Hajnoczi <address@hidden> wrote on 25/02/2013 02:50:56 PM:
> > You also create a
> > privileged thread that has access to all guests on the host - a security
> > bug here compromises all guests.  This can be fine for private
> > deployments where guests are trusted.  For untrusted guests and public
> > clouds it seems risky.
> 
> But is this significantly different from any other security bug in the
> host, qemu, kvm...?  If you perform the I/O virtualization in a separate
> (non-qemu) process, you have a significantly smaller, self-contained and
> bounded trusted computing base (TCB) from a source code perspective, as
> opposed to a single huge user-space process where it's very difficult to
> define boundaries and find potential security holes.

I disagree here.

The QEMU process is no more privileged than guest ring 0.  It can only
mess with resources that the guest itself has access to (CPU, disk,
network).

The QEMU process cannot access other guests.  SELinux locks it down so
it cannot access host files or other resources.

This is a big difference compared to kvm.ko which has host ring 0
access.  And it's still a big difference compared to a shared storage
process with access to all guest disks.

> > Maybe a hybrid approach is possible where exit-less operation is kept but I/O
> > emulation still happens in per-guest userspace threads.  Not sure how
> > much performance can be retained by doing that - e.g. a kernel driver
> > that allows processes to bind an eventfd to a memory notification area.
> > The kernel driver does polling in a single thread and signals eventfds.
> > Userspace threads do the actual I/O emulation.
> 
> Sounds interesting... however, once the userspace thread runs, the driver
> loses control (assuming you don't have spare cores).  I mean, a userspace
> I/O thread will probably consume all of its time slice, while the driver
> may prefer to assign fewer (or more) cycles to a specific I/O thread
> based on the ongoing activity of all the VMs.
> 
> Using a shared thread, you can optimize how virtual/emulated I/O is
> scheduled without actually modifying the Linux kernel scheduler code.

Can you explain the fine-grained I/O scheduling in more detail or post
some code?

Maybe you're thinking about something like a "budget" (as in netpoll in
Linux), where the userspace I/O thread only processes n requests each time
it is kicked.  This way we avoid hogging resources.  I see nothing that
prevents a hybrid model from implementing budgets.
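
Something along these lines, roughly - just a sketch to make the idea
concrete.  request_pop(), request_pending() and emulate_request() are
made-up placeholders rather than existing QEMU or kernel interfaces, and
the eventfd is assumed to be signalled by the polling driver:

#include <stdint.h>
#include <stdbool.h>
#include <unistd.h>

#define BUDGET 32   /* max requests emulated per kick, netpoll-style */

struct io_request;                              /* placeholder */
struct io_request *request_pop(void);           /* placeholder */
bool request_pending(void);                     /* placeholder */
void emulate_request(struct io_request *req);   /* placeholder */

static void io_thread_loop(int notify_fd)
{
    uint64_t kicks;

    for (;;) {
        /* Block until the polling driver signals the eventfd. */
        if (read(notify_fd, &kicks, sizeof(kicks)) != sizeof(kicks)) {
            continue;
        }

        /* Spend at most BUDGET requests on this kick so one busy
         * guest cannot hog the thread. */
        for (int i = 0; i < BUDGET; i++) {
            struct io_request *req = request_pop();
            if (!req) {
                break;
            }
            emulate_request(req);
        }

        /* Leftover work: self-kick and go around again instead of
         * processing it immediately. */
        if (request_pending()) {
            uint64_t one = 1;
            if (write(notify_fd, &one, sizeof(one)) < 0) {
                /* nothing sensible to do in a sketch */
            }
        }
    }
}

The budget (and any per-guest weighting) lives entirely in the userspace
thread, so the shared polling driver doesn't need to know about it.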

It's hard to discuss further without details.

Stefan


