qemu-devel



From: Ronen Hod
Subject: Re: [Qemu-devel] Better qemu/kvm defaults (was Re: [RFC PATCH 0/4] Gang scheduling in CFS)
Date: Sun, 01 Jan 2012 16:01:06 +0200
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:9.0) Gecko/20111222 Thunderbird/9.0

On 01/01/2012 12:16 PM, Dor Laor wrote:
On 12/29/2011 06:16 PM, Anthony Liguori wrote:
On 12/29/2011 10:07 AM, Dor Laor wrote:
On 12/26/2011 11:05 AM, Avi Kivity wrote:
On 12/26/2011 05:14 AM, Nikunj A Dadhania wrote:

btw you can get an additional speedup by enabling x2apic, for
default_send_IPI_mask_logical().

In the host?


In the host, for the guest:

qemu -cpu ...,+x2apic


It seems to me that we should improve our default flags.
Too often users fail to supply the huge command-line options that we
require. Honestly, we can't blame them: there are so many flags and so
many use cases that it's just too hard for humans to get it right.

You might want to take migration into account, i.e., the target host's optimal setup. We also need to beware of too much automation, since hardware changes might void Windows license activations. Some of the parameters will depend on dynamic factors such as the guest's total number of CPUs, memory, sharing (KSM), or whatever. As a minimum, we can automatically suggest the qemu parameters and the host setup.

Ronen.


I propose a basic idea and folks are welcome to discuss it:

1. Improve qemu/kvm defaults
Break the current backward compatibility (but add a --default-backward-compat-mode) and set better values for:
- rtc slew time

What do you specifically mean?

-rtc localtime,driftfix=slew


- cache=none

I'm not sure I see this as a "better default" particularly since
O_DIRECT fails on certain file systems. I think we really need to let
WCE be toggleable from the guest and then have a caching mode independent
of WCE. We then need some heuristics to only enable cache=off when we
know it's safe.

cache=none is still faster when the filesystem supports it.
qemu can test-run O_DIRECT and fall back to a cached mode, or just probe the filesystem's capabilities.
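The test-run idea could be sketched as a small shell probe (a sketch only, not qemu's actual behavior; the helper name and probe location are assumptions):

```shell
# probe_cache_mode: attempt a minimal O_DIRECT write in the given directory.
# dd exposes O_DIRECT via oflag=direct, so a failure here means the
# filesystem (e.g. tmpfs) rejects O_DIRECT and we must fall back to a
# cached mode instead of cache=none.
probe_cache_mode() {
    dir=$1
    probe="$dir/.o_direct_probe.$$"
    if dd if=/dev/zero of="$probe" oflag=direct bs=512 count=1 2>/dev/null; then
        echo none          # O_DIRECT works: safe to default to cache=none
    else
        echo writeback     # fall back to a cached mode
    fi
    rm -f "$probe"
}

mode=$(probe_cache_mode "${TMPDIR:-/tmp}")
echo "suggested: -drive file=guest.img,cache=$mode"
```

The same pattern could run once at startup per image directory, so the fallback never silently degrades an explicit user-requested cache=none.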


- x2apic, maybe enhance qemu64 or move to -cpu host?

Alex posted a patch for this. I'm planning on merging it although so far
no one has chimed up either way.

- aio=native|threads (auto-sense?)

aio=native is unsafe as a default because linux-aio is just fubar. It
falls back to synchronous I/O if the underlying filesystem doesn't
support aio, and there's no way in userspace to probe whether it's
actually supported, either...

Can we test-run this too? Maybe as a separate qemu mode or even binary that given a qemu cmdline, it will try to suggest better parameters?

- use virtio devices by default

I don't think this is realistic since appropriately licensed signed
virtio drivers do not exist for Windows. (Please note the phrase
"appropriately licensed signed").

What's the percentage of qemu invocations with a Windows guest and a short command line? My hunch is that a plain short command line indicates a developer, who will probably run a Linux guest.


- more?

Different defaults may be picked automatically when TCG|KVM used.
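Pulling the points above together, an explicit invocation with these improved defaults might look roughly like this (image path and NIC model are illustrative, not proposed values):

```shell
qemu-system-x86_64 \
    -rtc localtime,driftfix=slew \
    -cpu qemu64,+x2apic \
    -drive file=guest.img,if=virtio,cache=none,aio=native \
    -net nic,model=virtio -net user
```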

2. External hardening configuration file kept in qemu.git
For non-qemu/kvm-specific definitions like the I/O scheduler we
should maintain a script in our tree that sets/senses the optimal
settings of the host kernel (maybe a similar one for the guest).

What are "appropriate host settings" and why aren't we suggesting that
distros and/or upstream just set them by default?

It's hard to set the right default for a distribution, since the same distro has to optimize for various usages of the same OS. For example, Fedora has tuned-adm with these available profiles:
- desktop-powersave
- server-powersave
- enterprise-storage
- spindown-disk
- laptop-battery-powersave
- default
- throughput-performance
- latency-performance
- laptop-ac-powersave

We need to keep on recommending the best profile for virtualization; for Fedora I think it's either enterprise-storage or maybe throughput-performance.

If we have such a script, it can call the matching tuned profile instead of tweaking every /sys option.
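As a sketch of what such a script could do (the profile names come from the Fedora list above; the helper name and workload hints are my own assumptions):

```shell
# select_profile maps a coarse workload hint to one of the Fedora tuned
# profiles listed above, defaulting to throughput-performance.
select_profile() {
    case $1 in
        storage) echo enterprise-storage ;;
        latency) echo latency-performance ;;
        *)       echo throughput-performance ;;
    esac
}

# Prefer tuned-adm when it is installed, instead of poking /sys directly;
# otherwise the script would fall through to per-knob tweaks.
if command -v tuned-adm >/dev/null 2>&1; then
    tuned-adm profile "$(select_profile storage)"
fi
```

Keeping the profile mapping in one place means the qemu tree only has to track "which profile suits virtualization" rather than every individual host knob.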


Regards,

Anthony Liguori

HTH,
Dor
