

From: Jan Kiszka
Subject: Re: [Qemu-devel] [RFC][PATCH 0/2] uq/master: Basic MSI support for in-kernel irqchip mode
Date: Wed, 28 Mar 2012 19:18:46 +0200
User-agent: Mozilla/5.0 (X11; U; Linux i686 (x86_64); de; rv:1.8.1.12) Gecko/20080226 SUSE/2.0.0.12-1.1 Thunderbird/2.0.0.12 Mnenhy/0.7.5.666

On 2012-03-28 19:06, Michael S. Tsirkin wrote:
> On Wed, Mar 28, 2012 at 06:53:01PM +0200, Jan Kiszka wrote:
>> On 2012-03-28 18:30, Michael S. Tsirkin wrote:
>>> On Wed, Mar 28, 2012 at 06:00:03PM +0200, Jan Kiszka wrote:
>>>> On 2012-03-28 17:43, Michael S. Tsirkin wrote:
>>>>> On Wed, Mar 28, 2012 at 01:36:15PM +0200, Jan Kiszka wrote:
>>>>>> On 2012-03-28 13:31, Michael S. Tsirkin wrote:
>>>>>>>>>>> Also, how would this support irqfd in the future? Will we have to
>>>>>>>>>>> rip it all out and replace with per-device tracking that we
>>>>>>>>>>> have today?
>>>>>>>>>>
>>>>>>>>>> Irqfd and kvm device assignment will require additional interfaces 
>>>>>>>>>> (of
>>>>>>>>>> the kvm core in QEMU) via which you will be able to request stable
>>>>>>>>>> routes from such sources to specified MSIs. That will be widely
>>>>>>>>>> orthogonal to what is done in these patches here.
>>>>>>>>>
>>>>>>>>> Yes but not exactly as they will conflict for resources, right?
>>>>>>>>> How do you plan to solve this?
>>>>>>>>
>>>>>>>> As done in my original series: If a static route requires a pseudo GSI
>>>>>>>> and there are none free, we simply flush the dynamic MSI routes.
>>>>>>>
>>>>>>> Right. So static routes take precedence. This means that in effect
>>>>>>> we will have two APIs in qemu: for fast MSIs and for slow ones,
>>>>>>> the advantage of the slow APIs being that they are easier to use,
>>>>>>> right?
>>>>>>
>>>>>> We will have two APIs depending on the source of the MSI. Special
>>>>>> sources are the exception while emulated ones are the majority. And for
>>>>>> the latter we should try very hard to keep things simple and clean.
>>>>>>
>>>>>> Jan
>>>>>
>>>>> I assume this means yes :) So how about we replace the hash table with a
>>>>> single GSI reserved for this purpose, and use that for each interrupt?
>>>>> This will work fine for slow paths such as hotplug controller, yes it
>>>>> will be slow but *predictably* slow.
>>>>
>>>> AHCI, HDA, virtio-block, and every other userspace MSI user will suffer
>>>> - I can't imagine you really want this. :)
>>>
>>> These should use static GSI routes or the new API if it exists.
>>
>> There will be an API to request an irqfd and associate it with a MSI
>> message and the same for an assigned device IRQ/MSI vector. But none for
>> userspace generated messages. That would mean hooking deep into the MSI
>> layer again - or even the devices themselves.
> 
> What I had in mind is an API like
> 
> MSIVector *get_msi_vector(PCIDevice *)
> put_msi_vector(MSIVector *)
> 
> and then devices just need to keep an array of these vectors
> around. Is this really that bad?

Yes, because the points where we get and put mean tracking what goes on
in the vectors beyond what is actually needed. And the above is just the
beginning. Moreover, it again assumes that only PCI devices can send
MSIs, which is not true.
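
For reference, the API Michael sketches above might look like the toy model below. Only the names `MSIVector`, `get_msi_vector`, and `put_msi_vector` come from the mail; the pool, fields, and allocation logic are invented purely to illustrate the per-device tracking burden Jan objects to.

```c
/* Hypothetical sketch of the get/put vector API quoted above.
 * Everything beyond the two function names is an assumption. */
#include <stdbool.h>
#include <stddef.h>

#define MAX_MSI_VECTORS 8

typedef struct PCIDevice PCIDevice;

typedef struct MSIVector {
    PCIDevice *dev;   /* owning device, NULL while free */
    int virq;         /* stand-in for a pseudo GSI in the routing table */
    bool in_use;
} MSIVector;

static MSIVector vector_pool[MAX_MSI_VECTORS];

/* Each device would have to call this up front and keep the result
 * around for the lifetime of the vector - the per-device state that
 * the reply above argues goes beyond what is actually needed. */
MSIVector *get_msi_vector(PCIDevice *dev)
{
    for (size_t i = 0; i < MAX_MSI_VECTORS; i++) {
        if (!vector_pool[i].in_use) {
            vector_pool[i].in_use = true;
            vector_pool[i].dev = dev;
            vector_pool[i].virq = (int)i;
            return &vector_pool[i];
        }
    }
    return NULL;      /* pool exhausted */
}

void put_msi_vector(MSIVector *vec)
{
    vec->in_use = false;
    vec->dev = NULL;
}
```

Note also that keying the API on `PCIDevice *` bakes in the assumption that only PCI devices send MSIs, which the reply above disputes (e.g. HPET-style sources).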

You may recall my first series, which tried to hide a bit of this mess
behind the MSIRoutingCache concept. It was way more complex and invasive
than the new approach.

> 
>>> Changing GSI routing when AHCI wants to send an interrupt
>>> will cause performance trouble in unpredictable ways:
>>> it triggers RCU write side and that can be *very* slow.
>>
>> That's why we will have direct MSI injection for them. This here is just
>> to make it work without that feature in a reasonable, non-intrusive way.
>>
>> If it really hurts that much, we need to invest more in avoiding cache
>> flushes. But I'm skeptical there is much to gain compared to the current
>> qemu-kvm model: every vector change that results in a route change
>> passes the RCU write side - and serializes other QEMU userspace exits.
> 
> Yes vector changes are slow but the cache would make route changes
> on interrupt injection, as opposed to rebalancing.

What's the difference? It hits the whole VM (via the userspace
hypervisor part) at unpredictable points during runtime.
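
The allocation policy discussed earlier in the thread (static routes take precedence; when a static route needs a pseudo GSI and none is free, the dynamic MSI routes are flushed) can be modeled as follows. All names are invented for illustration; this is not QEMU code.

```c
/* Toy model of the GSI allocation policy from the thread: dynamic MSI
 * routes are cached until a static route needs a GSI and none is free,
 * at which point all dynamic entries are dropped and re-established
 * lazily later. Names and table size are assumptions. */
#define MAX_GSI 4

typedef enum { GSI_FREE, GSI_DYNAMIC, GSI_STATIC } GSIState;

static GSIState gsi_table[MAX_GSI];

static int alloc_free_gsi(GSIState kind)
{
    for (int i = 0; i < MAX_GSI; i++) {
        if (gsi_table[i] == GSI_FREE) {
            gsi_table[i] = kind;
            return i;
        }
    }
    return -1;
}

/* Dynamic routes never evict anything; they simply fail when the
 * table is full (the caller would then flush and retry, or fall back
 * to a slow path). */
int alloc_dynamic_route(void)
{
    return alloc_free_gsi(GSI_DYNAMIC);
}

/* Static routes (irqfd, device assignment) take precedence: on
 * exhaustion, flush every dynamic route and retry once. This is the
 * point where the MSI route cache gets invalidated. */
int alloc_static_route(void)
{
    int gsi = alloc_free_gsi(GSI_STATIC);
    if (gsi < 0) {
        for (int i = 0; i < MAX_GSI; i++) {
            if (gsi_table[i] == GSI_DYNAMIC) {
                gsi_table[i] = GSI_FREE;
            }
        }
        gsi = alloc_free_gsi(GSI_STATIC);
    }
    return gsi;
}
```

In this model a flush, and hence the RCU-write-side cost of updating the kernel routing table, is triggered by whichever allocation happens to exhaust the table, which is exactly the "unpredictable points during runtime" concern above.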

> 
>> That _is_ already a bottleneck. Every MSI IRQ balancing between CPUs in
>> the guest should trigger this e.g.
>>
>> What I would really like to avoid is introducing invasive abstractions
>> and hooks to QEMU that optimize for a scenario that is obsolete mid to
>> long term.
> 
> If Avi adds the dynamic API, I'm fine with this hack as a fallback.
> Let's see what happens ...

I'll refresh that KVM patch and give it another try.

Jan

-- 
Siemens AG, Corporate Technology, CT T DE IT 1
Corporate Competence Center Embedded Linux


