qemu-devel

Re: [Qemu-devel] [RFC] queue_work proposal


From: Glauber Costa
Subject: Re: [Qemu-devel] [RFC] queue_work proposal
Date: Thu, 3 Sep 2009 09:11:35 -0300
User-agent: Jack Bauer

On Thu, Sep 03, 2009 at 02:32:45PM +0300, Avi Kivity wrote:
> On 09/03/2009 02:15 PM, Glauber Costa wrote:
>>
>>> on_vcpu() and queue_work() are fundamentally different (yes, I see the
>>> wait parameter, and I think there should be two separate functions for
>>> such different behaviours).
>>>      
>> Therefore, the name change. The exact on_vcpu behaviour, however, can be
>> implemented ontop of queue_work().
>
> Will there be any use for asynchronous queue_work()?
>
> It's a dangerous API.
Initially, I thought we could use it for batching, if we forced a flush at the
end of a sequence of operations. This can make things faster if we are doing a
bunch of calls in a row from the wrong thread.

>
>> Instead of doing that, I opted for using it
>> implicitly inside kvm_vcpu_ioctl, to guarantee that vcpu ioctls will always 
>> run
>> on the right thread context.
>
> I think it's reasonable to demand that whoever calls kvm_vcpu_ioctl()  
> know what they are doing (and they'll get surprising results if it  
> switches threads implicitly).
I respectfully disagree. It is not that I want people not to know what they
are doing, but I believe that forcing something that can only run in a
specific thread to actually run there gives us a much saner interface, and
will make the code a lot more readable and maintainable.

>
>> Looking at qemu-kvm, it seems that there are a couple
>> of other functions that are not ioctls, and need on_vcpu semantics. Using 
>> them becomes
>> a simple matter of doing:
>>
>>     queue_work(env, func, data, 1);
>>
>> I really don't see the big difference you point. They are both there to 
>> force a specific
>> function to be executed in the right thread context.
>>    
>
> One of them is synchronous, meaning the data can live on stack and no  
> special synchronization is needed, while the other is synchronous,  
> meaning explicit memory management and end-of-work synchronization is  
> needed.

I will assume you meant "the other is asynchronous". It does not need to be.
I thought about including the asynchronous version in this RFC to leave the
door open for performance improvements *if* we need them. But again: the
absolute majority of the calls will be local, so it is not that important.

>
>>> Why do we need queue_work() in the first place?
>>>      
>> To force a function to be executed in the correct thread context.
>> Why do we need on_vcpu in the first place?
>>    
>
> on_vcpu() is a subset of queue_work().  I meant, why to we need the  
> extra functionality?
As I said, if you strongly oppose it, we don't really need it.

>
>>> Is there a way to limit the queue size to prevent overflow?
>>>      
>> It can be, but it gets awkward. What do you do when a function needs to 
>> execute on another thread, but can't? Block it? Refuse?
>>    
>>    
>
> What if the thread is busy?  You grow the queue to an unbounded size?
>
>> We could pick one, but I see no need. The vast majority of work will never 
>> get queued,
>> since we'll be in the right context already.
>>    
>
> A more powerful API comes with increased responsibilities.
You suddenly sound like Spider-Man.

