Re: [Qemu-devel] [PATCH 1/3] Introduce threadlets


From: Venkateswararao Jujjuri (JV)
Subject: Re: [Qemu-devel] [PATCH 1/3] Introduce threadlets
Date: Tue, 19 Oct 2010 20:19:38 -0700
User-agent: Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.2.9) Gecko/20100915 Thunderbird/3.1.4

On 10/19/2010 2:36 PM, Anthony Liguori wrote:
> On 10/19/2010 01:36 PM, Balbir Singh wrote:
>>> +    qemu_mutex_lock(&(queue->lock));
>>> +    while (1) {
>>> +        ThreadletWork *work;
>>> +        int ret = 0;
>>> +
>>> +        while (QTAILQ_EMPTY(&(queue->request_list)) &&
>>> +               (ret != ETIMEDOUT)) {
>>> +            ret = qemu_cond_timedwait(&(queue->cond),
>>> +                    &(queue->lock), 10*100000);
>>>      
>> Ewww... what is 10*100000, can we use something more meaningful
>> please?
>>    
> 
> A define is fine but honestly, it's pretty darn obvious what it means...
> 
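
For what it's worth, a named constant along these lines would address the
readability concern.  The name below is made up, and the unit of the last
argument depends on qemu_cond_timedwait()'s contract, which this hunk
doesn't show:

    /* Hypothetical name; same value as before, in whatever unit
     * qemu_cond_timedwait() expects. */
    #define THREADLET_IDLE_TIMEOUT (10 * 100000)

            ret = qemu_cond_timedwait(&(queue->cond),
                                      &(queue->lock),
                                      THREADLET_IDLE_TIMEOUT);
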
>>> +        }
>>> +
>>> +        assert(queue->idle_threads != 0);
>>>      
>> This assertion holds because we believe one of the idle_threads
>> actually did the dequeuing, right?
>>    
> 
> An idle thread is one that is not doing work.  At this point in the
> code, we are not doing any work (yet), so if the idle_threads count is
> zero, something is horribly wrong.  We're also going to unconditionally
> decrement it in the future code path, which means that if idle_threads
> is 0, it's going to become -1.
> 
> The use of idle_threads is to detect whether it's necessary to spawn an
> additional thread.
> 
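
Put differently (my reading of the accounting, not code from the patch): a
worker still counts itself in idle_threads all the way up to the point where
it dequeues work, so at the assert the counter must be at least one:

    /* Every worker inside the wait loop, including the one that has just
     * left it, is still counted as idle here. */
    assert(queue->idle_threads >= 1);
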
>>> +        if (QTAILQ_EMPTY(&(queue->request_list))) {
>>> +            if (queue->cur_threads > queue->min_threads) {
>>> +                /* We retain the minimum number of threads */
>>> +                break;
>>> +            }
>>> +        } else {
>>> +            work = QTAILQ_FIRST(&(queue->request_list));
>>> +            QTAILQ_REMOVE(&(queue->request_list), work, node);
>>> +
>>> +            queue->idle_threads--;
>>> +            qemu_mutex_unlock(&(queue->lock));
>>> +
>>> +            /* execute the work function */
>>> +            work->func(work);
>>> +
>>> +            qemu_mutex_lock(&(queue->lock));
>>> +            queue->idle_threads++;
>>> +        }
>>> +    }
>>> +
>>> +    queue->idle_threads--;
>>> +    queue->cur_threads--;
>>> +    qemu_mutex_unlock(&(queue->lock));
>>> +
>>> +    return NULL;
>>>      
>> Does anybody do a join on the exiting thread from the pool?
>>    
> 
> No.  The thread is created in a detached state.
> 
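
For reference, detached creation with raw pthreads looks roughly like the
sketch below.  Whether qemu_thread_create() sets the attribute itself or the
worker detaches itself isn't visible in this patch, so treat this as an
illustration only:

    pthread_t tid;
    pthread_attr_t attr;

    pthread_attr_init(&attr);
    pthread_attr_setdetachstate(&attr, PTHREAD_CREATE_DETACHED);
    /* A detached thread cannot be joined; its resources are reclaimed
     * automatically when the thread function returns. */
    if (pthread_create(&tid, &attr, threadlet_worker, queue) != 0) {
        abort();
    }
    pthread_attr_destroy(&attr);
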
>>> +}
>>> +
>>> +static void spawn_threadlet(ThreadletQueue *queue)
>>> +{
>>> +    QemuThread thread;
>>> +
>>> +    queue->cur_threads++;
>>> +    queue->idle_threads++;
>>> +
>>> +    qemu_thread_create(&thread, threadlet_worker, queue);
>>>      
>>   
>>> +}
>>> +
>>> +/**
>>> + * submit_threadletwork_to_queue: Submit a new task to a private queue to be
>>> + *                            executed asynchronously.
>>> + * @queue: Per-subsystem private queue to which the new task needs
>>> + *         to be submitted.
>>> + * @work: Contains information about the task that needs to be submitted.
>>> + */
>>> +void submit_threadletwork_to_queue(ThreadletQueue *queue, ThreadletWork *work)
>>> +{
>>> +    qemu_mutex_lock(&(queue->lock));
>>> +    if (queue->idle_threads == 0 && queue->cur_threads < queue->max_threads) {
>>> +        spawn_threadlet(queue);
>>>      
>> So we hold queue->lock, spawn the thread, the spawned thread tries to
>> acquire queue->lock
>>    
> 
> Yup.
> 
>>> +    }
>>> +    QTAILQ_INSERT_TAIL(&(queue->request_list), work, node);
>>> +    qemu_mutex_unlock(&(queue->lock));
>>> +    qemu_cond_signal(&(queue->cond));
>>>      
>> In the case that we just spawned the threadlet, the cond_signal is
>> spurious. If we need predictable scheduling behaviour,
>> qemu_cond_signal needs to happen with queue->lock held.
>>    
> 
> It doesn't really affect predictability...
> 
>> I'd rewrite the function as
>>
>> /**
>>   * submit_threadletwork_to_queue: Submit a new task to a private queue to be
>>   *                            executed asynchronously.
>>   * @queue: Per-subsystem private queue to which the new task needs
>>   *         to be submitted.
>>   * @work: Contains information about the task that needs to be submitted.
>>   */
>> void submit_threadletwork_to_queue(ThreadletQueue *queue, ThreadletWork *work)
>> {
>>      qemu_mutex_lock(&(queue->lock));
>>      if (queue->idle_threads == 0 && (queue->cur_threads < queue->max_threads)) {
>>          spawn_threadlet(queue);
>>      } else {
>>          qemu_cond_signal(&(queue->cond));
>>      }
>>      QTAILQ_INSERT_TAIL(&(queue->request_list), work, node);
>>      qemu_mutex_unlock(&(queue->lock));
>> }
>>    
> 
> I think this is a lot more fragile.  You're relying on the fact that the
> signal will not cause the signalled thread to actually awaken until we
> release the lock, and you're doing work after signalling that needs to
> complete before the signalled thread wakes up.

Given that qemu_cond_timedwait() needs to reacquire the queue->lock before
returning, the signalled thread will wake up and then wait on the
queue->lock.
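
To make the two orderings concrete (a sketch, not code from either version of
the patch): both are correct as long as the woken worker re-checks the
request list under the lock, which threadlet_worker does.

    /* Ordering A (the patch): signal after dropping the lock.  The woken
     * worker may find the lock already free and proceed immediately. */
    qemu_mutex_lock(&(queue->lock));
    QTAILQ_INSERT_TAIL(&(queue->request_list), work, node);
    qemu_mutex_unlock(&(queue->lock));
    qemu_cond_signal(&(queue->cond));

    /* Ordering B (signalling with the lock held, as in the proposed
     * rewrite): the woken worker stays blocked inside qemu_cond_timedwait()
     * until the submitter unlocks, because the wait must reacquire
     * queue->lock before it can return. */
    qemu_mutex_lock(&(queue->lock));
    QTAILQ_INSERT_TAIL(&(queue->request_list), work, node);
    qemu_cond_signal(&(queue->cond));
    qemu_mutex_unlock(&(queue->lock));
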

- JV

> 
> I think you're a lot more robust in the long term if you treat condition
> signalling as a hand off point because it makes the code a lot more explicit
> about what's happening.
> 
>>> +/**
>>> + * submit_threadletwork: Submit to the global queue a new task to be executed
>>> + *                   asynchronously.
>>> + * @work: Contains information about the task that needs to be submitted.
>>> + */
>>> +void submit_threadletwork(ThreadletWork *work)
>>> +{
>>> +    if (unlikely(!globalqueue_init)) {
>>> +        threadlet_queue_init(&globalqueue, MAX_GLOBAL_THREADS,
>>> +                                MIN_GLOBAL_THREADS);
>>> +        globalqueue_init = 1;
>>> +    }
>>>      
>> What protects globalqueue_init?
>>    
> 
> qemu_mutex, and that unlikely is almost certainly a premature optimization.
> 
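
If the lazy init ever needed to stand on its own rather than relying on the
caller holding the global mutex, something like pthread_once() would do it.
This is only an illustration, not a suggested change to the patch (the _once
variable and the init helper are made-up names):

    static pthread_once_t globalqueue_once = PTHREAD_ONCE_INIT;

    static void threadlet_global_init(void)
    {
        threadlet_queue_init(&globalqueue, MAX_GLOBAL_THREADS,
                             MIN_GLOBAL_THREADS);
    }

    void submit_threadletwork(ThreadletWork *work)
    {
        /* pthread_once() guarantees the init helper runs exactly once,
         * even with concurrent callers and no external locking. */
        pthread_once(&globalqueue_once, threadlet_global_init);
        submit_threadletwork_to_queue(&globalqueue, work);
    }
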
> Regards,
> 
> Anthony Liguori
> 
> 




