
Re: [Qemu-devel] [PATCH] versatile: Push lsi initialization to the end


From: Paolo Bonzini
Subject: Re: [Qemu-devel] [PATCH] versatile: Push lsi initialization to the end
Date: Mon, 08 Oct 2012 18:39:01 +0200
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:15.0) Gecko/20120911 Thunderbird/15.0.1

On 08/10/2012 18:33, Peter Maydell wrote:
> On 5 October 2012 18:30, Jan Kiszka <address@hidden> wrote:
>> This is nasty, but there is no better way given the current mux logic:
>>
>> Because setting up the block device triggers a qemu_bh_poll while there
>> are still qemu_chr open events in the queue, we have to register the
>> UARTs and everything else that might be muxed first, so that the right
>> active frontend is already registered when the bottom half is finally
>> processed.
> 
> So I guess this comes down to what the semantics of bottom halves are.
> I can see two plausible options:
> 
>  1. bottom halves are a mechanism provided by our cpu/device
>     simulation framework, and so will never be run before the
>     simulation is fully initialised
>   * this means devices can register BHs which set irq lines,
>     send events to chr mux front ends etc etc
>   * it also means that device setup mustn't trigger a bh_poll
>     (so we'd need to track down the bit of the block device
>     setup that's causing this)
> 
>  2. bottom halves are a generic mechanism that you can use
>     not just as part of the simulation, and so BHs may run
>     as soon as they're registered
>   * this would let us use them for arbitrary purposes in init
>   * we'd need to audit and fix all the current uses to check
>     whether they're safe to run early or if they need to have
>     a 'do nothing if simulation not running' check
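
(A minimal sketch of the guard that option 2 would require in each
existing BH callback; the machine_init_done flag and chr_mux_event_bh
handler below are invented for illustration, not taken from QEMU:)

  #include <stdbool.h>

  static bool machine_init_done;       /* set at the end of machine init */

  /* Matches the QEMUBHFunc signature: void (*)(void *opaque). */
  static void chr_mux_event_bh(void *opaque)
  {
      if (!machine_init_done) {
          /* Under option 2 a BH may fire as soon as it is registered,
           * so simply do nothing until the simulation is set up. */
          return;
      }
      /* ... deliver the queued open event to the active mux frontend ... */
  }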

3. bottom halves are an internal concept of the block layer that has
been hijacked by device models.  The idea behind a bottom half is that
the code runs as soon as the current code is done with the subsystem;
ideally, you would instead queue a work item in a thread pool, and the
code would block on the same fine-grained lock as the subsystem that
created the bottom half.  Work items from different subsystems would be
able to run concurrently; of course that's not too helpful while we
have a single lock for the whole iothread...
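
(Purely to illustrate the "work item plus subsystem lock" idea above, in
plain pthreads; none of this corresponds to existing QEMU code:)

  #include <pthread.h>

  typedef struct Subsystem {
      pthread_mutex_t lock;     /* the subsystem's own fine-grained lock */
      /* ... subsystem state ... */
  } Subsystem;

  typedef struct WorkItem {
      Subsystem *owner;
      void (*fn)(Subsystem *subsys);
  } WorkItem;

  /* Executed by a thread-pool worker: it serializes only against the
   * subsystem that queued the item, not against a global iothread lock,
   * so items from different subsystems can run concurrently. */
  static void run_work_item(WorkItem *item)
  {
      pthread_mutex_lock(&item->owner->lock);
      item->fn(item->owner);
      pthread_mutex_unlock(&item->owner->lock);
  }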

Stefan's work should be able to kill qemu_bh_new inside the block layer
(replacing it with aio_bh_new), so qemu_bh_new can be repurposed into
something that doesn't conflict with the block layer.
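
(For illustration only: roughly what that split would look like, assuming
aio_bh_new() takes an AioContext plus the usual callback/opaque pair; the
function names below are invented:)

  #include "qemu-aio.h"    /* AioContext, QEMUBH; header path varies by tree */

  static void blk_request_done_bh(void *opaque)
  {
      /* ... complete the block request described by opaque ... */
  }

  /* Block layer: tie the completion BH to its own AioContext, so that
   * scheduling and polling it is independent of whatever board or
   * device init code is doing with the global qemu_bh_new(). */
  static QEMUBH *new_completion_bh(AioContext *ctx, void *req)
  {
      return aio_bh_new(ctx, blk_request_done_bh, req);
  }

Device models would keep using qemu_bh_new(), whose semantics could then
be defined without worrying about block-layer polling.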

Paolo


