
Re: [Qemu-devel] global_mutex and multithread.


From: Mark Burton
Subject: Re: [Qemu-devel] global_mutex and multithread.
Date: Thu, 15 Jan 2015 20:07:46 +0100

Still agonizing over this issue - I’ve CC’d Jan, as his patch looks important…

The patch below would seem to offer by far the best result here (if only we 
could get it working ;-) ). It allows threads to proceed as we want them to, 
and it means we don’t have to ‘count’ the number of CPUs that are executing 
code (and could therefore potentially access IO space)…

However, if we go this route, the current patch is only for x86 (apart from 
the fact that we still seem to land in a deadlock…).

One thing I wonder: why do we need to go to the extent of mutexing in the TCG 
like this? Why can’t you simply take and release a mutex on the slow path? If 
the core is going to do a ‘fast path’ access to memory - even if that memory 
were IO mapped - would it matter that it didn’t hold the mutex?

(It would help, I think, if we understood why you believed this patch 
wouldn’t work with SMP. I thought that was to do with the ‘round-robin’ 
mechanism - we’ve removed that for multi-threading anyway - but I guess we 
may have missed something there?)

Cheers

Mark.


> On 15 Jan 2015, at 12:12, Paolo Bonzini <address@hidden> wrote:
> 
> [now with correct listserver address]
> 
> On 15/01/2015 11:25, Frederic Konrad wrote:
>> Hi everybody,
>> 
>> In case of multithread TCG what is the best way to handle
>> qemu_global_mutex?
>> We thought of having one mutex per vcpu and then synchronizing vcpu threads
>> when they exit (e.g. in tcg_exec_all).
>> 
>> Is that making sense?
> 
> The basic ideas from Jan's patch in
> http://article.gmane.org/gmane.comp.emulators.qemu/118807 still apply.
> 
> RAM block reordering doesn't exist anymore, having been replaced with
> mru_block.
> 
> The patch reacquired the lock when entering MMIO or PIO emulation.
> That's enough while there is only one VCPU thread.
> 
> Once you have >1 VCPU thread you'll need the RCU work that I am slowly
> polishing and sending out.  That's because one device can change the
> memory map, and that will cause a tlb_flush for all CPUs in tcg_commit,
> and that's not thread-safe.
> 
> And later on, once devices start being converted to run outside the BQL,
> that can be changed to use new functions address_space_rw_unlocked /
> io_mem_read_unlocked / io_mem_write_unlocked.  Something like that is
> already visible at https://github.com/bonzini/qemu/commits/rcu (ignore
> patches after "kvm: Switch to unlocked MMIO").
> 
> Paolo

