Re: [Qemu-devel] MTTCG Tasks (kvmforum summary)


From: Mark Burton
Subject: Re: [Qemu-devel] MTTCG Tasks (kvmforum summary)
Date: Fri, 4 Sep 2015 12:18:38 +0200

> On 4 Sep 2015, at 11:41, Edgar E. Iglesias <address@hidden> wrote:
> 
> On Fri, Sep 04, 2015 at 11:25:33AM +0200, Paolo Bonzini wrote:
>> 
>> 
>> On 04/09/2015 09:49, Alex Bennée wrote:
>>> * Signal free qemu_cpu_kick (Paolo)
>>> 
>>> I don't know much about this patch set but I assume this avoids the need
>>> to catch signals and longjmp about just to wake up?
>> 
>> It was part of Fred's patches, so I've extracted it to its own series.
>> Removing 150 lines of code can't hurt.
>> 
>>> * Memory barrier support (need RFC for discussion)
>>> 
>>> I came to KVM forum with a back-of-the-envelope idea that we could
>>> implement one or two barrier ops (acquire/release?). Various other
>>> types of memory-ordering behaviour have also been suggested.
>>> 
>>> I'll try to pull together an RFC patch with design outline for
>>> discussion. It would be nice to be able to demonstrate barrier failures
>>> in my test cases as well ;-)
>> 
>> Emilio has something about it in his own MTTCG implementation.
>> 
>>> * longjmp in cpu_exec
>>> 
>>> Paolo is fairly sure that if you take page faults while IRQs are
>>> happening, problems will occur with cpu->interrupt_request. Does it
>>> need to take the BQL?
>>> 
>>> I'd like to see if we can get a torture test to stress this code
>>> although it will require IPI support in the unit tests.
>> 
>> It's x86-specific (hardware interrupts push to the stack and can cause a
>> page fault or other exception), so a unit test can be written for it.
>> 
>>> * tlb_flush and dmb behaviour (am I waiting for TLB flush?)
>>> 
>>> I think this means we need explicit memory barriers to sync updates to
>>> the tlb.
>> 
>> Yes.
>> 
>>> * tb_find_fast outside the lock
>>> 
>>> Currently this is a big performance win, as tb_find_fast sees a lot of
>>> contention from other threads. However, there is concern that it needs
>>> to be properly protected.
>> 
>> This, BTW, can be done for user-mode emulation first, so it can go in
>> early.  Same for RCU-ized code_gen_buffer.
>> 
>>> * What to do about icount?
>>> 
>>> What is the impact of multi-thread on icount? Do we need to disable it
>>> for MTTCG or can it be correct per-cpu? Can it be updated lock-step?
>>> 
>>> We need some input from the guys that use icount the most.
>> 
>> That means Edgar. :)
> 
> Hi!
> 
> IMO it would be nice if we could run the cores in some kind of lock-step,
> with a configurable number of instructions (X) that they can run ahead.
> 
> For example, if X is 10000, every thread/core would checkpoint at
> 10000-insn boundaries and wait for the other cores. Between these
> checkpoints, the cores will not be in sync. We might need to
> consider synchronizing at I/O accesses as well, to avoid weird
> timing issues when reading counter registers, for example.
> 
> Of course the devil will be in the details but an approach roughly
> like that sounds useful to me.

And “works” in other domains.
Theoretically we don't need to sync at I/O (dynamic quanta), but for most
systems that have “normal” I/O it's usually less efficient, I believe. However,
the trouble is that the user typically doesn't know, and mucking about with
quantum lengths, dynamic quantum switches, etc. is probably a royal pain in the
butt. And if you don't set your quantum right, the thing will run really slowly
(or will break)…

The choices are a rock or a hard place. Dynamic quanta risk being slow (you'll
be forcing an expensive “sync”: all CPUs will have to exit, etc.) on each I/O
access from each core… not great. Syncing with host time (e.g. each CPU tries
to sync with the host clock as best it can) will fail when one or another CPU
can't keep up… In the end you're left handing the user a nice long bit of
string and a message saying “hang yourself here”.

Cheers
Mark.

> 
> Are there any other ideas floating around that may be better?
> 
> BTW, where can I find the latest series? Is it on a git-repo/branch
> somewhere?
> 
> Best regards,
> Edgar

