Re: [Qemu-devel] Making cputlb.c operations safe for MTTCG


From: Alex Bennée
Subject: Re: [Qemu-devel] Making cputlb.c operations safe for MTTCG
Date: Tue, 27 Sep 2016 23:15:31 +0100
User-agent: mu4e 0.9.17; emacs 25.1.50.1

Paolo Bonzini <address@hidden> writes:

> On 02/08/2016 08:37, Alex Bennée wrote:
>>> - in notdirty_mem_write, care must be put in the ordering of
>>> tb_invalidate_phys_page_fast (which itself calls tlb_unprotect_code and
>>> takes the tb_lock in tb_invalidate_phys_page_range) and tlb_set_dirty.
>>> At least it seems to me that the call to tb_invalidate_phys_page_fast
>>> should be after the write, but that's not all.  Perhaps merge this part
>>> of notdirty_mem_write:
>
> I looked at it again and you are already doing the right thing in patch 19.
> It's possible to simplify it a bit though like this:
>
> diff --git a/exec.c b/exec.c
> index c8389f9..7850c39 100644
> --- a/exec.c
> +++ b/exec.c
> @@ -1944,9 +1944,6 @@ ram_addr_t qemu_ram_addr_from_host(void *ptr)
>  static void notdirty_mem_write(void *opaque, hwaddr ram_addr,
>                                 uint64_t val, unsigned size)
>  {
> -    if (!cpu_physical_memory_get_dirty_flag(ram_addr, DIRTY_MEMORY_CODE)) {
> -        tb_invalidate_phys_page_fast(ram_addr, size);
> -    }
>      switch (size) {
>      case 1:
>          stb_p(qemu_map_ram_ptr(NULL, ram_addr), val);
> @@ -1960,11 +1957,19 @@ static void notdirty_mem_write(void *opaque, hwaddr ram_addr,
>       */
>      cpu_physical_memory_set_dirty_range(ram_addr, size,
>                                          DIRTY_CLIENTS_NOCODE);
> +    tb_lock();
> +    if (!cpu_physical_memory_get_dirty_flag(ram_addr, DIRTY_MEMORY_CODE)) {
> +        /* tb_invalidate_phys_page_range will call tlb_unprotect_code
> +         * once the last TB in this page is gone.
> +         */
> +        tb_invalidate_phys_page_fast(ram_addr, size);
> +    }
>      /* we remove the notdirty callback only if the code has been
>         flushed */
>      if (!cpu_physical_memory_is_clean(ram_addr)) {
>          tlb_set_dirty(current_cpu, current_cpu->mem_io_vaddr);
>      }
> +    tb_unlock();
>  }
>
>  static bool notdirty_mem_accepts(void *opaque, hwaddr addr,
>
>
> Anyhow, the next step is to merge either cmpxchg-based atomics
> or iothread-free single-threaded TCG.  Either will do. :)

By iothread-free single-threaded TCG you mean dropping the need to grab
the BQL when we start the TCG thread and making the BQL purely an
on-demand/when needed thing?

The cmpxchg stuff is looking good to me. I still need to do a pass over
rth's patch set since he re-based it on the async safe work. In fact,
once your updated PULL req is in, even better ;-)

> I think that even iothread-free single-threaded TCG requires this
> TLB stuff, because the iothread's address_space_write (and hence
> invalidate_and_set_dirty) can race against the TCG thread's
> code generation.

Yes.

>
> Thanks,
>
> Paolo


--
Alex Bennée


