
From: Emilio G. Cota
Subject: [Qemu-devel] [PATCH v2 00/17] tcg: tb_lock_removal redux v2
Date: Thu, 5 Apr 2018 22:12:51 -0400

v1: http://lists.gnu.org/archive/html/qemu-devel/2018-02/msg06499.html

Changes since v1:

- Add R-b (Reviewed-by) tags

- Rebase onto master

- qht_lookup_custom: move @func to be the last argument, which
  simplifies the new qht_lookup function. (I've kept the R-b tags
  here because this is a very simple change.)
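  The motivation for the reordering can be shown with a self-contained
  sketch (this is not the real QEMU qht API; the table, types, and
  function names below are simplified stand-ins for illustration):

  ```c
  #include <assert.h>
  #include <stdbool.h>
  #include <stddef.h>
  #include <string.h>

  /* Hypothetical comparator type: returns true if @candidate matches
   * the caller-supplied @userp. */
  typedef bool (*cmp_func_t)(const void *candidate, const void *userp);

  struct entry { const char *key; int value; };

  /* Toy "table": a fixed array standing in for the real hash table. */
  static struct entry table[] = {
      { "tb1", 10 }, { "tb2", 20 }, { "tb3", 30 },
  };

  /* Default comparator, analogous to a table-wide default. */
  static bool default_cmp(const void *candidate, const void *userp)
  {
      const struct entry *e = candidate;
      return strcmp(e->key, (const char *)userp) == 0;
  }

  /* With @func last, custom lookups read naturally... */
  static struct entry *lookup_custom(const void *userp, cmp_func_t func)
  {
      for (size_t i = 0; i < sizeof(table) / sizeof(table[0]); i++) {
          if (func(&table[i], userp)) {
              return &table[i];
          }
      }
      return NULL;
  }

  /* ...and the common-case lookup becomes a one-line wrapper that
   * just appends the default comparator. */
  static struct entry *lookup(const void *userp)
  {
      return lookup_custom(userp, default_cmp);
  }

  int main(void)
  {
      assert(lookup("tb2") != NULL);
      assert(lookup("tb2")->value == 20);
      assert(lookup("missing") == NULL);
      return 0;
  }
  ```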

- qht_insert: add an **existing argument and keep the bool return value,
  as suggested by Alex.
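  The shape of that interface can be sketched as follows (again a
  simplified stand-in, not the actual qht implementation): on a
  duplicate insertion the function returns false and hands back the
  already-present entry via @existing, so a caller that loses an
  insertion race can adopt the winner's entry instead of failing.

  ```c
  #include <assert.h>
  #include <stdbool.h>
  #include <stddef.h>
  #include <string.h>

  #define CAP 8

  struct entry { const char *key; int value; };

  /* Toy "table": a flat array standing in for the real hash table. */
  static struct entry slots[CAP];
  static size_t n_slots;

  /* Hypothetical insert: bool return plus an **existing out-argument,
   * mirroring the pattern described above. */
  static bool insert(const char *key, int value, struct entry **existing)
  {
      for (size_t i = 0; i < n_slots; i++) {
          if (strcmp(slots[i].key, key) == 0) {
              if (existing) {
                  *existing = &slots[i];  /* report the prior entry */
              }
              return false;               /* not inserted: duplicate */
          }
      }
      slots[n_slots].key = key;
      slots[n_slots].value = value;
      n_slots++;
      return true;                        /* inserted */
  }

  int main(void)
  {
      struct entry *existing = NULL;

      assert(insert("tb1", 10, NULL));           /* first insert wins */
      assert(!insert("tb1", 99, &existing));     /* duplicate detected */
      assert(existing != NULL);
      assert(existing->value == 10);             /* winner's entry */
      return 0;
  }
  ```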

- Fix indentation of TB_FOR_EACH_TAGGED macro

- Add page_locked assertions, as suggested by Alex.

- Expand comment in tb_link_page and in docs/mttcg about parallel
  code insertion.

- Fix stale comment about tb_lock next to CF_INVALID

- Fix stale comment in cpu_restore_state, as suggested by Alex.

There is only one checkpatch error for the entire series -- it is
a false positive.

You can fetch the tree from:
  https://github.com/cota/qemu/tree/tb-lock-removal-redux-v2

Thanks,

                Emilio
---
 accel/tcg/cpu-exec.c            |   96 ++-
 accel/tcg/cputlb.c              |    8 -
 accel/tcg/translate-all.c       | 1053 ++++++++++++++++++++++----------
 accel/tcg/translate-all.h       |    6 +-
 docs/devel/multi-thread-tcg.txt |   24 +-
 exec.c                          |   25 +-
 include/exec/cpu-common.h       |    2 +-
 include/exec/exec-all.h         |   51 +-
 include/exec/memory-internal.h  |    6 +-
 include/exec/tb-context.h       |    4 -
 include/qemu/qht.h              |   32 +-
 linux-user/main.c               |    3 -
 tcg/tcg.c                       |  205 +++++++
 tcg/tcg.h                       |   13 +-
 tests/qht-bench.c               |   18 +-
 tests/test-qht.c                |   23 +-
 util/qht.c                      |   41 +-
 17 files changed, 1133 insertions(+), 477 deletions(-)

