[Libunwind-devel] non-local return scalability bottleneck
From: D'Alessandro, Luke K
Subject: [Libunwind-devel] non-local return scalability bottleneck
Date: Thu, 3 Dec 2015 12:30:23 +0000
Hi All,
I have a C library that commonly uses a custom setjmp/longjmp for non-local
return. I’m trying to add support for intermediate C++ code, which means I need
to return through frames that might have RAII destructors that need to run. I’m
attempting to use `_Unwind_ForcedUnwind()` to perform this operation. It works
fine; however, there is a serious scalability bottleneck that I’m trying to
track down.
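For context, the shape of what I’m doing is roughly the following (a simplified
sketch, not my actual code: the standard setjmp/longjmp stands in for my custom
one, and `nonlocal_jump`, `stop_fn`, and `nonlocal_return` are illustrative
names):

    /* Sketch: emulating longjmp via forced unwinding so that C++ cleanups
       in the skipped frames get to run. */
    #include <setjmp.h>
    #include <stdlib.h>
    #include <unwind.h>

    struct nonlocal_jump {
      struct _Unwind_Exception exn;  /* header consumed by the unwinder */
      jmp_buf *env;                  /* where the non-local return lands */
      _Unwind_Word target_cfa;       /* CFA recorded at "setjmp" time */
    };

    static __thread struct nonlocal_jump current_jump;

    /* Called once per frame during phase-2 forced unwinding; destructors in
       the frames we pass through are run by their personality routines. */
    static _Unwind_Reason_Code
    stop_fn(int version, _Unwind_Action actions, _Unwind_Exception_Class cls,
            struct _Unwind_Exception *exn, struct _Unwind_Context *ctx,
            void *arg)
    {
      struct nonlocal_jump *j = arg;
      (void) version; (void) cls; (void) exn;

      if (actions & _UA_END_OF_STACK)
        abort();                          /* missed the target frame */

      if (_Unwind_GetCFA(ctx) != j->target_cfa)
        return _URC_NO_REASON;            /* keep unwinding */

      longjmp(*j->env, 1);                /* reached the target: jump */
    }

    static void
    exn_cleanup(_Unwind_Reason_Code code, struct _Unwind_Exception *exn)
    {
      (void) code; (void) exn;
    }

    /* longjmp replacement: target_cfa must be captured alongside env. */
    void
    nonlocal_return(jmp_buf *env, _Unwind_Word target_cfa)
    {
      current_jump.exn.exception_class = 0;  /* private, not a C++ exception */
      current_jump.exn.exception_cleanup = exn_cleanup;
      current_jump.env = env;
      current_jump.target_cfa = target_cfa;
      _Unwind_ForcedUnwind(&current_jump.exn, stop_fn, &current_jump);
      abort();                               /* forced unwind never returns */
    }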
I’m using the 1.1 release and I’ve switched `x86_64_local_addr_space_init()` to
set the default caching policy to UNW_CACHE_PER_THREAD. I did this statically
because I couldn’t figure out where to call `unw_set_caching_policy()` so that
the change actually takes effect; it appears that the address space is created
inside the call to `_Unwind_ForcedUnwind()`…?
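What I would have liked to do instead of patching the source is something like
the following, run before any unwinding happens (again just a sketch; I’m not
sure whether calling it this early actually takes effect, which is part of what
I’m asking):

    #define UNW_LOCAL_ONLY
    #include <libunwind.h>

    /* Run before the first _Unwind_ForcedUnwind(), e.g. from a constructor,
       rather than changing the default in x86_64_local_addr_space_init(). */
    __attribute__((constructor))
    static void
    set_unwind_cache_policy(void)
    {
      unw_set_caching_policy(unw_local_addr_space, UNW_CACHE_PER_THREAD);
    }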
In any case, I still see the app hammering away at a lock. I see an init lock
in `tdep_init()`, but I doubt that’s an issue. I also see a lock in
`trace_cache_get_unthreaded`, which I don’t think I should be hitting. If
someone could point me to the likely culprit, that would be great; alternatively,
if there is something fundamentally non-scalable about reading the DWARF
information and unwinding, that would be useful to know too.
Thanks,
Luke