Hello,
I'm investigating a native memory leak in my company's Java web application. We are running a 32-bit JVM, version java-1.8.0_112.i586, on 64-bit CentOS 6.5 servers.
I was hoping to use jemalloc's heap profiling to identify the leak. I built jemalloc against libunwind (1.1) for backtraces, but invariably, on application startup, I get a segfault in the same libunwind function (access_mem):
Stack: [0x61761000,0x617b2000], sp=0x617ad740, free space=305k
Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, C=native code)
C [libunwind.so.8+0x2d78] access_mem+0x38
C [libunwind.so.8+0x20e2] _ULx86_is_signal_frame+0x52
C [libunwind.so.8+0x3668] _ULx86_step+0x68
C [libunwind.so.8+0x221f] backtrace+0x9f
C [libjemalloc-matt.so+0x2e74c] je_prof_backtrace+0x2c
C [libjemalloc-matt.so+0x8dfc] malloc+0x37c
C [libnet.so+0xf14d] Java_java_net_SocketInputStream_socketRead0+0xed
I have several hs_err_pid files (the JVM's crash logs) with accompanying core dumps. They always point at line 150 of x86/Ginit.c:
(gdb) l x86/Ginit.c:150
145 {
146 /* validate address */
147 const struct cursor *c = (const struct cursor *)arg;
148 if (c && c->validate && validate_mem(addr))
149 return -1;
150 *val = *(unw_word_t *) addr;
151 Debug (16, "mem[%x] -> %x\n", addr, *val);
152 }
153 return 0;
154 }
Both jemalloc and libunwind were built from source on a 32-bit CentOS 6.5 VirtualBox VM. They are running on a 64-bit CentOS 6.5 VirtualBox VM.
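In case it helps, this is roughly how the profiling build is configured and wired in; paths and the MALLOC_CONF values here are illustrative, not copied from my servers:

```shell
# Build jemalloc with profiling enabled, backed by libunwind.
./configure --enable-prof --enable-prof-libunwind
make && make install

# Preload into the JVM and enable heap profiling with periodic dumps.
export LD_PRELOAD=/usr/local/lib/libjemalloc.so
export MALLOC_CONF="prof:true,lg_prof_interval:30,prof_prefix:jeprof.out"
```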
Am I doing something wrong?
Thanks in advance!
Matt