--- Begin Message ---
Subject: madvise bug in guile-git
Date: Sat, 24 Jan 2015 16:52:07 -0600
Hi,
I've got a test where I hit guile with maybe a dozen threads doing
assorted things. Doing this, I then see a handful of these messages:
"madvise failed: Cannot allocate memory" every 5 or 10 seconds.
Except for the printing, everything seems stable. I've captured a
stack trace for this, below, by pulling from today's git (24 January)
and adding an abort right after the print.
The stack trace is not very enlightening (to me). I figure that
either madvise is freeing something previously freed (?) or that
there is a glibc or kernel bug.
--linas
Details below:
diff --git a/libguile/vm.c b/libguile/vm.c
index 0e59835..4911897 100644
--- a/libguile/vm.c
+++ b/libguile/vm.c
@@ -917,8 +917,17 @@ return_unused_stack_to_os (struct scm_vm *vp)
     ret = madvise ((void *) start, end - start, MADV_DONTNEED);
   while (ret && errno == -EAGAIN);

-  if (ret)
+  if (ret) {
     perror ("madvise failed");
+    fprintf (stderr, "duuude start=%lx end=%lx sz=%lu\n", start, end,
+             end - start);
+    abort ();
+  }
 }

 vp->sp_max_since_gc = vp->sp;
All this is on Linux Mint "Rebecca", which is a modified version of
Ubuntu 14.04.
Other system info:
$ cat /proc/version
Linux version 3.13.0-43-generic (address@hidden) (gcc version 4.8.2
(Ubuntu 4.8.2-19ubuntu1) ) #72-Ubuntu SMP Mon Dec 8 19:35:06 UTC 2014
$ gcc --version
gcc (Ubuntu 4.8.2-19ubuntu1) 4.8.2
$ dpkg -S /lib/x86_64-linux-gnu/libc-2.19.so
libc6:amd64: /lib/x86_64-linux-gnu/libc-2.19.so
madvise failed: Cannot allocate memory
duuude start=7f8653535000 end=7f8653536000 sz=4096
[2015-01-24 20:00:34:900] [ERROR] Caught signal 6 (Aborted) on thread
140213737207552
Stack Trace:
2: basic_string.h:539 ~basic_string()
3: CogServerMain.cc:78 _Z7sighandi()
4: ??:0 killpg()
5: raise.c:56 __GI_raise()
6: abort.c:91 __GI_abort()
7: vm.c:926 return_unused_stack_to_os()
8: vm.c:1024 scm_i_vm_mark_stack()
9: ??:0 GC_mark_from()
10: ??:0 GC_mark_some()
11: ??:0 GC_stopped_mark()
12: ??:0 GC_try_to_collect_inner()
13: ??:0 GC_collect_or_expand()
14: ??:0 GC_allocobj()
15: ??:0 GC_generic_malloc_inner()
16: ??:0 GC_generic_malloc()
17: ??:0 GC_core_malloc()
18: gc.h:229 scm_double_cell()
19: throw.c:364 scm_c_catch()
20: SchemeEval.cc:527 opencog::SchemeEval::do_eval(std::string const&)
21: SchemeEval.cc:470 opencog::SchemeEval::c_wrap_eval(void*)
22: continuations.c:426 c_body()
23: vm-engine.c:809 vm_regular_engine()
24: vm.c:1269 scm_call_n()
25: throw.c:138 catch()
26: continuations.c:371 scm_i_with_continuation_barrier()
27: continuations.c:465 scm_c_with_continuation_barrier()
28: threads.c:789 with_guile_and_parent()
29: ??:0 GC_call_with_stack_base()
30: threads.c:837 scm_with_guile()
31: SchemeEval.cc:438
opencog::SchemeEval::eval_expr(std::string const&)
32: basic_string.h:293 std::string::_M_data() const
33: ??:0
std::this_thread::__sleep_for(std::chrono::duration<long,
std::ratio<1l, 1l> >, std::chrono::duration<long, std::ratio<1l,
1000000000l> >)
34: pthread_create.c:312 start_thread()
35: clone.S:113 clone()
--- End Message ---
--- Begin Message ---
Subject: Re: bug#19676: madvise bug in guile-git
Date: Tue, 21 Jun 2016 09:52:10 +0200
User-agent: Gnus/5.13 (Gnus v5.13) Emacs/24.5 (gnu/linux)
On Tue 21 Jun 2016 01:36, Linas Vepstas <address@hidden> writes:
> On Mon, Jun 20, 2016 at 10:34 AM, Andy Wingo <address@hidden> wrote:
>
> On Sat 24 Jan 2015 23:52, Linas Vepstas <address@hidden>
> writes:
>
> > I've got a test where I hit guile with maybe a dozen threads doing
> > assorted things. Doing this, I then see a handful of these
> > messages: "madvise failed: Cannot allocate memory" every 5 or 10
> > seconds.
>
> Do you still get this with Guile 2.1.3?
>
> Building current git now; the setup to reproduce this issue will be
> difficult. --linas
If that's the case, let's go ahead and close this one. I seem to
recall a situation in which this could occur that I fixed, and I
haven't seen such an error since. I'm sure that you'll notice if
Guile starts spewing over the console and will open a new bug :)
Andy
--- End Message ---