bug-gnu-emacs

bug#28862: Emacs 25.3.1 segmentation fault on killing *Colors* buffer


From: Levi
Subject: bug#28862: Emacs 25.3.1 segmentation fault on killing *Colors* buffer
Date: Mon, 16 Oct 2017 19:30:33 -0700

Hey, Emacs 26.0.90 seems to have a similar bug when going through the same steps on my system. If other users can't reproduce this, it may also be related to the window manager's behaviour -- although that would still be strange.

With 26.0.90, Emacs hangs every time I try to reproduce the issue, and prints 'Fatal error 11: Segmentation fault' to stderr. Sending SIGTERM or ^C does not terminate the process; only SIGKILL does.

I tried running Emacs under GDB in order to get a backtrace, but couldn't reproduce the crash there (not too surprising for a memory-corruption issue). Attaching to the already-hung process with `gdb -p <pid>` allowed me to obtain this dump:

#0  0x00007f66ca0a038b in __lll_lock_wait_private () at /usr/lib/libc.so.6
#1  0x00007f66ca01c609 in _int_free () at /usr/lib/libc.so.6
#2  0x00007f66cc8a7a43 in xmlCleanupCharEncodingHandlers () at /usr/lib/libxml2.so.2
#3  0x00007f66cc8c6819 in xmlCleanupParser () at /usr/lib/libxml2.so.2
#4  0x00000000005c7a95 in xml_cleanup_parser () at xml.c:266
#5  0x00000000004f5cf2 in shut_down_emacs (sig=sig@entry=11, stuff=stuff@entry=0)
    at emacs.c:2122
#6  0x00000000004f5ec3 in terminate_due_to_signal (sig=sig@entry=11, backtrace_limit=backtrace_limit@entry=40) at emacs.c:377
#7  0x000000000050e3ce in handle_fatal_signal (sig=sig@entry=11) at sysdep.c:1768
#8  0x000000000050e5e8 in deliver_thread_signal (sig=11, handler=0x50e3c0 <handle_fatal_signal>) at sysdep.c:1742
#9  0x000000000050e66c in deliver_fatal_thread_signal (sig=<optimized out>)
    at sysdep.c:1780
#10 0x000000000050e66c in handle_sigsegv (sig=<optimized out>, siginfo=<optimized out>, arg=<optimized out>) at sysdep.c:1865
#11 0x00007f66cac72da0 in <signal handler called> () at /usr/lib/libpthread.so.0
#12 0x00007f66ca01afc3 in malloc_consolidate () at /usr/lib/libc.so.6
#13 0x00007f66ca01df52 in _int_malloc () at /usr/lib/libc.so.6
#14 0x00007f66ca01faf4 in malloc () at /usr/lib/libc.so.6
#15 0x00000000005eb6d5 in hybrid_malloc (size=<optimized out>) at gmalloc.c:1734
#16 0x000000000054faad in lmalloc (size=4096) at alloc.c:1450
#17 0x000000000054faad in xmalloc (size=size@entry=4096) at alloc.c:846
#18 0x0000000000550bb3 in allocate_vector_block () at alloc.c:3063
#19 0x0000000000550bb3 in allocate_vector_from_block (nbytes=1160) at alloc.c:3127
#20 0x0000000000550bb3 in allocate_vectorlike (len=144) at alloc.c:3332
#21 0x000000000055181d in allocate_vectorlike (len=144) at alloc.c:3376
#22 0x000000000055181d in allocate_vector (len=len@entry=144) at alloc.c:3372
#23 0x0000000000551967 in Fmake_vector (length=length@entry=578, init=init@entry=0)
    at alloc.c:3466
#24 0x0000000000575116 in concat (nargs=nargs@entry=1, args=args@entry=0x7fff73dbe7f8, target_type=Lisp_Vectorlike, last_special=last_special@entry=false) at fns.c:648
#25 0x0000000000575260 in Fcopy_sequence (arg=<optimized out>) at fns.c:514
#26 0x00000000004334d2 in update_tool_bar (f=f@entry=0x114dc30 <bss_sbrk_buffer+5520464>, save_match_data=save_match_data@entry=false) at xdisp.c:12272
#27 0x00000000004579d3 in update_tool_bar (save_match_data=false, f=<optimized out>)
    at xdisp.c:12217
#28 0x00000000004579d3 in prepare_menu_bars () at xdisp.c:12054
#29 0x00000000004579d3 in redisplay_internal () at xdisp.c:13907
#30 0x00000000004591d5 in redisplay_preserve_echo_area (from_where=from_where@entry=5) at xdisp.c:14602
#31 0x00000000004fff3a in read_char (commandflag=commandflag@entry=1, map=map@entry=70360643, prev_event=0, used_mouse_menu=used_mouse_menu@entry=0x7fff73dc027b, end_time=end_time@entry=0x0) at keyboard.c:2478
#32 0x0000000000502dac in read_key_sequence (keybuf=keybuf@entry=0x7fff73dc0370, prompt=prompt@entry=0, dont_downcase_last=dont_downcase_last@entry=false, can_return_switch_frame=can_return_switch_frame@entry=true, fix_current_buffer=fix_current_buffer@entry=true, prevent_redisplay=prevent_redisplay@entry=false, bufsize=30)
    at keyboard.c:9147
#33 0x00000000005047e4 in command_loop_1 () at keyboard.c:1368
#34 0x000000000056a8be in internal_condition_case (bfun=bfun@entry=0x5045c0 <command_loop_1>, handlers=handlers@entry=21024, hfun=hfun@entry=0x4fb4d0 <cmd_error>)
    at eval.c:1332
#35 0x00000000004f62a4 in command_loop_2 (ignore=ignore@entry=0) at keyboard.c:1110
#36 0x000000000056a82d in internal_catch (tag=tag@entry=50880, func=func@entry=0x4f6280 <command_loop_2>, arg=arg@entry=0) at eval.c:1097
#37 0x00000000004f623b in command_loop () at keyboard.c:1089
#38 0x00000000004fb0e3 in recursive_edit_1 () at keyboard.c:695
#39 0x00000000004fb3fe in Frecursive_edit () at keyboard.c:766
#40 0x0000000000419ffe in main (argc=<optimized out>, argv=0x7fff73dc0728)
    at emacs.c:1713


It looks like this time around the stack isn't corrupted, and symbol information is there. If I'm reading it right, the original SIGSEGV arrived while malloc was running (frames #12-#14), so when the fatal-signal cleanup path calls xmlCleanupParser and free, it deadlocks on the malloc lock that is still held (frame #0) -- which would explain why the process hangs instead of exiting.

Another point: during one of the runs (albeit with my own configuration, rather than `emacs -Q`), I got the process to hang with the following output instead:

*** Error in `./emacs': malloc(): memory corruption (fast): 0x0000000004066390 ***
Fatal error 6: Aborted

Is there a quick workaround I can use to avoid this crash for the time being? I would still like to kill all of a frame's visible buffers when that frame is closed/deleted, as I was originally doing via 'delete-frame-functions.
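For reference, here is the kind of thing I'm considering trying in the meantime (untested against this bug, and the function names are just placeholders of mine): instead of calling kill-buffer synchronously inside 'delete-frame-functions, schedule the kills on a zero-delay timer so they run after frame deletion has finished. Passing the buffer list as a timer argument avoids needing lexical binding for the closure.

```elisp
;; Sketch of a possible workaround -- my/... names are hypothetical.
(defun my/kill-buffers-later (buffers)
  "Kill each buffer in BUFFERS that is still live."
  (dolist (buf buffers)
    (when (buffer-live-p buf)
      (kill-buffer buf))))

(defun my/kill-frame-buffers (frame)
  "Schedule FRAME's visible buffers to be killed after frame deletion.
Runs from `delete-frame-functions', where FRAME is still live, but
defers the actual `kill-buffer' calls via a zero-delay timer."
  (run-at-time 0 nil #'my/kill-buffers-later
               (mapcar #'window-buffer
                       (window-list frame 'no-minibuf))))

(add-hook 'delete-frame-functions #'my/kill-frame-buffers)
```

No idea yet whether deferring the kills actually sidesteps the crash, but it at least avoids re-entering frame deletion from inside the hook, which is the scenario Martin described.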

Thanks in advance,

-Levi

On Mon, Oct 16, 2017 at 8:10 AM, Eli Zaretskii <eliz@gnu.org> wrote:
> > Date: Mon, 16 Oct 2017 10:16:58 +0200
> > From: martin rudalics <rudalics@gmx.at>
> >
> > Now if killing a buffer in ‘delete-frame-functions’ may delete a frame
> > because, for example, the buffer is shown in a dedicated window which is
> > the only window on that frame, you may run exactly in the scenario
> > described above.  I hopefully fixed that for Emacs 26 so if you could
> > try the release version ...
>
> FWIW, the recipe doesn't crash for me in Emacs 26.0.90, so I guess
> your fix solved at least this case.
>
> Thanks.

