
Re: Emacs bzr memory footprint


From: Stefan Monnier
Subject: Re: Emacs bzr memory footprint
Date: Fri, 21 Oct 2011 09:30:24 -0400
User-agent: Gnus/5.13 (Gnus v5.13) Emacs/24.0.90 (gnu/linux)

> However, this cannot explain the memory consumption, because I check
> most of these groups out within a few minutes of starting Emacs, and
> memory consumption then is around 300Mb. The rise from then on is
> inexorable, though not steady: where the figures for ten hours ago were

> STIME   RSS    VSZ
> Oct07 832348 1127088
> Oct07 226916 499588

> now they are

> STIME   RSS    VSZ
> Oct07 876524 1170572
> Oct07 227016 499588

I've just installed memory-usage.el as a package in the `elpa' branch
(you'll soon be able to install it using package.el; in the meantime
you can download it from
http://bzr.savannah.gnu.org/lh/emacs/elpa/download/head:/memoryusage.el-20111021130523-o1v7kfuzrat3pcd9-2/memory-usage.el).

It basically looks at the garbage-collect output and the sizes of the
various buffers to give you a human-readable description of the memory
in use, from Elisp's point of view.  Don't expect too much of it, but it
can be helpful.
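
To give a rough idea, here is a minimal sketch along the same lines (not
memory-usage.el itself; the name `my-total-buffer-chars' is made up for
illustration):

    ;; Sum the text of all live buffers, as seen from Elisp.
    ;; `buffer-list' and `buffer-size' are standard primitives.
    (defun my-total-buffer-chars ()
      "Return the total size, in characters, of all live buffers."
      (apply #'+ (mapcar #'buffer-size (buffer-list))))

Evaluate it with M-: (my-total-buffer-chars) RET.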

Could you try it out?
If its output does not explain the process size you're seeing, then we
have a leak in the C code somewhere.  If it does, then we have either a
leak in Elisp or at least excessive memory use by some package, and
hopefully we can at least figure out which category of object
is involved.

> Also note that XEmacs's huge memory usage was accompanied by a radical
> slowdown in GC times that eventually forced a restart if I was to get
> anything done. By contrast, this ballooning is not accompanied by any
> slowdown in GC: a GC still takes only about 1/5s, barely slower than
> when Emacs is freshly started.

A fast GC means that there are fairly few Elisp objects, hence most of
the memory is used either by objects not visible to Elisp, or by things
like large strings or large buffers (since the GC doesn't need to scan
the string text or buffer text).
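
If large buffers are the culprit, something like the following sketch
(again not part of memory-usage.el; `my-largest-buffers' is a made-up
name) lists them, largest first:

    ;; Return (SIZE . NAME) for every live buffer, largest first,
    ;; using only long-standing primitives.
    (defun my-largest-buffers ()
      (sort (mapcar (lambda (buf)
                      (cons (buffer-size buf) (buffer-name buf)))
                    (buffer-list))
            (lambda (a b) (> (car a) (car b)))))

M-: (my-largest-buffers) RET then shows where the buffer text lives.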

> sparsely-filled, severely-fragmented heap? If so, perhaps Emacs would
> benefit from a simple pool allocator accessed via a new let/setq form or
> a new arg to create-buffer, so Gnus could arrange to stuff variables it
> knows will be huge, or buffer-local variables of buffers it thinks may
> have lots of huge buffer-local vars, into a newly-mmap()ed region?

Indeed Emacs does not provide any way to tell the allocator to colocate
objects into a particular "pool" so they can be freed together.

> Unfortunately that means, sigh, using our own malloc() again, which is
> probably more painful than useful.

Emacs already does its own allocation for most objects: it calls malloc
directly to allocate vectors (something we might actually want to
change, because it indirectly comes with an O(log n) cost), but for
strings, conses, markers, overlays, and floats it uses malloc only to
obtain a chunk of memory which it then manages on its own.
`memory-usage' does give you this kind of info: when it says
"5743392+864696 bytes in cons cells", that means about 5MB of live cons
cells and about 800KB of free cons cells (i.e. cons cells sitting in a
memory chunk that Emacs can't return to malloc because that chunk also
contains live cons cells).
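
For what it's worth, that line can be reproduced by hand from
`garbage-collect' (a sketch that assumes the current return format,
where the first element is (USED-CONSES . FREE-CONSES), and that a cons
takes 16 bytes on 64-bit builds and 8 on 32-bit):

    (let* ((conses (car (garbage-collect)))
           ;; Crude word-size check; the cons size is an assumption,
           ;; not something Emacs reports directly.
           (cons-size (if (> most-positive-fixnum (expt 2 32)) 16 8)))
      (message "%d+%d bytes in cons cells"
               (* cons-size (car conses))
               (* cons-size (cdr conses))))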

BTW, from the GC's point of view (and memory-usage's as well), "vectors"
include a few non-vector objects such as buffers, processes,
hash-tables, and a few more; and "markers" similarly include a few
non-marker objects, mostly overlays.  Oh, and "intervals" are objects
used to store text-properties.

> I suspect actually proving my contention first would be a good
> idea.  Not sure how to get the addresses of Lisp objects from a running
> Emacs though: gdb, presumably.

I'd hack src/alloc.c to export the needed info to Elisp.  But maybe
memory-usage will already give us enough info that this won't
be necessary.


        Stefan


