Re: Proposal: block-based vector allocator


From: Stefan Monnier
Subject: Re: Proposal: block-based vector allocator
Date: Wed, 06 Jun 2012 15:18:32 -0400
User-agent: Gnus/5.13 (Gnus v5.13) Emacs/24.1.50 (gnu/linux)

>> I explained this earlier: using a vector-block for largish vectors is
>> not efficient (because the overhead of the vector-block is not shared
>> among enough vectors).
>> E.g. for a vector of size VECTOR_BLOCK_BYTES, using the vector-block
>> code is a complete waste

> ...of just one pointer, so 8/4088, or 0.2% in terms of space for this rare
> case; an overhead of having one more mem_node is 6x larger.

No, no, no: since a vector of size VECTOR_BLOCK_BYTES fills the whole
vector-block, it will incur the full cost of a mem_node anyway when
allocating the vector-block.

>> For the case of a vector of size VECTOR_BLOCK_BYTES, allocating in
>> a vector block will always be a bad idea, no matter the scenario.
> Allocating a lot of VECTOR_BLOCK_BYTES / 2 + sizeof (Lisp_Object)
> vectors (and a negligible number of others) will waste ~50% of the
> space in blocks;

I'm not too worried about this, actually; what I'm after is different:

The cost (CPU or memory, it doesn't matter much here, and we ignore
fragmentation) of allocating a vector is something like:
- outside of a vector-block: malloc + mem_node
- inside a vector-block: vector-block + frac * (malloc + mem_node + a-bit-more)
  where "frac" is a fraction that's more or less vector-size /
  VECTOR_BLOCK_BYTES.
So for small vectors, "frac" is small, and since we expect the
vector-block overhead to be significantly smaller than malloc+mem_node,
we win.  But past a certain value of "frac", we're better off allocating
outside of a vector-block.  As I said, I don't know exactly where that
threshold is, but using VECTOR_BLOCK_BYTES / 2 should be good enough.
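
In code, the proposal amounts to a single size test at allocation
time.  A minimal sketch (the constants assume the 8/4088 geometry
discussed above, and allocate_from_vector_block is a hypothetical
stand-in for the block-based path, not the actual alloc.c routine):

    #include <stddef.h>
    #include <stdlib.h>

    /* Assumed geometry: a 4096-byte block minus an 8-byte header,
       matching the 8/4088 figure above.  */
    #define VECTOR_BLOCK_BYTES 4088
    /* Proposed threshold: past this, the per-block overhead no
       longer amortizes well enough.  */
    #define VBLOCK_BYTES_MAX (VECTOR_BLOCK_BYTES / 2)

    /* Hypothetical stand-in for the block-based path; reduced to a
       plain malloc here just to keep the sketch self-contained.  */
    static void *
    allocate_from_vector_block (size_t nbytes)
    {
      return malloc (nbytes);
    }

    void *
    allocate_vector_bytes (size_t nbytes)
    {
      if (nbytes <= VBLOCK_BYTES_MAX)
        /* Small vector: "frac" is small, so the block's own
           malloc + mem_node cost is shared among many vectors.  */
        return allocate_from_vector_block (nbytes);
      /* Large vector: a dedicated malloc + mem_node is cheaper
         than burning most of a vector block.  */
      return malloc (nbytes);
    }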

And it also happens to help with the problem you mention: it reduces
the worst case from 50% lost to fragmentation (which really means
doubling the memory use) to 25% (which only means an extra 33% of
memory use, which I find a lot more acceptable).
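
For the record, the arithmetic behind those parentheses: if a
fraction w of each block is wasted, total memory is the live data
divided by (1 - w), so:

    w = 50%:  1 / (1 - 0.50) = 2.00   ->  memory use doubles
    w = 25%:  1 / (1 - 0.25) ~= 1.33  ->  an extra 33%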


        Stefan


