Re: Passing buffers to function in elisp


From: Petteri Hintsanen
Subject: Re: Passing buffers to function in elisp
Date: Wed, 06 Sep 2023 22:05:13 +0300
User-agent: Gnus/5.13 (Gnus v5.13)

Hello all,

It took me some time to do the memory optimizations I asked about a
few months ago.  Here are some remarks.


>> Also, if I interpreted profiler's hieroglyphs correctly, it told me that
>> this setq
>>
>>   (setq stream (vconcat stream (plist-get page :stream)))
>
> This is a typical source of unnecessary O(N²) complexity: the above
> line takes O(N) time, so if you do it O(N) times, you got your
> N² blowup.  You're usually better off doing
>
>     (push (plist-get page :stream) stream-chunks)
>
> and then at the end get the `stream` with
>
>     (mapconcat #'identity (nreverse stream-chunks) nil)
> or
>     (apply #'vconcat (nreverse stream-chunks))

I replaced vconcat with push.  However, it did not have a significant
effect (measured with the Emacs memory profiler).  Perhaps the chunks
were quite small after all.  In complexity terms, with small N one
usually does not need to worry about the quadratic blowup.

But it is no worse either, so I left it that way.
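
For the record, the pattern I ended up with looks roughly like this
(function and variable names are made up for illustration, not the
actual code):

  (defun my/collect-stream (pages)
    "Concatenate the :stream chunk of each element of PAGES.
  Chunks are pushed onto a list in O(1) each and concatenated once
  at the end, instead of growing the result with `vconcat' inside
  the loop."
    (let (chunks)
      (dolist (page pages)
        (push (plist-get page :stream) chunks))
      (apply #'vconcat (nreverse chunks))))

Memory use before and after can be compared with the built-in
profiler: (profiler-start 'mem), run the code, then (profiler-report)
and (profiler-stop).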

>> I think I can replace vectors with strings, which should, according to
>> the elisp manual, "occupy one-fourth the space of a vector of the same
>> elements."
>
> More likely one-eighth nowadays (64 bit machines).

Changing vectors to strings did indeed have a significant effect.  It is
also the right thing to do, because, frankly, much of the data *are*
strings.
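
For byte data the change is mostly mechanical; a minimal sketch of
what it amounts to (not the actual code):

  ;; On a 64-bit build each vector element takes a full Lisp word
  ;; (8 bytes), while a unibyte string stores one byte per element.
  (aref (vector 1 2 3) 0)            ; => 1
  (aref (unibyte-string 1 2 3) 0)    ; => 1, aref works on strings too

  ;; Converting an existing byte vector to a unibyte string:
  (apply #'unibyte-string (append [1 2 3] nil))  ; => a 3-byte string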

>> Similarly bindat consumes a lot of memory.
>
> Hmm... IIRC it should not use up very much "auxiliary" memory.  IOW
> its memory usage should be determined by the amount of data it
> returns.  So, when producing the bytestring it should be quite
> efficient memorywise.

This is correct.  Bindat is indeed conservative with memory.  I
probably misread the profiler report back then and unjustly put part
of the blame on bindat.
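
For anyone reading this later, the memory cost is essentially just the
packed or unpacked data itself.  A toy example in the classic spec
style (the frame format and field names are made up):

  (require 'bindat)

  ;; Made-up frame format: a 16-bit big-endian length followed by
  ;; that many bytes of payload.
  (defconst my/frame-spec
    '((len  u16)
      (data vec (len))))

  (let ((struct (bindat-unpack my/frame-spec
                               (unibyte-string 0 3 10 20 30))))
    (bindat-get-field struct 'data))   ; => [10 20 30]

  ;; Packing goes the other way and returns a unibyte string.
  (bindat-pack my/frame-spec '((len . 3) (data . [10 20 30])))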

>>> That's definitely something to consider.  Another is whether the ELisp
>>> code was byte-compiled (if not, then all bets are off, the interpreter
>>> itself generates a fair bit of garbage, especially if you use a lot of
>>> macros).
>> No, it was not byte-compiled.
>
> Then stop right there and fix this problem.  There's absolutely no point
> worrying about performance (including memory use) if the code is
> not compiled because compilation can change the behavior drastically.

This is also absolutely correct.  There is no point in profiling
non-compiled code: it gives wildly varying profiles from run to run.

>> I'll try byte-compiling after the code is in good enough shape to do
>> controlled experiments.
>
> The compiler is your friend.  He can help you get the code in good shape :-)

Truly he does.

I also have native compilation enabled.  I don't know how much effect
it had.
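
For completeness, compiling before profiling is just a couple of calls
(the file name here is only an example):

  ;; Byte-compile and load the compiled file before profiling.
  (byte-compile-file "my-package.el")
  (load "my-package.elc")

  ;; Native-compile too, when the running Emacs supports it.
  (when (and (fboundp 'native-comp-available-p)
             (native-comp-available-p))
    (native-compile "my-package.el"))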




I also tried to replace the with-temp-buffer forms (such forms are
called hundreds of times) with a single static buffer for holding
temporary data.  It produced mixed results.  In some limited settings
the memory savings were considerable, but in some other cases memory
usage blew up.  I cannot explain why that happened, so it seems safest
to stick to with-temp-buffer.
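
For concreteness, the two variants I compared look roughly like this
(names are illustrative and the parsing part is elided):

  ;; Variant kept: a fresh temporary buffer on every call.
  (defun my/with-fresh-buffer (bytes)
    (with-temp-buffer
      (set-buffer-multibyte nil)
      (insert bytes)
      (goto-char (point-min))
      ;; ... parse the data from the buffer ...
      (buffer-string)))

  ;; Variant abandoned: one hidden buffer, reused and erased each call.
  (defun my/with-static-buffer (bytes)
    (with-current-buffer (get-buffer-create " *my-temp*")
      (erase-buffer)
      (set-buffer-multibyte nil)
      (insert bytes)
      (goto-char (point-min))
      ;; ... parse the data from the buffer ...
      (buffer-string)))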


Nonetheless, the code is now much better. 
Thank you all for your insights,
Petteri


