emacs-devel

Re: [Emacs-diffs] emacs-25 3eb93c0: Rely on conservative stack scanning


From: Daniel Colascione
Subject: Re: [Emacs-diffs] emacs-25 3eb93c0: Rely on conservative stack scanning to find "emacs_value"s
Date: Fri, 1 Apr 2016 12:15:04 -0700
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:38.0) Gecko/20100101 Thunderbird/38.6.0

On 04/01/2016 12:05 PM, Stefan Monnier wrote:
>>>> That's cheap: you can do it with linear allocation out of an array.
>>>> Why would that be expensive?
>>> That's very expensive compared to doing nothing.
>>> It means that you have to allocate a new array, 
>> Once.
> 
> No, what I describe is what happens every single time you go from one
> side of the fence to the other (i.e. when Elisp calls to an external
> module, as well as every time that external module does a funcall to an
> Elisp function).

I meant that we don't need to allocate a new array on every call. We can
keep it around thread-locally and reuse it.
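
Concretely, something along these lines (a sketch only; the names are
stand-ins, error handling and re-entrant nested calls are ignored):

/* Reuse a thread-local buffer when translating arguments across the
   module boundary, instead of allocating a fresh array on every call.
   value_to_lisp stands in for whatever the per-element translation
   actually is; it's assumed to be cheap.  */
#include <stddef.h>
#include <stdlib.h>

typedef void *emacs_value;   /* opaque handle handed to modules */
typedef long Lisp_Object;    /* stand-in for the internal type */

extern Lisp_Object value_to_lisp (emacs_value v);

static _Thread_local Lisp_Object *arg_buf;
static _Thread_local size_t arg_buf_cap;

static Lisp_Object *
translate_args (ptrdiff_t nargs, emacs_value *args)
{
  if ((size_t) nargs > arg_buf_cap)
    {
      arg_buf = realloc (arg_buf, nargs * sizeof *arg_buf);
      arg_buf_cap = nargs;
    }
  for (ptrdiff_t i = 0; i < nargs; i++)
    arg_buf[i] = value_to_lisp (args[i]);
  return arg_buf;
}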

>>> loop through the old one calling your "cheap allocation" function on
>>> each element, instead of just passing the array pointer untouched.
>> It's a pointer comparison of something that will be in L1 cache anyway.
> 
> What pointer comparison?  I'm talking about an allocation of an
> array (one element per argument of the function being called) plus
> a loop through this array.
> 
>> Of course it's slower than doing nothing. But you have not demonstrated
>> that it is meaningfully slower,
> 
> We're talking about the building blocks of a language construct.

My point stands. I can't think of a real-world application of modules
where the extra cycles matter. I'd much rather make modules truly
independent of the Emacs internal ABI. What specific use cases do you
imagine this scheme hurting?

> There's no reason why a funcall from a module to an Elisp function (or
> vice versa) should be significantly slower than if it were implemented
> "in core".

It's going to be slower no matter what due to the indirection through a
function pointer.
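
A typical callback from a module already looks something like this: the
call goes through the emacs_env function-pointer table no matter how
emacs_value is represented.

/* Calling the Lisp function `message' from a module.  Every step is an
   indirect call through the emacs_env vtable.  */
#include <string.h>
#include <emacs-module.h>

static emacs_value
call_message (emacs_env *env, const char *text)
{
  emacs_value Qmessage = env->intern (env, "message");
  emacs_value args[] = { env->make_string (env, text, strlen (text)) };
  return env->funcall (env, Qmessage, 1, args);   /* indirect call */
}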

> Currently we're still pretty far from this ideal, because of the
> signal-catching (which additionally forces us to allocate+fill+pass
> a whole new "struct emacs_env_25" every time, instead of passing it once
> and for all when opening the module).

Are we? If so, that's a bug. We should be reusing this structure. We
shouldn't incur a penalty any more severe than setjmp, which in my
benchmarks is very fast.
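
The per-crossing cost I'm talking about is essentially this (a stand-in
sketch, not the real Emacs handler machinery):

/* Guarding a module call against non-local exits costs roughly one
   setjmp per boundary crossing.  */
#include <setjmp.h>

static _Thread_local jmp_buf module_guard;

static int
call_with_guard (void (*body) (void *), void *arg)
{
  if (setjmp (module_guard) != 0)
    return -1;     /* BODY, or Lisp it called, signalled */
  body (arg);
  return 0;        /* normal return */
}

/* The signal-catching path would do: longjmp (module_guard, 1);  */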

> Using "Lisp_Object = emacs_value" lets us get a bit closer to
> this ideal, tho.
> 
>> meanwhile, you're ignoring the compatibility benefits and consigning
>> everyone to stack scanning forever.
> 
> I don't foresee any disappearance of stack-scanning in the next ten
> years.

Emacs may end up moving to another VM, and that VM may very well do
precise GC.
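
To make the compatibility argument concrete: if emacs_value is an
indirect handle (say, an index into a per-environment table) rather
than a raw Lisp_Object, a precise or moving collector can relocate
objects and rewrite the table without modules ever holding raw object
addresses. Purely illustrative; none of these names are real Emacs
internals:

#include <stdint.h>
#include <stdlib.h>

typedef long Lisp_Object;    /* stand-in for the internal type */
typedef void *emacs_value;   /* opaque handle handed to modules */

struct value_table { Lisp_Object *slots; size_t used, cap; };

static emacs_value
register_value (struct value_table *t, Lisp_Object obj)
{
  if (t->used == t->cap)
    {
      t->cap = t->cap ? 2 * t->cap : 16;
      t->slots = realloc (t->slots, t->cap * sizeof *t->slots);
    }
  t->slots[t->used] = obj;
  /* The handle is the slot index, not the object's address, so the GC
     may move OBJ and update t->slots[i] at any time.  */
  return (emacs_value) (uintptr_t) t->used++;
}

static Lisp_Object
resolve_value (struct value_table *t, emacs_value v)
{
  return t->slots[(uintptr_t) v];
}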

There's a cautionary tale from Python here: PyPy is a much more
efficient implementation of the language than the traditional CPython
interpreter, but PyPy gets very little use, mostly because it doesn't
interact well with CPython extension modules, which have CPython
implementation details baked in. If Python had used a scheme more like
the one Philipp and I favor, using PyPy would be a much smaller hurdle.

> And any such disappearance if it ever happens will have much
> further reaching consequences on Emacs's C code, so we'll be *thrilled*
> to break backward compatibility for external modules at that point.

Why should modules have to break?
