From: Stefan Monnier
Subject: bug#15294: 24.3.50; js2-mode parser is several times slower in lexical-binding mode
Date: Sat, 14 Sep 2013 10:27:17 -0400
User-agent: Gnus/5.13 (Gnus v5.13) Emacs/24.3.50 (gnu/linux)

>> It seems the slowdown is indeed linked to the way `catch' is handled
>> (indeed, this non-idiomatic ELisp code ends up byte-compiled in a really
>> poor way).
> What's non-idiomatic about this use of `catch'?

The non-idiomatic part is the "one big let on top, with lots of setq
inside".  It's clearly C code in Elisp syntax.

> It does not make much of a difference in the interpreted mode.

The interpreted performance is affected by completely different factors.
My guess for the interpreted case is that there are simply "too many"
local variables: the environment is represented by a simple alist, so
variable lookup time is proportional to the number of local variables.
That's fine when there are 5 local variables, but it's inefficient when you
have 100 (better would be a balanced tree or maybe a hash table).
This said, I'm not terribly concerned about it: if you need it to go
fast, you should byte-compile the code.  And I hope we will be able to
get rid of the interpreter in some distantish future.
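
As a rough illustration (mine, not taken from profiling js2-mode): with
lexical-binding enabled and no byte-compilation, an interpreted closure
carries its environment as a plain alist, visible in its printed form:

   (let ((a 1) (b 2))
     (lambda () (+ a b)))
   ;; => something like (closure ((b . 2) (a . 1) t) nil (+ a b))
   ;; Every reference to `a' or `b' from the body scans that alist, so
   ;; the cost grows with the number of surrounding local variables.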

> Now that we have eager macro-expansion, I was rather happy that interpreted
> js2-mode performance is only like 2x worse than when compiled.

Eager macro-expansion indeed speeds up interpreted code, even though the
intention was rather to get one step closer to the elimination
of interpretation.

> But 2.6 vs 2.1, it's still a noticeable regression. Do you suppose the usage
> of `setq' is the main contributor?

The problem goes as follows:

1- Because of how the `catch' byte-code works, for a (catch TAG BODY)
   where BODY refers to some surrounding lexical variables LVARS, the
   byte-compiler needs to turn the code into something similar to:

   (let ((body-fun (make-closure LVARS () BODY)))
     (catch TAG (funcall body-fun)))

2- When a lexical variable is both
   a- captured in a closure, and
   b- not immutable (i.e. it gets `setq'd),
   the byte-compiler can't store this variable on the bytecode stack
   (since the closure can't refer to the bytecode stack directly, but
   instead stores *copies* of the elements it needs), so it needs to
   change code like

   (let ((lvar VAL1))
     ...
     (setq lvar VAL2)
     ...(lambda () ..lvar..)...)

   into
     
   (let ((lvar (list VAL1)))
     ...
     (setcar lvar VAL2)
     ...(lambda () ..(car lvar)..)...)
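
To make both points concrete, here's a minimal sketch under the same
assumptions (the function and helper names are hypothetical, and the
second definition only mimics the spirit of what the compiler does, not
its actual output):

   ;; One big `let' outside the `catch', with `setq' inside -- the
   ;; shape discussed above:
   (defun my-parse-token ()
     (let ((tt nil)
           (pos 0))
       (catch 'return
         (setq tt (my-next-char))     ; `tt' and `pos' are both mutated
         (setq pos (1+ pos))          ; and captured by the body closure
         (when (eq tt ?\;)
           (throw 'return 'semicolon))
         (cons tt pos))))

   ;; Roughly what the byte-compiler is forced into: the `catch' body
   ;; becomes a closure, and each mutated captured variable gets boxed
   ;; in a cons cell, accessed through `car'/`setcar'.
   (defun my-parse-token--boxed ()
     (let ((tt (list nil))
           (pos (list 0)))
       (catch 'return
         (funcall (lambda ()
                    (setcar tt (my-next-char))
                    (setcar pos (1+ (car pos)))
                    (when (eq (car tt) ?\;)
                      (throw 'return 'semicolon))
                    (cons (car tt) (car pos)))))))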

So if you look at `js2-get-token', you'll see that the code does not
directly use any closure, but the use of `catch' ends up putting most of
the body into various closures.  And since all the variables are declared
outside of the catch but used inside it, and they're all modified by
`setq', they all end up being converted as above, so that every use of
such a variable turns into "get the cons cell from the environment,
then apply `car' to it".

By moving the `let' inside the `catch', some of those variables are no
longer captured by a closure, so they don't need to be converted to
cons cells, hence the reduction from 5s down to 2.6s.
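
With the same hypothetical example, the fix looks like this: the `let'
sunk inside the `catch' means the variables are born inside the body
closure, so they can stay on the bytecode stack without any boxing:

   (defun my-parse-token ()
     (catch 'return
       (let ((tt (my-next-char))
             (pos 1))
         (when (eq tt ?\;)
           (throw 'return 'semicolon))
         (cons tt pos))))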

> (*) Would you take a look at it, too? It has quite a few changes in
> `js2-get-token' and related functions.

> They also make performing the same change as in your patch more
> difficult, since I'm actually using the value returned by `catch'
> before returning from the function.

That's not a problem.  The rule to follow is simply: sink the `let'
bindings closer to their uses.  You don't need to `let'-bind all those
vars together in one big `let': you can split it into several smaller
`let's which you can then move deeper into the code.  In some cases
you'll find that some of those vars don't even need to be `setq'd any
more.
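
For instance, when the value returned by the `catch' is needed
afterwards (all names below are made up), you can bind just that value
outside it and sink everything else:

   (defun my-scan ()
     (let ((token (catch 'done
                    ;; Working variables live inside the `catch', in a
                    ;; `let' of their own, so they need no boxing.
                    (let ((c (my-next-char)))
                      (when (eq c ?\n)
                        (throw 'done 'newline))
                      (my-classify c)))))
       ;; The value returned by the `catch' is still available here.
       (my-record-token token)
       token))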

Note that such "scope reduction" can also be done in C, and in many
cases it's a good idea there as well, tho the impact on performance is
much less significant because C doesn't have closures.

>> the mere existence of a single `setq' on a variable can sometimes
>> slow other chunks of code: in many cases `let' is cheaper than `setq').
> I see.  Does this also extend to `setf' and its defstruct-related
> functionality?

It has to do specifically with `setq' (i.e. modification of plain
variables): when `setf' expands to `setq', `setf' is affected;
otherwise it isn't.
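
A small sketch of that distinction (the struct and function names are
invented):

   (require 'cl-lib)

   (cl-defstruct my-node value)

   (defun my-bump (n)
     (let ((counter 0))
       ;; Plain variable: this `setf' expands to (setq counter ...), so
       ;; it can trigger the cons-cell boxing described above if
       ;; `counter' is also captured by a closure.
       (setf counter (1+ counter))
       ;; Struct slot: this `setf' expands to the slot's setter rather
       ;; than to `setq', so it is not affected.
       (setf (my-node-value n) counter)
       n))

   ;; To see what a given `setf' form turns into:
   ;;   (macroexpand '(setf (my-node-value n) 42))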


        Stefan




