Re: byte-code optimizations


From: Paul Pogonyshev
Subject: Re: byte-code optimizations
Date: Tue, 21 Sep 2004 22:42:47 -0200
User-agent: KMail/1.4.3

>     However, we (actually Stefan) need to decide whether we want to try
>     optimizing unbinds without visible bindings.  I.e. I'm optimizing
>
>           varbind-X ... unbind-N --> discard ... unbind-(N-1)
>
>     while Stefan suggests a more general
>
>           ... unbind-1 --> unbind-1 ...
>
>     which is then followed up by other (existing) optimizations.
>
> The way to decide this is to look at what the results would be in
> various cases.  Simply doing the unbind-1 earlier is not a significant
> improvement; it does not make the code smaller or faster.  As far as I
> can see, the only way it achieves a benefit is if it leads to
> eliminating the unbind-1.  That would only happen if Stefan's
> optimization moves the unbind-1 back to right after a varbind, right?
> If so, his optimization is indirectly equivalent to yours.

After a `varbind' or an `unwind-protect' or similar.  However, I doubt
the non-`varbind' cases make much difference, so Stefan's optimization
must be _nearly_ equivalent to mine.
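
To make concrete what either version is after, here is a hand-written
sketch (not actual compiler output) for `(let ((x 1)) (car y))', with
`x' and `y' dynamically bound:

    before:            after:
      constant 1         varref y
      varbind x          car
      varref y
      car
      unbind 1

`x' is never referenced between the varbind and the unbind, and the
intervening ops (a varref of a different variable and `car') are all
dynamically safe, so varbind-x becomes a discard, unbind-1 is dropped,
and the preceding constant is deleted together with the discard.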

> I can see one other case where Stefan's optimization might make a
> difference.  If we install your tail-jumping optimization, Stefan's
> optimization might produce more opportunities to do tail-jumping, or
> it might ruin some opportunities to do tail-jumping.  But I think
> that effect will be too small to matter anyway.

In its current form, the tail-jumping optimization is run only once,
after all other optimizations are done.  However, it can give better
results if interleaved with the other optimizations, i.e. it sometimes
opens up opportunities to optimize further.  Such an approach may be
too expensive in terms of compilation speed, so I left it out, at
least for now.
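
For what it's worth, here is a minimal sketch of such an interleaved
scheme; it is my own illustration, not code from byte-opt.el, and
`my-run-lap-passes' and its PASSES argument are hypothetical:

    (defun my-run-lap-passes (lap passes)
      "Apply each pass in PASSES to LAP until none of them changes it.
    Every pass is a function taking a LAP list and returning a
    \(possibly new) LAP list.  Hypothetical helper, for illustration."
      (let ((changed t))
        (while changed
          (setq changed nil)
          (dolist (pass passes)
            (let ((new (funcall pass lap)))
              (unless (equal new lap)
                (setq lap new
                      changed t)))))
        lap))

A pass that mutates LAP in place would of course need to report the
change itself; the sketch assumes purely functional passes.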

> Unless I have missed some point here, I think your version of the
> optimization is the better of the two (all else being equal), because
> it gets directly to the desired result instead of requiring it to be
> done in two steps.

It must also be much simpler (though I never saw what Stefan had).

> Some comments on details of your code:
>
>     +(put 't 'binding-is-magic t)
>     +(put 'nil 'binding-is-magic t)
>
> That is unnecessary; you can't bind nil or t,

Removed (I had followed the suggestions in the comments at the top of
the file).

>     +(put 'debug-on-next-call 'binding-is-magic t)
>
> That is unnecessary, since a function call would never
> be safe to do this optimization over,

Removed.

>     +(defconst byte-compile-side-effect-free-dynamically-safe-ops
>     +  '(;; Same as `byte-compile-side-effect-free-ops' but without
>     +    ;; `byte-varref', `byte-symbol-value' and certain editing
>     +    ;; primitives.
>
> Why exclude byte-symbol-value?

Because the value it returns can change as a result of the binding we
are trying to eliminate.
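
A small example of the problem, with a hypothetical variable `my-var':

    (defvar my-var 'global)

    (let ((my-var 'local))        ; the varbind in question
      (symbol-value 'my-var))     ; => local

If the binding were optimized away, `symbol-value' would return
`global' instead, so any `byte-symbol-value' between the varbind and
the unbind makes the rewrite unsafe.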

> After you deal with those little points, I'd like your code to be
> installed.

The modified patch is below.

> However, there is still the question of whether we should
> change the standard defsubst to work the way defsubst* does.

Maybe we can even use `defmacro' for `caar' and friends.  Since they
evaluate their lone argument only once, there shouldn't be any
problems, right?
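
For example (just a sketch with a hypothetical name, not a proposed
patch):

    (defmacro my-caar (x)
      "Macro version of `caar', for illustration only."
      `(car (car ,x)))

(my-caar (pop l)) expands to (car (car (pop l))), so the argument is
still evaluated exactly once, just as the function call would.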

If this (or `defsubst*') works, I'll investigate whether this
byte-code optimization gives any improvement after such a change.

Paul



--- byte-opt.el 22 Mar 2004 13:21:08 -0200      1.75
+++ byte-opt.el 21 Sep 2004 22:41:30 -0200      
@@ -1467,6 +1467,30 @@ of FORM by signalling the error at compi
      byte-member byte-assq byte-quo byte-rem)
    byte-compile-side-effect-and-error-free-ops))
 
+(defconst byte-compile-side-effect-free-dynamically-safe-ops
+  '(;; Same as `byte-compile-side-effect-free-ops' but without
+    ;; `byte-varref', `byte-symbol-value' and certain editing
+    ;; primitives.
+    byte-constant byte-dup byte-symbolp byte-consp byte-stringp byte-listp
+    byte-integerp byte-numberp byte-eq byte-equal byte-not byte-car-safe
+    byte-cdr-safe byte-cons byte-list1 byte-list2 byte-point byte-point-max
+    byte-point-min byte-following-char byte-preceding-char
+    byte-eolp byte-eobp byte-bolp byte-bobp
+    ;;
+    byte-nth byte-memq byte-car byte-cdr byte-length byte-aref
+    byte-get byte-concat2 byte-concat3 byte-sub1 byte-add1
+    byte-eqlsign byte-gtr byte-lss byte-leq byte-geq byte-diff byte-negate
+    byte-plus byte-max byte-min byte-mult byte-char-after
+    byte-string= byte-string< byte-nthcdr byte-elt
+    byte-member byte-assq byte-quo byte-rem))
+
+(put 'debug-on-error 'binding-is-magic t)
+(put 'debug-on-abort 'binding-is-magic t)
+(put 'inhibit-quit 'binding-is-magic t)
+(put 'quit-flag 'binding-is-magic t)
+(put 'gc-cons-threshold 'binding-is-magic t)
+(put 'track-mouse 'binding-is-magic t)
+
 ;; This crock is because of the way DEFVAR_BOOL variables work.
 ;; Consider the code
 ;;
@@ -1871,6 +1895,55 @@ If FOR-EFFECT is non-nil, the return val
                      (setq lap (delq lap0 lap))))
               (setq keep-going t))
              ;;
+             ;; varbind-X [car/cdr/ ...] unbind-1 --> discard [car/cdr/ ...]
+             ;; varbind-X [car/cdr/ ...] unbind-N
+             ;;   --> discard [car/cdr/ ...] unbind-(N-1)
+             ;;
+             ((and (eq 'byte-varbind (car lap1))
+                   (not (get (cadr lap1) 'binding-is-magic)))
+              (setq tmp (cdr rest))
+              (while
+                  (or
+                   (memq (caar (setq tmp (cdr tmp)))
+                         byte-compile-side-effect-free-dynamically-safe-ops)
+                   (and (eq (caar tmp) 'byte-varref)
+                        (not (eq (cadr (car tmp)) (cadr lap1))))))
+              (when (eq 'byte-unbind (caar tmp))
+                ;; Avoid evalling this crap when not logging anyway
+                (when (memq byte-optimize-log '(t lap))
+                  (let ((format-string)
+                        (args))
+                    (if (and (= (aref byte-stack+-info (symbol-value (car lap0)))
+                                1)
+                             (memq (car lap0) side-effect-free))
+                        (setq format-string
+                              "  %s %s [car/cdr/ ...] %s\t-->\t[car/cdr/ ...]"
+                              args (list lap0 lap1 (car tmp)))
+                      (setq format-string
+                            "  %s [car/cdr/ ...] %s\t-->\t%s [car/cdr/ ...]"
+                            args (list lap1 (car tmp) (cons 'byte-discard 0))))
+                    (when (> (cdar tmp) 1)
+                      (setq format-string (concat format-string " %s"))
+                      (nconc args (list (cons 'byte-unbind (1- (cdar tmp))))))
+                    (apply 'byte-compile-log-lap-1 format-string args)))
+                ;; Do the real work
+                (if (and (= (aref byte-stack+-info (symbol-value (car lap0)))
+                            1)
+                         (memq (car lap0) side-effect-free))
+                    ;; Optimization: throw const/dup/... varbind right away.
+                    (progn
+                      (setcar rest (nth 2 rest))
+                      (setcdr rest (nthcdr 3 rest)))
+                  (setcar lap1 'byte-discard)
+                  (setcdr lap1 0))
+                (if (= (cdar tmp) 1)
+                    (progn
+                      ;; Throw away unbind-1
+                      (setcar tmp (nth 1 tmp))
+                      (setcdr tmp (nthcdr 2 tmp)))
+                  (setcdr (car tmp) (1- (cdar tmp))))
+                (setq keep-going t)))
+             ;;
              ;; X: varref-Y    ...     varset-Y goto-X  -->
              ;; X: varref-Y Z: ... dup varset-Y goto-Z
              ;; (varset-X goto-BACK, BACK: varref-X --> copy the varref down.)




