bug#20154: 25.0.50; json-encode-string is too slow for large strings


From: Dmitry Gutov
Subject: bug#20154: 25.0.50; json-encode-string is too slow for large strings
Date: Sat, 21 Mar 2015 00:02:51 +0200
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:36.0) Gecko/20100101 Thunderbird/36.0

On 03/20/2015 11:14 PM, Eli Zaretskii wrote:

> Making the string 10 times longer increases the runtime by a factor
> of ~5 here (0.1s -> 0.5s). Another 10x increase in length makes it
> run for 4.3 seconds.
>
> So maybe writing this in C is the way to go.

Maybe implementing `json-encode-string' itself in C isn't strictly necessary, or even particularly advantageous.

How about trying to optimize `replace-match' or `replace-regexp-in-string' (the two main approaches we could use to implement `json-encode-string') for the case of large input?
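For concreteness, here's a minimal sketch of that idea (`my-json-encode-string' is a made-up name, and this is not the actual json.el code; it only covers the escapes JSON mandates). The point is that the whole encoder reduces to a single `replace-regexp-in-string' pass over the input:

;; Sketch only -- not the real `json-encode-string'.  Escape the
;; double quote, backslash and control characters in one
;; `replace-regexp-in-string' pass, then wrap the result in quotes.
(defun my-json-encode-string (string)
  "Return STRING as a JSON string literal (simplified sketch)."
  (format "\"%s\""
          (replace-regexp-in-string
           "[\"\\[:cntrl:]]"
           (lambda (match)
             (let ((c (aref match 0)))
               (pcase c
                 (?\" "\\\"")
                 (?\\ "\\\\")
                 (?\n "\\n")
                 (?\t "\\t")
                 (?\r "\\r")
                 (?\f "\\f")
                 (?\b "\\b")
                 (_ (format "\\u%04x" c)))))
           string t t)))

;; (my-json-encode-string "a\"b\nc") => "\"a\\\"b\\nc\""

If `replace-regexp-in-string' were fast on large inputs, a function like this would be fast too.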

Take this example:

;; Two 300,000-character test strings: s1 contains 30,000 newlines
;; (one per 10 chars), s2 only 15,000 (one per 20 chars).
(require 'cl-lib)
(setq s1 (apply #'concat (cl-loop for i from 1 to 30000
                                  collect "123456789\n"))
      s2 (apply #'concat (cl-loop for i from 1 to 15000
                                  collect "1234567890123456789\n")))

On my machine,

(replace-regexp-in-string "\n" "z" s1 t t)

takes ~0.13s, while

(replace-regexp-in-string "\n" "z" s2 t t)

clocks in at ~0.08-0.10s.

Which is, again, pretty slow by modern standards.

(And I've only now realized that `replace-regexp-in-string' is itself implemented in Lisp; all the more reason to move it to C.)
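For anyone wanting to reproduce such numbers, `benchmark-run' reports this kind of timing directly:

(require 'benchmark)
;; Returns (TOTAL-SECONDS GC-RUNS GC-SECONDS); divide the total by the
;; repetition count for a per-call average.
(benchmark-run 10
  (replace-regexp-in-string "\n" "z" s1 t t))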

Replacing "z" with #'identity (so now we include a function call overhead) increases the averages to 0.15s and 0.10s respectively.




