Re: Tokenizing
From: Vladimir Kazanov
Subject: Re: Tokenizing
Date: Sat, 20 Sep 2014 19:40:53 +0300
> Tokenizing the whole buffer after any change is easily fast enough (on
> modern hardware), even on a 7000 line buffer. Semantic parsing gets a
> lot slower.
>
This is what I do right now in my prototype of a smarter Python mode.
The tokenizing itself is usually fast enough, but parsing is more
complicated, and rebuilding the parse tree can take noticeable time.
An incremental approach is the natural next step there.
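To illustrate the first point, here is a minimal sketch of re-tokenizing an entire buffer with Python's standard-library tokenize module; the function name `tokenize_buffer` and the sample source are illustrative, not part of the prototype discussed above:

```python
import io
import tokenize

def tokenize_buffer(source):
    """Re-tokenize a whole buffer from scratch.

    Even on large files this is typically cheap compared to
    rebuilding a full parse tree.
    """
    return list(tokenize.generate_tokens(io.StringIO(source).readline))

src = "def f(x):\n    return x + 1\n"
toks = tokenize_buffer(src)
# Python's tokenizer reports keywords and identifiers alike as NAME tokens.
names = [t.string for t in toks if t.type == tokenize.NAME]
# names == ['def', 'f', 'x', 'return', 'x']
```

Note that this throws away all previous tokens on every change; the incremental approach mentioned above would instead reuse tokens outside the edited region.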
--
Yours sincerely,
Vladimir Kazanov