bug#17836: 24.3; `describe-fontset' confused about e.g. ?\C-@
From: Eli Zaretskii
Subject: bug#17836: 24.3; `describe-fontset' confused about e.g. ?\C-@
Date: Mon, 23 Jun 2014 19:17:01 +0300
> From: Samuel Bronson <naesten@gmail.com>
> Date: Sun, 22 Jun 2014 21:57:07 -0400
>
> Fontset:
> -misc-fixed-medium-r-semicondensed--13-*-*-*-*-*-fontset-xterm.default
> CHAR RANGE (CODE RANGE)
> FONT NAME (REQUESTED and [OPENED])
> C-@ .. � (#x43 .. #x10FFFF)
> -Misc-Fixed-medium-r-semicondensed--13-*-75-75-c-120-ISO10646-1
> --8<---------------cut here---------------end--------------->8---
>
> Notice how #x43 is NOT a representation of `?\C-@' but, in fact, of
> `?C'?
That's because print-fontset-element does this:
(beginning-of-line)
(let ((from (following-char))
IOW, it assumes that there's a single character there, not a
human-readable description of a character, such as "C-@".
How about submitting a patch that uses 'kbd', say?
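To illustrate the suggestion (this is a hedged sketch, not the actual patch): `kbd' parses a human-readable key description back into the underlying key sequence, so it recovers the real character where `following-char' only sees the first letter of the description:

```elisp
;; `kbd' turns a key description such as "C-@" into the key
;; sequence it names, so the first element is the real character:
(aref (kbd "C-@") 0)   ; => 0, i.e. ?\C-@
;; `following-char' at the start of "C-@ .. ..." instead yields the
;; letter ?C, which is where the bogus #x43 in the report comes from:
(aref (kbd "C") 0)     ; => 67, i.e. ?C
```

A patch along these lines would read the whole description up to the " .. " separator and feed it to `kbd' (or `read-kbd-macro') instead of taking a single character.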
> Why would you try to extract the codepoints AFTER formatting the
> range as a string ...?
Because the formatting of the codepoints is done by describe-vector,
which doesn't pass the codepoints to print-fontset-element. So it
needs to reverse-engineer the codepoints from the text that was
already inserted into the buffer.