From: Stephen J. Turnbull
Subject: Re: Emacs 23 character code space
Date: Thu, 27 Nov 2008 10:10:35 +0900

Handa-san, konnichiwa.  Chotto shitsurei shimasu.  (Hi, Mr. Handa.
I'll be a little rude here in hope of saving you some time. :-)

Eli Zaretskii writes:
 > > From: Kenichi Handa <address@hidden>
 > > CC: address@hidden, address@hidden
 > > Date: Wed, 26 Nov 2008 13:58:26 +0900
 > > 
 > > I'll explain it a little bit more.  To decode a character
 > > sequence to a byte sequence, Emacs actually does two kinds
 > > of decoding as below:
 > > 
 > >              (1)                                (2)
 > > characters <-----> (charset code-point) pairs <-----> bytes
 > 
 > Can you give a couple of examples, for some popular charsets, and how
 > we decode bytes into characters thru these pairs of charsets and code
 > points?

(As you point out later, this is normally denoted 'encoding' in Mule
functions like `encode-coding-region'.  Encoding is actually the
harder case to explain because the sequence of bytes produced for a
given character is context-dependent, so let's look at the bytes -->
characters conversion.)
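
To pin the terminology down, here is a minimal sketch in Emacs Lisp
(using the standard `encode-coding-string' and `decode-coding-string'
primitives; I write the ideograph as its Unicode code point #x4E00 to
keep the example ASCII-clean):

  ;; characters --> bytes is "encoding" in Mule terms:
  (encode-coding-string (string #x4E00) 'iso-2022-jp)
    ;; => "\e$B0l\e(B", the escape sequences around the octets ?0 ?l

  ;; bytes --> characters, the direction walked through below, is
  ;; "decoding":
  (decode-coding-string "\e$B0l\e(B" 'iso-2022-jp)
    ;; => a one-character Mule string containing ichi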

Let's begin at the beginning, with the number 1 represented by the
Japanese ideograph ichi in the iso-2022-jp coding system, in an HTML
header element alone on a line.  Viewed as octets interpreted as
ASCII[1], it looks like this in a file:

< H 1 > ESC $ B 0 l ESC ( B < / H 1 > ^J

where a literal space character would be written SPC (ie, the spaces
shown above are only separators; there are no significant spaces on
the line).

(0) We decide to decode using the ISO-2022-JP coding system, which
    allows multiple charsets (at least ASCII and the Japanese JIS X
    0208 and JIS X 0212, and it is explicitly extended to other
    registered charsets as ISO-2022-INT).
(1) We initialize the current charset to ascii, as mandated by the
    RFC defining ISO-2022-JP.
(2) We collect the octet < and translate it to a pair (trivially:
    apply the identity to the octet and pair it with 'ascii), giving
    `(ascii <)'.
(3) We convert to a Mule character (which is an integer) with
    "(csetidx(ascii) << 7) + ?<".
(4) We repeat (2) and (3) for H, 1, and >.
(5) We see the control sequence ESC $ B, and switch our current
    charset to japanese-jisx0208.
(6) JIS X 0208 is a dimension-2 charset, so we collect two octets,
    0 and l, and form the pair (japanese-jisx0208 (0 l)).
(7) We convert to a Mule character (which is an integer) with
    "(csetidx(japanese-jisx0208) << 14) + (?0 << 7) + ?l".
(8) We see the control sequence ESC ( B, and switch our current
    charset to ascii.
(9) We repeat (2) and (3) for <, /, H, 1, >, and ^J.
(10) We are now done with the line, and we are ready to read more
     ASCII *or an escape sequence*.

So the full process (1) -- (10) corresponds to the coding system
ISO-2022-JP, while the process (2) -- (3) corresponds to the charset
ascii, and the process (6) -- (7) to the charset japanese-jisx0208.
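
Here is the same pipeline poked at from Lisp, as a sketch (assuming
Emacs 23's Mule API; note that an Emacs 23 character is a Unicode
code point, so step (7) comes out as #x4E00 rather than the packed
Mule integer, but the (charset code-point) layer is the same):

  ;; The whole coding system, steps (1) -- (10): bytes --> characters.
  (decode-coding-string "<H1>\e$B0l\e(B</H1>\n" 'iso-2022-jp)
    ;; => "<H1>[ichi]</H1>\n" as a Mule string

  ;; The charset layer, steps (2) -- (3) and (6) -- (7): one (charset
  ;; code-point) pair maps to one character and back.
  (decode-char 'japanese-jisx0208 #x306C)  ;; ?0 = #x30, ?l = #x6C
    ;; => #x4E00, the character ichi
  (encode-char #x4E00 'japanese-jisx0208)
    ;; => #x306C, the two octets packed into one code point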

 > Thanks.  What confuses me is that, roughly, there's a charset in Emacs
 > 23 for every coding-system, and they both have almost identical names.

No, not even close.  It happens that people who use ISO 8859 coded
character sets generally see only such coding systems (ie, unibyte,
single-script ones), so you would get that impression, because the
coding systems for those encodings are trivial.

However, for Unicode (as Juanma points out) the charset is always
Unicode (I don't know how that is spelled in Emacs, and XEmacs
technically doesn't have such a charset currently), while various
coding systems such as UTF-8, UTF-16, and UTF-32, in big- and
little-endian variants, are used to decode bytes to Unicode code
points.  For the East Asian coding systems and legacy ISO 2022
multiscript texts (ISO 2022 can handle not only Japanese, but also
mixtures of, say, the Russian ISO-8859-5 and Hebrew ISO-8859-8
repertoires), it's even more complicated.
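
For instance, as a sketch with the stock Emacs coding system names
(again writing the ideograph as its code point):

  ;; One Unicode character, several byte encodings of U+4E00:
  (encode-coding-string (string #x4E00) 'utf-8)     ;; => bytes #xE4 #xB8 #x80
  (encode-coding-string (string #x4E00) 'utf-16be)  ;; => bytes #x4E #x00
  (encode-coding-string (string #x4E00) 'utf-16le)  ;; => bytes #x00 #x4E
  ;; Each of them decodes back to the very same character:
  (decode-coding-string "\344\270\200" 'utf-8)      ;; => (string #x4E00)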

Note that because of the need to handle ASCII in programming, all of
the Asian coding systems must be multiscript, handling at least ASCII
and the national character set(s).

 > For example, the code point of a-umlaut in the iso-8859-1 charset is
 > exactly identical to the byte value produced by encoding that
 > character with iso-8859-1 coding-system.  So I wonder why we need
 > both in Emacs.  Why can't we, for example, decode bytes directly into
 > Emacs characters?

Because each byte is overloaded.  In the process above, the escape
sequences let us decode to the Mule string "<H1>[ichi]</H1>\n";
without them we would get "<H1>0l</H1>\n", also known as "mojibake"
(with a little bit of poetic license, the Japanese for "monster
characters").

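Both halves of that are easy to check from Lisp; a sketch (iso-8859-1
names both a charset and a coding system in Emacs 23, which is
exactly the coincidence at issue):

  ;; The iso-8859-1 charset code point of a-umlaut (U+00E4) ...
  (encode-char #xE4 'iso-8859-1)                    ;; => 228, ie #xE4
  ;; ... is byte-for-byte what the iso-8859-1 coding system emits:
  (encode-coding-string (string #xE4) 'iso-8859-1)  ;; => the single byte #xE4

  ;; But bytes are overloaded: the ISO-2022-JP bytes from above,
  ;; misread as Latin-1, decode to eight characters of mojibake
  ;; instead of one ideograph.
  (decode-coding-string "\e$B0l\e(B" 'iso-8859-1)
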
Footnotes: 
[1]  This kind of ASCII-compatibility is deliberate in ISO 2022.




