Re: character encoding confusion
From: Pascal J. Bourguignon
Subject: Re: character encoding confusion
Date: Wed, 08 Dec 2010 15:17:49 -0000
User-agent: Gnus/5.101 (Gnus v5.10.10) Emacs/23.2 (gnu/linux)
patrol <patrol_boat@hotmail.com> writes:
> I created a program in C that requires the degree symbol. The mode
> line indicates that Emacs is using the Latin-1 character encoding.
> According to Latin-1 encoding tables, the degree symbol is encoded as
> decimal 176, so that's what I used in my code. But when the character
> printed, it wasn't the degree symbol; it was a "shaded box" looking
> thing. Then I looked at an ASCII table here (http://
> www.asciitable.com/), and it says that 176 is indeed the shaded box
> that was printed in my program, and the degree character was decimal
> 248. So I used 248 in my code, and I got the degree symbol I wanted.
176 is not an ASCII code. ASCII only defines codes 0 through 127.
The "extended ASCII" table on that page is actually CP437, the old
IBM PC code page, where 176 is the light-shade box and 248 is the
degree sign. The fact that 248 worked tells you your console was
decoding output as CP437, not Latin-1: the encoding Emacs saves the
file in says nothing about how your terminal renders bytes at run
time.
>
> But all this leaves me with the question that if Emacs was supposedly
> encoding the file in Latin-1, why doesn't the code for the degree
> symbol match up with the Latin-1 table? Why does it instead match up
> with some non-standard "extended" ASCII that I just happened to come
> across.
>
> Can anyone shed light on this?
Remember that C only deals with integers. There is no true character
type in C: a char is just a small integer, and output functions emit
raw bytes with no notion of what glyph they stand for.
So, what happens when you call: printf("%c",176); ?
Have a look at setlocale, LC_ALL, etc, and libiconv.
--
__Pascal Bourguignon__ http://www.informatimago.com/