emacs-devel

Re: String encoding in json.c


From: Philipp Stephani
Subject: Re: String encoding in json.c
Date: Tue, 26 Dec 2017 21:42:54 +0000



Eli Zaretskii <address@hidden> wrote on Sat, 23 Dec 2017 at 19:19:
> From: Philipp Stephani <address@hidden>
> Date: Sat, 23 Dec 2017 17:27:22 +0000
> Cc: address@hidden
>
> - We encode Lisp strings when passing them to Jansson. Jansson only accepts UTF-8 strings and fails (with
> proper error reporting, not crashing) when encountering non-UTF-8 strings. I think encoding can only make a
> difference here for strings that contain sequences of bytes that are themselves valid UTF-8 code unit
> sequences, such as "Ä\xC3\x84". This string is encoded as "\xC3\x84\xC3\x84" using utf-8-unix. (Note how
> this is a case where encoding and decoding are not inverses of each other.) Without encoding, the string
> contents will be \xC3\x84 plus two invalid 5-byte sequences. I think it's not obvious at all which interpretation is
> correct; after all, "Ä\xC3\x84" is not equal to "ÄÄ", but the two strings now result in the same JSON
> representation. This could be at least surprising, and I'd argue that the other behavior (raising an error) would
> be more correct and more obvious.

I think we need to take a step back and decide what we would want to
do with strings which include raw bytes.  If we pass such strings to
Jansson, it will just error out, right?

Yes
 
  If so, then we could do one
of the two:

  . Check up front whether a Lisp string includes raw bytes, and if
    it does, signal an error before even trying to encode it.  I think
    find_charsets_in_text could be instrumental here; alternatively,
    we could scan the string using BYTES_BY_CHAR_HEAD, looking for
    either sequences longer than 4 bytes or 2-byte sequences whose
    leading bytes are C0 or C1 (these are the raw bytes).

  . Or we could encode the string, pass it to Jansson, and let it
    error out; then we could produce our own diagnostics.

That's what we are currently doing.
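
(For concreteness, option (a) could look roughly like the sketch below. This is untested, the function name is made up, and it assumes a multibyte string; it just walks the internal representation with BYTES_BY_CHAR_HEAD as described above, treating 2-byte sequences whose leading byte is C0 or C1, and any sequence longer than 4 bytes, as raw bytes:

    #include "lisp.h"
    #include "character.h"

    /* Return true if the multibyte STRING contains raw bytes or other
       non-Unicode characters in its internal representation.  */
    static bool
    string_contains_raw_bytes (Lisp_Object string)
    {
      unsigned char *p = SDATA (string);
      unsigned char *end = p + SBYTES (string);
      while (p < end)
        {
          int len = BYTES_BY_CHAR_HEAD (*p);
          /* Raw bytes are stored as 2-byte sequences with leading byte
             C0 or C1; sequences longer than 4 bytes encode characters
             outside the Unicode range.  */
          if (len > 4 || *p == 0xC0 || *p == 0xC1)
            return true;
          p += len;
        }
      return false;
    }

A caller would then signal an error before even trying to encode.)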
 

Which one of these do you prefer? 

The third option: don't encode at all (pass SDATA directly), because we know that valid Unicode sequences are represented internally as valid UTF-8, that invalid Unicode sequences are represented as invalid UTF-8, and that Jansson behaves correctly in both cases.
Given otherwise equal behavior, I generally prefer the least complex option, and "doing nothing" is simpler than "doing something".
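
For the string case, that would look roughly like the following sketch. The helper name and the error symbol are placeholders, but json_stringn is documented to return NULL when its argument is not valid UTF-8, and that's where we'd signal a Lisp error:

    #include <jansson.h>
    #include "lisp.h"

    /* Sketch: pass the internal string bytes to Jansson unencoded and
       let Jansson reject invalid UTF-8, e.g. strings containing raw
       bytes.  Name and error symbol are placeholders.  */
    static json_t *
    lisp_string_to_json (Lisp_Object string)
    {
      json_t *json = json_stringn (SSDATA (string), SBYTES (string));
      if (json == NULL)
        /* Invalid UTF-8 or out of memory: signal an error instead of
           silently producing a different string.  */
        xsignal1 (Qjson_error, string);
      return json;
    }

Compared with encoding first, this also avoids one copy of the string contents on our side.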
 
Currently, you opted for the 2nd
one.  It is not clear to me that the option you've chosen is better,
since (a) it relies on Jansson,

That's fine, because we only rely on documented and tested behavior. Doing so is generally OK; if we couldn't rely on documented behavior, we couldn't use external libraries (including glibc) at all.
 
and (b) it encodes strings which don't
need to be encoded. 

True, that's why I argue we should remove the encoding step.
 
OTOH, the check I propose in (a) means a penalty
for every caller.  But then such penalties never deterred you elsewhere
in your code, so I wonder why this case is suddenly so different?

I generally prefer interface clarity and defensive programming, i.e. I don't want to introduce undefined behavior on unexpected user input, and I prefer signaling errors over silently doing something subtly wrong. But here the Jansson library already performs all the checks we need, so we don't need to add equivalent duplicate checks.
 

It is true that if we believe Jansson's detection of invalid UTF-8,
and we assume that raw bytes in their current representation will
forever be the only extension of UTF-8 in Emacs, we could pass the
internal representation to Jansson.  Personally, I'm not sure we
should make such assumptions, but that's me.

I think it's fine to make such assumptions.
- Jansson documents how it handles invalid UTF-8.
- Jansson includes multiple test cases that check for the behavior on encountering invalid UTF-8.
- Emacs itself now also includes multiple test cases for such inputs.
- Jansson gets high scores in the nativejson-benchmark conformance tests (the remaining failures are corner cases involving real numbers, which are arguably not true errors and don't affect string handling).
- We don't need to assume that Emacs's internal encoding stays UTF-8-compatible forever, but we can still rely on it. Given the importance and widespread use of UTF-8, it's unlikely that our internal encoding will have to change to something else within the next couple of years. Even if the need to change the encoding should arise, the existing regression tests should alert us immediately about what needs to change.
Emacs is a relatively monolithic codebase, where it's common for some compilation units to rely on implementation details of other compilation units. That's not super great, but also not a strong reason to artificially restrict ourselves from using global knowledge about fundamental data types such as strings. We expose SDATA and SBYTES in lisp.h, so why can't we say what the bytes at SDATA actually contain?
 

> - We decode UTF-8 strings after receiving them from Jansson. Jansson guarantees to only ever emit
> well-formed UTF-8. Given that for well-formed UTF-8 strings, the UTF-8 representation and the Emacs
> representation are one and the same, we don't need decoding.

Once again: do we really want to rely on external libraries to always
DTRT and be bug-free?

Yes, we need to do that, otherwise we couldn't use external libraries at all.
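
Concretely, on the parsing side the no-decoding variant would build the Lisp string directly from Jansson's UTF-8 data, roughly like this (the helper name is a placeholder; same includes as the earlier sketch):

    /* Sketch: convert a Jansson string to a Lisp string without a
       decoding step, relying on Jansson emitting only well-formed
       UTF-8, which coincides with the internal representation.
       Passing -1 as the character count makes make_specified_string
       count the characters itself.  */
    static Lisp_Object
    json_string_to_lisp (json_t *json)
    {
      return make_specified_string (json_string_value (json), -1,
                                    json_string_length (json), true);
    }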
 
  We don't normally rely on external sources like
that. 

We do so all the time. For example, we rely on malloc(123) actually returning either NULL or a memory block of at least 123 bytes.
 
The cost of decoding is not too high;

It's not extremely high, but it is significant. Users of JSON serialization, such as Language Server Protocol clients or YCM, regularly encode and decode large JSON objects on every keystroke, so the JSON functions need to be fast. If we can speed them up by *removing* code (and thus complexity), then we should do it.
 
the price users will pay
for Jansson's bugs will be much higher.

We shouldn't add workarounds for bugs just because they might happen in the future. True, bugs are possible in any library, but we could just as well hit a bug in malloc, which would be far more disastrous, and we don't proactively work around theoretical malloc bugs. If and when we encounter a serialization bug in Jansson that produces invalid UTF-8, I'm more than happy to add a workaround, but not for non-existent bugs.
 

>    And second, encoding keeps the
>  encoding intact precisely because it is not a no-op: raw bytes are
>  held in buffer and string text as special multibyte sequences, not as
>  single bytes, so just copying them to output instead of encoding will
>  produce non-UTF-8 multibyte sequences.
>
> That's the correct behavior, I think. JSON values must be valid Unicode strings, and raw bytes are not.

Neither are the internal representations of raw bytes, so what's your
point here?

The point is that encoding a multibyte string containing a sequence of two raw bytes can produce a valid UTF-8 string, while using the internal bytes directly cannot. Using the example from my earlier message: encoding "Ä\xC3\x84" with utf-8-unix yields "\xC3\x84\xC3\x84", which is valid UTF-8 even though the Lisp string is not a well-formed Unicode sequence, whereas passing the internal bytes through unchanged hands Jansson an invalid UTF-8 sequence, which it rejects.
 

>  >   /* We need to send a valid UTF-8 string.  We could encode `object'
>  >      but by not encoding it, we guarantee it's valid utf-8, even if
>  >      it contains eight-bit-bytes.  Of course, you can still send
>  >      manually-crafted junk by passing a unibyte string.  */
>
>  If gnutls.c and dbusbind.c don't encode and decode text that comes
>  from and goes to outside, then they are buggy.
>
> Not necessarily. As mentioned, the internal encoding of multibyte strings is even mentioned in the Lisp
> reference; and the above comment indicates that it's OK to use that information at least within the Emacs
> codebase.

I think that comment is based on a mistake, or maybe I don't really
understand it.  Internal representation is not in general valid UTF-8,
that's for sure.

Agreed, the comment should at least be reworded, e.g.: "If OBJECT is a well-formed Unicode scalar value sequence, the range of unencoded bytes is a valid UTF-8 string, so we don't need to encode it. If OBJECT is not well-formed, or is unibyte, the function will return EINVAL instead of exhibiting undefined behavior."
 

And the fact that the internal representation is documented doesn't
mean we can draw conclusions like that.

Why? Surely we can make use of documented information.
 
For starters, the
documentation doesn't tell the whole story: the 2-byte representation of
raw bytes is not described there.

What's the 2-byte representation?
 

> Some parts are definitely encoded, but for example, there is c_hostname in Fgnutls_boot, which doesn't
> encode the user-supplied string.

That's a bug.

Maybe, maybe not. gnutls_server_name_set explicitly documents that the hostname is interpreted as UTF-8 (presumably even on Windows), so if we can rely on the UTF-8-ness of strings, not encoding it is OK.
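
(If we did decide to treat it as a bug, the defensive fix would be small; roughly the following fragment in the context of Fgnutls_boot, assuming the ENCODE_UTF_8 helper from coding.h:

    /* Hypothetical defensive variant; the current code takes SSDATA of
       the user-supplied string without encoding it.  */
    hostname = ENCODE_UTF_8 (hostname);
    c_hostname = SSDATA (hostname);
    gnutls_server_name_set (state, GNUTLS_NAME_DNS,
                            c_hostname, strlen (c_hostname));

But since gnutls_server_name_set documents its argument as UTF-8, for well-formed strings the extra encoding step wouldn't change the bytes anyway.)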
 

>  Well, I disagree with that conclusion.  Just look at all the calls to
>  decode_coding_*, encode_coding_*, DECODE_SYSTEM, ENCODE_SYSTEM, etc.,
>  and you will see where we do that.
>
> We obviously do *some* encoding/decoding. But when interacting with third-party libraries, we seem to leave
> it out pretty frequently, if those libraries use UTF-8 as well.

Most if not all of those places are just bugs.  People who work mostly
on GNU/Linux tend to forget that not everything is UTF-8.

Definitely true for files and processes, but if an API (such as GnuTLS or Jansson) explicitly documents that it expects UTF-8, then we should be able to rely on that. 
