mit-scheme-devel

Internal read-char #f vs eof-object


From: Taylor R Campbell
Subject: Internal read-char #f vs eof-object
Date: Mon, 4 Sep 2023 23:38:04 +0000

Every now and then in IMAIL I get this error:

The object #f, passed as the first argument to char->integer, is not
the correct type.

 S0  (char->integer #f)
 S1  ;unknown compiled code
 S2  (char-code char)
 S3  (char-in-set? char delimiters)
 S4  ;unknown compiled code
 S5  (read-delimited-string delimiters port)
...

The call to char-in-set? is inside input-port/read-string.  It looks
like generic-io/read-char returns either a character or whatever the
input buffer's normalizer returns.  What might that be?

The crlf normalizer explicitly returns #f in some cases.  The newline
normalizer (the one used in this case) defers to decode-char, which
returns whatever peek-byte returns if it's not a fixnum.  peek-byte,
in turn, is defined in terms of peek-u8, and returns eof-object in
some states, but #f in the closed state (which is the state the port
is in here).
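
To make that concrete, here is a rough sketch in portable Scheme of
the chain as I read it; the names (peek-byte-sketch,
decode-char-sketch) and the symbolic state argument are mine, not the
actual internals:

  ;; Stand-in for peek-byte/peek-u8: a byte when open, the eof object
  ;; in some states, but #f in the closed state.
  (define (peek-byte-sketch state next-byte)
    (case state
      ((open)   next-byte)
      ((eof)    (eof-object))
      ((closed) #f)))

  ;; Stand-in for decode-char: a non-fixnum result is passed straight
  ;; through, so the #f escapes as the "character" that was read.
  (define (decode-char-sketch state next-byte)
    (let ((b (peek-byte-sketch state next-byte)))
      (if (exact-nonnegative-integer? b)
          (integer->char b)
          b)))

  (decode-char-sketch 'open 97)    ;=> #\a
  (decode-char-sketch 'closed 0)   ;=> #f, which later reaches
                                   ;   char->integer and signals the
                                   ;   error above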

Personally I'm partial to #f because I think the eof object is a
design mistake (read should have just returned two values or similar),
and #f is conveniently represented by the all-zero machine word.
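
To be concrete about what I mean by "two values", something along
these lines would do (purely illustrative, not a proposal for any
existing interface; read-char/maybe is a made-up name):

  ;; Return the character plus a flag, so there is no in-band
  ;; sentinel (eof object or #f) to confuse with real data.
  (define (read-char/maybe port)
    (let ((char (read-char port)))
      (if (eof-object? char)
          (values #f #f)
          (values char #t))))

  ;; Callers destructure both values explicitly:
  ;; (call-with-values (lambda () (read-char/maybe port))
  ;;   (lambda (char ok?)
  ;;     (if ok? char 'no-more-input)))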

But I don't care which one the internals use, as long as the logic is
consistent and the possible return values are clear everywhere.


