bug-hurd

From: Sergey Bugaev
Subject: Re: [PATCH v3 2/2] Use O_IGNORE_CTTY where appropriate
Date: Mon, 5 Jun 2023 14:55:07 +0300

Hello,

On Mon, Jun 5, 2023 at 12:28 PM Florian Weimer <fweimer@redhat.com> wrote:
> > * shm_open, sem_open: These don't work with ttys
> > * opendir: Directories are unlikely to be ttys
>
> I scrolled through the patch, and it seems to me that all these open
> calls could use O_NOCTTY.

Perhaps they could indeed use O_NOCTTY, though that would be a
correctness fix/change (on non-Hurd) rather than a performance
optimization. Would you prefer me to also add O_NOCTTY to all these
calls? Or maybe I should even define O_IGNORE_CTTY to O_NOCTTY
(instead of 0) on non-Hurd?

Please note that O_NOCTTY is 0 on the Hurd, and opening a tty never
makes it our ctty; a process can only gain a ctty by explicitly
requesting one (using the TIOCSCTTY ioctl). An alternative way to
think of this: O_NOCTTY is always in effect on the Hurd.

> Do you still need a flag because the
> performance optimization is not about changing the controlling terminal,
> but about detecting the controlling terminal as such?

Yes, exactly. The point of this patchset is to skip the runtime ctty
detection when we statically know for sure that the file being opened
cannot be our current ctty. This is not supposed to alter observable
behavior, i.e. code that _can_ reasonably reopen the current ctty
should still work the same, but code that we know does not reopen our
ctty gets faster by skipping the check.

Please also see the v1 cover letter [0].

[0]: https://sourceware.org/pipermail/libc-alpha/2023-April/147355.html


It also helps to look with rpctrace at what this changes in practice.
In case you don't readily have access to a Hurd system, I've uploaded
the rpctrace of uname on Debian GNU/Hurd here: [1]. Unfortunately,
rpctrace's output format is rather messy compared to strace's; I had a
branch that attempted to improve it (along with many other fixes to
rpctrace), but it's stalled/lost, so the current format will have to
do for now.

[1]: https://paste.gg/p/anonymous/68f3b1efa3384bf5807affde8fb83d83

If you grep / Ctrl-F for 'ctty', you can see there:

  15<--28(pid1601)->term_getctty () = 0    4<--34(pid1601)
task13(pid1601)->mach_port_deallocate (pn{ 10}) = 0
  15<--28(pid1601)->term_open_ctty (0 0) = 0x40000016 (Invalid argument)
  15<--28(pid1601)->term_getctty () = 0    4<--34(pid1601)
task13(pid1601)->mach_port_deallocate (pn{ 10}) = 0
  15<--28(pid1601)->term_open_ctty (0 0) = 0x40000016 (Invalid argument)
  15<--28(pid1601)->term_getctty () = 0    4<--34(pid1601)
task13(pid1601)->mach_port_deallocate (pn{ 10}) = 0
  15<--28(pid1601)->term_open_ctty (0 0) = 0x40000016 (Invalid argument)

This is the initial process startup. This is obviously broken -- it
calls term_getctty/term_open_ctty on the same port (/dev/ttyp0 --
here, "15<--28(pid1601)") three times, getting an error each time!

I've fixed the EINVALs in 346b6eab3c14ead0b716d53e2235464b822f48f2
"hurd: Run init_pids () before init_dtable ()". Also, since fds 0, 1,
and 2 share the same port, it makes sense to do the check only once,
not once per fd -- that I have done in
e55a55acb19400a26db4e7eec6d4649e364bc8d4 "hurd: Avoid extra ctty RPCs
in init_dtable ()".

But then later you find these:

12<--31(pid1601)->dir_lookup
("usr/lib/locale/C.utf8/LC_IDENTIFICATION" 4194305 0) = 0 1 ""
45<--22(pid1601)
45<--22(pid1601)->term_getctty () = 0xfffffed1 ((ipc/mig) bad request
message ID)

12<--31(pid1601)->dir_lookup
("usr/lib/i386-gnu/gconv/gconv-modules.cache" 1 0) = 0 1 ""
45<--48(pid1601)
45<--48(pid1601)->term_getctty () = 0xfffffed1 ((ipc/mig) bad request
message ID)

(and many, many more). Fixing these is exactly what this patchset is
about: we *know* that gconv-modules.cache is not our ctty; no need to
check at runtime.

Hope this makes sense!

Sergey


