From: Niels Möller
Subject: Comments on the hurd-on-l4 document
Date: 07 Jun 2005 18:54:51 +0200
User-agent: Gnus/5.09 (Gnus v5.9.0) Emacs/21.2
I've reread the current version of doc/hurd-on-l4, and I'd like to
post my comments and questions before I forget them again.
Section 3.3: Notifications
I think all notifications can be associated with a server state,
where notifications are sent whenever the state changes. If we focus
on death notifications, the corresponding state is the set of dead
objects that the client owns references to.
From this point of view, robustness can be guaranteed if the client
also polls the server state occasionally, although any trick that
lets the client know for sure that no polling is needed during long
periods of inactivity would be nice to have. One wouldn't want to page
in all clients and servers in the system once every minute just to let
them assure each other that nothing's happened.
If a client uses one thread per server it wants notifications from,
it's fairly simple:
It reduces to a loop over the function "tell me what's happened since
last time" which blocks until something actually happens, or returns
immediately if there are any changes the client hasn't heard about
before. On the client side, the call is sent with infinite read and
write timeouts; on the server side, it's a recv operation that just
records
that the client wants to be kept informed, and a send operation that
takes place when anything interesting happens. If something happens at
the server side when the client isn't listening, the server sets a
flag, and reports this change next time the client asks (and then the
client may need further exchanges with the server to sort things out).
So then it doesn't look like notifications at all.
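For concreteness, a minimal sketch of that loop, one thread per
server; server_get_changes, handle_event and the event structure are
invented names, and the stub is assumed to block (with infinite
timeouts) until the server has news:

  /* cap_t is the capability handle type used elsewhere in this mail.
     server_get_changes blocks until something has happened since the
     last call, or returns at once if changes are already pending.  */

  struct server_event
  {
    int kind;              /* e.g. a death notification */
    unsigned long object;  /* which object the event concerns */
  };

  extern int server_get_changes (cap_t server, struct server_event *ev);
  extern void handle_event (struct server_event *ev);

  static void *
  notification_loop (void *arg)   /* started once per server */
  {
    cap_t server = *(cap_t *) arg;
    struct server_event ev;

    while (server_get_changes (server, &ev) == 0)
      handle_event (&ev);  /* may do further RPCs to sort things out */

    return NULL;           /* server died or connection was lost */
  }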
It's more challenging to maintain synchronization with a single client
thread receiving notifications from multiple servers. One would need
one thread to go into the receive phase, and after that (this "after"
seems to be the tricky part; how can one know?) have a second thread
iterate over the servers asking them if anything's changed already. If
not, the client can safely go to sleep; when the first notification
arrives, it will either receive it right away, or, if it has been
paged out, it will receive a page fault.
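Sketched with invented names (reusing server_event and handle_event
from the previous sketch), with the unresolved step marked:

  /* receive_any is a blocking receive from any of the servers;
     server_poll asks one server whether anything has changed.  */

  extern int receive_any (struct server_event *ev);
  extern int server_poll (cap_t server);       /* returns 1 on changes */
  extern void catch_up (cap_t server);         /* further RPCs, not shown */

  static void *
  receiver (void *arg)
  {
    struct server_event ev;

    /* The tricky part: the polling pass below must run only after
       this thread has really entered the receive phase, and it is
       not clear how the polling thread can observe that.  */
    while (receive_any (&ev) == 0)
      handle_event (&ev);
    return NULL;
  }

  static void
  poll_all (cap_t *servers, unsigned n)
  {
    for (unsigned i = 0; i < n; i++)
      if (server_poll (servers[i]))
        catch_up (servers[i]);
  }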
Section 5.9.1: The pager
The code for the pager should typically be shared between multiple
tasks, so that it's not a big cost to keep it wired in memory. For the
pager data, I'm not sure if it really makes sense to page it out;
you'd typically page it out only when there's high memory pressure,
which means that the process is likely paging quite a lot, which means
that pager data is needed. In this situation, perhaps it would make
more sense to use swapping (in the old sense, i.e. swapping out the
complete task) than paging.
For the LRU pager that will be used by default for posix processes,
how will it get the memory usage statistics? Can it get them directly
from L4, or will it ask the physmem server? (As I have understood it,
a typical MMU not only generates page faults, but also maintains one
or a few flags per mapped page that are updated automatically whenever
a page is accessed, and this information is what's used to get
approximate information about which of the mapped pages were used
least recently).
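For illustration, the standard way to turn such accessed flags into an
approximate LRU is a clock (second-chance) scan. A minimal sketch,
where page_accessed is a hypothetical test-and-clear of that hardware
bit; whether such a primitive comes from L4 or from physmem is exactly
the question above:

  #include <stdbool.h>
  #include <stddef.h>

  struct page;                                 /* pager bookkeeping */
  extern bool page_accessed (struct page *p);  /* test and clear the bit */
  extern void evict (struct page *p);          /* page the victim out */

  /* Advance the clock hand until a page not referenced since the
     last sweep is found, and evict it.  */
  static void
  evict_one (struct page **pages, size_t npages, size_t *hand)
  {
    for (;;)
      {
        struct page *p = pages[*hand];
        *hand = (*hand + 1) % npages;
        if (!page_accessed (p))
          {
            evict (p);        /* not referenced recently: victim */
            return;
          }
        /* Referenced since last sweep: the bit is now cleared, so
           the page gets a second chance.  */
      }
  }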
Section 6.2.1: Signals
The document says (my numbering):

1. "Also, the proc server can not even hold task info caps to support
   the sender of a signal in bootstrapping the connection."

2. "This means that there is a race between looking up the signal
   thread ID from the PID in the proc server and acquiring a task
   info cap for the task ID of the signal receiver in the sender."

3. "However, in Unix, there is always a race when sending a signal
   using kill. The task server helps the users a bit here by not
   reusing task IDs as long as possible."
I think 3 is plain wrong. When you write "kill 4711" at the shell,
there's a race condition, but that's far from the only way to use
signals. When using kill on a child process, there's no race
condition, since one is guaranteed that the pid isn't reused until the
parent process has called one of the wait system calls.
At least this use case must be supported without any races.
As for 1., I don't think this is correct. For a moment, forget we're
talking about the task server. Say we have the proc server P, a
trusted server S, and a client C (the one that wants to send a
signal). P owns a capability x served by S. Now, C trusts P, and both
P and C trust S, and that's sufficient for using the ordinary handle
transfer protocol to copy the handle (S, x) from P to C. Right?
Now, the situation is a little different; x is a task info cap, and S
is not a general server, but the task server. But this situation ought
to be *simpler* than the situation with a general server. One can
solve it either by a general task-handle transfer protocol with the
task server, or by some simpler ad-hoc exchange between P and C.
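To make that concrete, a hedged sketch of one possible ad-hoc
exchange; every name here (the token type, the RPCs) is invented, and
this is just one shape such a protocol could take:

  /* P asks S (the task server) for a one-shot transfer token bound
     to C's task, hands the token to C over their existing IPC
     connection, and C redeems the token at S for its own reference.
     task_id_t and token_t are invented types.  */

  extern int task_make_transfer_token (cap_t task_server, cap_t x,
                                       task_id_t receiver,
                                       token_t *token);     /* P -> S */
  extern int task_redeem_token (cap_t task_server, token_t token,
                                cap_t *x);                  /* C -> S */
  extern void reply_to_client (task_id_t client, token_t token);

  /* In P, when client C asks for help sending a signal: */
  static void
  transfer_to_client (cap_t task_server, cap_t x, task_id_t client)
  {
    token_t tok;
    task_make_transfer_token (task_server, x, client, &tok);
    reply_to_client (client, tok);     /* over the existing P<->C IPC */
  }

  /* In C, after receiving the token from P: */
  static cap_t
  accept_transfer (cap_t task_server, token_t tok)
  {
    cap_t my_x;
    task_redeem_token (task_server, tok, &my_x);
    return my_x;                       /* C now holds its own ref */
  }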
As a side note, I think it might make sense to introduce some more
general "process-handle" capability which guarantees that the
corresponding pid isn't reused.
And there's one more issue I'd like to mention, and that is the
interaction with exec. I suspect there is a genuine race here; if you
send a signal to a posix process at about the same time as it calls
exec, I don't think there are any guarantees that the signal will be
handled in a meaningful way (or is it possible to block the signal
before exec, and unblock it after installing a new signal handler
after exec? Then one could guarantee delivery, and one needs to
transfer the set of pending signals from the old task to the new one).
And even if we can't guarantee delivery of the signal, we need to
provide some way for proc clients to deal with the fact that the
signal thread for a process can change over time. We could require
that the client asks proc "is this still the signal thread for process
4711?" everytime it sends a signal, but perhaps it can be handled in a
smarter way.
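For instance (all names invented), the client could cache the signal
thread and go back to proc only when delivery fails, e.g. because the
target has exec'ed in the meantime:

  #include <sys/types.h>   /* pid_t */

  /* thread_id_t is an invented integer thread id, 0 meaning unknown.
     deliver_signal is assumed to fail cleanly when the thread is
     gone or is no longer the signal thread.  */
  typedef unsigned long thread_id_t;

  extern int proc_get_signal_thread (cap_t proc, pid_t pid,
                                     thread_id_t *t);
  extern int deliver_signal (thread_id_t t, int signo);

  static int
  kill_pid (cap_t proc, pid_t pid, int signo, thread_id_t *cached)
  {
    if (*cached != 0 && deliver_signal (*cached, signo) == 0)
      return 0;

    /* Cache empty or stale: ask proc again and retry once.  */
    if (proc_get_signal_thread (proc, pid, cached) < 0)
      return -1;
    return deliver_signal (*cached, signo);
  }

This only removes the cost of asking proc every time; the genuine
race around exec discussed above remains.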
Section 6.5.1: Directory lookup across filesystems
I'm not sure I understand all the subtleties here, but for the
protocol side of it, I think it makes a lot of sense to do it in two
phases, as you sketch:
dir_lookup(directory, name)
never follows any translators, it just returns a capability for a node
and a flag that is set if there's a translator on top of the node. And
then use a completely separate call to follow the translator, if
desirable:
translator_lookup(node)
The libc code for open would do something like:

  int is_translator;
  cap_t node;
  struct stat sbuf;

  dir_lookup(dir, name, &node, &is_translator);
  if (is_translator && !(flags & O_NOTRANS))
    {
      fstat(node, &sbuf);
      if (translator_is_trusted(&sbuf))
        return translator_lookup(node);
    }
  return node;
(There should probably also be a flag O_TRANS or O_FORCE_TRANS that
always follows any translator settings, but you get the idea).
Section 8.8: Order of implementation
I know I shouldn't really argue about this since I'm not hacking on
the thing, but I have a strong feeling that one ought to start with
the very minimalistic "framework" needed in order to get some stupid
but working drivers for console, keyboard and IDE drive in place. But
you all probably knew that already.
Best regards,
/Niels (and please CC any replies to me)