l4-hurd

Re: On hierarchy


From: Udo A. Steinberg
Subject: Re: On hierarchy
Date: Sun, 23 Oct 2005 23:32:59 +0200

On Sun, 23 Oct 2005 13:12:58 -0400 Jonathan S. Shapiro (JSS) wrote:

JSS> Here is the problem, but we are getting way off topic.
JSS> 
JSS> Two threads High Priority Thread and Low Priority Thread (resp. HPT,
JSS> LPT) are calling a common service. LPT calls first, and the service
JSS> grabs a lock. HPT now calls the service and needs the lock.
JSS> 
JSS> Question: which thread to escalate in order to get the lock released?
JSS> Answer: whichever thread will ultimately cause the lock to be released.

L4/Fiasco handles this as follows:
HPT obviously has the highest priority in the set of threads we're talking
about. HPT depends on SVC and LPT depends on SVC. HPT is blocked on the lock
it's trying to acquire and thus cannot run. But it can provide its time
quantum and priority (we call this tuple the scheduling context) to help
release the lock. The scheduler thus selects HPT's scheduling context and
follows the dependencies from that thread: HPT -> SVC. SVC is ready and
is dispatched (we say SVC is the execution context). The net effect is that
SVC runs with HPT's priority and consumes HPT's time. Once SVC releases the
lock, HPT becomes ready, the scheduler is invoked, selects HPT as scheduling
context and execution context, which effectively ends the donation.

JSS> Question: which thread is that?
JSS> Answer: not necessarily LPT. In fact, if calls and returns are
JSS> interleaved by the service it could be any thread in the system at all;
JSS> even one that has not been created yet.

The highest-priority thread that has a dependency on SVC will donate its
scheduling context to SVC, and SVC will run with that elevated priority until
it releases the lock the high-priority thread is blocked on. For your example
the answer is: LPT will definitely NOT run. Note that if there is a
high-priority thread that does not depend on SVC, then that thread will run,
and SVC and the threads depending on it won't make progress.

JSS> The atomicity contract guarantees that this is incorrect. First, by
JSS> "short held" I mean "locks that are only held for the duration of a
JSS> single, atomic system call". Within such a system, no deadlock is
JSS> possible. All locks must be grabbed before the commit point, and the
JSS> process that would deadlock simply backs out and starts over.

Recognizing that you are about to deadlock doesn't sound like a trivial thing
to me. Ensuring that you always grab locks in the same order seems much simpler.

JSS> > It's late and I fail to see how downgrading rights from read/write to
JSS> > read-only is any different from downgrading rights from present to
JSS> > non-present, except that we don't deallocate the affected nodes.
JSS> > Can you provide an example that illustrates your point?
JSS> 
JSS> I think so.
JSS> 
JSS> The case where you deallocate the nodes is much simpler. A node that has
JSS> been deallocated obviously does not need to be visited again.
JSS> 
JSS> However, if you are merely downgrading the nodes, you may have an
JSS> arbitrarily large number of nodes to visit. Let us imagine that there
JSS> exist 100,000 nodes to visit, and that we have set our arbitrary
JSS> constant bound at 100 steps. The algorithm must now proceed by doing 100
JSS> nodes at a time, and it therefore needs some way to keep track of where
JSS> it is in the tree (how much progress it has made).
JSS> 
JSS> We have already discussed that a debugger can make this operation take a
JSS> long time. Now consider:
JSS> 
JSS>    A initiates unmap
JSS>    B halts A (using debugging interface)
JSS>    C destroys A
JSS> 
JSS> Note that C is now blocked by B, who may not be a collaborator. This
JSS> violates the design rule that the party paying for storage should always
JSS> be able to promptly reclaim it.

If you require such a design rule, then unmap is not your mechanism of choice.
As I said, unmap latency depends on the size of the tree that must be visited
in order to carry out the unmap (or downgrade) operation.
If you want an upper time bound on the unmap operation, then you must place
an upper bound on the size of the affected mapping tree. For example, you
could extend map and friends so that the mapper can specify that the mappee
may itself map the received item on only N more times.

JSS> > Which performance numbers are you referring to?
JSS> 
JSS> You indicated that the UNMAP operation could be made restartable by
JSS> marking the mapping nodes "busy" in some way. This introduces checks in
JSS> both the map and the unmap code. Are these checks included in the
JSS> published performance numbers?

I still don't know which published performance numbers you are referring to.
Do you have a link to the document?

-Udo.
