l4-hurd

Re: Reliability of RPC services


From: Pierre THIERRY
Subject: Re: Reliability of RPC services
Date: Tue, 25 Apr 2006 17:38:39 +0200
User-agent: Mutt/1.5.11+cvs20060403

Marcus Brinkmann wrote on 25/04/2006 at 13:06:
> Cancellation of the request in C is no problem.  However, this
> cancellation will have no influence on anything but C (at this point,
> I am not considering cancellation forwarding).

But we were talking about sessionless protocols. I'm not sure we can
achieve cancellation forwarding without sessions.

Though there may be a means: what if cancellation were just some
operation on the reply FCRB? Is it possible for a process to get some
information about the FCRB it is planning to use to send the reply?
I suppose this also needs the heartbeat.

I'll temporarily call this operation severing the FCRB, though I'm not
sure the name is appropriate.

This would become:

- C invokes a cap on S, providing it an FCRB->C
- S invokes a cap on T, providing it an FCRB->S
- at each heartbeat, S checks the status of its reply FCRB
- T doesn't reply in a timely manner
- user cancels the operation
- C severs FCRB->C
- at the next heartbeat, when S checks the status of FCRB->C, it is
  informed it has been severed
- S severs FCRB->S

At this point, C and S have fully recovered from T's misbehaviour.
There's no leakage.

- if T also checks the status of FCRB->S, relying on the heartbeat, it
  will also know the operation has been cancelled, and it too can fully
  recover

IIUC, there's nothing to add to the current Coyotos kernel interfaces to
achieve this. To sever the FCRB, you just have to invoke the destroy
method, AFAIK. Checking the status should just be a matter of calling
getType() to see whether the capability has become a null capability.

Allegedly,
Nowhere man
-- 
address@hidden
OpenPGP 0xD9D50D8A


