
Re: [Tsp-devel] Flooding the BB msg queue


From: Eric Noulard
Subject: Re: [Tsp-devel] Flooding the BB msg queue
Date: Thu, 21 Jun 2007 00:28:52 +0200

2007/6/20, Frederik Deweerdt <address@hidden>:
> On Wed, Jun 20, 2007 at 04:16:43PM +0200, Eric Noulard wrote:
> > Maybe you can tell us the raw specification of what you need?
> Yep, I should have started with that one. I have several clients
> working with the same blackboard. There is one master (the first
> process launched) and N clients. When a variable is set in the
> BB through the master, the clients are supposed to be notified
> that something is happening.

OK, I see: you really need something like a "broadcast" notification,
in the spirit of pthread_cond_broadcast(), possibly with the added value
of "memorizing" the broadcast.
With a condition variable, if a client comes late the broadcast is lost,
whereas with a message queue the message may still be received later.
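
Just to illustrate (untested sketch, plain pthreads, nothing BB-specific):
pthread_cond_broadcast() only wakes the threads already blocked in
pthread_cond_wait(); any "memory" has to be added by hand with a flag, and
even then only the fact that something changed is remembered, not each
individual notification as a message queue would do.

#include <pthread.h>

static pthread_mutex_t lock   = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  notify = PTHREAD_COND_INITIALIZER;
static int data_changed = 0;  /* hand-made "memory": remembers only ONE change */

/* Master side: publish the change and wake every currently blocked waiter. */
void master_notify(void)
{
    pthread_mutex_lock(&lock);
    data_changed = 1;
    pthread_cond_broadcast(&notify);  /* a waiter arriving later sees no wakeup */
    pthread_mutex_unlock(&lock);
}

/* Client side: a late client is saved only by the data_changed flag,
 * and successive changes collapse into a single one, unlike a queue. */
void client_wait(void)
{
    pthread_mutex_lock(&lock);
    while (!data_changed)
        pthread_cond_wait(&notify, &lock);
    data_changed = 0;
    pthread_mutex_unlock(&lock);
}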

> In that case a bb_snd_msg() is called with mtype
> being the PIDs of the clients[1].

I assume you send one message for each client.
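
In raw SysV terms I imagine it boils down to something like the untested
sketch below (the struct layout and function name are mine, for illustration
only, not the real bb_snd_msg() API):

#include <string.h>
#include <sys/types.h>
#include <sys/ipc.h>
#include <sys/msg.h>

/* Hypothetical message layout: SysV imposes a leading long mtype. */
struct bb_notify_msg {
    long mtype;            /* here: the PID of the destination client    */
    char mtext[32];        /* payload, e.g. the name of the modified var */
};

/* Send one copy of the notification per registered client, addressing
 * each copy with mtype == client PID so that every client can msgrcv()
 * only "its" messages on the shared queue.                             */
int notify_all_clients(int msqid, const pid_t *clients, int nb_clients,
                       const char *payload)
{
    struct bb_notify_msg msg;
    int i;

    strncpy(msg.mtext, payload, sizeof(msg.mtext) - 1);
    msg.mtext[sizeof(msg.mtext) - 1] = '\0';

    for (i = 0; i < nb_clients; ++i) {
        msg.mtype = (long) clients[i];
        if (msgsnd(msqid, &msg, sizeof(msg.mtext), 0) == -1)
            return -1;     /* blocks (or fails) when the queue is full */
    }
    return 0;
}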

> Hence the problem: if one of them dies unexpectedly, we fill the queue,
> and we don't even know that. So no one gets notified anymore.

OK. Note that by the time you know the queue is full, the other, live
clients may ALREADY have been missing notifications for a while, even if
the queue is not yet entirely filled with the dead client's messages.
And in order to discover which client is dead, you either have to "wait"
until the queue is effectively full of the dead client's messages, or
start digging through the queue to try to guess who died.

If you had one queue per client then this issue disappears:
if a queue is full then you may consider its client dead
(note that a stalled client may well be considered dead too,
because it has stopped consuming messages).
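
Detection then becomes a simple non-blocking send on the per-client queue;
untested sketch, hypothetical names:

#include <errno.h>
#include <stddef.h>
#include <sys/ipc.h>
#include <sys/msg.h>

/* With one queue per client, a non-blocking send tells the server
 * immediately when that particular client's queue is full.  Since a
 * healthy client keeps draining its own queue, a persistently full
 * queue is a strong hint that the client is dead (or stalled).     */
int notify_one_client(int client_msqid, const void *msg, size_t payload_len)
{
    if (msgsnd(client_msqid, msg, payload_len, IPC_NOWAIT) == -1) {
        if (errno == EAGAIN)
            return 1;      /* queue full: candidate dead/stalled client */
        return -1;         /* some other send error */
    }
    return 0;              /* delivered */
}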

> [1] The PIDs are known because a REGISTER msg is sent after a client
> successfully attaches to the BB

Ah ah, you have a REGISTER message!!
Then, when the server receives the REGISTER message, it may create a NEW
message queue for this particular client and send back a REGISTERED
message from server to client, with mtext containing the IPC id of the
freshly created queue.
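
On the server side this could look like the untested sketch below (message
layout and names are made up for illustration, not the real BB API; the
reply is addressed with mtype == client PID as above):

#include <stdio.h>
#include <sys/types.h>
#include <sys/ipc.h>
#include <sys/msg.h>

/* Hypothetical REGISTERED reply: mtext carries the id of the private
 * queue created for this client, as a decimal string.               */
struct bb_registered_msg {
    long mtype;            /* addressed to the client: its PID */
    char mtext[32];
};

/* On reception of a REGISTER message from client_pid: create a brand
 * new queue for that client and send its IPC id back on the shared
 * (control) queue.                                                  */
int handle_register(int ctrl_msqid, pid_t client_pid)
{
    struct bb_registered_msg reply;
    int private_id;

    private_id = msgget(IPC_PRIVATE, IPC_CREAT | 0600);
    if (private_id == -1)
        return -1;

    reply.mtype = (long) client_pid;
    snprintf(reply.mtext, sizeof(reply.mtext), "%d", private_id);
    if (msgsnd(ctrl_msqid, &reply, sizeof(reply.mtext), 0) == -1)
        return -1;

    return private_id;     /* server keeps it to address this client later */
}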

The problem is that your server and client now have an "out of blackboard"
message queue, and you may not use this NEW message queue with
the current bb_msgq_send API. Either you end up using the raw
SysV msgsnd API directly, or you create a new
bb_msg_send_private API which would be the same as the current
bb_msgq_send plus a "private id" argument specifying the queue identifier.
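
The new API could be as thin as the purely hypothetical prototype below; I
have not checked the real bb_msgq_send signature here, so the parameter
types are only placeholders for whatever blackboard handle and message
types it actually takes:

/* Same contract as the current bb_msgq_send is assumed to have, plus
 * an explicit SysV queue id so the call can target the per-client
 * ("private") queue instead of the blackboard's own one.            */
int bb_msgq_send_private(void *bb_handle, void *msg, int private_msqid);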

Now I would say that there is another, more "implicit" way to do it all.
Each time a process attaches itself to the BB,
shm_nattch gets incremented (see shmctl(2) + IPC_STAT + struct shmid_ds).

So when the server sends the notification it may put as many
messages in the queue as there are attached processes.
When the number of processes goes down, the number of messages
goes down too. The only trouble I can currently imagine with this scheme is
that one "fast" client may starve the others by
"eating" all the messages. Remember that in this scheme the server
sends as many IDENTICAL messages as there are clients.
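
An untested server-side sketch of that scheme (hypothetical message layout;
note that shm_nattch also counts the server's own attachment, so you may
want to send one copy less):

#include <sys/ipc.h>
#include <sys/shm.h>
#include <sys/msg.h>

/* Hypothetical notification message (SysV imposes a leading long). */
struct bb_notify_msg {
    long mtype;
    char mtext[32];
};

/* Ask the kernel how many processes are currently attached to the BB
 * shared memory segment and push that many copies of the message.   */
int notify_all_attached(int shmid, int msqid, const struct bb_notify_msg *tmpl)
{
    struct shmid_ds ds;
    struct bb_notify_msg msg = *tmpl;
    unsigned long i, nattch;

    if (shmctl(shmid, IPC_STAT, &ds) == -1)
        return -1;

    nattch = (unsigned long) ds.shm_nattch;  /* includes the server itself */

    for (i = 0; i < nattch; ++i)
        if (msgsnd(msqid, &msg, sizeof(msg.mtext), 0) == -1)
            return -1;
    return 0;
}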

Now we have only one queue and several messages.
This is still bad in terms of CPU time used on the server side,
since its load grows with the number of clients.

Now imagine that the server sends only a single message
whose mtext is an integer. The integer value is the
number of clients attached to the BB.

Now when a client receives the message, it looks at the mtext
integer value, decrements the value AND sends the
message back to the queue. When a client gets a message
whose mtext is 1, it does not send the message back to the queue.
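
A client-side sketch of that daisy chain (untested, hypothetical layout:
the counter travels as a decimal string in mtext and the message uses one
well-known mtype):

#include <stdio.h>
#include <stdlib.h>
#include <sys/msg.h>

/* Hypothetical daisy-chained notification: mtext carries a counter
 * initialised by the server to the number of attached clients.     */
struct bb_chain_msg {
    long mtype;            /* single well-known type, here 1     */
    char mtext[16];        /* decimal "remaining clients" counter */
};

int client_wait_notification(int msqid)
{
    struct bb_chain_msg msg;
    long remaining;

    if (msgrcv(msqid, &msg, sizeof(msg.mtext), 1 /* mtype */, 0) == -1)
        return -1;

    remaining = strtol(msg.mtext, NULL, 10);
    if (remaining > 1) {
        /* Pass the notification on to the next client in the chain. */
        snprintf(msg.mtext, sizeof(msg.mtext), "%ld", remaining - 1);
        if (msgsnd(msqid, &msg, sizeof(msg.mtext), 0) == -1)
            return -1;
    }
    /* remaining == 1: we are the last client, the chain stops here. */
    return 0;
}

Note that this naive version does nothing to prevent a client that loops
straight back into msgrcv() from picking up the very copy it has just
re-posted, which is exactly what varying mtype, below, is about.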

Now a last variant (maybe the good one): you do not
make mtext vary but mtype, in order to
avoid re-receiving the message you have just sent.
The issue here is how to make mtype vary on receive
for each process, even if some processes die.
I am convinced (but not sure yet) that we may make
mtype vary modulo the number of attached processes
in order to be sure every process receives the message exactly once.

Nevertheless, you get the ideas. I love the daisy-chain
idea very much because it distributes the load
over the clients, the price being a (perhaps) greater delay because
of the chain itself.

--
Erk



