
Re: [Chicken-users] Enterprise and scalability


From: Alaric Snell-Pym
Subject: Re: [Chicken-users] Enterprise and scalability
Date: Mon, 26 Jan 2009 12:38:57 +0000


On 26 Jan 2009, at 8:38 am, Jörg F. Wittenberger wrote:

>> I need to read more about askemos - my day job is writing a replicated
>> database (in C, alas). We have a local database on each node, which
>> many processes can read but only one writes;

> So it's a typical master-slave setup. That's going to be faster on writes,
> since the master can always run ahead.

Within the domain of each node, yes, but the "master" on each node is just a peer to the masters on every other node - so although within a node, the death of the master daemon ruins everything, there's no single point of hardware failure.
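The convergence property described above (peer "masters" on every node, no single point of hardware failure) is essentially state-machine replication: as long as every node applies the same totally-ordered write log, all local copies end up identical. A minimal sketch of that idea, with hypothetical names (this is not the actual daemon's code, just an illustration):

```python
# Hypothetical sketch: each node's single writer applies the same
# totally-ordered log of operations, so all peers converge on the
# same state; local reader processes just read the node's own copy.

class Node:
    def __init__(self, name):
        self.name = name
        self.db = {}          # local database, readable by many processes

    def apply(self, op):
        """Called only by this node's single writer, in log order."""
        key, value = op
        self.db[key] = value

    def read(self, key):
        return self.db.get(key)

# A totally-ordered log is what a reliable multicast layer provides.
log = [("a", 1), ("b", 2), ("a", 3)]
nodes = [Node("n1"), Node("n2"), Node("n3")]
for op in log:
    for node in nodes:        # every peer applies every op, same order
        node.apply(op)

# All replicas agree, even though each only touched its local db.
assert all(n.read("a") == 3 for n in nodes)
```

Determinism is the crux: if any node applied ops out of order, or applied a non-deterministic op, the replicas would silently diverge.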

> But from a security point of view, it is as vulnerable as the master is.

Indeed - we're sticking with the trust-your-nodes approach; even if we didn't want to, it'd be forced on us by Spread, because:

>> (We use Spread to handle the reliable multicasting for us)

> When I came across Spread, I considered a switch. But there were
> several roadblocks (which were at least halfway solved for Askemos at
> that time): how to handle joins of new hosts (to my knowledge Spread
> will require some reconfiguration),

...these days Spread supports on-the-fly reconfiguration, but its main weakness is a lack of application-level flow control. Sure, it'll block writers if the *network* starts to drop packets due to congestion, but if a receiving process doesn't pull messages out of its connection to Spread fast enough, Spread will kill it (so it can report it dead, then report that all the waiting messages were delivered to everyone who's not dead), rather than blocking writes! Sigh.
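The workaround for that "slow receiver gets killed" behaviour is to add the missing flow control at the application layer: keep a dedicated thread draining the transport socket immediately (so the daemon never looks dead to the membership layer) into a bounded in-process queue. A hypothetical sketch of the pattern (the transport is faked with an iterator, not the real Spread API):

```python
# Hypothetical sketch of application-level flow control: a drain
# thread pulls messages off the transport as fast as they arrive,
# buffering them in a bounded queue for the (possibly slow) consumer.
# When the queue fills, the drain thread blocks on put() - the point
# where real code would signal senders to back off.

import queue
import threading

def make_receiver(transport_msgs, maxsize=4):
    buf = queue.Queue(maxsize=maxsize)

    def drain():
        for msg in transport_msgs:   # stand-in for reading the socket
            buf.put(msg)             # blocks when the consumer lags
        buf.put(None)                # sentinel: stream finished

    threading.Thread(target=drain, daemon=True).start()
    return buf

buf = make_receiver(iter(range(10)))
received = []
while (msg := buf.get()) is not None:
    received.append(msg)
```

The bounded queue decouples "acknowledged to the transport" from "processed by the application", which is exactly the distinction the complaint above is about.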

However, Spread is nice in that it's bandwidth-efficient due to its use of true multicasting. We can run with ten replicas very nearly as fast as with just two!
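The arithmetic behind that claim: with unicast replication the sender transmits one copy per peer, so outbound bandwidth grows linearly with replica count, whereas with true IP multicast the sender puts one copy on the wire regardless. A trivial illustration (hypothetical function, just the cost model):

```python
# Sender-side bytes on the wire for one replicated message.
# Unicast: one copy per peer. True multicast: one copy, full stop.

def sender_bytes(msg_size, replicas, multicast):
    copies = 1 if multicast else replicas - 1
    return msg_size * copies

# With multicast, ten replicas cost the sender no more than two:
assert sender_bytes(1500, 10, multicast=True) == sender_bytes(1500, 2, multicast=True)
# With unicast, the same message costs 9x the bandwidth at ten replicas:
assert sender_bytes(1500, 10, multicast=False) == 9 * 1500
```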


> /Jörg


ABS

--
Alaric Snell-Pym
Work: http://www.snell-systems.co.uk/
Play: http://www.snell-pym.org.uk/alaric/
Blog: http://www.snell-pym.org.uk/?author=4





