[Spread-users] Ordering

P. Krishna pkrishna at revasystems.com
Tue Jun 22 18:25:47 EDT 2004


So, is it correct that even if a group is not using Agreed or Safe
delivery, but only a simple basic/FIFO delivery method, the packets for
that group still have to traverse the "logical" ring and incur
latencies similar to those of Agreed/Safe messages?
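
For reference, here is a minimal sketch of how the ordering level is
selected in the Spread C API - it is chosen per message via the service
type passed to SP_multicast, not fixed per group. The daemon address,
client name, group name and payload below are made up:

    #include <string.h>
    #include "sp.h"

    int main(void)
    {
        mailbox     mbox;
        char        private_group[MAX_GROUP_NAME];
        const char *msg = "hello";

        /* Hypothetical daemon address and client name. */
        if (SP_connect("4803@localhost", "client1", 0, 1,
                       &mbox, private_group) != ACCEPT_SESSION)
            return 1;

        /* FIFO ordering: only per-sender order is guaranteed. */
        SP_multicast(mbox, FIFO_MESS, "test-group", 0,
                     (int) strlen(msg), msg);

        /* AGREED ordering: total order across all senders. */
        SP_multicast(mbox, AGREED_MESS, "test-group", 0,
                     (int) strlen(msg), msg);

        SP_disconnect(mbox);
        return 0;
    }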

Krishna

On Tue, 2004-06-22 at 14:18, John Schultz wrote:
> The token based protocol of Spread 3 does incur some extra latency 
> because a daemon only sends a message on the network when it has the 
> token.  In the worst case, the daemon just released the token and has to 
> wait for it to traverse the entire ring before it can send any more 
> messages.  Also, if your network is lossy, then delivery may be delayed 
> at recipients (at least for AGREED msgs) until earlier messages are 
> recovered and delivered to maintain a local total order.
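
[Back-of-envelope illustration of that worst case, with made-up numbers:
on a ring of 5 daemons, if the per-hop network latency is ~0.1ms and
each daemon holds the token for ~0.2ms to send its own traffic, a full
token rotation is roughly 5 x (0.1 + 0.2) = 1.5ms, which is the extra
delay a daemon can see if it has just released the token.]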
> 
> Also, remember that with Spread you have a client-server architecture. 
> So when you send a message across the network the one way latency is the 
> total time it takes for: sender -> local Spread daemon -> remote Spread 
> daemon -> receiver.
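
[A rough way to see this path in practice is to time a message sent to a
group the sender itself has joined, so it travels client -> daemon ->
client; with sender and receiver on different machines the daemon-to-
daemon leg is added on top of this. A minimal sketch, assuming a made-up
daemon address and group name, follows:

    #include <stdio.h>
    #include <string.h>
    #include <sys/time.h>
    #include "sp.h"

    static double now_ms(void)
    {
        struct timeval tv;
        gettimeofday(&tv, NULL);
        return tv.tv_sec * 1000.0 + tv.tv_usec / 1000.0;
    }

    int main(void)
    {
        mailbox mbox;
        char    private_group[MAX_GROUP_NAME];
        char    sender[MAX_GROUP_NAME];
        char    groups[8][MAX_GROUP_NAME];
        char    buf[1400];
        service svc;
        int16   mtype;
        int     num_groups, endian;
        double  t0;

        memset(buf, 0, sizeof(buf));
        if (SP_connect("4803@localhost", "probe", 0, 1,
                       &mbox, private_group) != ACCEPT_SESSION)
            return 1;
        SP_join(mbox, "latency-test");

        t0 = now_ms();
        SP_multicast(mbox, AGREED_MESS, "latency-test", 0, 100, buf);

        /* Skip membership messages; stop at our own data message. */
        do {
            svc = 0;
            SP_receive(mbox, &svc, sender, 8, &num_groups, groups,
                       &mtype, &endian, (int) sizeof(buf), buf);
        } while (!Is_regular_mess(svc));

        printf("round trip through the daemon: %.3f ms\n", now_ms() - t0);
        SP_disconnect(mbox);
        return 0;
    }
]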
> 
> This multi-step architecture incurs some additional latency even when 
> the sender and receiver are local w.r.t. their Spread daemons. The best 
> way to reduce this type of latency would probably be to tune your 
> kernels to swap processes and respond to I/O requests faster somehow.
> 
> As to your main question, I wouldn't expect CPU power to affect latency 
> numbers for this protocol as much as the size of your ring, the 
> latency/lossiness of your network, the tuning of Spread and the tuning 
> of your kernel, probably in that order of importance.
> 
> P. Krishna wrote:
> > Hello Yair,
> > Thank you for the prompt response. I read the 2-pager.
> > Spread maintains only one ring of spread daemons, and the seq# is global
> > across all groups.
> > 
> > Looking at Figure 1.4, which plots message latency versus the number
> > of groups: if I look at the curve for 10000B packets, the latency is
> > about 2.8ms. The transmission latency of such a packet over a 100Mb
> > link is 0.8ms - what is the breakdown of the remaining 2ms? How much
> > of it is due to the CPU (a P3 in this experiment) and how much is due
> > to the ordering protocol - i.e., token-based access?
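
[As a sanity check on the 0.8ms figure: 10000 bytes x 8 = 80,000 bits,
and 80,000 bits / 100,000,000 bits/s = 0.8ms of raw wire time, ignoring
framing and IP/UDP overhead. The remaining ~2ms would then be split
among client-daemon IPC on both ends, waiting for the token, and
protocol processing in the daemons.]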
> > 
> > I am just trying to gauge the performance we could expect from a
> > different setup - for example, faster CPUs while continuing to use
> > 100Mb/s links.
> > 
> > Thanks,
> > Krishna
> > 
