jonathan at cnds.jhu.edu
Tue Jun 22 18:58:28 EDT 2004
I'm not sure what you are defining as the "basic" delivery method (i.e.
whether it is different from FIFO), as there is an unordered service in
Spread called RELIABLE that does not guarantee ordering at all. If you mean
that FIFO messages have the same cost as Agreed, then that is correct with
this version of Spread. Both of them have LESS latency than SAFE messages.
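To summarize the relative guarantees and costs just described (an illustrative sketch of my own, not part of the Spread API; the "latency rank" is only the relative ordering stated in this thread, not a measured number):

```python
# Spread 3 service types as described above. "latency_rank" is a
# relative ordering only: RELIABLE < FIFO == AGREED < SAFE.
SERVICES = {
    "RELIABLE": {"ordering": "none (reliable, unordered)", "latency_rank": 1},
    "FIFO":     {"ordering": "per-sender order",           "latency_rank": 2},
    "AGREED":   {"ordering": "total order",                "latency_rank": 2},
    "SAFE":     {"ordering": "total order + stability",    "latency_rank": 3},
}

# FIFO and Agreed cost the same in this version of Spread;
# both have less latency than Safe.
assert SERVICES["FIFO"]["latency_rank"] == SERVICES["AGREED"]["latency_rank"]
assert SERVICES["AGREED"]["latency_rank"] < SERVICES["SAFE"]["latency_rank"]
```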
One other detail is that NO messages 'traverse the "logical" ring'. All
messages in Spread are multicast or broadcast as a single UDP packet that
is received by all daemons (in a single segment -- multiple-segment
configurations are more complicated, but the same principle holds). The
only message that travels in a 'ring' around all of the daemons is the
token packet itself (which only holds a few counters and any requested
retransmission ids caused by packet loss). The reason the latency is
higher with Agreed is that each daemon can only send new messages/packets
on the local Ethernet when it holds the token, so it has to wait while the
token rotates among all of the daemons. Once it gets the token, it sends
its share of messages (regulated by the flow control) as UDP mcast/bcasts,
and all of the other daemons receive them immediately (the latency of a
UDP packet to cross the Ethernet).
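A back-of-the-envelope model of that token wait (my own sketch with made-up numbers, not code from Spread): the worst case is that a daemon just released the token and must wait one full rotation before it can send again; on average it waits about half a rotation.

```python
def token_wait_ms(n_daemons: int, hop_ms: float, worst_case: bool = True) -> float:
    """Time a daemon waits for the token before it may send.

    hop_ms is the assumed per-hop time for the token to pass between
    two adjacent daemons (an illustrative parameter, not a Spread
    constant). Worst case is a full rotation of the ring; the average
    wait is roughly half a rotation.
    """
    rotation = n_daemons * hop_ms
    return rotation if worst_case else rotation / 2.0

# Example: 5 daemons, 0.2 ms per hop.
print(token_wait_ms(5, 0.2))                    # worst case: 1.0 ms
print(token_wait_ms(5, 0.2, worst_case=False))  # average: 0.5 ms
```

This is why ring size matters more than CPU speed for latency, as noted later in the thread: the wait grows linearly with the number of daemons.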
I hope that answers your question.
On Tue, Jun 22, 2004 at 06:25:47PM -0400, P. Krishna wrote:
> So, is it correct that even if a group is not using Agreed or Safe
> delivery but a simple basic/fifo delivery method, those pkts for
> the group still have to traverse the "logical" ring and incur
> latencies similar to those of Agreed/Safe messages?
> On Tue, 2004-06-22 at 14:18, John Schultz wrote:
> > The token based protocol of Spread 3 does incur some extra latency
> > because a daemon only sends a message on the network when it has the
> > token. In the worst case, the daemon just released the token and has to
> > wait for it to traverse the entire ring before it can send any more
> > messages. Also, if your network is lossy, then delivery may be delayed
> > at recipients (at least for AGREED msgs) until earlier messages are
> > recovered and delivered to maintain a local total order.
> > Also, remember that with Spread you have a client-server architecture.
> > So when you send a message across the network the one way latency is the
> > total time it takes for: sender -> local Spread daemon -> remote Spread
> > daemon -> receiver.
> > This multi-step architecture incurs some additional latency even when
> > the sender and receiver are local w.r.t. their Spread daemons. The best
> > way to reduce this type of latency would probably be to tune your
> > kernels to swap processes and respond to I/O requests faster somehow.
> > As to your main question, I wouldn't expect CPU power to affect latency
> > numbers for this protocol as much as the size of your ring, the
> > latency/lossiness of your network, the tuning of Spread and the tuning
> > of your kernel, probably in that order of importance.
> > P. Krishna wrote:
> > > Hello Yair,
> > > Thank you for the prompt response. I read the 2 pager.
> > > Spread maintains only one ring of spread daemons, and the seq# is global
> > > across all groups.
> > >
> > > Looking at Figure 1.4, which plots mesg latency versus number of groups:
> > > if I look at the curve for 10000B packets, the latency is about 2.8ms.
> > > The xmission latency due to pkt over 100Mb link is 0.8ms - what is the
> > > breakup of the remaining 2ms? How much of it is due to CPU (P3 in this
> > > expt) and how much is due to the ordering protocol - i.e., token based
> > > access?
> > >
> > > I am just trying to get a gauge of performance expectation if we use a
> > > different setup for example, faster CPUs but continue using 100Mb/s
> > > links.
> > >
> > > Thanks,
> > > Krishna
> > >
> Spread-users mailing list
> Spread-users at lists.spread.org
Jonathan R. Stanton jonathan at cs.jhu.edu
Dept. of Computer Science
Johns Hopkins University