[Spread-users] Connecting spread daemons with different segment configurations
John Lane Schultz
jschultz at spreadconcepts.com
Thu Jan 29 11:37:02 EST 2015
> We just tried to define more than 20 segments in the spread configuration, but this is not valid due to the following ...
I think you meant:
#define MAX_SEGMENTS 20
Line 66 of daemon/spread_params.h
> First of all, it seems a bit sloppy to me that there is no "#ifndef MAX_PROC_NAME", so that one could define this macro variable at configuration time with "./configure".
It was never intended for MAX_SEGMENTS to be such an easily reconfigurable constant, for the following reasons:
> On the other hand I want to know, why have you chosen such a small number?
Three main reasons.
First, it was expected that in LAN-type environments supporting broadcast and multicast, this network capability would be exploited by putting all such daemons in the same segment, reducing the need for many segments. So, multiple segments were really meant to support WAN-type deployments, not degenerate LAN deployments. Getting a membership algorithm to converge across a WAN with more and more segment representatives in each-to-all communication requires exponentially more communication globally, and typically requires ever less sensitive (i.e., longer) timeouts to allow convergence without membership “flapping” causing needless membership reformations.
Second, as the membership algorithm is currently implemented, all membership information for a potential ring needs to fit within a single Spread packet, which is typically 1472 bytes minus overhead*. Space is used on the token for each segment, and in the worst case there could be an entry for each segment and every daemon’s ID. So there is a space constraint on how many of these things can be supported without reworking this mechanism. Modifying this and similar membership constants without being aware of such constraints is dangerous and likely to fail.
Finally, we try to discourage people from using one-daemon-per-segment configurations unless absolutely required, because of the inefficient communication: each daemon needs to send each data message individually to each segment leader.
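For contrast, in spread.conf terms the preferred and discouraged layouts look roughly like this (host names and addresses below are placeholders):

```
# Preferred: several daemons sharing one broadcast/multicast segment
Spread_Segment  192.168.1.255:4803 {
    node1   192.168.1.1
    node2   192.168.1.2
    node3   192.168.1.3
}

# Discouraged: one daemon per segment, repeated many times over
Spread_Segment  10.0.1.255:4803 {
    node4   10.0.1.1
}
```

With the first layout a single broadcast reaches node1 through node3; with the second, every single-daemon segment adds another segment leader that must be addressed individually.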
* - This was true before Spread 4.4. In Spread 4.4, we allow the most problematic membership packets to grow up to 64KiB by leaning on IP de/fragmentation. If your membership info exceeds a single MTU, a warning will be logged and any underlying network loss will be magnified for these packets, but in low-loss environments it may well go unnoticed.
> In your manual you describe a tested configuration of 60 daemons (which is totally OK for us).
Yes, we have seen even larger deployments than that. However, all such deployments, so far as I know, exploit the multi-daemon per segment (i.e. - network multicast or broadcast) functionality for both performance and capability reasons (i.e. - the above constraints).
> Are there any big disadvantages of huge segment setups and what happens if I increase the number of daemons?
That depends on what you mean by “huge segment setups.”
If you mean one daemon per segment and many such segments, then, yes, there are big disadvantages to such deployments versus multiple-daemon-per-segment configurations. The main problems, which I alluded to above, are the communication costs and membership stability (in particular during membership formation).
On the other hand, adding more daemons to a segment has very low cost as the main effect is simply to extend the control token ring by one more member. Even so, token ring protocols don’t scale up well to huge numbers of participants and trying to push a Spread configuration towards or even beyond 100 daemons may make the system prone to daemon membership churn, even in a LAN environment, without tailoring the membership timeouts appropriately.
> My first guess would be that the communication overhead increases exponentially, due to the fact that the spread daemons always try to test their communication partners.
During membership formation, yes, globally that communication scales exponentially with the number of segments. Once a ring is formed, data communication (cpu overhead) scales linearly with the number of segments while the primary control communication scales linearly with the number of alive daemons per token ring cycle.
> So is it safe to just increase the "MAX_PROC_NAME" variable?
It may well work up to some point, depending on exactly how many daemons and segments you try to use and how many daemons are active at any given time, until you trip over the packet space constraint that I discussed above (pre-Spread 4.4) or underlying network loss gets magnified too much (Spread 4.4 and later).
John Lane Schultz
Spread Concepts LLC
Cell: 443 838 2200
On Jan 29, 2015, at 10:21 AM, Timo Korthals <tkorthals at cit-ec.uni-bielefeld.de> wrote:
Dear spread users,
we are trying to set up a huge (< 60) spread daemon network.
We just tried to define more than 20 segments in the spread configuration, but this is not valid due to the following line in
#define MAX_PROC_NAME 20 /* largest possible size of process name of daemon */
First of all, it seems a bit sloppy to me that there is no "#ifndef MAX_PROC_NAME", so that one could define this macro variable at configuration time with "./configure".
On the other hand I want to know, why have you chosen such a small number?
In your manual you describe a tested configuration of 60 daemons (which is totally OK for us).
Are there any big disadvantages of huge segment setups and what happens
if I increase the number of daemons?
My first guess would be that the communication overhead increases exponentially, due to the fact that the spread daemons always try to test their communication partners.
In fact, for our setup we only want to send a really small amount of data; we mainly want the advantage of the connectivity.
So is it safe to just increase the "MAX_PROC_NAME" variable?
Timo Korthals, M.Sc.
AG Kognitronik & Sensorik
Exzellenzcluster Cognitive Interaction Technology (CITEC)
Inspiration 1 (Zehlendorfer Damm 199)
33619 Bielefeld - Germany
Office : 3.037
Phone : +49 521 106-67368
eMail : tkorthals at cit-ec.uni-bielefeld.de