[Spread-users] spread daemon hangs after running for a few days

Jonathan Stanton jonathan at cnds.jhu.edu
Wed Mar 12 12:35:48 EDT 2008


Hi,

The monitor output is very interesting. A few points jump out at me 
immediately.

1) You appear to be running a mix of 3.17.3 and 3.17.4 daemons (the 
version is listed in the first line of each monitor report). I'd 
recommend upgrading all of them to 3.17.4 -- not only because of the 
bugfixes in that release, but because we don't recommend running mixed 
groups of daemons at different versions: changes between versions may 
break such setups, and we don't test for 'inter-version' compatibility. 
Sometimes it works, but it often doesn't. 

2) Your overall traffic levels are low (about 3000 packets in 74000 
seconds), so most of the time the daemons should be relatively idle (in 
CPU and network activity).

3) A few nodes, such as as-phl-cbiis1, are generating a *lot* of
retransmissions: 1251 b retrans out of roughly 3000 finally delivered
packets, and as-ny-cbapps2 with about 600 / 3000. ('b retrans' means a
broadcast retransmission: more than one of the daemons in the
configuration missed the packet, so we rebroadcast it.) So about half
the time a packet is lost and has to be resent -- see the quick
calculation below.
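
As a concrete illustration of how I'm reading those counters, here is a 
throwaway sketch (not part of Spread; it just plugs in the "b retrans" 
and "Deliver Pk" figures for as-phl-cbiis1 from the monitor report 
quoted further down):

/* Back-of-the-envelope check of the rebroadcast ratio, using the
 * counters from the spmonitor report for as-phl-cbiis1 quoted below. */
#include <stdio.h>

int main(void)
{
    double b_retrans = 1254.0;   /* "b retrans": broadcast retransmissions */
    double delivered = 2944.0;   /* "Deliver Pk": packets finally delivered */

    /* Roughly how often a delivered packet also had to be rebroadcast. */
    printf("rebroadcasts per delivered packet: %.0f%%\n",
           100.0 * b_retrans / delivered);
    return 0;
}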

So I would definitely look into the loss rates on the network and see if 
they can be improved. Packet loss by itself should not cause a daemon to 
'hang' or freeze, except in certain very pathological states where the 
same packets a daemon needs in order to synchronize are continually 
being lost while other packets do get through (i.e. the daemon 
retransmits, the retransmission is lost again, but other traffic keeps 
flowing). If nothing at all gets through, the daemon will determine that 
a network fault has occurred and will partition itself into a separate 
configuration.

If you are able to run the monitor command when one of the daemons is in 
the 'hung' state, that would be very helpful in determining what is 
happening.
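
As an aside, if you want something scriptable that notices when a daemon 
stops responding to clients, a small probe along these lines can work. 
This is only a sketch built on the standard Spread C client API (sp.h); 
the daemon address "4803@localhost" and the group name "probe-group" are 
placeholders you would adapt to your setup:

/* Minimal liveness probe: connect to the local daemon, join a scratch
 * group, multicast one message to it, and wait for that message to come
 * back.  A healthy daemon returns it almost immediately; a daemon stuck
 * in the state described above never delivers it. */
#include <stdio.h>
#include <sp.h>

int main(void)
{
    mailbox mbox;
    char    private_group[MAX_GROUP_NAME];
    char    sender[MAX_GROUP_NAME];
    char    groups[4][MAX_GROUP_NAME];
    char    buf[128];
    service srvc;
    int16   mtype;
    int     num_groups, endian, ret;

    ret = SP_connect("4803@localhost", "probe", 0, 1, &mbox, private_group);
    if (ret != ACCEPT_SESSION) {
        SP_error(ret);                  /* cannot even open a session */
        return 1;
    }

    SP_join(mbox, "probe-group");
    SP_multicast(mbox, AGREED_MESS, "probe-group", 0, 4, "ping");

    /* Skip the membership notification for our own join, then wait for
     * the regular "ping" message to be delivered back to us. */
    do {
        srvc = 0;
        ret  = SP_receive(mbox, &srvc, sender, 4, &num_groups, groups,
                          &mtype, &endian, sizeof(buf), buf);
    } while (ret >= 0 && !Is_regular_mess(srvc));

    printf(ret >= 0 ? "daemon responded\n" : "receive failed\n");
    SP_disconnect(mbox);
    return ret >= 0 ? 0 : 1;
}

Run it against each daemon (compiled against libspread, or whatever your 
local library name is) under an external timeout; if it hangs on the 
'hung' node but returns promptly elsewhere, that confirms the daemon has 
stopped delivering to clients.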

Cheers,

Jonathan

On Tue, Mar 11, 2008 at 03:04:11PM -0700, chanh hua wrote:
> John, 
> 
> When a daemon goes into a bad state, it doesn't just stutter, it stops working completely; however, the other daemons in the cluster still function correctly.
> 
> I did some further testing, and the massive drops only occur between these two servers.  If I run spsend and sprecv between either of these servers and a machine not defined in the segment, the drops come down to the amount you specified, so the drops are somehow confined to this segment.  I will need to do some more digging.
> 
> Looking at the spmonitor output, the ratio of "retrans" to "sent pack" seems high.  Do you think there is a network issue or a configuration problem?
> 
> 
> 
> ============================
> Status at as-phl-cbiis4 V 3.17. 3 (state 1, gstate 1) after 74354 seconds :
> Membership  :  4  procs in 2 segments, leader is as-ny-cbapps2
> rounds   :   42454      tok_hurry :   38124     memb change:       9
> sent pack:       6      recv pack :    3040     retrans    :       0
> u retrans:       0      s retrans :       0     b retrans  :       0
> My_aru   :    2745      Aru       :    2745     Highest seq:    2745
> Sessions :       0      Groups    :      33     Window     :      60
> Deliver M:    1595      Deliver Pk:    3048     Pers Window:      15
> Delta Mes:    1595      Delta Pack:    2745     Delta sec  :   74354
> ==================================
> 
> Monitor>
> ============================
> Status at as-phl-cbiis1 V 3.17. 4 (state 1, gstate 1) after 73908 seconds :
> Membership  :  4  procs in 2 segments, leader is as-ny-cbapps2
> rounds   :   21796      tok_hurry :   37928     memb change:       4
> sent pack:    1708      recv pack :    1137     retrans    :    1291
> u retrans:      37      s retrans :       0     b retrans  :    1254
> My_aru   :    2745      Aru       :    2745     Highest seq:    2745
> Sessions :       9      Groups    :      56     Window     :      60
> Deliver M:    1595      Deliver Pk:    2944     Pers Window:      15
> Delta Mes:       0      Delta Pack:       0     Delta sec  :    -446
> ==================================
> 
> Monitor>
> ============================
> Status at as-ny-cbsql3 V 3.17. 3 (state 1, gstate 1) after 74106 seconds :
> Membership  :  4  procs in 2 segments, leader is as-ny-cbapps2
> rounds   :   42313      tok_hurry :   37997     memb change:       6
> sent pack:       5      recv pack :    3072     retrans    :       0
> u retrans:       0      s retrans :       0     b retrans  :       0
> My_aru   :    2745      Aru       :    2745     Highest seq:    2745
> Sessions :       0      Groups    :      19     Window     :      60
> Deliver M:    1595      Deliver Pk:    3040     Pers Window:      15
> Delta Mes:       0      Delta Pack:       0     Delta sec  :     198
> ==================================
> 
> Monitor>
> ============================
> Status at as-ny-cbapps2 V 3.17. 4 (state 1, gstate 1) after 73982 seconds :
> Membership  :  4  procs in 2 segments, leader is as-ny-cbapps2
> rounds   :   21797      tok_hurry :   37970     memb change:       4
> sent pack:    1145      recv pack :    1714     retrans    :     739
> u retrans:     109      s retrans :      37     b retrans  :     593
> My_aru   :    2745      Aru       :    2745     Highest seq:    2745
> Sessions :      20      Groups    :      56     Window     :      60
> Deliver M:    1595      Deliver Pk:    3028     Pers Window:      15
> Delta Mes:       0      Delta Pack:       0     Delta sec  :    -124
> ==================================
> 
> 
> John Schultz <jschultz at spreadconcepts.com> wrote:
> 
> On Tue, 11 Mar 2008, chanh hua wrote:
> 
> > That brings me to my question: what network property is the misses 
> > data supposed to tell us?
> 
> The misses data tells you how many of the sent packets the receiver missed 
> (i.e. - didn't receive).  From your report it looks like the sender sent 
> 10000 packets but the receiver only heard 1999 (10000 - 8001) of them 
> before it got the last packet.  If correct, then that would be about an 
> 80% loss rate for your configuration.  A typical loss rate for LAN 
> broad/multicast is well below 1%.
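> 
> To spell out that arithmetic, here is a throwaway sketch using only the 
> figures from your report:
> 
> /* Loss rate implied by the spsend/sprecv report: 10000 packets sent,
>  * "misses" of 8001 reported by the receiver. */
> #include <stdio.h>
> 
> int main(void)
> {
>     int sent = 10000, missed = 8001;
> 
>     printf("received %d of %d packets -> %.0f%% loss\n",
>            sent - missed, sent, 100.0 * missed / sent);
>     return 0;
> }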
> 
> > The explanation he gave for why we might have observed all
> > these misses was b/c the broadcast address used contains all
> > network machines(i.e. desktops, printers, etc...) and not
> > just servers and most of those machines ignore broadcast.
> > But since he doesn't know what these results mean, he can't
> > say for sure.
> 
> He is correct that broadcast will bother (i.e. - potentially increase 
> load) all the machines on the associated subnet.  If you instead use 
> multicast, then either your switch/router or your NICs should filter out 
> the packets before an interrupt is generated on non-participating 
> machines.  Multicast is preferable; however, occasionally some switches 
> and routers don't implement multicast well or their multicast is 
> misconfigured.  In such situations, broadcast sometimes works better due 
> to its simplicity.
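> 
> To make the filtering point concrete, the difference comes from the 
> explicit group join that multicast receivers perform.  A generic POSIX 
> sketch (not Spread's own code; the group address and port are 
> placeholders):
> 
> /* Generic illustration: a host only gets multicast traffic delivered
>  * up its stack after it joins the group (the join is also what
>  * IGMP-aware switches use to decide where to forward).  Broadcast
>  * frames reach every host on the subnet regardless. */
> #include <arpa/inet.h>
> #include <netinet/in.h>
> #include <string.h>
> #include <sys/socket.h>
> #include <unistd.h>
> 
> int main(void)
> {
>     int s = socket(AF_INET, SOCK_DGRAM, 0);
> 
>     struct sockaddr_in addr;
>     memset(&addr, 0, sizeof(addr));
>     addr.sin_family      = AF_INET;
>     addr.sin_addr.s_addr = htonl(INADDR_ANY);
>     addr.sin_port        = htons(4803);
>     bind(s, (struct sockaddr *)&addr, sizeof(addr));
> 
>     /* Tell the kernel (and, via IGMP, the network) that this host
>      * wants traffic sent to the 225.1.1.10 group. */
>     struct ip_mreq mreq;
>     mreq.imr_multiaddr.s_addr = inet_addr("225.1.1.10");
>     mreq.imr_interface.s_addr = htonl(INADDR_ANY);
>     setsockopt(s, IPPROTO_IP, IP_ADD_MEMBERSHIP, &mreq, sizeof(mreq));
> 
>     close(s);
>     return 0;
> }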
> 
> > If this is an issue, would using a multicast address be better? 
> > However, when i used a multicast address for the test, i still saw a lot 
> > of misses.
> 
> Typically, broadcast should not increase loss versus multicast unless your 
> switch/router is biased against broadcast for some reason.
> 
> > I talked to the network admin, and he's not seeing any drops
> > between the servers on the segments.  And he confirmed the
> > broadcast address I used was correct.
> 
> Well, it definitely seems like something is wrong from your reports.  Try 
> using spmonitor to view the status of the daemons as they are running. 
> Like I said, if you see their retrans counts going up by more than a 
> couple a second, then something is probably wrong in your network.
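> 
> In other words, it is the growth rate of "retrans" between two spmonitor 
> snapshots that matters, not the absolute total (which accumulates since 
> the daemon started).  A throwaway sketch of that calculation, with 
> made-up snapshot values:
> 
> /* Retransmission rate between two spmonitor snapshots.  The snapshot
>  * values and interval here are made up -- substitute your own readings. */
> #include <stdio.h>
> 
> int main(void)
> {
>     long retrans_t0 = 1291;     /* "retrans" in the first snapshot  */
>     long retrans_t1 = 1540;     /* "retrans" in the second snapshot */
>     long seconds    = 60;       /* time between the two snapshots   */
> 
>     printf("retrans/sec = %.2f\n",
>            (double)(retrans_t1 - retrans_t0) / seconds);
>     return 0;
> }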
> 
> > Would having a lot of drops cause a daemon to be unresponsive?
> 
> Theoretically, it could.  If the daemons got stuck in a loop of trying to 
> establish a membership due to intermittent / flaky communications with 
> other daemons, then the system would appear to freeze as the daemons stop 
> processing client communications in this state.  Usually, the "freeze"
> wouldn't persist forever but rather you would see lots of daemon 
> membership changes and progress would stutter forward.
> 
> Cheers!
> 
> ---
> John Schultz
> Spread Concepts
> Phn: 443 838 2200
> 
> _______________________________________________
> Spread-users mailing list
> Spread-users at lists.spread.org
> http://lists.spread.org/mailman/listinfo/spread-users


-- 
-------------------------------------------------------
Jonathan R. Stanton         jonathan at cs.jhu.edu
Dept. of Computer Science   
Johns Hopkins University    
-------------------------------------------------------



