[Pacemaker] two nodes fenced when drbd link fails

Ivan Coronado icoronado at epcge.com
Fri May 14 11:24:46 EDT 2010


Oh no, sorry, my mistake, it doesn't work.... :(
 
 
Ivan 

________________________________

From: Ivan Coronado [mailto:icoronado at epcge.com] 
Sent: Friday, May 14, 2010 9:02
To: The Pacemaker cluster resource manager
Subject: Re: [Pacemaker] two nodes fenced when drbd link fails


Thanks! 
 
It works!!!
 
Ivan 

________________________________

From: Vadym Chepkov [mailto:vchepkov at gmail.com] 
Sent: Friday, May 14, 2010 4:03
To: The Pacemaker cluster resource manager
Subject: Re: [Pacemaker] two nodes fenced when drbd link fails



On May 13, 2010, at 1:37 PM, Ivan Coronado wrote:


	Hello everybody,
	 
	I have a problem with the corosync.conf setup. I have a drbd
service running on eth3, and the general network and the stonith
device (iDRAC6) on eth0. If I unplug eth3 to simulate a network
failure, both nodes are fenced (first the slave, then the master). If
I only leave ringnumber 0 in the corosync.conf file, I don't have this
problem. Is this normal operation?
	 
	Here is the section of corosync.conf where I have the problem.
Thanks for the help.
	 
	        rrp_mode: active
	        interface {
	                # eth0
	                ringnumber: 0
	                bindnetaddr: 200.200.201.0
	                mcastaddr: 226.94.1.1
	                mcastport: 5405
	        }
	        interface {
	                # eth3
	                ringnumber: 1
	                bindnetaddr: 192.168.2.0
	                mcastaddr: 226.94.1.2
	                mcastport: 5406
	        }
	
	-----
	Ivan



I read on the openais at lists.osdl.org list that setting the mcast
ports at least two apart helps (5405, 5407).

Vadym
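
For reference, applying that suggestion to the interface section
quoted above would look something like the sketch below (addresses
taken from Ivan's original corosync.conf). The corosync.conf(5) man
page notes that each ring binds two UDP ports, mcastport for receives
and mcastport - 1 for sends, so adjacent ports such as 5405 and 5406
collide on port 5405:

        rrp_mode: active
        interface {
                # eth0
                ringnumber: 0
                bindnetaddr: 200.200.201.0
                mcastaddr: 226.94.1.1
                mcastport: 5405
        }
        interface {
                # eth3
                ringnumber: 1
                bindnetaddr: 192.168.2.0
                mcastaddr: 226.94.1.2
                # two apart from ring 0's port, since each ring
                # also binds mcastport - 1 for sends
                mcastport: 5407
        }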
