[Pacemaker] two nodes fenced when drbd link fails
Dejan Muhamedagic
dejanmm at fastmail.fm
Mon May 17 09:36:11 UTC 2010
Hi,
On Thu, May 13, 2010 at 07:37:31PM +0200, Ivan Coronado wrote:
> Hello to everybody,
>
> I have a problem with the corosync.conf setup. I have a DRBD service
> running on eth3, and the general network and the STONITH device (iDRAC6)
> on eth0. If I unplug eth3 to simulate a network failure, both nodes
> are fenced (first the slave, followed by the master). If I leave only
> ringnumber 0 in the corosync.conf file, I don't have this problem. Is
> this normal operation?
No, but one can't say what happened without looking at the logs.
Thanks,
Dejan
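(For what it's worth, a first sanity check when a ring appears to fail is
corosync's own status tool; a sketch, assuming corosync is running on the
node -- the exact output format varies by version:)

```shell
# Show the status of each configured totem ring on the local node.
# Run this on each cluster node after unplugging eth3: a healthy ring
# reports "ring N active with no faults"; the failed ring should be
# flagged instead of taking the whole membership down.
corosync-cfgtool -s

# With rrp_mode active/passive, a marked-faulty ring can be re-enabled
# once the link is restored:
corosync-cfgtool -r
```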
> Here is the section of corosync.conf where I have the problem. Thanks
> for the help.
>
> rrp_mode: active
> interface {
>         # eth0
>         ringnumber: 0
>         bindnetaddr: 200.200.201.0
>         mcastaddr: 226.94.1.1
>         mcastport: 5405
> }
> interface {
>         # eth3
>         ringnumber: 1
>         bindnetaddr: 192.168.2.0
>         mcastaddr: 226.94.1.2
>         mcastport: 5406
> }
>
> -----
> Ivan
> _______________________________________________
> Pacemaker mailing list: Pacemaker at oss.clusterlabs.org
> http://oss.clusterlabs.org/mailman/listinfo/pacemaker
>
> Project Home: http://www.clusterlabs.org
> Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf