[Pacemaker] two nodes fenced when drbd link fails

Ivan Coronado icoronado at epcge.com
Thu May 13 13:37:31 EDT 2010


Hello everybody,
 
I have a problem with my corosync.conf setup. I have a DRBD service
running on eth3, and the general network and the STONITH device (an
iDRAC6) on eth0. If I unplug eth3 to simulate a network failure, both
nodes are fenced (first the slave, then the master). If I leave only
ringnumber 0 in corosync.conf, I don't have this problem. Is this
normal operation?
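
For reference, this is roughly how I watch the rings while pulling the
cable (a sketch; corosync-cfgtool ships with corosync):

        # print the status of ring 0 and ring 1 on the local node
        corosync-cfgtool -s

        # re-enable a ring marked FAULTY once the link is back
        corosync-cfgtool -r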
 
Here is the section of corosync.conf where I have the problem. Thanks
for the help.
 
        rrp_mode: active        # both rings carry traffic simultaneously
        interface {
                # ring 0: eth0 (general network, STONITH/iDRAC6)
                ringnumber: 0
                bindnetaddr: 200.200.201.0
                mcastaddr: 226.94.1.1
                mcastport: 5405
        }
        interface {
                # ring 1: eth3 (dedicated DRBD link)
                ringnumber: 1
                bindnetaddr: 192.168.2.0
                mcastaddr: 226.94.1.2
                mcastport: 5406
        }
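
For completeness, the snippet above sits inside the totem section; the
rest of that block looks more or less like this (version 2 is required,
and secauth/threads here are just the stock defaults, so treat this as
a sketch rather than an exact copy of my file):

        totem {
                version: 2
                secauth: off
                threads: 0
                rrp_mode: active
                # the two interface { ... } blocks shown above go here
        }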

-----
Ivan

