[Pacemaker] two nodes fenced when drbd link fails
Vadym Chepkov
vchepkov at gmail.com
Fri May 14 02:03:23 UTC 2010
On May 13, 2010, at 1:37 PM, Ivan Coronado wrote:
> Hello to everybody,
>
> I have a problem with my corosync.conf setup. I have a DRBD service running on eth3, and the general network plus the STONITH device (iDRAC6) on eth0. If I unplug eth3 to simulate a network failure, both nodes get fenced (first the slave, then the master). If I leave only ringnumber 0 in corosync.conf, the problem goes away. Is this normal operation?
>
> Here you have the section of corosync.conf where I have the problem, and thanks for the help.
>
> rrp_mode: active
> interface {
>         # eth0
>         ringnumber: 0
>         bindnetaddr: 200.200.201.0
>         mcastaddr: 226.94.1.1
>         mcastport: 5405
> }
> interface {
>         # eth3
>         ringnumber: 1
>         bindnetaddr: 192.168.2.0
>         mcastaddr: 226.94.1.2
>         mcastport: 5406
> }
> -----
> Ivan
I read on the openais list (openais at lists.osdl.org) that setting the mcastports at least two apart (e.g. 5405 and 5407) helps.
Vadym
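
As a sketch of that suggestion (using the addresses from the original post, which are assumptions here): corosync uses two UDP ports per ring, mcastport and mcastport - 1, so rings whose ports differ by only one can collide. Spacing the ports two apart avoids the overlap:

    rrp_mode: active
    interface {
            # eth0 -- ring 0 uses UDP ports 5405 and 5404
            ringnumber: 0
            bindnetaddr: 200.200.201.0
            mcastaddr: 226.94.1.1
            mcastport: 5405
    }
    interface {
            # eth3 -- ring 1 uses UDP ports 5407 and 5406, no overlap with ring 0
            ringnumber: 1
            bindnetaddr: 192.168.2.0
            mcastaddr: 226.94.1.2
            mcastport: 5407
    }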