[ClusterLabs] VIP is not removed after node loses connection with the other two nodes

Ken Gaillot kgaillot at redhat.com
Fri Jun 23 09:48:35 EDT 2017


On 06/22/2017 09:44 PM, Hui Xiang wrote:
> Hi guys,
> 
>   I have set up 3 nodes (node-1, node-2, node-3) as controller nodes,
> with a VIP managed by Pacemaker among them. After manually taking the
> management interface (the one used by corosync) down on node-1, while
> the node still has connectivity to the public/non-management network,
> I expected the VIP on node-1 to be stopped/removed by Pacemaker, since
> that node lost connection with the other two nodes. Instead, there are
> now two VIPs in the cluster. Below is my status output:
> 
> [node-1]
> Online: [ node-1.domain.tld node-2.domain.tld node-3.domain.tld ]
>  vip__public_old  (ocf::es:ns_IPaddr2):  Started node-1.domain.tld
> 
> [node-2 node-3]
> Online: [ node-2.domain.tld node-3.domain.tld ]
> OFFLINE: [ node-1.domain.tld ]
>  vip__public_old  (ocf::es:ns_IPaddr2):  Started node-3.domain.tld
> 
> 
> My question is: am I missing any configuration? How can I get the VIP
> removed on node-1? Shouldn't the crm status on node-1 be:
> [node-1]
> Online: [ node-1.domain.tld ]
> OFFLINE: [ node-2.domain.tld node-3.domain.tld ]
> 
> 
> Thanks much.
> Hui.

Hi,

How did you take the cluster interface down? If you're blocking it via
firewall, be aware that you have to block *outbound* traffic on the
corosync port as well.
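
For example, a minimal sketch with iptables, assuming the default
corosync ports (mcastport 5405, and 5404 just below it; check the
mcastport setting in your corosync.conf):

  # drop corosync traffic in both directions on the node under test
  iptables -A INPUT  -p udp --dport 5404:5405 -j DROP
  iptables -A OUTPUT -p udp --dport 5404:5405 -j DROP

Blocking only one direction produces an asymmetric failure where the
two sides of the split can disagree about membership, much like the
status output above.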

Do you have stonith working? When the cluster loses a node, it recovers
by fencing it. Without working fencing, the surviving partition cannot
guarantee that node-1 is actually down, and node-1 itself will keep
running its resources, which is how you end up with two active VIPs.
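
As a quick check (a sketch only; this assumes crmsh and/or pcs are
installed, and command names vary by version and distribution):

  # is fencing enabled at all?
  crm configure show | grep stonith-enabled
  # or, with pcs:
  pcs property list --all | grep stonith-enabled

  # list the configured fence devices and their state
  pcs stonith show

If stonith-enabled is false, or no working fence device is configured,
the cluster cannot safely recover from a lost node.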



