[Pacemaker] Service restoration in clone resource group
Sean Lutner
sean at rentul.net
Mon Oct 7 15:33:28 UTC 2013
Hello,
I have a two-node Pacemaker + CMAN cluster on CentOS 6.4 with the configuration shown below. I'm struggling to get the resources in the EIP-AND-VARNISH group back online on a failed node once the failover has completed.
I start with all varnish resources online on both nodes and the EIP resource online on node1. When I stop the varnish service on node1, all resources fail over to node2 as expected. However, if I then restart the varnish services on node1 and run crm_resource --cleanup on those resources, the cluster is disrupted and a full failover happens again.
My question is: how can I restart the varnish resources on a previously failed node without triggering another failover, so that they are marked online (Started) on that node again? Is it a matter of the order in which the services are started and cleaned up, or have I done something wrong in my configuration?
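For reference, the recovery sequence I've been running on the failed node looks roughly like this (reconstructed from memory, so treat it as a sketch rather than a verbatim transcript):

[root@node1 ~]# service varnish start
[root@node1 ~]# service varnishlog start
[root@node1 ~]# service varnishncsa start
[root@node1 ~]# crm_resource --cleanup --resource Varnish --node node1
[root@node1 ~]# crm_resource --cleanup --resource Varnishlog --node node1
[root@node1 ~]# crm_resource --cleanup --resource Varnishncsa --node node1

The disruption appears to start as soon as the cleanups run.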
CLUSTER CONFIG
[root@node1 ~]# pcs config
Corosync Nodes:

Pacemaker Nodes:
 node1 node2

Resources:
 Resource: ClusterEIP_1.2.3.4 (provider=pacemaker type=EIP class=ocf)
  Attributes: first_network_interface_id=eni-e4e0b68c second_network_interface_id=eni-35f9af5d first_private_ip=10.50.3.191 second_private_ip=10.50.3.91 eip=1.2.3.4 alloc_id=eipalloc-376c3c5f
  Operations: monitor interval=30s
 Clone: EIP-AND-VARNISH-clone
  Group: EIP-AND-VARNISH
   Resource: Varnish (provider=redhat type=varnish.sh class=ocf)
    Operations: monitor interval=30s
   Resource: Varnishlog (provider=redhat type=varnishlog.sh class=ocf)
    Operations: monitor interval=30s
   Resource: Varnishncsa (provider=redhat type=varnishncsa.sh class=ocf)
    Operations: monitor interval=30s

Location Constraints:
Ordering Constraints:
  ClusterEIP_1.2.3.4 then Varnish
  Varnish then Varnishlog
  Varnishlog then Varnishncsa
Colocation Constraints:
  Varnish with ClusterEIP_1.2.3.4
  Varnishlog with Varnish
  Varnishncsa with Varnishlog

Cluster Properties:
 dc-version: 1.1.8-7.el6-394e906
 cluster-infrastructure: cman
 last-lrm-refresh: 1381020426
 expected-quorum-votes: 2
 stonith-enabled: false
 no-quorum-policy: ignore
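For completeness, the non-default cluster properties were set with pcs roughly as follows (from memory):

[root@node1 ~]# pcs property set stonith-enabled=false
[root@node1 ~]# pcs property set no-quorum-policy=ignore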
CONSTRAINT AND RSC DEFAULTS
[root@node1 ~]# pcs constraint all
Location Constraints:
Ordering Constraints:
  ClusterEIP_1.2.3.4 then Varnish (Mandatory) (id:order-ClusterEIP_1.2.3.4-Varnish-mandatory)
  Varnish then Varnishlog (Mandatory) (id:order-Varnish-Varnishlog-mandatory)
  Varnishlog then Varnishncsa (Mandatory) (id:order-Varnishlog-Varnishncsa-mandatory)
Colocation Constraints:
  Varnish with ClusterEIP_1.2.3.4 (INFINITY) (id:colocation-Varnish-ClusterEIP_1.2.3.4-INFINITY)
  Varnishlog with Varnish (INFINITY) (id:colocation-Varnishlog-Varnish-INFINITY)
  Varnishncsa with Varnishlog (INFINITY) (id:colocation-Varnishncsa-Varnishlog-INFINITY)
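In case it matters, the constraints were created with commands along these lines (again from memory, not a verbatim shell history):

[root@node1 ~]# pcs constraint order ClusterEIP_1.2.3.4 then Varnish
[root@node1 ~]# pcs constraint order Varnish then Varnishlog
[root@node1 ~]# pcs constraint order Varnishlog then Varnishncsa
[root@node1 ~]# pcs constraint colocation add Varnish with ClusterEIP_1.2.3.4
[root@node1 ~]# pcs constraint colocation add Varnishlog with Varnish
[root@node1 ~]# pcs constraint colocation add Varnishncsa with Varnishlog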
[root@node1 ~]# pcs resource rsc defaults
resource-stickiness: 100
migration-threshold: 1
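The defaults were set with something like:

[root@node1 ~]# pcs resource rsc defaults resource-stickiness=100
[root@node1 ~]# pcs resource rsc defaults migration-threshold=1

I mention migration-threshold=1 explicitly because, as I understand it, it means a single failure is enough to push a resource off a node.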