[Pacemaker] help building 2 node config

Alex Samad - Yieldbroker Alex.Samad at yieldbroker.com
Thu Mar 13 01:13:59 EDT 2014


Well, I think I have worked it out.


# Create ybrp ip address  
pcs resource create ybrpip ocf:heartbeat:IPaddr2 ip=10.172.214.50 cidr_netmask=24 nic=eth0 clusterip_hash=sourceip-sourceport \
    op start interval="0s" timeout="60s" \
    op monitor interval="5s" timeout="20s" \
    op stop interval="0s" timeout="60s"
# Clone it
#pcs resource clone ybrpip globally-unique=true clone-max=2 clone-node-max=2

# Create status
pcs resource create ybrpstat ocf:yb:ybrp \
    op start interval="10s" timeout="60s" \
    op monitor interval="5s" timeout="20s" \
    op stop interval="10s" timeout="60s"

# Clone them
pcs resource clone ybrpip globally-unique=true clone-max=2 clone-node-max=2
pcs resource clone ybrpstat globally-unique=false clone-max=2 clone-node-max=2
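(For reference: with a globally-unique IPaddr2 clone, the agent sets up an iptables CLUSTERIP rule that hashes incoming traffic across the clone instances; which hash buckets a given node currently answers for can be checked via the /proc path used later in this thread.)

```shell
# Show which CLUSTERIP hash buckets this node currently answers for
# (path as used elsewhere in this thread; requires the ipt_CLUSTERIP module)
cat /proc/net/ipt_CLUSTERIP/10.172.214.50
```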

pcs constraint colocation add ybrpip ybrpstat INFINITY
pcs constraint colocation add ybrpip-clone ybrpstat-clone INFINITY
pcs constraint order ybrpstat then ybrpip
pcs constraint order ybrpstat-clone then ybrpip-clone
pcs constraint location ybrpip prefers devrp1
pcs constraint location ybrpip-clone prefers devrp2


Have I done anything silly?

Also, as I don't have the application actually running on my nodes, I notice failures occur very fast, more than one per second. Where is that configured, and how do I configure it so that only after 2, 3, 4 or 5 attempts does it fail over to the other node? I also want the resources to move back to their original nodes when those nodes come back.
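For what it's worth, the retry/failover behaviour asked about here is usually governed by the migration-threshold and failure-timeout resource meta attributes, and failback by resource-stickiness; a hedged sketch (values are examples only, not tested against this cluster):

```shell
# Fail the resource over only after 3 monitor failures on a node ...
pcs resource meta ybrpstat migration-threshold=3
# ... and forget those failures after 60s so the node becomes eligible again
pcs resource meta ybrpstat failure-timeout=60s

# A stickiness of 0 lets resources fall back to the preferred node when it
# returns; a positive value makes them stay where they currently run
pcs resource defaults resource-stickiness=0
```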

So I tried the config above, and when I rebooted node A its IP address moved to node B, but when node A came back the address didn't move back.




pcs config
Cluster Name: ybrp
Corosync Nodes:
 
Pacemaker Nodes:
 devrp1 devrp2 

Resources: 
 Clone: ybrpip-clone
  Meta Attrs: globally-unique=true clone-max=2 clone-node-max=2 
  Resource: ybrpip (class=ocf provider=heartbeat type=IPaddr2)
   Attributes: ip=10.172.214.50 cidr_netmask=24 nic=eth0 clusterip_hash=sourceip-sourceport 
   Operations: start interval=0s timeout=60s (ybrpip-start-interval-0s)
               monitor interval=5s timeout=20s (ybrpip-monitor-interval-5s)
               stop interval=0s timeout=60s (ybrpip-stop-interval-0s)
 Clone: ybrpstat-clone
  Meta Attrs: globally-unique=false clone-max=2 clone-node-max=2 
  Resource: ybrpstat (class=ocf provider=yb type=ybrp)
   Operations: start interval=10s timeout=60s (ybrpstat-start-interval-10s)
               monitor interval=5s timeout=20s (ybrpstat-monitor-interval-5s)
               stop interval=10s timeout=60s (ybrpstat-stop-interval-10s)

Stonith Devices: 
Fencing Levels: 

Location Constraints:
  Resource: ybrpip
    Enabled on: devrp1 (score:INFINITY) (id:location-ybrpip-devrp1-INFINITY)
  Resource: ybrpip-clone
    Enabled on: devrp2 (score:INFINITY) (id:location-ybrpip-clone-devrp2-INFINITY)
Ordering Constraints:
  start ybrpstat then start ybrpip (Mandatory) (id:order-ybrpstat-ybrpip-mandatory)
  start ybrpstat-clone then start ybrpip-clone (Mandatory) (id:order-ybrpstat-clone-ybrpip-clone-mandatory)
Colocation Constraints:
  ybrpip with ybrpstat (INFINITY) (id:colocation-ybrpip-ybrpstat-INFINITY)
  ybrpip-clone with ybrpstat-clone (INFINITY) (id:colocation-ybrpip-clone-ybrpstat-clone-INFINITY)

Cluster Properties:
 cluster-infrastructure: cman
 dc-version: 1.1.10-14.el6-368c726
 last-lrm-refresh: 1394682724
 no-quorum-policy: ignore
 stonith-enabled: false

Shouldn't the constraints have moved it back to node A?
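A hedged way to see why the instance stayed on devrp2 is to look at the allocation scores and fail counts the cluster computed (commands from the same pcs/Pacemaker 1.1 era as this config; output formats may differ between versions):

```shell
# Show the placement scores Pacemaker assigned each resource on each node
crm_simulate -sL | grep ybrpip

# Check whether an accumulated failcount is keeping ybrpip off devrp1
pcs resource failcount show ybrpip
```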

pcs status
Cluster name: ybrp
Last updated: Thu Mar 13 16:13:40 2014
Last change: Thu Mar 13 16:06:21 2014 via cibadmin on devrp1
Stack: cman
Current DC: devrp2 - partition with quorum
Version: 1.1.10-14.el6-368c726
2 Nodes configured
4 Resources configured


Online: [ devrp1 devrp2 ]

Full list of resources:

 Clone Set: ybrpip-clone [ybrpip] (unique)
     ybrpip:0   (ocf::heartbeat:IPaddr2):       Started devrp2 
     ybrpip:1   (ocf::heartbeat:IPaddr2):       Started devrp2 
 Clone Set: ybrpstat-clone [ybrpstat]
     Started: [ devrp1 devrp2 ]




> -----Original Message-----
> From: Alex Samad - Yieldbroker [mailto:Alex.Samad at yieldbroker.com]
> Sent: Thursday, 13 March 2014 2:07 PM
> To: pacemaker at oss.clusterlabs.org
> Subject: [Pacemaker] help building 2 node config
> 
> Hi
> 
> I sent out an email asking for help converting an old config. Thought it might
> be better to start from scratch.
> 
> I have 2 nodes, which run an application (sort of a reverse proxy).
> Node A
> Node B
> 
> I would like to use ocf:heartbeat:IPaddr2 so that I can load-balance the IP
> 
> # Create ybrp ip address
> pcs resource create ybrpip ocf:heartbeat:IPaddr2 ip=10.172.214.50
> cidr_netmask=24 nic=eth0 clusterip_hash=sourceip-sourceport \
>     op start interval="0s" timeout="60s" \
>     op monitor interval="5s" timeout="20s" \
>     op stop interval="0s" timeout="60s"
> 
> # Clone it
> pcs resource clone ybrpip2 ybrpip meta master-max="2" master-node-
> max="2" clone-max="2" clone-node-max="1" notify="true"
> interleave="true"
> 
> 
> This seems to work okay, but when I tested it, on node B I ran this:
> crm_mon -1 ; iptables -nvL INPUT | head -5 ; ip a ; echo -n [ ; cat
> /proc/net/ipt_CLUSTERIP/10.172.214.50 ; echo ]
> 
> in particular I was watching /proc/net/ipt_CLUSTERIP/10.172.214.50
> 
> and I rebooted node A. I noticed ipt_CLUSTERIP didn't fail over? I would
> have expected to see 1,2 in there on node B when node A failed.
>
> In fact, when I reboot node A it comes back with 2 in there ... that's not good!
> 
> 
> pcs resource show ybrpip-clone
>  Clone: ybrpip-clone
>   Meta Attrs: master-max=2 master-node-max=2 clone-max=2 clone-node-
> max=1 notify=true interleave=true
>   Resource: ybrpip (class=ocf provider=heartbeat type=IPaddr2)
>    Attributes: ip=10.172.214.50 cidr_netmask=24 nic=eth0
> clusterip_hash=sourceip-sourceport
>    Operations: start interval=0s timeout=60s (ybrpip-start-interval-0s)
>                monitor interval=5s timeout=20s (ybrpip-monitor-interval-5s)
>                stop interval=0s timeout=60s (ybrpip-stop-interval-0s)
> 
> pcs resource show ybrpip
>  Resource: ybrpip (class=ocf provider=heartbeat type=IPaddr2)
>   Attributes: ip=10.172.214.50 cidr_netmask=24 nic=eth0
> clusterip_hash=sourceip-sourceport
>   Operations: start interval=0s timeout=60s (ybrpip-start-interval-0s)
>               monitor interval=5s timeout=20s (ybrpip-monitor-interval-5s)
>               stop interval=0s timeout=60s (ybrpip-stop-interval-0s)
> 
> 
> 
> So I think this has something to do with the meta data.
> 
> 
> 
> I have another resource
> pcs resource create  ybrpstat ocf:yb:ybrp op monitor interval=5s
> 
> I want two of these, one for node A and one for node B.
> 
> I want the IP address to be dependent on whether this resource is available
> on the node. How can I do that?
> 
> Alex
> 
> 
> 
> 
> 
> _______________________________________________
> Pacemaker mailing list: Pacemaker at oss.clusterlabs.org
> http://oss.clusterlabs.org/mailman/listinfo/pacemaker
> 
> Project Home: http://www.clusterlabs.org
> Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
> Bugs: http://bugs.clusterlabs.org



