[Pacemaker] help building 2 node config
Andrew Beekhof
andrew at beekhof.net
Tue Mar 18 03:01:54 UTC 2014
On 18 Mar 2014, at 1:36 pm, Alex Samad - Yieldbroker <Alex.Samad at yieldbroker.com> wrote:
> Hi
>
>> -----Original Message-----
>> From: Andrew Beekhof [mailto:andrew at beekhof.net]
>> Sent: Tuesday, 18 March 2014 11:51 AM
>> To: The Pacemaker cluster resource manager
>> Subject: Re: [Pacemaker] help building 2 node config
>>
>>
>> On 13 Mar 2014, at 4:13 pm, Alex Samad - Yieldbroker
>> <Alex.Samad at yieldbroker.com> wrote:
>>
> [snip]
>>
>>
>> ^^^ you only need the -clone versions of these constraints.
>> Other than that it's fine.
>>
>>>
>>>
>>> Have I done anything silly?
>>>
>>> Also, as I don't have the application actually running on my nodes, I
>>> notice failures occur very fast, more than one a second. Where is that
>>> configured, and how do I configure it so that after 2, 3, 4 or 5
>>> attempts it fails over to the other node?
>>
>> Look for migration-threshold in the docs
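For example, it can be set as a meta attribute on the proxy resource (a minimal sketch, assuming a threshold of 3 and the resource names from this thread; the fail count is reset with a cleanup once the application is fixed):

  pcs resource meta ybrpstat migration-threshold=3
  pcs resource cleanup ybrpstat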
>
> okay
>
>>
>>> I also want the resources to move back to the original nodes when
>>> they come back
>>
>> resource-stickiness=0
>
> Added as a meta
>
>>
>>>
> [snip]
>
>
> I have made some slight changes. Firstly, if I don't use the IP load balancing it works fine, as I expect; it's the floating IP and its hash that's the problem.
>
> pcs resource create ybrpip ocf:heartbeat:IPaddr2 ip=10.32.21.20 cidr_netmask=24 nic=eth0 clusterip_hash=sourceip-sourceport \
>    op start interval="0s" timeout="60s" \
>    op monitor interval="5s" timeout="20s" \
>    op stop interval="0s" timeout="60s"
>
> pcs resource meta ybrpip stickiness=0
>
>
> pcs resource create ybrpstat ocf:yb:proxy \
>    op start interval="10s" timeout="60s" \
>    op monitor interval="5s" timeout="20s" \
>    op stop interval="10s" timeout="60s"
>
> pcs resource meta ybrpstat stickiness=0
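One thing to double-check here: the meta attribute Pacemaker recognises is resource-stickiness; a plain "stickiness" attribute is stored in the CIB but ignored by the policy engine. The intended commands would presumably be:

  pcs resource meta ybrpip resource-stickiness=0
  pcs resource meta ybrpstat resource-stickiness=0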
>
>
> # This runs on both boxes at the same time, no issue
> pcs resource clone ybrpstat globally-unique=false clone-max=2 clone-node-max=1
> # has to be globally unique (load-balancing works on the IP hash)
> pcs resource clone ybrpip globally-unique=true clone-max=2 clone-node-max=2
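Cloning IPaddr2 with globally-unique=true like this puts the agent into its load-sharing mode: each node installs an iptables CLUSTERIP rule for 10.32.21.20 and incoming traffic is split between the instances according to clusterip_hash. One quick way to confirm the rule is in place on a node (a sketch, the exact rule text will vary):

  iptables -L INPUT -n | grep CLUSTERIP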
>
>
> # This, I hope, states that the IP can't be started unless ybrpstat is okay
> pcs constraint colocation add ybrpip-clone ybrpstat-clone INFINITY
Yes. It can only run on the same machine(s) as ybrpstat-clone
>
> # this states stat must be started before ip
> pcs constraint order ybrpstat-clone then ybrpip-clone
>
> # not sure about this
> pcs constraint location ybrpip-clone prefers dc1devrp01
>
> I think the last line says that the clone of ybrpip prefers dc1devrp01,
yes, but for a clone it doesn't do much
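If it isn't achieving anything useful it can simply be dropped, using the constraint id shown in the pcs config output below:

  pcs constraint remove location-ybrpip-clone-dc1devrp01-INFINITY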
> does that mean the other resource prefers the other node?
>
> pcs status
> Cluster name: ybrp
> Last updated: Tue Mar 18 13:31:29 2014
> Last change: Tue Mar 18 13:26:51 2014 via cibadmin on alcdevrp01
> Stack: cman
> Current DC: dc1devrp01 - partition with quorum
> Version: 1.1.10-14.el6-368c726
> 2 Nodes configured
> 4 Resources configured
>
>
> Online: [ alcdevrp01 dc1devrp01 ]
>
> Full list of resources:
>
> Clone Set: ybrpstat-clone [ybrpstat]
>     Started: [ alcdevrp01 dc1devrp01 ]
> Clone Set: ybrpip-clone [ybrpip] (unique)
>     ybrpip:0 (ocf::heartbeat:IPaddr2): Started dc1devrp01
>     ybrpip:1 (ocf::heartbeat:IPaddr2): Started dc1devrp01
>
> this is after I rebooted alcdevrp01. The resources that were on it moved over to dc1devrp01, but I want one of the ybrpip resources to move back to it. I guess another way of saying this is: can I place a constraint on an instance of one of the cloned resources?
>
>
> So this is the result after reboot dc1devrp01
> pcs status
> Cluster name: ybrp
> Last updated: Tue Mar 18 13:32:08 2014
> Last change: Tue Mar 18 13:26:51 2014 via cibadmin on alcdevrp01
> Stack: cman
> Current DC: alcdevrp01 - partition with quorum
> Version: 1.1.10-14.el6-368c726
> 2 Nodes configured
> 4 Resources configured
>
>
> Online: [ alcdevrp01 dc1devrp01 ]
>
> Full list of resources:
>
> Clone Set: ybrpstat-clone [ybrpstat]
>     Started: [ alcdevrp01 dc1devrp01 ]
> Clone Set: ybrpip-clone [ybrpip] (unique)
>     ybrpip:0 (ocf::heartbeat:IPaddr2): Started alcdevrp01
>     ybrpip:1 (ocf::heartbeat:IPaddr2): Started dc1devrp01
>
>
> the ybrpip resources rebalanced themselves, I am guessing because I have the location constraint
it should do so anyway
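With resource-stickiness at 0 the placement is recalculated from scratch each time, so the instances spread across both nodes again with or without the location constraint. If placement ever looks wrong, the allocation scores can be dumped with something like:

  crm_simulate -sL | grep ybrpip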
>
> pcs config
> Cluster Name: ybrp
> Corosync Nodes:
>
> Pacemaker Nodes:
> alcdevrp01 dc1devrp01
>
> Resources:
>  Clone: ybrpstat-clone
>   Meta Attrs: globally-unique=false clone-max=2 clone-node-max=1
>   Resource: ybrpstat (class=ocf provider=yb type=proxy)
>    Meta Attrs: stickiness=0
>    Operations: start interval=10s timeout=60s (ybrpstat-start-interval-10s)
>                monitor interval=5s timeout=20s (ybrpstat-monitor-interval-5s)
>                stop interval=10s timeout=60s (ybrpstat-stop-interval-10s)
>  Clone: ybrpip-clone
>   Meta Attrs: globally-unique=true clone-max=2 clone-node-max=2
>   Resource: ybrpip (class=ocf provider=heartbeat type=IPaddr2)
>    Attributes: ip=10.32.21.20 cidr_netmask=24 nic=eth0 clusterip_hash=sourceip-sourceport
>    Meta Attrs: stickiness=0
>    Operations: start interval=0s timeout=60s (ybrpip-start-interval-0s)
>                monitor interval=5s timeout=20s (ybrpip-monitor-interval-5s)
>                stop interval=0s timeout=60s (ybrpip-stop-interval-0s)
>
> Stonith Devices:
> Fencing Levels:
>
> Location Constraints:
>   Resource: ybrpip-clone
>     Enabled on: dc1devrp01 (score:INFINITY) (id:location-ybrpip-clone-dc1devrp01-INFINITY)
> Ordering Constraints:
>   start ybrpstat-clone then start ybrpip-clone (Mandatory) (id:order-ybrpstat-clone-ybrpip-clone-mandatory)
> Colocation Constraints:
>   ybrpip-clone with ybrpstat-clone (INFINITY) (id:colocation-ybrpip-clone-ybrpstat-clone-INFINITY)
>
> Cluster Properties:
>  cluster-infrastructure: cman
>  dc-version: 1.1.10-14.el6-368c726
>  no-quorum-policy: ignore
>  stonith-enabled: false
>
>
>
> If I have these location constraints, how do I manually move a resource from one node to another?
crm_resource --ban
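For example, using the names from this config (--clear removes the ban again afterwards):

  crm_resource --ban --resource ybrpip-clone --node dc1devrp01
  crm_resource --clear --resource ybrpip-clone --node dc1devrp01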
> do I just move the node into standby mode?
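Standby also works, but it evacuates every resource from the node rather than just one; with pcs that would be:

  pcs cluster standby dc1devrp01
  pcs cluster unstandby dc1devrp01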
>
> A
>
> _______________________________________________
> Pacemaker mailing list: Pacemaker at oss.clusterlabs.org
> http://oss.clusterlabs.org/mailman/listinfo/pacemaker
>
> Project Home: http://www.clusterlabs.org
> Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
> Bugs: http://bugs.clusterlabs.org