[Pacemaker] help building 2 node config
Alex Samad - Yieldbroker
Alex.Samad at yieldbroker.com
Tue Mar 18 06:25:31 UTC 2014
Hi
I've given up on the clone resources; there seems to be no way to place the individual clone instances onto specific nodes, so I have fallen back to this:
# Create ybrp ip address
pcs resource create ybrpip ocf:heartbeat:IPaddr2 ip=10.32.21.20 cidr_netmask=24 nic=eth0 clusterip_hash=sourceip-sourceport \
op start interval="0s" timeout="60s" on-fail=restart \
op monitor interval="5s" timeout="20s" on-fail=restart \
op stop interval="0s" timeout="60s"
pcs resource meta ybrpip resource-stickiness=100 migration-threshold=3 failure-timeout=600s
# Create status resource
pcs resource create ybrpstat ocf:yb:proxy \
op start interval="0s" timeout="60s" on-fail=restart \
op monitor interval="5s" timeout="20s" on-fail=restart \
op stop interval="0s" timeout="60s"
pcs resource meta ybrpstat resource-stickiness=100
# clone it
pcs resource clone ybrpstat globally-unique=false clone-max=2 clone-node-max=1
pcs constraint colocation add ybrpip ybrpstat-clone INFINITY
pcs constraint order ybrpstat-clone then ybrpip
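
To sanity-check the above before going further, I'm running something like this (a rough sketch; crm_verify and crm_mon ship with Pacemaker, and pcs constraint with no arguments just lists the constraints):

crm_verify --live-check -V   # validate the running CIB and print any warnings/errors
pcs constraint               # confirm the colocation/order constraints created above
crm_mon -1                   # one-shot view of where everything ended up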
Questions on this one:
If a service fails a monitor, will the above make Pacemaker restart the service and check again, i.e. treat it as a recoverable error?
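
What I am expecting, based on my reading of the docs (so please correct me): with on-fail=restart a failed monitor triggers a local stop/start and another monitor, each failure bumps the failcount, and once it reaches migration-threshold=3 the resource is pushed off that node until failure-timeout=600s expires or the count is cleared. To watch that happen I was going to use something like the following (a sketch; the failcount subcommand assumes a reasonably recent pcs):

crm_mon -1 --failcounts              # one-shot status including per-node fail counts
pcs resource failcount show ybrpip   # failcount for just the IP resource
pcs resource cleanup ybrpip          # clear failures so the resource may run on that node again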
A
> -----Original Message-----
> From: Alex Samad - Yieldbroker [mailto:Alex.Samad at yieldbroker.com]
> Sent: Tuesday, 18 March 2014 3:18 PM
> To: The Pacemaker cluster resource manager
> Subject: Re: [Pacemaker] help building 2 node config
>
>
>
> > -----Original Message-----
> > From: Andrew Beekhof [mailto:andrew at beekhof.net]
> > Sent: Tuesday, 18 March 2014 2:02 PM
> > To: The Pacemaker cluster resource manager
> > Subject: Re: [Pacemaker] help building 2 node config
> >
> >
> > On 18 Mar 2014, at 1:36 pm, Alex Samad - Yieldbroker
> > <Alex.Samad at yieldbroker.com> wrote:
> >
> [snip]
>
> > >
> > > pcs status
> > > Cluster name: ybrp
> > > Last updated: Tue Mar 18 13:31:29 2014
> > > Last change: Tue Mar 18 13:26:51 2014 via cibadmin on alcdevrp01
> > > Stack: cman
> > > Current DC: dc1devrp01 - partition with quorum
> > > Version: 1.1.10-14.el6-368c726
> > > 2 Nodes configured
> > > 4 Resources configured
> > >
> > >
> > > Online: [ alcdevrp01 dc1devrp01 ]
> > >
> > > Full list of resources:
> > >
> > > Clone Set: ybrpstat-clone [ybrpstat]
> > >     Started: [ alcdevrp01 dc1devrp01 ]
> > > Clone Set: ybrpip-clone [ybrpip] (unique)
> > >     ybrpip:0 (ocf::heartbeat:IPaddr2): Started dc1devrp01
> > >     ybrpip:1 (ocf::heartbeat:IPaddr2): Started dc1devrp01
> > >
> > > This is after I rebooted alcdevrp01: the resources that were on it moved
> > > over to dc1devrp01, but I want one of the ybrpip resources to move back
> > > to it. Another way of saying this: can I place a constraint on a single
> > > instance of one of the cloned resources ...
>
> So why didn't the resources get rebalanced here?
>
>
> > >
> > >
> > > So this is the result after rebooting dc1devrp01:
> > >
> > > pcs status
> > > Cluster name: ybrp
> > > Last updated: Tue Mar 18 13:32:08 2014
> > > Last change: Tue Mar 18 13:26:51 2014 via cibadmin on alcdevrp01
> > > Stack: cman
> > > Current DC: alcdevrp01 - partition with quorum
> > > Version: 1.1.10-14.el6-368c726
> > > 2 Nodes configured
> > > 4 Resources configured
> > >
> > >
> > > Online: [ alcdevrp01 dc1devrp01 ]
> > >
> > > Full list of resources:
> > >
> > > Clone Set: ybrpstat-clone [ybrpstat]
> > >     Started: [ alcdevrp01 dc1devrp01 ]
> > > Clone Set: ybrpip-clone [ybrpip] (unique)
> > >     ybrpip:0 (ocf::heartbeat:IPaddr2): Started alcdevrp01
> > >     ybrpip:1 (ocf::heartbeat:IPaddr2): Started dc1devrp01
> > >
> > >
> > > the ybrpip resources rebalanced themselves, I am guessing because I
> > > have the location constraint
> >
> > it should do so anyway
> >
> [snip]
>
> > >
> > > If I have these location constraints, how do I manually move a
> > > resource from one node to another?
> >
> > crm_resource --ban
>
>
> # current status
>
> pcs status
> Cluster name: ybrp
> Last updated: Tue Mar 18 15:08:36 2014
> Last change: Tue Mar 18 15:08:09 2014 via crm_resource on alcdevrp01
> Stack: cman
> Current DC: alcdevrp01 - partition with quorum
> Version: 1.1.10-14.el6-368c726
> 2 Nodes configured
> 4 Resources configured
>
>
> Online: [ alcdevrp01 dc1devrp01 ]
>
> Full list of resources:
>
> Clone Set: ybrpstat-clone [ybrpstat]
> Started: [ alcdevrp01 dc1devrp01 ]
> Clone Set: ybrpip-clone [ybrpip] (unique)
> ybrpip:0 (ocf::heartbeat:IPaddr2): Started alcdevrp01
> ybrpip:1 (ocf::heartbeat:IPaddr2): Started alcdevrp01
>
>
> # move it
> crm_resource --resource ybrpip-clone --move
>
>
> Clone Set: ybrpip-clone [ybrpip] (unique)
> ybrpip:0 (ocf::heartbeat:IPaddr2): Started dc1devrp01
> ybrpip:1 (ocf::heartbeat:IPaddr2): Started dc1devrp01
>
> # remove constraint
> crm_resource --resource ybrpip-clone --clear
>
> Clone Set: ybrpip-clone [ybrpip] (unique)
> ybrpip:0 (ocf::heartbeat:IPaddr2): Started dc1devrp01
> ybrpip:1 (ocf::heartbeat:IPaddr2): Started dc1devrp01
>
> # Still doesn't balance out. How do I move just one of the clones? (see the sketch at the end of this message)
>
>
>
>
> >
> > > do I just move the node into standby mode?
> > >
> > > A
> [snip]
>
>
>
> _______________________________________________
> Pacemaker mailing list: Pacemaker at oss.clusterlabs.org
> http://oss.clusterlabs.org/mailman/listinfo/pacemaker
>
> Project Home: http://www.clusterlabs.org
> Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
> Bugs: http://bugs.clusterlabs.org
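
On the "how do I move just one of the clones" question above: I have not found a way to target an individual instance of a globally-unique clone directly. The two levers that do seem to exist are banning the whole clone from a node (which limits what can run there) and clone-node-max (which caps how many instances a single node may host). A rough sketch of both, using the node names from the status output above; --ban, --clear and --node are standard crm_resource options, but the exact pcs invocation for clone meta attributes may vary by version, so treat this as an experiment rather than a recommendation:

# temporarily forbid ybrpip-clone on alcdevrp01, pushing its instances to dc1devrp01
crm_resource --resource ybrpip-clone --ban --node alcdevrp01

# remove the cli-ban constraint again once things have settled
crm_resource --resource ybrpip-clone --clear

# cap the clone at one instance per node, so with both nodes online the two
# instances have to spread out (the trade-off: only one instance can run while a node is down)
pcs resource meta ybrpip-clone clone-node-max=1

If the instances still stack on one node after a ban is cleared, it may simply be resource-stickiness holding them where they are; temporarily lowering it is a cheap way to test that.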