[Pacemaker] help building 2 node config
Alex Samad - Yieldbroker
Alex.Samad at yieldbroker.com
Wed Mar 19 00:00:35 UTC 2014
> -----Original Message-----
> From: Andrew Beekhof [mailto:andrew at beekhof.net]
> Sent: Wednesday, 19 March 2014 10:24 AM
> To: The Pacemaker cluster resource manager
> Subject: Re: [Pacemaker] help building 2 node config
>
>
> On 18 Mar 2014, at 3:17 pm, Alex Samad - Yieldbroker
> <Alex.Samad at yieldbroker.com> wrote:
[snip]
> > Full list of resources:
> >
> > Clone Set: ybrpstat-clone [ybrpstat]
> >  Started: [ alcdevrp01 dc1devrp01 ]
> >  Clone Set: ybrpip-clone [ybrpip] (unique)
> > ybrpip:0 (ocf::heartbeat:IPaddr2): Started alcdevrp01
> > ybrpip:1 (ocf::heartbeat:IPaddr2): Started alcdevrp01
> >
> >
> > # move it
> > crm_resource --resource ybrpip-clone --move
> >
> >
> > Clone Set: ybrpip-clone [ybrpip] (unique)
> > ybrpip:0 (ocf::heartbeat:IPaddr2): Started dc1devrp01
> > ybrpip:1 (ocf::heartbeat:IPaddr2): Started dc1devrp01
> >
> > # remove constraint
> > crm_resource --resource ybrpip-clone --clear
> >
> > Clone Set: ybrpip-clone [ybrpip] (unique)
> > ybrpip:0 (ocf::heartbeat:IPaddr2): Started dc1devrp01
> > ybrpip:1 (ocf::heartbeat:IPaddr2): Started dc1devrp01
> >
> > # still doesn't balance out. How do I move just one of the clones?
>
> Can you send me the result of cibadmin -Ql when the cluster is in this state?
I thought I would post it to pastebin:
http://pastebin.com/yyMh9yhe  <<< this is with everything on devrp1 after rebooting devrp2
# config
pcs config
Cluster Name: ybrp
Corosync Nodes:
Pacemaker Nodes:
devrp1 devrp2
Resources:
 Clone: ybrpip-clone
  Meta Attrs: globally-unique=true clone-max=2 clone-node-max=2
  Resource: ybrpip (class=ocf provider=heartbeat type=IPaddr2)
   Attributes: ip=10.172.214.50 cidr_netmask=24 nic=eth0 clusterip_hash=sourceip-sourceport
   Operations: start interval=0s timeout=60s (ybrpip-start-interval-0s)
               monitor interval=5s timeout=20s (ybrpip-monitor-interval-5s)
               stop interval=0s timeout=60s (ybrpip-stop-interval-0s)
 Clone: ybrpstat-clone
  Meta Attrs: globally-unique=false clone-max=2 clone-node-max=1
  Resource: ybrpstat (class=ocf provider=yb type=ybrp)
   Operations: start interval=10s timeout=60s (ybrpstat-start-interval-10s)
               monitor interval=5s timeout=20s (ybrpstat-monitor-interval-5s)
               stop interval=10s timeout=60s (ybrpstat-stop-interval-10s)
Stonith Devices:
Fencing Levels:
Location Constraints:
Ordering Constraints:
Colocation Constraints:
Cluster Properties:
cluster-infrastructure: cman
dc-version: 1.1.10-14.el6-368c726
last-lrm-refresh: 1394682724
no-quorum-policy: ignore
stonith-enabled: false
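For completeness, I believe the configuration above was built with pcs commands roughly like the following (a sketch reconstructed from the dump; ocf:yb:ybrp is our in-house agent, and the exact create/clone syntax may vary between pcs versions):

```shell
# Hypothetical recreation of the config shown above (untested sketch).
# The globally-unique IPaddr2 clone spreads traffic via CLUSTERIP;
# clone-node-max=2 lets both IP instances run on one node during failover.
pcs resource create ybrpip ocf:heartbeat:IPaddr2 \
    ip=10.172.214.50 cidr_netmask=24 nic=eth0 \
    clusterip_hash=sourceip-sourceport \
    op monitor interval=5s timeout=20s
pcs resource clone ybrpip globally-unique=true clone-max=2 clone-node-max=2

# The in-house status resource, one instance per node.
pcs resource create ybrpstat ocf:yb:ybrp op monitor interval=5s timeout=20s
pcs resource clone ybrpstat clone-max=2 clone-node-max=1
```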
# status
pcs status
Cluster name: ybrp
Last updated: Wed Mar 19 10:57:41 2014
Last change: Mon Mar 17 13:30:16 2014 via cibadmin on devrp1
Stack: cman
Current DC: devrp1 - partition with quorum
Version: 1.1.10-14.el6-368c726
2 Nodes configured
4 Resources configured
Online: [ devrp1 devrp2 ]
Full list of resources:
Clone Set: ybrpip-clone [ybrpip] (unique)
ybrpip:0 (ocf::heartbeat:IPaddr2): Started devrp1
ybrpip:1 (ocf::heartbeat:IPaddr2): Started devrp1
Clone Set: ybrpstat-clone [ybrpstat]
Started: [ devrp1 devrp2 ]
It was balanced, and then I rebooted devrp2 and it stayed like this:
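What I have been trying, without luck, is to move a single instance back by its instance ID (ybrpip:1 from the status above); whether crm_resource actually supports per-instance moves for a globally-unique clone is really my question:

```shell
# Attempt to move just one instance of the globally-unique clone back.
# ybrpip:1 is the instance ID from "pcs status"; it is unclear to me
# whether crm_resource honours per-instance moves here.
crm_resource --resource ybrpip:1 --move --node devrp2

# Afterwards, drop the temporary location constraint the move created:
crm_resource --resource ybrpip:1 --clear
```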
pacemaker-1.1.10-14.el6.x86_64
pacemaker-libs-1.1.10-14.el6.x86_64
pacemaker-cli-1.1.10-14.el6.x86_64
pacemaker-cluster-libs-1.1.10-14.el6.x86_64
This is what happened after I rebooted devrp1:
# status
pcs status
Cluster name: ybrp
Last updated: Wed Mar 19 10:59:29 2014
Last change: Mon Mar 17 13:30:16 2014 via cibadmin on devrp1
Stack: cman
Current DC: devrp2 - partition with quorum
Version: 1.1.10-14.el6-368c726
2 Nodes configured
4 Resources configured
Online: [ devrp1 devrp2 ]
Full list of resources:
Clone Set: ybrpip-clone [ybrpip] (unique)
ybrpip:0 (ocf::heartbeat:IPaddr2): Started devrp2
ybrpip:1 (ocf::heartbeat:IPaddr2): Started devrp2
Clone Set: ybrpstat-clone [ybrpstat]
Started: [ devrp1 devrp2 ]
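In case it is useful, I can also dump the allocation scores to see why the policy engine keeps both instances together (just stock crm_simulate, nothing custom):

```shell
# Show the live cluster state (-L) together with allocation scores (-s);
# the per-node scores for ybrpip:0 / ybrpip:1 should explain the placement.
crm_simulate -L -s
```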
[snip]