[Pacemaker] location constraint question

Pavlos Parissis pavlos.parissis at gmail.com
Mon Sep 20 09:53:54 EDT 2010


Hi,
I am having trouble understanding why my DRBD ms resources want a
location constraint. My setup is quite simple:
3 nodes
2 resource groups, each holding an IP, a filesystem and a dummy resource
2 DRBD primitive resources
2 master/slave resources, one per DRBD device.

The objective is for pbx_service_01 to use node-01 as its primary and
node-03 as its secondary, and for pbx_service_02 to use node-02 as its
primary and node-03 as its secondary; in other words, an N+1
architecture. With the configuration [1] everything works as I want
[2]. But I found a comment from Lars Ellenberg [3] which basically
says not to put location constraints on the ms DRBD resource directly.
So, I deleted the PrimaryNode-drbd_01 and SecondaryNode-drbd_01
location constraints, just to see the impact on one of the two
resource groups.
I noticed that only ip_01 from the pbx_service_01 resource group was
started, and not fs_01 and pbx_01 (pbx_01 not starting is expected
because of the order constraint).
I thought that having a location constraint for the resource group
would be enough.
What have I understood incorrectly?
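
(A sketch of the kind of commands involved, in case it is useful:
deleting a constraint with the crm shell and then checking the
allocation scores with ptest. The exact invocations below are only
illustrative.)

[root@node-01 ~]# crm configure delete PrimaryNode-drbd_01
[root@node-01 ~]# crm configure delete SecondaryNode-drbd_01
[root@node-01 ~]# ptest -sL | grep drbd_01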

BTW, why does crm_mon report only 4 resources?

Thanks,
Pavlos




[1]
[root@node-01 ~]# crm configure show
node $id="b8ad13a6-8a6e-4304-a4a1-8f69fa735100" node-02
node $id="d5557037-cf8f-49b7-95f5-c264927a0c76" node-01
node $id="e5195d6b-ed14-4bb3-92d3-9105543f9251" node-03
primitive drbd_01 ocf:linbit:drbd \
        params drbd_resource="drbd_pbx_service_1" \
        op monitor interval="30s"
primitive drbd_02 ocf:linbit:drbd \
        params drbd_resource="drbd_pbx_service_2" \
        op monitor interval="30s"
primitive fs_01 ocf:heartbeat:Filesystem \
        params device="/dev/drbd1" directory="/pbx_service_01" fstype="ext3"
primitive fs_02 ocf:heartbeat:Filesystem \
        params device="/dev/drbd2" directory="/pbx_service_02" fstype="ext3"
primitive ip_01 ocf:heartbeat:IPaddr2 \
        params ip="10.10.10.10" cidr_netmask="28" broadcast="10.10.10.127" \
        op monitor interval="5s"
primitive ip_02 ocf:heartbeat:IPaddr2 \
        params ip="10.10.10.11" cidr_netmask="28" broadcast="10.10.10.127" \
        op monitor interval="5s"
primitive pbx_01 ocf:heartbeat:Dummy \
        params state="/pbx_service_01/Dummy.state"
primitive pbx_02 ocf:heartbeat:Dummy \
        params state="/pbx_service_02/Dummy.state"
group pbx_service_01 ip_01 fs_01 pbx_01
group pbx_service_02 ip_02 fs_02 pbx_02
ms ms-drbd_01 drbd_01 \
        meta master-max="1" master-node-max="1" clone-max="2" clone-node-max="1" notify="true"
ms ms-drbd_02 drbd_02 \
        meta master-max="1" master-node-max="1" clone-max="2" clone-node-max="1" notify="true"
location PrimaryNode-drbd_01 ms-drbd_01 100: node-01
location PrimaryNode-drbd_02 ms-drbd_02 100: node-02
location PrimaryNode-pbx_service_01 pbx_service_01 200: node-01
location PrimaryNode-pbx_service_02 pbx_service_02 200: node-02
location SecondaryNode-drbd_01 ms-drbd_01 0: node-03
location SecondaryNode-drbd_02 ms-drbd_02 0: node-03
location SecondaryNode-pbx_service_01 pbx_service_01 10: node-03
location SecondaryNode-pbx_service_02 pbx_service_02 10: node-03
colocation fs-on-drbd_01 inf: fs_01 ms-drbd_01:Master
colocation fs-on-drbd_02 inf: fs_02 ms-drbd_02:Master
colocation pbx_01-with-fs_01 inf: pbx_01 fs_01
colocation pbx_01-with-ip_01 inf: pbx_01 ip_01
colocation pbx_02-with-fs_02 inf: pbx_02 fs_02
colocation pbx_02-with-ip_02 inf: pbx_02 ip_02
order fs_01-after-drbd_01 inf: ms-drbd_01:promote fs_01:start
order fs_02-after-drbd_02 inf: ms-drbd_02:promote fs_02:start
order pbx_01-after-fs_01 inf: fs_01 pbx_01
order pbx_01-after-ip_01 inf: ip_01 pbx_01
order pbx_02-after-fs_02 inf: fs_02 pbx_02
order pbx_02-after-ip_02 inf: ip_02 pbx_02
property $id="cib-bootstrap-options" \
        dc-version="1.0.9-89bd754939df5150de7cd76835f98fe90851b677" \
        cluster-infrastructure="Heartbeat" \
        stonith-enabled="false" \
        symmetric-cluster="false"
rsc_defaults $id="rsc-options" \
        resource-stickiness="1000"




[2]
[root@node-03 ~]# crm_mon -1
============
Last updated: Mon Sep 20 15:36:46 2010
Stack: Heartbeat
Current DC: node-03 (e5195d6b-ed14-4bb3-92d3-9105543f9251) - partition with quorum
Version: 1.0.9-89bd754939df5150de7cd76835f98fe90851b677
3 Nodes configured, unknown expected votes
4 Resources configured.
============

Online: [ node-03 node-01 node-02 ]

 Resource Group: pbx_service_01
     ip_01      (ocf::heartbeat:IPaddr2):       Started node-01
     fs_01      (ocf::heartbeat:Filesystem):    Started node-01
     pbx_01     (ocf::heartbeat:Dummy): Started node-01
 Resource Group: pbx_service_02
     ip_02      (ocf::heartbeat:IPaddr2):       Started node-02
     fs_02      (ocf::heartbeat:Filesystem):    Started node-02
     pbx_02     (ocf::heartbeat:Dummy): Started node-02
 Master/Slave Set: ms-drbd_01
     Masters: [ node-01 ]
     Slaves: [ node-03 ]
 Master/Slave Set: ms-drbd_02
     Masters: [ node-02 ]
     Slaves: [ node-03 ]

[3] http://www.mail-archive.com/pacemaker@oss.clusterlabs.org/msg04105.html

> not have location preference constraints on the master role directly,
> or give them a very low score. Recommended would be to place a
> location preference, if needed, not on DRBD Master role, but on some
> depending service (Filesystem for example)

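If I read that recommendation correctly, for this setup it would
translate into something like the sketch below. This is untested; the
constraint names (drbd_01-allowed-*) are my own, and because
symmetric-cluster="false" I assume the ms resource still needs
zero-score location constraints just to be allowed on its nodes:

# allow the DRBD master/slave resource on its nodes, without a Master preference
location drbd_01-allowed-node-01 ms-drbd_01 0: node-01
location drbd_01-allowed-node-03 ms-drbd_01 0: node-03
# keep the node preference on the depending resources (here the group) and
# let the existing colocation/order on ms-drbd_01:Master pull the Master along
location PrimaryNode-pbx_service_01 pbx_service_01 200: node-01
location SecondaryNode-pbx_service_01 pbx_service_01 10: node-03
colocation fs-on-drbd_01 inf: fs_01 ms-drbd_01:Master
order fs_01-after-drbd_01 inf: ms-drbd_01:promote fs_01:start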



