[Pacemaker] N+1 and equal priority resource groups
Andrew Beekhof
andrew at beekhof.net
Tue Apr 22 05:10:20 UTC 2014
On 17 Apr 2014, at 8:36 am, Igal Baevsky <ibaevsky at marketfactory.com> wrote:
> Hi All,
>
> GOAL: Here is the scenario I'm trying to achieve:
>
> * N+1 asymmetric cluster - I define which nodes I want my resource groups to
> run on.
> * Equal priority resource groups (colocation and order).
> No active resource group should move or go down to make space for another
> resource group.
> * No two resource groups can share a node at any point in time!
> * No "fallback" when the primary node comes back after a failure.
> The "recovered" node should become the backup for the rest of the active
> groups.
>
> PROBLEM: When "nyc02esx04.mf" (the "+1" node) is down and one of the active
> nodes is offline, it looks like the order of the resource groups in the
> "anti-Cluster" colocation is the deciding factor in which group remains active.
Correct.
Given colocate(A, B, -inf), in order to find out where A can go, we need to know where B is (going to go).
Even if we made it so that this was no longer the case (which at a stretch might even be possible after all these years), there would still be an implicit ordering from the order in which they appear in the configuration.
Unfortunately the human brain is still significantly more intelligent than Pacemaker.
What seems obvious to us is not at all obvious to it.
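
To make that implicit ordering concrete: the pairwise form of what you are trying to express would look something like the following (a sketch only, the constraint names are made up):

colocation anti-WX -inf: W-Cluster X-Cluster
colocation anti-WY -inf: W-Cluster Y-Cluster
colocation anti-XY -inf: X-Cluster Y-Cluster

In each pair the first group is the dependent one and is placed relative to the second, so with these three constraints Y-Cluster gets placed first, then X-Cluster, then W-Cluster; when there are fewer usable nodes than groups, W-Cluster is the one that ends up stopped. Shuffling the pairs only changes which group that is, it does not remove the prioritization.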
Can all three groups run on all nodes? Or can each run only on its own node and the shared backup?
> How do I achieve a scenario where I have a -INF colocation constraint
> without prioritizing one group over the others?
> I tried colocation anti-Cluster -inf: (W-Cluster X-Cluster Y-Cluster),
> but this is completely wrong.
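
If the real requirement is simply "never more than one group per node, and no group is more important than another", one alternative worth experimenting with is utilization-based placement instead of anti-colocation constraints. A rough, untested sketch ("slots" is just an arbitrary attribute name chosen for illustration): give every node a capacity of one slot, let one member of each group consume a slot, and set placement-strategy to something other than the default (the default ignores utilization):

node $id="174351461" nyc02esx01.mf utilization slots="1"
node $id="174351462" nyc02esx02.mf utilization slots="1"
node $id="174351463" nyc02esx03.mf utilization slots="1"
node $id="174351474" nyc02esx04.mf utilization slots="1"

primitive W-Cluster-ManagementIP ocf:heartbeat:IPaddr2 \
        params ip="10.100.100.110" cidr_netmask="24" \
        op monitor interval="1s" \
        meta target-role="Started" \
        utilization slots="1"

property placement-strategy="balanced"

(and the same utilization line on X-Cluster-ManagementIP and Y-Cluster-ManagementIP). Capacity is a hard limit under any non-default placement-strategy, so no two groups can ever be placed on the same node, and there is no constraint whose written order decides which group gives way. Your location scores still pick each group's preferred node and the stickiness still prevents fallback. I have not tested this against your exact scenario, so treat it as a starting point.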
>
>
>
> CONFIG: In the configuration below I have 3 resource groups (W-Cluster,
> Y-Cluster and X-Cluster) and 4 identical nodes.
> (I'm omitting Stonith configuration from the post.)
>
>
> node $id="174351461" nyc02esx01.mf
> node $id="174351462" nyc02esx02.mf
> node $id="174351463" nyc02esx03.mf
> node $id="174351474" nyc02esx04.mf
>
> primitive W-Cluster-Global-Partition ocf:heartbeat:Filesystem \
> params device="/dev/disk/by-uuid/64e795ab-b239-4cf4-a027-
> 8e6ecafbfe2f" directory="/app/global" fstype="ext4" run_fsck="force" \
> meta is-managed="true" target-role="Started"
> primitive W-Cluster-ManagementIP ocf:heartbeat:IPaddr2 \
> params ip="10.100.100.110" cidr_netmask="24" \
> op monitor interval="1s" \
> meta target-role="Started"
> primitive W-Cluster-iSCSI-LUN ocf:heartbeat:iscsi \
> params portal="10.1.60.20" target="iqn.2001-05.com.equallogic:0-
> 8a0906-4ecb24703-e9a8f08159f50a56-w-cluster" \
> meta target-role="Started"
>
> primitive X-Cluster-Global-Partition ocf:heartbeat:Filesystem \
> params device="/dev/disk/by-uuid/ce1cdbf3-c48b-44d2-b99a-
> 2cbd607338e0" directory="/app/global" fstype="ext4" run_fsck="force" \
> meta is-managed="true" target-role="Started"
> primitive X-Cluster-ManagementIP ocf:heartbeat:IPaddr2 \
> params ip="10.100.100.112" cidr_netmask="24" \
> op monitor interval="1s" \
> meta target-role="Started"
> primitive X-Cluster-iSCSI-LUN ocf:heartbeat:iscsi \
> params portal="10.1.60.20" target="iqn.2001-05.com.equallogic:0-
> 8a0906-b3cb24703-62a8f0815a250aaa-x-cluster" \
> meta target-role="Started"
>
> primitive Y-Cluster-Global-Partition ocf:heartbeat:Filesystem \
> params device="/dev/disk/by-uuid/26aaaea4-d249-45d5-b0e0-
> fb097e5262e9" directory="/app/global" fstype="ext4" run_fsck="force" \
> meta is-managed="true" target-role="Started"
> primitive Y-Cluster-ManagementIP ocf:heartbeat:IPaddr2 \
> params ip="10.100.100.113" cidr_netmask="24" \
> op monitor interval="1s" \
> meta target-role="Started"
> primitive Y-Cluster-iSCSI-LUN ocf:heartbeat:iscsi \
> params portal="10.1.60.20" target="iqn.2001-05.com.equallogic:0-
> 8a0906-2beb24703-5308f0815b05181d-y-cluster" \
> meta target-role="Started"
>
>
> group W-Cluster W-Cluster-ManagementIP W-Cluster-iSCSI-LUN W-Cluster-Global-Partition \
> meta target-role="Started" \
> meta resource-stickiness="101"
> group X-Cluster X-Cluster-ManagementIP X-Cluster-iSCSI-LUN X-Cluster-Global-Partition \
> meta target-role="Started" \
> meta resource-stickiness="101"
> group Y-Cluster Y-Cluster-ManagementIP Y-Cluster-iSCSI-LUN Y-Cluster-Global-Partition \
> meta target-role="Started" \
> meta resource-stickiness="101"
>
> location W-Cluster-nyc02esx01 W-Cluster 100: nyc02esx01.mf
> location W-Cluster-nyc02esx02 W-Cluster 50: nyc02esx02.mf
> location W-Cluster-nyc02esx03 W-Cluster 50: nyc02esx03.mf
> location W-Cluster-nyc02esx04 W-Cluster 0: nyc02esx04.mf
> location X-Cluster-nyc02esx01 X-Cluster 50: nyc02esx01.mf
> location X-Cluster-nyc02esx02 X-Cluster 100: nyc02esx02.mf
> location X-Cluster-nyc02esx03 X-Cluster 50: nyc02esx03.mf
> location X-Cluster-nyc02esx04 X-Cluster 0: nyc02esx04.mf
> location Y-Cluster-nyc02esx01 Y-Cluster 50: nyc02esx01.mf
> location Y-Cluster-nyc02esx02 Y-Cluster 50: nyc02esx02.mf
> location Y-Cluster-nyc02esx03 Y-Cluster 100: nyc02esx03.mf
> location Y-Cluster-nyc02esx04 Y-Cluster 0: nyc02esx04.mf
>
> colocation anti-Cluster -inf: W-Cluster X-Cluster Y-Cluster
>
> property $id="cib-bootstrap-options" \
> dc-version="1.1.12-1.el6-d9fbba5" \
> cluster-infrastructure="corosync" \
> stonith-enabled="true" \
> symmetric-cluster="false" \
> last-lrm-refresh="1397685792"
> rsc_defaults $id="rsc-options" \
> resource-stickiness="1"
>
>
>
> Thanks!
>
>
> _______________________________________________
> Pacemaker mailing list: Pacemaker at oss.clusterlabs.org
> http://oss.clusterlabs.org/mailman/listinfo/pacemaker
>
> Project Home: http://www.clusterlabs.org
> Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
> Bugs: http://bugs.clusterlabs.org