[Pacemaker] Using "avoids" location constraint

Andrew Morgan andrewjamesmorgan at gmail.com
Mon Jul 8 09:35:09 EDT 2013


Thanks Florian.

The problem I have is that I'd like to define an HA configuration that
isn't dependent on a specific set of fencing hardware (or on any fencing
hardware at all, for that matter), and since the stack includes quorum
capability I'm hoping that this is an option.

I've not been able to find any quorum commands within pcs; the closest
I've found is setting a node to "standby", but when I do that the node
appears to lose its quorum vote, which seems at odds with the help text:

standby <node>
        Put specified node into standby mode (the node specified will no
longer be able to host resources)
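
For what it's worth, here is how I've been checking the vote count on
this cman-based stack (a rough sketch; "pcs cluster standby" is the form
my pcs build offers, and the grep is just a filter):

    # put the third node into standby; per the help text it should
    # keep its quorum vote
    pcs cluster standby drbd3.localdomain

    # ask cman for its view of votes and quorum
    cman_tool status | grep -i -e votes -e quorum

    # cross-check Pacemaker's view of the membership
    crm_mon -1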

Regards, Andrew.


On 8 July 2013 10:23, Florian Crouzat <gentoo at floriancrouzat.net> wrote:

> On 08/07/2013 09:49, Andrew Morgan wrote:
>
>> I'm attempting to implement a 3-node cluster where only 2 nodes are
>> there to actually run the services and the 3rd is there to form a quorum
>> (so that the cluster stays up when one of the 2 'workload' nodes fails).
>>
>> To this end, I added a location "avoids" constraint so that the services
>> (including drbd) don't get placed on the 3rd node (drbd3)...
>>
>> pcs constraint location ms_drbd avoids drbd3.localdomain
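>>
>> To double-check that the constraint is actually registered, I list the
>> constraints back (plain "pcs constraint" is what my pcs build offers;
>> the exact listing subcommand may differ between pcs versions):
>>
>>     # an "avoids" constraint should appear as a location rule with
>>     # score -INFINITY for drbd3.localdomain
>>     pcs constraint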
>>
>> The problem is that this constraint doesn't appear to be enforced, and I
>> see failed actions where Pacemaker has attempted to start the services
>> on drbd3. In most cases I can just ignore the error, but if I attempt to
>> migrate the services using "pcs resource move" then it causes a fatal
>> startup loop for drbd. If I migrate by adding an extra location
>> constraint preferring the other workload node, then I can migrate OK.
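>>
>> For reference, the working migration looks roughly like this
>> (drbd2.localdomain is a stand-in for the other workload node, and the
>> constraint id in the last step comes from the constraint listing):
>>
>>     # temporarily prefer the other workload node
>>     pcs constraint location ms_drbd prefers drbd2.localdomain
>>
>>     # once the resources have moved, drop the temporary constraint
>>     pcs constraint remove <constraint-id>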
>>
>> I'm using Oracle Linux 6.4; drbd83-utils 8.3.11; corosync 1.4.1; cman
>> 3.0.12.1; Pacemaker 1.1.8 & pcs 1.1.8
>>
>>
> I'm no quorum-node expert, but I believe your initial design isn't optimal.
> You could probably even run with only two (real) nodes and
> no-quorum-policy=ignore plus fencing (for data integrity) [1].
> This is what most (all?) people with two-node clusters do.
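>
> A minimal sketch of that two-node variant (these are standard Pacemaker
> cluster properties; the stonith resource is only an illustration, so
> substitute an agent and parameters that match your hardware):
>
>     pcs property set no-quorum-policy=ignore
>     pcs property set stonith-enabled=true
>
>     # hypothetical fencing resource; agent and options are placeholders
>     pcs stonith create fence-node1 fence_ipmilan ...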
>
> But if you really believe you need to be quorate, then I think you need to
> define your third node as a quorum node in corosync/cman (I'm not sure how
> on EL6.4 with CMAN, and I cannot find a valid link). IIRC, with such a
> definition you won't need the location constraints.
>
>
> [1] http://clusterlabs.org/doc/en-US/Pacemaker/1.1-plugin/html/Clusters_from_Scratch/_perform_a_failover.html#_quorum_and_two_node_clusters
>
>
>
> --
> Cheers,
> Florian Crouzat
>