[Pacemaker] Using "avoids" location constraint

Andrew Beekhof andrew at beekhof.net
Wed Jul 10 07:02:00 EDT 2013


On 09/07/2013, at 3:59 PM, Andrew Morgan <andrewjamesmorgan at gmail.com> wrote:

> 
> On 9 July 2013 04:11, Andrew Beekhof <andrew at beekhof.net> wrote:
> 
> On 08/07/2013, at 11:35 PM, Andrew Morgan <andrewjamesmorgan at gmail.com> wrote:
> 
> > Thanks Florian.
> >
> > The problem I have is that I'd like to define an HA configuration that isn't dependent on a specific set of fencing hardware (or on any fencing hardware at all, for that matter). Since the stack includes quorum capability, I'm hoping that this is an option.
> >
> > I've not been able to find any quorum commands within pcs; the closest I've found is setting a node to "standby", but when I do that, the node appears to have lost its quorum vote
> 
> This is not the case.
> 
> My test was to have 3 nodes, with node 3 in standby. My resources were running on node 2. I then dropped the network connection on node 2, hoping that node 1 and node 3 would maintain quorum and that the resources would start on node 1 - instead, the resources were stopped.

I'd like to see logs of that.  Because I'm having a really hard time believing it.
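
For reference, standby only stops a node from hosting resources; it does not
remove the node's quorum vote. A quick sanity check (a sketch, assuming a
cman-based stack and your node names):

  pcs cluster standby drbd3.localdomain   # drbd3 stops hosting resources
  cman_tool nodes                         # all three nodes should still show as members
  cman_tool status                        # quorum/vote counts should be unchanged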

> 
> I have quorum enabled, but pcs status says that the number of votes required is unknown - is there something else I need to configure?

Something sounds very wrong with your cluster.
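
If cman doesn't know the expected vote count, pcs won't either. One thing to
check (an illustrative fragment; adjust to your own cluster.conf) is that the
expected votes are set, or derivable from the node list, in
/etc/cluster/cluster.conf:

  <cman expected_votes="3"/>

cman_tool status should then report consistent expected and total votes.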

> 
> > - this seems at odds with the help text....
> >
> > standby <node>
> >         Put specified node into standby mode (the node specified will no longer be able to host resources)
> >
> > Regards, Andrew.
> >
> >
> > On 8 July 2013 10:23, Florian Crouzat <gentoo at floriancrouzat.net> wrote:
> > On 08/07/2013 09:49, Andrew Morgan wrote:
> >
> > I'm attempting to implement a 3-node cluster where only 2 nodes actually
> > run the services and the 3rd exists to provide quorum (so that the cluster
> > stays up when one of the 2 'workload' nodes fails).
> >
> > To this end, I added a location 'avoids' constraint so that the services
> > (including drbd) don't get placed on the 3rd node (drbd3)...
> >
> > pcs constraint location ms_drbd avoids drbd3.localdomain
> >
> > the problem is that this constraint doesn't appear to be enforced, and I
> > see failed actions where Pacemaker has attempted to start the services
> > on drbd3. In most cases I can just ignore the error, but if I attempt to
> > migrate the services using "pcs resource move" then it causes a fatal
> > startup loop for drbd. If I migrate by adding an extra location constraint
> > preferring the other workload node, then I can migrate OK.
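> >
> > For example, the workaround looks something like this (node name and
> > constraint id are illustrative):
> >
> >   pcs constraint location ms_drbd prefers drbd2.localdomain
> >   # ...wait for the resources to move, then find and drop the constraint:
> >   pcs constraint
> >   pcs constraint remove <constraint id>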
> >
> > I'm using Oracle Linux 6.4; drbd83-utils 8.3.11; corosync 1.4.1; cman
> > 3.0.12.1; Pacemaker 1.1.8 & pcs 1.1.8
> >
> >
> > I'm no quorum-node expert but I believe your initial design isn't optimal.
> > You could probably even run with only two (real) nodes, using no-quorum-policy=ignore plus fencing (for data integrity) [1].
> > This is what most (all?) people with two-node clusters do.
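> >
> > For example (a minimal sketch - the fence devices themselves still have to
> > be configured and tested separately):
> >
> >   pcs property set no-quorum-policy=ignore
> >   pcs property set stonith-enabled=true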
> >
> > But if you really believe you need to be quorate, then I think you need to define your third node as a quorum node in corosync/cman (I'm not sure how on EL6.4 with CMAN, and I cannot find a valid link). IIRC, with such a definition you won't need the location constraints.
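> >
> > On the membership side, a three-node cluster.conf fragment might look
> > roughly like this (names and ids illustrative; each node contributes one
> > vote by default, and Pacemaker constraints keep resources off the third):
> >
> >   <clusternodes>
> >     <clusternode name="drbd1.localdomain" nodeid="1"/>
> >     <clusternode name="drbd2.localdomain" nodeid="2"/>
> >     <clusternode name="drbd3.localdomain" nodeid="3"/>
> >   </clusternodes>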
> >
> >
> > [1] http://clusterlabs.org/doc/en-US/Pacemaker/1.1-plugin/html/Clusters_from_Scratch/_perform_a_failover.html#_quorum_and_two_node_clusters
> >
> >
> >
> > --
> > Cheers,
> > Florian Crouzat