[Pacemaker] pacemaker resource constraints

Alan Jones falancluster at gmail.com
Tue Mar 23 18:47:10 EDT 2010


The following rules give me the behavior I was looking for:

primitive master ocf:pacemaker:Dummy meta resource-stickiness="INFINITY" is-managed="true"
location l-master_a master 1: fc12-a
location l-master_b master 1: fc12-b
primitive worker ocf:pacemaker:Dummy
location l-worker_a worker 1: fc12-a
location l-worker_b worker 1: fc12-b
colocation colo-master_worker -1: worker master
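
To sanity-check the placement, you can dump the allocation scores the
policy engine computed. A sketch only, assuming Pacemaker 1.0.x (later
releases replace ptest with crm_simulate; flags may differ by version):

# show allocation scores from the live CIB
ptest -sL
# equivalent on newer builds:
# crm_simulate -sL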

To recap, the goal is an active-active two-node cluster where "master"
is sticky, and "master" and "worker" anti-colocate when possible for
performance.
Note that I had to give each resource a positive location score on each
node to overcome the negative colocation score; otherwise the two
resources could not both run on one node after a failover.
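
For the record, here is the arithmetic as I understand it (assuming the
default symmetric-cluster setting, where every node starts at score 0):

# worker's score on the node already running master:
#   +1 (location) - 1 (colocation) =  0  -> placement allowed
# without the location points it would be:
#    0 (default)  - 1 (colocation) = -1  -> worker cannot join master
#                                          on the surviving node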
If there is a more elegant solution, let me know.
Alan

On Tue, Mar 23, 2010 at 8:24 AM, Andrew Beekhof <andrew at beekhof.net> wrote:

> On Mon, Mar 22, 2010 at 9:18 PM, Alan Jones <falancluster at gmail.com>
> wrote:
> > Well, I guess my configuration is not as common.
> > In my case, one of these resources, say resource A, suffers greater
> > disruption if it is moved.
> > So, after a failover I would prefer that resource B move, reversing
> > the node placement.
> > Is this possible to express?
>
> Make A stickier than B.
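>
> For example (a sketch only; the agents and score values below are
> placeholders, not tested configuration):
>
> primitive A ocf:pacemaker:Dummy meta resource-stickiness="200"
> primitive B ocf:pacemaker:Dummy meta resource-stickiness="0"
>
> With A stickier, the cluster prefers to move B whenever the
> anti-colocation has to give way.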
>
> Please google for the following keywords:
>    site:clusterlabs.org resource-stickiness
>
> > Alan
> >
> > On Mon, Mar 22, 2010 at 11:10 AM, Dejan Muhamedagic
> > <dejanmm at fastmail.fm> wrote:
> >>
> >> Hi,
> >>
> >> On Mon, Mar 22, 2010 at 09:29:50AM -0700, Alan Jones wrote:
> >> > Friends,
> >> > I have what should be a simple goal: two resources to run on two
> >> > nodes. I'd like to configure them to run on separate nodes when
> >> > available, i.e. active-active, and provide for them to run together
> >> > on either node when one fails, i.e. failover.
> >> > Up until this point I have assumed that this would be a base use
> >> > case for Pacemaker; however, it seems from the discussion on:
> >> > http://wiki.lustre.org/index.php/Using_Pacemaker_with_Lustre
> >> > ... that it is not (see below).  Any ideas?
> >>
> >> Why not just two location constraints (aka node preferences):
> >>
> >> location l1 rsc1 100: node1
> >> location l2 rsc2 100: node2
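> >>
> >> (A sketch of the failover behavior this gives, assuming the default
> >> symmetric-cluster=true so every node starts at score 0:)
> >>
> >> # node1 up:   rsc1 runs on node1 (score 100 beats 0 on node2)
> >> # node1 down: rsc1 falls back to node2 (score 0 there is the best
> >> #             remaining option), so both resources run on node2
> >> #             until node1 returns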
> >>
> >> Thanks,
> >>
> >> Dejan
> >>
> >> > Alan
> >> >
> >> > *Note:* Use care when setting up your point system. You can use
> >> > the point system if your cluster has at least three nodes or if
> >> > the resource can acquire points from other constraints. However,
> >> > in a system with only two nodes and no way to acquire points, the
> >> > constraint in the example above will result in an inability to
> >> > migrate a resource from a failed node.
> >> >
> >> > The example they refer to is similar to yours:
> >> >
> >> > # crm configure colocation colresOST1resOST2 -100: resOST1 resOST2
> >>