[Pacemaker] pacemaker resource constraints
Joe Healy
joehealy at gmail.com
Tue Mar 23 12:51:46 UTC 2010
Rather than expressing it directly, would it be possible to create a
resource (or anything else) that runs on failover and modifies the
configuration so that the resource sticks to its current node?
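
For example, an untested sketch (rsc1 and the constraint id pin-rsc1 are
placeholders) of what such a hook could run after failover:

# find the node the resource landed on
NODE=$(crm_resource --resource rsc1 --locate | awk '{print $NF}')
# pin it there with an infinite-score location preference
crm configure location pin-rsc1 rsc1 inf: $NODE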
Cheers,
Joe
On Tue, Mar 23, 2010 at 11:44 PM, Dejan Muhamedagic <dejanmm at fastmail.fm> wrote:
> Hi,
>
> On Mon, Mar 22, 2010 at 01:18:35PM -0700, Alan Jones wrote:
> > Well, I guess my configuration is not as common.
> > In my case, one of these resources, say resource A, suffers greater
> > disruption if it is moved.
> > So, after a failover I would prefer that resource B move, reversing
> > the node placement.
> > Is this possible to express?
>
> Not sure, but don't think so.
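>
> One untested sketch that might come close, though: make A much stickier
> than B, so the anti-colocation penalty moves B instead (rscA, rscB and
> the constraint id "apart" are placeholders, and it assumes finite
> location preferences such as the 100-point ones below):
>
> # pin A to whichever node it currently occupies
> crm resource meta rscA set resource-stickiness INFINITY
> # B's placement depends on A; the finite negative score still lets
> # them share a node while the other node is down
> crm configure colocation apart -200: rscB rscA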
>
> Thanks,
>
> Dejan
>
> > Alan
> >
> > On Mon, Mar 22, 2010 at 11:10 AM, Dejan Muhamedagic <dejanmm at fastmail.fm> wrote:
> >
> > > Hi,
> > >
> > > On Mon, Mar 22, 2010 at 09:29:50AM -0700, Alan Jones wrote:
> > > > Friends,
> > > > I have what should be a simple goal. Two resources to run on two nodes.
> > > > I'd like to configure them to run on separate nodes when available,
> > > > i.e. active-active, and to have them run together on either node when
> > > > one fails, i.e. failover.
> > > > Up until this point I have assumed that this would be a base use case
> > > > for Pacemaker; however, it seems from the discussion on:
> > > > http://wiki.lustre.org/index.php/Using_Pacemaker_with_Lustre
> > > > ... that it is not (see below). Any ideas?
> > >
> > > Why not just two location constraints (aka node preferences):
> > >
> > > location l1 rsc1 100: node1
> > > location l2 rsc2 100: node2
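> > >
> > > With a finite score like 100 (rather than inf), each resource still
> > > prefers its own node but can fail over to the other when one node dies.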
> > >
> > > Thanks,
> > >
> > > Dejan
> > >
> > > > Alan
> > > >
> > > > *Note:* Use care when setting up your point system. You can use the
> > > > point system if your cluster has at least three nodes or if the
> > > > resource can acquire points from other constraints. However, in a
> > > > system with only two nodes and no way to acquire points, the
> > > > constraint in the example above will result in an inability to
> > > > migrate a resource from a failed node.
> > > >
> > > > The example they refer to is similar to yours:
> > > >
> > > > # crm configure colocation colresOST1resOST2 -100: resOST1 resOST2
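> > > >
> > > > For contrast, the hard variant of the same constraint would be
> > > > (same resource ids, just -inf instead of -100):
> > > >
> > > > # crm configure colocation colresOST1resOST2 -inf: resOST1 resOST2
> > > >
> > > > With -inf the two resources may never share a node, so on a two-node
> > > > cluster the survivor could not take both over after a failure.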
> > >