[Pacemaker] Resource colocation with a clone

Brice Figureau brice+ha at daysofwonder.com
Tue Aug 18 09:33:55 EDT 2009


On Tue, 2009-08-18 at 14:21 +0200, Andrew Beekhof wrote:
> On Mon, Aug 17, 2009 at 8:09 PM, Brice
> Figureau<brice-puppet at daysofwonder.com> wrote:
> > On 17/08/09 14:22, Andrew Beekhof wrote:
> >>
> >> On Thu, Aug 13, 2009 at 6:14 PM, Brice
> >> Figureau<brice+ha at daysofwonder.com> wrote:
> >>>
> >>> On Thu, 2009-08-13 at 14:00 +0200, Andrew Beekhof wrote:
> >>>>
> >>>> On Thu, Aug 13, 2009 at 1:31 PM, Brice
> >>>> Figureau<brice+ha at daysofwonder.com> wrote:
> >>>>
> >>>>> I was wondering if colocating with a clone would work,
> >>>>
> >>>> That allows the resource to keep running as long as at least one node
> >>>> has a copy of the clone running.
> >>>> Not sure if that helps in your scenario
> >>>
> >>> Are you sure?
> >>
> >> very
> >
> > Indeed this helps. What I'm not sure about, and can't find a definite
> > answer on, is whether, if the clone instance running on the same node
> > as the vip (colocated with the clone) fails (i.e. it reaches
> > migration-threshold), the colocated resource will move to another node
> > where another member of said clone still runs.
> 
> Naturally :-)
> If the instance fails it will be stopped and the colocation constraint
> will ensure the VIP is moved.
> 
> >
> > That, and my other question (in another thread here) about setting a
> > colocation score lower than INFINITY (or higher than -INFINITY), which
> > doesn't seem to work as advertised (or I didn't understand it, which is
> > quite possible :-)).
> 
> What was the actual vs. expected behavior?
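
(For context, colocating a resource with a clone, as discussed above,
would look something like this in the crm shell -- the apache resource
and clone names below are made up, just a sketch:)

primitive web ocf:heartbeat:apache \
        op monitor interval="10s"
clone web_clone web
colocation vip_on_web inf: vip1 web_clone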

With the following (simple) configuration on a 2-node cluster:

primitive vip1 ocf:heartbeat:IPaddr2 \
        params ip="172.16.10.165" nic="eth1" cidr_netmask="24" \
        op monitor interval="10s"
primitive vip2 ocf:heartbeat:IPaddr2 \
        params ip="172.16.10.164" nic="eth1" cidr_netmask="24" \
        op monitor interval="10s"

There is absolutely no stickiness in this config.
vip1 and vip2 are automatically spread across the nodes, one on each
(which is nice).
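
(If I wanted stickiness, I believe I could add something like the
following -- shown only for illustration, it is not in my config:)

rsc_defaults resource-stickiness="100"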

If I put node1 in standby, vip1 moves to node2 as intended, and both
vip1 and vip2 run on node2.

Now, I add:

colocation vip_s -100: vip2 vip1

When I put node1 in standby, vip1 moves to node2, and vip2 stops.
It acts exactly as if I had used a -INFINITY colocation score.
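
(To check what the policy engine actually computes, I can dump the
allocation scores from the live CIB -- assuming I'm reading the ptest
man page correctly:)

# ptest -sL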

I thought scores other than INFINITY could be used to give the CRM hints
about the placement of running resources, but in this case the
constraint acts as if it were mandatory.
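
(The way I read the documentation, the scores should work out roughly
like this -- my understanding, possibly wrong:

  vip2 on node2: 0 (no stickiness) + -100 (colocation penalty) = -100
  -100 is finite, i.e. greater than -INFINITY

so vip2 should be allowed to keep running on node2 instead of being
stopped.)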

What's wrong?
-- 
Brice Figureau
My Blog: http://www.masterzen.fr/




