[Pacemaker] Resource colocation with a clone

Andrew Beekhof andrew at beekhof.net
Tue Aug 18 10:38:28 EDT 2009


On Tue, Aug 18, 2009 at 3:33 PM, Brice Figureau
<brice+ha at daysofwonder.com> wrote:
> On Tue, 2009-08-18 at 14:21 +0200, Andrew Beekhof wrote:
>> What was the actual vs. expected behavior?
>
> With the following (simple) configuration on a 2-node cluster:
>
> primitive vip1 ocf:heartbeat:IPaddr2 \
>        params ip="172.16.10.165" nic="eth1" cidr_netmask="24" \
>        op monitor interval="10s"
> primitive vip2 ocf:heartbeat:IPaddr2 \
>        params ip="172.16.10.164" nic="eth1" cidr_netmask="24" \
>        op monitor interval="10s"
>
> There is absolutely no stickiness in this config.
> vip1 and vip2 are automatically spread out, one on each node
> (which is nice).
>
> If I put node1 into standby, vip1 moves to node2, as intended, and
> both vip1 and vip2 then run on node2.
>
> Now, I add:
>
> colocation vip_s -100: vip2 vip1
>
> When I put node1 into standby, vip1 moves to node2, and vip2 stops.
> It acts exactly as if I had used a -INFINITY colocation score.
>
> I thought scores other than INFINITY could be used to give some
> hints to the CRM regarding placement of running resources. But in
> this case, it acts as if it were a mandatory constraint.
>
> What's wrong?

There was no positive preference to begin with.
As soon as a node's score for a resource goes negative, that node is
not allowed to host it.
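
To sketch the arithmetic in your case (assuming node2 is vip2's only
remaining candidate once node1 goes into standby, since vip1 lands
there and triggers the -100 colocation penalty):

    without a base preference:   0 + (-100) = -100  -> vip2 must stop
    with a base score of 200:  200 + (-100) =  100  -> vip2 may stay

Anything below zero means the node may not host the resource at all.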

Had you given the nodes a starting score of 200 for each resource, it
would have worked as you expected.
(One day I need to make the nodes' starting score configurable)
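
In the meantime, you can supply that base score yourself with location
constraints, e.g. (a sketch; "node1" and "node2" stand in for your
real node names):

location vip1-node1 vip1 200: node1
location vip1-node2 vip1 200: node2
location vip2-node1 vip2 200: node1
location vip2-node2 vip2 200: node2

With those in place, the -100 colocation only reduces vip2's score on
the node running vip1 instead of pushing it below zero.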



