[Pacemaker] Clone resource dependency issue - undesired restart of dependent resources
Ron Kerry
rkerry at sgi.com
Mon Feb 28 19:33:08 UTC 2011
Folks -
I have a configuration issue that I am unsure how to resolve. Consider the following set of resources:
clone rsc1-clone rsc1 \
        meta clone-max="2" target-role="Started"
primitive rsc1 ...
primitive rsc2 ... meta resource-stickiness="1"
primitive rsc3 ... meta resource-stickiness="1"
Plus the following constraints:
colocation rsc2-with-clone inf: rsc2 rsc1-clone
colocation rsc3-with-clone inf: rsc3 rsc1-clone
order clone-before-rsc2 inf: rsc1-clone rsc2
order clone-before-rsc3 inf: rsc1-clone rsc3
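For reference, here is a self-contained version of the configuration in one block. The primitive definitions above are elided, so the ocf:pacemaker:Dummy agents below are stand-ins purely for illustration:

    primitive rsc1 ocf:pacemaker:Dummy
    primitive rsc2 ocf:pacemaker:Dummy meta resource-stickiness="1"
    primitive rsc3 ocf:pacemaker:Dummy meta resource-stickiness="1"
    clone rsc1-clone rsc1 meta clone-max="2" target-role="Started"
    colocation rsc2-with-clone inf: rsc2 rsc1-clone
    colocation rsc3-with-clone inf: rsc3 rsc1-clone
    order clone-before-rsc2 inf: rsc1-clone rsc2
    order clone-before-rsc3 inf: rsc1-clone rsc3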
I am getting the following undesirable behavior. During normal operation, a copy of the rsc1 resource is running on each of my two systems, with rsc2 and rsc3 typically split between the two systems. The rsc2 and rsc3 resources are operationally dependent on a copy of rsc1 being up and running first:
SystemA        SystemB
=======        =======
rsc1           rsc1
rsc2           rsc3
If SystemB goes down, then rsc3 moves over to SystemA as expected:
SystemA        SystemB
=======        =======
rsc1           X
rsc2           X
rsc3           X
When SystemB comes back into the cluster, crmd starts the rsc1 clone instance on SystemB, but it then also restarts both rsc2 and rsc3; that is, both are stopped and then started again. This is not what we want. We want these resources to remain running on SystemA until one of them is moved manually by an administrator to re-balance them across the systems.
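(The planned actions can be previewed from the live CIB before the node rejoins; assuming the crm_simulate tool shipped with Pacemaker 1.1, the pending transition and allocation scores can be inspected with:

    # show allocation scores (-s) against the live cluster state (-L)
    crm_simulate -s -L

which should show the rsc2/rsc3 stop and start actions scheduled in the same transition that starts the new rsc1 instance.)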
How do we configure these resources/constraints to achieve that behavior? We are already using resource-stickiness, but that is meaningless if crmd is going to restart these resources regardless.
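Would switching the order constraints to advisory ordering help here? My understanding is that a score of 0 makes an order constraint advisory, i.e. it is honored only when both resources are being started in the same transition, so starting another rsc1-clone instance later should not force rsc2/rsc3 to restart. Something like the following (an untested sketch; I have not verified that it still enforces the "rsc1 before rsc2/rsc3" startup dependency we need):

    # advisory (score 0) ordering instead of mandatory (inf:)
    order clone-before-rsc2 0: rsc1-clone rsc2
    order clone-before-rsc3 0: rsc1-clone rsc3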
--
Ron Kerry rkerry at sgi.com
Global Product Support - SGI Federal