[Pacemaker] One more globally-unique clone question
Vladislav Bogdanov
bubble at hoster-ok.com
Fri Jan 16 15:25:47 CET 2015
Hi all,
While trying to reproduce a problem with the early stop of globally-unique
clone instances during a move to another node, I found one more
"interesting" problem.
Due to the different order of resources in the CIB and the extensive use of
constraints between other resources (an odd number of resources
cluster-wide), the two CLUSTERIP instances are always allocated to the same
node in the new testing cluster.
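For reference, this is roughly the kind of configuration I mean (a minimal
sketch; the address, hash and resource names are placeholders, not the real
config):

# CLUSTERIP-style address: IPaddr2 with clusterip_hash, cloned with
# globally-unique=true so each instance handles a share of the traffic
primitive cluster_ip ocf:heartbeat:IPaddr2 \
    params ip=192.168.100.10 cidr_netmask=24 clusterip_hash=sourceip \
    op monitor interval=30s
clone cluster_ip_clone cluster_ip \
    meta clone-max=2 clone-node-max=2 globally-unique=true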
What would be the best/preferred way to make them run on different nodes
by default?
I see the following options:
* Raise the priority of the globally-unique clone so that its instances are
always allocated first (sketch below).
* Use utilization attributes, with high values for nodes and low values
for cluster resources (sketch below).
* Anything else?
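Something like this for the first two options (untested sketches; node
names, attribute names and values are placeholders):

# option 1: give the clone a high priority so it is placed before
# the other resources
clone cluster_ip_clone cluster_ip \
    meta clone-max=2 clone-node-max=2 globally-unique=true priority=100

# option 2: utilization-based placement, so the balanced strategy
# prefers the less-loaded node when placing the second instance
property placement-strategy=balanced
node node1 utilization capacity=100
node node2 utilization capacity=100
primitive cluster_ip ocf:heartbeat:IPaddr2 \
    params ip=192.168.100.10 cidr_netmask=24 clusterip_hash=sourceip \
    utilization capacity=1 \
    op monitor interval=30s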
If I configure the virtual IPs one by one (without a clone), I can add
colocation constraints with a negative score between them (sketch below).
I do not see a way to scale that setup well, though (5-10 IPs).
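That is, something like this (again only a sketch, with placeholder names
and addresses):

primitive vip1 ocf:heartbeat:IPaddr2 \
    params ip=192.168.100.11 cidr_netmask=24 \
    op monitor interval=30s
primitive vip2 ocf:heartbeat:IPaddr2 \
    params ip=192.168.100.12 cidr_netmask=24 \
    op monitor interval=30s
# negative score: prefer to keep the two IPs apart, but still allow
# them to run together if only one node is available
colocation vip_apart -1: vip1 vip2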
So, what would be the best option to achieve the same with a
globally-unique cloned resource?
Maybe there should be some internal preference/colocation not to place
them together (like the default stickiness=1 for clones)?
Or even allow a special negative colocation constraint with the same
resource in both 'what' and 'with'
(colocation col1 -1: clone clone)?
Best,
Vladislav