[Pacemaker] resource stickiness
Bernd Schubert
bs_lists at aakef.fastmail.fm
Thu Nov 12 10:54:13 UTC 2009
Hello,
I am trying to prevent auto-migration back from mds2 to mds1, but somehow
resource-stickiness doesn't seem to work: after a failure of mds1 and takeover
by mds2, the resource still migrates back to mds1 once that system comes back up.
primitive MDT_HC3WORK ocf:ddn:lustre_server \
        params device="/dev/vg_HC3WORK/mdt" directory="/lustre/HC3WORK/mdt" \
        op start interval="0" timeout="700" \
        op stop interval="0" timeout="600" \
        op monitor interval="120" timeout="600" \
        meta resource-stickiness="200" is-managed="true"
property $id="cib-bootstrap-options" \
        default-resource-stickiness="200" \
        no-quorum-policy="stop" \
        dc-version="1.0.6-f709c638237cdff7556cb6ab615f32826c0f8c06" \
        cluster-infrastructure="Heartbeat"
location location-MDT_HC3WORK.mds1 MDT_HC3WORK 100: mds1
location location-MDT_HC3WORK.mds2 MDT_HC3WORK 50: mds2
location location-MDT_HC3WORK.oss1 MDT_HC3WORK -inf: oss1
location location-MDT_HC3WORK.oss2 MDT_HC3WORK -inf: oss2
location location-MDT_HC3WORK.oss3 MDT_HC3WORK -inf: oss3
location location-MDT_HC3WORK.oss4 MDT_HC3WORK -inf: oss4
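If it helps, I believe the allocation scores the policy engine computes for the
configuration above can be dumped with ptest; roughly (invocation from memory,
so possibly slightly off):

        # -L reads the live CIB, -s prints the allocation score per resource/node pair
        ptest -sL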
MDT_HC3WORK is also part of a resource group, but the resource group and all of
its members have the very same location constraints. Is this my mistake or a
bug?
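As an aside, my understanding is that with Pacemaker 1.0 the cluster-wide default
is normally set through rsc_defaults rather than the older default-resource-stickiness
cluster property, so perhaps it should look something like this instead (untested
sketch):

        rsc_defaults $id="rsc-options" \
                resource-stickiness="200"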
Thanks,
Bernd
--
Bernd Schubert
DataDirect Networks