[Pacemaker] Migrate/run resource only if m/s resource is master/promoted on target node (follow-up)
hj lee
kerdosa at gmail.com
Thu Dec 31 02:44:44 UTC 2009
I found a few problems in your configuration.

First, why do you set master-max=2? With that setting both nodes get promoted,
so both nodes are master, and then your colocation constraint no longer makes
sense, because either node satisfies it. If this is not what you want, please
remove master-max=2; the default is 1.
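For example, here is a minimal sketch of the corrected ms definition, reusing
the names from your posted config (note that your original also has a typo,
resource-stickines, which is spelled out correctly here):

ms ms_drbd_r0 drbd_r0 \
    meta notify="true" master-max="1" master-node-max="1" \
    interleave="true" is-managed="true" resource-stickiness="1000"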
Second, your location constraint is not quite right; it conflicts with the
colocation:

Location constraint: please run Hosting on ibm1.
Colocation: please run Hosting on the drbd master node.

The better way to handle this is to put the location constraint on the drbd
resource instead. Using role=Master, you can tell Pacemaker which node is
preferred to become the drbd master. Once drbd becomes master on one of the
nodes, Hosting will be started on that node, driven by the colocation and
order constraints.
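As a sketch only (the constraint name and the score of 100 are just examples;
pick whatever fits your setup):

location drbd_master_prefers_ibm1 ms_drbd_r0 \
    rule $role="Master" 100: #uname eq ibm1

Then drop the cli-prefer-Hosting constraint on Hosting itself and keep your
existing colocation and order constraints unchanged.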
Please try these two fixes.
Thanks
hj
On Wed, Dec 30, 2009 at 7:37 AM, Martin Gombač <martin at isg.si> wrote:
> Hi,
>
> I've replaced /usr/lib/ocf/resource.d/linbit/drbd with the version from git,
> as you suggested. Lars, you can get a diff if you wish. I've also changed the
> preference scores on all definitions.
> The Hosting resource (mounted on ms_drbd_r0/primary) still gets restarted if
> _anything_ happens to the peer's drbd. By anything I mean changing its state
> from primary to secondary, and similar.
>
> >
> Dec 30 15:04:01 ibm1 pengine: [14902]: notice: LogActions: Restart resource Hosting (Started ibm1)
> Dec 30 15:04:01 ibm1 pengine: [14902]: notice: LogActions: Leave resource drbd_r0:0 (Master ibm1)
> Dec 30 15:04:01 ibm1 pengine: [14902]: notice: LogActions: Promote drbd_r0:1 (Slave -> Master ibm2)
> >
>
> Here is my current config as a whole. If you want, I can attach the .xml
> version too.
> >
> crm(live)# configure show
> node $id="3d430f49-b915-4d52-a32b-xxxx" ibm2
> node $id="4b2047c8-f3a0-4935-84a2-xxxx" ibm1
>
> primitive Hosting ocf:heartbeat:Xen \
> params xmfile="/etc/xen/Hosting.cfg" \
> meta target-role="Started" allow-migrate="true" is-managed="true" \
> op monitor interval="120s" timeout="300s"
> primitive drbd_r0 ocf:linbit:drbd \
> params drbd_resource="r0" \
> op monitor interval="15s" role="Master" timeout="30s" \
> op monitor interval="30s" role="Slave" timeout="30"
> ms ms_drbd_r0 drbd_r0 \
> meta notify="true" master-max="2" interleave="true" master-node-max="1" \
> is-managed="true" resource-stickines="1000"
>
> location cli-prefer-Hosting Hosting \
> rule $id="cli-prefer-rule-Hosting" 10000: #uname eq ibm1
> colocation Hosting_on_ms_drbd_r0 10100: Hosting ms_drbd_r0:Master
> order ms_drbd_r0_b4_Hosting 10200: ms_drbd_r0:promote Hosting:start
>
> property $id="cib-bootstrap-options" \
> dc-version="1.0.6-f709c638237cdff7556cb6ab615f32826c0f8c06" \
> cluster-infrastructure="Heartbeat" \
> stonith-enabled="false" \
> no-quorum-policy="ignore" \
> default-resource-stickiness="1000" \
> last-lrm-refresh="1262179462"
> >
>
> I have no idea what else to check/set. If anyone can help me, I'll
> appreciate it; if not, I'll just remove drbd from the cluster and rely only
> on the fence-peer and after-resync-target drbd handlers. :-/ However, I will
> google a bit before I do that. :-)
>
> Regards,
> M.
>
>
>
> Lars Ellenberg wrote:
>
>>
>> Just to rule out some glitches we may have had with updating "master
>> scores" in the ocf:linbit:drbd RA, please compare and try with the latest
>> version of that agent, available at
>>
>> http://git.drbd.org/?p=drbd-8.3.git;a=blob_plain;f=scripts/drbd.ocf;hb=HEAD
>>
>> Also, try to reduce your "preference" scores to some sane number,
>> and do not use inf everywhere. Give the crm some rope.
>>
>>
>>
>
> _______________________________________________
> Pacemaker mailing list
> Pacemaker at oss.clusterlabs.org
> http://oss.clusterlabs.org/mailman/listinfo/pacemaker
>
--
Dream with longterm vision!
kerdosa