[Pacemaker] problem with pacemaker and stonith resources

Dejan Muhamedagic dejanmm at fastmail.fm
Wed May 26 06:41:28 EDT 2010


Hi,

On Wed, May 26, 2010 at 12:37:51AM +0200, Matteo wrote:
> Hi folks,
> I'm running a 2-node cluster with Pacemaker, dual-primary DRBD, and OCFS2.
> Now I'm trying to set up stonith correctly, but my stonith resources won't
> start. I did some research but couldn't find a solution to my problem.
> 
> This is my cib:
> 
> node server1
> node server2
> primitive DLM ocf:pacemaker:controld \
>     op monitor interval="120s"
> primitive DRBD ocf:linbit:drbd \
>     params drbd_resource="r0" \
>     operations $id="DRBD-operations" \
>     op monitor interval="20" role="Master" timeout="20" \
>     op monitor interval="30" role="Slave" timeout="20"
> primitive FS ocf:heartbeat:Filesystem \
>     params device="/dev/drbd1" directory="/drbd" fstype="ocfs2" \
>     op monitor interval="120s" \
>     meta target-role="Started"
> primitive O2CB ocf:pacemaker:o2cb \
>     op monitor interval="120s"
> primitive STONITH1 stonith:external/ipmi \
>     params hostname="server1" ipaddr="10.0.0.1" userid="user" passwd="user" interface="lan" \
>     meta target-role="Started"
> primitive STONITH2 stonith:external/ipmi \
>     params hostname="server2" ipaddr="10.0.0.2" userid="user" passwd="user" interface="lan" \
>     meta target-role="Started"
> ms ms-DRBD DRBD \
>     meta resource-stickiness="100" notify="true" master-max="2" interleave="true" target-role="Stopped"
> clone cloneDLM DLM \
>     meta globally-unique="false" interleave="true" target-role="Started"
> clone cloneFS FS \
>     meta interleave="true" ordered="true"
> clone cloneO2CB O2CB \
>     meta globally-unique="false" interleave="true" target-role="Started"
> location loc-stonith1 STONITH1 -inf: server1
> location loc-stonith2 STONITH2 -inf: server2
> colocation DLM-DRBD inf: cloneDLM ms-DRBD:Master
> colocation FS-O2CB inf: cloneFS cloneO2CB
> colocation O2CB-DLM inf: cloneO2CB cloneDLM
> order DLM-before-O2CB inf: cloneDLM:start cloneO2CB:start
> order DRBD-before-DLM inf: ms-DRBD:promote cloneDLM:start
> order O2CB-before-FS inf: cloneO2CB:start cloneFS:start
> property $id="cib-bootstrap-options" \
>     dc-version="1.0.8-042548a451fce8400660f6031f4da6f0223dd5dd" \
>     cluster-infrastructure="openais" \
>     expected-quorum-votes="2" \
>     no-quorum-policy="ignore" \
>     stonith-enabled="true" \
>     stonith-action="poweroff" \
>     default-resource-stickiness="1000"
> 
> 
> Is there something wrong?

The cluster configuration looks OK. Did you check the logs for IPMI
errors? Did you test the IPMI configuration outside the cluster? You
can do that with the stonith(8) program.
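
For example, assuming the plugin parameters from your CIB (adjust the
credentials and addresses as needed), something like this run on
server1 should tell you whether it can reach server2's BMC:

    # list the parameters the external/ipmi plugin expects
    stonith -t external/ipmi -n

    # query the device status with your parameters
    stonith -t external/ipmi hostname=server2 ipaddr=10.0.0.2 \
        userid=user passwd=user interface=lan -S

If that fails, try ipmitool directly (the external/ipmi plugin is a
wrapper around it), and check the logs on the node where the start
failed:

    ipmitool -H 10.0.0.2 -U user -P user -I lan chassis power status
    grep -i -e stonith -e ipmi /var/log/messages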
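
Note also that both resources have already hit the failure limit (your
crm_verify output shows them being forced away after 1000000 failures),
so once IPMI works you'll need to clean up the failed starts before the
cluster will try them again, e.g.:

    crm resource cleanup STONITH1
    crm resource cleanup STONITH2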

Thanks,

Dejan


> -----------------------------------
> 
> crm_mon -n:
> 
> ============
> Last updated: Wed May 26 00:04:53 2010
> Stack: openais
> Current DC: server1 - partition with quorum
> Version: 1.0.8-042548a451fce8400660f6031f4da6f0223dd5dd
> 2 Nodes configured, 2 expected votes
> 6 Resources configured.
> ============
> 
> Node server2: online
>     DLM:0   (ocf::pacemaker:controld) Started
>     O2CB:0  (ocf::pacemaker:o2cb) Started
>     FS:0    (ocf::heartbeat:Filesystem) Started
>     DRBD:0  (ocf::linbit:drbd) Master
> Node server1: online
>     DRBD:1  (ocf::linbit:drbd) Master
>     DLM:1   (ocf::pacemaker:controld) Started
>     O2CB:1  (ocf::pacemaker:o2cb) Started
>     FS:1    (ocf::heartbeat:Filesystem) Started
> 
> Failed actions:
>     STONITH2_start_0 (node=server1, call=8, rc=1, status=complete): unknown error
>     STONITH1_start_0 (node=server2, call=8, rc=1, status=complete): unknown error
> 
> ------------------------
> 
> crm_verify -L -V:
> 
> crm_verify[5695]: 2010/05/26_00:17:19 WARN: unpack_rsc_op: Processing failed op STONITH2_start_0 on server1: unknown error(1)
> crm_verify[5695]: 2010/05/26_00:17:19 WARN: unpack_rsc_op: Processing failed op STONITH1_start_0 on server2: unknown error(1)
> crm_verify[5695]: 2010/05/26_00:17:19 WARN: common_apply_stickiness: Forcing STONITH1 away from server2 after 1000000 failures (max=1000000)
> crm_verify[5695]: 2010/05/26_00:17:19 WARN: common_apply_stickiness: Forcing STONITH2 away from server1 after 1000000 failures (max=1000000)
> 
> 
> I hope someone can help me,
> Thank you!
> 
> Matt
