[Pacemaker] problem with pacemaker and stonith resources

Matteo motogintom at gmail.com
Tue May 25 18:37:51 EDT 2010


Hi folks,
I'm running a 2-node cluster with Pacemaker, dual-primary DRBD, and OCFS2.
Now I'm trying to set up stonith correctly, but my stonith resources don't
start. I did some research, but I didn't find a solution to my problem.

This is my CIB:

node server1
node server2
primitive DLM ocf:pacemaker:controld \
    op monitor interval="120s"
primitive DRBD ocf:linbit:drbd \
    params drbd_resource="r0" \
    operations $id="DRBD-operations" \
    op monitor interval="20" role="Master" timeout="20" \
    op monitor interval="30" role="Slave" timeout="20"
primitive FS ocf:heartbeat:Filesystem \
    params device="/dev/drbd1" directory="/drbd" fstype="ocfs2" \
    op monitor interval="120s" \
    meta target-role="Started"
primitive O2CB ocf:pacemaker:o2cb \
    op monitor interval="120s"
primitive STONITH1 stonith:external/ipmi \
    params hostname="server1" ipaddr="10.0.0.1" userid="user" passwd="user" interface="lan" \
    meta target-role="Started"
primitive STONITH2 stonith:external/ipmi \
    params hostname="server2" ipaddr="10.0.0.2" userid="user" passwd="user" interface="lan" \
    meta target-role="Started"
ms ms-DRBD DRBD \
    meta resource-stickiness="100" notify="true" master-max="2" interleave="true" target-role="Stopped"
clone cloneDLM DLM \
    meta globally-unique="false" interleave="true" target-role="Started"
clone cloneFS FS \
    meta interleave="true" ordered="true"
clone cloneO2CB O2CB \
    meta globally-unique="false" interleave="true" target-role="Started"
location loc-stonith1 STONITH1 -inf: server1
location loc-stonith2 STONITH2 -inf: server2
colocation DLM-DRBD inf: cloneDLM ms-DRBD:Master
colocation FS-O2CB inf: cloneFS cloneO2CB
colocation O2CB-DLM inf: cloneO2CB cloneDLM
order DLM-before-O2CB inf: cloneDLM:start cloneO2CB:start
order DRBD-before-DLM inf: ms-DRBD:promote cloneDLM:start
order O2CB-before-FS inf: cloneO2CB:start cloneFS:start
property $id="cib-bootstrap-options" \
    dc-version="1.0.8-042548a451fce8400660f6031f4da6f0223dd5dd" \
    cluster-infrastructure="openais" \
    expected-quorum-votes="2" \
    no-quorum-policy="ignore" \
    stonith-enabled="true" \
    stonith-action="poweroff" \
    default-resource-stickiness="1000"
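
In case it helps with debugging, I believe the plugins can also be driven
by hand, outside Pacemaker; a minimal sketch, assuming cluster-glue's
stonith(8) CLI and ipmitool are installed (flags from memory, parameters
mirroring the primitives above):

# Ask the external/ipmi plugin for the device status
stonith -t external/ipmi hostname=server1 ipaddr=10.0.0.1 \
    userid=user passwd=user interface=lan -S

# Or query the BMC directly
ipmitool -I lan -H 10.0.0.1 -U user -P user chassis power status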


Is there something wrong?
-----------------------------------

crm_mon -n:

Last updated: Wed May 26 00:04:53 2010
Stack: openais
Current DC: server1 - partition with quorum
Version: 1.0.8-042548a451fce8400660f6031f4da6f0223dd5dd
2 Nodes configured, 2 expected votes
6 Resources configured.
============

Node server2: online
    DLM:0   (ocf::pacemaker:controld) Started
    O2CB:0  (ocf::pacemaker:o2cb) Started
    FS:0    (ocf::heartbeat:Filesystem) Started
    DRBD:0  (ocf::linbit:drbd) Master
Node server1: online
    DRBD:1  (ocf::linbit:drbd) Master
    DLM:1   (ocf::pacemaker:controld) Started
    O2CB:1  (ocf::pacemaker:o2cb) Started
    FS:1    (ocf::heartbeat:Filesystem) Started

Failed actions:
    STONITH2_start_0 (node=server1, call=8, rc=1, status=complete): unknown error
    STONITH1_start_0 (node=server2, call=8, rc=1, status=complete): unknown error
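
To get something more useful than "unknown error", I'd grep the syslog on
the node where the start failed for the stonith/lrmd messages; a sketch,
assuming this distro logs to /var/log/messages:

grep -Ei 'stonith|lrmd' /var/log/messages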

------------------------

crm_verify -L -V:

crm_verify[5695]: 2010/05/26_00:17:19 WARN: unpack_rsc_op: Processing failed op STONITH2_start_0 on server1: unknown error(1)
crm_verify[5695]: 2010/05/26_00:17:19 WARN: unpack_rsc_op: Processing failed op STONITH1_start_0 on server2: unknown error(1)
crm_verify[5695]: 2010/05/26_00:17:19 WARN: common_apply_stickiness: Forcing STONITH1 away from server2 after 1000000 failures (max=1000000)
crm_verify[5695]: 2010/05/26_00:17:19 WARN: common_apply_stickiness: Forcing STONITH2 away from server1 after 1000000 failures (max=1000000)
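
If I read those last two warnings right, 1000000 is INFINITY: a failed
start sets the failcount to INFINITY by default, so each STONITH resource
is now barred from the node where its start failed (and the -inf location
constraint already bars it from its own node), leaving it nowhere to run.
Once the underlying start failure is fixed, I assume something like this
(crm shell) would reset the failcounts and let the cluster retry:

crm resource cleanup STONITH1
crm resource cleanup STONITH2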


I hope someone can help me,
Thank you!

Matt