[Pacemaker] Trouble mounting filesystem (DRBD)

Denis Witt denis.witt at concepts-and-training.de
Tue Jun 4 09:38:57 EDT 2013


Hi List,

I'm trying to set up an Apache/DRBD cluster, but the filesystem isn't
being mounted. crm status always reports "not installed" as the status
of the filesystem primitive. Mounting the filesystem by hand works fine.

Here is my config:

root@test3:~# crm configure show
node test3
node test4
primitive apache lsb:apache2 \
	op monitor interval="10" timeout="20" \
	meta target-role="Started"
primitive drbd ocf:linbit:drbd \
	params drbd_resource="www_r0" \
	op monitor interval="10"
primitive fs_drbd ocf:heartbeat:Filesystem \
	params device="/dev/drbd0" directory="/var/www" fstype="ext4" \
	op monitor interval="5" \
	meta target-role="Started"
primitive pingtest ocf:pacemaker:ping \
	params multiplier="1000" host_list="192.168.100.19" \
	op monitor interval="5"
primitive sip ocf:heartbeat:IPaddr2 \
	params ip="192.168.100.30" nic="eth0" \
	op monitor interval="10" timeout="20" \
	meta target-role="Started"
group grp_all sip fs_drbd apache
ms ms_drbd drbd \
	meta master-max="1" master-node-max="1" clone-max="2" \
	clone-node-max="1" notify="true"
clone clone_pingtest pingtest
location loc_all_on_best_ping grp_all \
	rule $id="loc_all_on_best_ping-rule" -inf: not_defined pingd or \
	pingd lt 1000
colocation coloc_all_on_drbd inf: grp_all ms_drbd:Master
order order_all_after_drbd inf: ms_drbd:promote grp_all:start
property $id="cib-bootstrap-options" \
	dc-version="1.1.7-ee0730e13d124c3d58f00016c3376a1de5323cff" \
	cluster-infrastructure="openais" \
	expected-quorum-votes="2" \
	no-quorum-policy="ignore" \
	stonith-enabled="false" \
	last-lrm-refresh="1370351760" \
	default-resource-stickiness="100"
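
I believe the configuration itself can be syntax-checked like this (going by
the docs, so corrections welcome):

crm configure verify    # check the configuration from the crm shell
crm_verify -L -V        # validate the live CIB, verbose output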

The resulting status:

root@test3:~# crm status
============
Last updated: Tue Jun  4 15:33:11 2013
Last change: Tue Jun  4 15:16:00 2013 via crmd on test4
Stack: openais
Current DC: test4 - partition with quorum
Version: 1.1.7-ee0730e13d124c3d58f00016c3376a1de5323cff
2 Nodes configured, 2 expected votes
7 Resources configured.
============

Online: [ test3 test4 ]

 Clone Set: clone_pingtest [pingtest]
     Started: [ test3 test4 ]
 Master/Slave Set: ms_drbd [drbd]
     Masters: [ test3 ]
     Slaves: [ test4 ]
 Resource Group: grp_all
     sip	(ocf::heartbeat:IPaddr2):	Started test3
     fs_drbd	(ocf::heartbeat:Filesystem):	Stopped 
     apache	(lsb:apache2):	Stopped 

Failed actions:
    fs_drbd_monitor_0 (node=test3, call=23, rc=5, status=complete): not installed
    fs_drbd_monitor_0 (node=test4, call=40, rc=5, status=complete): not installed
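
As far as I can tell, rc=5 is the agent saying that something it depends on
isn't installed, so I would check for the userland tools on both nodes; the
binary and package names below are assumptions based on a Debian-style
install:

which mount fsck.ext4          # tools I assume the Filesystem agent calls
dpkg -l e2fsprogs util-linux   # packages that should provide them on Debian
ls -l /dev/drbd0               # make sure the device node exists on the node being probed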

The logfile:

Jun  4 15:11:29 test3 pengine: [1761]: notice: LogActions: Start   fs_drbd#011(test3)
Jun  4 15:11:29 test3 crmd: [1762]: info: te_rsc_command: Initiating action 8: monitor fs_drbd_monitor_0 on test3 (local)
Jun  4 15:11:29 test3 lrmd: [1759]: info: rsc:fs_drbd probe[16] (pid 14419)
Jun  4 15:11:29 test3 cib: [1757]: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='test4']//lrm_resource[@id='fs_drbd'] (origin=test4/crmd/25, version=0.11.7): ok (rc=0)
Jun  4 15:11:29 test3 crmd: [1762]: info: abort_transition_graph: te_update_diff:320 - Triggered transition abort (complete=0, tag=lrm_rsc_op, id=fs_drbd_last_failure_0, magic=0:5;9:5:7:8571cc98-5a20-4d51-b175-fe4db979fc09, cib=0.11.8) : Resource op removal
Jun  4 15:11:29 test3 cib: [1757]: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='test4']//lrm_resource[@id='fs_drbd'] (origin=test4/crmd/26, version=0.11.8): ok (rc=0)
Jun  4 15:11:29 test3 crmd: [1762]: info: abort_transition_graph: te_update_diff:320 - Triggered transition abort (complete=0, tag=lrm_rsc_op, id=fs_drbd_last_failure_0, magic=0:5;9:5:7:8571cc98-5a20-4d51-b175-fe4db979fc09, cib=0.11.8) : Resource op removal
Jun  4 15:11:29 test3 lrmd: [1759]: info: operation monitor[16] on fs_drbd for client 1762: pid 14419 exited with return code 5
Jun  4 15:11:29 test3 crmd: [1762]: info: process_lrm_event: LRM operation fs_drbd_monitor_0 (call=16, rc=5, cib-update=73, confirmed=true) not installed
Jun  4 15:11:29 test3 crmd: [1762]: WARN: status_from_rc: Action 8 (fs_drbd_monitor_0) on test3 failed (target: 7 vs. rc: 5): Error
Jun  4 15:11:29 test3 crmd: [1762]: info: abort_transition_graph: match_graph_event:277 - Triggered transition abort (complete=0, tag=lrm_rsc_op, id=fs_drbd_last_failure_0, magic=0:5;8:7:7:8571cc98-5a20-4d51-b175-fe4db979fc09, cib=0.12.3) : Event failed
Jun  4 15:11:29 test3 pengine: [1761]: notice: unpack_rsc_op: Preventing fs_drbd from re-starting on test3: operation monitor failed 'not installed' (rc=5)
Jun  4 15:11:29 test3 pengine: [1761]: notice: LogActions: Start   fs_drbd#011(test4)
Jun  4 15:11:29 test3 crmd: [1762]: info: te_rsc_command: Initiating action 9: monitor fs_drbd_monitor_0 on test4
Jun  4 15:11:29 test3 crmd: [1762]: WARN: status_from_rc: Action 9 (fs_drbd_monitor_0) on test4 failed (target: 7 vs. rc: 5): Error
Jun  4 15:11:29 test3 crmd: [1762]: info: abort_transition_graph: match_graph_event:277 - Triggered transition abort (complete=0, tag=lrm_rsc_op, id=fs_drbd_last_failure_0, magic=0:5;9:8:7:8571cc98-5a20-4d51-b175-fe4db979fc09, cib=0.12.5) : Event failed
Jun  4 15:11:29 test3 pengine: [1761]: notice: unpack_rsc_op: Preventing fs_drbd from re-starting on test3: operation monitor failed 'not installed' (rc=5)
Jun  4 15:11:29 test3 pengine: [1761]: notice: unpack_rsc_op: Preventing fs_drbd from re-starting on test4: operation monitor failed 'not installed' (rc=5)
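
For reference, my understanding is that the failed probes can be cleared so
the cluster retries the resource, e.g.:

crm resource cleanup fs_drbd                  # crm shell: forget the failed monitor_0 results
crm_resource --cleanup --resource fs_drbd     # equivalent low-level command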

Any help is greatly appreciated, as I'm quite lost at this point. If you
need any more details I'm happy to provide them, but please keep in
mind that I'm very new to Pacemaker, so a hint on how to gather the
requested information would be welcome. Thanks a lot!

Best regards,
Denis Witt



