[Pacemaker] GFS2 with Pacemaker on RHEL6.3 restarts with reboot

Bob Haxo bhaxo at sgi.com
Wed Aug 8 22:14:31 EDT 2012


Greetings.

I have followed the setup instructions in Clusters From Scratch:
Creating Active/Passive and Active/Active Clusters on Fedora, Edition 5,
including locating the new cman pages that do not seem to be linked into
the main document, for example,

http://www.clusterlabs.org/doc/en-US/Pacemaker/1.1/html/Clusters_from_Scratch/ch08s02s02.html

The stack that I'm implementing includes RHEL6.3, drbd, dlm, gfs2,
Pacemaker (RHEL6.3 build), cman, kvm ... hopefully I didn't leave
anybody off the party list.

I have these all working together to support "live" migration of the
virtual client between the two physical hosts, so at that level all is
good.

Questions: Is there a document that fully covers such an installation,
meaning one that extends Clusters From Scratch (and replaces the Apache
example) to the implementation of an HA virtual client? For instance,
should libvirtd be handled as a Pacemaker resource, or should it be
started as a system service at boot?  What should be done with
"libvirt-guests"?  Should cman be started as a system service at boot?

Problem: When the non-VM-host node is rebooted, then as Pacemaker
restarts with the boot, the gfs2 filesystem gets restarted on the VM
host, which causes a stop and start of the VirtualDomain. The gfs2
filesystem also gets restarted even when the VirtualDomain resource is
not included in the configuration.

This behavior does not seem correct ... I think I would have remembered
encountering it when working with the SLES HAE product.  I've done a lot
of fumbling this past week trying to get the colocation and order
statements correct, without managing to change this behavior.

What am I missing?

Here are the first indications of this restart issue, logged while
Pacemaker and friends restart during the boot.  I have attached more
messages.

Aug  8 20:00:57 hikari crmd[2734]:     info: abort_transition_graph: te_update_diff:176 - Triggered transition abort (complete=1, tag=nvpair, id=status-hikari2-master-drbd_r0.1, name=master-drbd_r0:1, value=5, magic=NA, cib=0.474.170) : Transient attribute: update
Aug  8 20:00:57 hikari crmd[2734]:   notice: do_state_transition: State transition S_IDLE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=abort_transition_graph ]
Aug  8 20:00:57 hikari pengine[2733]:   notice: unpack_config: On loss of CCM Quorum: Ignore
Aug  8 20:00:57 hikari pengine[2733]:   notice: LogActions: Promote drbd_r0:1#011(Slave -> Master hikari2)
Aug  8 20:00:57 hikari pengine[2733]:   notice: LogActions: Restart virt#011(Started hikari) <<<<<<<<<<<<<<<<<<
Aug  8 20:00:57 hikari pengine[2733]:   notice: LogActions: Restart shared-gfs2:0#011(Started hikari)  <<<<<<<<
Aug  8 20:00:57 hikari pengine[2733]:   notice: LogActions: Start   shared-gfs2:1#011(hikari2)
Aug  8 20:00:57 hikari crmd[2734]:     info: abort_transition_graph: te_update_diff:176 - Triggered transition abort (complete=1, tag=nvpair, id=status-hikari2-master-drbd_r1.1, name=master-drbd_r1:1, value=5, magic=NA, cib=0.474.171) : Transient attribute: update

Here are the current constraints, the result of that fumbling (actually,
of trying to make sense of all of the information obtained from Google
searches):

colocation co-gfs-on-drbd inf: c_shared-gfs2 drbd_r0_clone:Master
order o-drbd_r0-then-gfs inf: drbd_r0_clone:promote c_shared-gfs2:start
order o-drbd_r1_clone-then-virt inf: drbd_r1_clone virt
order o-gfs-then-virt inf: c_shared-gfs2 virt
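
(One suspicion, sketched below but not verified: the clones may need
interleave="true", so that the order constraints apply per-node rather
than across every instance of the clone set; without it, a clone
instance starting on the rebooted node can force the dependent clone
instances on the surviving node to restart. The hypothetical change:)

clone c_shared-gfs2 shared-gfs2 \
	meta interleave="true" target-role="Started"
ms drbd_r0_clone drbd_r0 \
	meta master-max="2" master-node-max="1" clone-max="2" \
	clone-node-max="1" notify="true" interleave="true" target-role="Started"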

Full config file attached.

For reference, here is "service blah status" for the set of services:

[root at hikari2 ~]# ha-status
------- service corosync status -------
corosync (pid  1996) is running...
------- service cman status -------
cluster is running.
------- service drbd status -------
drbd driver loaded OK; device status:
version: 8.4.1 (api:1/proto:86-100)
GIT-hash: 91b4c048c1a0e06777b5f65d312b38d47abaea80 build by
phil at Build64R6, 2012-04-17 11:28:08
m:res  cs         ro               ds                 p  mounted  fstype
1:r0   Connected  Primary/Primary  UpToDate/UpToDate  C  /shared  gfs2
2:r1   Connected  Primary/Primary  UpToDate/UpToDate  C
3:r2   Connected  Primary/Primary  UpToDate/UpToDate  C
------- service pacemaker status -------
pacemakerd (pid  8912) is running...
------- service gfs2 status -------
Configured GFS2 mountpoints:
/shared
Active GFS2 mountpoints:
/shared
------- service libvirtd status -------
libvirtd (pid  2510) is running...
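
(ha-status is just a local convenience wrapper; assuming it does nothing
fancier than looping over the services, it amounts to:)

#!/bin/sh
# ha-status: assumed implementation -- report status of each cluster service
for svc in corosync cman drbd pacemaker gfs2 libvirtd; do
    echo "------- service $svc status -------"
    service "$svc" status
done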

[root at hikari ~]# crm_mon -1ro
============
Last updated: Wed Aug  8 21:01:47 2012
Last change: Wed Aug  8 20:48:49 2012 via cibadmin on hikari
Stack: cman
Current DC: hikari - partition with quorum
Version: 1.1.7-6.el6-148fccfd5985c5590cc601123c6c16e966b85d14
2 Nodes configured, 2 expected votes
11 Resources configured.
============

Online: [ hikari hikari2 ]

Full list of resources:

 Master/Slave Set: drbd_r0_clone [drbd_r0]
     Masters: [ hikari hikari2 ]
 Master/Slave Set: drbd_r1_clone [drbd_r1]
     Masters: [ hikari hikari2 ]
 Master/Slave Set: drbd_r2_clone [drbd_r2]
     Masters: [ hikari hikari2 ]
 ipmi-fencing-1 (stonith:fence_ipmilan):        Started hikari
 ipmi-fencing-2 (stonith:fence_ipmilan):        Started hikari2
 virt   (ocf::heartbeat:VirtualDomain): Started hikari
 Clone Set: c_shared-gfs2 [shared-gfs2]
     Started: [ hikari hikari2 ]

Operations:
* Node hikari2:
   drbd_r1:1: migration-threshold=1000000
    + (17) monitor: interval=60000ms rc=0 (ok)
    + (26) promote: rc=0 (ok)
   drbd_r0:1: migration-threshold=1000000
    + (21) promote: rc=0 (ok)
   drbd_r2:1: migration-threshold=1000000
    + (19) monitor: interval=60000ms rc=0 (ok)
    + (27) promote: rc=0 (ok)
   ipmi-fencing-2: migration-threshold=1000000
    + (12) start: rc=0 (ok)
    + (13) monitor: interval=240000ms rc=0 (ok)
   shared-gfs2:1: migration-threshold=1000000
    + (25) start: rc=0 (ok)
* Node hikari:
   drbd_r1:0: migration-threshold=1000000
    + (24) promote: rc=0 (ok)
   drbd_r2:0: migration-threshold=1000000
    + (25) promote: rc=0 (ok)
   shared-gfs2:0: migration-threshold=1000000
    + (92) start: rc=0 (ok)
   drbd_r0:0: migration-threshold=1000000
    + (23) promote: rc=0 (ok)
   ipmi-fencing-1: migration-threshold=1000000
    + (12) start: rc=0 (ok)
    + (13) monitor: interval=240000ms rc=0 (ok)
   virt: migration-threshold=1000000
    + (120) start: rc=0 (ok)
    + (121) monitor: interval=10000ms rc=0 (ok)

Thanks for reading ...
Bob Haxo
bhaxo at sgi.com
-------------- next part --------------
node hikari
node hikari2
primitive drbd_r0 ocf:linbit:drbd \
	params drbd_resource="r0" \
	op start interval="0" timeout="60" \
	op stop interval="0" timeout="60" \
	op monitor interval="60s"
primitive drbd_r1 ocf:linbit:drbd \
	params drbd_resource="r1" \
	op start interval="0" timeout="60" \
	op stop interval="0" timeout="60" \
	op monitor interval="60s"
primitive drbd_r2 ocf:linbit:drbd \
	params drbd_resource="r2" \
	op start interval="0" timeout="60" \
	op stop interval="0" timeout="60" \
	op monitor interval="60s"
primitive ipmi-fencing-1 stonith:fence_ipmilan \
	params ipaddr="150.166.33.248" login="ADMIN" passwd="ADMIN" lanplus="1" action="reboot" pcmk_host_check="static-list" pcmk_host_list="hikari2" \
	op monitor interval="240s" \
	meta target-role="Started"
primitive ipmi-fencing-2 stonith:fence_ipmilan \
	params ipaddr="150.166.33.247" login="ADMIN" passwd="ADMIN" lanplus="1" action="reboot" pcmk_host_check="static-list" pcmk_host_list="hikari" \
	op monitor interval="240s" \
	meta target-role="Started"
primitive shared-gfs2 ocf:heartbeat:Filesystem \
	params device="/dev/drbd/by-res/r0/0" directory="/shared" fstype="gfs2" \
	op start interval="0" timeout="60" \
	op stop interval="0" timeout="60"
primitive virt ocf:heartbeat:VirtualDomain \
	meta allow-migrate="true" target-role="Started" resource-stickiness="1" is-managed="true" \
	operations $id="virt-operations" \
	op monitor interval="10" timeout="200" \
	op start interval="0" timeout="200" \
	op stop interval="0" timeout="200" \
	op migrate_to interval="0" timeout="180" \
	op migrate_from interval="0" timeout="180" \
	params config="/shared/definitions/r1lead.xml" hypervisor="qemu:///system" migration_transport="ssh" migration_network_suffix="-ptp"
ms drbd_r0_clone drbd_r0 \
	meta master-max="2" master-node-max="1" clone-max="2" clone-node-max="1" notify="true" target-role="Started"
ms drbd_r1_clone drbd_r1 \
	meta master-max="2" master-node-max="1" clone-max="2" clone-node-max="1" notify="true" target-role="Started"
ms drbd_r2_clone drbd_r2 \
	meta master-max="2" master-node-max="1" clone-max="2" clone-node-max="1" notify="true" target-role="Started"
clone c_shared-gfs2 shared-gfs2 \
	meta target-role="Started"
location l-ipmi-fencing-1 ipmi-fencing-1 -inf: hikari2
location l-ipmi-fencing-2 ipmi-fencing-2 -inf: hikari
colocation co-gfs-on-drbd inf: c_shared-gfs2 drbd_r0_clone:Master
order o-drbd_r0-then-gfs inf: drbd_r0_clone:promote c_shared-gfs2:start
order o-drbd_r1_clone-then-virt inf: drbd_r1_clone virt
order o-gfs-then-virt inf: c_shared-gfs2 virt
property $id="cib-bootstrap-options" \
	dc-version="1.1.7-6.el6-148fccfd5985c5590cc601123c6c16e966b85d14" \
	cluster-infrastructure="cman" \
	expected-quorum-votes="2" \
	no-quorum-policy="ignore" \
	last-lrm-refresh="1344470433" \
	stonith-timeout="30s" \
	default-resource-stickiness="100" \
	maintenance-mode="false"
-------------- next part --------------
======================================================================

	  GFS & Virt restart with reboot of non-host server
			    bhaxo at sgi.com

======================================================================

Messages starting at line 197 indicate that gfs2 (Filesystem) and
VirtualDomain are being restarted on the virt host.


Aug  8 20:26:52 hikari stonith-ng[2730]:     info: stonith_command: Processed st_execute from lrmd: rc=-1
Aug  8 20:26:52 hikari stonith-ng[2730]:     info: log_operation: ipmi-fencing-1: Getting status of IPMI:xxx.xxx.33.248...Done
Aug  8 20:28:33 hikari kernel: bnx2 0000:0a:00.1: eth3: NIC Copper Link is Down
Aug  8 20:28:35 hikari kernel: bnx2 0000:0a:00.1: eth3: NIC Copper Link is Up, 1000 Mbps full duplex, receive & transmit flow control ON

Aug  8 20:28:47 hikari corosync[2052]:   [TOTEM ] A processor joined or left the membership and a new membership was formed.
Aug  8 20:28:47 hikari corosync[2052]:   [CMAN  ] quorum regained, resuming activity
Aug  8 20:28:47 hikari corosync[2052]:   [QUORUM] This node is within the primary component and will provide service.
Aug  8 20:28:47 hikari corosync[2052]:   [QUORUM] Members[2]: 1 2
Aug  8 20:28:47 hikari corosync[2052]:   [QUORUM] Members[2]: 1 2
Aug  8 20:28:47 hikari crmd[2734]:   notice: cman_event_callback: Membership 1400: quorum acquired
Aug  8 20:28:47 hikari crmd[2734]:     info: ais_status_callback: status: hikari2 is now member (was lost)
Aug  8 20:28:47 hikari crmd[2734]:     info: crm_update_peer: Node hikari2: id=2 state=member (new) addr=(null) votes=0 born=1400 seen=1400 proc=00000000000000000000000000000002
Aug  8 20:28:47 hikari crmd[2734]:     info: send_ais_text: Peer overloaded or membership in flux: Re-sending message (Attempt 1 of 20)
Aug  8 20:28:47 hikari cib[2729]:     info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='hikari2']/lrm (origin=local/crmd/96, version=0.474.208): ok (rc=0)
Aug  8 20:28:47 hikari cib[2729]:     info: send_ais_text: Peer overloaded or membership in flux: Re-sending message (Attempt 1 of 20)
Aug  8 20:28:47 hikari corosync[2052]:   [CPG   ] chosen downlist: sender r(0) ip(xxx.xxx.33.110) ; members(old:1 left:0)
Aug  8 20:28:47 hikari corosync[2052]:   [MAIN  ] Completed service synchronization, ready to provide service.
Aug  8 20:28:48 hikari crmd[2734]:     info: cman_event_callback: Membership 1400: quorum retained
Aug  8 20:28:48 hikari crmd[2734]:     info: abort_transition_graph: te_update_diff:320 - Triggered transition abort (complete=1, tag=lrm_rsc_op, id=ipmi-fencing-1_last_0, magic=0:7;10:3:7:0972e66d-d0bb-4c2b-9ad8-7063a491c1f2, cib=0.474.208) : Resource op removal
Aug  8 20:28:48 hikari crmd[2734]:   notice: do_state_transition: State transition S_IDLE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=abort_transition_graph ]
Aug  8 20:28:48 hikari cib[2729]:     info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='hikari2']/transient_attributes (origin=local/crmd/97, version=0.474.209): ok (rc=0)
Aug  8 20:28:48 hikari crmd[2734]:     info: abort_transition_graph: te_update_diff:194 - Triggered transition abort (complete=1, tag=transient_attributes, id=hikari2, magic=NA, cib=0.474.209) : Transient attribute: removal
Aug  8 20:28:48 hikari cib[2729]:     info: cib_process_request: Operation complete: op cib_modify for section cib (origin=local/crmd/98, version=0.474.210): ok (rc=0)
Aug  8 20:28:48 hikari cib[2729]:     info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/99, version=0.474.211): ok (rc=0)
Aug  8 20:28:48 hikari cib[2729]:     info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/101, version=0.474.213): ok (rc=0)
Aug  8 20:28:48 hikari pengine[2733]:   notice: unpack_config: On loss of CCM Quorum: Ignore
Aug  8 20:28:48 hikari crmd[2734]:   notice: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
Aug  8 20:28:48 hikari crmd[2734]:     info: do_te_invoke: Processing graph 11 (ref=pe_calc-dc-1344475728-173) derived from /var/lib/pengine/pe-input-2644.bz2
Aug  8 20:28:48 hikari crmd[2734]:   notice: run_graph: ==== Transition 11 (Complete=0, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pengine/pe-input-2644.bz2): Complete
Aug  8 20:28:48 hikari crmd[2734]:   notice: do_state_transition: State transition S_TRANSITION_ENGINE -> S_IDLE [ input=I_TE_SUCCESS cause=C_FSA_INTERNAL origin=notify_crmd ]
Aug  8 20:28:48 hikari pengine[2733]:   notice: process_pe_message: Transition 11: PEngine Input stored in: /var/lib/pengine/pe-input-2644.bz2
Aug  8 20:28:56 hikari crmd[2734]:   notice: crmd_peer_update: Status update: Client hikari2/crmd now has status [online] (DC=true)
Aug  8 20:28:56 hikari crmd[2734]:   notice: do_state_transition: State transition S_IDLE -> S_INTEGRATION [ input=I_NODE_JOIN cause=C_FSA_INTERNAL origin=crmd_peer_update ]
Aug  8 20:28:56 hikari crmd[2734]:     info: abort_transition_graph: do_te_invoke:169 - Triggered transition abort (complete=1) : Peer Halt
Aug  8 20:28:56 hikari crmd[2734]:     info: join_make_offer: Making join offers based on membership 1400
Aug  8 20:28:56 hikari crmd[2734]:     info: do_dc_join_offer_all: join-4: Waiting on 2 outstanding join acks
Aug  8 20:28:56 hikari crmd[2734]:     info: update_dc: Set DC to hikari (3.0.6)
Aug  8 20:28:58 hikari crmd[2734]:     info: do_dc_join_offer_all: A new node joined the cluster
Aug  8 20:28:58 hikari crmd[2734]:     info: do_dc_join_offer_all: join-5: Waiting on 2 outstanding join acks
Aug  8 20:28:58 hikari crmd[2734]:     info: update_dc: Set DC to hikari (3.0.6)

Aug  8 20:28:59 hikari crmd[2734]:   notice: do_state_transition: State transition S_INTEGRATION -> S_FINALIZE_JOIN [ input=I_INTEGRATED cause=C_FSA_INTERNAL origin=check_join_state ]
Aug  8 20:28:59 hikari crmd[2734]:     info: do_dc_join_finalize: join-5: Syncing the CIB from hikari to the rest of the cluster
Aug  8 20:28:59 hikari cib[2729]:     info: cib_process_request: Operation complete: op cib_sync for section 'all' (origin=local/crmd/108, version=0.474.215): ok (rc=0)
Aug  8 20:28:59 hikari cib[2729]:     info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/109, version=0.474.216): ok (rc=0)
Aug  8 20:28:59 hikari cib[2729]:     info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/110, version=0.474.217): ok (rc=0)
Aug  8 20:28:59 hikari cib[2729]:     info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='hikari2']/transient_attributes (origin=hikari2/crmd/6, version=0.474.218): ok (rc=0)


Aug  8 20:29:00 hikari crmd[2734]:     info: do_dc_join_ack: join-5: Updating node state to member for hikari2
Aug  8 20:29:00 hikari crmd[2734]:     info: do_dc_join_ack: join-5: Updating node state to member for hikari
Aug  8 20:29:00 hikari cib[2729]:     info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='hikari2']/lrm (origin=local/crmd/111, version=0.474.219): ok (rc=0)
Aug  8 20:29:00 hikari crmd[2734]:   notice: do_state_transition: State transition S_FINALIZE_JOIN -> S_POLICY_ENGINE [ input=I_FINALIZED cause=C_FSA_INTERNAL origin=check_join_state ]
Aug  8 20:29:00 hikari attrd[2732]:   notice: attrd_local_callback: Sending full refresh (origin=crmd)
Aug  8 20:29:00 hikari attrd[2732]:   notice: attrd_trigger_update: Sending flush op to all hosts for: master-drbd_r2:0 (5)
Aug  8 20:29:00 hikari crmd[2734]:     info: abort_transition_graph: do_te_invoke:162 - Triggered transition abort (complete=1) : Peer Cancelled
Aug  8 20:29:00 hikari crmd[2734]:     info: abort_transition_graph: te_update_diff:320 - Triggered transition abort (complete=1, tag=lrm_rsc_op, id=drbd_r1:0_last_0, magic=0:0;46:55:0:2ff1fa67-9418-4db0-a37f-1af615599b38, cib=0.474.221) : Resource op removal
Aug  8 20:29:00 hikari cib[2729]:     info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='hikari']/lrm (origin=local/crmd/113, version=0.474.221): ok (rc=0)
Aug  8 20:29:00 hikari crmd[2734]:     info: abort_transition_graph: te_update_diff:276 - Triggered transition abort (complete=1, tag=diff, id=(null), magic=NA, cib=0.474.222) : LRM Refresh
Aug  8 20:29:00 hikari cib[2729]:     info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/115, version=0.474.223): ok (rc=0)
Aug  8 20:29:00 hikari cib[2729]:     info: cib_process_request: Operation complete: op cib_modify for section cib (origin=local/crmd/117, version=0.474.225): ok (rc=0)

Aug  8 20:29:00 hikari attrd[2732]:   notice: attrd_trigger_update: Sending flush op to all hosts for: master-drbd_r0:0 (5)
Aug  8 20:29:00 hikari attrd[2732]:   notice: attrd_trigger_update: Sending flush op to all hosts for: master-drbd_r1:0 (5)
Aug  8 20:29:00 hikari pengine[2733]:   notice: unpack_config: On loss of CCM Quorum: Ignore
Aug  8 20:29:00 hikari attrd[2732]:   notice: attrd_trigger_update: Sending flush op to all hosts for: probe_complete (true)
Aug  8 20:29:00 hikari pengine[2733]:   notice: LogActions: Start   drbd_r0:1#011(hikari2)
Aug  8 20:29:00 hikari pengine[2733]:   notice: LogActions: Start   drbd_r1:1#011(hikari2)
Aug  8 20:29:00 hikari pengine[2733]:   notice: LogActions: Start   drbd_r2:1#011(hikari2)
Aug  8 20:29:00 hikari pengine[2733]:   notice: LogActions: Start   ipmi-fencing-2#011(hikari2)
Aug  8 20:29:00 hikari crmd[2734]:   notice: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
Aug  8 20:29:00 hikari crmd[2734]:     info: do_te_invoke: Processing graph 12 (ref=pe_calc-dc-1344475740-183) derived from /var/lib/pengine/pe-input-2645.bz2
Aug  8 20:29:00 hikari crmd[2734]:     info: te_rsc_command: Initiating action 7: monitor drbd_r0:1_monitor_0 on hikari2
Aug  8 20:29:00 hikari crmd[2734]:     info: te_rsc_command: Initiating action 8: monitor drbd_r1:1_monitor_0 on hikari2
Aug  8 20:29:00 hikari crmd[2734]:     info: te_rsc_command: Initiating action 9: monitor drbd_r2:1_monitor_0 on hikari2
Aug  8 20:29:00 hikari crmd[2734]:     info: te_rsc_command: Initiating action 10: monitor ipmi-fencing-1_monitor_0 on hikari2
Aug  8 20:29:00 hikari crmd[2734]:     info: te_rsc_command: Initiating action 11: monitor ipmi-fencing-2_monitor_0 on hikari2
Aug  8 20:29:00 hikari crmd[2734]:     info: te_rsc_command: Initiating action 12: monitor virt_monitor_0 on hikari2
Aug  8 20:29:00 hikari crmd[2734]:     info: te_rsc_command: Initiating action 13: monitor shared-gfs2:0_monitor_0 on hikari2
Aug  8 20:29:00 hikari crmd[2734]:     info: te_rsc_command: Initiating action 125: notify drbd_r0:0_pre_notify_start_0 on hikari (local)
Aug  8 20:29:00 hikari lrmd: [2731]: info: rsc:drbd_r0:0:76: notify
Aug  8 20:29:00 hikari crmd[2734]:     info: te_rsc_command: Initiating action 136: notify drbd_r1:0_pre_notify_start_0 on hikari (local)
Aug  8 20:29:00 hikari lrmd: [2731]: info: rsc:drbd_r1:0:77: notify
Aug  8 20:29:00 hikari crmd[2734]:     info: te_rsc_command: Initiating action 147: notify drbd_r2:0_pre_notify_start_0 on hikari (local)
Aug  8 20:29:00 hikari lrmd: [2731]: info: rsc:drbd_r2:0:78: notify
Aug  8 20:29:00 hikari lrmd: [2731]: info: RA output: (drbd_r1:0:notify:stdout)
Aug  8 20:29:00 hikari lrmd: [2731]: info: RA output: (drbd_r0:0:notify:stdout)
Aug  8 20:29:00 hikari lrmd: [2731]: info: RA output: (drbd_r2:0:notify:stdout)
Aug  8 20:29:00 hikari crmd[2734]:     info: process_lrm_event: LRM operation drbd_r0:0_notify_0 (call=76, rc=0, cib-update=0, confirmed=true) ok
Aug  8 20:29:00 hikari crmd[2734]:     info: process_lrm_event: LRM operation drbd_r1:0_notify_0 (call=77, rc=0, cib-update=0, confirmed=true) ok
Aug  8 20:29:00 hikari crmd[2734]:     info: process_lrm_event: LRM operation drbd_r2:0_notify_0 (call=78, rc=0, cib-update=0, confirmed=true) ok
Aug  8 20:29:00 hikari pengine[2733]:   notice: process_pe_message: Transition 12: PEngine Input stored in: /var/lib/pengine/pe-input-2645.bz2

Aug  8 20:29:01 hikari crmd[2734]:     info: te_rsc_command: Initiating action 6: probe_complete probe_complete on hikari2 - no waiting
Aug  8 20:29:01 hikari crmd[2734]:     info: te_rsc_command: Initiating action 18: start drbd_r0:1_start_0 on hikari2
Aug  8 20:29:01 hikari crmd[2734]:     info: te_rsc_command: Initiating action 48: start drbd_r1:1_start_0 on hikari2
Aug  8 20:29:01 hikari crmd[2734]:     info: te_rsc_command: Initiating action 78: start drbd_r2:1_start_0 on hikari2
Aug  8 20:29:01 hikari crmd[2734]:     info: te_rsc_command: Initiating action 106: start ipmi-fencing-2_start_0 on hikari2
Aug  8 20:29:01 hikari crmd[2734]:     info: te_rsc_command: Initiating action 107: monitor ipmi-fencing-2_monitor_240000 on hikari2
Aug  8 20:29:01 hikari crmd[2734]:     info: te_rsc_command: Initiating action 148: notify drbd_r2:0_post_notify_start_0 on hikari (local)
Aug  8 20:29:01 hikari lrmd: [2731]: info: rsc:drbd_r2:0:79: notify
Aug  8 20:29:01 hikari crmd[2734]:     info: te_rsc_command: Initiating action 149: notify drbd_r2:1_post_notify_start_0 on hikari2
Aug  8 20:29:01 hikari crmd[2734]:     info: te_rsc_command: Initiating action 126: notify drbd_r0:0_post_notify_start_0 on hikari (local)
Aug  8 20:29:01 hikari lrmd: [2731]: info: rsc:drbd_r0:0:80: notify
Aug  8 20:29:01 hikari crmd[2734]:     info: te_rsc_command: Initiating action 127: notify drbd_r0:1_post_notify_start_0 on hikari2
Aug  8 20:29:01 hikari crmd[2734]:     info: te_rsc_command: Initiating action 137: notify drbd_r1:0_post_notify_start_0 on hikari (local)
Aug  8 20:29:01 hikari lrmd: [2731]: info: rsc:drbd_r1:0:81: notify
Aug  8 20:29:01 hikari crmd[2734]:     info: te_rsc_command: Initiating action 138: notify drbd_r1:1_post_notify_start_0 on hikari2
Aug  8 20:29:01 hikari lrmd: [2731]: info: RA output: (drbd_r2:0:notify:stdout)
Aug  8 20:29:01 hikari crmd[2734]:     info: process_lrm_event: LRM operation drbd_r2:0_notify_0 (call=79, rc=0, cib-update=0, confirmed=true) ok
Aug  8 20:29:01 hikari lrmd: [2731]: info: RA output: (drbd_r0:0:notify:stdout)
Aug  8 20:29:01 hikari crmd[2734]:     info: process_lrm_event: LRM operation drbd_r0:0_notify_0 (call=80, rc=0, cib-update=0, confirmed=true) ok
Aug  8 20:29:01 hikari lrmd: [2731]: info: RA output: (drbd_r1:0:notify:stdout)
Aug  8 20:29:01 hikari crmd[2734]:     info: process_lrm_event: LRM operation drbd_r1:0_notify_0 (call=81, rc=0, cib-update=0, confirmed=true) ok
Aug  8 20:29:01 hikari crmd[2734]:     info: te_rsc_command: Initiating action 19: monitor drbd_r0:1_monitor_60000 on hikari2
Aug  8 20:29:01 hikari crmd[2734]:     info: te_rsc_command: Initiating action 79: monitor drbd_r2:1_monitor_60000 on hikari2
Aug  8 20:29:01 hikari crmd[2734]:     info: te_rsc_command: Initiating action 49: monitor drbd_r1:1_monitor_60000 on hikari2
Aug  8 20:29:02 hikari crmd[2734]:   notice: run_graph: ==== Transition 12 (Complete=44, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pengine/pe-input-2645.bz2): Complete
Aug  8 20:29:02 hikari crmd[2734]:   notice: do_state_transition: State transition S_TRANSITION_ENGINE -> S_IDLE [ input=I_TE_SUCCESS cause=C_FSA_INTERNAL origin=notify_crmd ]
Aug  8 20:29:02 hikari kernel: d-con r2: Handshake successful: Agreed network protocol version 100
Aug  8 20:29:02 hikari kernel: d-con r2: conn( WFConnection -> WFReportParams )
Aug  8 20:29:02 hikari kernel: d-con r2: Starting asender thread (from drbd_r_r2 [3178])
Aug  8 20:29:02 hikari kernel: d-con r1: Handshake successful: Agreed network protocol version 100
Aug  8 20:29:02 hikari kernel: d-con r1: conn( WFConnection -> WFReportParams )
Aug  8 20:29:02 hikari kernel: d-con r1: Starting asender thread (from drbd_r_r1 [3160])
Aug  8 20:29:02 hikari kernel: d-con r0: Handshake successful: Agreed network protocol version 100
Aug  8 20:29:02 hikari kernel: d-con r0: conn( WFConnection -> WFReportParams )
Aug  8 20:29:02 hikari kernel: d-con r0: Starting asender thread (from drbd_r_r0 [3173])
Aug  8 20:29:02 hikari kernel: block drbd3: drbd_sync_handshake:
Aug  8 20:29:02 hikari kernel: block drbd3: self BCF4DD33D4742C0B:01C7B124A030DD79:70D7BF522564C127:70D6BF522564C127 bits:0 flags:0
Aug  8 20:29:02 hikari kernel: block drbd3: peer 01C7B124A030DD78:0000000000000000:70D7BF522564C126:70D6BF522564C127 bits:0 flags:0
Aug  8 20:29:02 hikari kernel: block drbd3: uuid_compare()=1 by rule 70
Aug  8 20:29:02 hikari kernel: block drbd3: peer( Unknown -> Secondary ) conn( WFReportParams -> WFBitMapS ) pdsk( Outdated -> Consistent )
Aug  8 20:29:02 hikari kernel: block drbd3: send bitmap stats [Bytes(packets)]: plain 0(0), RLE 23(1), total 23; compression: 100.0%
Aug  8 20:29:02 hikari kernel: block drbd2: drbd_sync_handshake:
Aug  8 20:29:02 hikari kernel: block drbd2: self 7E9DAD1956B8DD55:B95815C06A35E43F:48A1C09854CE2B6D:48A0C09854CE2B6D bits:13 flags:0
Aug  8 20:29:02 hikari kernel: block drbd2: peer B95815C06A35E43E:0000000000000000:48A1C09854CE2B6C:48A0C09854CE2B6D bits:0 flags:0
Aug  8 20:29:02 hikari kernel: block drbd2: uuid_compare()=1 by rule 70
Aug  8 20:29:02 hikari kernel: block drbd2: peer( Unknown -> Secondary ) conn( WFReportParams -> WFBitMapS ) pdsk( Outdated -> Consistent )
Aug  8 20:29:02 hikari kernel: block drbd1: drbd_sync_handshake:
Aug  8 20:29:02 hikari kernel: block drbd1: self 3ED1C91C72980525:50F9C1C8E1CFE61F:F8F70763BCBD5E5F:F8F60763BCBD5E5F bits:0 flags:0
Aug  8 20:29:02 hikari kernel: block drbd1: peer 50F9C1C8E1CFE61E:0000000000000000:F8F70763BCBD5E5E:F8F60763BCBD5E5F bits:0 flags:0
Aug  8 20:29:02 hikari kernel: block drbd1: uuid_compare()=1 by rule 70
Aug  8 20:29:02 hikari kernel: block drbd1: peer( Unknown -> Secondary ) conn( WFReportParams -> WFBitMapS ) pdsk( Outdated -> Consistent )
Aug  8 20:29:02 hikari kernel: block drbd1: send bitmap stats [Bytes(packets)]: plain 0(0), RLE 21(1), total 21; compression: 100.0%
Aug  8 20:29:02 hikari kernel: block drbd2: send bitmap stats [Bytes(packets)]: plain 0(0), RLE 31(1), total 31; compression: 100.0%
Aug  8 20:29:02 hikari kernel: block drbd3: receive bitmap stats [Bytes(packets)]: plain 0(0), RLE 23(1), total 23; compression: 100.0%
Aug  8 20:29:02 hikari kernel: block drbd3: helper command: /sbin/drbdadm before-resync-source minor-3
Aug  8 20:29:02 hikari kernel: block drbd3: helper command: /sbin/drbdadm before-resync-source minor-3 exit code 0 (0x0)
Aug  8 20:29:02 hikari kernel: block drbd3: conn( WFBitMapS -> SyncSource ) pdsk( Consistent -> Inconsistent )
Aug  8 20:29:02 hikari kernel: block drbd3: Began resync as SyncSource (will sync 0 KB [0 bits set]).
Aug  8 20:29:02 hikari kernel: block drbd3: updated sync UUID BCF4DD33D4742C0B:01C8B124A030DD79:01C7B124A030DD79:70D7BF522564C127
Aug  8 20:29:02 hikari kernel: block drbd2: receive bitmap stats [Bytes(packets)]: plain 0(0), RLE 31(1), total 31; compression: 100.0%
Aug  8 20:29:02 hikari kernel: block drbd2: helper command: /sbin/drbdadm before-resync-source minor-2
Aug  8 20:29:02 hikari kernel: block drbd1: receive bitmap stats [Bytes(packets)]: plain 0(0), RLE 21(1), total 21; compression: 100.0%
Aug  8 20:29:02 hikari kernel: block drbd1: helper command: /sbin/drbdadm before-resync-source minor-1
Aug  8 20:29:02 hikari kernel: block drbd2: helper command: /sbin/drbdadm before-resync-source minor-2 exit code 0 (0x0)
Aug  8 20:29:02 hikari kernel: block drbd2: conn( WFBitMapS -> SyncSource ) pdsk( Consistent -> Inconsistent )
Aug  8 20:29:02 hikari kernel: block drbd2: Began resync as SyncSource (will sync 52 KB [13 bits set]).
Aug  8 20:29:02 hikari kernel: block drbd2: updated sync UUID 7E9DAD1956B8DD55:B95915C06A35E43F:B95815C06A35E43F:48A1C09854CE2B6D
Aug  8 20:29:02 hikari kernel: block drbd1: helper command: /sbin/drbdadm before-resync-source minor-1 exit code 0 (0x0)
Aug  8 20:29:02 hikari kernel: block drbd1: conn( WFBitMapS -> SyncSource ) pdsk( Consistent -> Inconsistent )
Aug  8 20:29:02 hikari kernel: block drbd1: Began resync as SyncSource (will sync 0 KB [0 bits set]).
Aug  8 20:29:02 hikari kernel: block drbd1: updated sync UUID 3ED1C91C72980525:50FAC1C8E1CFE61F:50F9C1C8E1CFE61F:F8F70763BCBD5E5F
Aug  8 20:29:02 hikari kernel: block drbd3: Resync done (total 1 sec; paused 0 sec; 0 K/sec)
Aug  8 20:29:02 hikari kernel: block drbd3: updated UUIDs BCF4DD33D4742C0B:0000000000000000:01C8B124A030DD79:01C7B124A030DD79
Aug  8 20:29:02 hikari kernel: block drbd3: conn( SyncSource -> Connected ) pdsk( Inconsistent -> UpToDate )
Aug  8 20:29:02 hikari kernel: block drbd3: bitmap WRITE of 0 pages took 0 jiffies
Aug  8 20:29:02 hikari kernel: block drbd3: 0 KB (0 bits) marked out-of-sync by on disk bit-map.
Aug  8 20:29:02 hikari kernel: block drbd1: Resync done (total 1 sec; paused 0 sec; 0 K/sec)
Aug  8 20:29:02 hikari kernel: block drbd1: updated UUIDs 3ED1C91C72980525:0000000000000000:50FAC1C8E1CFE61F:50F9C1C8E1CFE61F
Aug  8 20:29:02 hikari kernel: block drbd1: conn( SyncSource -> Connected ) pdsk( Inconsistent -> UpToDate )
Aug  8 20:29:02 hikari kernel: block drbd1: bitmap WRITE of 0 pages took 0 jiffies
Aug  8 20:29:02 hikari kernel: block drbd1: 0 KB (0 bits) marked out-of-sync by on disk bit-map.
Aug  8 20:29:02 hikari kernel: block drbd2: Resync done (total 1 sec; paused 0 sec; 52 K/sec)
Aug  8 20:29:02 hikari kernel: block drbd2: updated UUIDs 7E9DAD1956B8DD55:0000000000000000:B95915C06A35E43F:B95815C06A35E43F
Aug  8 20:29:02 hikari kernel: block drbd2: conn( SyncSource -> Connected ) pdsk( Inconsistent -> UpToDate )
Aug  8 20:29:02 hikari kernel: block drbd2: bitmap WRITE of 0 pages took 0 jiffies
Aug  8 20:29:02 hikari kernel: block drbd2: 0 KB (0 bits) marked out-of-sync by on disk bit-map.

Aug  8 20:30:02 hikari crmd[2734]:     info: abort_transition_graph: te_update_diff:176 - Triggered transition abort (complete=1, tag=nvpair, id=status-hikari2-master-drbd_r0.1, name=master-drbd_r0:1, value=5, magic=NA, cib=0.474.248) : Transient attribute: update
Aug  8 20:30:02 hikari crmd[2734]:   notice: do_state_transition: State transition S_IDLE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=abort_transition_graph ]
Aug  8 20:30:02 hikari crmd[2734]:     info: abort_transition_graph: te_update_diff:176 - Triggered transition abort (complete=1, tag=nvpair, id=status-hikari2-master-drbd_r2.1, name=master-drbd_r2:1, value=5, magic=NA, cib=0.474.249) : Transient attribute: update
Aug  8 20:30:02 hikari pengine[2733]:   notice: unpack_config: On loss of CCM Quorum: Ignore
Aug  8 20:30:02 hikari pengine[2733]:   notice: LogActions: Promote drbd_r0:1#011(Slave -> Master hikari2)
Aug  8 20:30:02 hikari pengine[2733]:   notice: LogActions: Promote drbd_r2:1#011(Slave -> Master hikari2)
Aug  8 20:30:02 hikari pengine[2733]:   notice: LogActions: Restart virt#011(Started hikari) <<<<<<<<<<<<<<<<<<<<<<<<<<<< BAD BAD BAD
Aug  8 20:30:02 hikari pengine[2733]:   notice: LogActions: Restart shared-gfs2:0#011(Started hikari) <<<<<<<<<<<<<<<<<<< BAD BAD BAD
Aug  8 20:30:02 hikari pengine[2733]:   notice: LogActions: Start   shared-gfs2:1#011(hikari2)
Aug  8 20:30:02 hikari crmd[2734]:   notice: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
Aug  8 20:30:02 hikari crmd[2734]:     info: do_te_invoke: Processing graph 13 (ref=pe_calc-dc-1344475802-215) derived from /var/lib/pengine/pe-input-2646.bz2
Aug  8 20:30:02 hikari crmd[2734]:     info: te_rsc_command: Initiating action 3: cancel drbd_r0:1_monitor_60000 on hikari2
Aug  8 20:30:02 hikari crmd[2734]:     info: te_rsc_command: Initiating action 2: cancel drbd_r2:1_monitor_60000 on hikari2
Aug  8 20:30:02 hikari crmd[2734]:     info: te_rsc_command: Initiating action 110: stop virt_stop_0 on hikari (local)
Aug  8 20:30:02 hikari lrmd: [2731]: info: cancel_op: operation monitor[62] on ocf::VirtualDomain::virt for client 2734, its parameters: hypervisor=[qemu:///system] CRM_meta_timeout=[200000] config=[/shared/definitions/r1lead.xml] migration_network_suffix=[-ptp] crm_feature_set=[3.0.6] CRM_meta_name=[monitor] migration_transport=[ssh] CRM_meta_interval=[10000]  cancelled
Aug  8 20:30:02 hikari lrmd: [2731]: info: rsc:virt:82: stop
Aug  8 20:30:02 hikari crmd[2734]:     info: process_lrm_event: LRM operation virt_monitor_10000 (call=62, status=1, cib-update=0, confirmed=true) Cancelled
Aug  8 20:30:02 hikari crmd[2734]:     info: te_rsc_command: Initiating action 130: notify drbd_r0:0_pre_notify_promote_0 on hikari (local)
Aug  8 20:30:02 hikari lrmd: [2731]: info: rsc:drbd_r0:0:83: notify
Aug  8 20:30:02 hikari crmd[2734]:     info: te_rsc_command: Initiating action 132: notify drbd_r0:1_pre_notify_promote_0 on hikari2
Aug  8 20:30:02 hikari crmd[2734]:     info: te_rsc_command: Initiating action 162: notify drbd_r2:0_pre_notify_promote_0 on hikari (local)
Aug  8 20:30:02 hikari lrmd: [2731]: info: rsc:drbd_r2:0:84: notify
Aug  8 20:30:02 hikari crmd[2734]:     info: te_rsc_command: Initiating action 164: notify drbd_r2:1_pre_notify_promote_0 on hikari2
Aug  8 20:30:02 hikari crmd[2734]:     info: process_lrm_event: LRM operation drbd_r2:0_notify_0 (call=84, rc=0, cib-update=0, confirmed=true) ok
Aug  8 20:30:02 hikari crmd[2734]:     info: process_lrm_event: LRM operation drbd_r0:0_notify_0 (call=83, rc=0, cib-update=0, confirmed=true) ok
Aug  8 20:30:02 hikari crmd[2734]:     info: abort_transition_graph: te_update_diff:176 - Triggered transition abort (complete=0, tag=nvpair, id=status-hikari2-master-drbd_r1.1, name=master-drbd_r1:1, value=5, magic=NA, cib=0.474.252) : Transient attribute: update
Aug  8 20:30:02 hikari VirtualDomain(virt)[11758]: INFO: Issuing graceful shutdown request for domain r1lead.
Aug  8 20:30:02 hikari pengine[2733]:   notice: process_pe_message: Transition 13: PEngine Input stored in: /var/lib/pengine/pe-input-2646.bz2
Aug  8 20:30:02 hikari lrmd: [2731]: info: RA output: (virt:stop:stdout) Domain r1lead is being shutdown


Aug  8 20:30:15 hikari avahi-daemon[2280]: Withdrawing address record for fe80::fc54:ff:feab:d7c6 on vnet0.
Aug  8 20:30:15 hikari kernel: br0: port 2(vnet0) entering disabled state
Aug  8 20:30:15 hikari kernel: device vnet0 left promiscuous mode
Aug  8 20:30:15 hikari kernel: br0: port 2(vnet0) entering disabled state
Aug  8 20:30:15 hikari crmd[2734]:     info: process_lrm_event: LRM operation virt_stop_0 (call=82, rc=0, cib-update=123, confirmed=true) ok
Aug  8 20:30:15 hikari crmd[2734]:   notice: run_graph: ==== Transition 13 (Complete=9, Pending=0, Fired=0, Skipped=14, Incomplete=12, Source=/var/lib/pengine/pe-input-2646.bz2): Stopped
Aug  8 20:30:15 hikari crmd[2734]:   notice: do_state_transition: State transition S_TRANSITION_ENGINE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=notify_crmd ]
Aug  8 20:30:15 hikari pengine[2733]:   notice: unpack_config: On loss of CCM Quorum: Ignore
Aug  8 20:30:15 hikari pengine[2733]:   notice: LogActions: Promote drbd_r0:1#011(Slave -> Master hikari2)
Aug  8 20:30:15 hikari pengine[2733]:   notice: LogActions: Promote drbd_r1:1#011(Slave -> Master hikari2)
Aug  8 20:30:15 hikari pengine[2733]:   notice: LogActions: Promote drbd_r2:1#011(Slave -> Master hikari2)
Aug  8 20:30:15 hikari pengine[2733]:   notice: LogActions: Start   virt#011(hikari)
Aug  8 20:30:15 hikari pengine[2733]:   notice: LogActions: Restart shared-gfs2:0#011(Started hikari)
Aug  8 20:30:15 hikari pengine[2733]:   notice: LogActions: Start   shared-gfs2:1#011(hikari2)
Aug  8 20:30:15 hikari crmd[2734]:   notice: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
Aug  8 20:30:15 hikari crmd[2734]:     info: do_te_invoke: Processing graph 14 (ref=pe_calc-dc-1344475815-225) derived from /var/lib/pengine/pe-input-2647.bz2
Aug  8 20:30:15 hikari crmd[2734]:     info: te_rsc_command: Initiating action 3: cancel drbd_r0:1_monitor_60000 on hikari2
Aug  8 20:30:15 hikari crmd[2734]:     info: te_rsc_command: Initiating action 1: cancel drbd_r1:1_monitor_60000 on hikari2
Aug  8 20:30:15 hikari crmd[2734]:     info: te_rsc_command: Initiating action 2: cancel drbd_r2:1_monitor_60000 on hikari2
Aug  8 20:30:15 hikari crmd[2734]:     info: te_rsc_command: Initiating action 129: notify drbd_r0:0_pre_notify_promote_0 on hikari (local)
Aug  8 20:30:15 hikari lrmd: [2731]: info: rsc:drbd_r0:0:85: notify
Aug  8 20:30:15 hikari crmd[2734]:     info: te_rsc_command: Initiating action 131: notify drbd_r0:1_pre_notify_promote_0 on hikari2
Aug  8 20:30:15 hikari crmd[2734]:     info: te_rsc_command: Initiating action 145: notify drbd_r1:0_pre_notify_promote_0 on hikari (local)
Aug  8 20:30:15 hikari lrmd: [2731]: info: rsc:drbd_r1:0:86: notify
Aug  8 20:30:15 hikari crmd[2734]:     info: te_rsc_command: Initiating action 147: notify drbd_r1:1_pre_notify_promote_0 on hikari2
Aug  8 20:30:15 hikari crmd[2734]:     info: te_rsc_command: Initiating action 161: notify drbd_r2:0_pre_notify_promote_0 on hikari (local)
Aug  8 20:30:15 hikari lrmd: [2731]: info: rsc:drbd_r2:0:87: notify
Aug  8 20:30:15 hikari cib[2729]:     info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='hikari2']//lrm_resource[@id='drbd_r0:1']/lrm_rsc_op[@id='drbd_r0:1_monitor_60000'] (origin=hikari2/crmd/24, version=0.474.254): ok (rc=0)
Aug  8 20:30:15 hikari crmd[2734]:     info: te_rsc_command: Initiating action 163: notify drbd_r2:1_pre_notify_promote_0 on hikari2
Aug  8 20:30:15 hikari crmd[2734]:     info: te_rsc_command: Initiating action 112: stop shared-gfs2:0_stop_0 on hikari (local)
Aug  8 20:30:15 hikari lrmd: [2731]: info: rsc:shared-gfs2:0:88: stop
Aug  8 20:30:15 hikari cib[2729]:     info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='hikari2']//lrm_resource[@id='drbd_r2:1']/lrm_rsc_op[@id='drbd_r2:1_monitor_60000'] (origin=hikari2/crmd/25, version=0.474.255): ok (rc=0)
Aug  8 20:30:15 hikari crmd[2734]:     info: process_lrm_event: LRM operation drbd_r0:0_notify_0 (call=85, rc=0, cib-update=0, confirmed=true) ok
Aug  8 20:30:16 hikari Filesystem(shared-gfs2:0)[11919]: INFO: Running stop for /dev/drbd/by-res/r0/0 on /shared
Aug  8 20:30:16 hikari crmd[2734]:     info: process_lrm_event: LRM operation drbd_r1:0_notify_0 (call=86, rc=0, cib-update=0, confirmed=true) ok
Aug  8 20:30:16 hikari crmd[2734]:     info: process_lrm_event: LRM operation drbd_r2:0_notify_0 (call=87, rc=0, cib-update=0, confirmed=true) ok
Aug  8 20:30:16 hikari crmd[2734]:     info: te_rsc_command: Initiating action 17: promote drbd_r0:1_promote_0 on hikari2
Aug  8 20:30:16 hikari crmd[2734]:     info: te_rsc_command: Initiating action 49: promote drbd_r1:1_promote_0 on hikari2
Aug  8 20:30:16 hikari crmd[2734]:     info: te_rsc_command: Initiating action 81: promote drbd_r2:1_promote_0 on hikari2
Aug  8 20:30:16 hikari pengine[2733]:   notice: process_pe_message: Transition 14: PEngine Input stored in: /var/lib/pengine/pe-input-2647.bz2
Aug  8 20:30:16 hikari Filesystem(shared-gfs2:0)[11919]: INFO: Trying to unmount /shared
Aug  8 20:30:16 hikari kernel: block drbd1: peer( Secondary -> Primary )
Aug  8 20:30:16 hikari kernel: block drbd2: peer( Secondary -> Primary )
Aug  8 20:30:16 hikari Filesystem(shared-gfs2:0)[11919]: INFO: unmounted /shared successfully
Aug  8 20:30:16 hikari kernel: block drbd3: peer( Secondary -> Primary )
Aug  8 20:30:16 hikari crmd[2734]:     info: te_rsc_command: Initiating action 130: notify drbd_r0:0_post_notify_promote_0 on hikari (local)
Aug  8 20:30:16 hikari lrmd: [2731]: info: rsc:drbd_r0:0:89: notify
Aug  8 20:30:16 hikari crmd[2734]:     info: te_rsc_command: Initiating action 132: notify drbd_r0:1_post_notify_promote_0 on hikari2
Aug  8 20:30:16 hikari crmd[2734]:     info: process_lrm_event: LRM operation shared-gfs2:0_stop_0 (call=88, rc=0, cib-update=125, confirmed=true) ok
Aug  8 20:30:16 hikari crmd[2734]:     info: te_rsc_command: Initiating action 146: notify drbd_r1:0_post_notify_promote_0 on hikari (local)
Aug  8 20:30:16 hikari lrmd: [2731]: info: rsc:drbd_r1:0:90: notify
Aug  8 20:30:16 hikari crmd[2734]:     info: te_rsc_command: Initiating action 148: notify drbd_r1:1_post_notify_promote_0 on hikari2
Aug  8 20:30:16 hikari crmd[2734]:     info: te_rsc_command: Initiating action 162: notify drbd_r2:0_post_notify_promote_0 on hikari (local)
Aug  8 20:30:16 hikari lrmd: [2731]: info: rsc:drbd_r2:0:91: notify
Aug  8 20:30:16 hikari crmd[2734]:     info: te_rsc_command: Initiating action 164: notify drbd_r2:1_post_notify_promote_0 on hikari2
Aug  8 20:30:16 hikari lrmd: [2731]: info: RA output: (drbd_r0:0:notify:stdout)
Aug  8 20:30:16 hikari crmd[2734]:     info: process_lrm_event: LRM operation drbd_r0:0_notify_0 (call=89, rc=0, cib-update=0, confirmed=true) ok
Aug  8 20:30:16 hikari crmd[2734]:     info: te_rsc_command: Initiating action 113: start shared-gfs2:0_start_0 on hikari (local)
Aug  8 20:30:16 hikari lrmd: [2731]: info: rsc:shared-gfs2:0:92: start
Aug  8 20:30:16 hikari crmd[2734]:     info: te_rsc_command: Initiating action 114: start shared-gfs2:1_start_0 on hikari2
Aug  8 20:30:16 hikari lrmd: [2731]: info: RA output: (drbd_r1:0:notify:stdout)
Aug  8 20:30:16 hikari crmd[2734]:     info: process_lrm_event: LRM operation drbd_r1:0_notify_0 (call=90, rc=0, cib-update=0, confirmed=true) ok
Aug  8 20:30:16 hikari lrmd: [2731]: info: RA output: (drbd_r2:0:notify:stdout)
Aug  8 20:30:16 hikari crmd[2734]:     info: process_lrm_event: LRM operation drbd_r2:0_notify_0 (call=91, rc=0, cib-update=0, confirmed=true) ok
Aug  8 20:30:16 hikari Filesystem(shared-gfs2:0)[12098]: INFO: Running start for /dev/drbd/by-res/r0/0 on /shared
Aug  8 20:30:16 hikari kernel: GFS2: fsid=: Trying to join cluster "lock_dlm", "rlc-cluster:shared"
Aug  8 20:30:16 hikari kernel: dlm: Using TCP for communications
Aug  8 20:30:16 hikari kernel: GFS2: fsid=rlc-cluster:shared.0: Joined cluster. Now mounting FS...
Aug  8 20:30:16 hikari kernel: GFS2: fsid=rlc-cluster:shared.0: jid=0, already locked for use
Aug  8 20:30:16 hikari kernel: GFS2: fsid=rlc-cluster:shared.0: jid=0: Looking at journal...
Aug  8 20:30:16 hikari kernel: GFS2: fsid=rlc-cluster:shared.0: jid=0: Done
Aug  8 20:30:16 hikari kernel: GFS2: fsid=rlc-cluster:shared.0: jid=1: Trying to acquire journal lock...
Aug  8 20:30:16 hikari kernel: GFS2: fsid=rlc-cluster:shared.0: jid=1: Looking at journal...
Aug  8 20:30:16 hikari kernel: GFS2: fsid=rlc-cluster:shared.0: jid=1: Done
Aug  8 20:30:16 hikari ntpd[2431]: Deleting interface #12 vnet0, fe80::fc54:ff:feab:d7c6#123, interface stats: received=0, sent=0, dropped=0, active_time=1741 secs
Aug  8 20:30:16 hikari kernel: dlm: connecting to 2
Aug  8 20:30:16 hikari crmd[2734]:     info: process_lrm_event: LRM operation shared-gfs2:0_start_0 (call=92, rc=0, cib-update=126, confirmed=true) ok
Aug  8 20:30:16 hikari crmd[2734]:     info: te_rsc_command: Initiating action 110: start virt_start_0 on hikari (local)
Aug  8 20:30:16 hikari lrmd: [2731]: info: rsc:virt:93: start
Aug  8 20:30:17 hikari VirtualDomain(virt)[12222]: INFO: Domain name "r1lead" saved to /var/run/heartbeat/rsctmp/VirtualDomain-virt.state.
Aug  8 20:30:17 hikari kernel: device vnet0 entered promiscuous mode
Aug  8 20:30:17 hikari kernel: br0: port 2(vnet0) entering forwarding state
Aug  8 20:30:17 hikari qemu-kvm: Could not find keytab file: /etc/qemu/krb5.tab: No such file or directory
Aug  8 20:30:17 hikari lrmd: [2731]: info: RA output: (virt:start:stdout) Domain r1lead started
Aug  8 20:30:17 hikari crmd[2734]:     info: process_lrm_event: LRM operation virt_start_0 (call=93, rc=0, cib-update=127, confirmed=true) ok
Aug  8 20:30:17 hikari crmd[2734]:     info: te_rsc_command: Initiating action 111: monitor virt_monitor_10000 on hikari (local)
Aug  8 20:30:17 hikari lrmd: [2731]: info: rsc:virt:94: monitor
Aug  8 20:30:17 hikari crmd[2734]:     info: process_lrm_event: LRM operation virt_monitor_10000 (call=94, rc=0, cib-update=128, confirmed=false) ok
Aug  8 20:30:17 hikari crmd[2734]:   notice: run_graph: ==== Transition 14 (Complete=46, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pengine/pe-input-2647.bz2): Complete
Aug  8 20:30:17 hikari crmd[2734]:   notice: do_state_transition: State transition S_TRANSITION_ENGINE -> S_IDLE [ input=I_TE_SUCCESS cause=C_FSA_INTERNAL origin=notify_crmd ]
Aug  8 20:30:19 hikari avahi-daemon[2280]: Registering new address record for fe80::fc54:ff:feab:d7c6 on vnet0.*.
Aug  8 20:30:20 hikari ntpd[2431]: Listening on interface #13 vnet0, fe80::fc54:ff:feab:d7c6#123 Enabled
Aug  8 20:30:27 hikari kernel: kvm: 12266: cpu0 disabled perfctr wrmsr: 0xc1 data 0xabcd
Aug  8 20:30:52 hikari stonith-ng[2730]:     info: stonith_command: Processed st_execute from lrmd: rc=-1
Aug  8 20:30:52 hikari stonith-ng[2730]:     info: log_operation: ipmi-fencing-1: Getting status of IPMI:xxx.xxx.33.248...Done




Aug  8 20:34:46 hikari cib[2729]:     info: cib_stats: Processed 120 operations (9416.00us average, 0% utilization) in the last 10min
Aug  8 20:34:52 hikari lrmd: [2731]: info: rsc:ipmi-fencing-1:13: monitor
Aug  8 20:34:52 hikari stonith-ng[2730]:     info: stonith_command: Processed st_execute from lrmd: rc=-1
Aug  8 20:34:52 hikari stonith-ng[2730]:     info: log_operation: ipmi-fencing-1: Getting status of IPMI:xxx.xxx.33.248...Done