[Pacemaker] trying to set up sbd stonith
Sander van Vugt
mail at sandervanvugt.nl
Wed Feb 24 19:53:40 UTC 2010
Hi,
STONITH seems to be driving me to feel like SMITH lately, so after
unsuccessful attempts to get drac5 and rackpdu to do their work, I'm now
focusing on the external/sbd plugin. It doesn't work too well though, so
if anyone can give me a hint, I would appreciate.
Here's what I've done:
0. Installed SLES with the complete HAE, patched it up to most recent
state.
1. Created a 1 GB LUN on my iSCSI SAN.
2. Marked the LUN for use by sbd, using sbd
-d /dev/disk/by-id/scsi-<longnumber>
3. Created a clone resource that looks as follows:
<clone id="sbd-clone">
<meta_attributes id="sbd-clone-meta_attributes">
<nvpair id="sbd-clone-meta_attributes-clone-max"
name="clone-max" value="2"/>
<nvpair id="sbd-clone-meta_attributes-target-role"
name="target-role" value="Started"/>
</meta_attributes>
<primitive class="stonith" id="sbd" type="external/sbd">
<operations id="sbd-operations">
<op id="sbd-op-monitor-15" interval="15" name="monitor"
start-delay="15" timeout="15"/>
</operations>
<instance_attributes id="sbd-instance_attributes">
<nvpair id="sbd-instance_attributes-sbd_device"
name="sbd_device"
value="/dev/disk/by-id/scsi-1494554000000000000000000030000002a0600000f000000"/>
</instance_attributes>
</primitive>
</clone>
An interesting issue is that sbd -d /dev/dm-0 list gave me information
like:
0 xen2 reset xen1
1 xen1 reset xen2
Which looks like they are trying to do a STONITH shootout? So I cleared
that information, using sbd -d /dev/dm-0 <nodenames> clear which looks
useful, but didn't fix the issue. (Neither did a bold sbd -d /dev/dm-0
create).
Attached the last part of my messages file for more information as well.
Would appreciate if someone can put me on the right track with this.
Thanks,
Sander
-------------- next part --------------
eb 24 20:40:45 xen2 pengine: [5536]: info: get_failcount: sbd-clone has failed 1000000 times on xen2
Feb 24 20:40:45 xen2 pengine: [5536]: WARN: common_apply_stickiness: Forcing sbd-clone away from xen2 after 1000000 failures (max=1000000)
Feb 24 20:40:45 xen2 pengine: [5536]: WARN: native_color: Resource DRAC-xen1 cannot run anywhere
Feb 24 20:40:45 xen2 pengine: [5536]: WARN: native_color: Resource DRAC-xen2 cannot run anywhere
Feb 24 20:40:45 xen2 pengine: [5536]: WARN: native_color: Resource sbd:1 cannot run anywhere
Feb 24 20:40:45 xen2 pengine: [5536]: WARN: native_color: Resource sbd:0 cannot run anywhere
Feb 24 20:40:45 xen2 pengine: [5536]: WARN: native_color: Resource dlm:1 cannot run anywhere
Feb 24 20:40:45 xen2 pengine: [5536]: WARN: native_color: Resource clvm:1 cannot run anywhere
Feb 24 20:40:45 xen2 pengine: [5536]: WARN: native_color: Resource o2cb:1 cannot run anywhere
Feb 24 20:40:45 xen2 pengine: [5536]: WARN: native_color: Resource ocfs2data:1 cannot run anywhere
Feb 24 20:40:45 xen2 pengine: [5536]: WARN: native_color: Resource ocfs2-config:1 cannot run anywhere
Feb 24 20:40:45 xen2 pengine: [5536]: notice: RecurringOp: Start recurring monitor (10s) for dlm:0 on xen2
Feb 24 20:40:45 xen2 pengine: [5536]: notice: RecurringOp: Start recurring monitor (10s) for clvm:0 on xen2
Feb 24 20:40:45 xen2 pengine: [5536]: notice: RecurringOp: Start recurring monitor (10s) for LVMforVM1 on xen2
Feb 24 20:40:45 xen2 pengine: [5536]: notice: RecurringOp: Start recurring monitor (10s) for LVMforVM2 on xen2
Feb 24 20:40:45 xen2 pengine: [5536]: notice: RecurringOp: Start recurring monitor (10s) for vm1 on xen2
Feb 24 20:40:45 xen2 pengine: [5536]: notice: RecurringOp: Start recurring monitor (120s) for o2cb:0 on xen2
Feb 24 20:40:45 xen2 pengine: [5536]: notice: RecurringOp: Start recurring monitor (20s) for ocfs2data:0 on xen2
Feb 24 20:40:45 xen2 pengine: [5536]: notice: RecurringOp: Start recurring monitor (20s) for ocfs2-config:0 on xen2
Feb 24 20:40:45 xen2 pengine: [5536]: WARN: stage6: Scheduling Node xen1 for STONITH
Feb 24 20:40:45 xen2 pengine: [5536]: info: native_start_constraints: Ordering dlm:0_start_0 after xen1 recovery
Feb 24 20:40:45 xen2 pengine: [5536]: info: native_start_constraints: Ordering clvm:0_start_0 after xen1 recovery
Feb 24 20:40:45 xen2 pengine: [5536]: info: native_start_constraints: Ordering LVMforVM1_start_0 after xen1 recovery
Feb 24 20:40:45 xen2 pengine: [5536]: info: native_start_constraints: Ordering LVMforVM2_start_0 after xen1 recovery
Feb 24 20:40:45 xen2 pengine: [5536]: info: native_start_constraints: Ordering vm1_start_0 after xen1 recovery
Feb 24 20:40:45 xen2 pengine: [5536]: info: native_start_constraints: Ordering o2cb:0_start_0 after xen1 recovery
Feb 24 20:40:45 xen2 pengine: [5536]: info: native_start_constraints: Ordering ocfs2data:0_start_0 after xen1 recovery
Feb 24 20:40:45 xen2 pengine: [5536]: info: native_start_constraints: Ordering ocfs2-config:0_start_0 after xen1 recovery
Feb 24 20:40:45 xen2 pengine: [5536]: info: find_compatible_child: Colocating clvm:0 with dlm:0 on xen2
Feb 24 20:40:45 xen2 pengine: [5536]: notice: LogActions: Leave resource DRAC-xen1 (Stopped)
Feb 24 20:40:45 xen2 pengine: [5536]: notice: LogActions: Leave resource DRAC-xen2 (Stopped)
Feb 24 20:40:45 xen2 pengine: [5536]: notice: LogActions: Stop resource sbd:0 (xen2)
Feb 24 20:40:45 xen2 pengine: [5536]: notice: LogActions: Leave resource sbd:1 (Stopped)
Feb 24 20:40:45 xen2 pengine: [5536]: notice: LogActions: Start dlm:0 (xen2)
Feb 24 20:40:45 xen2 pengine: [5536]: notice: LogActions: Leave resource dlm:1 (Stopped)
Feb 24 20:40:45 xen2 pengine: [5536]: notice: LogActions: Start clvm:0 (xen2)
Feb 24 20:40:45 xen2 pengine: [5536]: notice: LogActions: Leave resource clvm:1 (Stopped)
Feb 24 20:40:45 xen2 pengine: [5536]: notice: LogActions: Start LVMforVM1 (xen2)
Feb 24 20:40:45 xen2 pengine: [5536]: notice: LogActions: Start LVMforVM2 (xen2)
Feb 24 20:40:45 xen2 pengine: [5536]: notice: LogActions: Start vm1 (xen2)
Feb 24 20:40:45 xen2 pengine: [5536]: notice: LogActions: Start o2cb:0 (xen2)
Feb 24 20:40:45 xen2 pengine: [5536]: notice: LogActions: Leave resource o2cb:1 (Stopped)
Feb 24 20:40:45 xen2 pengine: [5536]: notice: LogActions: Start ocfs2data:0 (xen2)
Feb 24 20:40:45 xen2 pengine: [5536]: notice: LogActions: Leave resource ocfs2data:1 (Stopped)
Feb 24 20:40:45 xen2 pengine: [5536]: notice: LogActions: Start ocfs2-config:0 (xen2)
Feb 24 20:40:45 xen2 pengine: [5536]: notice: LogActions: Leave resource ocfs2-config:1 (Stopped)
Feb 24 20:40:45 xen2 crmd: [5537]: info: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
Feb 24 20:40:45 xen2 crmd: [5537]: info: unpack_graph: Unpacked transition 23: 34 actions in 34 synapses
Feb 24 20:40:45 xen2 crmd: [5537]: info: do_te_invoke: Processing graph 23 (ref=pe_calc-dc-1267040445-44) derived from /var/lib/pengine/pe-warn-26.bz2
Feb 24 20:40:45 xen2 crmd: [5537]: info: te_pseudo_action: Pseudo action 14 fired and confirmed
Feb 24 20:40:45 xen2 crmd: [5537]: info: te_pseudo_action: Pseudo action 29 fired and confirmed
Feb 24 20:40:45 xen2 crmd: [5537]: info: te_pseudo_action: Pseudo action 35 fired and confirmed
Feb 24 20:40:45 xen2 crmd: [5537]: info: te_pseudo_action: Pseudo action 41 fired and confirmed
Feb 24 20:40:45 xen2 crmd: [5537]: info: te_pseudo_action: Pseudo action 45 fired and confirmed
Feb 24 20:40:45 xen2 crmd: [5537]: info: te_fence_node: Executing reboot fencing operation (47) on xen1 (timeout=20000)
Feb 24 20:40:45 xen2 stonithd: [5532]: info: client tengine [pid: 5537] requests a STONITH operation RESET on node xen1
Feb 24 20:40:45 xen2 stonithd: [5532]: info: we can't manage xen1, broadcast request to other nodes
Feb 24 20:40:45 xen2 stonithd: [5532]: info: Broadcasting the message succeeded: require others to stonith node xen1.
Feb 24 20:40:45 xen2 crmd: [5537]: info: te_pseudo_action: Pseudo action 7 fired and confirmed
Feb 24 20:40:45 xen2 crmd: [5537]: info: te_pseudo_action: Pseudo action 11 fired and confirmed
Feb 24 20:40:45 xen2 pengine: [5536]: WARN: process_pe_message: Transition 23: WARNINGs found during PE processing. PEngine Input stored in: /var/lib/pengine/pe-warn-26.bz2
Feb 24 20:40:45 xen2 pengine: [5536]: info: process_pe_message: Configuration ERRORs found during PE processing. Please run "crm_verify -L" to identify issues.
Feb 24 20:41:05 xen2 stonithd: [5532]: ERROR: Failed to STONITH the node xen1: optype=RESET, op_result=TIMEOUT
Feb 24 20:41:05 xen2 crmd: [5537]: info: tengine_stonith_callback: call=-22, optype=1, node_name=xen1, result=2, node_list=, action=47:23:0:e6c42e56-088a-4674-b420-201efc520279
Feb 24 20:41:05 xen2 crmd: [5537]: ERROR: tengine_stonith_callback: Stonith of xen1 failed (2)... aborting transition.
Feb 24 20:41:05 xen2 crmd: [5537]: info: abort_transition_graph: tengine_stonith_callback:398 - Triggered transition abort (complete=0) : Stonith failed
Feb 24 20:41:05 xen2 crmd: [5537]: info: update_abort_priority: Abort priority upgraded from 0 to 1000000
Feb 24 20:41:05 xen2 crmd: [5537]: info: update_abort_priority: Abort action done superceeded by restart
Feb 24 20:41:05 xen2 crmd: [5537]: info: run_graph: ====================================================
Feb 24 20:41:05 xen2 crmd: [5537]: notice: run_graph: Transition 23 (Complete=8, Pending=0, Fired=0, Skipped=20, Incomplete=6, Source=/var/lib/pengine/pe-warn-26.bz2): Stopped
Feb 24 20:41:05 xen2 crmd: [5537]: info: te_graph_trigger: Transition 23 is now complete
Feb 24 20:41:05 xen2 crmd: [5537]: info: do_state_transition: State transition S_TRANSITION_ENGINE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=notify_crmd ]
Feb 24 20:41:05 xen2 crmd: [5537]: info: do_state_transition: All 1 cluster nodes are eligible to run resources.
Feb 24 20:41:05 xen2 crmd: [5537]: info: do_pe_invoke: Query 65: Requesting the current CIB: S_POLICY_ENGINE
Feb 24 20:41:05 xen2 crmd: [5537]: info: do_pe_invoke_callback: Invoking the PE: ref=pe_calc-dc-1267040465-45, seq=124, quorate=0
Feb 24 20:41:05 xen2 pengine: [5536]: notice: unpack_config: On loss of CCM Quorum: Ignore
Feb 24 20:41:05 xen2 pengine: [5536]: info: unpack_config: Node scores: 'red' = -INFINITY, 'yellow' = 0, 'green' = 0
Feb 24 20:41:05 xen2 pengine: [5536]: info: determine_online_status: Node xen2 is online
Feb 24 20:41:05 xen2 pengine: [5536]: info: unpack_rsc_op: sbd:0_start_0 on xen2 returned 1 (unknown error) instead of the expected value: 0 (ok)
Feb 24 20:41:05 xen2 pengine: [5536]: WARN: unpack_rsc_op: Processing failed op sbd:0_start_0 on xen2: unknown error
Feb 24 20:41:05 xen2 pengine: [5536]: ERROR: unpack_simple_rsc_order: Constraint DlmBeforeO2cb: no resource found for LHS (dlm)
Feb 24 20:41:05 xen2 pengine: [5536]: notice: native_print: DRAC-xen1 (stonith:external/drac5): Stopped
Feb 24 20:41:05 xen2 pengine: [5536]: notice: native_print: DRAC-xen2 (stonith:external/drac5): Stopped
Feb 24 20:41:05 xen2 pengine: [5536]: notice: clone_print: Clone Set: sbd-clone
Feb 24 20:41:05 xen2 pengine: [5536]: notice: native_print: sbd:0 (stonith:external/sbd): Started xen2 FAILED
Feb 24 20:41:05 xen2 pengine: [5536]: notice: print_list: Stopped: [ sbd:1 ]
Feb 24 20:41:05 xen2 pengine: [5536]: notice: clone_print: Clone Set: dlm-clone
Feb 24 20:41:05 xen2 pengine: [5536]: notice: print_list: Stopped: [ dlm:0 dlm:1 ]
Feb 24 20:41:05 xen2 pengine: [5536]: notice: clone_print: Clone Set: clvm-clone
Feb 24 20:41:05 xen2 pengine: [5536]: notice: print_list: Stopped: [ clvm:0 clvm:1 ]
Feb 24 20:41:05 xen2 pengine: [5536]: notice: native_print: LVMforVM1 (ocf::heartbeat:LVM): Stopped
Feb 24 20:41:05 xen2 pengine: [5536]: notice: native_print: LVMforVM2 (ocf::heartbeat:LVM): Stopped
Feb 24 20:41:05 xen2 pengine: [5536]: notice: native_print: vm1 (ocf::heartbeat:Xen): Stopped
Feb 24 20:41:05 xen2 pengine: [5536]: notice: clone_print: Clone Set: o2cb-clone (unique)
Feb 24 20:41:05 xen2 pengine: [5536]: notice: native_print: o2cb:0 (ocf::ocfs2:o2cb): Stopped
Feb 24 20:41:05 xen2 pengine: [5536]: notice: native_print: o2cb:1 (ocf::ocfs2:o2cb): Stopped
Feb 24 20:41:05 xen2 pengine: [5536]: notice: clone_print: Clone Set: ocfs2-data
Feb 24 20:41:05 xen2 pengine: [5536]: notice: print_list: Stopped: [ ocfs2data:0 ocfs2data:1 ]
Feb 24 20:41:05 xen2 pengine: [5536]: notice: clone_print: Clone Set: ocfs2-config-clone
Feb 24 20:41:05 xen2 pengine: [5536]: notice: print_list: Stopped: [ ocfs2-config:0 ocfs2-config:1 ]
Feb 24 20:41:05 xen2 pengine: [5536]: info: get_failcount: sbd-clone has failed 1000000 times on xen2
Feb 24 20:41:05 xen2 pengine: [5536]: WARN: common_apply_stickiness: Forcing sbd-clone away from xen2 after 1000000 failures (max=1000000)
Feb 24 20:41:05 xen2 pengine: [5536]: WARN: native_color: Resource DRAC-xen1 cannot run anywhere
Feb 24 20:41:05 xen2 pengine: [5536]: WARN: native_color: Resource DRAC-xen2 cannot run anywhere
Feb 24 20:41:05 xen2 pengine: [5536]: WARN: native_color: Resource sbd:1 cannot run anywhere
Feb 24 20:41:05 xen2 pengine: [5536]: WARN: native_color: Resource sbd:0 cannot run anywhere
Feb 24 20:41:05 xen2 pengine: [5536]: WARN: native_color: Resource dlm:1 cannot run anywhere
Feb 24 20:41:05 xen2 pengine: [5536]: WARN: native_color: Resource clvm:1 cannot run anywhere
Feb 24 20:41:05 xen2 pengine: [5536]: WARN: native_color: Resource o2cb:1 cannot run anywhere
Feb 24 20:41:05 xen2 pengine: [5536]: WARN: native_color: Resource ocfs2data:1 cannot run anywhere
Feb 24 20:41:05 xen2 pengine: [5536]: WARN: native_color: Resource ocfs2-config:1 cannot run anywhere
Feb 24 20:41:05 xen2 pengine: [5536]: notice: RecurringOp: Start recurring monitor (10s) for dlm:0 on xen2
Feb 24 20:41:05 xen2 pengine: [5536]: notice: RecurringOp: Start recurring monitor (10s) for clvm:0 on xen2
Feb 24 20:41:05 xen2 pengine: [5536]: notice: RecurringOp: Start recurring monitor (10s) for LVMforVM1 on xen2
Feb 24 20:41:05 xen2 pengine: [5536]: notice: RecurringOp: Start recurring monitor (10s) for LVMforVM2 on xen2
Feb 24 20:41:05 xen2 pengine: [5536]: notice: RecurringOp: Start recurring monitor (10s) for vm1 on xen2
Feb 24 20:41:05 xen2 pengine: [5536]: notice: RecurringOp: Start recurring monitor (120s) for o2cb:0 on xen2
Feb 24 20:41:05 xen2 pengine: [5536]: notice: RecurringOp: Start recurring monitor (20s) for ocfs2data:0 on xen2
Feb 24 20:41:05 xen2 pengine: [5536]: notice: RecurringOp: Start recurring monitor (20s) for ocfs2-config:0 on xen2
Feb 24 20:41:05 xen2 pengine: [5536]: WARN: stage6: Scheduling Node xen1 for STONITH
Feb 24 20:41:05 xen2 pengine: [5536]: info: native_start_constraints: Ordering dlm:0_start_0 after xen1 recovery
Feb 24 20:41:05 xen2 pengine: [5536]: info: native_start_constraints: Ordering clvm:0_start_0 after xen1 recovery
Feb 24 20:41:05 xen2 pengine: [5536]: info: native_start_constraints: Ordering LVMforVM1_start_0 after xen1 recovery
Feb 24 20:41:05 xen2 pengine: [5536]: info: native_start_constraints: Ordering LVMforVM2_start_0 after xen1 recovery
Feb 24 20:41:06 xen2 pengine: [5536]: info: native_start_constraints: Ordering vm1_start_0 after xen1 recovery
Feb 24 20:41:06 xen2 pengine: [5536]: info: native_start_constraints: Ordering o2cb:0_start_0 after xen1 recovery
Feb 24 20:41:06 xen2 pengine: [5536]: info: native_start_constraints: Ordering ocfs2data:0_start_0 after xen1 recovery
Feb 24 20:41:06 xen2 pengine: [5536]: info: native_start_constraints: Ordering ocfs2-config:0_start_0 after xen1 recovery
Feb 24 20:41:06 xen2 pengine: [5536]: info: find_compatible_child: Colocating clvm:0 with dlm:0 on xen2
Feb 24 20:41:06 xen2 pengine: [5536]: notice: LogActions: Leave resource DRAC-xen1 (Stopped)
Feb 24 20:41:06 xen2 pengine: [5536]: notice: LogActions: Leave resource DRAC-xen2 (Stopped)
Feb 24 20:41:06 xen2 pengine: [5536]: notice: LogActions: Stop resource sbd:0 (xen2)
Feb 24 20:41:06 xen2 pengine: [5536]: notice: LogActions: Leave resource sbd:1 (Stopped)
Feb 24 20:41:06 xen2 pengine: [5536]: notice: LogActions: Start dlm:0 (xen2)
Feb 24 20:41:06 xen2 pengine: [5536]: notice: LogActions: Leave resource dlm:1 (Stopped)
Feb 24 20:41:06 xen2 pengine: [5536]: notice: LogActions: Start clvm:0 (xen2)
Feb 24 20:41:06 xen2 pengine: [5536]: notice: LogActions: Leave resource clvm:1 (Stopped)
Feb 24 20:41:06 xen2 pengine: [5536]: notice: LogActions: Start LVMforVM1 (xen2)
Feb 24 20:41:06 xen2 pengine: [5536]: notice: LogActions: Start LVMforVM2 (xen2)
Feb 24 20:41:06 xen2 pengine: [5536]: notice: LogActions: Start vm1 (xen2)
Feb 24 20:41:06 xen2 pengine: [5536]: notice: LogActions: Start o2cb:0 (xen2)
Feb 24 20:41:06 xen2 pengine: [5536]: notice: LogActions: Leave resource o2cb:1 (Stopped)
Feb 24 20:41:06 xen2 pengine: [5536]: notice: LogActions: Start ocfs2data:0 (xen2)
Feb 24 20:41:06 xen2 pengine: [5536]: notice: LogActions: Leave resource ocfs2data:1 (Stopped)
Feb 24 20:41:06 xen2 pengine: [5536]: notice: LogActions: Start ocfs2-config:0 (xen2)
Feb 24 20:41:06 xen2 pengine: [5536]: notice: LogActions: Leave resource ocfs2-config:1 (Stopped)
Feb 24 20:41:06 xen2 crmd: [5537]: info: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
Feb 24 20:41:06 xen2 crmd: [5537]: info: unpack_graph: Unpacked transition 24: 34 actions in 34 synapses
Feb 24 20:41:06 xen2 crmd: [5537]: info: do_te_invoke: Processing graph 24 (ref=pe_calc-dc-1267040465-45) derived from /var/lib/pengine/pe-warn-27.bz2
Feb 24 20:41:06 xen2 crmd: [5537]: info: te_pseudo_action: Pseudo action 14 fired and confirmed
Feb 24 20:41:06 xen2 crmd: [5537]: info: te_pseudo_action: Pseudo action 29 fired and confirmed
Feb 24 20:41:06 xen2 crmd: [5537]: info: te_pseudo_action: Pseudo action 35 fired and confirmed
Feb 24 20:41:06 xen2 crmd: [5537]: info: te_pseudo_action: Pseudo action 41 fired and confirmed
Feb 24 20:41:06 xen2 crmd: [5537]: info: te_pseudo_action: Pseudo action 45 fired and confirmed
Feb 24 20:41:06 xen2 crmd: [5537]: info: te_fence_node: Executing reboot fencing operation (47) on xen1 (timeout=20000)
Feb 24 20:41:06 xen2 stonithd: [5532]: info: client tengine [pid: 5537] requests a STONITH operation RESET on node xen1
Feb 24 20:41:06 xen2 stonithd: [5532]: info: we can't manage xen1, broadcast request to other nodes
Feb 24 20:41:06 xen2 stonithd: [5532]: info: Broadcasting the message succeeded: require others to stonith node xen1.
Feb 24 20:41:06 xen2 crmd: [5537]: info: te_pseudo_action: Pseudo action 7 fired and confirmed
Feb 24 20:41:06 xen2 crmd: [5537]: info: te_pseudo_action: Pseudo action 11 fired and confirmed
Feb 24 20:41:06 xen2 pengine: [5536]: WARN: process_pe_message: Transition 24: WARNINGs found during PE processing. PEngine Input stored in: /var/lib/pengine/pe-warn-27.bz2
Feb 24 20:41:06 xen2 pengine: [5536]: info: process_pe_message: Configuration ERRORs found during PE processing. Please run "crm_verify -L" to identify issues.
Feb 24 20:41:26 xen2 stonithd: [5532]: ERROR: Failed to STONITH the node xen1: optype=RESET, op_result=TIMEOUT
Feb 24 20:41:26 xen2 crmd: [5537]: info: tengine_stonith_callback: call=-23, optype=1, node_name=xen1, result=2, node_list=, action=47:24:0:e6c42e56-088a-4674-b420-201efc520279
Feb 24 20:41:26 xen2 crmd: [5537]: ERROR: tengine_stonith_callback: Stonith of xen1 failed (2)... aborting transition.
Feb 24 20:41:26 xen2 crmd: [5537]: info: abort_transition_graph: tengine_stonith_callback:398 - Triggered transition abort (complete=0) : Stonith failed
Feb 24 20:41:26 xen2 crmd: [5537]: info: update_abort_priority: Abort priority upgraded from 0 to 1000000
Feb 24 20:41:26 xen2 crmd: [5537]: info: update_abort_priority: Abort action done superceeded by restart
Feb 24 20:41:26 xen2 crmd: [5537]: info: run_graph: ====================================================
Feb 24 20:41:26 xen2 crmd: [5537]: notice: run_graph: Transition 24 (Complete=8, Pending=0, Fired=0, Skipped=20, Incomplete=6, Source=/var/lib/pengine/pe-warn-27.bz2): Stopped
Feb 24 20:41:26 xen2 crmd: [5537]: info: te_graph_trigger: Transition 24 is now complete
Feb 24 20:41:26 xen2 crmd: [5537]: info: do_state_transition: State transition S_TRANSITION_ENGINE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=notify_crmd ]
Feb 24 20:41:26 xen2 crmd: [5537]: info: do_state_transition: All 1 cluster nodes are eligible to run resources.
Feb 24 20:41:26 xen2 crmd: [5537]: info: do_pe_invoke: Query 66: Requesting the current CIB: S_POLICY_ENGINE
Feb 24 20:41:26 xen2 crmd: [5537]: info: do_pe_invoke_callback: Invoking the PE: ref=pe_calc-dc-1267040486-46, seq=124, quorate=0
Feb 24 20:41:26 xen2 pengine: [5536]: notice: unpack_config: On loss of CCM Quorum: Ignore
Feb 24 20:41:26 xen2 pengine: [5536]: info: unpack_config: Node scores: 'red' = -INFINITY, 'yellow' = 0, 'green' = 0
Feb 24 20:41:26 xen2 pengine: [5536]: info: determine_online_status: Node xen2 is online
Feb 24 20:41:26 xen2 pengine: [5536]: info: unpack_rsc_op: sbd:0_start_0 on xen2 returned 1 (unknown error) instead of the expected value: 0 (ok)
Feb 24 20:41:26 xen2 pengine: [5536]: WARN: unpack_rsc_op: Processing failed op sbd:0_start_0 on xen2: unknown error
Feb 24 20:41:26 xen2 pengine: [5536]: ERROR: unpack_simple_rsc_order: Constraint DlmBeforeO2cb: no resource found for LHS (dlm)
Feb 24 20:41:26 xen2 pengine: [5536]: notice: native_print: DRAC-xen1 (stonith:external/drac5): Stopped
Feb 24 20:41:26 xen2 pengine: [5536]: notice: native_print: DRAC-xen2 (stonith:external/drac5): Stopped
Feb 24 20:41:26 xen2 pengine: [5536]: notice: clone_print: Clone Set: sbd-clone
Feb 24 20:41:26 xen2 pengine: [5536]: notice: native_print: sbd:0 (stonith:external/sbd): Started xen2 FAILED
Feb 24 20:41:26 xen2 pengine: [5536]: notice: print_list: Stopped: [ sbd:1 ]
Feb 24 20:41:26 xen2 pengine: [5536]: notice: clone_print: Clone Set: dlm-clone
Feb 24 20:41:26 xen2 pengine: [5536]: notice: print_list: Stopped: [ dlm:0 dlm:1 ]
Feb 24 20:41:26 xen2 pengine: [5536]: notice: clone_print: Clone Set: clvm-clone
Feb 24 20:41:26 xen2 pengine: [5536]: notice: print_list: Stopped: [ clvm:0 clvm:1 ]
Feb 24 20:41:26 xen2 pengine: [5536]: notice: native_print: LVMforVM1 (ocf::heartbeat:LVM): Stopped
Feb 24 20:41:26 xen2 pengine: [5536]: notice: native_print: LVMforVM2 (ocf::heartbeat:LVM): Stopped
Feb 24 20:41:26 xen2 pengine: [5536]: notice: native_print: vm1 (ocf::heartbeat:Xen): Stopped
Feb 24 20:41:26 xen2 pengine: [5536]: notice: clone_print: Clone Set: o2cb-clone (unique)
Feb 24 20:41:26 xen2 pengine: [5536]: notice: native_print: o2cb:0 (ocf::ocfs2:o2cb): Stopped
Feb 24 20:41:26 xen2 pengine: [5536]: notice: native_print: o2cb:1 (ocf::ocfs2:o2cb): Stopped
Feb 24 20:41:26 xen2 pengine: [5536]: notice: clone_print: Clone Set: ocfs2-data
Feb 24 20:41:26 xen2 pengine: [5536]: notice: print_list: Stopped: [ ocfs2data:0 ocfs2data:1 ]
Feb 24 20:41:26 xen2 pengine: [5536]: notice: clone_print: Clone Set: ocfs2-config-clone
Feb 24 20:41:26 xen2 pengine: [5536]: notice: print_list: Stopped: [ ocfs2-config:0 ocfs2-config:1 ]
Feb 24 20:41:26 xen2 pengine: [5536]: info: get_failcount: sbd-clone has failed 1000000 times on xen2
Feb 24 20:41:26 xen2 pengine: [5536]: WARN: common_apply_stickiness: Forcing sbd-clone away from xen2 after 1000000 failures (max=1000000)
Feb 24 20:41:26 xen2 pengine: [5536]: WARN: native_color: Resource DRAC-xen1 cannot run anywhere
Feb 24 20:41:26 xen2 pengine: [5536]: WARN: native_color: Resource DRAC-xen2 cannot run anywhere
Feb 24 20:41:26 xen2 pengine: [5536]: WARN: native_color: Resource sbd:1 cannot run anywhere
Feb 24 20:41:26 xen2 pengine: [5536]: WARN: native_color: Resource sbd:0 cannot run anywhere
Feb 24 20:41:26 xen2 pengine: [5536]: WARN: native_color: Resource dlm:1 cannot run anywhere
Feb 24 20:41:26 xen2 pengine: [5536]: WARN: native_color: Resource clvm:1 cannot run anywhere
Feb 24 20:41:26 xen2 pengine: [5536]: WARN: native_color: Resource o2cb:1 cannot run anywhere
Feb 24 20:41:26 xen2 pengine: [5536]: WARN: native_color: Resource ocfs2data:1 cannot run anywhere
Feb 24 20:41:26 xen2 pengine: [5536]: WARN: native_color: Resource ocfs2-config:1 cannot run anywhere
Feb 24 20:41:26 xen2 pengine: [5536]: notice: RecurringOp: Start recurring monitor (10s) for dlm:0 on xen2
Feb 24 20:41:26 xen2 pengine: [5536]: notice: RecurringOp: Start recurring monitor (10s) for clvm:0 on xen2
Feb 24 20:41:26 xen2 pengine: [5536]: notice: RecurringOp: Start recurring monitor (10s) for LVMforVM1 on xen2
Feb 24 20:41:26 xen2 pengine: [5536]: notice: RecurringOp: Start recurring monitor (10s) for LVMforVM2 on xen2
Feb 24 20:41:26 xen2 pengine: [5536]: notice: RecurringOp: Start recurring monitor (10s) for vm1 on xen2
Feb 24 20:41:26 xen2 pengine: [5536]: notice: RecurringOp: Start recurring monitor (120s) for o2cb:0 on xen2
Feb 24 20:41:26 xen2 pengine: [5536]: notice: RecurringOp: Start recurring monitor (20s) for ocfs2data:0 on xen2
Feb 24 20:41:26 xen2 pengine: [5536]: notice: RecurringOp: Start recurring monitor (20s) for ocfs2-config:0 on xen2
Feb 24 20:41:26 xen2 pengine: [5536]: WARN: stage6: Scheduling Node xen1 for STONITH
Feb 24 20:41:26 xen2 pengine: [5536]: info: native_start_constraints: Ordering dlm:0_start_0 after xen1 recovery
Feb 24 20:41:26 xen2 pengine: [5536]: info: native_start_constraints: Ordering clvm:0_start_0 after xen1 recovery
Feb 24 20:41:26 xen2 pengine: [5536]: info: native_start_constraints: Ordering LVMforVM1_start_0 after xen1 recovery
Feb 24 20:41:26 xen2 pengine: [5536]: info: native_start_constraints: Ordering LVMforVM2_start_0 after xen1 recovery
Feb 24 20:41:26 xen2 pengine: [5536]: info: native_start_constraints: Ordering vm1_start_0 after xen1 recovery
Feb 24 20:41:26 xen2 pengine: [5536]: info: native_start_constraints: Ordering o2cb:0_start_0 after xen1 recovery
Feb 24 20:41:26 xen2 pengine: [5536]: info: native_start_constraints: Ordering ocfs2data:0_start_0 after xen1 recovery
Feb 24 20:41:26 xen2 pengine: [5536]: info: native_start_constraints: Ordering ocfs2-config:0_start_0 after xen1 recovery
Feb 24 20:41:26 xen2 pengine: [5536]: info: find_compatible_child: Colocating clvm:0 with dlm:0 on xen2
Feb 24 20:41:26 xen2 pengine: [5536]: notice: LogActions: Leave resource DRAC-xen1 (Stopped)
Feb 24 20:41:26 xen2 pengine: [5536]: notice: LogActions: Leave resource DRAC-xen2 (Stopped)
Feb 24 20:41:26 xen2 pengine: [5536]: notice: LogActions: Stop resource sbd:0 (xen2)
Feb 24 20:41:26 xen2 pengine: [5536]: notice: LogActions: Leave resource sbd:1 (Stopped)
Feb 24 20:41:26 xen2 pengine: [5536]: notice: LogActions: Start dlm:0 (xen2)
Feb 24 20:41:26 xen2 pengine: [5536]: notice: LogActions: Leave resource dlm:1 (Stopped)
Feb 24 20:41:26 xen2 pengine: [5536]: notice: LogActions: Start clvm:0 (xen2)
Feb 24 20:41:26 xen2 pengine: [5536]: notice: LogActions: Leave resource clvm:1 (Stopped)
Feb 24 20:41:26 xen2 pengine: [5536]: notice: LogActions: Start LVMforVM1 (xen2)
Feb 24 20:41:26 xen2 pengine: [5536]: notice: LogActions: Start LVMforVM2 (xen2)
Feb 24 20:41:26 xen2 pengine: [5536]: notice: LogActions: Start vm1 (xen2)
Feb 24 20:41:26 xen2 pengine: [5536]: notice: LogActions: Start o2cb:0 (xen2)
Feb 24 20:41:26 xen2 pengine: [5536]: notice: LogActions: Leave resource o2cb:1 (Stopped)
Feb 24 20:41:26 xen2 pengine: [5536]: notice: LogActions: Start ocfs2data:0 (xen2)
Feb 24 20:41:26 xen2 pengine: [5536]: notice: LogActions: Leave resource ocfs2data:1 (Stopped)
Feb 24 20:41:26 xen2 pengine: [5536]: notice: LogActions: Start ocfs2-config:0 (xen2)
Feb 24 20:41:26 xen2 pengine: [5536]: notice: LogActions: Leave resource ocfs2-config:1 (Stopped)
Feb 24 20:41:26 xen2 crmd: [5537]: info: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
Feb 24 20:41:26 xen2 crmd: [5537]: info: unpack_graph: Unpacked transition 25: 34 actions in 34 synapses
Feb 24 20:41:26 xen2 crmd: [5537]: info: do_te_invoke: Processing graph 25 (ref=pe_calc-dc-1267040486-46) derived from /var/lib/pengine/pe-warn-28.bz2
Feb 24 20:41:26 xen2 crmd: [5537]: info: te_pseudo_action: Pseudo action 14 fired and confirmed
Feb 24 20:41:26 xen2 crmd: [5537]: info: te_pseudo_action: Pseudo action 29 fired and confirmed
Feb 24 20:41:26 xen2 crmd: [5537]: info: te_pseudo_action: Pseudo action 35 fired and confirmed
Feb 24 20:41:26 xen2 crmd: [5537]: info: te_pseudo_action: Pseudo action 41 fired and confirmed
Feb 24 20:41:26 xen2 crmd: [5537]: info: te_pseudo_action: Pseudo action 45 fired and confirmed
Feb 24 20:41:26 xen2 crmd: [5537]: info: te_fence_node: Executing reboot fencing operation (47) on xen1 (timeout=20000)
Feb 24 20:41:26 xen2 stonithd: [5532]: info: client tengine [pid: 5537] requests a STONITH operation RESET on node xen1
Feb 24 20:41:26 xen2 stonithd: [5532]: info: we can't manage xen1, broadcast request to other nodes
Feb 24 20:41:26 xen2 stonithd: [5532]: info: Broadcasting the message succeeded: require others to stonith node xen1.
Feb 24 20:41:26 xen2 crmd: [5537]: info: te_pseudo_action: Pseudo action 7 fired and confirmed
Feb 24 20:41:26 xen2 crmd: [5537]: info: te_pseudo_action: Pseudo action 11 fired and confirmed
Feb 24 20:41:26 xen2 pengine: [5536]: WARN: process_pe_message: Transition 25: WARNINGs found during PE processing. PEngine Input stored in: /var/lib/pengine/pe-warn-28.bz2
Feb 24 20:41:26 xen2 pengine: [5536]: info: process_pe_message: Configuration ERRORs found during PE processing. Please run "crm_verify -L" to identify issues.
Feb 24 20:41:46 xen2 stonithd: [5532]: ERROR: Failed to STONITH the node xen1: optype=RESET, op_result=TIMEOUT
Feb 24 20:41:46 xen2 crmd: [5537]: info: tengine_stonith_callback: call=-24, optype=1, node_name=xen1, result=2, node_list=, action=47:25:0:e6c42e56-088a-4674-b420-201efc520279
Feb 24 20:41:46 xen2 crmd: [5537]: ERROR: tengine_stonith_callback: Stonith of xen1 failed (2)... aborting transition.
Feb 24 20:41:46 xen2 crmd: [5537]: info: abort_transition_graph: tengine_stonith_callback:398 - Triggered transition abort (complete=0) : Stonith failed
Feb 24 20:41:46 xen2 crmd: [5537]: info: update_abort_priority: Abort priority upgraded from 0 to 1000000
Feb 24 20:41:46 xen2 crmd: [5537]: info: update_abort_priority: Abort action done superceeded by restart
Feb 24 20:41:46 xen2 crmd: [5537]: info: run_graph: ====================================================
Feb 24 20:41:46 xen2 crmd: [5537]: notice: run_graph: Transition 25 (Complete=8, Pending=0, Fired=0, Skipped=20, Incomplete=6, Source=/var/lib/pengine/pe-warn-28.bz2): Stopped
Feb 24 20:41:46 xen2 crmd: [5537]: info: te_graph_trigger: Transition 25 is now complete
Feb 24 20:41:46 xen2 crmd: [5537]: info: do_state_transition: State transition S_TRANSITION_ENGINE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=notify_crmd ]
Feb 24 20:41:46 xen2 crmd: [5537]: info: do_state_transition: All 1 cluster nodes are eligible to run resources.
Feb 24 20:41:46 xen2 crmd: [5537]: info: do_pe_invoke: Query 67: Requesting the current CIB: S_POLICY_ENGINE
Feb 24 20:41:46 xen2 crmd: [5537]: info: do_pe_invoke_callback: Invoking the PE: ref=pe_calc-dc-1267040506-47, seq=124, quorate=0
Feb 24 20:41:46 xen2 pengine: [5536]: notice: unpack_config: On loss of CCM Quorum: Ignore
Feb 24 20:41:46 xen2 pengine: [5536]: info: unpack_config: Node scores: 'red' = -INFINITY, 'yellow' = 0, 'green' = 0
Feb 24 20:41:46 xen2 pengine: [5536]: info: determine_online_status: Node xen2 is online
Feb 24 20:41:46 xen2 pengine: [5536]: info: unpack_rsc_op: sbd:0_start_0 on xen2 returned 1 (unknown error) instead of the expected value: 0 (ok)
Feb 24 20:41:46 xen2 pengine: [5536]: WARN: unpack_rsc_op: Processing failed op sbd:0_start_0 on xen2: unknown error
Feb 24 20:41:46 xen2 pengine: [5536]: ERROR: unpack_simple_rsc_order: Constraint DlmBeforeO2cb: no resource found for LHS (dlm)
Feb 24 20:41:46 xen2 pengine: [5536]: notice: native_print: DRAC-xen1 (stonith:external/drac5): Stopped
Feb 24 20:41:46 xen2 pengine: [5536]: notice: native_print: DRAC-xen2 (stonith:external/drac5): Stopped
Feb 24 20:41:46 xen2 pengine: [5536]: notice: clone_print: Clone Set: sbd-clone
Feb 24 20:41:46 xen2 pengine: [5536]: notice: native_print: sbd:0 (stonith:external/sbd): Started xen2 FAILED
Feb 24 20:41:46 xen2 pengine: [5536]: notice: print_list: Stopped: [ sbd:1 ]
Feb 24 20:41:46 xen2 pengine: [5536]: notice: clone_print: Clone Set: dlm-clone
Feb 24 20:41:46 xen2 pengine: [5536]: notice: print_list: Stopped: [ dlm:0 dlm:1 ]
Feb 24 20:41:46 xen2 pengine: [5536]: notice: clone_print: Clone Set: clvm-clone
Feb 24 20:41:46 xen2 pengine: [5536]: notice: print_list: Stopped: [ clvm:0 clvm:1 ]
Feb 24 20:41:46 xen2 pengine: [5536]: notice: native_print: LVMforVM1 (ocf::heartbeat:LVM): Stopped
Feb 24 20:41:46 xen2 pengine: [5536]: notice: native_print: LVMforVM2 (ocf::heartbeat:LVM): Stopped
Feb 24 20:41:46 xen2 pengine: [5536]: notice: native_print: vm1 (ocf::heartbeat:Xen): Stopped
Feb 24 20:41:46 xen2 pengine: [5536]: notice: clone_print: Clone Set: o2cb-clone (unique)
Feb 24 20:41:46 xen2 pengine: [5536]: notice: native_print: o2cb:0 (ocf::ocfs2:o2cb): Stopped
Feb 24 20:41:46 xen2 pengine: [5536]: notice: native_print: o2cb:1 (ocf::ocfs2:o2cb): Stopped
Feb 24 20:41:46 xen2 pengine: [5536]: notice: clone_print: Clone Set: ocfs2-data
Feb 24 20:41:46 xen2 pengine: [5536]: notice: print_list: Stopped: [ ocfs2data:0 ocfs2data:1 ]
Feb 24 20:41:46 xen2 pengine: [5536]: notice: clone_print: Clone Set: ocfs2-config-clone
Feb 24 20:41:46 xen2 pengine: [5536]: notice: print_list: Stopped: [ ocfs2-config:0 ocfs2-config:1 ]
Feb 24 20:41:46 xen2 pengine: [5536]: info: get_failcount: sbd-clone has failed 1000000 times on xen2
Feb 24 20:41:46 xen2 pengine: [5536]: WARN: common_apply_stickiness: Forcing sbd-clone away from xen2 after 1000000 failures (max=1000000)
Feb 24 20:41:46 xen2 pengine: [5536]: WARN: native_color: Resource DRAC-xen1 cannot run anywhere
Feb 24 20:41:46 xen2 pengine: [5536]: WARN: native_color: Resource DRAC-xen2 cannot run anywhere
Feb 24 20:41:46 xen2 pengine: [5536]: WARN: native_color: Resource sbd:1 cannot run anywhere
Feb 24 20:41:46 xen2 pengine: [5536]: WARN: native_color: Resource sbd:0 cannot run anywhere
Feb 24 20:41:46 xen2 pengine: [5536]: WARN: native_color: Resource dlm:1 cannot run anywhere
Feb 24 20:41:46 xen2 pengine: [5536]: WARN: native_color: Resource clvm:1 cannot run anywhere
Feb 24 20:41:46 xen2 pengine: [5536]: WARN: native_color: Resource o2cb:1 cannot run anywhere
Feb 24 20:41:46 xen2 pengine: [5536]: WARN: native_color: Resource ocfs2data:1 cannot run anywhere
Feb 24 20:41:46 xen2 pengine: [5536]: WARN: native_color: Resource ocfs2-config:1 cannot run anywhere
Feb 24 20:41:46 xen2 pengine: [5536]: notice: RecurringOp: Start recurring monitor (10s) for dlm:0 on xen2
Feb 24 20:41:46 xen2 pengine: [5536]: notice: RecurringOp: Start recurring monitor (10s) for clvm:0 on xen2
Feb 24 20:41:46 xen2 pengine: [5536]: notice: RecurringOp: Start recurring monitor (10s) for LVMforVM1 on xen2
Feb 24 20:41:46 xen2 pengine: [5536]: notice: RecurringOp: Start recurring monitor (10s) for LVMforVM2 on xen2
Feb 24 20:41:46 xen2 pengine: [5536]: notice: RecurringOp: Start recurring monitor (10s) for vm1 on xen2
Feb 24 20:41:46 xen2 pengine: [5536]: notice: RecurringOp: Start recurring monitor (120s) for o2cb:0 on xen2
Feb 24 20:41:46 xen2 pengine: [5536]: notice: RecurringOp: Start recurring monitor (20s) for ocfs2data:0 on xen2
Feb 24 20:41:46 xen2 pengine: [5536]: notice: RecurringOp: Start recurring monitor (20s) for ocfs2-config:0 on xen2
Feb 24 20:41:46 xen2 pengine: [5536]: WARN: stage6: Scheduling Node xen1 for STONITH
Feb 24 20:41:46 xen2 pengine: [5536]: info: native_start_constraints: Ordering dlm:0_start_0 after xen1 recovery
Feb 24 20:41:46 xen2 pengine: [5536]: info: native_start_constraints: Ordering clvm:0_start_0 after xen1 recovery
Feb 24 20:41:46 xen2 pengine: [5536]: info: native_start_constraints: Ordering LVMforVM1_start_0 after xen1 recovery
Feb 24 20:41:46 xen2 pengine: [5536]: info: native_start_constraints: Ordering LVMforVM2_start_0 after xen1 recovery
Feb 24 20:41:47 xen2 pengine: [5536]: info: native_start_constraints: Ordering vm1_start_0 after xen1 recovery
Feb 24 20:41:47 xen2 pengine: [5536]: info: native_start_constraints: Ordering o2cb:0_start_0 after xen1 recovery
Feb 24 20:41:47 xen2 pengine: [5536]: info: native_start_constraints: Ordering ocfs2data:0_start_0 after xen1 recovery
Feb 24 20:41:47 xen2 pengine: [5536]: info: native_start_constraints: Ordering ocfs2-config:0_start_0 after xen1 recovery
Feb 24 20:41:47 xen2 pengine: [5536]: info: find_compatible_child: Colocating clvm:0 with dlm:0 on xen2
Feb 24 20:41:47 xen2 pengine: [5536]: notice: LogActions: Leave resource DRAC-xen1 (Stopped)
Feb 24 20:41:47 xen2 pengine: [5536]: notice: LogActions: Leave resource DRAC-xen2 (Stopped)
Feb 24 20:41:47 xen2 pengine: [5536]: notice: LogActions: Stop resource sbd:0 (xen2)
Feb 24 20:41:47 xen2 pengine: [5536]: notice: LogActions: Leave resource sbd:1 (Stopped)
Feb 24 20:41:47 xen2 pengine: [5536]: notice: LogActions: Start dlm:0 (xen2)
Feb 24 20:41:47 xen2 pengine: [5536]: notice: LogActions: Leave resource dlm:1 (Stopped)
Feb 24 20:41:47 xen2 pengine: [5536]: notice: LogActions: Start clvm:0 (xen2)
Feb 24 20:41:47 xen2 pengine: [5536]: notice: LogActions: Leave resource clvm:1 (Stopped)
Feb 24 20:41:47 xen2 pengine: [5536]: notice: LogActions: Start LVMforVM1 (xen2)
Feb 24 20:41:47 xen2 pengine: [5536]: notice: LogActions: Start LVMforVM2 (xen2)
Feb 24 20:41:47 xen2 pengine: [5536]: notice: LogActions: Start vm1 (xen2)
Feb 24 20:41:47 xen2 pengine: [5536]: notice: LogActions: Start o2cb:0 (xen2)
Feb 24 20:41:47 xen2 pengine: [5536]: notice: LogActions: Leave resource o2cb:1 (Stopped)
Feb 24 20:41:47 xen2 pengine: [5536]: notice: LogActions: Start ocfs2data:0 (xen2)
Feb 24 20:41:47 xen2 pengine: [5536]: notice: LogActions: Leave resource ocfs2data:1 (Stopped)
Feb 24 20:41:47 xen2 pengine: [5536]: notice: LogActions: Start ocfs2-config:0 (xen2)
Feb 24 20:41:47 xen2 pengine: [5536]: notice: LogActions: Leave resource ocfs2-config:1 (Stopped)
Feb 24 20:41:47 xen2 crmd: [5537]: info: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
Feb 24 20:41:47 xen2 crmd: [5537]: info: unpack_graph: Unpacked transition 26: 34 actions in 34 synapses
Feb 24 20:41:47 xen2 crmd: [5537]: info: do_te_invoke: Processing graph 26 (ref=pe_calc-dc-1267040506-47) derived from /var/lib/pengine/pe-warn-29.bz2
Feb 24 20:41:47 xen2 crmd: [5537]: info: te_pseudo_action: Pseudo action 14 fired and confirmed
Feb 24 20:41:47 xen2 crmd: [5537]: info: te_pseudo_action: Pseudo action 29 fired and confirmed
Feb 24 20:41:47 xen2 crmd: [5537]: info: te_pseudo_action: Pseudo action 35 fired and confirmed
Feb 24 20:41:47 xen2 crmd: [5537]: info: te_pseudo_action: Pseudo action 41 fired and confirmed
Feb 24 20:41:47 xen2 crmd: [5537]: info: te_pseudo_action: Pseudo action 45 fired and confirmed
Feb 24 20:41:47 xen2 crmd: [5537]: info: te_fence_node: Executing reboot fencing operation (47) on xen1 (timeout=20000)
Feb 24 20:41:47 xen2 stonithd: [5532]: info: client tengine [pid: 5537] requests a STONITH operation RESET on node xen1
Feb 24 20:41:47 xen2 stonithd: [5532]: info: we can't manage xen1, broadcast request to other nodes
Feb 24 20:41:47 xen2 stonithd: [5532]: info: Broadcasting the message succeeded: require others to stonith node xen1.
Feb 24 20:41:47 xen2 crmd: [5537]: info: te_pseudo_action: Pseudo action 7 fired and confirmed
Feb 24 20:41:47 xen2 crmd: [5537]: info: te_pseudo_action: Pseudo action 11 fired and confirmed
Feb 24 20:41:47 xen2 pengine: [5536]: WARN: process_pe_message: Transition 26: WARNINGs found during PE processing. PEngine Input stored in: /var/lib/pengine/pe-warn-29.bz2
Feb 24 20:41:47 xen2 pengine: [5536]: info: process_pe_message: Configuration ERRORs found during PE processing. Please run "crm_verify -L" to identify issues.
Feb 24 20:42:07 xen2 stonithd: [5532]: ERROR: Failed to STONITH the node xen1: optype=RESET, op_result=TIMEOUT
Feb 24 20:42:07 xen2 crmd: [5537]: info: tengine_stonith_callback: call=-25, optype=1, node_name=xen1, result=2, node_list=, action=47:26:0:e6c42e56-088a-4674-b420-201efc520279
Feb 24 20:42:07 xen2 crmd: [5537]: ERROR: tengine_stonith_callback: Stonith of xen1 failed (2)... aborting transition.
Feb 24 20:42:07 xen2 crmd: [5537]: info: abort_transition_graph: tengine_stonith_callback:398 - Triggered transition abort (complete=0) : Stonith failed
Feb 24 20:42:07 xen2 crmd: [5537]: info: update_abort_priority: Abort priority upgraded from 0 to 1000000
Feb 24 20:42:07 xen2 crmd: [5537]: info: update_abort_priority: Abort action done superceeded by restart
Feb 24 20:42:07 xen2 crmd: [5537]: info: run_graph: ====================================================
Feb 24 20:42:07 xen2 crmd: [5537]: notice: run_graph: Transition 26 (Complete=8, Pending=0, Fired=0, Skipped=20, Incomplete=6, Source=/var/lib/pengine/pe-warn-29.bz2): Stopped
Feb 24 20:42:07 xen2 crmd: [5537]: info: te_graph_trigger: Transition 26 is now complete
Feb 24 20:42:07 xen2 crmd: [5537]: info: do_state_transition: State transition S_TRANSITION_ENGINE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=notify_crmd ]
Feb 24 20:42:07 xen2 crmd: [5537]: info: do_state_transition: All 1 cluster nodes are eligible to run resources.
Feb 24 20:42:07 xen2 crmd: [5537]: info: do_pe_invoke: Query 68: Requesting the current CIB: S_POLICY_ENGINE
Feb 24 20:42:07 xen2 crmd: [5537]: info: do_pe_invoke_callback: Invoking the PE: ref=pe_calc-dc-1267040527-48, seq=124, quorate=0
Feb 24 20:42:07 xen2 pengine: [5536]: notice: unpack_config: On loss of CCM Quorum: Ignore
Feb 24 20:42:07 xen2 pengine: [5536]: info: unpack_config: Node scores: 'red' = -INFINITY, 'yellow' = 0, 'green' = 0
Feb 24 20:42:07 xen2 pengine: [5536]: info: determine_online_status: Node xen2 is online
Feb 24 20:42:07 xen2 pengine: [5536]: info: unpack_rsc_op: sbd:0_start_0 on xen2 returned 1 (unknown error) instead of the expected value: 0 (ok)
Feb 24 20:42:07 xen2 pengine: [5536]: WARN: unpack_rsc_op: Processing failed op sbd:0_start_0 on xen2: unknown error
Feb 24 20:42:07 xen2 pengine: [5536]: ERROR: unpack_simple_rsc_order: Constraint DlmBeforeO2cb: no resource found for LHS (dlm)
Feb 24 20:42:07 xen2 pengine: [5536]: notice: native_print: DRAC-xen1 (stonith:external/drac5): Stopped
Feb 24 20:42:07 xen2 pengine: [5536]: notice: native_print: DRAC-xen2 (stonith:external/drac5): Stopped
Feb 24 20:42:07 xen2 pengine: [5536]: notice: clone_print: Clone Set: sbd-clone
Feb 24 20:42:07 xen2 pengine: [5536]: notice: native_print: sbd:0 (stonith:external/sbd): Started xen2 FAILED
Feb 24 20:42:07 xen2 pengine: [5536]: notice: print_list: Stopped: [ sbd:1 ]
Feb 24 20:42:07 xen2 pengine: [5536]: notice: clone_print: Clone Set: dlm-clone
Feb 24 20:42:07 xen2 pengine: [5536]: notice: print_list: Stopped: [ dlm:0 dlm:1 ]
Feb 24 20:42:07 xen2 pengine: [5536]: notice: clone_print: Clone Set: clvm-clone
Feb 24 20:42:07 xen2 pengine: [5536]: notice: print_list: Stopped: [ clvm:0 clvm:1 ]
Feb 24 20:42:07 xen2 pengine: [5536]: notice: native_print: LVMforVM1 (ocf::heartbeat:LVM): Stopped
Feb 24 20:42:07 xen2 pengine: [5536]: notice: native_print: LVMforVM2 (ocf::heartbeat:LVM): Stopped
Feb 24 20:42:07 xen2 pengine: [5536]: notice: native_print: vm1 (ocf::heartbeat:Xen): Stopped
Feb 24 20:42:07 xen2 pengine: [5536]: notice: clone_print: Clone Set: o2cb-clone (unique)
Feb 24 20:42:07 xen2 pengine: [5536]: notice: native_print: o2cb:0 (ocf::ocfs2:o2cb): Stopped
Feb 24 20:42:07 xen2 pengine: [5536]: notice: native_print: o2cb:1 (ocf::ocfs2:o2cb): Stopped
Feb 24 20:42:07 xen2 pengine: [5536]: notice: clone_print: Clone Set: ocfs2-data
Feb 24 20:42:07 xen2 pengine: [5536]: notice: print_list: Stopped: [ ocfs2data:0 ocfs2data:1 ]
Feb 24 20:42:07 xen2 pengine: [5536]: notice: clone_print: Clone Set: ocfs2-config-clone
Feb 24 20:42:07 xen2 pengine: [5536]: notice: print_list: Stopped: [ ocfs2-config:0 ocfs2-config:1 ]
Feb 24 20:42:07 xen2 pengine: [5536]: info: get_failcount: sbd-clone has failed 1000000 times on xen2
Feb 24 20:42:07 xen2 pengine: [5536]: WARN: common_apply_stickiness: Forcing sbd-clone away from xen2 after 1000000 failures (max=1000000)
Feb 24 20:42:07 xen2 pengine: [5536]: WARN: native_color: Resource DRAC-xen1 cannot run anywhere
Feb 24 20:42:07 xen2 pengine: [5536]: WARN: native_color: Resource DRAC-xen2 cannot run anywhere
Feb 24 20:42:07 xen2 pengine: [5536]: WARN: native_color: Resource sbd:1 cannot run anywhere
Feb 24 20:42:07 xen2 pengine: [5536]: WARN: native_color: Resource sbd:0 cannot run anywhere
Feb 24 20:42:07 xen2 pengine: [5536]: WARN: native_color: Resource dlm:1 cannot run anywhere
Feb 24 20:42:07 xen2 pengine: [5536]: WARN: native_color: Resource clvm:1 cannot run anywhere
Feb 24 20:42:07 xen2 pengine: [5536]: WARN: native_color: Resource o2cb:1 cannot run anywhere
Feb 24 20:42:07 xen2 pengine: [5536]: WARN: native_color: Resource ocfs2data:1 cannot run anywhere
Feb 24 20:42:07 xen2 pengine: [5536]: WARN: native_color: Resource ocfs2-config:1 cannot run anywhere
Feb 24 20:42:07 xen2 pengine: [5536]: notice: RecurringOp: Start recurring monitor (10s) for dlm:0 on xen2
Feb 24 20:42:07 xen2 pengine: [5536]: notice: RecurringOp: Start recurring monitor (10s) for clvm:0 on xen2
Feb 24 20:42:07 xen2 pengine: [5536]: notice: RecurringOp: Start recurring monitor (10s) for LVMforVM1 on xen2
Feb 24 20:42:07 xen2 pengine: [5536]: notice: RecurringOp: Start recurring monitor (10s) for LVMforVM2 on xen2
Feb 24 20:42:07 xen2 pengine: [5536]: notice: RecurringOp: Start recurring monitor (10s) for vm1 on xen2
Feb 24 20:42:07 xen2 pengine: [5536]: notice: RecurringOp: Start recurring monitor (120s) for o2cb:0 on xen2
Feb 24 20:42:07 xen2 pengine: [5536]: notice: RecurringOp: Start recurring monitor (20s) for ocfs2data:0 on xen2
Feb 24 20:42:07 xen2 pengine: [5536]: notice: RecurringOp: Start recurring monitor (20s) for ocfs2-config:0 on xen2
Feb 24 20:42:07 xen2 pengine: [5536]: WARN: stage6: Scheduling Node xen1 for STONITH
Feb 24 20:42:07 xen2 pengine: [5536]: info: native_start_constraints: Ordering dlm:0_start_0 after xen1 recovery
Feb 24 20:42:07 xen2 pengine: [5536]: info: native_start_constraints: Ordering clvm:0_start_0 after xen1 recovery
Feb 24 20:42:07 xen2 pengine: [5536]: info: native_start_constraints: Ordering LVMforVM1_start_0 after xen1 recovery
Feb 24 20:42:07 xen2 pengine: [5536]: info: native_start_constraints: Ordering LVMforVM2_start_0 after xen1 recovery
Feb 24 20:42:07 xen2 pengine: [5536]: info: native_start_constraints: Ordering vm1_start_0 after xen1 recovery
Feb 24 20:42:07 xen2 pengine: [5536]: info: native_start_constraints: Ordering o2cb:0_start_0 after xen1 recovery
Feb 24 20:42:07 xen2 pengine: [5536]: info: native_start_constraints: Ordering ocfs2data:0_start_0 after xen1 recovery
Feb 24 20:42:07 xen2 pengine: [5536]: info: native_start_constraints: Ordering ocfs2-config:0_start_0 after xen1 recovery
Feb 24 20:42:07 xen2 pengine: [5536]: info: find_compatible_child: Colocating clvm:0 with dlm:0 on xen2
Feb 24 20:42:07 xen2 pengine: [5536]: notice: LogActions: Leave resource DRAC-xen1 (Stopped)
Feb 24 20:42:07 xen2 pengine: [5536]: notice: LogActions: Leave resource DRAC-xen2 (Stopped)
Feb 24 20:42:07 xen2 pengine: [5536]: notice: LogActions: Stop resource sbd:0 (xen2)
Feb 24 20:42:07 xen2 pengine: [5536]: notice: LogActions: Leave resource sbd:1 (Stopped)
Feb 24 20:42:07 xen2 pengine: [5536]: notice: LogActions: Start dlm:0 (xen2)
Feb 24 20:42:07 xen2 pengine: [5536]: notice: LogActions: Leave resource dlm:1 (Stopped)
Feb 24 20:42:07 xen2 pengine: [5536]: notice: LogActions: Start clvm:0 (xen2)
Feb 24 20:42:07 xen2 pengine: [5536]: notice: LogActions: Leave resource clvm:1 (Stopped)
Feb 24 20:42:07 xen2 pengine: [5536]: notice: LogActions: Start LVMforVM1 (xen2)
Feb 24 20:42:07 xen2 pengine: [5536]: notice: LogActions: Start LVMforVM2 (xen2)
Feb 24 20:42:07 xen2 pengine: [5536]: notice: LogActions: Start vm1 (xen2)
Feb 24 20:42:07 xen2 pengine: [5536]: notice: LogActions: Start o2cb:0 (xen2)
Feb 24 20:42:07 xen2 pengine: [5536]: notice: LogActions: Leave resource o2cb:1 (Stopped)
Feb 24 20:42:07 xen2 pengine: [5536]: notice: LogActions: Start ocfs2data:0 (xen2)
Feb 24 20:42:07 xen2 pengine: [5536]: notice: LogActions: Leave resource ocfs2data:1 (Stopped)
Feb 24 20:42:07 xen2 pengine: [5536]: notice: LogActions: Start ocfs2-config:0 (xen2)
Feb 24 20:42:07 xen2 pengine: [5536]: notice: LogActions: Leave resource ocfs2-config:1 (Stopped)
Feb 24 20:42:07 xen2 crmd: [5537]: info: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
Feb 24 20:42:07 xen2 crmd: [5537]: info: unpack_graph: Unpacked transition 27: 34 actions in 34 synapses
Feb 24 20:42:07 xen2 crmd: [5537]: info: do_te_invoke: Processing graph 27 (ref=pe_calc-dc-1267040527-48) derived from /var/lib/pengine/pe-warn-30.bz2
Feb 24 20:42:07 xen2 crmd: [5537]: info: te_pseudo_action: Pseudo action 14 fired and confirmed
Feb 24 20:42:07 xen2 crmd: [5537]: info: te_pseudo_action: Pseudo action 29 fired and confirmed
Feb 24 20:42:07 xen2 crmd: [5537]: info: te_pseudo_action: Pseudo action 35 fired and confirmed
Feb 24 20:42:07 xen2 crmd: [5537]: info: te_pseudo_action: Pseudo action 41 fired and confirmed
Feb 24 20:42:07 xen2 crmd: [5537]: info: te_pseudo_action: Pseudo action 45 fired and confirmed
Feb 24 20:42:07 xen2 crmd: [5537]: info: te_fence_node: Executing reboot fencing operation (47) on xen1 (timeout=20000)
Feb 24 20:42:07 xen2 stonithd: [5532]: info: client tengine [pid: 5537] requests a STONITH operation RESET on node xen1
Feb 24 20:42:07 xen2 stonithd: [5532]: info: we can't manage xen1, broadcast request to other nodes
Feb 24 20:42:07 xen2 stonithd: [5532]: info: Broadcasting the message succeeded: require others to stonith node xen1.
Feb 24 20:42:07 xen2 crmd: [5537]: info: te_pseudo_action: Pseudo action 7 fired and confirmed
Feb 24 20:42:07 xen2 crmd: [5537]: info: te_pseudo_action: Pseudo action 11 fired and confirmed
Feb 24 20:42:07 xen2 pengine: [5536]: WARN: process_pe_message: Transition 27: WARNINGs found during PE processing. PEngine Input stored in: /var/lib/pengine/pe-warn-30.bz2
Feb 24 20:42:07 xen2 pengine: [5536]: info: process_pe_message: Configuration ERRORs found during PE processing. Please run "crm_verify -L" to identify issues.
Feb 24 20:42:27 xen2 stonithd: [5532]: ERROR: Failed to STONITH the node xen1: optype=RESET, op_result=TIMEOUT
Feb 24 20:42:27 xen2 crmd: [5537]: info: tengine_stonith_callback: call=-26, optype=1, node_name=xen1, result=2, node_list=, action=47:27:0:e6c42e56-088a-4674-b420-201efc520279
Feb 24 20:42:27 xen2 crmd: [5537]: ERROR: tengine_stonith_callback: Stonith of xen1 failed (2)... aborting transition.
Feb 24 20:42:27 xen2 crmd: [5537]: info: abort_transition_graph: tengine_stonith_callback:398 - Triggered transition abort (complete=0) : Stonith failed
Feb 24 20:42:27 xen2 crmd: [5537]: info: update_abort_priority: Abort priority upgraded from 0 to 1000000
Feb 24 20:42:27 xen2 crmd: [5537]: info: update_abort_priority: Abort action done superceeded by restart
Feb 24 20:42:27 xen2 crmd: [5537]: info: run_graph: ====================================================
Feb 24 20:42:27 xen2 crmd: [5537]: notice: run_graph: Transition 27 (Complete=8, Pending=0, Fired=0, Skipped=20, Incomplete=6, Source=/var/lib/pengine/pe-warn-30.bz2): Stopped
Feb 24 20:42:27 xen2 crmd: [5537]: info: te_graph_trigger: Transition 27 is now complete
Feb 24 20:42:27 xen2 crmd: [5537]: info: do_state_transition: State transition S_TRANSITION_ENGINE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=notify_crmd ]
Feb 24 20:42:27 xen2 crmd: [5537]: info: do_state_transition: All 1 cluster nodes are eligible to run resources.
Feb 24 20:42:27 xen2 crmd: [5537]: info: do_pe_invoke: Query 69: Requesting the current CIB: S_POLICY_ENGINE
Feb 24 20:42:27 xen2 crmd: [5537]: info: do_pe_invoke_callback: Invoking the PE: ref=pe_calc-dc-1267040547-49, seq=124, quorate=0
Feb 24 20:42:27 xen2 pengine: [5536]: notice: unpack_config: On loss of CCM Quorum: Ignore
Feb 24 20:42:27 xen2 pengine: [5536]: info: unpack_config: Node scores: 'red' = -INFINITY, 'yellow' = 0, 'green' = 0
Feb 24 20:42:27 xen2 pengine: [5536]: info: determine_online_status: Node xen2 is online
Feb 24 20:42:27 xen2 pengine: [5536]: info: unpack_rsc_op: sbd:0_start_0 on xen2 returned 1 (unknown error) instead of the expected value: 0 (ok)
Feb 24 20:42:27 xen2 pengine: [5536]: WARN: unpack_rsc_op: Processing failed op sbd:0_start_0 on xen2: unknown error
Feb 24 20:42:27 xen2 pengine: [5536]: ERROR: unpack_simple_rsc_order: Constraint DlmBeforeO2cb: no resource found for LHS (dlm)
Feb 24 20:42:27 xen2 pengine: [5536]: notice: native_print: DRAC-xen1 (stonith:external/drac5): Stopped
Feb 24 20:42:27 xen2 pengine: [5536]: notice: native_print: DRAC-xen2 (stonith:external/drac5): Stopped
Feb 24 20:42:27 xen2 pengine: [5536]: notice: clone_print: Clone Set: sbd-clone
Feb 24 20:42:27 xen2 pengine: [5536]: notice: native_print: sbd:0 (stonith:external/sbd): Started xen2 FAILED
Feb 24 20:42:27 xen2 pengine: [5536]: notice: print_list: Stopped: [ sbd:1 ]
Feb 24 20:42:27 xen2 pengine: [5536]: notice: clone_print: Clone Set: dlm-clone
Feb 24 20:42:27 xen2 pengine: [5536]: notice: print_list: Stopped: [ dlm:0 dlm:1 ]
Feb 24 20:42:27 xen2 pengine: [5536]: notice: clone_print: Clone Set: clvm-clone
Feb 24 20:42:27 xen2 pengine: [5536]: notice: print_list: Stopped: [ clvm:0 clvm:1 ]
Feb 24 20:42:27 xen2 pengine: [5536]: notice: native_print: LVMforVM1 (ocf::heartbeat:LVM): Stopped
Feb 24 20:42:27 xen2 pengine: [5536]: notice: native_print: LVMforVM2 (ocf::heartbeat:LVM): Stopped
Feb 24 20:42:27 xen2 pengine: [5536]: notice: native_print: vm1 (ocf::heartbeat:Xen): Stopped
Feb 24 20:42:27 xen2 pengine: [5536]: notice: clone_print: Clone Set: o2cb-clone (unique)
Feb 24 20:42:27 xen2 pengine: [5536]: notice: native_print: o2cb:0 (ocf::ocfs2:o2cb): Stopped
Feb 24 20:42:27 xen2 pengine: [5536]: notice: native_print: o2cb:1 (ocf::ocfs2:o2cb): Stopped
Feb 24 20:42:27 xen2 pengine: [5536]: notice: clone_print: Clone Set: ocfs2-data
Feb 24 20:42:27 xen2 pengine: [5536]: notice: print_list: Stopped: [ ocfs2data:0 ocfs2data:1 ]
Feb 24 20:42:27 xen2 pengine: [5536]: notice: clone_print: Clone Set: ocfs2-config-clone
Feb 24 20:42:27 xen2 pengine: [5536]: notice: print_list: Stopped: [ ocfs2-config:0 ocfs2-config:1 ]
Feb 24 20:42:27 xen2 pengine: [5536]: info: get_failcount: sbd-clone has failed 1000000 times on xen2
Feb 24 20:42:27 xen2 pengine: [5536]: WARN: common_apply_stickiness: Forcing sbd-clone away from xen2 after 1000000 failures (max=1000000)
Feb 24 20:42:27 xen2 pengine: [5536]: WARN: native_color: Resource DRAC-xen1 cannot run anywhere
Feb 24 20:42:27 xen2 pengine: [5536]: WARN: native_color: Resource DRAC-xen2 cannot run anywhere
Feb 24 20:42:27 xen2 pengine: [5536]: WARN: native_color: Resource sbd:1 cannot run anywhere
Feb 24 20:42:27 xen2 pengine: [5536]: WARN: native_color: Resource sbd:0 cannot run anywhere
Feb 24 20:42:27 xen2 pengine: [5536]: WARN: native_color: Resource dlm:1 cannot run anywhere
Feb 24 20:42:27 xen2 pengine: [5536]: WARN: native_color: Resource clvm:1 cannot run anywhere
Feb 24 20:42:27 xen2 pengine: [5536]: WARN: native_color: Resource o2cb:1 cannot run anywhere
Feb 24 20:42:27 xen2 pengine: [5536]: WARN: native_color: Resource ocfs2data:1 cannot run anywhere
Feb 24 20:42:27 xen2 pengine: [5536]: WARN: native_color: Resource ocfs2-config:1 cannot run anywhere
Feb 24 20:42:27 xen2 pengine: [5536]: notice: RecurringOp: Start recurring monitor (10s) for dlm:0 on xen2
Feb 24 20:42:27 xen2 pengine: [5536]: notice: RecurringOp: Start recurring monitor (10s) for clvm:0 on xen2
Feb 24 20:42:27 xen2 pengine: [5536]: notice: RecurringOp: Start recurring monitor (10s) for LVMforVM1 on xen2
Feb 24 20:42:27 xen2 pengine: [5536]: notice: RecurringOp: Start recurring monitor (10s) for LVMforVM2 on xen2
Feb 24 20:42:27 xen2 pengine: [5536]: notice: RecurringOp: Start recurring monitor (10s) for vm1 on xen2
Feb 24 20:42:27 xen2 pengine: [5536]: notice: RecurringOp: Start recurring monitor (120s) for o2cb:0 on xen2
Feb 24 20:42:27 xen2 pengine: [5536]: notice: RecurringOp: Start recurring monitor (20s) for ocfs2data:0 on xen2
Feb 24 20:42:27 xen2 pengine: [5536]: notice: RecurringOp: Start recurring monitor (20s) for ocfs2-config:0 on xen2
Feb 24 20:42:27 xen2 pengine: [5536]: WARN: stage6: Scheduling Node xen1 for STONITH
Feb 24 20:42:27 xen2 pengine: [5536]: info: native_start_constraints: Ordering dlm:0_start_0 after xen1 recovery
Feb 24 20:42:27 xen2 pengine: [5536]: info: native_start_constraints: Ordering clvm:0_start_0 after xen1 recovery
Feb 24 20:42:27 xen2 pengine: [5536]: info: native_start_constraints: Ordering LVMforVM1_start_0 after xen1 recovery
Feb 24 20:42:27 xen2 pengine: [5536]: info: native_start_constraints: Ordering LVMforVM2_start_0 after xen1 recovery
Feb 24 20:42:27 xen2 pengine: [5536]: info: native_start_constraints: Ordering vm1_start_0 after xen1 recovery
Feb 24 20:42:27 xen2 pengine: [5536]: info: native_start_constraints: Ordering o2cb:0_start_0 after xen1 recovery
Feb 24 20:42:27 xen2 pengine: [5536]: info: native_start_constraints: Ordering ocfs2data:0_start_0 after xen1 recovery
Feb 24 20:42:27 xen2 pengine: [5536]: info: native_start_constraints: Ordering ocfs2-config:0_start_0 after xen1 recovery
Feb 24 20:42:27 xen2 pengine: [5536]: info: find_compatible_child: Colocating clvm:0 with dlm:0 on xen2
Feb 24 20:42:27 xen2 pengine: [5536]: notice: LogActions: Leave resource DRAC-xen1 (Stopped)
Feb 24 20:42:27 xen2 pengine: [5536]: notice: LogActions: Leave resource DRAC-xen2 (Stopped)
Feb 24 20:42:27 xen2 pengine: [5536]: notice: LogActions: Stop resource sbd:0 (xen2)
Feb 24 20:42:27 xen2 pengine: [5536]: notice: LogActions: Leave resource sbd:1 (Stopped)
Feb 24 20:42:27 xen2 pengine: [5536]: notice: LogActions: Start dlm:0 (xen2)
Feb 24 20:42:27 xen2 pengine: [5536]: notice: LogActions: Leave resource dlm:1 (Stopped)
Feb 24 20:42:27 xen2 pengine: [5536]: notice: LogActions: Start clvm:0 (xen2)
Feb 24 20:42:27 xen2 pengine: [5536]: notice: LogActions: Leave resource clvm:1 (Stopped)
Feb 24 20:42:27 xen2 pengine: [5536]: notice: LogActions: Start LVMforVM1 (xen2)
Feb 24 20:42:27 xen2 pengine: [5536]: notice: LogActions: Start LVMforVM2 (xen2)
Feb 24 20:42:27 xen2 pengine: [5536]: notice: LogActions: Start vm1 (xen2)
Feb 24 20:42:27 xen2 pengine: [5536]: notice: LogActions: Start o2cb:0 (xen2)
Feb 24 20:42:27 xen2 pengine: [5536]: notice: LogActions: Leave resource o2cb:1 (Stopped)
Feb 24 20:42:27 xen2 pengine: [5536]: notice: LogActions: Start ocfs2data:0 (xen2)
Feb 24 20:42:27 xen2 pengine: [5536]: notice: LogActions: Leave resource ocfs2data:1 (Stopped)
Feb 24 20:42:27 xen2 pengine: [5536]: notice: LogActions: Start ocfs2-config:0 (xen2)
Feb 24 20:42:27 xen2 pengine: [5536]: notice: LogActions: Leave resource ocfs2-config:1 (Stopped)
Feb 24 20:42:27 xen2 crmd: [5537]: info: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
Feb 24 20:42:27 xen2 crmd: [5537]: info: unpack_graph: Unpacked transition 28: 34 actions in 34 synapses
Feb 24 20:42:27 xen2 crmd: [5537]: info: do_te_invoke: Processing graph 28 (ref=pe_calc-dc-1267040547-49) derived from /var/lib/pengine/pe-warn-31.bz2
Feb 24 20:42:27 xen2 crmd: [5537]: info: te_pseudo_action: Pseudo action 14 fired and confirmed
Feb 24 20:42:27 xen2 crmd: [5537]: info: te_pseudo_action: Pseudo action 29 fired and confirmed
Feb 24 20:42:27 xen2 crmd: [5537]: info: te_pseudo_action: Pseudo action 35 fired and confirmed
Feb 24 20:42:27 xen2 crmd: [5537]: info: te_pseudo_action: Pseudo action 41 fired and confirmed
Feb 24 20:42:27 xen2 crmd: [5537]: info: te_pseudo_action: Pseudo action 45 fired and confirmed
Feb 24 20:42:27 xen2 crmd: [5537]: info: te_fence_node: Executing reboot fencing operation (47) on xen1 (timeout=20000)
Feb 24 20:42:27 xen2 stonithd: [5532]: info: client tengine [pid: 5537] requests a STONITH operation RESET on node xen1
Feb 24 20:42:27 xen2 stonithd: [5532]: info: we can't manage xen1, broadcast request to other nodes
Feb 24 20:42:27 xen2 stonithd: [5532]: info: Broadcasting the message succeeded: require others to stonith node xen1.
Feb 24 20:42:27 xen2 crmd: [5537]: info: te_pseudo_action: Pseudo action 7 fired and confirmed
Feb 24 20:42:27 xen2 crmd: [5537]: info: te_pseudo_action: Pseudo action 11 fired and confirmed
Feb 24 20:42:28 xen2 pengine: [5536]: WARN: process_pe_message: Transition 28: WARNINGs found during PE processing. PEngine Input stored in: /var/lib/pengine/pe-warn-31.bz2
Feb 24 20:42:28 xen2 pengine: [5536]: info: process_pe_message: Configuration ERRORs found during PE processing. Please run "crm_verify -L" to identify issues.
Feb 24 20:42:47 xen2 stonithd: [5532]: ERROR: Failed to STONITH the node xen1: optype=RESET, op_result=TIMEOUT
Feb 24 20:42:47 xen2 crmd: [5537]: info: tengine_stonith_callback: call=-27, optype=1, node_name=xen1, result=2, node_list=, action=47:28:0:e6c42e56-088a-4674-b420-201efc520279
Feb 24 20:42:47 xen2 crmd: [5537]: ERROR: tengine_stonith_callback: Stonith of xen1 failed (2)... aborting transition.
Feb 24 20:42:47 xen2 crmd: [5537]: info: abort_transition_graph: tengine_stonith_callback:398 - Triggered transition abort (complete=0) : Stonith failed
Feb 24 20:42:47 xen2 crmd: [5537]: info: update_abort_priority: Abort priority upgraded from 0 to 1000000
Feb 24 20:42:47 xen2 crmd: [5537]: info: update_abort_priority: Abort action done superceeded by restart
Feb 24 20:42:47 xen2 crmd: [5537]: info: run_graph: ====================================================
Feb 24 20:42:47 xen2 crmd: [5537]: notice: run_graph: Transition 28 (Complete=8, Pending=0, Fired=0, Skipped=20, Incomplete=6, Source=/var/lib/pengine/pe-warn-31.bz2): Stopped
Feb 24 20:42:47 xen2 crmd: [5537]: info: te_graph_trigger: Transition 28 is now complete
Feb 24 20:42:47 xen2 crmd: [5537]: info: do_state_transition: State transition S_TRANSITION_ENGINE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=notify_crmd ]
Feb 24 20:42:47 xen2 crmd: [5537]: info: do_state_transition: All 1 cluster nodes are eligible to run resources.
Feb 24 20:42:47 xen2 crmd: [5537]: info: do_pe_invoke: Query 70: Requesting the current CIB: S_POLICY_ENGINE
Feb 24 20:42:48 xen2 crmd: [5537]: info: do_pe_invoke_callback: Invoking the PE: ref=pe_calc-dc-1267040567-50, seq=124, quorate=0
Feb 24 20:42:48 xen2 pengine: [5536]: notice: unpack_config: On loss of CCM Quorum: Ignore
Feb 24 20:42:48 xen2 pengine: [5536]: info: unpack_config: Node scores: 'red' = -INFINITY, 'yellow' = 0, 'green' = 0
Feb 24 20:42:48 xen2 pengine: [5536]: info: determine_online_status: Node xen2 is online
Feb 24 20:42:48 xen2 pengine: [5536]: info: unpack_rsc_op: sbd:0_start_0 on xen2 returned 1 (unknown error) instead of the expected value: 0 (ok)
Feb 24 20:42:48 xen2 pengine: [5536]: WARN: unpack_rsc_op: Processing failed op sbd:0_start_0 on xen2: unknown error
Feb 24 20:42:48 xen2 pengine: [5536]: ERROR: unpack_simple_rsc_order: Constraint DlmBeforeO2cb: no resource found for LHS (dlm)
Feb 24 20:42:48 xen2 pengine: [5536]: notice: native_print: DRAC-xen1 (stonith:external/drac5): Stopped
Feb 24 20:42:48 xen2 pengine: [5536]: notice: native_print: DRAC-xen2 (stonith:external/drac5): Stopped
Feb 24 20:42:48 xen2 pengine: [5536]: notice: clone_print: Clone Set: sbd-clone
Feb 24 20:42:48 xen2 pengine: [5536]: notice: native_print: sbd:0 (stonith:external/sbd): Started xen2 FAILED
Feb 24 20:42:48 xen2 pengine: [5536]: notice: print_list: Stopped: [ sbd:1 ]
Feb 24 20:42:48 xen2 pengine: [5536]: notice: clone_print: Clone Set: dlm-clone
Feb 24 20:42:48 xen2 pengine: [5536]: notice: print_list: Stopped: [ dlm:0 dlm:1 ]
Feb 24 20:42:48 xen2 pengine: [5536]: notice: clone_print: Clone Set: clvm-clone
Feb 24 20:42:48 xen2 pengine: [5536]: notice: print_list: Stopped: [ clvm:0 clvm:1 ]
Feb 24 20:42:48 xen2 pengine: [5536]: notice: native_print: LVMforVM1 (ocf::heartbeat:LVM): Stopped
Feb 24 20:42:48 xen2 pengine: [5536]: notice: native_print: LVMforVM2 (ocf::heartbeat:LVM): Stopped
Feb 24 20:42:48 xen2 pengine: [5536]: notice: native_print: vm1 (ocf::heartbeat:Xen): Stopped
Feb 24 20:42:48 xen2 pengine: [5536]: notice: clone_print: Clone Set: o2cb-clone (unique)
Feb 24 20:42:48 xen2 pengine: [5536]: notice: native_print: o2cb:0 (ocf::ocfs2:o2cb): Stopped
Feb 24 20:42:48 xen2 pengine: [5536]: notice: native_print: o2cb:1 (ocf::ocfs2:o2cb): Stopped
Feb 24 20:42:48 xen2 pengine: [5536]: notice: clone_print: Clone Set: ocfs2-data
Feb 24 20:42:48 xen2 pengine: [5536]: notice: print_list: Stopped: [ ocfs2data:0 ocfs2data:1 ]
Feb 24 20:42:48 xen2 pengine: [5536]: notice: clone_print: Clone Set: ocfs2-config-clone
Feb 24 20:42:48 xen2 pengine: [5536]: notice: print_list: Stopped: [ ocfs2-config:0 ocfs2-config:1 ]
Feb 24 20:42:48 xen2 pengine: [5536]: info: get_failcount: sbd-clone has failed 1000000 times on xen2
Feb 24 20:42:48 xen2 pengine: [5536]: WARN: common_apply_stickiness: Forcing sbd-clone away from xen2 after 1000000 failures (max=1000000)
Feb 24 20:42:48 xen2 pengine: [5536]: WARN: native_color: Resource DRAC-xen1 cannot run anywhere
Feb 24 20:42:48 xen2 pengine: [5536]: WARN: native_color: Resource DRAC-xen2 cannot run anywhere
Feb 24 20:42:48 xen2 pengine: [5536]: WARN: native_color: Resource sbd:1 cannot run anywhere
Feb 24 20:42:48 xen2 pengine: [5536]: WARN: native_color: Resource sbd:0 cannot run anywhere
Feb 24 20:42:48 xen2 pengine: [5536]: WARN: native_color: Resource dlm:1 cannot run anywhere
Feb 24 20:42:48 xen2 pengine: [5536]: WARN: native_color: Resource clvm:1 cannot run anywhere
Feb 24 20:42:48 xen2 pengine: [5536]: WARN: native_color: Resource o2cb:1 cannot run anywhere
Feb 24 20:42:48 xen2 pengine: [5536]: WARN: native_color: Resource ocfs2data:1 cannot run anywhere
Feb 24 20:42:48 xen2 pengine: [5536]: WARN: native_color: Resource ocfs2-config:1 cannot run anywhere
Feb 24 20:42:48 xen2 pengine: [5536]: notice: RecurringOp: Start recurring monitor (10s) for dlm:0 on xen2
Feb 24 20:42:48 xen2 pengine: [5536]: notice: RecurringOp: Start recurring monitor (10s) for clvm:0 on xen2
Feb 24 20:42:48 xen2 pengine: [5536]: notice: RecurringOp: Start recurring monitor (10s) for LVMforVM1 on xen2
Feb 24 20:42:48 xen2 pengine: [5536]: notice: RecurringOp: Start recurring monitor (10s) for LVMforVM2 on xen2
Feb 24 20:42:48 xen2 pengine: [5536]: notice: RecurringOp: Start recurring monitor (10s) for vm1 on xen2
Feb 24 20:42:48 xen2 pengine: [5536]: notice: RecurringOp: Start recurring monitor (120s) for o2cb:0 on xen2
Feb 24 20:42:48 xen2 pengine: [5536]: notice: RecurringOp: Start recurring monitor (20s) for ocfs2data:0 on xen2
Feb 24 20:42:48 xen2 pengine: [5536]: notice: RecurringOp: Start recurring monitor (20s) for ocfs2-config:0 on xen2
Feb 24 20:42:48 xen2 pengine: [5536]: WARN: stage6: Scheduling Node xen1 for STONITH
Feb 24 20:42:48 xen2 pengine: [5536]: info: native_start_constraints: Ordering dlm:0_start_0 after xen1 recovery
Feb 24 20:42:48 xen2 pengine: [5536]: info: native_start_constraints: Ordering clvm:0_start_0 after xen1 recovery
Feb 24 20:42:48 xen2 pengine: [5536]: info: native_start_constraints: Ordering LVMforVM1_start_0 after xen1 recovery
Feb 24 20:42:48 xen2 pengine: [5536]: info: native_start_constraints: Ordering LVMforVM2_start_0 after xen1 recovery
Feb 24 20:42:48 xen2 pengine: [5536]: info: native_start_constraints: Ordering vm1_start_0 after xen1 recovery
Feb 24 20:42:48 xen2 pengine: [5536]: info: native_start_constraints: Ordering o2cb:0_start_0 after xen1 recovery
Feb 24 20:42:48 xen2 pengine: [5536]: info: native_start_constraints: Ordering ocfs2data:0_start_0 after xen1 recovery
Feb 24 20:42:48 xen2 pengine: [5536]: info: native_start_constraints: Ordering ocfs2-config:0_start_0 after xen1 recovery
Feb 24 20:42:48 xen2 pengine: [5536]: info: find_compatible_child: Colocating clvm:0 with dlm:0 on xen2
Feb 24 20:42:48 xen2 pengine: [5536]: notice: LogActions: Leave resource DRAC-xen1 (Stopped)
Feb 24 20:42:48 xen2 pengine: [5536]: notice: LogActions: Leave resource DRAC-xen2 (Stopped)
Feb 24 20:42:48 xen2 pengine: [5536]: notice: LogActions: Stop resource sbd:0 (xen2)
Feb 24 20:42:48 xen2 pengine: [5536]: notice: LogActions: Leave resource sbd:1 (Stopped)
Feb 24 20:42:48 xen2 pengine: [5536]: notice: LogActions: Start dlm:0 (xen2)
Feb 24 20:42:48 xen2 pengine: [5536]: notice: LogActions: Leave resource dlm:1 (Stopped)
Feb 24 20:42:48 xen2 pengine: [5536]: notice: LogActions: Start clvm:0 (xen2)
Feb 24 20:42:48 xen2 pengine: [5536]: notice: LogActions: Leave resource clvm:1 (Stopped)
Feb 24 20:42:48 xen2 pengine: [5536]: notice: LogActions: Start LVMforVM1 (xen2)
Feb 24 20:42:48 xen2 pengine: [5536]: notice: LogActions: Start LVMforVM2 (xen2)
Feb 24 20:42:48 xen2 pengine: [5536]: notice: LogActions: Start vm1 (xen2)
Feb 24 20:42:48 xen2 pengine: [5536]: notice: LogActions: Start o2cb:0 (xen2)
Feb 24 20:42:48 xen2 pengine: [5536]: notice: LogActions: Leave resource o2cb:1 (Stopped)
Feb 24 20:42:48 xen2 pengine: [5536]: notice: LogActions: Start ocfs2data:0 (xen2)
Feb 24 20:42:48 xen2 pengine: [5536]: notice: LogActions: Leave resource ocfs2data:1 (Stopped)
Feb 24 20:42:48 xen2 pengine: [5536]: notice: LogActions: Start ocfs2-config:0 (xen2)
Feb 24 20:42:48 xen2 pengine: [5536]: notice: LogActions: Leave resource ocfs2-config:1 (Stopped)
Feb 24 20:42:48 xen2 crmd: [5537]: info: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
Feb 24 20:42:48 xen2 crmd: [5537]: info: unpack_graph: Unpacked transition 29: 34 actions in 34 synapses
Feb 24 20:42:48 xen2 crmd: [5537]: info: do_te_invoke: Processing graph 29 (ref=pe_calc-dc-1267040567-50) derived from /var/lib/pengine/pe-warn-32.bz2
Feb 24 20:42:48 xen2 crmd: [5537]: info: te_pseudo_action: Pseudo action 14 fired and confirmed
Feb 24 20:42:48 xen2 crmd: [5537]: info: te_pseudo_action: Pseudo action 29 fired and confirmed
Feb 24 20:42:48 xen2 crmd: [5537]: info: te_pseudo_action: Pseudo action 35 fired and confirmed
Feb 24 20:42:48 xen2 crmd: [5537]: info: te_pseudo_action: Pseudo action 41 fired and confirmed
Feb 24 20:42:48 xen2 crmd: [5537]: info: te_pseudo_action: Pseudo action 45 fired and confirmed
Feb 24 20:42:48 xen2 crmd: [5537]: info: te_fence_node: Executing reboot fencing operation (47) on xen1 (timeout=20000)
Feb 24 20:42:48 xen2 stonithd: [5532]: info: client tengine [pid: 5537] requests a STONITH operation RESET on node xen1
Feb 24 20:42:48 xen2 stonithd: [5532]: info: we can't manage xen1, broadcast request to other nodes
Feb 24 20:42:48 xen2 stonithd: [5532]: info: Broadcasting the message succeeded: require others to stonith node xen1.
Feb 24 20:42:48 xen2 crmd: [5537]: info: te_pseudo_action: Pseudo action 7 fired and confirmed
Feb 24 20:42:48 xen2 crmd: [5537]: info: te_pseudo_action: Pseudo action 11 fired and confirmed
Feb 24 20:42:48 xen2 pengine: [5536]: WARN: process_pe_message: Transition 29: WARNINGs found during PE processing. PEngine Input stored in: /var/lib/pengine/pe-warn-32.bz2
Feb 24 20:42:48 xen2 pengine: [5536]: info: process_pe_message: Configuration ERRORs found during PE processing. Please run "crm_verify -L" to identify issues.
Feb 24 20:42:50 xen2 cib: [5533]: info: cib_stats: Processed 82 operations (6219.00us average, 0% utilization) in the last 10min
Feb 24 20:43:08 xen2 stonithd: [5532]: ERROR: Failed to STONITH the node xen1: optype=RESET, op_result=TIMEOUT
Feb 24 20:43:08 xen2 crmd: [5537]: info: tengine_stonith_callback: call=-28, optype=1, node_name=xen1, result=2, node_list=, action=47:29:0:e6c42e56-088a-4674-b420-201efc520279
Feb 24 20:43:08 xen2 crmd: [5537]: ERROR: tengine_stonith_callback: Stonith of xen1 failed (2)... aborting transition.
Feb 24 20:43:08 xen2 crmd: [5537]: info: abort_transition_graph: tengine_stonith_callback:398 - Triggered transition abort (complete=0) : Stonith failed
Feb 24 20:43:08 xen2 crmd: [5537]: info: update_abort_priority: Abort priority upgraded from 0 to 1000000
Feb 24 20:43:08 xen2 crmd: [5537]: info: update_abort_priority: Abort action done superceeded by restart
Feb 24 20:43:08 xen2 crmd: [5537]: info: run_graph: ====================================================
Feb 24 20:43:08 xen2 crmd: [5537]: notice: run_graph: Transition 29 (Complete=8, Pending=0, Fired=0, Skipped=20, Incomplete=6, Source=/var/lib/pengine/pe-warn-32.bz2): Stopped
Feb 24 20:43:08 xen2 crmd: [5537]: info: te_graph_trigger: Transition 29 is now complete
Feb 24 20:43:08 xen2 crmd: [5537]: info: do_state_transition: State transition S_TRANSITION_ENGINE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=notify_crmd ]
Feb 24 20:43:08 xen2 crmd: [5537]: info: do_state_transition: All 1 cluster nodes are eligible to run resources.
Feb 24 20:43:08 xen2 crmd: [5537]: info: do_pe_invoke: Query 71: Requesting the current CIB: S_POLICY_ENGINE
Feb 24 20:43:08 xen2 crmd: [5537]: info: do_pe_invoke_callback: Invoking the PE: ref=pe_calc-dc-1267040588-51, seq=124, quorate=0
Feb 24 20:43:08 xen2 pengine: [5536]: notice: unpack_config: On loss of CCM Quorum: Ignore
Feb 24 20:43:08 xen2 pengine: [5536]: info: unpack_config: Node scores: 'red' = -INFINITY, 'yellow' = 0, 'green' = 0
Feb 24 20:43:08 xen2 pengine: [5536]: info: determine_online_status: Node xen2 is online
Feb 24 20:43:08 xen2 pengine: [5536]: info: unpack_rsc_op: sbd:0_start_0 on xen2 returned 1 (unknown error) instead of the expected value: 0 (ok)
Feb 24 20:43:08 xen2 pengine: [5536]: WARN: unpack_rsc_op: Processing failed op sbd:0_start_0 on xen2: unknown error
Feb 24 20:43:08 xen2 pengine: [5536]: ERROR: unpack_simple_rsc_order: Constraint DlmBeforeO2cb: no resource found for LHS (dlm)
Feb 24 20:43:08 xen2 pengine: [5536]: notice: native_print: DRAC-xen1 (stonith:external/drac5): Stopped
Feb 24 20:43:08 xen2 pengine: [5536]: notice: native_print: DRAC-xen2 (stonith:external/drac5): Stopped
Feb 24 20:43:08 xen2 pengine: [5536]: notice: clone_print: Clone Set: sbd-clone
Feb 24 20:43:08 xen2 pengine: [5536]: notice: native_print: sbd:0 (stonith:external/sbd): Started xen2 FAILED
Feb 24 20:43:08 xen2 pengine: [5536]: notice: print_list: Stopped: [ sbd:1 ]
Feb 24 20:43:08 xen2 pengine: [5536]: notice: clone_print: Clone Set: dlm-clone
Feb 24 20:43:08 xen2 pengine: [5536]: notice: print_list: Stopped: [ dlm:0 dlm:1 ]
Feb 24 20:43:08 xen2 pengine: [5536]: notice: clone_print: Clone Set: clvm-clone
Feb 24 20:43:08 xen2 pengine: [5536]: notice: print_list: Stopped: [ clvm:0 clvm:1 ]
Feb 24 20:43:08 xen2 pengine: [5536]: notice: native_print: LVMforVM1 (ocf::heartbeat:LVM): Stopped
Feb 24 20:43:08 xen2 pengine: [5536]: notice: native_print: LVMforVM2 (ocf::heartbeat:LVM): Stopped
Feb 24 20:43:08 xen2 pengine: [5536]: notice: native_print: vm1 (ocf::heartbeat:Xen): Stopped
Feb 24 20:43:08 xen2 pengine: [5536]: notice: clone_print: Clone Set: o2cb-clone (unique)
Feb 24 20:43:08 xen2 pengine: [5536]: notice: native_print: o2cb:0 (ocf::ocfs2:o2cb): Stopped
Feb 24 20:43:08 xen2 pengine: [5536]: notice: native_print: o2cb:1 (ocf::ocfs2:o2cb): Stopped
Feb 24 20:43:08 xen2 pengine: [5536]: notice: clone_print: Clone Set: ocfs2-data
Feb 24 20:43:08 xen2 pengine: [5536]: notice: print_list: Stopped: [ ocfs2data:0 ocfs2data:1 ]
Feb 24 20:43:08 xen2 pengine: [5536]: notice: clone_print: Clone Set: ocfs2-config-clone
Feb 24 20:43:08 xen2 pengine: [5536]: notice: print_list: Stopped: [ ocfs2-config:0 ocfs2-config:1 ]
Feb 24 20:43:08 xen2 pengine: [5536]: info: get_failcount: sbd-clone has failed 1000000 times on xen2
Feb 24 20:43:08 xen2 pengine: [5536]: WARN: common_apply_stickiness: Forcing sbd-clone away from xen2 after 1000000 failures (max=1000000)
Feb 24 20:43:08 xen2 pengine: [5536]: WARN: native_color: Resource DRAC-xen1 cannot run anywhere
Feb 24 20:43:08 xen2 pengine: [5536]: WARN: native_color: Resource DRAC-xen2 cannot run anywhere
Feb 24 20:43:08 xen2 pengine: [5536]: WARN: native_color: Resource sbd:1 cannot run anywhere
Feb 24 20:43:08 xen2 pengine: [5536]: WARN: native_color: Resource sbd:0 cannot run anywhere
Feb 24 20:43:08 xen2 pengine: [5536]: WARN: native_color: Resource dlm:1 cannot run anywhere
Feb 24 20:43:08 xen2 pengine: [5536]: WARN: native_color: Resource clvm:1 cannot run anywhere
Feb 24 20:43:08 xen2 pengine: [5536]: WARN: native_color: Resource o2cb:1 cannot run anywhere
Feb 24 20:43:08 xen2 pengine: [5536]: WARN: native_color: Resource ocfs2data:1 cannot run anywhere
Feb 24 20:43:08 xen2 pengine: [5536]: WARN: native_color: Resource ocfs2-config:1 cannot run anywhere
Feb 24 20:43:08 xen2 pengine: [5536]: notice: RecurringOp: Start recurring monitor (10s) for dlm:0 on xen2
Feb 24 20:43:08 xen2 pengine: [5536]: notice: RecurringOp: Start recurring monitor (10s) for clvm:0 on xen2
Feb 24 20:43:08 xen2 pengine: [5536]: notice: RecurringOp: Start recurring monitor (10s) for LVMforVM1 on xen2
Feb 24 20:43:08 xen2 pengine: [5536]: notice: RecurringOp: Start recurring monitor (10s) for LVMforVM2 on xen2
Feb 24 20:43:08 xen2 pengine: [5536]: notice: RecurringOp: Start recurring monitor (10s) for vm1 on xen2
Feb 24 20:43:08 xen2 pengine: [5536]: notice: RecurringOp: Start recurring monitor (120s) for o2cb:0 on xen2
Feb 24 20:43:08 xen2 pengine: [5536]: notice: RecurringOp: Start recurring monitor (20s) for ocfs2data:0 on xen2
Feb 24 20:43:08 xen2 pengine: [5536]: notice: RecurringOp: Start recurring monitor (20s) for ocfs2-config:0 on xen2
Feb 24 20:43:08 xen2 pengine: [5536]: WARN: stage6: Scheduling Node xen1 for STONITH
Feb 24 20:43:08 xen2 pengine: [5536]: info: native_start_constraints: Ordering dlm:0_start_0 after xen1 recovery
Feb 24 20:43:08 xen2 pengine: [5536]: info: native_start_constraints: Ordering clvm:0_start_0 after xen1 recovery
Feb 24 20:43:08 xen2 pengine: [5536]: info: native_start_constraints: Ordering LVMforVM1_start_0 after xen1 recovery
Feb 24 20:43:08 xen2 pengine: [5536]: info: native_start_constraints: Ordering LVMforVM2_start_0 after xen1 recovery
Feb 24 20:43:08 xen2 pengine: [5536]: info: native_start_constraints: Ordering vm1_start_0 after xen1 recovery
Feb 24 20:43:08 xen2 pengine: [5536]: info: native_start_constraints: Ordering o2cb:0_start_0 after xen1 recovery
Feb 24 20:43:08 xen2 pengine: [5536]: info: native_start_constraints: Ordering ocfs2data:0_start_0 after xen1 recovery
Feb 24 20:43:08 xen2 pengine: [5536]: info: native_start_constraints: Ordering ocfs2-config:0_start_0 after xen1 recovery
Feb 24 20:43:08 xen2 pengine: [5536]: info: find_compatible_child: Colocating clvm:0 with dlm:0 on xen2
Feb 24 20:43:08 xen2 pengine: [5536]: notice: LogActions: Leave resource DRAC-xen1 (Stopped)
Feb 24 20:43:08 xen2 pengine: [5536]: notice: LogActions: Leave resource DRAC-xen2 (Stopped)
Feb 24 20:43:08 xen2 pengine: [5536]: notice: LogActions: Stop resource sbd:0 (xen2)
Feb 24 20:43:08 xen2 pengine: [5536]: notice: LogActions: Leave resource sbd:1 (Stopped)
Feb 24 20:43:08 xen2 pengine: [5536]: notice: LogActions: Start dlm:0 (xen2)
Feb 24 20:43:08 xen2 pengine: [5536]: notice: LogActions: Leave resource dlm:1 (Stopped)
Feb 24 20:43:08 xen2 pengine: [5536]: notice: LogActions: Start clvm:0 (xen2)
Feb 24 20:43:08 xen2 pengine: [5536]: notice: LogActions: Leave resource clvm:1 (Stopped)
Feb 24 20:43:08 xen2 pengine: [5536]: notice: LogActions: Start LVMforVM1 (xen2)
Feb 24 20:43:08 xen2 pengine: [5536]: notice: LogActions: Start LVMforVM2 (xen2)
Feb 24 20:43:08 xen2 pengine: [5536]: notice: LogActions: Start vm1 (xen2)
Feb 24 20:43:08 xen2 pengine: [5536]: notice: LogActions: Start o2cb:0 (xen2)
Feb 24 20:43:08 xen2 pengine: [5536]: notice: LogActions: Leave resource o2cb:1 (Stopped)
Feb 24 20:43:08 xen2 pengine: [5536]: notice: LogActions: Start ocfs2data:0 (xen2)
Feb 24 20:43:08 xen2 pengine: [5536]: notice: LogActions: Leave resource ocfs2data:1 (Stopped)
Feb 24 20:43:08 xen2 pengine: [5536]: notice: LogActions: Start ocfs2-config:0 (xen2)
Feb 24 20:43:08 xen2 pengine: [5536]: notice: LogActions: Leave resource ocfs2-config:1 (Stopped)
Feb 24 20:43:08 xen2 crmd: [5537]: info: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
Feb 24 20:43:08 xen2 crmd: [5537]: info: unpack_graph: Unpacked transition 30: 34 actions in 34 synapses
Feb 24 20:43:08 xen2 crmd: [5537]: info: do_te_invoke: Processing graph 30 (ref=pe_calc-dc-1267040588-51) derived from /var/lib/pengine/pe-warn-33.bz2
Feb 24 20:43:08 xen2 crmd: [5537]: info: te_pseudo_action: Pseudo action 14 fired and confirmed
Feb 24 20:43:08 xen2 crmd: [5537]: info: te_pseudo_action: Pseudo action 29 fired and confirmed
Feb 24 20:43:08 xen2 crmd: [5537]: info: te_pseudo_action: Pseudo action 35 fired and confirmed
Feb 24 20:43:08 xen2 crmd: [5537]: info: te_pseudo_action: Pseudo action 41 fired and confirmed
Feb 24 20:43:08 xen2 crmd: [5537]: info: te_pseudo_action: Pseudo action 45 fired and confirmed
Feb 24 20:43:08 xen2 crmd: [5537]: info: te_fence_node: Executing reboot fencing operation (47) on xen1 (timeout=20000)
Feb 24 20:43:08 xen2 stonithd: [5532]: info: client tengine [pid: 5537] requests a STONITH operation RESET on node xen1
Feb 24 20:43:08 xen2 stonithd: [5532]: info: we can't manage xen1, broadcast request to other nodes
Feb 24 20:43:08 xen2 stonithd: [5532]: info: Broadcasting the message succeeded: require others to stonith node xen1.
Feb 24 20:43:08 xen2 crmd: [5537]: info: te_pseudo_action: Pseudo action 7 fired and confirmed
Feb 24 20:43:08 xen2 crmd: [5537]: info: te_pseudo_action: Pseudo action 11 fired and confirmed
Feb 24 20:43:08 xen2 pengine: [5536]: WARN: process_pe_message: Transition 30: WARNINGs found during PE processing. PEngine Input stored in: /var/lib/pengine/pe-warn-33.bz2
Feb 24 20:43:08 xen2 pengine: [5536]: info: process_pe_message: Configuration ERRORs found during PE processing. Please run "crm_verify -L" to identify issues.
Feb 24 20:43:28 xen2 stonithd: [5532]: ERROR: Failed to STONITH the node xen1: optype=RESET, op_result=TIMEOUT
Feb 24 20:43:28 xen2 crmd: [5537]: info: tengine_stonith_callback: call=-29, optype=1, node_name=xen1, result=2, node_list=, action=47:30:0:e6c42e56-088a-4674-b420-201efc520279
Feb 24 20:43:28 xen2 crmd: [5537]: ERROR: tengine_stonith_callback: Stonith of xen1 failed (2)... aborting transition.
Feb 24 20:43:28 xen2 crmd: [5537]: info: abort_transition_graph: tengine_stonith_callback:398 - Triggered transition abort (complete=0) : Stonith failed
Feb 24 20:43:28 xen2 crmd: [5537]: info: update_abort_priority: Abort priority upgraded from 0 to 1000000
Feb 24 20:43:28 xen2 crmd: [5537]: info: update_abort_priority: Abort action done superceeded by restart
Feb 24 20:43:28 xen2 crmd: [5537]: info: run_graph: ====================================================
Feb 24 20:43:28 xen2 crmd: [5537]: notice: run_graph: Transition 30 (Complete=8, Pending=0, Fired=0, Skipped=20, Incomplete=6, Source=/var/lib/pengine/pe-warn-33.bz2): Stopped
Feb 24 20:43:28 xen2 crmd: [5537]: info: te_graph_trigger: Transition 30 is now complete
Feb 24 20:43:28 xen2 crmd: [5537]: info: do_state_transition: State transition S_TRANSITION_ENGINE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=notify_crmd ]
Feb 24 20:43:28 xen2 crmd: [5537]: info: do_state_transition: All 1 cluster nodes are eligible to run resources.
Feb 24 20:43:28 xen2 crmd: [5537]: info: do_pe_invoke: Query 72: Requesting the current CIB: S_POLICY_ENGINE
Feb 24 20:43:28 xen2 crmd: [5537]: info: do_pe_invoke_callback: Invoking the PE: ref=pe_calc-dc-1267040608-52, seq=124, quorate=0
Feb 24 20:43:28 xen2 pengine: [5536]: notice: unpack_config: On loss of CCM Quorum: Ignore
Feb 24 20:43:28 xen2 pengine: [5536]: info: unpack_config: Node scores: 'red' = -INFINITY, 'yellow' = 0, 'green' = 0
Feb 24 20:43:28 xen2 pengine: [5536]: info: determine_online_status: Node xen2 is online
Feb 24 20:43:28 xen2 pengine: [5536]: info: unpack_rsc_op: sbd:0_start_0 on xen2 returned 1 (unknown error) instead of the expected value: 0 (ok)
Feb 24 20:43:28 xen2 pengine: [5536]: WARN: unpack_rsc_op: Processing failed op sbd:0_start_0 on xen2: unknown error
Feb 24 20:43:28 xen2 pengine: [5536]: ERROR: unpack_simple_rsc_order: Constraint DlmBeforeO2cb: no resource found for LHS (dlm)
Feb 24 20:43:28 xen2 pengine: [5536]: notice: native_print: DRAC-xen1 (stonith:external/drac5): Stopped
Feb 24 20:43:28 xen2 pengine: [5536]: notice: native_print: DRAC-xen2 (stonith:external/drac5): Stopped
Feb 24 20:43:28 xen2 pengine: [5536]: notice: clone_print: Clone Set: sbd-clone
Feb 24 20:43:28 xen2 pengine: [5536]: notice: native_print: sbd:0 (stonith:external/sbd): Started xen2 FAILED
Feb 24 20:43:28 xen2 pengine: [5536]: notice: print_list: Stopped: [ sbd:1 ]
Feb 24 20:43:28 xen2 pengine: [5536]: notice: clone_print: Clone Set: dlm-clone
Feb 24 20:43:28 xen2 pengine: [5536]: notice: print_list: Stopped: [ dlm:0 dlm:1 ]
Feb 24 20:43:28 xen2 pengine: [5536]: notice: clone_print: Clone Set: clvm-clone
Feb 24 20:43:28 xen2 pengine: [5536]: notice: print_list: Stopped: [ clvm:0 clvm:1 ]
Feb 24 20:43:28 xen2 pengine: [5536]: notice: native_print: LVMforVM1 (ocf::heartbeat:LVM): Stopped
Feb 24 20:43:28 xen2 pengine: [5536]: notice: native_print: LVMforVM2 (ocf::heartbeat:LVM): Stopped
Feb 24 20:43:28 xen2 pengine: [5536]: notice: native_print: vm1 (ocf::heartbeat:Xen): Stopped
Feb 24 20:43:28 xen2 pengine: [5536]: notice: clone_print: Clone Set: o2cb-clone (unique)
Feb 24 20:43:28 xen2 pengine: [5536]: notice: native_print: o2cb:0 (ocf::ocfs2:o2cb): Stopped
Feb 24 20:43:28 xen2 pengine: [5536]: notice: native_print: o2cb:1 (ocf::ocfs2:o2cb): Stopped
Feb 24 20:43:28 xen2 pengine: [5536]: notice: clone_print: Clone Set: ocfs2-data
Feb 24 20:43:28 xen2 pengine: [5536]: notice: print_list: Stopped: [ ocfs2data:0 ocfs2data:1 ]
Feb 24 20:43:28 xen2 pengine: [5536]: notice: clone_print: Clone Set: ocfs2-config-clone
Feb 24 20:43:28 xen2 pengine: [5536]: notice: print_list: Stopped: [ ocfs2-config:0 ocfs2-config:1 ]
Feb 24 20:43:28 xen2 pengine: [5536]: info: get_failcount: sbd-clone has failed 1000000 times on xen2
Feb 24 20:43:28 xen2 pengine: [5536]: WARN: common_apply_stickiness: Forcing sbd-clone away from xen2 after 1000000 failures (max=1000000)
Feb 24 20:43:28 xen2 pengine: [5536]: WARN: native_color: Resource DRAC-xen1 cannot run anywhere
Feb 24 20:43:28 xen2 pengine: [5536]: WARN: native_color: Resource DRAC-xen2 cannot run anywhere
Feb 24 20:43:28 xen2 pengine: [5536]: WARN: native_color: Resource sbd:1 cannot run anywhere
Feb 24 20:43:28 xen2 pengine: [5536]: WARN: native_color: Resource sbd:0 cannot run anywhere
Feb 24 20:43:28 xen2 pengine: [5536]: WARN: native_color: Resource dlm:1 cannot run anywhere
Feb 24 20:43:28 xen2 pengine: [5536]: WARN: native_color: Resource clvm:1 cannot run anywhere
Feb 24 20:43:28 xen2 pengine: [5536]: WARN: native_color: Resource o2cb:1 cannot run anywhere
Feb 24 20:43:28 xen2 pengine: [5536]: WARN: native_color: Resource ocfs2data:1 cannot run anywhere
Feb 24 20:43:28 xen2 pengine: [5536]: WARN: native_color: Resource ocfs2-config:1 cannot run anywhere
Feb 24 20:43:28 xen2 pengine: [5536]: notice: RecurringOp: Start recurring monitor (10s) for dlm:0 on xen2
Feb 24 20:43:28 xen2 pengine: [5536]: notice: RecurringOp: Start recurring monitor (10s) for clvm:0 on xen2
Feb 24 20:43:28 xen2 pengine: [5536]: notice: RecurringOp: Start recurring monitor (10s) for LVMforVM1 on xen2
Feb 24 20:43:29 xen2 pengine: [5536]: notice: RecurringOp: Start recurring monitor (10s) for LVMforVM2 on xen2
Feb 24 20:43:29 xen2 pengine: [5536]: notice: RecurringOp: Start recurring monitor (10s) for vm1 on xen2
Feb 24 20:43:29 xen2 pengine: [5536]: notice: RecurringOp: Start recurring monitor (120s) for o2cb:0 on xen2
Feb 24 20:43:29 xen2 pengine: [5536]: notice: RecurringOp: Start recurring monitor (20s) for ocfs2data:0 on xen2
Feb 24 20:43:29 xen2 pengine: [5536]: notice: RecurringOp: Start recurring monitor (20s) for ocfs2-config:0 on xen2
Feb 24 20:43:29 xen2 pengine: [5536]: WARN: stage6: Scheduling Node xen1 for STONITH
Feb 24 20:43:29 xen2 pengine: [5536]: info: native_start_constraints: Ordering dlm:0_start_0 after xen1 recovery
Feb 24 20:43:29 xen2 pengine: [5536]: info: native_start_constraints: Ordering clvm:0_start_0 after xen1 recovery
Feb 24 20:43:29 xen2 pengine: [5536]: info: native_start_constraints: Ordering LVMforVM1_start_0 after xen1 recovery
Feb 24 20:43:29 xen2 pengine: [5536]: info: native_start_constraints: Ordering LVMforVM2_start_0 after xen1 recovery
Feb 24 20:43:29 xen2 pengine: [5536]: info: native_start_constraints: Ordering vm1_start_0 after xen1 recovery
Feb 24 20:43:29 xen2 pengine: [5536]: info: native_start_constraints: Ordering o2cb:0_start_0 after xen1 recovery
Feb 24 20:43:29 xen2 pengine: [5536]: info: native_start_constraints: Ordering ocfs2data:0_start_0 after xen1 recovery
Feb 24 20:43:29 xen2 pengine: [5536]: info: native_start_constraints: Ordering ocfs2-config:0_start_0 after xen1 recovery
Feb 24 20:43:29 xen2 pengine: [5536]: info: find_compatible_child: Colocating clvm:0 with dlm:0 on xen2
Feb 24 20:43:29 xen2 pengine: [5536]: notice: LogActions: Leave resource DRAC-xen1 (Stopped)
Feb 24 20:43:29 xen2 pengine: [5536]: notice: LogActions: Leave resource DRAC-xen2 (Stopped)
Feb 24 20:43:29 xen2 pengine: [5536]: notice: LogActions: Stop resource sbd:0 (xen2)
Feb 24 20:43:29 xen2 pengine: [5536]: notice: LogActions: Leave resource sbd:1 (Stopped)
Feb 24 20:43:29 xen2 pengine: [5536]: notice: LogActions: Start dlm:0 (xen2)
Feb 24 20:43:29 xen2 pengine: [5536]: notice: LogActions: Leave resource dlm:1 (Stopped)
Feb 24 20:43:29 xen2 pengine: [5536]: notice: LogActions: Start clvm:0 (xen2)
Feb 24 20:43:29 xen2 pengine: [5536]: notice: LogActions: Leave resource clvm:1 (Stopped)
Feb 24 20:43:29 xen2 pengine: [5536]: notice: LogActions: Start LVMforVM1 (xen2)
Feb 24 20:43:29 xen2 pengine: [5536]: notice: LogActions: Start LVMforVM2 (xen2)
Feb 24 20:43:29 xen2 pengine: [5536]: notice: LogActions: Start vm1 (xen2)
Feb 24 20:43:29 xen2 pengine: [5536]: notice: LogActions: Start o2cb:0 (xen2)
Feb 24 20:43:29 xen2 pengine: [5536]: notice: LogActions: Leave resource o2cb:1 (Stopped)
Feb 24 20:43:29 xen2 pengine: [5536]: notice: LogActions: Start ocfs2data:0 (xen2)
Feb 24 20:43:29 xen2 pengine: [5536]: notice: LogActions: Leave resource ocfs2data:1 (Stopped)
Feb 24 20:43:29 xen2 pengine: [5536]: notice: LogActions: Start ocfs2-config:0 (xen2)
Feb 24 20:43:29 xen2 pengine: [5536]: notice: LogActions: Leave resource ocfs2-config:1 (Stopped)
Feb 24 20:43:29 xen2 crmd: [5537]: info: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
Feb 24 20:43:29 xen2 crmd: [5537]: info: unpack_graph: Unpacked transition 31: 34 actions in 34 synapses
Feb 24 20:43:29 xen2 crmd: [5537]: info: do_te_invoke: Processing graph 31 (ref=pe_calc-dc-1267040608-52) derived from /var/lib/pengine/pe-warn-34.bz2
Feb 24 20:43:29 xen2 crmd: [5537]: info: te_pseudo_action: Pseudo action 14 fired and confirmed
Feb 24 20:43:29 xen2 crmd: [5537]: info: te_pseudo_action: Pseudo action 29 fired and confirmed
Feb 24 20:43:29 xen2 crmd: [5537]: info: te_pseudo_action: Pseudo action 35 fired and confirmed
Feb 24 20:43:29 xen2 crmd: [5537]: info: te_pseudo_action: Pseudo action 41 fired and confirmed
Feb 24 20:43:29 xen2 crmd: [5537]: info: te_pseudo_action: Pseudo action 45 fired and confirmed
Feb 24 20:43:29 xen2 crmd: [5537]: info: te_fence_node: Executing reboot fencing operation (47) on xen1 (timeout=20000)
Feb 24 20:43:29 xen2 stonithd: [5532]: info: client tengine [pid: 5537] requests a STONITH operation RESET on node xen1
Feb 24 20:43:29 xen2 stonithd: [5532]: info: we can't manage xen1, broadcast request to other nodes
Feb 24 20:43:29 xen2 stonithd: [5532]: info: Broadcasting the message succeeded: require others to stonith node xen1.
Feb 24 20:43:29 xen2 crmd: [5537]: info: te_pseudo_action: Pseudo action 7 fired and confirmed
Feb 24 20:43:29 xen2 crmd: [5537]: info: te_pseudo_action: Pseudo action 11 fired and confirmed
Feb 24 20:43:29 xen2 pengine: [5536]: WARN: process_pe_message: Transition 31: WARNINGs found during PE processing. PEngine Input stored in: /var/lib/pengine/pe-warn-34.bz2
Feb 24 20:43:29 xen2 pengine: [5536]: info: process_pe_message: Configuration ERRORs found during PE processing. Please run "crm_verify -L" to identify issues.
Feb 24 20:43:49 xen2 stonithd: [5532]: ERROR: Failed to STONITH the node xen1: optype=RESET, op_result=TIMEOUT
Feb 24 20:43:49 xen2 crmd: [5537]: info: tengine_stonith_callback: call=-30, optype=1, node_name=xen1, result=2, node_list=, action=47:31:0:e6c42e56-088a-4674-b420-201efc520279
Feb 24 20:43:49 xen2 crmd: [5537]: ERROR: tengine_stonith_callback: Stonith of xen1 failed (2)... aborting transition.
Feb 24 20:43:49 xen2 crmd: [5537]: info: abort_transition_graph: tengine_stonith_callback:398 - Triggered transition abort (complete=0) : Stonith failed
Feb 24 20:43:49 xen2 crmd: [5537]: info: update_abort_priority: Abort priority upgraded from 0 to 1000000
Feb 24 20:43:49 xen2 crmd: [5537]: info: update_abort_priority: Abort action done superceeded by restart
Feb 24 20:43:49 xen2 crmd: [5537]: info: run_graph: ====================================================
Feb 24 20:43:49 xen2 crmd: [5537]: notice: run_graph: Transition 31 (Complete=8, Pending=0, Fired=0, Skipped=20, Incomplete=6, Source=/var/lib/pengine/pe-warn-34.bz2): Stopped
Feb 24 20:43:49 xen2 crmd: [5537]: info: te_graph_trigger: Transition 31 is now complete
Feb 24 20:43:49 xen2 crmd: [5537]: info: do_state_transition: State transition S_TRANSITION_ENGINE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=notify_crmd ]
Feb 24 20:43:49 xen2 crmd: [5537]: info: do_state_transition: All 1 cluster nodes are eligible to run resources.
Feb 24 20:43:49 xen2 crmd: [5537]: info: do_pe_invoke: Query 73: Requesting the current CIB: S_POLICY_ENGINE
Feb 24 20:43:49 xen2 crmd: [5537]: info: do_pe_invoke_callback: Invoking the PE: ref=pe_calc-dc-1267040629-53, seq=124, quorate=0
Feb 24 20:43:49 xen2 pengine: [5536]: notice: unpack_config: On loss of CCM Quorum: Ignore
Feb 24 20:43:49 xen2 pengine: [5536]: info: unpack_config: Node scores: 'red' = -INFINITY, 'yellow' = 0, 'green' = 0
Feb 24 20:43:49 xen2 pengine: [5536]: info: determine_online_status: Node xen2 is online
Feb 24 20:43:49 xen2 pengine: [5536]: info: unpack_rsc_op: sbd:0_start_0 on xen2 returned 1 (unknown error) instead of the expected value: 0 (ok)
Feb 24 20:43:49 xen2 pengine: [5536]: WARN: unpack_rsc_op: Processing failed op sbd:0_start_0 on xen2: unknown error
Feb 24 20:43:49 xen2 pengine: [5536]: ERROR: unpack_simple_rsc_order: Constraint DlmBeforeO2cb: no resource found for LHS (dlm)
Feb 24 20:43:49 xen2 pengine: [5536]: notice: native_print: DRAC-xen1 (stonith:external/drac5): Stopped
Feb 24 20:43:49 xen2 pengine: [5536]: notice: native_print: DRAC-xen2 (stonith:external/drac5): Stopped
Feb 24 20:43:49 xen2 pengine: [5536]: notice: clone_print: Clone Set: sbd-clone
Feb 24 20:43:49 xen2 pengine: [5536]: notice: native_print: sbd:0 (stonith:external/sbd): Started xen2 FAILED
Feb 24 20:43:49 xen2 pengine: [5536]: notice: print_list: Stopped: [ sbd:1 ]
Feb 24 20:43:49 xen2 pengine: [5536]: notice: clone_print: Clone Set: dlm-clone
Feb 24 20:43:49 xen2 pengine: [5536]: notice: print_list: Stopped: [ dlm:0 dlm:1 ]
Feb 24 20:43:49 xen2 pengine: [5536]: notice: clone_print: Clone Set: clvm-clone
Feb 24 20:43:49 xen2 pengine: [5536]: notice: print_list: Stopped: [ clvm:0 clvm:1 ]
Feb 24 20:43:49 xen2 pengine: [5536]: notice: native_print: LVMforVM1 (ocf::heartbeat:LVM): Stopped
Feb 24 20:43:49 xen2 pengine: [5536]: notice: native_print: LVMforVM2 (ocf::heartbeat:LVM): Stopped
Feb 24 20:43:49 xen2 pengine: [5536]: notice: native_print: vm1 (ocf::heartbeat:Xen): Stopped
Feb 24 20:43:49 xen2 pengine: [5536]: notice: clone_print: Clone Set: o2cb-clone (unique)
Feb 24 20:43:49 xen2 pengine: [5536]: notice: native_print: o2cb:0 (ocf::ocfs2:o2cb): Stopped
Feb 24 20:43:49 xen2 pengine: [5536]: notice: native_print: o2cb:1 (ocf::ocfs2:o2cb): Stopped
Feb 24 20:43:49 xen2 pengine: [5536]: notice: clone_print: Clone Set: ocfs2-data
Feb 24 20:43:49 xen2 pengine: [5536]: notice: print_list: Stopped: [ ocfs2data:0 ocfs2data:1 ]
Feb 24 20:43:49 xen2 pengine: [5536]: notice: clone_print: Clone Set: ocfs2-config-clone
Feb 24 20:43:49 xen2 pengine: [5536]: notice: print_list: Stopped: [ ocfs2-config:0 ocfs2-config:1 ]
Feb 24 20:43:49 xen2 pengine: [5536]: info: get_failcount: sbd-clone has failed 1000000 times on xen2
Feb 24 20:43:49 xen2 pengine: [5536]: WARN: common_apply_stickiness: Forcing sbd-clone away from xen2 after 1000000 failures (max=1000000)
Feb 24 20:43:49 xen2 pengine: [5536]: WARN: native_color: Resource DRAC-xen1 cannot run anywhere
Feb 24 20:43:49 xen2 pengine: [5536]: WARN: native_color: Resource DRAC-xen2 cannot run anywhere
Feb 24 20:43:49 xen2 pengine: [5536]: WARN: native_color: Resource sbd:1 cannot run anywhere
Feb 24 20:43:49 xen2 pengine: [5536]: WARN: native_color: Resource sbd:0 cannot run anywhere
Feb 24 20:43:49 xen2 pengine: [5536]: WARN: native_color: Resource dlm:1 cannot run anywhere
Feb 24 20:43:49 xen2 pengine: [5536]: WARN: native_color: Resource clvm:1 cannot run anywhere
Feb 24 20:43:49 xen2 pengine: [5536]: WARN: native_color: Resource o2cb:1 cannot run anywhere
Feb 24 20:43:49 xen2 pengine: [5536]: WARN: native_color: Resource ocfs2data:1 cannot run anywhere
Feb 24 20:43:49 xen2 pengine: [5536]: WARN: native_color: Resource ocfs2-config:1 cannot run anywhere
Feb 24 20:43:49 xen2 pengine: [5536]: notice: RecurringOp: Start recurring monitor (10s) for dlm:0 on xen2
Feb 24 20:43:49 xen2 pengine: [5536]: notice: RecurringOp: Start recurring monitor (10s) for clvm:0 on xen2
Feb 24 20:43:49 xen2 pengine: [5536]: notice: RecurringOp: Start recurring monitor (10s) for LVMforVM1 on xen2
Feb 24 20:43:49 xen2 pengine: [5536]: notice: RecurringOp: Start recurring monitor (10s) for LVMforVM2 on xen2
Feb 24 20:43:49 xen2 pengine: [5536]: notice: RecurringOp: Start recurring monitor (10s) for vm1 on xen2
Feb 24 20:43:49 xen2 pengine: [5536]: notice: RecurringOp: Start recurring monitor (120s) for o2cb:0 on xen2
Feb 24 20:43:49 xen2 pengine: [5536]: notice: RecurringOp: Start recurring monitor (20s) for ocfs2data:0 on xen2
Feb 24 20:43:49 xen2 pengine: [5536]: notice: RecurringOp: Start recurring monitor (20s) for ocfs2-config:0 on xen2
Feb 24 20:43:49 xen2 pengine: [5536]: WARN: stage6: Scheduling Node xen1 for STONITH
Feb 24 20:43:49 xen2 pengine: [5536]: info: native_start_constraints: Ordering dlm:0_start_0 after xen1 recovery
Feb 24 20:43:49 xen2 pengine: [5536]: info: native_start_constraints: Ordering clvm:0_start_0 after xen1 recovery
Feb 24 20:43:49 xen2 pengine: [5536]: info: native_start_constraints: Ordering LVMforVM1_start_0 after xen1 recovery
Feb 24 20:43:49 xen2 pengine: [5536]: info: native_start_constraints: Ordering LVMforVM2_start_0 after xen1 recovery
Feb 24 20:43:49 xen2 pengine: [5536]: info: native_start_constraints: Ordering vm1_start_0 after xen1 recovery
Feb 24 20:43:49 xen2 pengine: [5536]: info: native_start_constraints: Ordering o2cb:0_start_0 after xen1 recovery
Feb 24 20:43:49 xen2 pengine: [5536]: info: native_start_constraints: Ordering ocfs2data:0_start_0 after xen1 recovery
Feb 24 20:43:49 xen2 pengine: [5536]: info: native_start_constraints: Ordering ocfs2-config:0_start_0 after xen1 recovery
Feb 24 20:43:49 xen2 pengine: [5536]: info: find_compatible_child: Colocating clvm:0 with dlm:0 on xen2
Feb 24 20:43:49 xen2 pengine: [5536]: notice: LogActions: Leave resource DRAC-xen1 (Stopped)
Feb 24 20:43:49 xen2 pengine: [5536]: notice: LogActions: Leave resource DRAC-xen2 (Stopped)
Feb 24 20:43:49 xen2 pengine: [5536]: notice: LogActions: Stop resource sbd:0 (xen2)
Feb 24 20:43:49 xen2 pengine: [5536]: notice: LogActions: Leave resource sbd:1 (Stopped)
Feb 24 20:43:49 xen2 pengine: [5536]: notice: LogActions: Start dlm:0 (xen2)
Feb 24 20:43:49 xen2 pengine: [5536]: notice: LogActions: Leave resource dlm:1 (Stopped)
Feb 24 20:43:49 xen2 pengine: [5536]: notice: LogActions: Start clvm:0 (xen2)
Feb 24 20:43:49 xen2 pengine: [5536]: notice: LogActions: Leave resource clvm:1 (Stopped)
Feb 24 20:43:49 xen2 pengine: [5536]: notice: LogActions: Start LVMforVM1 (xen2)
Feb 24 20:43:49 xen2 pengine: [5536]: notice: LogActions: Start LVMforVM2 (xen2)
Feb 24 20:43:49 xen2 pengine: [5536]: notice: LogActions: Start vm1 (xen2)
Feb 24 20:43:49 xen2 pengine: [5536]: notice: LogActions: Start o2cb:0 (xen2)
Feb 24 20:43:49 xen2 pengine: [5536]: notice: LogActions: Leave resource o2cb:1 (Stopped)
Feb 24 20:43:49 xen2 pengine: [5536]: notice: LogActions: Start ocfs2data:0 (xen2)
Feb 24 20:43:49 xen2 pengine: [5536]: notice: LogActions: Leave resource ocfs2data:1 (Stopped)
Feb 24 20:43:49 xen2 pengine: [5536]: notice: LogActions: Start ocfs2-config:0 (xen2)
Feb 24 20:43:49 xen2 pengine: [5536]: notice: LogActions: Leave resource ocfs2-config:1 (Stopped)
Feb 24 20:43:49 xen2 crmd: [5537]: info: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
Feb 24 20:43:49 xen2 crmd: [5537]: info: unpack_graph: Unpacked transition 32: 34 actions in 34 synapses
Feb 24 20:43:49 xen2 crmd: [5537]: info: do_te_invoke: Processing graph 32 (ref=pe_calc-dc-1267040629-53) derived from /var/lib/pengine/pe-warn-35.bz2
Feb 24 20:43:49 xen2 crmd: [5537]: info: te_pseudo_action: Pseudo action 14 fired and confirmed
Feb 24 20:43:49 xen2 crmd: [5537]: info: te_pseudo_action: Pseudo action 29 fired and confirmed
Feb 24 20:43:49 xen2 crmd: [5537]: info: te_pseudo_action: Pseudo action 35 fired and confirmed
Feb 24 20:43:49 xen2 crmd: [5537]: info: te_pseudo_action: Pseudo action 41 fired and confirmed
Feb 24 20:43:49 xen2 crmd: [5537]: info: te_pseudo_action: Pseudo action 45 fired and confirmed
Feb 24 20:43:49 xen2 crmd: [5537]: info: te_fence_node: Executing reboot fencing operation (47) on xen1 (timeout=20000)
Feb 24 20:43:49 xen2 stonithd: [5532]: info: client tengine [pid: 5537] requests a STONITH operation RESET on node xen1
Feb 24 20:43:49 xen2 stonithd: [5532]: info: we can't manage xen1, broadcast request to other nodes
Feb 24 20:43:49 xen2 stonithd: [5532]: info: Broadcasting the message succeeded: require others to stonith node xen1.
Feb 24 20:43:49 xen2 crmd: [5537]: info: te_pseudo_action: Pseudo action 7 fired and confirmed
Feb 24 20:43:49 xen2 crmd: [5537]: info: te_pseudo_action: Pseudo action 11 fired and confirmed
Feb 24 20:43:49 xen2 pengine: [5536]: WARN: process_pe_message: Transition 32: WARNINGs found during PE processing. PEngine Input stored in: /var/lib/pengine/pe-warn-35.bz2
Feb 24 20:43:49 xen2 pengine: [5536]: info: process_pe_message: Configuration ERRORs found during PE processing. Please run "crm_verify -L" to identify issues.
Feb 24 20:44:03 xen2 openais[5522]: [TOTEM] entering GATHER state from 11.
Feb 24 20:44:05 xen2 openais[5522]: [TOTEM] Saving state aru 62 high seq received 62
Feb 24 20:44:05 xen2 openais[5522]: [TOTEM] Storing new sequence id for ring 80
Feb 24 20:44:05 xen2 openais[5522]: [TOTEM] entering COMMIT state.
Feb 24 20:44:05 xen2 openais[5522]: [TOTEM] entering RECOVERY state.
Feb 24 20:44:05 xen2 openais[5522]: [TOTEM] position [0] member 192.168.1.110:
Feb 24 20:44:05 xen2 openais[5522]: [TOTEM] previous ring seq 124 rep 192.168.1.110
Feb 24 20:44:05 xen2 openais[5522]: [TOTEM] aru c high delivered c received flag 1
Feb 24 20:44:05 xen2 openais[5522]: [TOTEM] position [1] member 192.168.1.112:
Feb 24 20:44:05 xen2 openais[5522]: [TOTEM] previous ring seq 124 rep 192.168.1.112
Feb 24 20:44:05 xen2 openais[5522]: [TOTEM] aru 62 high delivered 62 received flag 1
Feb 24 20:44:05 xen2 openais[5522]: [TOTEM] Did not need to originate any messages in recovery.
Feb 24 20:44:05 xen2 openais[5522]: [CLM ] CLM CONFIGURATION CHANGE
Feb 24 20:44:05 xen2 openais[5522]: [CLM ] New Configuration:
Feb 24 20:44:05 xen2 openais[5522]: [CLM ] r(0) ip(192.168.1.112)
Feb 24 20:44:05 xen2 openais[5522]: [CLM ] Members Left:
Feb 24 20:44:05 xen2 openais[5522]: [CLM ] Members Joined:
Feb 24 20:44:05 xen2 openais[5522]: [crm ] notice: pcmk_peer_update: Transitional membership event on ring 128: memb=1, new=0, lost=0
Feb 24 20:44:05 xen2 openais[5522]: [crm ] info: pcmk_peer_update: memb: xen2 112
Feb 24 20:44:05 xen2 openais[5522]: [CLM ] CLM CONFIGURATION CHANGE
Feb 24 20:44:05 xen2 openais[5522]: [CLM ] New Configuration:
Feb 24 20:44:05 xen2 openais[5522]: [CLM ] r(0) ip(192.168.1.110)
Feb 24 20:44:05 xen2 openais[5522]: [CLM ] r(0) ip(192.168.1.112)
Feb 24 20:44:05 xen2 openais[5522]: [CLM ] Members Left:
Feb 24 20:44:05 xen2 openais[5522]: [CLM ] Members Joined:
Feb 24 20:44:05 xen2 openais[5522]: [CLM ] r(0) ip(192.168.1.110)
Feb 24 20:44:05 xen2 openais[5522]: [crm ] notice: pcmk_peer_update: Stable membership event on ring 128: memb=2, new=1, lost=0
Feb 24 20:44:05 xen2 openais[5522]: [MAIN ] info: update_member: Creating entry for node 110 born on 128
Feb 24 20:44:05 xen2 openais[5522]: [MAIN ] info: update_member: Node 110/unknown is now: member
Feb 24 20:44:05 xen2 openais[5522]: [crm ] info: pcmk_peer_update: NEW: .pending. 110
Feb 24 20:44:05 xen2 openais[5522]: [crm ] info: pcmk_peer_update: MEMB: .pending. 110
Feb 24 20:44:05 xen2 openais[5522]: [crm ] info: pcmk_peer_update: MEMB: xen2 112
Feb 24 20:44:05 xen2 crmd: [5537]: notice: ais_dispatch: Membership 128: quorum acquired
Feb 24 20:44:05 xen2 crmd: [5537]: info: crm_new_peer: Node <null> now has id: 110
Feb 24 20:44:05 xen2 openais[5522]: [crm ] info: send_member_notification: Sending membership update 128 to 2 children
Feb 24 20:44:05 xen2 crmd: [5537]: info: crm_update_peer: Node (null): id=110 state=member (new) addr=r(0) ip(192.168.1.110) votes=0 born=0 seen=128 proc=00000000000000000000000000000000
Feb 24 20:44:05 xen2 cib: [5533]: notice: ais_dispatch: Membership 128: quorum acquired
Feb 24 20:44:05 xen2 cib: [5533]: info: crm_new_peer: Node <null> now has id: 110
Feb 24 20:44:05 xen2 cib: [5533]: info: crm_update_peer: Node (null): id=110 state=member (new) addr=r(0) ip(192.168.1.110) votes=0 born=0 seen=128 proc=00000000000000000000000000000000
Feb 24 20:44:05 xen2 openais[5522]: [MAIN ] info: update_member: 0x772f60 Node 112 ((null)) born on: 128
Feb 24 20:44:05 xen2 openais[5522]: [SYNC ] This node is within the primary component and will provide service.
Feb 24 20:44:05 xen2 crmd: [5537]: info: crm_update_quorum: Updating quorum status to true (call=76)
Feb 24 20:44:05 xen2 cib: [5533]: info: ais_dispatch: Membership 128: quorum retained
Feb 24 20:44:05 xen2 cib: [5533]: info: crm_get_peer: Node 110 is now known as xen1
Feb 24 20:44:05 xen2 cib: [5533]: info: crm_update_peer: Node xen1: id=110 state=member addr=r(0) ip(192.168.1.110) votes=1 (new) born=128 seen=128 proc=00000000000000000000000000053312 (new)
Feb 24 20:44:05 xen2 openais[5522]: [TOTEM] entering OPERATIONAL state.
Feb 24 20:44:05 xen2 openais[5522]: [MAIN ] info: update_member: 0x776ed0 Node 110 (xen1) born on: 128
Feb 24 20:44:05 xen2 openais[5522]: [MAIN ] info: update_member: 0x776ed0 Node 110 now known as xen1 (was: (null))
Feb 24 20:44:05 xen2 openais[5522]: [MAIN ] info: update_member: Node xen1 now has process list: 00000000000000000000000000053312 (340754)
Feb 24 20:44:05 xen2 openais[5522]: [MAIN ] info: update_member: Node xen1 now has 1 quorum votes (was 0)
Feb 24 20:44:05 xen2 openais[5522]: [crm ] info: send_member_notification: Sending membership update 128 to 2 children
Feb 24 20:44:05 xen2 openais[5522]: [CLM ] got nodejoin message 192.168.1.110
Feb 24 20:44:05 xen2 openais[5522]: [CLM ] got nodejoin message 192.168.1.112
Feb 24 20:44:05 xen2 cib: [5533]: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/74, version=0.87.17): ok (rc=0)
Feb 24 20:44:05 xen2 cib: [5533]: info: log_data_element: cib:diff: - <cib have-quorum="0" admin_epoch="0" epoch="87" num_updates="17" />
Feb 24 20:44:05 xen2 cib: [5533]: info: log_data_element: cib:diff: + <cib have-quorum="1" admin_epoch="0" epoch="88" num_updates="1" />
Feb 24 20:44:05 xen2 cib: [5533]: info: cib_process_request: Operation complete: op cib_modify for section cib (origin=local/crmd/76, version=0.88.1): ok (rc=0)
Feb 24 20:44:05 xen2 crmd: [5537]: info: abort_transition_graph: need_abort:59 - Triggered transition abort (complete=0) : Non-status change
Feb 24 20:44:05 xen2 crmd: [5537]: info: update_abort_priority: Abort priority upgraded from 0 to 1000000
Feb 24 20:44:05 xen2 crmd: [5537]: info: update_abort_priority: Abort action done superceeded by restart
Feb 24 20:44:05 xen2 crmd: [5537]: info: need_abort: Aborting on change to have-quorum
Feb 24 20:44:05 xen2 crmd: [5537]: info: ais_dispatch: Membership 128: quorum retained
Feb 24 20:44:05 xen2 crmd: [5537]: info: crm_get_peer: Node 110 is now known as xen1
Feb 24 20:44:05 xen2 crmd: [5537]: info: ais_status_callback: status: xen1 is now member
Feb 24 20:44:05 xen2 crmd: [5537]: info: crm_update_peer: Node xen1: id=110 state=member addr=r(0) ip(192.168.1.110) votes=1 (new) born=128 seen=128 proc=00000000000000000000000000053312 (new)
Feb 24 20:44:05 xen2 cib: [5533]: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/78, version=0.88.1): ok (rc=0)
Feb 24 20:44:05 xen2 cib: [5533]: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/79, version=0.88.1): ok (rc=0)
Feb 24 20:44:05 xen2 cib: [5533]: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/82, version=0.88.2): ok (rc=0)
Feb 24 20:44:05 xen2 cib: [5914]: info: write_cib_contents: Archived previous version as /var/lib/heartbeat/crm/cib-88.raw
Feb 24 20:44:05 xen2 cib: [5914]: info: write_cib_contents: Wrote version 0.88.0 of the CIB to disk (digest: 679cd88deaaf419ad9ecd4dce214a16f)
Feb 24 20:44:05 xen2 cib: [5914]: info: retrieveCib: Reading cluster configuration from: /var/lib/heartbeat/crm/cib.NGLGQZ (digest: /var/lib/heartbeat/crm/cib.5sjtaV)
Feb 24 20:44:09 xen2 stonithd: [5532]: ERROR: Failed to STONITH the node xen1: optype=RESET, op_result=TIMEOUT
Feb 24 20:44:09 xen2 crmd: [5537]: info: tengine_stonith_callback: call=-31, optype=1, node_name=xen1, result=2, node_list=, action=47:32:0:e6c42e56-088a-4674-b420-201efc520279
Feb 24 20:44:09 xen2 crmd: [5537]: ERROR: tengine_stonith_callback: Stonith of xen1 failed (2)... aborting transition.
Feb 24 20:44:09 xen2 crmd: [5537]: info: abort_transition_graph: tengine_stonith_callback:398 - Triggered transition abort (complete=0) : Stonith failed
Feb 24 20:44:09 xen2 crmd: [5537]: info: run_graph: ====================================================
Feb 24 20:44:09 xen2 crmd: [5537]: notice: run_graph: Transition 32 (Complete=8, Pending=0, Fired=0, Skipped=20, Incomplete=6, Source=/var/lib/pengine/pe-warn-35.bz2): Stopped
Feb 24 20:44:09 xen2 crmd: [5537]: info: te_graph_trigger: Transition 32 is now complete
Feb 24 20:44:09 xen2 crmd: [5537]: info: do_state_transition: State transition S_TRANSITION_ENGINE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=notify_crmd ]
Feb 24 20:44:09 xen2 crmd: [5537]: info: do_state_transition: Membership changed: 124 -> 128 - join restart
Feb 24 20:44:09 xen2 crmd: [5537]: info: do_pe_invoke: Query 83: Requesting the current CIB: S_POLICY_ENGINE
Feb 24 20:44:09 xen2 crmd: [5537]: info: do_state_transition: State transition S_POLICY_ENGINE -> S_INTEGRATION [ input=I_NODE_JOIN cause=C_FSA_INTERNAL origin=do_state_transition ]
Feb 24 20:44:09 xen2 crmd: [5537]: info: update_dc: Unset DC xen2
Feb 24 20:44:09 xen2 crmd: [5537]: info: join_make_offer: Making join offers based on membership 128
Feb 24 20:44:09 xen2 crmd: [5537]: info: do_dc_join_offer_all: join-2: Waiting on 2 outstanding join acks
Feb 24 20:44:09 xen2 crmd: [5537]: info: update_dc: Set DC to xen2 (3.0.1)
Feb 24 20:44:09 xen2 crmd: [5537]: info: update_dc: Unset DC xen2
Feb 24 20:44:09 xen2 crmd: [5537]: info: do_dc_join_offer_all: A new node joined the cluster
Feb 24 20:44:09 xen2 crmd: [5537]: info: do_dc_join_offer_all: join-3: Waiting on 2 outstanding join acks
Feb 24 20:44:09 xen2 crmd: [5537]: info: update_dc: Set DC to xen2 (3.0.1)
Feb 24 20:44:09 xen2 crmd: [5537]: info: do_state_transition: State transition S_INTEGRATION -> S_FINALIZE_JOIN [ input=I_INTEGRATED cause=C_FSA_INTERNAL origin=check_join_state ]
Feb 24 20:44:09 xen2 crmd: [5537]: info: do_state_transition: All 2 cluster nodes responded to the join offer.
Feb 24 20:44:09 xen2 crmd: [5537]: info: do_dc_join_finalize: join-3: Syncing the CIB from xen2 to the rest of the cluster
Feb 24 20:44:09 xen2 cib: [5533]: info: cib_process_request: Operation complete: op cib_sync for section 'all' (origin=local/crmd/86, version=0.88.2): ok (rc=0)
Feb 24 20:44:09 xen2 attrd: [5535]: info: crm_new_peer: Node xen1 now has id: 110
Feb 24 20:44:09 xen2 attrd: [5535]: info: crm_new_peer: Node 110 is now known as xen1
Feb 24 20:44:09 xen2 lrmd: [5534]: debug: stonithRA plugin: provider attribute is not needed and will be ignored.
Feb 24 20:44:09 xen2 cib: [5533]: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/87, version=0.88.2): ok (rc=0)
Feb 24 20:44:09 xen2 cib: [5533]: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/88, version=0.88.2): ok (rc=0)
Feb 24 20:44:09 xen2 crmd: [5537]: info: do_dc_join_ack: join-3: Updating node state to member for xen1
Feb 24 20:44:09 xen2 crmd: [5537]: info: do_dc_join_ack: join-3: Updating node state to member for xen2
Feb 24 20:44:09 xen2 cib: [5533]: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='xen1']/transient_attributes (origin=xen1/crmd/7, version=0.88.2): ok (rc=0)
Feb 24 20:44:09 xen2 cib: [5533]: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='xen1']/lrm (origin=xen1/crmd/8, version=0.88.2): ok (rc=0)
Feb 24 20:44:09 xen2 cib: [5533]: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='xen1']/lrm (origin=local/crmd/89, version=0.88.2): ok (rc=0)
Feb 24 20:44:09 xen2 crmd: [5537]: info: erase_xpath_callback: Deletion of "//node_state[@uname='xen1']/lrm": ok (rc=0)
Feb 24 20:44:09 xen2 crmd: [5537]: info: do_state_transition: State transition S_FINALIZE_JOIN -> S_POLICY_ENGINE [ input=I_FINALIZED cause=C_FSA_INTERNAL origin=check_join_state ]
Feb 24 20:44:09 xen2 crmd: [5537]: info: do_state_transition: All 2 cluster nodes are eligible to run resources.
Feb 24 20:44:09 xen2 crmd: [5537]: info: do_dc_join_final: Ensuring DC, quorum and node attributes are up-to-date
Feb 24 20:44:09 xen2 crmd: [5537]: info: crm_update_quorum: Updating quorum status to true (call=95)
Feb 24 20:44:09 xen2 crmd: [5537]: info: abort_transition_graph: do_te_invoke:191 - Triggered transition abort (complete=1) : Peer Cancelled
Feb 24 20:44:09 xen2 crmd: [5537]: info: do_pe_invoke: Query 96: Requesting the current CIB: S_POLICY_ENGINE
Feb 24 20:44:09 xen2 cib: [5533]: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='xen2']/lrm (origin=local/crmd/91, version=0.88.4): ok (rc=0)
Feb 24 20:44:09 xen2 crmd: [5537]: info: abort_transition_graph: te_update_diff:267 - Triggered transition abort (complete=1, tag=lrm_rsc_op, id=DRAC-xen1_monitor_0, magic=0:7;4:1:7:e6c42e56-088a-4674-b420-201efc520279, cib=0.88.4) : Resource op removal
Feb 24 20:44:09 xen2 crmd: [5537]: info: erase_xpath_callback: Deletion of "//node_state[@uname='xen2']/lrm": ok (rc=0)
Feb 24 20:44:09 xen2 crmd: [5537]: info: do_pe_invoke: Query 97: Requesting the current CIB: S_POLICY_ENGINE
Feb 24 20:44:10 xen2 crmd: [5537]: info: te_update_diff: Detected LRM refresh - 12 resources updated: Skipping all resource events
Feb 24 20:44:10 xen2 crmd: [5537]: info: abort_transition_graph: te_update_diff:227 - Triggered transition abort (complete=1, tag=diff, id=(null), magic=NA, cib=0.88.5) : LRM Refresh
Feb 24 20:44:10 xen2 cib: [5533]: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/93, version=0.88.5): ok (rc=0)
Feb 24 20:44:10 xen2 crmd: [5537]: info: do_pe_invoke: Query 98: Requesting the current CIB: S_POLICY_ENGINE
Feb 24 20:44:10 xen2 attrd: [5535]: info: attrd_local_callback: Sending full refresh (origin=crmd)
Feb 24 20:44:10 xen2 cib: [5533]: info: cib_process_request: Operation complete: op cib_modify for section cib (origin=local/crmd/95, version=0.88.5): ok (rc=0)
Feb 24 20:44:10 xen2 attrd: [5535]: info: attrd_trigger_update: Sending flush op to all hosts for: probe_complete (true)
Feb 24 20:44:10 xen2 attrd: [5535]: info: attrd_trigger_update: Sending flush op to all hosts for: last-failure-sbd:0 (1267040036)
Feb 24 20:44:10 xen2 crmd: [5537]: info: do_pe_invoke_callback: Invoking the PE: ref=pe_calc-dc-1267040650-65, seq=128, quorate=1
Feb 24 20:44:10 xen2 attrd: [5535]: info: attrd_trigger_update: Sending flush op to all hosts for: terminate (<null>)
Feb 24 20:44:10 xen2 pengine: [5536]: notice: unpack_config: On loss of CCM Quorum: Ignore
Feb 24 20:44:10 xen2 pengine: [5536]: info: unpack_config: Node scores: 'red' = -INFINITY, 'yellow' = 0, 'green' = 0
Feb 24 20:44:10 xen2 attrd: [5535]: info: attrd_trigger_update: Sending flush op to all hosts for: fail-count-sbd:0 (INFINITY)
Feb 24 20:44:10 xen2 pengine: [5536]: info: determine_online_status: Node xen2 is online
Feb 24 20:44:10 xen2 pengine: [5536]: info: unpack_rsc_op: sbd:0_start_0 on xen2 returned 1 (unknown error) instead of the expected value: 0 (ok)
Feb 24 20:44:10 xen2 pengine: [5536]: WARN: unpack_rsc_op: Processing failed op sbd:0_start_0 on xen2: unknown error
Feb 24 20:44:10 xen2 pengine: [5536]: info: determine_online_status: Node xen1 is online
Feb 24 20:44:10 xen2 pengine: [5536]: ERROR: unpack_simple_rsc_order: Constraint DlmBeforeO2cb: no resource found for LHS (dlm)
Feb 24 20:44:10 xen2 pengine: [5536]: notice: native_print: DRAC-xen1 (stonith:external/drac5): Stopped
Feb 24 20:44:10 xen2 pengine: [5536]: notice: native_print: DRAC-xen2 (stonith:external/drac5): Stopped
Feb 24 20:44:10 xen2 attrd: [5535]: info: attrd_trigger_update: Sending flush op to all hosts for: shutdown (<null>)
Feb 24 20:44:10 xen2 pengine: [5536]: notice: clone_print: Clone Set: sbd-clone
Feb 24 20:44:10 xen2 pengine: [5536]: notice: native_print: sbd:0 (stonith:external/sbd): Started xen2 FAILED
Feb 24 20:44:10 xen2 pengine: [5536]: notice: print_list: Stopped: [ sbd:1 ]
Feb 24 20:44:10 xen2 pengine: [5536]: notice: clone_print: Clone Set: dlm-clone
Feb 24 20:44:10 xen2 pengine: [5536]: notice: print_list: Stopped: [ dlm:0 dlm:1 ]
Feb 24 20:44:10 xen2 pengine: [5536]: notice: clone_print: Clone Set: clvm-clone
Feb 24 20:44:10 xen2 pengine: [5536]: notice: print_list: Stopped: [ clvm:0 clvm:1 ]
Feb 24 20:44:10 xen2 pengine: [5536]: notice: native_print: LVMforVM1 (ocf::heartbeat:LVM): Stopped
Feb 24 20:44:10 xen2 pengine: [5536]: notice: native_print: LVMforVM2 (ocf::heartbeat:LVM): Stopped
Feb 24 20:44:10 xen2 pengine: [5536]: notice: native_print: vm1 (ocf::heartbeat:Xen): Stopped
Feb 24 20:44:10 xen2 pengine: [5536]: notice: clone_print: Clone Set: o2cb-clone (unique)
Feb 24 20:44:10 xen2 pengine: [5536]: notice: native_print: o2cb:0 (ocf::ocfs2:o2cb): Stopped
Feb 24 20:44:10 xen2 pengine: [5536]: notice: native_print: o2cb:1 (ocf::ocfs2:o2cb): Stopped
Feb 24 20:44:10 xen2 pengine: [5536]: notice: clone_print: Clone Set: ocfs2-data
Feb 24 20:44:10 xen2 pengine: [5536]: notice: print_list: Stopped: [ ocfs2data:0 ocfs2data:1 ]
Feb 24 20:44:10 xen2 pengine: [5536]: notice: clone_print: Clone Set: ocfs2-config-clone
Feb 24 20:44:10 xen2 pengine: [5536]: notice: print_list: Stopped: [ ocfs2-config:0 ocfs2-config:1 ]
Feb 24 20:44:10 xen2 pengine: [5536]: info: get_failcount: sbd-clone has failed 1000000 times on xen2
Feb 24 20:44:10 xen2 pengine: [5536]: WARN: common_apply_stickiness: Forcing sbd-clone away from xen2 after 1000000 failures (max=1000000)
Feb 24 20:44:10 xen2 pengine: [5536]: WARN: native_color: Resource DRAC-xen1 cannot run anywhere
Feb 24 20:44:10 xen2 pengine: [5536]: WARN: native_color: Resource DRAC-xen2 cannot run anywhere
Feb 24 20:44:10 xen2 pengine: [5536]: WARN: native_color: Resource sbd:0 cannot run anywhere
Feb 24 20:44:10 xen2 pengine: [5536]: notice: RecurringOp: Start recurring monitor (15s) for sbd:1 on xen1
Feb 24 20:44:10 xen2 pengine: [5536]: notice: RecurringOp: Start recurring monitor (10s) for dlm:0 on xen2
Feb 24 20:44:10 xen2 pengine: [5536]: notice: RecurringOp: Start recurring monitor (10s) for dlm:1 on xen1
Feb 24 20:44:10 xen2 pengine: [5536]: notice: RecurringOp: Start recurring monitor (10s) for clvm:0 on xen2
Feb 24 20:44:10 xen2 pengine: [5536]: notice: RecurringOp: Start recurring monitor (10s) for clvm:1 on xen1
Feb 24 20:44:10 xen2 pengine: [5536]: notice: RecurringOp: Start recurring monitor (10s) for LVMforVM1 on xen2
Feb 24 20:44:10 xen2 pengine: [5536]: notice: RecurringOp: Start recurring monitor (10s) for LVMforVM2 on xen1
Feb 24 20:44:10 xen2 pengine: [5536]: notice: RecurringOp: Start recurring monitor (10s) for vm1 on xen2
Feb 24 20:44:10 xen2 pengine: [5536]: notice: RecurringOp: Start recurring monitor (120s) for o2cb:0 on xen2
Feb 24 20:44:10 xen2 pengine: [5536]: notice: RecurringOp: Start recurring monitor (120s) for o2cb:1 on xen1
Feb 24 20:44:10 xen2 pengine: [5536]: notice: RecurringOp: Start recurring monitor (20s) for ocfs2data:0 on xen2
Feb 24 20:44:10 xen2 pengine: [5536]: notice: RecurringOp: Start recurring monitor (20s) for ocfs2data:1 on xen1
Feb 24 20:44:10 xen2 pengine: [5536]: notice: RecurringOp: Start recurring monitor (20s) for ocfs2-config:0 on xen2
Feb 24 20:44:10 xen2 pengine: [5536]: notice: RecurringOp: Start recurring monitor (20s) for ocfs2-config:1 on xen1
Feb 24 20:44:10 xen2 pengine: [5536]: info: find_compatible_child: Colocating clvm:0 with dlm:0 on xen2
Feb 24 20:44:10 xen2 pengine: [5536]: info: find_compatible_child: Colocating clvm:1 with dlm:1 on xen1
Feb 24 20:44:10 xen2 pengine: [5536]: notice: LogActions: Leave resource DRAC-xen1 (Stopped)
Feb 24 20:44:10 xen2 pengine: [5536]: notice: LogActions: Leave resource DRAC-xen2 (Stopped)
Feb 24 20:44:10 xen2 pengine: [5536]: notice: LogActions: Stop resource sbd:0 (xen2)
Feb 24 20:44:10 xen2 pengine: [5536]: notice: LogActions: Start sbd:1 (xen1)
Feb 24 20:44:10 xen2 pengine: [5536]: notice: LogActions: Start dlm:0 (xen2)
Feb 24 20:44:10 xen2 pengine: [5536]: notice: LogActions: Start dlm:1 (xen1)
Feb 24 20:44:10 xen2 pengine: [5536]: notice: LogActions: Start clvm:0 (xen2)
Feb 24 20:44:10 xen2 pengine: [5536]: notice: LogActions: Start clvm:1 (xen1)
Feb 24 20:44:10 xen2 pengine: [5536]: notice: LogActions: Start LVMforVM1 (xen2)
Feb 24 20:44:10 xen2 pengine: [5536]: notice: LogActions: Start LVMforVM2 (xen1)
Feb 24 20:44:10 xen2 pengine: [5536]: notice: LogActions: Start vm1 (xen2)
Feb 24 20:44:10 xen2 pengine: [5536]: notice: LogActions: Start o2cb:0 (xen2)
Feb 24 20:44:10 xen2 pengine: [5536]: notice: LogActions: Start o2cb:1 (xen1)
Feb 24 20:44:10 xen2 pengine: [5536]: notice: LogActions: Start ocfs2data:0 (xen2)
Feb 24 20:44:10 xen2 pengine: [5536]: notice: LogActions: Start ocfs2data:1 (xen1)
Feb 24 20:44:10 xen2 pengine: [5536]: notice: LogActions: Start ocfs2-config:0 (xen2)
Feb 24 20:44:10 xen2 pengine: [5536]: notice: LogActions: Start ocfs2-config:1 (xen1)
Feb 24 20:44:10 xen2 crmd: [5537]: info: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
Feb 24 20:44:10 xen2 crmd: [5537]: info: unpack_graph: Unpacked transition 33: 65 actions in 65 synapses
Feb 24 20:44:10 xen2 crmd: [5537]: info: do_te_invoke: Processing graph 33 (ref=pe_calc-dc-1267040650-65) derived from /var/lib/pengine/pe-warn-36.bz2
Feb 24 20:44:10 xen2 crmd: [5537]: info: te_rsc_command: Initiating action 6: monitor DRAC-xen1_monitor_0 on xen1
Feb 24 20:44:10 xen2 crmd: [5537]: info: te_rsc_command: Initiating action 7: monitor DRAC-xen2_monitor_0 on xen1
Feb 24 20:44:10 xen2 crmd: [5537]: info: te_rsc_command: Initiating action 8: monitor sbd:1_monitor_0 on xen1
Feb 24 20:44:10 xen2 crmd: [5537]: info: te_rsc_command: Initiating action 9: monitor dlm:1_monitor_0 on xen1
Feb 24 20:44:10 xen2 crmd: [5537]: info: te_rsc_command: Initiating action 10: monitor clvm:1_monitor_0 on xen1
Feb 24 20:44:10 xen2 crmd: [5537]: info: te_rsc_command: Initiating action 11: monitor LVMforVM1_monitor_0 on xen1
Feb 24 20:44:10 xen2 crmd: [5537]: info: te_rsc_command: Initiating action 12: monitor LVMforVM2_monitor_0 on xen1
Feb 24 20:44:10 xen2 crmd: [5537]: info: te_rsc_command: Initiating action 13: monitor vm1_monitor_0 on xen1
Feb 24 20:44:10 xen2 crmd: [5537]: info: te_rsc_command: Initiating action 14: monitor o2cb:0_monitor_0 on xen1
Feb 24 20:44:10 xen2 crmd: [5537]: info: te_rsc_command: Initiating action 15: monitor o2cb:1_monitor_0 on xen1
Feb 24 20:44:10 xen2 crmd: [5537]: info: te_rsc_command: Initiating action 16: monitor ocfs2data:1_monitor_0 on xen1
Feb 24 20:44:10 xen2 crmd: [5537]: info: te_rsc_command: Initiating action 17: monitor ocfs2-config:1_monitor_0 on xen1
Feb 24 20:44:10 xen2 crmd: [5537]: info: te_pseudo_action: Pseudo action 2 fired and confirmed
Feb 24 20:44:10 xen2 crmd: [5537]: info: match_graph_event: Action DRAC-xen1_monitor_0 (6) confirmed on xen1 (rc=0)
Feb 24 20:44:10 xen2 pengine: [5536]: WARN: process_pe_message: Transition 33: WARNINGs found during PE processing. PEngine Input stored in: /var/lib/pengine/pe-warn-36.bz2
Feb 24 20:44:10 xen2 pengine: [5536]: info: process_pe_message: Configuration ERRORs found during PE processing. Please run "crm_verify -L" to identify issues.
Feb 24 20:44:10 xen2 crmd: [5537]: info: match_graph_event: Action DRAC-xen2_monitor_0 (7) confirmed on xen1 (rc=0)
Feb 24 20:44:10 xen2 crmd: [5537]: info: match_graph_event: Action sbd:1_monitor_0 (8) confirmed on xen1 (rc=0)
Feb 24 20:44:10 xen2 crmd: [5537]: info: match_graph_event: Action dlm:1_monitor_0 (9) confirmed on xen1 (rc=0)
Feb 24 20:44:10 xen2 crmd: [5537]: info: match_graph_event: Action LVMforVM2_monitor_0 (12) confirmed on xen1 (rc=0)
Feb 24 20:44:11 xen2 crmd: [5537]: info: match_graph_event: Action o2cb:0_monitor_0 (14) confirmed on xen1 (rc=0)
Feb 24 20:44:11 xen2 crmd: [5537]: info: match_graph_event: Action clvm:1_monitor_0 (10) confirmed on xen1 (rc=0)
Feb 24 20:44:11 xen2 crmd: [5537]: info: match_graph_event: Action LVMforVM1_monitor_0 (11) confirmed on xen1 (rc=0)
Feb 24 20:44:11 xen2 crmd: [5537]: info: match_graph_event: Action vm1_monitor_0 (13) confirmed on xen1 (rc=0)
Feb 24 20:44:12 xen2 crmd: [5537]: info: match_graph_event: Action o2cb:1_monitor_0 (15) confirmed on xen1 (rc=0)
Feb 24 20:44:12 xen2 crmd: [5537]: info: match_graph_event: Action ocfs2-config:1_monitor_0 (17) confirmed on xen1 (rc=0)
Feb 24 20:44:12 xen2 crmd: [5537]: info: match_graph_event: Action ocfs2data:1_monitor_0 (16) confirmed on xen1 (rc=0)
Feb 24 20:44:12 xen2 crmd: [5537]: info: te_rsc_command: Initiating action 5: probe_complete probe_complete on xen1 - no waiting
Feb 24 20:44:12 xen2 crmd: [5537]: info: te_pseudo_action: Pseudo action 3 fired and confirmed
Feb 24 20:44:12 xen2 crmd: [5537]: info: te_pseudo_action: Pseudo action 38 fired and confirmed
Feb 24 20:44:12 xen2 crmd: [5537]: info: te_pseudo_action: Pseudo action 39 fired and confirmed
Feb 24 20:44:12 xen2 crmd: [5537]: info: te_pseudo_action: Pseudo action 52 fired and confirmed
Feb 24 20:44:12 xen2 crmd: [5537]: info: te_pseudo_action: Pseudo action 60 fired and confirmed
Feb 24 20:44:12 xen2 crmd: [5537]: info: te_pseudo_action: Pseudo action 68 fired and confirmed
Feb 24 20:44:12 xen2 crmd: [5537]: info: te_pseudo_action: Pseudo action 30 fired and confirmed
Feb 24 20:44:12 xen2 crmd: [5537]: info: te_pseudo_action: Pseudo action 31 fired and confirmed
Feb 24 20:44:12 xen2 crmd: [5537]: info: te_pseudo_action: Pseudo action 50 fired and confirmed
Feb 24 20:44:12 xen2 crmd: [5537]: info: te_pseudo_action: Pseudo action 58 fired and confirmed
Feb 24 20:44:12 xen2 crmd: [5537]: info: te_pseudo_action: Pseudo action 66 fired and confirmed
Feb 24 20:44:12 xen2 crmd: [5537]: info: te_pseudo_action: Pseudo action 22 fired and confirmed
Feb 24 20:44:12 xen2 crmd: [5537]: info: te_rsc_command: Initiating action 46: start o2cb:0_start_0 on xen2 (local)
Feb 24 20:44:12 xen2 crmd: [5537]: info: do_lrm_rsc_op: Performing key=46:33:0:e6c42e56-088a-4674-b420-201efc520279 op=o2cb:0_start_0 )
Feb 24 20:44:12 xen2 lrmd: [5534]: info: rsc:o2cb:0:15: start
Feb 24 20:44:12 xen2 crmd: [5537]: info: te_rsc_command: Initiating action 48: start o2cb:1_start_0 on xen1
Feb 24 20:44:12 xen2 crmd: [5537]: info: te_rsc_command: Initiating action 54: start ocfs2data:0_start_0 on xen2 (local)
Feb 24 20:44:12 xen2 crmd: [5537]: info: do_lrm_rsc_op: Performing key=54:33:0:e6c42e56-088a-4674-b420-201efc520279 op=ocfs2data:0_start_0 )
Feb 24 20:44:12 xen2 lrmd: [5534]: info: rsc:ocfs2data:0:16: start
Feb 24 20:44:12 xen2 crmd: [5537]: info: te_rsc_command: Initiating action 56: start ocfs2data:1_start_0 on xen1
Feb 24 20:44:12 xen2 crmd: [5537]: info: te_rsc_command: Initiating action 62: start ocfs2-config:0_start_0 on xen2 (local)
Feb 24 20:44:12 xen2 crmd: [5537]: info: do_lrm_rsc_op: Performing key=62:33:0:e6c42e56-088a-4674-b420-201efc520279 op=ocfs2-config:0_start_0 )
Feb 24 20:44:12 xen2 lrmd: [5534]: info: rsc:ocfs2-config:0:17: start
Feb 24 20:44:12 xen2 crmd: [5537]: info: te_rsc_command: Initiating action 64: start ocfs2-config:1_start_0 on xen1
Feb 24 20:44:12 xen2 crmd: [5537]: info: te_rsc_command: Initiating action 1: stop sbd:0_stop_0 on xen2 (local)
Feb 24 20:44:12 xen2 crmd: [5537]: info: do_lrm_rsc_op: Performing key=1:33:0:e6c42e56-088a-4674-b420-201efc520279 op=sbd:0_stop_0 )
Feb 24 20:44:12 xen2 lrmd: [5534]: info: rsc:sbd:0:18: stop
Feb 24 20:44:12 xen2 lrmd: [5924]: info: Try to stop STONITH resource <rsc_id=sbd:0> : Device=external/sbd
Feb 24 20:44:12 xen2 stonithd: [5532]: notice: try to stop a resource sbd:0 who is not in started resource queue.
Feb 24 20:44:12 xen2 crmd: [5537]: info: abort_transition_graph: te_update_diff:146 - Triggered transition abort (complete=0, tag=transient_attributes, id=xen1, magic=NA, cib=0.88.18) : Transient attribute: update
Feb 24 20:44:12 xen2 crmd: [5537]: info: update_abort_priority: Abort priority upgraded from 0 to 1000000
Feb 24 20:44:12 xen2 crmd: [5537]: info: update_abort_priority: Abort action done superceeded by restart
Feb 24 20:44:12 xen2 lrmd: [5534]: info: RA output: (o2cb:0:start:stderr) logd is not running
Feb 24 20:44:12 xen2 o2cb[5920]: INFO: configfs not laoded
Feb 24 20:44:12 xen2 lrmd: [5534]: info: RA output: (o2cb:0:start:stderr) 2010/02/24_20:44:12 INFO: configfs not laoded
Feb 24 20:44:12 xen2 crmd: [5537]: info: process_lrm_event: LRM operation sbd:0_stop_0 (call=18, rc=0, cib-update=99, confirmed=true) ok
Feb 24 20:44:12 xen2 crmd: [5537]: info: match_graph_event: Action sbd:0_stop_0 (1) confirmed on xen2 (rc=0)
Feb 24 20:44:12 xen2 crmd: [5537]: info: te_pseudo_action: Pseudo action 23 fired and confirmed
Feb 24 20:44:12 xen2 lrmd: [5534]: info: RA output: (o2cb:0:start:stderr) logd is not running
Feb 24 20:44:12 xen2 o2cb[5920]: INFO: Starting o2cb:0
Feb 24 20:44:12 xen2 lrmd: [5534]: info: RA output: (o2cb:0:start:stderr) 2010/02/24_20:44:12 INFO: Starting o2cb:0
Feb 24 20:44:12 xen2 lrmd: [5534]: info: RA output: (ocfs2-config:0:start:stderr) logd is not running
Feb 24 20:44:12 xen2 lrmd: [5534]: info: RA output: (ocfs2data:0:start:stderr) logd is not running
Feb 24 20:44:12 xen2 Filesystem[5922]: INFO: Running start for /dev/disk/by-id/scsi-149455400000000000000000002000000240600000f000000 on /etc/xen/vm
Feb 24 20:44:12 xen2 Filesystem[5921]: INFO: Running start for /dev/disk/by-id/scsi-149455400000000000000000002000000140600000f000000 on /var/lib/xen/images
Feb 24 20:44:12 xen2 lrmd: [5534]: info: RA output: (ocfs2-config:0:start:stderr) 2010/02/24_20:44:12 INFO: Running start for /dev/disk/by-id/scsi-149455400000000000000000002000000240600000f000000 on /etc/xen/vm
Feb 24 20:44:12 xen2 lrmd: [5534]: info: RA output: (ocfs2data:0:start:stderr) 2010/02/24_20:44:12 INFO: Running start for /dev/disk/by-id/scsi-149455400000000000000000002000000140600000f000000 on /var/lib/xen/images
Feb 24 20:44:12 xen2 crmd: [5537]: WARN: status_from_rc: Action 56 (ocfs2data:1_start_0) on xen1 failed (target: 0 vs. rc: 5): Error
Feb 24 20:44:12 xen2 crmd: [5537]: WARN: update_failcount: Updating failcount for ocfs2data:1 on xen1 after failed start: rc=5 (update=INFINITY, time=1267040652)
Feb 24 20:44:12 xen2 crmd: [5537]: info: abort_transition_graph: match_graph_event:272 - Triggered transition abort (complete=0, tag=lrm_rsc_op, id=ocfs2data:1_start_0, magic=0:5;56:33:0:e6c42e56-088a-4674-b420-201efc520279, cib=0.88.20) : Event failed
Feb 24 20:44:12 xen2 crmd: [5537]: info: match_graph_event: Action ocfs2data:1_start_0 (56) confirmed on xen1 (rc=4)
Feb 24 20:44:12 xen2 attrd: [5535]: info: find_hash_entry: Creating hash entry for fail-count-ocfs2data:1
Feb 24 20:44:12 xen2 crmd: [5537]: WARN: status_from_rc: Action 64 (ocfs2-config:1_start_0) on xen1 failed (target: 0 vs. rc: 5): Error
Feb 24 20:44:12 xen2 crmd: [5537]: WARN: update_failcount: Updating failcount for ocfs2-config:1 on xen1 after failed start: rc=5 (update=INFINITY, time=1267040652)
Feb 24 20:44:12 xen2 crmd: [5537]: info: abort_transition_graph: match_graph_event:272 - Triggered transition abort (complete=0, tag=lrm_rsc_op, id=ocfs2-config:1_start_0, magic=0:5;64:33:0:e6c42e56-088a-4674-b420-201efc520279, cib=0.88.21) : Event failed
Feb 24 20:44:12 xen2 crmd: [5537]: info: match_graph_event: Action ocfs2-config:1_start_0 (64) confirmed on xen1 (rc=4)
Feb 24 20:44:12 xen2 crmd: [5537]: info: abort_transition_graph: te_update_diff:146 - Triggered transition abort (complete=0, tag=transient_attributes, id=xen1, magic=NA, cib=0.88.22) : Transient attribute: update
Feb 24 20:44:12 xen2 crmd: [5537]: info: abort_transition_graph: te_update_diff:146 - Triggered transition abort (complete=0, tag=transient_attributes, id=xen1, magic=NA, cib=0.88.23) : Transient attribute: update
Feb 24 20:44:12 xen2 attrd: [5535]: info: find_hash_entry: Creating hash entry for last-failure-ocfs2data:1
Feb 24 20:44:12 xen2 attrd: [5535]: info: find_hash_entry: Creating hash entry for fail-count-ocfs2-config:1
Feb 24 20:44:12 xen2 crmd: [5537]: info: abort_transition_graph: te_update_diff:146 - Triggered transition abort (complete=0, tag=transient_attributes, id=xen1, magic=NA, cib=0.88.24) : Transient attribute: update
Feb 24 20:44:12 xen2 crmd: [5537]: info: abort_transition_graph: te_update_diff:146 - Triggered transition abort (complete=0, tag=transient_attributes, id=xen1, magic=NA, cib=0.88.25) : Transient attribute: update
Feb 24 20:44:12 xen2 crmd: [5537]: WARN: status_from_rc: Action 48 (o2cb:1_start_0) on xen1 failed (target: 0 vs. rc: 1): Error
Feb 24 20:44:12 xen2 lrmd: [5534]: info: RA output: (ocfs2-config:0:start:stderr) logd is not running
Feb 24 20:44:12 xen2 attrd: [5535]: info: find_hash_entry: Creating hash entry for last-failure-ocfs2-config:1
Feb 24 20:44:12 xen2 Filesystem[5922]: ERROR: Couldn't find device [/dev/disk/by-id/scsi-149455400000000000000000002000000240600000f000000]. Expected /dev/??? to exist
Feb 24 20:44:12 xen2 Filesystem[5921]: ERROR: Couldn't find device [/dev/disk/by-id/scsi-149455400000000000000000002000000140600000f000000]. Expected /dev/??? to exist
Feb 24 20:44:12 xen2 crmd: [5537]: WARN: update_failcount: Updating failcount for o2cb:1 on xen1 after failed start: rc=1 (update=INFINITY, time=1267040652)
Feb 24 20:44:12 xen2 lrmd: [5534]: info: RA output: (ocfs2data:0:start:stderr) logd is not running
Feb 24 20:44:12 xen2 crmd: [5537]: info: abort_transition_graph: match_graph_event:272 - Triggered transition abort (complete=0, tag=lrm_rsc_op, id=o2cb:1_start_0, magic=0:1;48:33:0:e6c42e56-088a-4674-b420-201efc520279, cib=0.88.26) : Event failed
Feb 24 20:44:12 xen2 crmd: [5537]: info: match_graph_event: Action o2cb:1_start_0 (48) confirmed on xen1 (rc=4)
Feb 24 20:44:12 xen2 attrd: [5535]: info: find_hash_entry: Creating hash entry for fail-count-o2cb:1
Feb 24 20:44:12 xen2 crmd: [5537]: info: abort_transition_graph: te_update_diff:146 - Triggered transition abort (complete=0, tag=transient_attributes, id=xen1, magic=NA, cib=0.88.27) : Transient attribute: update
Feb 24 20:44:12 xen2 lrmd: [5534]: info: RA output: (ocfs2data:0:start:stderr) 2010/02/24_20:44:12 ERROR: Couldn't find device [/dev/disk/by-id/scsi-149455400000000000000000002000000140600000f000000]. Expected /dev/??? to exist
Feb 24 20:44:12 xen2 attrd: [5535]: info: find_hash_entry: Creating hash entry for last-failure-o2cb:1
Feb 24 20:44:12 xen2 ocfs2_controld.pcmk: Core dumps enabled: /var/lib/openais
Feb 24 20:44:12 xen2 crmd: [5537]: info: abort_transition_graph: te_update_diff:146 - Triggered transition abort (complete=0, tag=transient_attributes, id=xen1, magic=NA, cib=0.88.28) : Transient attribute: update
Feb 24 20:44:12 xen2 lrmd: [5534]: info: RA output: (o2cb:0:start:stderr) /usr/sbin/ocfs2_controld.pcmk: Unable to access cluster service
Feb 24 20:44:12 xen2 lrmd: [5534]: info: RA output: (o2cb:0:start:stderr) while trying to initialize o2cb
Feb 24 20:44:12 xen2 o2cb[5920]: ERROR: Could not start /usr/sbin/ocfs2_controld.pcmk
Feb 24 20:44:12 xen2 lrmd: [5534]: info: RA output: (o2cb:0:start:stderr) logd is not running2010/02/24_20:44:12 ERROR: Could not start /usr/sbin/ocfs2_controld.pcmk
Feb 24 20:44:12 xen2 crmd: [5537]: info: process_lrm_event: LRM operation ocfs2data:0_start_0 (call=16, rc=5, cib-update=100, confirmed=true) not installed
Feb 24 20:44:12 xen2 crmd: [5537]: info: process_lrm_event: LRM operation ocfs2-config:0_start_0 (call=17, rc=5, cib-update=101, confirmed=true) not installed
Feb 24 20:44:13 xen2 crmd: [5537]: info: process_lrm_event: LRM operation o2cb:0_start_0 (call=15, rc=1, cib-update=102, confirmed=true) unknown error
Feb 24 20:44:13 xen2 crmd: [5537]: WARN: status_from_rc: Action 54 (ocfs2data:0_start_0) on xen2 failed (target: 0 vs. rc: 5): Error
Feb 24 20:44:13 xen2 crmd: [5537]: WARN: update_failcount: Updating failcount for ocfs2data:0 on xen2 after failed start: rc=5 (update=INFINITY, time=1267040653)
Feb 24 20:44:13 xen2 crmd: [5537]: info: abort_transition_graph: match_graph_event:272 - Triggered transition abort (complete=0, tag=lrm_rsc_op, id=ocfs2data:0_start_0, magic=0:5;54:33:0:e6c42e56-088a-4674-b420-201efc520279, cib=0.88.29) : Event failed
Feb 24 20:44:13 xen2 crmd: [5537]: info: match_graph_event: Action ocfs2data:0_start_0 (54) confirmed on xen2 (rc=4)
Feb 24 20:44:13 xen2 attrd: [5535]: info: find_hash_entry: Creating hash entry for fail-count-ocfs2data:0
Feb 24 20:44:13 xen2 attrd: [5535]: info: attrd_trigger_update: Sending flush op to all hosts for: fail-count-ocfs2data:0 (INFINITY)
Feb 24 20:44:13 xen2 crmd: [5537]: WARN: status_from_rc: Action 62 (ocfs2-config:0_start_0) on xen2 failed (target: 0 vs. rc: 5): Error
Feb 24 20:44:13 xen2 crmd: [5537]: WARN: update_failcount: Updating failcount for ocfs2-config:0 on xen2 after failed start: rc=5 (update=INFINITY, time=1267040653)
Feb 24 20:44:13 xen2 crmd: [5537]: info: abort_transition_graph: match_graph_event:272 - Triggered transition abort (complete=0, tag=lrm_rsc_op, id=ocfs2-config:0_start_0, magic=0:5;62:33:0:e6c42e56-088a-4674-b420-201efc520279, cib=0.88.30) : Event failed
Feb 24 20:44:13 xen2 crmd: [5537]: info: match_graph_event: Action ocfs2-config:0_start_0 (62) confirmed on xen2 (rc=4)
Feb 24 20:44:13 xen2 attrd: [5535]: info: attrd_perform_update: Sent update 45: fail-count-ocfs2data:0=INFINITY
Feb 24 20:44:13 xen2 crmd: [5537]: WARN: status_from_rc: Action 46 (o2cb:0_start_0) on xen2 failed (target: 0 vs. rc: 1): Error
Feb 24 20:44:13 xen2 attrd: [5535]: info: find_hash_entry: Creating hash entry for last-failure-ocfs2data:0
Feb 24 20:44:13 xen2 crmd: [5537]: WARN: update_failcount: Updating failcount for o2cb:0 on xen2 after failed start: rc=1 (update=INFINITY, time=1267040653)
Feb 24 20:44:13 xen2 attrd: [5535]: info: attrd_trigger_update: Sending flush op to all hosts for: last-failure-ocfs2data:0 (1267040653)
Feb 24 20:44:13 xen2 crmd: [5537]: info: abort_transition_graph: match_graph_event:272 - Triggered transition abort (complete=0, tag=lrm_rsc_op, id=o2cb:0_start_0, magic=0:1;46:33:0:e6c42e56-088a-4674-b420-201efc520279, cib=0.88.31) : Event failed
Feb 24 20:44:13 xen2 crmd: [5537]: info: match_graph_event: Action o2cb:0_start_0 (46) confirmed on xen2 (rc=4)
Feb 24 20:44:13 xen2 crmd: [5537]: info: abort_transition_graph: te_update_diff:146 - Triggered transition abort (complete=0, tag=transient_attributes, id=xen2, magic=NA, cib=0.88.32) : Transient attribute: update
Feb 24 20:44:13 xen2 crmd: [5537]: info: te_pseudo_action: Pseudo action 51 fired and confirmed
Feb 24 20:44:13 xen2 crmd: [5537]: info: te_pseudo_action: Pseudo action 59 fired and confirmed
Feb 24 20:44:13 xen2 attrd: [5535]: info: attrd_perform_update: Sent update 48: last-failure-ocfs2data:0=1267040653
Feb 24 20:44:13 xen2 crmd: [5537]: info: te_pseudo_action: Pseudo action 67 fired and confirmed
Feb 24 20:44:13 xen2 attrd: [5535]: info: find_hash_entry: Creating hash entry for fail-count-ocfs2-config:0
Feb 24 20:44:13 xen2 attrd: [5535]: info: attrd_trigger_update: Sending flush op to all hosts for: fail-count-ocfs2-config:0 (INFINITY)
Feb 24 20:44:13 xen2 crmd: [5537]: info: abort_transition_graph: te_update_diff:146 - Triggered transition abort (complete=0, tag=transient_attributes, id=xen2, magic=NA, cib=0.88.33) : Transient attribute: update
Feb 24 20:44:13 xen2 crmd: [5537]: info: run_graph: ====================================================
Feb 24 20:44:13 xen2 crmd: [5537]: notice: run_graph: Transition 33 (Complete=37, Pending=0, Fired=0, Skipped=25, Incomplete=3, Source=/var/lib/pengine/pe-warn-36.bz2): Stopped
Feb 24 20:44:13 xen2 crmd: [5537]: info: te_graph_trigger: Transition 33 is now complete
Feb 24 20:44:13 xen2 crmd: [5537]: info: do_state_transition: State transition S_TRANSITION_ENGINE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=notify_crmd ]
Feb 24 20:44:13 xen2 crmd: [5537]: info: do_state_transition: All 2 cluster nodes are eligible to run resources.
Feb 24 20:44:13 xen2 attrd: [5535]: info: attrd_perform_update: Sent update 51: fail-count-ocfs2-config:0=INFINITY
Feb 24 20:44:13 xen2 crmd: [5537]: info: do_pe_invoke: Query 103: Requesting the current CIB: S_POLICY_ENGINE
Feb 24 20:44:13 xen2 attrd: [5535]: info: find_hash_entry: Creating hash entry for last-failure-ocfs2-config:0
Feb 24 20:44:13 xen2 attrd: [5535]: info: attrd_trigger_update: Sending flush op to all hosts for: last-failure-ocfs2-config:0 (1267040653)
Feb 24 20:44:13 xen2 crmd: [5537]: info: abort_transition_graph: te_update_diff:146 - Triggered transition abort (complete=1, tag=transient_attributes, id=xen2, magic=NA, cib=0.88.34) : Transient attribute: update
Feb 24 20:44:13 xen2 attrd: [5535]: info: attrd_perform_update: Sent update 54: last-failure-ocfs2-config:0=1267040653
Feb 24 20:44:13 xen2 attrd: [5535]: info: find_hash_entry: Creating hash entry for fail-count-o2cb:0
Feb 24 20:44:13 xen2 attrd: [5535]: info: attrd_trigger_update: Sending flush op to all hosts for: fail-count-o2cb:0 (INFINITY)
Feb 24 20:44:13 xen2 attrd: [5535]: info: attrd_perform_update: Sent update 57: fail-count-o2cb:0=INFINITY
Feb 24 20:44:13 xen2 attrd: [5535]: info: find_hash_entry: Creating hash entry for last-failure-o2cb:0
Feb 24 20:44:13 xen2 attrd: [5535]: info: attrd_trigger_update: Sending flush op to all hosts for: last-failure-o2cb:0 (1267040653)
Feb 24 20:44:13 xen2 attrd: [5535]: info: attrd_perform_update: Sent update 60: last-failure-o2cb:0=1267040653
Feb 24 20:44:13 xen2 crmd: [5537]: info: do_pe_invoke_callback: Invoking the PE: ref=pe_calc-dc-1267040653-86, seq=128, quorate=1
Feb 24 20:44:13 xen2 crmd: [5537]: info: abort_transition_graph: te_update_diff:146 - Triggered transition abort (complete=1, tag=transient_attributes, id=xen2, magic=NA, cib=0.88.35) : Transient attribute: update
Feb 24 20:44:13 xen2 crmd: [5537]: info: abort_transition_graph: te_update_diff:146 - Triggered transition abort (complete=1, tag=transient_attributes, id=xen2, magic=NA, cib=0.88.36) : Transient attribute: update
Feb 24 20:44:13 xen2 crmd: [5537]: info: abort_transition_graph: te_update_diff:146 - Triggered transition abort (complete=1, tag=transient_attributes, id=xen2, magic=NA, cib=0.88.37) : Transient attribute: update
Feb 24 20:44:13 xen2 crmd: [5537]: info: do_pe_invoke: Query 104: Requesting the current CIB: S_POLICY_ENGINE
Feb 24 20:44:13 xen2 crmd: [5537]: info: do_pe_invoke: Query 105: Requesting the current CIB: S_POLICY_ENGINE
Feb 24 20:44:13 xen2 crmd: [5537]: info: do_pe_invoke: Query 106: Requesting the current CIB: S_POLICY_ENGINE
Feb 24 20:44:13 xen2 crmd: [5537]: info: do_pe_invoke: Query 107: Requesting the current CIB: S_POLICY_ENGINE
Feb 24 20:44:13 xen2 pengine: [5536]: notice: unpack_config: On loss of CCM Quorum: Ignore
Feb 24 20:44:13 xen2 pengine: [5536]: info: unpack_config: Node scores: 'red' = -INFINITY, 'yellow' = 0, 'green' = 0
Feb 24 20:44:13 xen2 pengine: [5536]: info: determine_online_status: Node xen2 is online
Feb 24 20:44:13 xen2 pengine: [5536]: info: unpack_rsc_op: sbd:0_start_0 on xen2 returned 1 (unknown error) instead of the expected value: 0 (ok)
Feb 24 20:44:13 xen2 pengine: [5536]: WARN: unpack_rsc_op: Processing failed op sbd:0_start_0 on xen2: unknown error
Feb 24 20:44:13 xen2 pengine: [5536]: info: unpack_rsc_op: o2cb:0_start_0 on xen2 returned 1 (unknown error) instead of the expected value: 0 (ok)
Feb 24 20:44:13 xen2 pengine: [5536]: WARN: unpack_rsc_op: Processing failed op o2cb:0_start_0 on xen2: unknown error
Feb 24 20:44:13 xen2 pengine: [5536]: info: unpack_rsc_op: ocfs2data:0_start_0 on xen2 returned 5 (not installed) instead of the expected value: 0 (ok)
Feb 24 20:44:13 xen2 pengine: [5536]: ERROR: unpack_rsc_op: Hard error - ocfs2data:0_start_0 failed with rc=5: Preventing ocfs2-data from re-starting on xen2
Feb 24 20:44:13 xen2 pengine: [5536]: WARN: unpack_rsc_op: Processing failed op ocfs2data:0_start_0 on xen2: not installed
Feb 24 20:44:13 xen2 pengine: [5536]: info: unpack_rsc_op: ocfs2-config:0_start_0 on xen2 returned 5 (not installed) instead of the expected value: 0 (ok)
Feb 24 20:44:13 xen2 pengine: [5536]: ERROR: unpack_rsc_op: Hard error - ocfs2-config:0_start_0 failed with rc=5: Preventing ocfs2-config-clone from re-starting on xen2
Feb 24 20:44:13 xen2 pengine: [5536]: WARN: unpack_rsc_op: Processing failed op ocfs2-config:0_start_0 on xen2: not installed
Feb 24 20:44:13 xen2 pengine: [5536]: info: determine_online_status: Node xen1 is online
Feb 24 20:44:13 xen2 pengine: [5536]: info: unpack_rsc_op: o2cb:1_start_0 on xen1 returned 1 (unknown error) instead of the expected value: 0 (ok)
Feb 24 20:44:13 xen2 pengine: [5536]: WARN: unpack_rsc_op: Processing failed op o2cb:1_start_0 on xen1: unknown error
Feb 24 20:44:13 xen2 pengine: [5536]: info: unpack_rsc_op: ocfs2-config:1_start_0 on xen1 returned 5 (not installed) instead of the expected value: 0 (ok)
Feb 24 20:44:13 xen2 pengine: [5536]: ERROR: unpack_rsc_op: Hard error - ocfs2-config:1_start_0 failed with rc=5: Preventing ocfs2-config-clone from re-starting on xen1
Feb 24 20:44:13 xen2 pengine: [5536]: WARN: unpack_rsc_op: Processing failed op ocfs2-config:1_start_0 on xen1: not installed
Feb 24 20:44:13 xen2 pengine: [5536]: info: unpack_rsc_op: ocfs2data:1_start_0 on xen1 returned 5 (not installed) instead of the expected value: 0 (ok)
Feb 24 20:44:13 xen2 pengine: [5536]: ERROR: unpack_rsc_op: Hard error - ocfs2data:1_start_0 failed with rc=5: Preventing ocfs2-data from re-starting on xen1
Feb 24 20:44:13 xen2 pengine: [5536]: WARN: unpack_rsc_op: Processing failed op ocfs2data:1_start_0 on xen1: not installed
Feb 24 20:44:13 xen2 crmd: [5537]: info: do_pe_invoke_callback: Invoking the PE: ref=pe_calc-dc-1267040653-87, seq=128, quorate=1
Feb 24 20:44:13 xen2 pengine: [5536]: ERROR: unpack_simple_rsc_order: Constraint DlmBeforeO2cb: no resource found for LHS (dlm)
Feb 24 20:44:13 xen2 pengine: [5536]: notice: native_print: DRAC-xen1 (stonith:external/drac5): Stopped
Feb 24 20:44:13 xen2 pengine: [5536]: notice: native_print: DRAC-xen2 (stonith:external/drac5): Stopped
Feb 24 20:44:13 xen2 pengine: [5536]: notice: clone_print: Clone Set: sbd-clone
Feb 24 20:44:13 xen2 pengine: [5536]: notice: print_list: Stopped: [ sbd:0 sbd:1 ]
Feb 24 20:44:13 xen2 pengine: [5536]: notice: clone_print: Clone Set: dlm-clone
Feb 24 20:44:13 xen2 pengine: [5536]: notice: print_list: Stopped: [ dlm:0 dlm:1 ]
Feb 24 20:44:13 xen2 pengine: [5536]: notice: clone_print: Clone Set: clvm-clone
Feb 24 20:44:13 xen2 pengine: [5536]: notice: print_list: Stopped: [ clvm:0 clvm:1 ]
Feb 24 20:44:13 xen2 pengine: [5536]: notice: native_print: LVMforVM1 (ocf::heartbeat:LVM): Stopped
Feb 24 20:44:13 xen2 pengine: [5536]: notice: native_print: LVMforVM2 (ocf::heartbeat:LVM): Stopped
Feb 24 20:44:13 xen2 pengine: [5536]: notice: native_print: vm1 (ocf::heartbeat:Xen): Stopped
Feb 24 20:44:13 xen2 pengine: [5536]: notice: clone_print: Clone Set: o2cb-clone (unique)
Feb 24 20:44:13 xen2 pengine: [5536]: notice: native_print: o2cb:0 (ocf::ocfs2:o2cb): Started xen2 FAILED
Feb 24 20:44:13 xen2 pengine: [5536]: notice: native_print: o2cb:1 (ocf::ocfs2:o2cb): Started xen1 FAILED
Feb 24 20:44:13 xen2 pengine: [5536]: notice: clone_print: Clone Set: ocfs2-data
Feb 24 20:44:13 xen2 pengine: [5536]: notice: native_print: ocfs2data:0 (ocf::heartbeat:Filesystem): Started xen2 FAILED
Feb 24 20:44:13 xen2 pengine: [5536]: notice: native_print: ocfs2data:1 (ocf::heartbeat:Filesystem): Started xen1 FAILED
Feb 24 20:44:13 xen2 pengine: [5536]: notice: clone_print: Clone Set: ocfs2-config-clone
Feb 24 20:44:13 xen2 pengine: [5536]: notice: native_print: ocfs2-config:0 (ocf::heartbeat:Filesystem): Started xen2 FAILED
Feb 24 20:44:13 xen2 pengine: [5536]: notice: native_print: ocfs2-config:1 (ocf::heartbeat:Filesystem): Started xen1 FAILED
Feb 24 20:44:13 xen2 pengine: [5536]: info: get_failcount: sbd-clone has failed 1000000 times on xen2
Feb 24 20:44:13 xen2 pengine: [5536]: WARN: common_apply_stickiness: Forcing sbd-clone away from xen2 after 1000000 failures (max=1000000)
Feb 24 20:44:13 xen2 pengine: [5536]: info: get_failcount: ocfs2-data has failed 1000000 times on xen2
Feb 24 20:44:13 xen2 pengine: [5536]: WARN: common_apply_stickiness: Forcing ocfs2-data away from xen2 after 1000000 failures (max=1000000)
Feb 24 20:44:13 xen2 pengine: [5536]: info: get_failcount: ocfs2-config-clone has failed 1000000 times on xen2
Feb 24 20:44:13 xen2 pengine: [5536]: WARN: common_apply_stickiness: Forcing ocfs2-config-clone away from xen2 after 1000000 failures (max=1000000)
Feb 24 20:44:13 xen2 pengine: [5536]: info: get_failcount: o2cb:1 has failed 1000000 times on xen1
Feb 24 20:44:13 xen2 pengine: [5536]: WARN: common_apply_stickiness: Forcing o2cb:1 away from xen1 after 1000000 failures (max=1000000)
Feb 24 20:44:13 xen2 pengine: [5536]: info: get_failcount: ocfs2-data has failed 1000000 times on xen1
Feb 24 20:44:13 xen2 pengine: [5536]: WARN: common_apply_stickiness: Forcing ocfs2-data away from xen1 after 1000000 failures (max=1000000)
Feb 24 20:44:13 xen2 pengine: [5536]: info: get_failcount: ocfs2-config-clone has failed 1000000 times on xen1
Feb 24 20:44:13 xen2 pengine: [5536]: WARN: common_apply_stickiness: Forcing ocfs2-config-clone away from xen1 after 1000000 failures (max=1000000)
Feb 24 20:44:13 xen2 pengine: [5536]: WARN: native_color: Resource DRAC-xen1 cannot run anywhere
Feb 24 20:44:13 xen2 pengine: [5536]: WARN: native_color: Resource DRAC-xen2 cannot run anywhere
Feb 24 20:44:13 xen2 pengine: [5536]: WARN: native_color: Resource sbd:1 cannot run anywhere
Feb 24 20:44:13 xen2 pengine: [5536]: WARN: native_color: Resource o2cb:1 cannot run anywhere
Feb 24 20:44:13 xen2 pengine: [5536]: WARN: native_color: Resource ocfs2data:0 cannot run anywhere
Feb 24 20:44:13 xen2 pengine: [5536]: WARN: native_color: Resource ocfs2data:1 cannot run anywhere
Feb 24 20:44:13 xen2 pengine: [5536]: WARN: native_color: Resource ocfs2-config:0 cannot run anywhere
Feb 24 20:44:13 xen2 pengine: [5536]: WARN: native_color: Resource ocfs2-config:1 cannot run anywhere
Feb 24 20:44:13 xen2 pengine: [5536]: notice: RecurringOp: Start recurring monitor (15s) for sbd:0 on xen1
Feb 24 20:44:13 xen2 pengine: [5536]: notice: RecurringOp: Start recurring monitor (10s) for dlm:0 on xen2
Feb 24 20:44:13 xen2 pengine: [5536]: notice: RecurringOp: Start recurring monitor (10s) for dlm:1 on xen1
Feb 24 20:44:13 xen2 pengine: [5536]: notice: RecurringOp: Start recurring monitor (10s) for clvm:0 on xen2
Feb 24 20:44:13 xen2 pengine: [5536]: notice: RecurringOp: Start recurring monitor (10s) for clvm:1 on xen1
Feb 24 20:44:13 xen2 pengine: [5536]: notice: RecurringOp: Start recurring monitor (10s) for LVMforVM1 on xen2
Feb 24 20:44:13 xen2 pengine: [5536]: notice: RecurringOp: Start recurring monitor (10s) for LVMforVM2 on xen1
Feb 24 20:44:13 xen2 pengine: [5536]: notice: RecurringOp: Start recurring monitor (10s) for vm1 on xen2
Feb 24 20:44:13 xen2 pengine: [5536]: notice: RecurringOp: Start recurring monitor (120s) for o2cb:0 on xen2
Feb 24 20:44:13 xen2 pengine: [5536]: info: find_compatible_child: Colocating clvm:0 with dlm:0 on xen2
Feb 24 20:44:13 xen2 pengine: [5536]: info: find_compatible_child: Colocating clvm:1 with dlm:1 on xen1
Feb 24 20:44:13 xen2 pengine: [5536]: notice: LogActions: Leave resource DRAC-xen1 (Stopped)
Feb 24 20:44:13 xen2 pengine: [5536]: notice: LogActions: Leave resource DRAC-xen2 (Stopped)
Feb 24 20:44:13 xen2 pengine: [5536]: notice: LogActions: Start sbd:0 (xen1)
Feb 24 20:44:13 xen2 pengine: [5536]: notice: LogActions: Leave resource sbd:1 (Stopped)
Feb 24 20:44:13 xen2 pengine: [5536]: notice: LogActions: Start dlm:0 (xen2)
Feb 24 20:44:13 xen2 pengine: [5536]: notice: LogActions: Start dlm:1 (xen1)
Feb 24 20:44:13 xen2 pengine: [5536]: notice: LogActions: Start clvm:0 (xen2)
Feb 24 20:44:13 xen2 pengine: [5536]: notice: LogActions: Start clvm:1 (xen1)
Feb 24 20:44:13 xen2 pengine: [5536]: notice: LogActions: Start LVMforVM1 (xen2)
Feb 24 20:44:13 xen2 pengine: [5536]: notice: LogActions: Start LVMforVM2 (xen1)
Feb 24 20:44:13 xen2 pengine: [5536]: notice: LogActions: Start vm1 (xen2)
Feb 24 20:44:13 xen2 pengine: [5536]: notice: LogActions: Recover resource o2cb:0 (Started xen2)
Feb 24 20:44:13 xen2 pengine: [5536]: notice: LogActions: Stop resource o2cb:1 (xen1)
Feb 24 20:44:13 xen2 pengine: [5536]: notice: LogActions: Stop resource ocfs2data:0 (xen2)
Feb 24 20:44:13 xen2 pengine: [5536]: notice: LogActions: Stop resource ocfs2data:1 (xen1)
Feb 24 20:44:13 xen2 pengine: [5536]: notice: LogActions: Stop resource ocfs2-config:0 (xen2)
Feb 24 20:44:13 xen2 pengine: [5536]: notice: LogActions: Stop resource ocfs2-config:1 (xen1)
Feb 24 20:44:13 xen2 crmd: [5537]: info: handle_response: pe_calc calculation pe_calc-dc-1267040653-86 is obsolete
Feb 24 20:44:13 xen2 pengine: [5536]: WARN: process_pe_message: Transition 34: WARNINGs found during PE processing. PEngine Input stored in: /var/lib/pengine/pe-warn-37.bz2
Feb 24 20:44:13 xen2 pengine: [5536]: info: process_pe_message: Configuration ERRORs found during PE processing. Please run "crm_verify -L" to identify issues.
Feb 24 20:44:14 xen2 pengine: [5536]: notice: unpack_config: On loss of CCM Quorum: Ignore
Feb 24 20:44:14 xen2 pengine: [5536]: info: unpack_config: Node scores: 'red' = -INFINITY, 'yellow' = 0, 'green' = 0
Feb 24 20:44:14 xen2 pengine: [5536]: info: determine_online_status: Node xen2 is online
Feb 24 20:44:14 xen2 pengine: [5536]: info: unpack_rsc_op: sbd:0_start_0 on xen2 returned 1 (unknown error) instead of the expected value: 0 (ok)
Feb 24 20:44:14 xen2 pengine: [5536]: WARN: unpack_rsc_op: Processing failed op sbd:0_start_0 on xen2: unknown error
Feb 24 20:44:14 xen2 pengine: [5536]: info: unpack_rsc_op: o2cb:0_start_0 on xen2 returned 1 (unknown error) instead of the expected value: 0 (ok)
Feb 24 20:44:14 xen2 pengine: [5536]: WARN: unpack_rsc_op: Processing failed op o2cb:0_start_0 on xen2: unknown error
Feb 24 20:44:14 xen2 pengine: [5536]: info: unpack_rsc_op: ocfs2data:0_start_0 on xen2 returned 5 (not installed) instead of the expected value: 0 (ok)
Feb 24 20:44:14 xen2 pengine: [5536]: ERROR: unpack_rsc_op: Hard error - ocfs2data:0_start_0 failed with rc=5: Preventing ocfs2-data from re-starting on xen2
Feb 24 20:44:14 xen2 pengine: [5536]: WARN: unpack_rsc_op: Processing failed op ocfs2data:0_start_0 on xen2: not installed
Feb 24 20:44:14 xen2 pengine: [5536]: info: unpack_rsc_op: ocfs2-config:0_start_0 on xen2 returned 5 (not installed) instead of the expected value: 0 (ok)
Feb 24 20:44:14 xen2 pengine: [5536]: ERROR: unpack_rsc_op: Hard error - ocfs2-config:0_start_0 failed with rc=5: Preventing ocfs2-config-clone from re-starting on xen2
Feb 24 20:44:14 xen2 pengine: [5536]: WARN: unpack_rsc_op: Processing failed op ocfs2-config:0_start_0 on xen2: not installed
Feb 24 20:44:14 xen2 pengine: [5536]: info: determine_online_status: Node xen1 is online
Feb 24 20:44:14 xen2 pengine: [5536]: info: unpack_rsc_op: o2cb:1_start_0 on xen1 returned 1 (unknown error) instead of the expected value: 0 (ok)
Feb 24 20:44:14 xen2 pengine: [5536]: WARN: unpack_rsc_op: Processing failed op o2cb:1_start_0 on xen1: unknown error
Feb 24 20:44:14 xen2 pengine: [5536]: info: unpack_rsc_op: ocfs2-config:1_start_0 on xen1 returned 5 (not installed) instead of the expected value: 0 (ok)
Feb 24 20:44:14 xen2 pengine: [5536]: ERROR: unpack_rsc_op: Hard error - ocfs2-config:1_start_0 failed with rc=5: Preventing ocfs2-config-clone from re-starting on xen1
Feb 24 20:44:14 xen2 pengine: [5536]: WARN: unpack_rsc_op: Processing failed op ocfs2-config:1_start_0 on xen1: not installed
Feb 24 20:44:14 xen2 pengine: [5536]: info: unpack_rsc_op: ocfs2data:1_start_0 on xen1 returned 5 (not installed) instead of the expected value: 0 (ok)
Feb 24 20:44:14 xen2 pengine: [5536]: ERROR: unpack_rsc_op: Hard error - ocfs2data:1_start_0 failed with rc=5: Preventing ocfs2-data from re-starting on xen1
Feb 24 20:44:14 xen2 pengine: [5536]: WARN: unpack_rsc_op: Processing failed op ocfs2data:1_start_0 on xen1: not installed
Feb 24 20:44:14 xen2 pengine: [5536]: ERROR: unpack_simple_rsc_order: Constraint DlmBeforeO2cb: no resource found for LHS (dlm)
Feb 24 20:44:14 xen2 pengine: [5536]: notice: native_print: DRAC-xen1 (stonith:external/drac5): Stopped
Feb 24 20:44:14 xen2 pengine: [5536]: notice: native_print: DRAC-xen2 (stonith:external/drac5): Stopped
Feb 24 20:44:14 xen2 pengine: [5536]: notice: clone_print: Clone Set: sbd-clone
Feb 24 20:44:14 xen2 pengine: [5536]: notice: print_list: Stopped: [ sbd:0 sbd:1 ]
Feb 24 20:44:14 xen2 pengine: [5536]: notice: clone_print: Clone Set: dlm-clone
Feb 24 20:44:14 xen2 pengine: [5536]: notice: print_list: Stopped: [ dlm:0 dlm:1 ]
Feb 24 20:44:14 xen2 pengine: [5536]: notice: clone_print: Clone Set: clvm-clone
Feb 24 20:44:14 xen2 pengine: [5536]: notice: print_list: Stopped: [ clvm:0 clvm:1 ]
Feb 24 20:44:14 xen2 pengine: [5536]: notice: native_print: LVMforVM1 (ocf::heartbeat:LVM): Stopped
Feb 24 20:44:14 xen2 pengine: [5536]: notice: native_print: LVMforVM2 (ocf::heartbeat:LVM): Stopped
Feb 24 20:44:14 xen2 pengine: [5536]: notice: native_print: vm1 (ocf::heartbeat:Xen): Stopped
Feb 24 20:44:14 xen2 pengine: [5536]: notice: clone_print: Clone Set: o2cb-clone (unique)
Feb 24 20:44:14 xen2 pengine: [5536]: notice: native_print: o2cb:0 (ocf::ocfs2:o2cb): Started xen2 FAILED
Feb 24 20:44:14 xen2 pengine: [5536]: notice: native_print: o2cb:1 (ocf::ocfs2:o2cb): Started xen1 FAILED
Feb 24 20:44:14 xen2 pengine: [5536]: notice: clone_print: Clone Set: ocfs2-data
Feb 24 20:44:14 xen2 pengine: [5536]: notice: native_print: ocfs2data:0 (ocf::heartbeat:Filesystem): Started xen2 FAILED
Feb 24 20:44:14 xen2 pengine: [5536]: notice: native_print: ocfs2data:1 (ocf::heartbeat:Filesystem): Started xen1 FAILED
Feb 24 20:44:14 xen2 pengine: [5536]: notice: clone_print: Clone Set: ocfs2-config-clone
Feb 24 20:44:14 xen2 pengine: [5536]: notice: native_print: ocfs2-config:0 (ocf::heartbeat:Filesystem): Started xen2 FAILED
Feb 24 20:44:14 xen2 pengine: [5536]: notice: native_print: ocfs2-config:1 (ocf::heartbeat:Filesystem): Started xen1 FAILED
Feb 24 20:44:14 xen2 pengine: [5536]: info: get_failcount: sbd-clone has failed 1000000 times on xen2
Feb 24 20:44:14 xen2 pengine: [5536]: WARN: common_apply_stickiness: Forcing sbd-clone away from xen2 after 1000000 failures (max=1000000)
Feb 24 20:44:14 xen2 pengine: [5536]: info: get_failcount: o2cb:0 has failed 1000000 times on xen2
Feb 24 20:44:14 xen2 pengine: [5536]: WARN: common_apply_stickiness: Forcing o2cb:0 away from xen2 after 1000000 failures (max=1000000)
Feb 24 20:44:14 xen2 pengine: [5536]: info: get_failcount: ocfs2-data has failed 1000000 times on xen2
Feb 24 20:44:14 xen2 pengine: [5536]: WARN: common_apply_stickiness: Forcing ocfs2-data away from xen2 after 1000000 failures (max=1000000)
Feb 24 20:44:14 xen2 pengine: [5536]: info: get_failcount: ocfs2-config-clone has failed 1000000 times on xen2
Feb 24 20:44:14 xen2 pengine: [5536]: WARN: common_apply_stickiness: Forcing ocfs2-config-clone away from xen2 after 1000000 failures (max=1000000)
Feb 24 20:44:14 xen2 pengine: [5536]: info: get_failcount: o2cb:1 has failed 1000000 times on xen1
Feb 24 20:44:14 xen2 pengine: [5536]: WARN: common_apply_stickiness: Forcing o2cb:1 away from xen1 after 1000000 failures (max=1000000)
Feb 24 20:44:14 xen2 pengine: [5536]: info: get_failcount: ocfs2-data has failed 1000000 times on xen1
Feb 24 20:44:14 xen2 pengine: [5536]: WARN: common_apply_stickiness: Forcing ocfs2-data away from xen1 after 1000000 failures (max=1000000)
Feb 24 20:44:14 xen2 pengine: [5536]: info: get_failcount: ocfs2-config-clone has failed 1000000 times on xen1
Feb 24 20:44:14 xen2 pengine: [5536]: WARN: common_apply_stickiness: Forcing ocfs2-config-clone away from xen1 after 1000000 failures (max=1000000)
Feb 24 20:44:14 xen2 pengine: [5536]: WARN: native_color: Resource DRAC-xen1 cannot run anywhere
Feb 24 20:44:14 xen2 pengine: [5536]: WARN: native_color: Resource DRAC-xen2 cannot run anywhere
Feb 24 20:44:14 xen2 pengine: [5536]: WARN: native_color: Resource sbd:1 cannot run anywhere
Feb 24 20:44:14 xen2 pengine: [5536]: WARN: native_color: Resource ocfs2data:0 cannot run anywhere
Feb 24 20:44:14 xen2 pengine: [5536]: WARN: native_color: Resource ocfs2data:1 cannot run anywhere
Feb 24 20:44:14 xen2 pengine: [5536]: WARN: native_color: Resource ocfs2-config:0 cannot run anywhere
Feb 24 20:44:14 xen2 pengine: [5536]: WARN: native_color: Resource ocfs2-config:1 cannot run anywhere
Feb 24 20:44:14 xen2 pengine: [5536]: notice: RecurringOp: Start recurring monitor (15s) for sbd:0 on xen1
Feb 24 20:44:14 xen2 pengine: [5536]: notice: RecurringOp: Start recurring monitor (10s) for dlm:0 on xen2
Feb 24 20:44:14 xen2 pengine: [5536]: notice: RecurringOp: Start recurring monitor (10s) for dlm:1 on xen1
Feb 24 20:44:14 xen2 pengine: [5536]: notice: RecurringOp: Start recurring monitor (10s) for clvm:0 on xen2
Feb 24 20:44:14 xen2 pengine: [5536]: notice: RecurringOp: Start recurring monitor (10s) for clvm:1 on xen1
Feb 24 20:44:14 xen2 pengine: [5536]: notice: RecurringOp: Start recurring monitor (10s) for LVMforVM1 on xen2
Feb 24 20:44:14 xen2 pengine: [5536]: notice: RecurringOp: Start recurring monitor (10s) for LVMforVM2 on xen1
Feb 24 20:44:14 xen2 pengine: [5536]: notice: RecurringOp: Start recurring monitor (10s) for vm1 on xen2
Feb 24 20:44:14 xen2 pengine: [5536]: notice: RecurringOp: Start recurring monitor (120s) for o2cb:0 on xen1
Feb 24 20:44:14 xen2 pengine: [5536]: notice: RecurringOp: Start recurring monitor (120s) for o2cb:1 on xen2
Feb 24 20:44:14 xen2 pengine: [5536]: info: find_compatible_child: Colocating clvm:0 with dlm:0 on xen2
Feb 24 20:44:14 xen2 pengine: [5536]: info: find_compatible_child: Colocating clvm:1 with dlm:1 on xen1
Feb 24 20:44:14 xen2 pengine: [5536]: notice: LogActions: Leave resource DRAC-xen1 (Stopped)
Feb 24 20:44:14 xen2 pengine: [5536]: notice: LogActions: Leave resource DRAC-xen2 (Stopped)
Feb 24 20:44:14 xen2 pengine: [5536]: notice: LogActions: Start sbd:0 (xen1)
Feb 24 20:44:14 xen2 pengine: [5536]: notice: LogActions: Leave resource sbd:1 (Stopped)
Feb 24 20:44:14 xen2 pengine: [5536]: notice: LogActions: Start dlm:0 (xen2)
Feb 24 20:44:14 xen2 pengine: [5536]: notice: LogActions: Start dlm:1 (xen1)
Feb 24 20:44:14 xen2 pengine: [5536]: notice: LogActions: Start clvm:0 (xen2)
Feb 24 20:44:14 xen2 pengine: [5536]: notice: LogActions: Start clvm:1 (xen1)
Feb 24 20:44:14 xen2 pengine: [5536]: notice: LogActions: Start LVMforVM1 (xen2)
Feb 24 20:44:14 xen2 pengine: [5536]: notice: LogActions: Start LVMforVM2 (xen1)
Feb 24 20:44:14 xen2 pengine: [5536]: notice: LogActions: Start vm1 (xen2)
Feb 24 20:44:14 xen2 pengine: [5536]: notice: LogActions: Move resource o2cb:0 (Started xen2 -> xen1)
Feb 24 20:44:14 xen2 pengine: [5536]: notice: LogActions: Move resource o2cb:1 (Started xen1 -> xen2)
Feb 24 20:44:14 xen2 pengine: [5536]: notice: LogActions: Stop resource ocfs2data:0 (xen2)
Feb 24 20:44:14 xen2 pengine: [5536]: notice: LogActions: Stop resource ocfs2data:1 (xen1)
Feb 24 20:44:14 xen2 pengine: [5536]: notice: LogActions: Stop resource ocfs2-config:0 (xen2)
Feb 24 20:44:14 xen2 pengine: [5536]: notice: LogActions: Stop resource ocfs2-config:1 (xen1)
Feb 24 20:44:14 xen2 crmd: [5537]: info: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
Feb 24 20:44:14 xen2 crmd: [5537]: info: unpack_graph: Unpacked transition 35: 41 actions in 41 synapses
Feb 24 20:44:14 xen2 crmd: [5537]: info: do_te_invoke: Processing graph 35 (ref=pe_calc-dc-1267040653-87) derived from /var/lib/pengine/pe-warn-38.bz2
Feb 24 20:44:14 xen2 crmd: [5537]: info: te_pseudo_action: Pseudo action 13 fired and confirmed
Feb 24 20:44:14 xen2 crmd: [5537]: info: te_pseudo_action: Pseudo action 45 fired and confirmed
Feb 24 20:44:14 xen2 crmd: [5537]: info: te_pseudo_action: Pseudo action 49 fired and confirmed
Feb 24 20:44:14 xen2 crmd: [5537]: info: te_pseudo_action: Pseudo action 53 fired and confirmed
Feb 24 20:44:14 xen2 crmd: [5537]: info: te_rsc_command: Initiating action 11: start sbd:0_start_0 on xen1
Feb 24 20:44:14 xen2 crmd: [5537]: info: te_rsc_command: Initiating action 1: stop o2cb:0_stop_0 on xen2 (local)
Feb 24 20:44:14 xen2 crmd: [5537]: info: do_lrm_rsc_op: Performing key=1:35:0:e6c42e56-088a-4674-b420-201efc520279 op=o2cb:0_stop_0 )
Feb 24 20:44:14 xen2 lrmd: [5534]: info: rsc:o2cb:0:19: stop
Feb 24 20:44:14 xen2 crmd: [5537]: info: te_rsc_command: Initiating action 4: stop o2cb:1_stop_0 on xen1
Feb 24 20:44:14 xen2 crmd: [5537]: info: te_rsc_command: Initiating action 2: stop ocfs2data:0_stop_0 on xen2 (local)
Feb 24 20:44:14 xen2 crmd: [5537]: info: do_lrm_rsc_op: Performing key=2:35:0:e6c42e56-088a-4674-b420-201efc520279 op=ocfs2data:0_stop_0 )
Feb 24 20:44:14 xen2 lrmd: [5534]: info: rsc:ocfs2data:0:20: stop
Feb 24 20:44:14 xen2 crmd: [5537]: info: te_rsc_command: Initiating action 6: stop ocfs2data:1_stop_0 on xen1
Feb 24 20:44:14 xen2 crmd: [5537]: info: te_rsc_command: Initiating action 3: stop ocfs2-config:0_stop_0 on xen2 (local)
Feb 24 20:44:14 xen2 crmd: [5537]: info: do_lrm_rsc_op: Performing key=3:35:0:e6c42e56-088a-4674-b420-201efc520279 op=ocfs2-config:0_stop_0 )
Feb 24 20:44:14 xen2 lrmd: [5534]: info: rsc:ocfs2-config:0:21: stop
Feb 24 20:44:14 xen2 crmd: [5537]: info: te_rsc_command: Initiating action 5: stop ocfs2-config:1_stop_0 on xen1
Feb 24 20:44:14 xen2 crmd: [5537]: info: match_graph_event: Action o2cb:1_stop_0 (4) confirmed on xen1 (rc=0)
Feb 24 20:44:14 xen2 lrmd: [5534]: info: RA output: (ocfs2-config:0:stop:stderr) logd is not running
Feb 24 20:44:14 xen2 Filesystem[6118]: WARNING: Couldn't find device [/dev/disk/by-id/scsi-149455400000000000000000002000000240600000f000000]. Expected /dev/??? to exist
Feb 24 20:44:14 xen2 lrmd: [5534]: info: RA output: (ocfs2data:0:stop:stderr) logd is not running
Feb 24 20:44:14 xen2 lrmd: [5534]: info: RA output: (o2cb:0:stop:stderr) logd is not running
Feb 24 20:44:14 xen2 Filesystem[6117]: WARNING: Couldn't find device [/dev/disk/by-id/scsi-149455400000000000000000002000000140600000f000000]. Expected /dev/??? to exist
Feb 24 20:44:14 xen2 o2cb[6116]: INFO: configfs not mounted
Feb 24 20:44:14 xen2 lrmd: [5534]: info: RA output: (ocfs2-config:0:stop:stderr) 2010/02/24_20:44:14 WARNING: Couldn't find device [/dev/disk/by-id/scsi-149455400000000000000000002000000240600000f000000]. Expected /dev/??? to exist
Feb 24 20:44:14 xen2 lrmd: [5534]: info: RA output: (ocfs2data:0:stop:stderr) 2010/02/24_20:44:14 WARNING: Couldn't find device [/dev/disk/by-id/scsi-149455400000000000000000002000000140600000f000000]. Expected /dev/??? to exist
Feb 24 20:44:14 xen2 lrmd: [5534]: info: RA output: (o2cb:0:stop:stderr) 2010/02/24_20:44:14 INFO: configfs not mounted
Feb 24 20:44:14 xen2 crmd: [5537]: info: process_lrm_event: LRM operation o2cb:0_stop_0 (call=19, rc=0, cib-update=108, confirmed=true) ok
Feb 24 20:44:14 xen2 pengine: [5536]: WARN: process_pe_message: Transition 35: WARNINGs found during PE processing. PEngine Input stored in: /var/lib/pengine/pe-warn-38.bz2
Feb 24 20:44:14 xen2 pengine: [5536]: info: process_pe_message: Configuration ERRORs found during PE processing. Please run "crm_verify -L" to identify issues.
Feb 24 20:44:14 xen2 crmd: [5537]: info: match_graph_event: Action o2cb:0_stop_0 (1) confirmed on xen2 (rc=0)
Feb 24 20:44:14 xen2 crmd: [5537]: info: te_pseudo_action: Pseudo action 46 fired and confirmed
Feb 24 20:44:14 xen2 crmd: [5537]: info: te_pseudo_action: Pseudo action 43 fired and confirmed
Feb 24 20:44:14 xen2 crmd: [5537]: info: te_rsc_command: Initiating action 39: start o2cb:0_start_0 on xen1
Feb 24 20:44:14 xen2 crmd: [5537]: info: te_rsc_command: Initiating action 41: start o2cb:1_start_0 on xen2 (local)
Feb 24 20:44:14 xen2 crmd: [5537]: info: do_lrm_rsc_op: Performing key=41:35:0:e6c42e56-088a-4674-b420-201efc520279 op=o2cb:1_start_0 )
Feb 24 20:44:14 xen2 lrmd: [5534]: info: rsc:o2cb:1:22: start
Feb 24 20:44:14 xen2 lrmd: [5534]: info: RA output: (ocfs2-config:0:stop:stderr) logd is not running
Feb 24 20:44:14 xen2 lrmd: [5534]: info: RA output: (ocfs2data:0:stop:stderr) logd is not running
Feb 24 20:44:14 xen2 Filesystem[6118]: INFO: Running stop for /dev/disk/by-id/scsi-149455400000000000000000002000000240600000f000000 on /etc/xen/vm
Feb 24 20:44:14 xen2 Filesystem[6117]: INFO: Running stop for /dev/disk/by-id/scsi-149455400000000000000000002000000140600000f000000 on /var/lib/xen/images
Feb 24 20:44:14 xen2 lrmd: [5534]: info: RA output: (ocfs2-config:0:stop:stderr) 2010/02/24_20:44:14 INFO: Running stop for /dev/disk/by-id/scsi-149455400000000000000000002000000240600000f000000 on /etc/xen/vm
Feb 24 20:44:14 xen2 lrmd: [5534]: info: RA output: (ocfs2data:0:stop:stderr) 2010/02/24_20:44:14 INFO: Running stop for /dev/disk/by-id/scsi-149455400000000000000000002000000140600000f000000 on /var/lib/xen/images
Feb 24 20:44:14 xen2 crmd: [5537]: info: match_graph_event: Action ocfs2data:1_stop_0 (6) confirmed on xen1 (rc=0)
Feb 24 20:44:14 xen2 o2cb[6196]: INFO: configfs not mounted
Feb 24 20:44:14 xen2 lrmd: [5534]: info: RA output: (ocfs2data:0:stop:stderr) /dev/disk/by-id/scsi-149455400000000000000000002000000140600000f000000: No such file or directory
Feb 24 20:44:14 xen2 crmd: [5537]: info: match_graph_event: Action ocfs2-config:1_stop_0 (5) confirmed on xen1 (rc=0)
Feb 24 20:44:14 xen2 lrmd: [5534]: info: RA output: (ocfs2-config:0:stop:stderr) /dev/disk/by-id/scsi-149455400000000000000000002000000240600000f000000: No such file or directory
Feb 24 20:44:14 xen2 crmd: [5537]: info: process_lrm_event: LRM operation ocfs2data:0_stop_0 (call=20, rc=0, cib-update=109, confirmed=true) ok
Feb 24 20:44:14 xen2 crmd: [5537]: info: process_lrm_event: LRM operation ocfs2-config:0_stop_0 (call=21, rc=0, cib-update=110, confirmed=true) ok
Feb 24 20:44:14 xen2 lrmd: [5534]: info: RA output: (o2cb:1:start:stderr) 2010/02/24_20:44:14 INFO: configfs not mounted
Feb 24 20:44:14 xen2 crmd: [5537]: info: match_graph_event: Action ocfs2data:0_stop_0 (2) confirmed on xen2 (rc=0)
Feb 24 20:44:14 xen2 crmd: [5537]: info: te_pseudo_action: Pseudo action 50 fired and confirmed
Feb 24 20:44:14 xen2 lrmd: [5534]: info: RA output: (o2cb:1:start:stderr) logd is not running
Feb 24 20:44:14 xen2 crmd: [5537]: info: match_graph_event: Action ocfs2-config:0_stop_0 (3) confirmed on xen2 (rc=0)
Feb 24 20:44:14 xen2 o2cb[6196]: INFO: Starting o2cb:1
Feb 24 20:44:14 xen2 crmd: [5537]: info: te_pseudo_action: Pseudo action 54 fired and confirmed
Feb 24 20:44:14 xen2 crmd: [5537]: info: te_pseudo_action: Pseudo action 7 fired and confirmed
Feb 24 20:44:14 xen2 lrmd: [5534]: info: RA output: (o2cb:1:start:stderr) 2010/02/24_20:44:14 INFO: Starting o2cb:1
Feb 24 20:44:14 xen2 crmd: [5537]: WARN: status_from_rc: Action 11 (sbd:0_start_0) on xen1 failed (target: 0 vs. rc: 1): Error
Feb 24 20:44:14 xen2 crmd: [5537]: WARN: update_failcount: Updating failcount for sbd:0 on xen1 after failed start: rc=1 (update=INFINITY, time=1267040654)
Feb 24 20:44:14 xen2 crmd: [5537]: info: abort_transition_graph: match_graph_event:272 - Triggered transition abort (complete=0, tag=lrm_rsc_op, id=sbd:0_start_0, magic=0:1;11:35:0:e6c42e56-088a-4674-b420-201efc520279, cib=0.88.44) : Event failed
Feb 24 20:44:14 xen2 crmd: [5537]: info: update_abort_priority: Abort priority upgraded from 0 to 1
Feb 24 20:44:14 xen2 crmd: [5537]: info: update_abort_priority: Abort action done superceeded by restart
Feb 24 20:44:14 xen2 crmd: [5537]: info: match_graph_event: Action sbd:0_start_0 (11) confirmed on xen1 (rc=4)
Feb 24 20:44:14 xen2 crmd: [5537]: info: te_pseudo_action: Pseudo action 14 fired and confirmed
Feb 24 20:44:14 xen2 ocfs2_controld.pcmk: Core dumps enabled: /var/lib/openais
Feb 24 20:44:14 xen2 lrmd: [5534]: info: RA output: (o2cb:1:start:stderr) /usr/sbin/ocfs2_controld.pcmk: Unable to access cluster service
Feb 24 20:44:14 xen2 lrmd: [5534]: info: RA output: (o2cb:1:start:stderr) while trying to initialize o2cb
Feb 24 20:44:14 xen2 lrmd: [5534]: info: RA output: (o2cb:1:start:stderr) logd is not running
Feb 24 20:44:14 xen2 o2cb[6196]: ERROR: Could not start /usr/sbin/ocfs2_controld.pcmk
Feb 24 20:44:14 xen2 crmd: [5537]: WARN: status_from_rc: Action 39 (o2cb:0_start_0) on xen1 failed (target: 0 vs. rc: 1): Error
Feb 24 20:44:14 xen2 crmd: [5537]: WARN: update_failcount: Updating failcount for o2cb:0 on xen1 after failed start: rc=1 (update=INFINITY, time=1267040654)
Feb 24 20:44:14 xen2 crmd: [5537]: info: abort_transition_graph: match_graph_event:272 - Triggered transition abort (complete=0, tag=lrm_rsc_op, id=o2cb:0_start_0, magic=0:1;39:35:0:e6c42e56-088a-4674-b420-201efc520279, cib=0.88.45) : Event failed
Feb 24 20:44:14 xen2 crmd: [5537]: info: match_graph_event: Action o2cb:0_start_0 (39) confirmed on xen1 (rc=4)
Feb 24 20:44:14 xen2 lrmd: [5534]: info: RA output: (o2cb:1:start:stderr) 2010/02/24_20:44:14 ERROR: Could not start /usr/sbin/ocfs2_controld.pcmk
Feb 24 20:44:14 xen2 crmd: [5537]: info: process_lrm_event: LRM operation o2cb:1_start_0 (call=22, rc=1, cib-update=111, confirmed=true) unknown error
Feb 24 20:44:14 xen2 crmd: [5537]: info: abort_transition_graph: te_update_diff:146 - Triggered transition abort (complete=0, tag=transient_attributes, id=xen1, magic=NA, cib=0.88.46) : Transient attribute: update
Feb 24 20:44:15 xen2 crmd: [5537]: info: update_abort_priority: Abort priority upgraded from 1 to 1000000
Feb 24 20:44:15 xen2 crmd: [5537]: info: update_abort_priority: 'Event failed' abort superceeded
Feb 24 20:44:15 xen2 crmd: [5537]: info: abort_transition_graph: te_update_diff:146 - Triggered transition abort (complete=0, tag=transient_attributes, id=xen1, magic=NA, cib=0.88.47) : Transient attribute: update
Feb 24 20:44:15 xen2 crmd: [5537]: WARN: status_from_rc: Action 41 (o2cb:1_start_0) on xen2 failed (target: 0 vs. rc: 1): Error
Feb 24 20:44:15 xen2 crmd: [5537]: WARN: update_failcount: Updating failcount for o2cb:1 on xen2 after failed start: rc=1 (update=INFINITY, time=1267040655)
Feb 24 20:44:15 xen2 crmd: [5537]: info: abort_transition_graph: match_graph_event:272 - Triggered transition abort (complete=0, tag=lrm_rsc_op, id=o2cb:1_start_0, magic=0:1;41:35:0:e6c42e56-088a-4674-b420-201efc520279, cib=0.88.48) : Event failed
Feb 24 20:44:15 xen2 crmd: [5537]: info: match_graph_event: Action o2cb:1_start_0 (41) confirmed on xen2 (rc=4)
Feb 24 20:44:15 xen2 crmd: [5537]: info: te_pseudo_action: Pseudo action 44 fired and confirmed
Feb 24 20:44:15 xen2 crmd: [5537]: info: run_graph: ====================================================
Feb 24 20:44:15 xen2 crmd: [5537]: notice: run_graph: Transition 35 (Complete=20, Pending=0, Fired=0, Skipped=19, Incomplete=2, Source=/var/lib/pengine/pe-warn-38.bz2): Stopped
Feb 24 20:44:15 xen2 attrd: [5535]: info: attrd_trigger_update: Sending flush op to all hosts for: fail-count-o2cb:1 (INFINITY)
Feb 24 20:44:15 xen2 crmd: [5537]: info: te_graph_trigger: Transition 35 is now complete
Feb 24 20:44:15 xen2 crmd: [5537]: info: do_state_transition: State transition S_TRANSITION_ENGINE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=notify_crmd ]
Feb 24 20:44:15 xen2 crmd: [5537]: info: do_state_transition: All 2 cluster nodes are eligible to run resources.
Feb 24 20:44:15 xen2 crmd: [5537]: info: do_pe_invoke: Query 112: Requesting the current CIB: S_POLICY_ENGINE
Feb 24 20:44:15 xen2 crmd: [5537]: info: abort_transition_graph: te_update_diff:146 - Triggered transition abort (complete=1, tag=transient_attributes, id=xen1, magic=NA, cib=0.88.49) : Transient attribute: update
Feb 24 20:44:15 xen2 crmd: [5537]: info: abort_transition_graph: te_update_diff:146 - Triggered transition abort (complete=1, tag=transient_attributes, id=xen1, magic=NA, cib=0.88.50) : Transient attribute: update
Feb 24 20:44:15 xen2 crmd: [5537]: info: do_pe_invoke: Query 113: Requesting the current CIB: S_POLICY_ENGINE
Feb 24 20:44:15 xen2 crmd: [5537]: info: do_pe_invoke: Query 114: Requesting the current CIB: S_POLICY_ENGINE
Feb 24 20:44:15 xen2 attrd: [5535]: info: attrd_perform_update: Sent update 67: fail-count-o2cb:1=INFINITY
Feb 24 20:44:15 xen2 attrd: [5535]: info: attrd_trigger_update: Sending flush op to all hosts for: last-failure-o2cb:1 (1267040655)
Feb 24 20:44:15 xen2 crmd: [5537]: info: do_pe_invoke_callback: Invoking the PE: ref=pe_calc-dc-1267040655-97, seq=128, quorate=1
Feb 24 20:44:15 xen2 crmd: [5537]: info: abort_transition_graph: te_update_diff:146 - Triggered transition abort (complete=1, tag=transient_attributes, id=xen2, magic=NA, cib=0.88.51) : Transient attribute: update
Feb 24 20:44:15 xen2 crmd: [5537]: info: do_pe_invoke: Query 115: Requesting the current CIB: S_POLICY_ENGINE
Feb 24 20:44:15 xen2 pengine: [5536]: notice: unpack_config: On loss of CCM Quorum: Ignore
Feb 24 20:44:15 xen2 pengine: [5536]: info: unpack_config: Node scores: 'red' = -INFINITY, 'yellow' = 0, 'green' = 0
Feb 24 20:44:15 xen2 pengine: [5536]: info: determine_online_status: Node xen2 is online
Feb 24 20:44:15 xen2 pengine: [5536]: info: unpack_rsc_op: sbd:0_start_0 on xen2 returned 1 (unknown error) instead of the expected value: 0 (ok)
Feb 24 20:44:15 xen2 pengine: [5536]: WARN: unpack_rsc_op: Processing failed op sbd:0_start_0 on xen2: unknown error
Feb 24 20:44:15 xen2 pengine: [5536]: info: unpack_rsc_op: o2cb:0_start_0 on xen2 returned 1 (unknown error) instead of the expected value: 0 (ok)
Feb 24 20:44:15 xen2 pengine: [5536]: WARN: unpack_rsc_op: Processing failed op o2cb:0_start_0 on xen2: unknown error
Feb 24 20:44:15 xen2 pengine: [5536]: info: unpack_rsc_op: o2cb:1_start_0 on xen2 returned 1 (unknown error) instead of the expected value: 0 (ok)
Feb 24 20:44:15 xen2 pengine: [5536]: WARN: unpack_rsc_op: Processing failed op o2cb:1_start_0 on xen2: unknown error
Feb 24 20:44:15 xen2 pengine: [5536]: info: unpack_rsc_op: ocfs2data:0_start_0 on xen2 returned 5 (not installed) instead of the expected value: 0 (ok)
Feb 24 20:44:15 xen2 pengine: [5536]: ERROR: unpack_rsc_op: Hard error - ocfs2data:0_start_0 failed with rc=5: Preventing ocfs2-data from re-starting on xen2
Feb 24 20:44:15 xen2 pengine: [5536]: WARN: unpack_rsc_op: Processing failed op ocfs2data:0_start_0 on xen2: not installed
Feb 24 20:44:15 xen2 pengine: [5536]: info: unpack_rsc_op: ocfs2-config:0_start_0 on xen2 returned 5 (not installed) instead of the expected value: 0 (ok)
Feb 24 20:44:15 xen2 pengine: [5536]: ERROR: unpack_rsc_op: Hard error - ocfs2-config:0_start_0 failed with rc=5: Preventing ocfs2-config-clone from re-starting on xen2
Feb 24 20:44:15 xen2 pengine: [5536]: WARN: unpack_rsc_op: Processing failed op ocfs2-config:0_start_0 on xen2: not installed
Feb 24 20:44:15 xen2 pengine: [5536]: info: determine_online_status: Node xen1 is online
Feb 24 20:44:15 xen2 pengine: [5536]: info: unpack_rsc_op: o2cb:0_start_0 on xen1 returned 1 (unknown error) instead of the expected value: 0 (ok)
Feb 24 20:44:15 xen2 pengine: [5536]: WARN: unpack_rsc_op: Processing failed op o2cb:0_start_0 on xen1: unknown error
Feb 24 20:44:15 xen2 pengine: [5536]: info: unpack_rsc_op: o2cb:1_start_0 on xen1 returned 1 (unknown error) instead of the expected value: 0 (ok)
Feb 24 20:44:15 xen2 pengine: [5536]: WARN: unpack_rsc_op: Processing failed op o2cb:1_start_0 on xen1: unknown error
Feb 24 20:44:15 xen2 pengine: [5536]: info: unpack_rsc_op: ocfs2-config:1_start_0 on xen1 returned 5 (not installed) instead of the expected value: 0 (ok)
Feb 24 20:44:15 xen2 pengine: [5536]: ERROR: unpack_rsc_op: Hard error - ocfs2-config:1_start_0 failed with rc=5: Preventing ocfs2-config-clone from re-starting on xen1
Feb 24 20:44:15 xen2 attrd: [5535]: info: attrd_perform_update: Sent update 70: last-failure-o2cb:1=1267040655
Feb 24 20:44:15 xen2 crmd: [5537]: info: do_pe_invoke_callback: Invoking the PE: ref=pe_calc-dc-1267040655-98, seq=128, quorate=1
Feb 24 20:44:15 xen2 pengine: [5536]: WARN: unpack_rsc_op: Processing failed op ocfs2-config:1_start_0 on xen1: not installed
Feb 24 20:44:15 xen2 crmd: [5537]: info: abort_transition_graph: te_update_diff:146 - Triggered transition abort (complete=1, tag=transient_attributes, id=xen2, magic=NA, cib=0.88.52) : Transient attribute: update
Feb 24 20:44:15 xen2 pengine: [5536]: info: unpack_rsc_op: ocfs2data:1_start_0 on xen1 returned 5 (not installed) instead of the expected value: 0 (ok)
Feb 24 20:44:15 xen2 crmd: [5537]: info: do_pe_invoke: Query 116: Requesting the current CIB: S_POLICY_ENGINE
Feb 24 20:44:15 xen2 pengine: [5536]: ERROR: unpack_rsc_op: Hard error - ocfs2data:1_start_0 failed with rc=5: Preventing ocfs2-data from re-starting on xen1
Feb 24 20:44:15 xen2 pengine: [5536]: WARN: unpack_rsc_op: Processing failed op ocfs2data:1_start_0 on xen1: not installed
Feb 24 20:44:15 xen2 pengine: [5536]: info: unpack_rsc_op: sbd:0_start_0 on xen1 returned 1 (unknown error) instead of the expected value: 0 (ok)
Feb 24 20:44:15 xen2 pengine: [5536]: WARN: unpack_rsc_op: Processing failed op sbd:0_start_0 on xen1: unknown error
Feb 24 20:44:15 xen2 crmd: [5537]: info: do_pe_invoke_callback: Invoking the PE: ref=pe_calc-dc-1267040655-99, seq=128, quorate=1
Feb 24 20:44:15 xen2 pengine: [5536]: ERROR: unpack_simple_rsc_order: Constraint DlmBeforeO2cb: no resource found for LHS (dlm)
Feb 24 20:44:15 xen2 pengine: [5536]: notice: native_print: DRAC-xen1 (stonith:external/drac5): Stopped
Feb 24 20:44:15 xen2 pengine: [5536]: notice: native_print: DRAC-xen2 (stonith:external/drac5): Stopped
Feb 24 20:44:15 xen2 pengine: [5536]: notice: clone_print: Clone Set: sbd-clone
Feb 24 20:44:15 xen2 pengine: [5536]: notice: native_print: sbd:0 (stonith:external/sbd): Started xen1 FAILED
Feb 24 20:44:15 xen2 pengine: [5536]: notice: print_list: Stopped: [ sbd:1 ]
Feb 24 20:44:15 xen2 pengine: [5536]: notice: clone_print: Clone Set: dlm-clone
Feb 24 20:44:15 xen2 pengine: [5536]: notice: print_list: Stopped: [ dlm:0 dlm:1 ]
Feb 24 20:44:15 xen2 pengine: [5536]: notice: clone_print: Clone Set: clvm-clone
Feb 24 20:44:15 xen2 pengine: [5536]: notice: print_list: Stopped: [ clvm:0 clvm:1 ]
Feb 24 20:44:15 xen2 pengine: [5536]: notice: native_print: LVMforVM1 (ocf::heartbeat:LVM): Stopped
Feb 24 20:44:15 xen2 pengine: [5536]: notice: native_print: LVMforVM2 (ocf::heartbeat:LVM): Stopped
Feb 24 20:44:15 xen2 pengine: [5536]: notice: native_print: vm1 (ocf::heartbeat:Xen): Stopped
Feb 24 20:44:15 xen2 pengine: [5536]: notice: clone_print: Clone Set: o2cb-clone (unique)
Feb 24 20:44:15 xen2 pengine: [5536]: notice: native_print: o2cb:0 (ocf::ocfs2:o2cb): Started xen1 FAILED
Feb 24 20:44:15 xen2 pengine: [5536]: notice: native_print: o2cb:1 (ocf::ocfs2:o2cb): Started xen2 FAILED
Feb 24 20:44:15 xen2 pengine: [5536]: notice: clone_print: Clone Set: ocfs2-data
Feb 24 20:44:15 xen2 pengine: [5536]: notice: print_list: Stopped: [ ocfs2data:0 ocfs2data:1 ]
Feb 24 20:44:15 xen2 pengine: [5536]: notice: clone_print: Clone Set: ocfs2-config-clone
Feb 24 20:44:15 xen2 pengine: [5536]: notice: print_list: Stopped: [ ocfs2-config:0 ocfs2-config:1 ]
Feb 24 20:44:15 xen2 pengine: [5536]: info: get_failcount: sbd-clone has failed 1000000 times on xen2
Feb 24 20:44:15 xen2 pengine: [5536]: WARN: common_apply_stickiness: Forcing sbd-clone away from xen2 after 1000000 failures (max=1000000)
Feb 24 20:44:15 xen2 pengine: [5536]: info: get_failcount: o2cb:0 has failed 1000000 times on xen2
Feb 24 20:44:15 xen2 pengine: [5536]: WARN: common_apply_stickiness: Forcing o2cb:0 away from xen2 after 1000000 failures (max=1000000)
Feb 24 20:44:15 xen2 pengine: [5536]: info: get_failcount: ocfs2-data has failed 1000000 times on xen2
Feb 24 20:44:15 xen2 pengine: [5536]: WARN: common_apply_stickiness: Forcing ocfs2-data away from xen2 after 1000000 failures (max=1000000)
Feb 24 20:44:15 xen2 pengine: [5536]: info: get_failcount: ocfs2-config-clone has failed 1000000 times on xen2
Feb 24 20:44:15 xen2 pengine: [5536]: WARN: common_apply_stickiness: Forcing ocfs2-config-clone away from xen2 after 1000000 failures (max=1000000)
Feb 24 20:44:15 xen2 pengine: [5536]: info: get_failcount: sbd-clone has failed 1000000 times on xen1
Feb 24 20:44:15 xen2 pengine: [5536]: WARN: common_apply_stickiness: Forcing sbd-clone away from xen1 after 1000000 failures (max=1000000)
Feb 24 20:44:15 xen2 pengine: [5536]: info: get_failcount: o2cb:0 has failed 1000000 times on xen1
Feb 24 20:44:15 xen2 pengine: [5536]: WARN: common_apply_stickiness: Forcing o2cb:0 away from xen1 after 1000000 failures (max=1000000)
Feb 24 20:44:15 xen2 pengine: [5536]: info: get_failcount: o2cb:1 has failed 1000000 times on xen1
Feb 24 20:44:15 xen2 pengine: [5536]: WARN: common_apply_stickiness: Forcing o2cb:1 away from xen1 after 1000000 failures (max=1000000)
Feb 24 20:44:15 xen2 pengine: [5536]: info: get_failcount: ocfs2-data has failed 1000000 times on xen1
Feb 24 20:44:15 xen2 pengine: [5536]: WARN: common_apply_stickiness: Forcing ocfs2-data away from xen1 after 1000000 failures (max=1000000)
Feb 24 20:44:15 xen2 pengine: [5536]: info: get_failcount: ocfs2-config-clone has failed 1000000 times on xen1
Feb 24 20:44:15 xen2 pengine: [5536]: WARN: common_apply_stickiness: Forcing ocfs2-config-clone away from xen1 after 1000000 failures (max=1000000)
Feb 24 20:44:15 xen2 pengine: [5536]: WARN: native_color: Resource DRAC-xen1 cannot run anywhere
Feb 24 20:44:15 xen2 pengine: [5536]: WARN: native_color: Resource DRAC-xen2 cannot run anywhere
Feb 24 20:44:15 xen2 pengine: [5536]: WARN: native_color: Resource sbd:1 cannot run anywhere
Feb 24 20:44:15 xen2 pengine: [5536]: WARN: native_color: Resource sbd:0 cannot run anywhere
Feb 24 20:44:15 xen2 pengine: [5536]: WARN: native_color: Resource o2cb:0 cannot run anywhere
Feb 24 20:44:15 xen2 pengine: [5536]: WARN: native_color: Resource ocfs2data:0 cannot run anywhere
Feb 24 20:44:15 xen2 pengine: [5536]: WARN: native_color: Resource ocfs2data:1 cannot run anywhere
Feb 24 20:44:15 xen2 pengine: [5536]: WARN: native_color: Resource ocfs2-config:0 cannot run anywhere
Feb 24 20:44:15 xen2 pengine: [5536]: WARN: native_color: Resource ocfs2-config:1 cannot run anywhere
Feb 24 20:44:15 xen2 pengine: [5536]: notice: RecurringOp: Start recurring monitor (10s) for dlm:0 on xen2
Feb 24 20:44:15 xen2 pengine: [5536]: notice: RecurringOp: Start recurring monitor (10s) for dlm:1 on xen1
Feb 24 20:44:15 xen2 pengine: [5536]: notice: RecurringOp: Start recurring monitor (10s) for clvm:0 on xen2
Feb 24 20:44:15 xen2 pengine: [5536]: notice: RecurringOp: Start recurring monitor (10s) for clvm:1 on xen1
Feb 24 20:44:15 xen2 pengine: [5536]: notice: RecurringOp: Start recurring monitor (10s) for LVMforVM1 on xen2
Feb 24 20:44:15 xen2 pengine: [5536]: notice: RecurringOp: Start recurring monitor (10s) for LVMforVM2 on xen1
Feb 24 20:44:15 xen2 pengine: [5536]: notice: RecurringOp: Start recurring monitor (10s) for vm1 on xen2
Feb 24 20:44:15 xen2 pengine: [5536]: notice: RecurringOp: Start recurring monitor (120s) for o2cb:1 on xen2
Feb 24 20:44:15 xen2 pengine: [5536]: info: find_compatible_child: Colocating clvm:0 with dlm:0 on xen2
Feb 24 20:44:15 xen2 pengine: [5536]: info: find_compatible_child: Colocating clvm:1 with dlm:1 on xen1
Feb 24 20:44:15 xen2 pengine: [5536]: notice: LogActions: Leave resource DRAC-xen1 (Stopped)
Feb 24 20:44:15 xen2 pengine: [5536]: notice: LogActions: Leave resource DRAC-xen2 (Stopped)
Feb 24 20:44:16 xen2 pengine: [5536]: notice: LogActions: Stop resource sbd:0 (xen1)
Feb 24 20:44:16 xen2 pengine: [5536]: notice: LogActions: Leave resource sbd:1 (Stopped)
Feb 24 20:44:16 xen2 pengine: [5536]: notice: LogActions: Start dlm:0 (xen2)
Feb 24 20:44:16 xen2 pengine: [5536]: notice: LogActions: Start dlm:1 (xen1)
Feb 24 20:44:16 xen2 pengine: [5536]: notice: LogActions: Start clvm:0 (xen2)
Feb 24 20:44:16 xen2 pengine: [5536]: notice: LogActions: Start clvm:1 (xen1)
Feb 24 20:44:16 xen2 pengine: [5536]: notice: LogActions: Start LVMforVM1 (xen2)
Feb 24 20:44:16 xen2 pengine: [5536]: notice: LogActions: Start LVMforVM2 (xen1)
Feb 24 20:44:16 xen2 pengine: [5536]: notice: LogActions: Start vm1 (xen2)
Feb 24 20:44:16 xen2 pengine: [5536]: notice: LogActions: Stop resource o2cb:0 (xen1)
Feb 24 20:44:16 xen2 pengine: [5536]: notice: LogActions: Recover resource o2cb:1 (Started xen2)
Feb 24 20:44:16 xen2 pengine: [5536]: notice: LogActions: Leave resource ocfs2data:0 (Stopped)
Feb 24 20:44:16 xen2 pengine: [5536]: notice: LogActions: Leave resource ocfs2data:1 (Stopped)
Feb 24 20:44:16 xen2 pengine: [5536]: notice: LogActions: Leave resource ocfs2-config:0 (Stopped)
Feb 24 20:44:16 xen2 pengine: [5536]: notice: LogActions: Leave resource ocfs2-config:1 (Stopped)
Feb 24 20:44:16 xen2 crmd: [5537]: info: handle_response: pe_calc calculation pe_calc-dc-1267040655-97 is obsolete
Feb 24 20:44:16 xen2 pengine: [5536]: WARN: process_pe_message: Transition 36: WARNINGs found during PE processing. PEngine Input stored in: /var/lib/pengine/pe-warn-39.bz2
Feb 24 20:44:16 xen2 pengine: [5536]: info: process_pe_message: Configuration ERRORs found during PE processing. Please run "crm_verify -L" to identify issues.
Feb 24 20:44:16 xen2 pengine: [5536]: notice: unpack_config: On loss of CCM Quorum: Ignore
Feb 24 20:44:16 xen2 pengine: [5536]: info: unpack_config: Node scores: 'red' = -INFINITY, 'yellow' = 0, 'green' = 0
Feb 24 20:44:16 xen2 pengine: [5536]: info: determine_online_status: Node xen2 is online
Feb 24 20:44:16 xen2 pengine: [5536]: info: unpack_rsc_op: sbd:0_start_0 on xen2 returned 1 (unknown error) instead of the expected value: 0 (ok)
Feb 24 20:44:16 xen2 pengine: [5536]: WARN: unpack_rsc_op: Processing failed op sbd:0_start_0 on xen2: unknown error
Feb 24 20:44:16 xen2 pengine: [5536]: info: unpack_rsc_op: o2cb:0_start_0 on xen2 returned 1 (unknown error) instead of the expected value: 0 (ok)
Feb 24 20:44:16 xen2 pengine: [5536]: WARN: unpack_rsc_op: Processing failed op o2cb:0_start_0 on xen2: unknown error
Feb 24 20:44:16 xen2 pengine: [5536]: info: unpack_rsc_op: o2cb:1_start_0 on xen2 returned 1 (unknown error) instead of the expected value: 0 (ok)
Feb 24 20:44:16 xen2 pengine: [5536]: WARN: unpack_rsc_op: Processing failed op o2cb:1_start_0 on xen2: unknown error
Feb 24 20:44:16 xen2 pengine: [5536]: info: unpack_rsc_op: ocfs2data:0_start_0 on xen2 returned 5 (not installed) instead of the expected value: 0 (ok)
Feb 24 20:44:16 xen2 pengine: [5536]: ERROR: unpack_rsc_op: Hard error - ocfs2data:0_start_0 failed with rc=5: Preventing ocfs2-data from re-starting on xen2
Feb 24 20:44:16 xen2 pengine: [5536]: WARN: unpack_rsc_op: Processing failed op ocfs2data:0_start_0 on xen2: not installed
Feb 24 20:44:16 xen2 pengine: [5536]: info: unpack_rsc_op: ocfs2-config:0_start_0 on xen2 returned 5 (not installed) instead of the expected value: 0 (ok)
Feb 24 20:44:16 xen2 pengine: [5536]: ERROR: unpack_rsc_op: Hard error - ocfs2-config:0_start_0 failed with rc=5: Preventing ocfs2-config-clone from re-starting on xen2
Feb 24 20:44:16 xen2 pengine: [5536]: WARN: unpack_rsc_op: Processing failed op ocfs2-config:0_start_0 on xen2: not installed
Feb 24 20:44:16 xen2 pengine: [5536]: info: determine_online_status: Node xen1 is online
Feb 24 20:44:16 xen2 pengine: [5536]: info: unpack_rsc_op: o2cb:0_start_0 on xen1 returned 1 (unknown error) instead of the expected value: 0 (ok)
Feb 24 20:44:16 xen2 pengine: [5536]: WARN: unpack_rsc_op: Processing failed op o2cb:0_start_0 on xen1: unknown error
Feb 24 20:44:16 xen2 pengine: [5536]: info: unpack_rsc_op: o2cb:1_start_0 on xen1 returned 1 (unknown error) instead of the expected value: 0 (ok)
Feb 24 20:44:16 xen2 pengine: [5536]: WARN: unpack_rsc_op: Processing failed op o2cb:1_start_0 on xen1: unknown error
Feb 24 20:44:16 xen2 pengine: [5536]: info: unpack_rsc_op: ocfs2-config:1_start_0 on xen1 returned 5 (not installed) instead of the expected value: 0 (ok)
Feb 24 20:44:16 xen2 pengine: [5536]: ERROR: unpack_rsc_op: Hard error - ocfs2-config:1_start_0 failed with rc=5: Preventing ocfs2-config-clone from re-starting on xen1
Feb 24 20:44:16 xen2 pengine: [5536]: WARN: unpack_rsc_op: Processing failed op ocfs2-config:1_start_0 on xen1: not installed
Feb 24 20:44:16 xen2 pengine: [5536]: info: unpack_rsc_op: ocfs2data:1_start_0 on xen1 returned 5 (not installed) instead of the expected value: 0 (ok)
Feb 24 20:44:16 xen2 pengine: [5536]: ERROR: unpack_rsc_op: Hard error - ocfs2data:1_start_0 failed with rc=5: Preventing ocfs2-data from re-starting on xen1
Feb 24 20:44:16 xen2 pengine: [5536]: WARN: unpack_rsc_op: Processing failed op ocfs2data:1_start_0 on xen1: not installed
Feb 24 20:44:16 xen2 pengine: [5536]: info: unpack_rsc_op: sbd:0_start_0 on xen1 returned 1 (unknown error) instead of the expected value: 0 (ok)
Feb 24 20:44:16 xen2 pengine: [5536]: WARN: unpack_rsc_op: Processing failed op sbd:0_start_0 on xen1: unknown error
Feb 24 20:44:16 xen2 pengine: [5536]: ERROR: unpack_simple_rsc_order: Constraint DlmBeforeO2cb: no resource found for LHS (dlm)
Feb 24 20:44:16 xen2 pengine: [5536]: notice: native_print: DRAC-xen1 (stonith:external/drac5): Stopped
Feb 24 20:44:16 xen2 pengine: [5536]: notice: native_print: DRAC-xen2 (stonith:external/drac5): Stopped
Feb 24 20:44:16 xen2 pengine: [5536]: notice: clone_print: Clone Set: sbd-clone
Feb 24 20:44:16 xen2 pengine: [5536]: notice: native_print: sbd:0 (stonith:external/sbd): Started xen1 FAILED
Feb 24 20:44:16 xen2 pengine: [5536]: notice: print_list: Stopped: [ sbd:1 ]
Feb 24 20:44:16 xen2 pengine: [5536]: notice: clone_print: Clone Set: dlm-clone
Feb 24 20:44:16 xen2 pengine: [5536]: notice: print_list: Stopped: [ dlm:0 dlm:1 ]
Feb 24 20:44:16 xen2 pengine: [5536]: notice: clone_print: Clone Set: clvm-clone
Feb 24 20:44:16 xen2 pengine: [5536]: notice: print_list: Stopped: [ clvm:0 clvm:1 ]
Feb 24 20:44:16 xen2 pengine: [5536]: notice: native_print: LVMforVM1 (ocf::heartbeat:LVM): Stopped
Feb 24 20:44:16 xen2 pengine: [5536]: notice: native_print: LVMforVM2 (ocf::heartbeat:LVM): Stopped
Feb 24 20:44:16 xen2 pengine: [5536]: notice: native_print: vm1 (ocf::heartbeat:Xen): Stopped
Feb 24 20:44:16 xen2 pengine: [5536]: notice: clone_print: Clone Set: o2cb-clone (unique)
Feb 24 20:44:16 xen2 pengine: [5536]: notice: native_print: o2cb:0 (ocf::ocfs2:o2cb): Started xen1 FAILED
Feb 24 20:44:16 xen2 pengine: [5536]: notice: native_print: o2cb:1 (ocf::ocfs2:o2cb): Started xen2 FAILED
Feb 24 20:44:16 xen2 pengine: [5536]: notice: clone_print: Clone Set: ocfs2-data
Feb 24 20:44:16 xen2 pengine: [5536]: notice: print_list: Stopped: [ ocfs2data:0 ocfs2data:1 ]
Feb 24 20:44:16 xen2 pengine: [5536]: notice: clone_print: Clone Set: ocfs2-config-clone
Feb 24 20:44:16 xen2 pengine: [5536]: notice: print_list: Stopped: [ ocfs2-config:0 ocfs2-config:1 ]
Feb 24 20:44:16 xen2 pengine: [5536]: info: get_failcount: sbd-clone has failed 1000000 times on xen2
Feb 24 20:44:16 xen2 pengine: [5536]: WARN: common_apply_stickiness: Forcing sbd-clone away from xen2 after 1000000 failures (max=1000000)
Feb 24 20:44:16 xen2 pengine: [5536]: info: get_failcount: o2cb:0 has failed 1000000 times on xen2
Feb 24 20:44:16 xen2 pengine: [5536]: WARN: common_apply_stickiness: Forcing o2cb:0 away from xen2 after 1000000 failures (max=1000000)
Feb 24 20:44:16 xen2 pengine: [5536]: info: get_failcount: o2cb:1 has failed 1000000 times on xen2
Feb 24 20:44:16 xen2 pengine: [5536]: WARN: common_apply_stickiness: Forcing o2cb:1 away from xen2 after 1000000 failures (max=1000000)
Feb 24 20:44:16 xen2 pengine: [5536]: info: get_failcount: ocfs2-data has failed 1000000 times on xen2
Feb 24 20:44:16 xen2 pengine: [5536]: WARN: common_apply_stickiness: Forcing ocfs2-data away from xen2 after 1000000 failures (max=1000000)
Feb 24 20:44:16 xen2 pengine: [5536]: info: get_failcount: ocfs2-config-clone has failed 1000000 times on xen2
Feb 24 20:44:16 xen2 pengine: [5536]: WARN: common_apply_stickiness: Forcing ocfs2-config-clone away from xen2 after 1000000 failures (max=1000000)
Feb 24 20:44:16 xen2 pengine: [5536]: info: get_failcount: sbd-clone has failed 1000000 times on xen1
Feb 24 20:44:16 xen2 pengine: [5536]: WARN: common_apply_stickiness: Forcing sbd-clone away from xen1 after 1000000 failures (max=1000000)
Feb 24 20:44:16 xen2 pengine: [5536]: info: get_failcount: o2cb:0 has failed 1000000 times on xen1
Feb 24 20:44:16 xen2 pengine: [5536]: WARN: common_apply_stickiness: Forcing o2cb:0 away from xen1 after 1000000 failures (max=1000000)
Feb 24 20:44:16 xen2 pengine: [5536]: info: get_failcount: o2cb:1 has failed 1000000 times on xen1
Feb 24 20:44:16 xen2 pengine: [5536]: WARN: common_apply_stickiness: Forcing o2cb:1 away from xen1 after 1000000 failures (max=1000000)
Feb 24 20:44:16 xen2 pengine: [5536]: info: get_failcount: ocfs2-data has failed 1000000 times on xen1
Feb 24 20:44:16 xen2 pengine: [5536]: WARN: common_apply_stickiness: Forcing ocfs2-data away from xen1 after 1000000 failures (max=1000000)
Feb 24 20:44:16 xen2 pengine: [5536]: info: get_failcount: ocfs2-config-clone has failed 1000000 times on xen1
Feb 24 20:44:16 xen2 pengine: [5536]: WARN: common_apply_stickiness: Forcing ocfs2-config-clone away from xen1 after 1000000 failures (max=1000000)
Feb 24 20:44:16 xen2 pengine: [5536]: WARN: native_color: Resource DRAC-xen1 cannot run anywhere
Feb 24 20:44:16 xen2 pengine: [5536]: WARN: native_color: Resource DRAC-xen2 cannot run anywhere
Feb 24 20:44:16 xen2 pengine: [5536]: WARN: native_color: Resource sbd:1 cannot run anywhere
Feb 24 20:44:16 xen2 pengine: [5536]: WARN: native_color: Resource sbd:0 cannot run anywhere
Feb 24 20:44:16 xen2 pengine: [5536]: WARN: native_color: Resource o2cb:0 cannot run anywhere
Feb 24 20:44:16 xen2 pengine: [5536]: WARN: native_color: Resource o2cb:1 cannot run anywhere
Feb 24 20:44:16 xen2 pengine: [5536]: WARN: native_color: Resource ocfs2data:0 cannot run anywhere
Feb 24 20:44:16 xen2 pengine: [5536]: WARN: native_color: Resource ocfs2data:1 cannot run anywhere
Feb 24 20:44:16 xen2 pengine: [5536]: WARN: native_color: Resource ocfs2-config:0 cannot run anywhere
Feb 24 20:44:16 xen2 pengine: [5536]: WARN: native_color: Resource ocfs2-config:1 cannot run anywhere
Feb 24 20:44:16 xen2 pengine: [5536]: notice: RecurringOp: Start recurring monitor (10s) for dlm:0 on xen2
Feb 24 20:44:16 xen2 pengine: [5536]: notice: RecurringOp: Start recurring monitor (10s) for dlm:1 on xen1
Feb 24 20:44:16 xen2 pengine: [5536]: notice: RecurringOp: Start recurring monitor (10s) for clvm:0 on xen2
Feb 24 20:44:16 xen2 pengine: [5536]: notice: RecurringOp: Start recurring monitor (10s) for clvm:1 on xen1
Feb 24 20:44:16 xen2 pengine: [5536]: notice: RecurringOp: Start recurring monitor (10s) for LVMforVM1 on xen2
Feb 24 20:44:16 xen2 pengine: [5536]: notice: RecurringOp: Start recurring monitor (10s) for LVMforVM2 on xen1
Feb 24 20:44:16 xen2 pengine: [5536]: notice: RecurringOp: Start recurring monitor (10s) for vm1 on xen2
Feb 24 20:44:16 xen2 pengine: [5536]: info: find_compatible_child: Colocating clvm:0 with dlm:0 on xen2
Feb 24 20:44:16 xen2 pengine: [5536]: info: find_compatible_child: Colocating clvm:1 with dlm:1 on xen1
Feb 24 20:44:16 xen2 pengine: [5536]: notice: LogActions: Leave resource DRAC-xen1 (Stopped)
Feb 24 20:44:16 xen2 pengine: [5536]: notice: LogActions: Leave resource DRAC-xen2 (Stopped)
Feb 24 20:44:16 xen2 pengine: [5536]: notice: LogActions: Stop resource sbd:0 (xen1)
Feb 24 20:44:17 xen2 pengine: [5536]: notice: LogActions: Leave resource sbd:1 (Stopped)
Feb 24 20:44:17 xen2 pengine: [5536]: notice: LogActions: Start dlm:0 (xen2)
Feb 24 20:44:17 xen2 pengine: [5536]: notice: LogActions: Start dlm:1 (xen1)
Feb 24 20:44:17 xen2 pengine: [5536]: notice: LogActions: Start clvm:0 (xen2)
Feb 24 20:44:17 xen2 pengine: [5536]: notice: LogActions: Start clvm:1 (xen1)
Feb 24 20:44:17 xen2 pengine: [5536]: notice: LogActions: Start LVMforVM1 (xen2)
Feb 24 20:44:17 xen2 pengine: [5536]: notice: LogActions: Start LVMforVM2 (xen1)
Feb 24 20:44:17 xen2 pengine: [5536]: notice: LogActions: Start vm1 (xen2)
Feb 24 20:44:17 xen2 pengine: [5536]: notice: LogActions: Stop resource o2cb:0 (xen1)
Feb 24 20:44:17 xen2 pengine: [5536]: notice: LogActions: Stop resource o2cb:1 (xen2)
Feb 24 20:44:17 xen2 pengine: [5536]: notice: LogActions: Leave resource ocfs2data:0 (Stopped)
Feb 24 20:44:17 xen2 pengine: [5536]: notice: LogActions: Leave resource ocfs2data:1 (Stopped)
Feb 24 20:44:17 xen2 pengine: [5536]: notice: LogActions: Leave resource ocfs2-config:0 (Stopped)
Feb 24 20:44:17 xen2 pengine: [5536]: notice: LogActions: Leave resource ocfs2-config:1 (Stopped)
Feb 24 20:44:17 xen2 crmd: [5537]: info: handle_response: pe_calc calculation pe_calc-dc-1267040655-98 is obsolete
Feb 24 20:44:17 xen2 pengine: [5536]: WARN: process_pe_message: Transition 37: WARNINGs found during PE processing. PEngine Input stored in: /var/lib/pengine/pe-warn-40.bz2
Feb 24 20:44:17 xen2 pengine: [5536]: info: process_pe_message: Configuration ERRORs found during PE processing. Please run "crm_verify -L" to identify issues.
Feb 24 20:44:17 xen2 pengine: [5536]: notice: unpack_config: On loss of CCM Quorum: Ignore
Feb 24 20:44:17 xen2 pengine: [5536]: info: unpack_config: Node scores: 'red' = -INFINITY, 'yellow' = 0, 'green' = 0
Feb 24 20:44:17 xen2 pengine: [5536]: info: determine_online_status: Node xen2 is online
Feb 24 20:44:17 xen2 pengine: [5536]: info: unpack_rsc_op: sbd:0_start_0 on xen2 returned 1 (unknown error) instead of the expected value: 0 (ok)
Feb 24 20:44:17 xen2 pengine: [5536]: WARN: unpack_rsc_op: Processing failed op sbd:0_start_0 on xen2: unknown error
Feb 24 20:44:17 xen2 pengine: [5536]: info: unpack_rsc_op: o2cb:0_start_0 on xen2 returned 1 (unknown error) instead of the expected value: 0 (ok)
Feb 24 20:44:17 xen2 pengine: [5536]: WARN: unpack_rsc_op: Processing failed op o2cb:0_start_0 on xen2: unknown error
Feb 24 20:44:17 xen2 pengine: [5536]: info: unpack_rsc_op: o2cb:1_start_0 on xen2 returned 1 (unknown error) instead of the expected value: 0 (ok)
Feb 24 20:44:17 xen2 pengine: [5536]: WARN: unpack_rsc_op: Processing failed op o2cb:1_start_0 on xen2: unknown error
Feb 24 20:44:17 xen2 pengine: [5536]: info: unpack_rsc_op: ocfs2data:0_start_0 on xen2 returned 5 (not installed) instead of the expected value: 0 (ok)
Feb 24 20:44:17 xen2 pengine: [5536]: ERROR: unpack_rsc_op: Hard error - ocfs2data:0_start_0 failed with rc=5: Preventing ocfs2-data from re-starting on xen2
Feb 24 20:44:17 xen2 pengine: [5536]: WARN: unpack_rsc_op: Processing failed op ocfs2data:0_start_0 on xen2: not installed
Feb 24 20:44:17 xen2 pengine: [5536]: info: unpack_rsc_op: ocfs2-config:0_start_0 on xen2 returned 5 (not installed) instead of the expected value: 0 (ok)
Feb 24 20:44:17 xen2 pengine: [5536]: ERROR: unpack_rsc_op: Hard error - ocfs2-config:0_start_0 failed with rc=5: Preventing ocfs2-config-clone from re-starting on xen2
Feb 24 20:44:17 xen2 pengine: [5536]: WARN: unpack_rsc_op: Processing failed op ocfs2-config:0_start_0 on xen2: not installed
Feb 24 20:44:17 xen2 pengine: [5536]: info: determine_online_status: Node xen1 is online
Feb 24 20:44:17 xen2 pengine: [5536]: info: unpack_rsc_op: o2cb:0_start_0 on xen1 returned 1 (unknown error) instead of the expected value: 0 (ok)
Feb 24 20:44:17 xen2 pengine: [5536]: WARN: unpack_rsc_op: Processing failed op o2cb:0_start_0 on xen1: unknown error
Feb 24 20:44:17 xen2 pengine: [5536]: info: unpack_rsc_op: o2cb:1_start_0 on xen1 returned 1 (unknown error) instead of the expected value: 0 (ok)
Feb 24 20:44:17 xen2 pengine: [5536]: WARN: unpack_rsc_op: Processing failed op o2cb:1_start_0 on xen1: unknown error
Feb 24 20:44:17 xen2 pengine: [5536]: info: unpack_rsc_op: ocfs2-config:1_start_0 on xen1 returned 5 (not installed) instead of the expected value: 0 (ok)
Feb 24 20:44:17 xen2 pengine: [5536]: ERROR: unpack_rsc_op: Hard error - ocfs2-config:1_start_0 failed with rc=5: Preventing ocfs2-config-clone from re-starting on xen1
Feb 24 20:44:17 xen2 pengine: [5536]: WARN: unpack_rsc_op: Processing failed op ocfs2-config:1_start_0 on xen1: not installed
Feb 24 20:44:17 xen2 pengine: [5536]: info: unpack_rsc_op: ocfs2data:1_start_0 on xen1 returned 5 (not installed) instead of the expected value: 0 (ok)
Feb 24 20:44:17 xen2 pengine: [5536]: ERROR: unpack_rsc_op: Hard error - ocfs2data:1_start_0 failed with rc=5: Preventing ocfs2-data from re-starting on xen1
Feb 24 20:44:17 xen2 pengine: [5536]: WARN: unpack_rsc_op: Processing failed op ocfs2data:1_start_0 on xen1: not installed
Feb 24 20:44:17 xen2 pengine: [5536]: info: unpack_rsc_op: sbd:0_start_0 on xen1 returned 1 (unknown error) instead of the expected value: 0 (ok)
Feb 24 20:44:17 xen2 pengine: [5536]: WARN: unpack_rsc_op: Processing failed op sbd:0_start_0 on xen1: unknown error
Feb 24 20:44:17 xen2 pengine: [5536]: ERROR: unpack_simple_rsc_order: Constraint DlmBeforeO2cb: no resource found for LHS (dlm)
Feb 24 20:44:17 xen2 pengine: [5536]: notice: native_print: DRAC-xen1 (stonith:external/drac5): Stopped
Feb 24 20:44:17 xen2 pengine: [5536]: notice: native_print: DRAC-xen2 (stonith:external/drac5): Stopped
Feb 24 20:44:17 xen2 pengine: [5536]: notice: clone_print: Clone Set: sbd-clone
Feb 24 20:44:17 xen2 pengine: [5536]: notice: native_print: sbd:0 (stonith:external/sbd): Started xen1 FAILED
Feb 24 20:44:17 xen2 pengine: [5536]: notice: print_list: Stopped: [ sbd:1 ]
Feb 24 20:44:17 xen2 pengine: [5536]: notice: clone_print: Clone Set: dlm-clone
Feb 24 20:44:17 xen2 pengine: [5536]: notice: print_list: Stopped: [ dlm:0 dlm:1 ]
Feb 24 20:44:17 xen2 pengine: [5536]: notice: clone_print: Clone Set: clvm-clone
Feb 24 20:44:17 xen2 pengine: [5536]: notice: print_list: Stopped: [ clvm:0 clvm:1 ]
Feb 24 20:44:17 xen2 pengine: [5536]: notice: native_print: LVMforVM1 (ocf::heartbeat:LVM): Stopped
Feb 24 20:44:17 xen2 pengine: [5536]: notice: native_print: LVMforVM2 (ocf::heartbeat:LVM): Stopped
Feb 24 20:44:17 xen2 pengine: [5536]: notice: native_print: vm1 (ocf::heartbeat:Xen): Stopped
Feb 24 20:44:17 xen2 pengine: [5536]: notice: clone_print: Clone Set: o2cb-clone (unique)
Feb 24 20:44:17 xen2 pengine: [5536]: notice: native_print: o2cb:0 (ocf::ocfs2:o2cb): Started xen1 FAILED
Feb 24 20:44:17 xen2 pengine: [5536]: notice: native_print: o2cb:1 (ocf::ocfs2:o2cb): Started xen2 FAILED
Feb 24 20:44:17 xen2 pengine: [5536]: notice: clone_print: Clone Set: ocfs2-data
Feb 24 20:44:17 xen2 pengine: [5536]: notice: print_list: Stopped: [ ocfs2data:0 ocfs2data:1 ]
Feb 24 20:44:17 xen2 pengine: [5536]: notice: clone_print: Clone Set: ocfs2-config-clone
Feb 24 20:44:17 xen2 pengine: [5536]: notice: print_list: Stopped: [ ocfs2-config:0 ocfs2-config:1 ]
Feb 24 20:44:17 xen2 pengine: [5536]: info: get_failcount: sbd-clone has failed 1000000 times on xen2
Feb 24 20:44:17 xen2 pengine: [5536]: WARN: common_apply_stickiness: Forcing sbd-clone away from xen2 after 1000000 failures (max=1000000)
Feb 24 20:44:17 xen2 pengine: [5536]: info: get_failcount: o2cb:0 has failed 1000000 times on xen2
Feb 24 20:44:17 xen2 pengine: [5536]: WARN: common_apply_stickiness: Forcing o2cb:0 away from xen2 after 1000000 failures (max=1000000)
Feb 24 20:44:17 xen2 pengine: [5536]: info: get_failcount: o2cb:1 has failed 1000000 times on xen2
Feb 24 20:44:17 xen2 pengine: [5536]: WARN: common_apply_stickiness: Forcing o2cb:1 away from xen2 after 1000000 failures (max=1000000)
Feb 24 20:44:17 xen2 pengine: [5536]: info: get_failcount: ocfs2-data has failed 1000000 times on xen2
Feb 24 20:44:17 xen2 pengine: [5536]: WARN: common_apply_stickiness: Forcing ocfs2-data away from xen2 after 1000000 failures (max=1000000)
Feb 24 20:44:17 xen2 pengine: [5536]: info: get_failcount: ocfs2-config-clone has failed 1000000 times on xen2
Feb 24 20:44:17 xen2 pengine: [5536]: WARN: common_apply_stickiness: Forcing ocfs2-config-clone away from xen2 after 1000000 failures (max=1000000)
Feb 24 20:44:17 xen2 pengine: [5536]: info: get_failcount: sbd-clone has failed 1000000 times on xen1
Feb 24 20:44:17 xen2 pengine: [5536]: WARN: common_apply_stickiness: Forcing sbd-clone away from xen1 after 1000000 failures (max=1000000)
Feb 24 20:44:17 xen2 pengine: [5536]: info: get_failcount: o2cb:0 has failed 1000000 times on xen1
Feb 24 20:44:17 xen2 pengine: [5536]: WARN: common_apply_stickiness: Forcing o2cb:0 away from xen1 after 1000000 failures (max=1000000)
Feb 24 20:44:17 xen2 pengine: [5536]: info: get_failcount: o2cb:1 has failed 1000000 times on xen1
Feb 24 20:44:17 xen2 pengine: [5536]: WARN: common_apply_stickiness: Forcing o2cb:1 away from xen1 after 1000000 failures (max=1000000)
Feb 24 20:44:17 xen2 pengine: [5536]: info: get_failcount: ocfs2-data has failed 1000000 times on xen1
Feb 24 20:44:17 xen2 pengine: [5536]: WARN: common_apply_stickiness: Forcing ocfs2-data away from xen1 after 1000000 failures (max=1000000)
Feb 24 20:44:17 xen2 pengine: [5536]: info: get_failcount: ocfs2-config-clone has failed 1000000 times on xen1
Feb 24 20:44:17 xen2 pengine: [5536]: WARN: common_apply_stickiness: Forcing ocfs2-config-clone away from xen1 after 1000000 failures (max=1000000)
Feb 24 20:44:17 xen2 pengine: [5536]: WARN: native_color: Resource DRAC-xen1 cannot run anywhere
Feb 24 20:44:17 xen2 pengine: [5536]: WARN: native_color: Resource DRAC-xen2 cannot run anywhere
Feb 24 20:44:17 xen2 pengine: [5536]: WARN: native_color: Resource sbd:1 cannot run anywhere
Feb 24 20:44:17 xen2 pengine: [5536]: WARN: native_color: Resource sbd:0 cannot run anywhere
Feb 24 20:44:17 xen2 pengine: [5536]: WARN: native_color: Resource o2cb:0 cannot run anywhere
Feb 24 20:44:17 xen2 pengine: [5536]: WARN: native_color: Resource o2cb:1 cannot run anywhere
Feb 24 20:44:17 xen2 pengine: [5536]: WARN: native_color: Resource ocfs2data:0 cannot run anywhere
Feb 24 20:44:17 xen2 pengine: [5536]: WARN: native_color: Resource ocfs2data:1 cannot run anywhere
Feb 24 20:44:17 xen2 pengine: [5536]: WARN: native_color: Resource ocfs2-config:0 cannot run anywhere
Feb 24 20:44:17 xen2 pengine: [5536]: WARN: native_color: Resource ocfs2-config:1 cannot run anywhere
Feb 24 20:44:17 xen2 pengine: [5536]: notice: RecurringOp: Start recurring monitor (10s) for dlm:0 on xen2
Feb 24 20:44:17 xen2 pengine: [5536]: notice: RecurringOp: Start recurring monitor (10s) for dlm:1 on xen1
Feb 24 20:44:17 xen2 pengine: [5536]: notice: RecurringOp: Start recurring monitor (10s) for clvm:0 on xen2
Feb 24 20:44:17 xen2 pengine: [5536]: notice: RecurringOp: Start recurring monitor (10s) for clvm:1 on xen1
Feb 24 20:44:17 xen2 pengine: [5536]: notice: RecurringOp: Start recurring monitor (10s) for LVMforVM1 on xen2
Feb 24 20:44:17 xen2 pengine: [5536]: notice: RecurringOp: Start recurring monitor (10s) for LVMforVM2 on xen1
Feb 24 20:44:17 xen2 pengine: [5536]: notice: RecurringOp: Start recurring monitor (10s) for vm1 on xen2
Feb 24 20:44:17 xen2 pengine: [5536]: info: find_compatible_child: Colocating clvm:0 with dlm:0 on xen2
Feb 24 20:44:17 xen2 pengine: [5536]: info: find_compatible_child: Colocating clvm:1 with dlm:1 on xen1
Feb 24 20:44:17 xen2 pengine: [5536]: notice: LogActions: Leave resource DRAC-xen1 (Stopped)
Feb 24 20:44:17 xen2 pengine: [5536]: notice: LogActions: Leave resource DRAC-xen2 (Stopped)
Feb 24 20:44:17 xen2 pengine: [5536]: notice: LogActions: Stop resource sbd:0 (xen1)
Feb 24 20:44:18 xen2 pengine: [5536]: notice: LogActions: Leave resource sbd:1 (Stopped)
Feb 24 20:44:18 xen2 pengine: [5536]: notice: LogActions: Start dlm:0 (xen2)
Feb 24 20:44:18 xen2 pengine: [5536]: notice: LogActions: Start dlm:1 (xen1)
Feb 24 20:44:18 xen2 pengine: [5536]: notice: LogActions: Start clvm:0 (xen2)
Feb 24 20:44:18 xen2 pengine: [5536]: notice: LogActions: Start clvm:1 (xen1)
Feb 24 20:44:18 xen2 pengine: [5536]: notice: LogActions: Start LVMforVM1 (xen2)
Feb 24 20:44:18 xen2 pengine: [5536]: notice: LogActions: Start LVMforVM2 (xen1)
Feb 24 20:44:18 xen2 pengine: [5536]: notice: LogActions: Start vm1 (xen2)
Feb 24 20:44:18 xen2 pengine: [5536]: notice: LogActions: Stop resource o2cb:0 (xen1)
Feb 24 20:44:18 xen2 pengine: [5536]: notice: LogActions: Stop resource o2cb:1 (xen2)
Feb 24 20:44:18 xen2 pengine: [5536]: notice: LogActions: Leave resource ocfs2data:0 (Stopped)
Feb 24 20:44:18 xen2 pengine: [5536]: notice: LogActions: Leave resource ocfs2data:1 (Stopped)
Feb 24 20:44:18 xen2 pengine: [5536]: notice: LogActions: Leave resource ocfs2-config:0 (Stopped)
Feb 24 20:44:18 xen2 pengine: [5536]: notice: LogActions: Leave resource ocfs2-config:1 (Stopped)
Feb 24 20:44:18 xen2 crmd: [5537]: info: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
Feb 24 20:44:18 xen2 crmd: [5537]: info: unpack_graph: Unpacked transition 38: 27 actions in 27 synapses
Feb 24 20:44:18 xen2 crmd: [5537]: info: do_te_invoke: Processing graph 38 (ref=pe_calc-dc-1267040655-99) derived from /var/lib/pengine/pe-warn-41.bz2
Feb 24 20:44:18 xen2 crmd: [5537]: info: te_pseudo_action: Pseudo action 19 fired and confirmed
Feb 24 20:44:18 xen2 crmd: [5537]: info: te_pseudo_action: Pseudo action 36 fired and confirmed
Feb 24 20:44:18 xen2 crmd: [5537]: info: te_pseudo_action: Pseudo action 10 fired and confirmed
Feb 24 20:44:18 xen2 crmd: [5537]: info: te_pseudo_action: Pseudo action 16 fired and confirmed
Feb 24 20:44:18 xen2 crmd: [5537]: info: te_rsc_command: Initiating action 2: stop o2cb:0_stop_0 on xen1
Feb 24 20:44:18 xen2 crmd: [5537]: info: te_rsc_command: Initiating action 1: stop o2cb:1_stop_0 on xen2 (local)
Feb 24 20:44:18 xen2 crmd: [5537]: info: do_lrm_rsc_op: Performing key=1:38:0:e6c42e56-088a-4674-b420-201efc520279 op=o2cb:1_stop_0 )
Feb 24 20:44:18 xen2 lrmd: [5534]: info: rsc:o2cb:1:23: stop
Feb 24 20:44:18 xen2 crmd: [5537]: info: te_rsc_command: Initiating action 12: start dlm:0_start_0 on xen2 (local)
Feb 24 20:44:18 xen2 crmd: [5537]: info: do_lrm_rsc_op: Performing key=12:38:0:e6c42e56-088a-4674-b420-201efc520279 op=dlm:0_start_0 )
Feb 24 20:44:18 xen2 lrmd: [5534]: info: rsc:dlm:0:24: start
Feb 24 20:44:18 xen2 crmd: [5537]: info: match_graph_event: Action o2cb:0_stop_0 (2) confirmed on xen1 (rc=0)
Feb 24 20:44:18 xen2 lrmd: [5534]: info: RA output: (dlm:0:start:stderr) dlm_controld.pcmk: no process killed
Feb 24 20:44:18 xen2 lrmd: [5534]: info: RA output: (o2cb:1:stop:stderr) logd is not running
Feb 24 20:44:18 xen2 o2cb[6273]: INFO: configfs not mounted
Feb 24 20:44:18 xen2 pengine: [5536]: WARN: process_pe_message: Transition 38: WARNINGs found during PE processing. PEngine Input stored in: /var/lib/pengine/pe-warn-41.bz2
Feb 24 20:44:18 xen2 pengine: [5536]: info: process_pe_message: Configuration ERRORs found during PE processing. Please run "crm_verify -L" to identify issues.
Feb 24 20:44:18 xen2 lrmd: [5534]: info: RA output: (o2cb:1:stop:stderr) 2010/02/24_20:44:18 INFO: configfs not mounted
Feb 24 20:44:18 xen2 crmd: [5537]: info: process_lrm_event: LRM operation o2cb:1_stop_0 (call=23, rc=0, cib-update=117, confirmed=true) ok
Feb 24 20:44:18 xen2 crmd: [5537]: info: match_graph_event: Action o2cb:1_stop_0 (1) confirmed on xen2 (rc=0)
Feb 24 20:44:18 xen2 crmd: [5537]: info: te_pseudo_action: Pseudo action 37 fired and confirmed
Feb 24 20:44:18 xen2 crmd: [5537]: info: te_pseudo_action: Pseudo action 4 fired and confirmed
Feb 24 20:44:18 xen2 crmd: [5537]: info: te_rsc_command: Initiating action 3: stop sbd:0_stop_0 on xen1
Feb 24 20:44:18 xen2 crmd: [5537]: info: match_graph_event: Action sbd:0_stop_0 (3) confirmed on xen1 (rc=0)
Feb 24 20:44:18 xen2 crmd: [5537]: info: te_pseudo_action: Pseudo action 11 fired and confirmed
Feb 24 20:44:18 xen2 cluster-dlm[6302]: main: dlm_controld 1254353244
Feb 24 20:44:18 xen2 cluster-dlm[6302]: setup_misc_devices: found /dev/misc/dlm-control minor 58
Feb 24 20:44:18 xen2 cluster-dlm[6302]: setup_misc_devices: found /dev/misc/dlm-monitor minor 57
Feb 24 20:44:18 xen2 cluster-dlm[6302]: setup_misc_devices: found /dev/misc/dlm_plock minor 56
Feb 24 20:44:18 xen2 cluster-dlm[6302]: setup_monitor: /dev/misc/dlm-monitor fd 9
Feb 24 20:44:18 xen2 cluster-dlm[6302]: update_comms_nodes: /sys/kernel/config/dlm/cluster/comms: opendir failed: 2
Feb 24 20:44:18 xen2 cluster-dlm[6302]: clear_configfs_spaces: /sys/kernel/config/dlm/cluster/spaces: opendir failed: 2
Feb 24 20:44:18 xen2 openais[5522]: [crm ] info: pcmk_notify: Enabling node notifications for child 6302 (0x7f7618000900)
Feb 24 20:44:18 xen2 cluster-dlm[6302]: setup_cpg: setup_cpg 11
Feb 24 20:44:18 xen2 cluster-dlm[6302]: set_protocol: set_protocol member_count 1 propose daemon 1.1.1 kernel 1.1.1
Feb 24 20:44:18 xen2 cluster-dlm[6302]: receive_protocol: run protocol from nodeid 112
Feb 24 20:44:18 xen2 cluster-dlm[6302]: set_protocol: daemon run 1.1.1 max 1.1.1 kernel run 1.1.1 max 1.1.1
Feb 24 20:44:18 xen2 cluster-dlm[6302]: setup_plocks: plocks 13
Feb 24 20:44:18 xen2 cluster-dlm[6302]: setup_plocks: plock cpg message size: 104 bytes
Feb 24 20:44:18 xen2 cluster-dlm[6302]: update_cluster: Processing membership 128
Feb 24 20:44:18 xen2 cluster-dlm[6302]: dlm_process_node: Adding address ip(192.168.1.110) to configfs for node 110
Feb 24 20:44:18 xen2 cluster-dlm[6302]: add_configfs_node: set_configfs_node 110 192.168.1.110 local 0
Feb 24 20:44:18 xen2 cluster-dlm[6302]: dlm_process_node: Added active node 110: born-on=128, last-seen=128, this-event=128, last-event=0
Feb 24 20:44:18 xen2 cluster-dlm[6302]: dlm_process_node: Adding address ip(192.168.1.112) to configfs for node 112
Feb 24 20:44:18 xen2 cluster-dlm[6302]: add_configfs_node: set_configfs_node 112 192.168.1.112 local 1
Feb 24 20:44:18 xen2 cluster-dlm[6302]: dlm_process_node: Added active node 112: born-on=128, last-seen=128, this-event=128, last-event=0
Feb 24 20:44:19 xen2 crmd: [5537]: info: process_lrm_event: LRM operation dlm:0_start_0 (call=24, rc=0, cib-update=118, confirmed=true) ok
Feb 24 20:44:19 xen2 crmd: [5537]: info: match_graph_event: Action dlm:0_start_0 (12) confirmed on xen2 (rc=0)
Feb 24 20:44:19 xen2 crmd: [5537]: info: te_rsc_command: Initiating action 13: monitor dlm:0_monitor_10000 on xen2 (local)
Feb 24 20:44:19 xen2 crmd: [5537]: info: do_lrm_rsc_op: Performing key=13:38:0:e6c42e56-088a-4674-b420-201efc520279 op=dlm:0_monitor_10000 )
Feb 24 20:44:19 xen2 crmd: [5537]: info: te_rsc_command: Initiating action 14: start dlm:1_start_0 on xen1
Feb 24 20:44:19 xen2 crmd: [5537]: info: process_lrm_event: LRM operation dlm:0_monitor_10000 (call=25, rc=0, cib-update=119, confirmed=false) ok
Feb 24 20:44:19 xen2 crmd: [5537]: info: match_graph_event: Action dlm:0_monitor_10000 (13) confirmed on xen2 (rc=0)
Feb 24 20:44:20 xen2 crmd: [5537]: info: match_graph_event: Action dlm:1_start_0 (14) confirmed on xen1 (rc=0)
Feb 24 20:44:20 xen2 crmd: [5537]: info: te_rsc_command: Initiating action 15: monitor dlm:1_monitor_10000 on xen1
Feb 24 20:44:20 xen2 crmd: [5537]: info: te_pseudo_action: Pseudo action 17 fired and confirmed
Feb 24 20:44:20 xen2 crmd: [5537]: info: te_pseudo_action: Pseudo action 24 fired and confirmed
Feb 24 20:44:20 xen2 crmd: [5537]: info: te_rsc_command: Initiating action 20: start clvm:0_start_0 on xen2 (local)
Feb 24 20:44:20 xen2 crmd: [5537]: info: do_lrm_rsc_op: Performing key=20:38:0:e6c42e56-088a-4674-b420-201efc520279 op=clvm:0_start_0 )
Feb 24 20:44:20 xen2 lrmd: [5534]: info: rsc:clvm:0:26: start
Feb 24 20:44:20 xen2 crmd: [5537]: info: match_graph_event: Action dlm:1_monitor_10000 (15) confirmed on xen1 (rc=0)
Feb 24 20:44:20 xen2 lrmd: [5534]: info: RA output: (clvm:0:start:stderr) logd is not running
Feb 24 20:44:20 xen2 clvmd[6322]: INFO: Starting clvm:0
Feb 24 20:44:20 xen2 lrmd: [5534]: info: RA output: (clvm:0:start:stderr) 2010/02/24_20:44:20 INFO: Starting clvm:0
Feb 24 20:44:20 xen2 cluster-dlm[6302]: process_uevent: uevent: add@/kernel/dlm/clvmd
Feb 24 20:44:20 xen2 cluster-dlm[6302]: process_uevent: kernel: add@ clvmd
Feb 24 20:44:20 xen2 cluster-dlm[6302]: process_uevent: uevent: online@/kernel/dlm/clvmd
Feb 24 20:44:20 xen2 cluster-dlm[6302]: process_uevent: kernel: online@ clvmd
Feb 24 20:44:20 xen2 cluster-dlm[6302]: add_change: clvmd add_change cg 1 joined nodeid 112
Feb 24 20:44:20 xen2 cluster-dlm[6302]: add_change: clvmd add_change cg 1 we joined
Feb 24 20:44:20 xen2 cluster-dlm[6302]: add_change: clvmd add_change cg 1 counts member 1 joined 1 remove 0 failed 0
Feb 24 20:44:20 xen2 cluster-dlm[6302]: check_fencing_done: clvmd check_fencing done
Feb 24 20:44:20 xen2 cluster-dlm[6302]: check_quorum_done: clvmd check_quorum disabled
Feb 24 20:44:20 xen2 cluster-dlm[6302]: check_fs_done: clvmd check_fs none registered
Feb 24 20:44:20 xen2 cluster-dlm[6302]: send_info: clvmd send_start cg 1 flags 1 counts 0 1 1 0 0
Feb 24 20:44:20 xen2 cluster-dlm[6302]: receive_start: clvmd receive_start 112:1 len 76
Feb 24 20:44:20 xen2 cluster-dlm[6302]: match_change: clvmd match_change 112:1 matches cg 1
Feb 24 20:44:20 xen2 cluster-dlm[6302]: wait_messages_done: clvmd wait_messages cg 1 got all 1
Feb 24 20:44:20 xen2 cluster-dlm[6302]: start_kernel: clvmd start_kernel cg 1 member_count 1
Feb 24 20:44:20 xen2 cluster-dlm[6302]: do_sysfs: write "1090842362" to "/sys/kernel/dlm/clvmd/id"
Feb 24 20:44:20 xen2 cluster-dlm[6302]: set_configfs_members: set_members mkdir "/sys/kernel/config/dlm/cluster/spaces/clvmd/nodes/112"
Feb 24 20:44:20 xen2 cluster-dlm[6302]: do_sysfs: write "1" to "/sys/kernel/dlm/clvmd/control"
Feb 24 20:44:20 xen2 cluster-dlm[6302]: do_sysfs: write "0" to "/sys/kernel/dlm/clvmd/event_done"
Feb 24 20:44:20 xen2 cluster-dlm[6302]: process_uevent: uevent: add@/devices/virtual/misc/dlm_clvmd
Feb 24 20:44:21 xen2 clvmd: Cluster LVM daemon started - connected to OpenAIS
Feb 24 20:44:23 xen2 crmd: [5537]: info: process_lrm_event: LRM operation clvm:0_start_0 (call=26, rc=0, cib-update=120, confirmed=true) ok
Feb 24 20:44:23 xen2 crmd: [5537]: info: match_graph_event: Action clvm:0_start_0 (20) confirmed on xen2 (rc=0)
Feb 24 20:44:23 xen2 crmd: [5537]: info: te_rsc_command: Initiating action 21: monitor clvm:0_monitor_10000 on xen2 (local)
Feb 24 20:44:23 xen2 crmd: [5537]: info: do_lrm_rsc_op: Performing key=21:38:0:e6c42e56-088a-4674-b420-201efc520279 op=clvm:0_monitor_10000 )
Feb 24 20:44:23 xen2 crmd: [5537]: info: te_rsc_command: Initiating action 22: start clvm:1_start_0 on xen1
Feb 24 20:44:23 xen2 crmd: [5537]: info: process_lrm_event: LRM operation clvm:0_monitor_10000 (call=27, rc=0, cib-update=121, confirmed=false) ok
Feb 24 20:44:23 xen2 crmd: [5537]: info: match_graph_event: Action clvm:0_monitor_10000 (21) confirmed on xen2 (rc=0)
Feb 24 20:44:23 xen2 cluster-dlm[6302]: add_change: clvmd add_change cg 2 joined nodeid 110
Feb 24 20:44:23 xen2 cluster-dlm[6302]: add_change: clvmd add_change cg 2 counts member 2 joined 1 remove 0 failed 0
Feb 24 20:44:23 xen2 cluster-dlm[6302]: stop_kernel: clvmd stop_kernel cg 2
Feb 24 20:44:23 xen2 cluster-dlm[6302]: do_sysfs: write "0" to "/sys/kernel/dlm/clvmd/control"
Feb 24 20:44:23 xen2 cluster-dlm[6302]: check_fencing_done: clvmd check_fencing done
Feb 24 20:44:23 xen2 cluster-dlm[6302]: check_quorum_done: clvmd check_quorum disabled
Feb 24 20:44:23 xen2 cluster-dlm[6302]: check_fs_done: clvmd check_fs none registered
Feb 24 20:44:23 xen2 cluster-dlm[6302]: send_info: clvmd send_start cg 2 flags 2 counts 1 2 1 0 0
Feb 24 20:44:23 xen2 cluster-dlm[6302]: receive_start: clvmd receive_start 110:1 len 80
Feb 24 20:44:23 xen2 cluster-dlm[6302]: match_change: clvmd match_change 110:1 matches cg 2
Feb 24 20:44:23 xen2 cluster-dlm[6302]: wait_messages_done: clvmd wait_messages cg 2 need 1 of 2
Feb 24 20:44:23 xen2 cluster-dlm[6302]: receive_start: clvmd receive_start 112:2 len 80
Feb 24 20:44:23 xen2 cluster-dlm[6302]: match_change: clvmd match_change 112:2 matches cg 2
Feb 24 20:44:23 xen2 cluster-dlm[6302]: wait_messages_done: clvmd wait_messages cg 2 got all 2
Feb 24 20:44:23 xen2 cluster-dlm[6302]: start_kernel: clvmd start_kernel cg 2 member_count 2
Feb 24 20:44:23 xen2 cluster-dlm[6302]: update_dir_members: dir_member 112
Feb 24 20:44:23 xen2 cluster-dlm[6302]: set_configfs_members: set_members mkdir "/sys/kernel/config/dlm/cluster/spaces/clvmd/nodes/110"
Feb 24 20:44:23 xen2 cluster-dlm[6302]: do_sysfs: write "1" to "/sys/kernel/dlm/clvmd/control"
Feb 24 20:44:23 xen2 cluster-dlm[6302]: set_plock_ckpt_node: clvmd set_plock_ckpt_node from 112 to 112
Feb 24 20:44:23 xen2 cluster-dlm[6302]: _unlink_checkpoint: clvmd unlink ckpt 0
Feb 24 20:44:23 xen2 cluster-dlm[6302]: store_plocks: clvmd store_plocks: r_count 0, lock_count 0, pp 40 bytes
Feb 24 20:44:23 xen2 cluster-dlm[6302]: store_plocks: clvmd store_plocks: total 0 bytes, max_section 0 bytes
Feb 24 20:44:23 xen2 cluster-dlm[6302]: store_plocks: clvmd store_plocks: open ckpt handle 7545e14600000000
Feb 24 20:44:23 xen2 cluster-dlm[6302]: send_info: clvmd send_plocks_stored cg 2 flags 2 counts 1 2 1 0 0
Feb 24 20:44:23 xen2 cluster-dlm[6302]: receive_plocks_stored: clvmd receive_plocks_stored 112:2 need_plocks 0
Feb 24 20:44:26 xen2 crmd: [5537]: info: match_graph_event: Action clvm:1_start_0 (22) confirmed on xen1 (rc=0)
Feb 24 20:44:26 xen2 crmd: [5537]: info: te_rsc_command: Initiating action 23: monitor clvm:1_monitor_10000 on xen1
Feb 24 20:44:26 xen2 crmd: [5537]: info: te_pseudo_action: Pseudo action 25 fired and confirmed
Feb 24 20:44:26 xen2 crmd: [5537]: info: te_rsc_command: Initiating action 28: start LVMforVM1_start_0 on xen2 (local)
Feb 24 20:44:26 xen2 crmd: [5537]: info: do_lrm_rsc_op: Performing key=28:38:0:e6c42e56-088a-4674-b420-201efc520279 op=LVMforVM1_start_0 )
Feb 24 20:44:26 xen2 lrmd: [5534]: info: rsc:LVMforVM1:28: start
Feb 24 20:44:26 xen2 crmd: [5537]: info: te_rsc_command: Initiating action 30: start LVMforVM2_start_0 on xen1
Feb 24 20:44:26 xen2 crmd: [5537]: info: match_graph_event: Action clvm:1_monitor_10000 (23) confirmed on xen1 (rc=0)
Feb 24 20:44:26 xen2 lrmd: [5534]: info: RA output: (LVMforVM1:start:stderr) logd is not running
Feb 24 20:44:26 xen2 LVM[6362]: INFO: Activating volume group /dev/vm1
Feb 24 20:44:26 xen2 lrmd: [5534]: info: RA output: (LVMforVM1:start:stderr) 2010/02/24_20:44:26 INFO: Activating volume group /dev/vm1
Feb 24 20:44:27 xen2 lrmd: [5534]: info: RA output: (LVMforVM1:start:stderr) logd is not running
Feb 24 20:44:27 xen2 LVM[6362]: INFO: Reading all physical volumes. This may take a while... Found volume group "vm2" using metadata type lvm2 Found volume group "vm1" using metadata type lvm2
Feb 24 20:44:27 xen2 lrmd: [5534]: info: RA output: (LVMforVM1:start:stderr) 2010/02/24_20:44:27 INFO: Reading all physical volumes. This may take a while... Found volume group "vm2" using metadata type lvm2 Found volume group "vm1" using metadata type lvm2
Feb 24 20:44:27 xen2 crmd: [5537]: info: match_graph_event: Action LVMforVM2_start_0 (30) confirmed on xen1 (rc=0)
Feb 24 20:44:27 xen2 crmd: [5537]: info: te_rsc_command: Initiating action 31: monitor LVMforVM2_monitor_10000 on xen1
Feb 24 20:44:27 xen2 lrmd: [5534]: info: RA output: (LVMforVM1:start:stderr) logd is not running
Feb 24 20:44:27 xen2 LVM[6362]: INFO: 1 logical volume(s) in volume group "vm1" now active
Feb 24 20:44:27 xen2 lrmd: [5534]: info: RA output: (LVMforVM1:start:stderr) 2010/02/24_20:44:27 INFO: 1 logical volume(s) in volume group "vm1" now active
Feb 24 20:44:27 xen2 crmd: [5537]: info: match_graph_event: Action LVMforVM2_monitor_10000 (31) confirmed on xen1 (rc=0)
Feb 24 20:44:27 xen2 lrmd: [5534]: info: RA output: (LVMforVM1:start:stderr) Using volume group(s) on command line
Feb 24 20:44:27 xen2 lrmd: [5534]: info: RA output: (LVMforVM1:start:stderr) Finding volume group "vm1"
Feb 24 20:44:27 xen2 crmd: [5537]: info: process_lrm_event: LRM operation LVMforVM1_start_0 (call=28, rc=0, cib-update=122, confirmed=true) ok
Feb 24 20:44:27 xen2 crmd: [5537]: info: match_graph_event: Action LVMforVM1_start_0 (28) confirmed on xen2 (rc=0)
Feb 24 20:44:27 xen2 crmd: [5537]: info: te_rsc_command: Initiating action 29: monitor LVMforVM1_monitor_10000 on xen2 (local)
Feb 24 20:44:27 xen2 crmd: [5537]: info: do_lrm_rsc_op: Performing key=29:38:0:e6c42e56-088a-4674-b420-201efc520279 op=LVMforVM1_monitor_10000 )
Feb 24 20:44:27 xen2 crmd: [5537]: info: te_rsc_command: Initiating action 32: start vm1_start_0 on xen2 (local)
Feb 24 20:44:27 xen2 crmd: [5537]: info: do_lrm_rsc_op: Performing key=32:38:0:e6c42e56-088a-4674-b420-201efc520279 op=vm1_start_0 )
Feb 24 20:44:27 xen2 lrmd: [5534]: info: rsc:vm1:30: start
Feb 24 20:44:27 xen2 lrmd: [5534]: info: RA output: (LVMforVM1:monitor:stderr) Using volume group(s) on command line
Feb 24 20:44:27 xen2 lrmd: [5534]: info: RA output: (LVMforVM1:monitor:stderr) Finding volume group "vm1"
Feb 24 20:44:27 xen2 crmd: [5537]: info: process_lrm_event: LRM operation LVMforVM1_monitor_10000 (call=29, rc=0, cib-update=123, confirmed=false) ok
Feb 24 20:44:27 xen2 crmd: [5537]: info: match_graph_event: Action LVMforVM1_monitor_10000 (29) confirmed on xen2 (rc=0)
Feb 24 20:44:28 xen2 lrmd: [5534]: info: RA output: (vm1:start:stdout) Using config file "/etc/xen/vm/sles11".
Feb 24 20:44:30 xen2 logger: /etc/xen/scripts/block: add XENBUS_PATH=backend/vbd/1/51712
Feb 24 20:44:30 xen2 logger: /etc/xen/scripts/block: add XENBUS_PATH=backend/vbd/1/51728
Feb 24 20:44:30 xen2 logger: /etc/xen/scripts/vif-bridge: online XENBUS_PATH=backend/vif/1/0
Feb 24 20:44:30 xen2 ifup: vif1.0
Feb 24 20:44:30 xen2 ifup: No configuration found for vif1.0
Feb 24 20:44:30 xen2 logger: /etc/xen/scripts/vif-bridge: Successful vif-bridge online for vif1.0, bridge br0.
Feb 24 20:44:30 xen2 logger: /etc/xen/scripts/vif-bridge: Writing backend/vif/1/0/hotplug-status connected to xenstore.
Feb 24 20:44:30 xen2 logger: /etc/xen/scripts/block: Writing backend/vbd/1/51728/node /dev/loop0 to xenstore.
Feb 24 20:44:30 xen2 logger: /etc/xen/scripts/block: Writing backend/vbd/1/51728/physical-device 7:0 to xenstore.
Feb 24 20:44:30 xen2 logger: /etc/xen/scripts/block: Writing backend/vbd/1/51728/hotplug-status connected to xenstore.
Feb 24 20:44:31 xen2 logger: /etc/xen/scripts/block: Writing backend/vbd/1/51712/physical-device fd:6 to xenstore.
Feb 24 20:44:31 xen2 logger: /etc/xen/scripts/block: Writing backend/vbd/1/51712/hotplug-status connected to xenstore.
Feb 24 20:44:31 xen2 lrmd: [5534]: info: RA output: (vm1:start:stdout) Started domain sles11
Feb 24 20:44:33 xen2 crmd: [5537]: info: process_lrm_event: LRM operation vm1_start_0 (call=30, rc=0, cib-update=124, confirmed=true) ok
Feb 24 20:44:33 xen2 crmd: [5537]: info: match_graph_event: Action vm1_start_0 (32) confirmed on xen2 (rc=0)
Feb 24 20:44:33 xen2 crmd: [5537]: info: te_rsc_command: Initiating action 33: monitor vm1_monitor_10000 on xen2 (local)
Feb 24 20:44:33 xen2 crmd: [5537]: info: do_lrm_rsc_op: Performing key=33:38:0:e6c42e56-088a-4674-b420-201efc520279 op=vm1_monitor_10000 )
Feb 24 20:44:33 xen2 crmd: [5537]: info: process_lrm_event: LRM operation vm1_monitor_10000 (call=31, rc=0, cib-update=125, confirmed=false) ok
Feb 24 20:44:33 xen2 crmd: [5537]: info: match_graph_event: Action vm1_monitor_10000 (33) confirmed on xen2 (rc=0)
Feb 24 20:44:33 xen2 crmd: [5537]: info: run_graph: ====================================================
Feb 24 20:44:33 xen2 crmd: [5537]: notice: run_graph: Transition 38 (Complete=27, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pengine/pe-warn-41.bz2): Complete
Feb 24 20:44:33 xen2 crmd: [5537]: info: te_graph_trigger: Transition 38 is now complete
Feb 24 20:44:33 xen2 crmd: [5537]: info: notify_crmd: Transition 38 status: done - <null>
Feb 24 20:44:33 xen2 crmd: [5537]: info: do_state_transition: State transition S_TRANSITION_ENGINE -> S_IDLE [ input=I_TE_SUCCESS cause=C_FSA_INTERNAL origin=notify_crmd ]
Feb 24 20:44:33 xen2 crmd: [5537]: info: do_state_transition: Starting PEngine Recheck Timer
Feb 24 20:44:37 xen2 lrmd: [5534]: info: RA output: (LVMforVM1:monitor:stderr) Using volume group(s) on command line
Feb 24 20:44:37 xen2 lrmd: [5534]: info: RA output: (LVMforVM1:monitor:stderr) Finding volume group "vm1"
Feb 24 20:44:48 xen2 lrmd: [5534]: info: RA output: (LVMforVM1:monitor:stderr) Using volume group(s) on command line
Feb 24 20:44:48 xen2 lrmd: [5534]: info: RA output: (LVMforVM1:monitor:stderr) Finding volume group "vm1"
Feb 24 20:44:58 xen2 lrmd: [5534]: info: RA output: (LVMforVM1:monitor:stderr) Using volume group(s) on command line
Feb 24 20:44:58 xen2 lrmd: [5534]: info: RA output: (LVMforVM1:monitor:stderr) Finding volume group "vm1"
Feb 24 20:45:08 xen2 lrmd: [5534]: info: RA output: (LVMforVM1:monitor:stderr) Using volume group(s) on command line
Feb 24 20:45:08 xen2 lrmd: [5534]: info: RA output: (LVMforVM1:monitor:stderr) Finding volume group "vm1"
Feb 24 20:45:18 xen2 lrmd: [5534]: info: RA output: (LVMforVM1:monitor:stderr) Using volume group(s) on command line
Feb 24 20:45:18 xen2 lrmd: [5534]: info: RA output: (LVMforVM1:monitor:stderr) Finding volume group "vm1"
Feb 24 20:45:28 xen2 lrmd: [5534]: info: RA output: (LVMforVM1:monitor:stderr) Using volume group(s) on command line
Feb 24 20:45:28 xen2 lrmd: [5534]: info: RA output: (LVMforVM1:monitor:stderr) Finding volume group "vm1"
Feb 24 20:45:38 xen2 lrmd: [5534]: info: RA output: (LVMforVM1:monitor:stderr) Using volume group(s) on command line
Feb 24 20:45:38 xen2 lrmd: [5534]: info: RA output: (LVMforVM1:monitor:stderr) Finding volume group "vm1"
Feb 24 20:45:49 xen2 lrmd: [5534]: info: RA output: (LVMforVM1:monitor:stderr) Using volume group(s) on command line
Feb 24 20:45:49 xen2 lrmd: [5534]: info: RA output: (LVMforVM1:monitor:stderr) Finding volume group "vm1"
Feb 24 20:45:59 xen2 lrmd: [5534]: info: RA output: (LVMforVM1:monitor:stderr) Using volume group(s) on command line
Feb 24 20:45:59 xen2 lrmd: [5534]: info: RA output: (LVMforVM1:monitor:stderr) Finding volume group "vm1"
Feb 24 20:46:09 xen2 lrmd: [5534]: info: RA output: (LVMforVM1:monitor:stderr) Using volume group(s) on command line
Feb 24 20:46:09 xen2 lrmd: [5534]: info: RA output: (LVMforVM1:monitor:stderr) Finding volume group "vm1"
Feb 24 20:46:19 xen2 lrmd: [5534]: info: RA output: (LVMforVM1:monitor:stderr) Using volume group(s) on command line
Feb 24 20:46:19 xen2 lrmd: [5534]: info: RA output: (LVMforVM1:monitor:stderr) Finding volume group "vm1"
Feb 24 20:46:29 xen2 lrmd: [5534]: info: RA output: (LVMforVM1:monitor:stderr) Using volume group(s) on command line
Feb 24 20:46:29 xen2 lrmd: [5534]: info: RA output: (LVMforVM1:monitor:stderr) Finding volume group "vm1"
Feb 24 20:46:40 xen2 lrmd: [5534]: info: RA output: (LVMforVM1:monitor:stderr) Using volume group(s) on command line
Feb 24 20:46:40 xen2 lrmd: [5534]: info: RA output: (LVMforVM1:monitor:stderr) Finding volume group "vm1"
Feb 24 20:46:50 xen2 lrmd: [5534]: info: RA output: (LVMforVM1:monitor:stderr) Using volume group(s) on command line
Feb 24 20:46:50 xen2 lrmd: [5534]: info: RA output: (LVMforVM1:monitor:stderr) Finding volume group "vm1"
Feb 24 20:47:00 xen2 lrmd: [5534]: info: RA output: (LVMforVM1:monitor:stderr) Using volume group(s) on command line
Feb 24 20:47:00 xen2 lrmd: [5534]: info: RA output: (LVMforVM1:monitor:stderr) Finding volume group "vm1"
Feb 24 20:47:10 xen2 lrmd: [5534]: info: RA output: (LVMforVM1:monitor:stderr) Using volume group(s) on command line
Feb 24 20:47:10 xen2 lrmd: [5534]: info: RA output: (LVMforVM1:monitor:stderr) Finding volume group "vm1"
Feb 24 20:47:20 xen2 lrmd: [5534]: info: RA output: (LVMforVM1:monitor:stderr) Using volume group(s) on command line
Feb 24 20:47:20 xen2 lrmd: [5534]: info: RA output: (LVMforVM1:monitor:stderr) Finding volume group "vm1"
Feb 24 20:47:30 xen2 lrmd: [5534]: info: RA output: (LVMforVM1:monitor:stderr) Using volume group(s) on command line
Feb 24 20:47:30 xen2 lrmd: [5534]: info: RA output: (LVMforVM1:monitor:stderr) Finding volume group "vm1"
Feb 24 20:47:41 xen2 lrmd: [5534]: info: RA output: (LVMforVM1:monitor:stderr) Using volume group(s) on command line
Feb 24 20:47:41 xen2 lrmd: [5534]: info: RA output: (LVMforVM1:monitor:stderr) Finding volume group "vm1"
Feb 24 20:47:51 xen2 lrmd: [5534]: info: RA output: (LVMforVM1:monitor:stderr) Using volume group(s) on command line
Feb 24 20:47:51 xen2 lrmd: [5534]: info: RA output: (LVMforVM1:monitor:stderr) Finding volume group "vm1"
Feb 24 20:48:01 xen2 lrmd: [5534]: info: RA output: (LVMforVM1:monitor:stderr) Using volume group(s) on command line
Feb 24 20:48:01 xen2 lrmd: [5534]: info: RA output: (LVMforVM1:monitor:stderr) Finding volume group "vm1"
Feb 24 20:48:11 xen2 lrmd: [5534]: info: RA output: (LVMforVM1:monitor:stderr) Using volume group(s) on command line
Feb 24 20:48:11 xen2 lrmd: [5534]: info: RA output: (LVMforVM1:monitor:stderr) Finding volume group "vm1"
Feb 24 20:48:21 xen2 lrmd: [5534]: info: RA output: (LVMforVM1:monitor:stderr) Using volume group(s) on command line
Feb 24 20:48:21 xen2 lrmd: [5534]: info: RA output: (LVMforVM1:monitor:stderr) Finding volume group "vm1"
Feb 24 20:48:31 xen2 lrmd: [5534]: info: RA output: (LVMforVM1:monitor:stderr) Using volume group(s) on command line
Feb 24 20:48:31 xen2 lrmd: [5534]: info: RA output: (LVMforVM1:monitor:stderr) Finding volume group "vm1"
Feb 24 20:48:41 xen2 lrmd: [5534]: info: RA output: (LVMforVM1:monitor:stderr) Using volume group(s) on command line
Feb 24 20:48:41 xen2 lrmd: [5534]: info: RA output: (LVMforVM1:monitor:stderr) Finding volume group "vm1"
Feb 24 20:48:52 xen2 lrmd: [5534]: info: RA output: (LVMforVM1:monitor:stderr) Using volume group(s) on command line
Feb 24 20:48:52 xen2 lrmd: [5534]: info: RA output: (LVMforVM1:monitor:stderr) Finding volume group "vm1"
Feb 24 20:49:02 xen2 lrmd: [5534]: info: RA output: (LVMforVM1:monitor:stderr) Using volume group(s) on command line
Feb 24 20:49:02 xen2 lrmd: [5534]: info: RA output: (LVMforVM1:monitor:stderr) Finding volume group "vm1"
Feb 24 20:49:12 xen2 lrmd: [5534]: info: RA output: (LVMforVM1:monitor:stderr) Using volume group(s) on command line
Feb 24 20:49:12 xen2 lrmd: [5534]: info: RA output: (LVMforVM1:monitor:stderr) Finding volume group "vm1"
Feb 24 20:49:22 xen2 lrmd: [5534]: info: RA output: (LVMforVM1:monitor:stderr) Using volume group(s) on command line
Feb 24 20:49:22 xen2 lrmd: [5534]: info: RA output: (LVMforVM1:monitor:stderr) Finding volume group "vm1"
Feb 24 20:49:32 xen2 lrmd: [5534]: info: RA output: (LVMforVM1:monitor:stderr) Using volume group(s) on command line
Feb 24 20:49:32 xen2 lrmd: [5534]: info: RA output: (LVMforVM1:monitor:stderr) Finding volume group "vm1"
Feb 24 20:49:43 xen2 lrmd: [5534]: info: RA output: (LVMforVM1:monitor:stderr) Using volume group(s) on command line
Feb 24 20:49:43 xen2 lrmd: [5534]: info: RA output: (LVMforVM1:monitor:stderr) Finding volume group "vm1"
Feb 24 20:49:53 xen2 lrmd: [5534]: info: RA output: (LVMforVM1:monitor:stderr) Using volume group(s) on command line
Feb 24 20:49:53 xen2 lrmd: [5534]: info: RA output: (LVMforVM1:monitor:stderr) Finding volume group "vm1"
Feb 24 20:50:03 xen2 lrmd: [5534]: info: RA output: (LVMforVM1:monitor:stderr) Using volume group(s) on command line
Feb 24 20:50:03 xen2 lrmd: [5534]: info: RA output: (LVMforVM1:monitor:stderr) Finding volume group "vm1"
Feb 24 20:50:13 xen2 lrmd: [5534]: info: RA output: (LVMforVM1:monitor:stderr) Using volume group(s) on command line
Feb 24 20:50:13 xen2 lrmd: [5534]: info: RA output: (LVMforVM1:monitor:stderr) Finding volume group "vm1"
Feb 24 20:50:23 xen2 lrmd: [5534]: info: RA output: (LVMforVM1:monitor:stderr) Using volume group(s) on command line
Feb 24 20:50:23 xen2 lrmd: [5534]: info: RA output: (LVMforVM1:monitor:stderr) Finding volume group "vm1"
Feb 24 20:50:33 xen2 lrmd: [5534]: info: RA output: (LVMforVM1:monitor:stderr) Using volume group(s) on command line
Feb 24 20:50:33 xen2 lrmd: [5534]: info: RA output: (LVMforVM1:monitor:stderr) Finding volume group "vm1"
Feb 24 20:50:40 xen2 sshd[8534]: Accepted keyboard-interactive/pam for root from 192.168.1.56 port 36565 ssh2
Feb 24 20:50:44 xen2 lrmd: [5534]: info: RA output: (LVMforVM1:monitor:stderr) Using volume group(s) on command line
Feb 24 20:50:44 xen2 lrmd: [5534]: info: RA output: (LVMforVM1:monitor:stderr) Finding volume group "vm1"
Feb 24 20:50:54 xen2 lrmd: [5534]: info: RA output: (LVMforVM1:monitor:stderr) Using volume group(s) on command line
Feb 24 20:50:54 xen2 lrmd: [5534]: info: RA output: (LVMforVM1:monitor:stderr) Finding volume group "vm1"
Feb 24 20:50:58 xen2 sshd[8657]: Accepted keyboard-interactive/pam for root from 192.168.1.56 port 36566 ssh2
More information about the Pacemaker
mailing list