<html><body>
<p><font size="2" face="sans-serif">Hi Andrew,</font><br>
<br>
<font size="2" face="sans-serif">Thank you for your quick response. This time I completely shut down ha4 and then started corosync and pacemaker on ha3, but the problem persists. My understanding is that requires="nothing" or prereq="nothing" should allow the cluster to start resources without fencing first. Is that not correct?</font><br>
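<br>
<font size="2" face="sans-serif">For reference, this is the intent of the configuration below, written compactly in crmsh-style syntax (a sketch only; values are copied from the CIB dump, and I have not verified this exact syntax against crmsh for 1.1.10):</font><br>
<br>

```
# Sketch (crmsh-style): requires=nothing set on every operation,
# mirroring the <op ... requires="nothing"> attributes in the CIB below.
primitive ha3_fabric_ping ocf:pacemaker:ping \
    params host_list=10.10.0.1 failure_score=1 \
    op start timeout=60s interval=0 requires=nothing \
    op monitor interval=15s timeout=15s requires=nothing \
    op stop on-fail=fence interval=0 requires=nothing
```

<br>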
<br>
<br>
<font size="2" face="sans-serif">[root@ha3 ~]# cat /tmp/cib.xml </font><br>
<font size="2" face="sans-serif"><cib epoch="216" num_updates="9" admin_epoch="0" validate-with="pacemaker-1.2" cib-last-written="Thu Jun 12 21:25:13 2014" crm_feature_set="3.0.8" update-origin="ha3" update-client="crmd" have-quorum="0" dc-uuid="168427534"></font><br>
<font size="2" face="sans-serif"> <configuration></font><br>
<font size="2" face="sans-serif"> <crm_config></font><br>
<font size="2" face="sans-serif"> <cluster_property_set id="cib-bootstrap-options"></font><br>
<font size="2" face="sans-serif"> <nvpair name="symmetric-cluster" value="true" id="cib-bootstrap-options-symmetric-cluster"/></font><br>
<font size="2" face="sans-serif"> <nvpair name="stonith-enabled" value="true" id="cib-bootstrap-options-stonith-enabled"/></font><br>
<font size="2" face="sans-serif"> <nvpair name="stonith-action" value="reboot" id="cib-bootstrap-options-stonith-action"/></font><br>
<font size="2" face="sans-serif"> <nvpair name="no-quorum-policy" value="ignore" id="cib-bootstrap-options-no-quorum-policy"/></font><br>
<font size="2" face="sans-serif"> <nvpair name="stop-orphan-resources" value="true" id="cib-bootstrap-options-stop-orphan-resources"/></font><br>
<font size="2" face="sans-serif"> <nvpair name="stop-orphan-actions" value="true" id="cib-bootstrap-options-stop-orphan-actions"/></font><br>
<font size="2" face="sans-serif"> <nvpair name="default-action-timeout" value="20s" id="cib-bootstrap-options-default-action-timeout"/></font><br>
<font size="2" face="sans-serif"> <nvpair id="cib-bootstrap-options-dc-version" name="dc-version" value="1.1.10-9d39a6b"/></font><br>
<font size="2" face="sans-serif"> <nvpair id="cib-bootstrap-options-cluster-infrastructure" name="cluster-infrastructure" value="corosync"/></font><br>
<font size="2" face="sans-serif"> </cluster_property_set></font><br>
<font size="2" face="sans-serif"> </crm_config></font><br>
<font size="2" face="sans-serif"> <nodes></font><br>
<font size="2" face="sans-serif"> <node id="168427534" uname="ha3"/></font><br>
<font size="2" face="sans-serif"> <node id="168427535" uname="ha4"/></font><br>
<font size="2" face="sans-serif"> </nodes></font><br>
<font size="2" face="sans-serif"> <resources></font><br>
<font size="2" face="sans-serif"> <primitive id="ha3_fabric_ping" class="ocf" provider="pacemaker" type="ping"></font><br>
<font size="2" face="sans-serif"> <instance_attributes id="ha3_fabric_ping-instance_attributes"></font><br>
<font size="2" face="sans-serif"> <nvpair name="host_list" value="10.10.0.1" id="ha3_fabric_ping-instance_attributes-host_list"/></font><br>
<font size="2" face="sans-serif"> <nvpair name="failure_score" value="1" id="ha3_fabric_ping-instance_attributes-failure_score"/></font><br>
<font size="2" face="sans-serif"> </instance_attributes></font><br>
<font size="2" face="sans-serif"> <operations></font><br>
<font size="2" face="sans-serif"> <op name="start" timeout="60s" requires="nothing" interval="0" id="ha3_fabric_ping-start-0"></font><br>
<font size="2" face="sans-serif"> <instance_attributes id="ha3_fabric_ping-start-0-instance_attributes"></font><br>
<font size="2" face="sans-serif"> <nvpair name="prereq" value="nothing" id="ha3_fabric_ping-start-0-instance_attributes-prereq"/></font><br>
<font size="2" face="sans-serif"> </instance_attributes></font><br>
<font size="2" face="sans-serif"> </op></font><br>
<font size="2" face="sans-serif"> <op name="monitor" interval="15s" requires="nothing" timeout="15s" id="ha3_fabric_ping-monitor-15s"></font><br>
<font size="2" face="sans-serif"> <instance_attributes id="ha3_fabric_ping-monitor-15s-instance_attributes"></font><br>
<font size="2" face="sans-serif"> <nvpair name="prereq" value="nothing" id="ha3_fabric_ping-monitor-15s-instance_attributes-prereq"/></font><br>
<font size="2" face="sans-serif"> </instance_attributes></font><br>
<font size="2" face="sans-serif"> </op></font><br>
<font size="2" face="sans-serif"> <op name="stop" on-fail="fence" requires="nothing" interval="0" id="ha3_fabric_ping-stop-0"></font><br>
<font size="2" face="sans-serif"> <instance_attributes id="ha3_fabric_ping-stop-0-instance_attributes"></font><br>
<font size="2" face="sans-serif"> <nvpair name="prereq" value="nothing" id="ha3_fabric_ping-stop-0-instance_attributes-prereq"/></font><br>
<font size="2" face="sans-serif"> </instance_attributes></font><br>
<font size="2" face="sans-serif"> </op></font><br>
<font size="2" face="sans-serif"> </operations></font><br>
<font size="2" face="sans-serif"> </primitive></font><br>
<font size="2" face="sans-serif"> <primitive id="ha4_fabric_ping" class="ocf" provider="pacemaker" type="ping"></font><br>
<font size="2" face="sans-serif"> <instance_attributes id="ha4_fabric_ping-instance_attributes"></font><br>
<font size="2" face="sans-serif"> <nvpair name="host_list" value="10.10.0.1" id="ha4_fabric_ping-instance_attributes-host_list"/></font><br>
<font size="2" face="sans-serif"> <nvpair name="failure_score" value="1" id="ha4_fabric_ping-instance_attributes-failure_score"/></font><br>
<font size="2" face="sans-serif"> </instance_attributes></font><br>
<font size="2" face="sans-serif"> <operations></font><br>
<font size="2" face="sans-serif"> <op name="start" timeout="60s" requires="nothing" interval="0" id="ha4_fabric_ping-start-0"></font><br>
<font size="2" face="sans-serif"> <instance_attributes id="ha4_fabric_ping-start-0-instance_attributes"></font><br>
<font size="2" face="sans-serif"> <nvpair name="prereq" value="nothing" id="ha4_fabric_ping-start-0-instance_attributes-prereq"/></font><br>
<font size="2" face="sans-serif"> </instance_attributes></font><br>
<font size="2" face="sans-serif"> </op></font><br>
<font size="2" face="sans-serif"> <op name="monitor" interval="15s" requires="nothing" timeout="15s" id="ha4_fabric_ping-monitor-15s"></font><br>
<font size="2" face="sans-serif"> <instance_attributes id="ha4_fabric_ping-monitor-15s-instance_attributes"></font><br>
<font size="2" face="sans-serif"> <nvpair name="prereq" value="nothing" id="ha4_fabric_ping-monitor-15s-instance_attributes-prereq"/></font><br>
<font size="2" face="sans-serif"> </instance_attributes></font><br>
<font size="2" face="sans-serif"> </op></font><br>
<font size="2" face="sans-serif"> <op name="stop" on-fail="fence" requires="nothing" interval="0" id="ha4_fabric_ping-stop-0"></font><br>
<font size="2" face="sans-serif"> <instance_attributes id="ha4_fabric_ping-stop-0-instance_attributes"></font><br>
<font size="2" face="sans-serif"> <nvpair name="prereq" value="nothing" id="ha4_fabric_ping-stop-0-instance_attributes-prereq"/></font><br>
<font size="2" face="sans-serif"> </instance_attributes></font><br>
<font size="2" face="sans-serif"> </op></font><br>
<font size="2" face="sans-serif"> </operations></font><br>
<font size="2" face="sans-serif"> </primitive></font><br>
<font size="2" face="sans-serif"> <primitive id="fencing_route_to_ha3" class="stonith" type="meatware"></font><br>
<font size="2" face="sans-serif"> <instance_attributes id="fencing_route_to_ha3-instance_attributes"></font><br>
<font size="2" face="sans-serif"> <nvpair name="hostlist" value="ha3" id="fencing_route_to_ha3-instance_attributes-hostlist"/></font><br>
<font size="2" face="sans-serif"> </instance_attributes></font><br>
<font size="2" face="sans-serif"> <operations></font><br>
<font size="2" face="sans-serif"> <op name="start" requires="nothing" interval="0" id="fencing_route_to_ha3-start-0"></font><br>
<font size="2" face="sans-serif"> <instance_attributes id="fencing_route_to_ha3-start-0-instance_attributes"></font><br>
<font size="2" face="sans-serif"> <nvpair name="prereq" value="nothing" id="fencing_route_to_ha3-start-0-instance_attributes-prereq"/></font><br>
<font size="2" face="sans-serif"> </instance_attributes></font><br>
<font size="2" face="sans-serif"> </op></font><br>
<font size="2" face="sans-serif"> <op name="monitor" requires="nothing" interval="0" id="fencing_route_to_ha3-monitor-0"></font><br>
<font size="2" face="sans-serif"> <instance_attributes id="fencing_route_to_ha3-monitor-0-instance_attributes"></font><br>
<font size="2" face="sans-serif"> <nvpair name="prereq" value="nothing" id="fencing_route_to_ha3-monitor-0-instance_attributes-prereq"/></font><br>
<font size="2" face="sans-serif"> </instance_attributes></font><br>
<font size="2" face="sans-serif"> </op></font><br>
<font size="2" face="sans-serif"> </operations></font><br>
<font size="2" face="sans-serif"> </primitive></font><br>
<font size="2" face="sans-serif"> <primitive id="fencing_route_to_ha4" class="stonith" type="meatware"></font><br>
<font size="2" face="sans-serif"> <instance_attributes id="fencing_route_to_ha4-instance_attributes"></font><br>
<font size="2" face="sans-serif"> <nvpair name="hostlist" value="ha4" id="fencing_route_to_ha4-instance_attributes-hostlist"/></font><br>
<font size="2" face="sans-serif"> </instance_attributes></font><br>
<font size="2" face="sans-serif"> <operations></font><br>
<font size="2" face="sans-serif"> <op name="start" requires="nothing" interval="0" id="fencing_route_to_ha4-start-0"></font><br>
<font size="2" face="sans-serif"> <instance_attributes id="fencing_route_to_ha4-start-0-instance_attributes"></font><br>
<font size="2" face="sans-serif"> <nvpair name="prereq" value="nothing" id="fencing_route_to_ha4-start-0-instance_attributes-prereq"/></font><br>
<font size="2" face="sans-serif"> </instance_attributes></font><br>
<font size="2" face="sans-serif"> </op></font><br>
<font size="2" face="sans-serif"> <op name="monitor" requires="nothing" interval="0" id="fencing_route_to_ha4-monitor-0"></font><br>
<font size="2" face="sans-serif"> <instance_attributes id="fencing_route_to_ha4-monitor-0-instance_attributes"></font><br>
<font size="2" face="sans-serif"> <nvpair name="prereq" value="nothing" id="fencing_route_to_ha4-monitor-0-instance_attributes-prereq"/></font><br>
<font size="2" face="sans-serif"> </instance_attributes></font><br>
<font size="2" face="sans-serif"> </op></font><br>
<font size="2" face="sans-serif"> </operations></font><br>
<font size="2" face="sans-serif"> </primitive></font><br>
<font size="2" face="sans-serif"> </resources></font><br>
<font size="2" face="sans-serif"> <constraints></font><br>
<font size="2" face="sans-serif"> <rsc_location id="ha3_fabric_ping_location" rsc="ha3_fabric_ping" score="INFINITY" node="ha3"/></font><br>
<font size="2" face="sans-serif"> <rsc_location id="ha3_fabric_ping_not_location" rsc="ha3_fabric_ping" score="-INFINITY" node="ha4"/></font><br>
<font size="2" face="sans-serif"> <rsc_location id="ha4_fabric_ping_location" rsc="ha4_fabric_ping" score="INFINITY" node="ha4"/></font><br>
<font size="2" face="sans-serif"> <rsc_location id="ha4_fabric_ping_not_location" rsc="ha4_fabric_ping" score="-INFINITY" node="ha3"/></font><br>
<font size="2" face="sans-serif"> <rsc_location id="fencing_route_to_ha4_location" rsc="fencing_route_to_ha4" score="INFINITY" node="ha3"/></font><br>
<font size="2" face="sans-serif"> <rsc_location id="fencing_route_to_ha4_not_location" rsc="fencing_route_to_ha4" score="-INFINITY" node="ha4"/></font><br>
<font size="2" face="sans-serif"> <rsc_location id="fencing_route_to_ha3_location" rsc="fencing_route_to_ha3" score="INFINITY" node="ha4"/></font><br>
<font size="2" face="sans-serif"> <rsc_location id="fencing_route_to_ha3_not_location" rsc="fencing_route_to_ha3" score="-INFINITY" node="ha3"/></font><br>
<font size="2" face="sans-serif"> <rsc_order id="ha3_fabric_ping_before_fencing_route_to_ha4" score="INFINITY" first="ha3_fabric_ping" first-action="start" then="fencing_route_to_ha4" then-action="start"/></font><br>
<font size="2" face="sans-serif"> <rsc_order id="ha4_fabric_ping_before_fencing_route_to_ha3" score="INFINITY" first="ha4_fabric_ping" first-action="start" then="fencing_route_to_ha3" then-action="start"/></font><br>
<font size="2" face="sans-serif"> </constraints></font><br>
<font size="2" face="sans-serif"> <rsc_defaults></font><br>
<font size="2" face="sans-serif"> <meta_attributes id="rsc-options"></font><br>
<font size="2" face="sans-serif"> <nvpair name="resource-stickiness" value="INFINITY" id="rsc-options-resource-stickiness"/></font><br>
<font size="2" face="sans-serif"> <nvpair name="migration-threshold" value="0" id="rsc-options-migration-threshold"/></font><br>
<font size="2" face="sans-serif"> <nvpair name="is-managed" value="true" id="rsc-options-is-managed"/></font><br>
<font size="2" face="sans-serif"> </meta_attributes></font><br>
<font size="2" face="sans-serif"> </rsc_defaults></font><br>
<font size="2" face="sans-serif"> </configuration></font><br>
<font size="2" face="sans-serif"> <status></font><br>
<font size="2" face="sans-serif"> <node_state id="168427534" uname="ha3" in_ccm="true" crmd="online" crm-debug-origin="do_update_resource" join="member" expected="member"></font><br>
<font size="2" face="sans-serif"> <lrm id="168427534"></font><br>
<font size="2" face="sans-serif"> <lrm_resources></font><br>
<font size="2" face="sans-serif"> <lrm_resource id="ha3_fabric_ping" type="ping" class="ocf" provider="pacemaker"></font><br>
<font size="2" face="sans-serif"> <lrm_rsc_op id="ha3_fabric_ping_last_0" operation_key="ha3_fabric_ping_monitor_0" operation="monitor" crm-debug-origin="do_update_resource" crm_feature_set="3.0.8" transition-key="4:1:7:a2fa2eff-30ee-4f05-a458-7d23b0fa4c95" transition-magic="0:7;4:1:7:a2fa2eff-30ee-4f05-a458-7d23b0fa4c95" call-id="5" rc-code="7" op-status="0" interval="0" last-run="1402626507" last-rc-change="1402626507" exec-time="42" queue-time="0" op-digest="91b00b3fe95f23582466d18e42c4fd58"/></font><br>
<font size="2" face="sans-serif"> </lrm_resource></font><br>
<font size="2" face="sans-serif"> <lrm_resource id="ha4_fabric_ping" type="ping" class="ocf" provider="pacemaker"></font><br>
<font size="2" face="sans-serif"> <lrm_rsc_op id="ha4_fabric_ping_last_0" operation_key="ha4_fabric_ping_monitor_0" operation="monitor" crm-debug-origin="do_update_resource" crm_feature_set="3.0.8" transition-key="5:1:7:a2fa2eff-30ee-4f05-a458-7d23b0fa4c95" transition-magic="0:7;5:1:7:a2fa2eff-30ee-4f05-a458-7d23b0fa4c95" call-id="9" rc-code="7" op-status="0" interval="0" last-run="1402626507" last-rc-change="1402626507" exec-time="8" queue-time="0" op-digest="91b00b3fe95f23582466d18e42c4fd58"/></font><br>
<font size="2" face="sans-serif"> </lrm_resource></font><br>
<font size="2" face="sans-serif"> <lrm_resource id="fencing_route_to_ha3" type="meatware" class="stonith"></font><br>
<font size="2" face="sans-serif"> <lrm_rsc_op id="fencing_route_to_ha3_last_0" operation_key="fencing_route_to_ha3_monitor_0" operation="monitor" crm-debug-origin="do_update_resource" crm_feature_set="3.0.8" transition-key="6:1:7:a2fa2eff-30ee-4f05-a458-7d23b0fa4c95" transition-magic="0:7;6:1:7:a2fa2eff-30ee-4f05-a458-7d23b0fa4c95" call-id="13" rc-code="7" op-status="0" interval="0" last-run="1402626507" last-rc-change="1402626507" exec-time="0" queue-time="0" op-digest="502fbd7a2366c2be772d7fbecc9e0351"/></font><br>
<font size="2" face="sans-serif"> </lrm_resource></font><br>
<font size="2" face="sans-serif"> <lrm_resource id="fencing_route_to_ha4" type="meatware" class="stonith"></font><br>
<font size="2" face="sans-serif"> <lrm_rsc_op id="fencing_route_to_ha4_last_0" operation_key="fencing_route_to_ha4_monitor_0" operation="monitor" crm-debug-origin="do_update_resource" crm_feature_set="3.0.8" transition-key="7:1:7:a2fa2eff-30ee-4f05-a458-7d23b0fa4c95" transition-magic="0:7;7:1:7:a2fa2eff-30ee-4f05-a458-7d23b0fa4c95" call-id="17" rc-code="7" op-status="0" interval="0" last-run="1402626507" last-rc-change="1402626507" exec-time="0" queue-time="0" op-digest="5be26fbcfd648e3d545d0115645dde76"/></font><br>
<font size="2" face="sans-serif"> </lrm_resource></font><br>
<font size="2" face="sans-serif"> </lrm_resources></font><br>
<font size="2" face="sans-serif"> </lrm></font><br>
<font size="2" face="sans-serif"> <transient_attributes id="168427534"></font><br>
<font size="2" face="sans-serif"> <instance_attributes id="status-168427534"></font><br>
<font size="2" face="sans-serif"> <nvpair id="status-168427534-shutdown" name="shutdown" value="0"/></font><br>
<font size="2" face="sans-serif"> <nvpair id="status-168427534-probe_complete" name="probe_complete" value="true"/></font><br>
<font size="2" face="sans-serif"> </instance_attributes></font><br>
<font size="2" face="sans-serif"> </transient_attributes></font><br>
<font size="2" face="sans-serif"> </node_state></font><br>
<font size="2" face="sans-serif"> </status></font><br>
<font size="2" face="sans-serif"></cib></font><br>
<br>
<br>
<font size="2" face="sans-serif">Jun 12 21:27:59 ha3 systemd: Starting LSB: Starts and stops Corosync Cluster Engine....</font><br>
<font size="2" face="sans-serif">Jun 12 21:27:59 ha3 corosync[4346]: [MAIN ] Corosync Cluster Engine ('2.3.3'): started and ready to provide service.</font><br>
<font size="2" face="sans-serif">Jun 12 21:27:59 ha3 corosync[4346]: [MAIN ] Corosync built-in features: pie relro bindnow</font><br>
<font size="2" face="sans-serif">Jun 12 21:27:59 ha3 corosync[4347]: [TOTEM ] Initializing transport (UDP/IP Unicast).</font><br>
<font size="2" face="sans-serif">Jun 12 21:27:59 ha3 corosync[4347]: [TOTEM ] Initializing transmit/receive security (NSS) crypto: none hash: none</font><br>
<font size="2" face="sans-serif">Jun 12 21:27:59 ha3 corosync[4347]: [TOTEM ] The network interface [10.10.0.14] is now up.</font><br>
<font size="2" face="sans-serif">Jun 12 21:27:59 ha3 corosync[4347]: [SERV ] Service engine loaded: corosync configuration map access [0]</font><br>
<font size="2" face="sans-serif">Jun 12 21:27:59 ha3 corosync[4347]: [QB ] server name: cmap</font><br>
<font size="2" face="sans-serif">Jun 12 21:27:59 ha3 corosync[4347]: [SERV ] Service engine loaded: corosync configuration service [1]</font><br>
<font size="2" face="sans-serif">Jun 12 21:27:59 ha3 corosync[4347]: [QB ] server name: cfg</font><br>
<font size="2" face="sans-serif">Jun 12 21:27:59 ha3 corosync[4347]: [SERV ] Service engine loaded: corosync cluster closed process group service v1.01 [2]</font><br>
<font size="2" face="sans-serif">Jun 12 21:27:59 ha3 corosync[4347]: [QB ] server name: cpg</font><br>
<font size="2" face="sans-serif">Jun 12 21:27:59 ha3 corosync[4347]: [SERV ] Service engine loaded: corosync profile loading service [4]</font><br>
<font size="2" face="sans-serif">Jun 12 21:27:59 ha3 corosync[4347]: [QUORUM] Using quorum provider corosync_votequorum</font><br>
<font size="2" face="sans-serif">Jun 12 21:27:59 ha3 corosync[4347]: [SERV ] Service engine loaded: corosync vote quorum service v1.0 [5]</font><br>
<font size="2" face="sans-serif">Jun 12 21:27:59 ha3 corosync[4347]: [QB ] server name: votequorum</font><br>
<font size="2" face="sans-serif">Jun 12 21:27:59 ha3 corosync[4347]: [SERV ] Service engine loaded: corosync cluster quorum service v0.1 [3]</font><br>
<font size="2" face="sans-serif">Jun 12 21:27:59 ha3 corosync[4347]: [QB ] server name: quorum</font><br>
<font size="2" face="sans-serif">Jun 12 21:27:59 ha3 corosync[4347]: [TOTEM ] adding new UDPU member {10.10.0.14}</font><br>
<font size="2" face="sans-serif">Jun 12 21:27:59 ha3 corosync[4347]: [TOTEM ] adding new UDPU member {10.10.0.15}</font><br>
<font size="2" face="sans-serif">Jun 12 21:27:59 ha3 corosync[4347]: [TOTEM ] A new membership (10.10.0.14:980) was formed. Members joined: 168427534</font><br>
<font size="2" face="sans-serif">Jun 12 21:27:59 ha3 corosync[4347]: [QUORUM] Members[1]: 168427534</font><br>
<font size="2" face="sans-serif">Jun 12 21:27:59 ha3 corosync[4347]: [MAIN ] Completed service synchronization, ready to provide service.</font><br>
<font size="2" face="sans-serif">Jun 12 21:28:00 ha3 corosync: Starting Corosync Cluster Engine (corosync): [ OK ]</font><br>
<font size="2" face="sans-serif">Jun 12 21:28:00 ha3 systemd: Started LSB: Starts and stops Corosync Cluster Engine..</font><br>
<font size="2" face="sans-serif">Jun 12 21:28:05 ha3 systemd: Starting LSB: Starts and stops Pacemaker Cluster Manager....</font><br>
<font size="2" face="sans-serif">Jun 12 21:28:05 ha3 pacemaker: Starting Pacemaker Cluster Manager</font><br>
<font size="2" face="sans-serif">Jun 12 21:28:05 ha3 pacemakerd[4375]: notice: mcp_read_config: Configured corosync to accept connections from group 1000: OK (1)</font><br>
<font size="2" face="sans-serif">Jun 12 21:28:05 ha3 pacemakerd[4375]: notice: main: Starting Pacemaker 1.1.10 (Build: 9d39a6b): agent-manpages ncurses libqb-logging libqb-ipc lha-fencing nagios corosync-native libesmtp</font><br>
<font size="2" face="sans-serif">Jun 12 21:28:05 ha3 pacemakerd[4375]: notice: corosync_node_name: Unable to get node name for nodeid 168427534</font><br>
<font size="2" face="sans-serif">Jun 12 21:28:05 ha3 pacemakerd[4375]: notice: get_node_name: Could not obtain a node name for corosync nodeid 168427534</font><br>
<font size="2" face="sans-serif">Jun 12 21:28:05 ha3 pacemakerd[4375]: notice: cluster_connect_quorum: Quorum lost</font><br>
<font size="2" face="sans-serif">Jun 12 21:28:05 ha3 pacemakerd[4375]: notice: corosync_node_name: Unable to get node name for nodeid 168427534</font><br>
<font size="2" face="sans-serif">Jun 12 21:28:05 ha3 pacemakerd[4375]: notice: get_node_name: Defaulting to uname -n for the local corosync node name</font><br>
<font size="2" face="sans-serif">Jun 12 21:28:05 ha3 pacemakerd[4375]: notice: crm_update_peer_state: pcmk_quorum_notification: Node ha3[168427534] - state is now member (was (null))</font><br>
<font size="2" face="sans-serif">Jun 12 21:28:05 ha3 pengine[4381]: warning: crm_is_writable: /var/lib/pacemaker/pengine should be owned and r/w by group haclient</font><br>
<font size="2" face="sans-serif">Jun 12 21:28:05 ha3 cib[4377]: warning: crm_is_writable: /var/lib/pacemaker/cib should be owned and r/w by group haclient</font><br>
<font size="2" face="sans-serif">Jun 12 21:28:05 ha3 cib[4377]: notice: crm_cluster_connect: Connecting to cluster infrastructure: corosync</font><br>
<font size="2" face="sans-serif">Jun 12 21:28:05 ha3 stonith-ng[4378]: notice: crm_cluster_connect: Connecting to cluster infrastructure: corosync</font><br>
<font size="2" face="sans-serif">Jun 12 21:28:05 ha3 crmd[4382]: notice: main: CRM Git Version: 9d39a6b</font><br>
<font size="2" face="sans-serif">Jun 12 21:28:05 ha3 crmd[4382]: warning: crm_is_writable: /var/lib/pacemaker/pengine should be owned and r/w by group haclient</font><br>
<font size="2" face="sans-serif">Jun 12 21:28:05 ha3 crmd[4382]: warning: crm_is_writable: /var/lib/pacemaker/cib should be owned and r/w by group haclient</font><br>
<font size="2" face="sans-serif">Jun 12 21:28:05 ha3 attrd[4380]: notice: crm_cluster_connect: Connecting to cluster infrastructure: corosync</font><br>
<font size="2" face="sans-serif">Jun 12 21:28:05 ha3 attrd[4380]: notice: corosync_node_name: Unable to get node name for nodeid 168427534</font><br>
<font size="2" face="sans-serif">Jun 12 21:28:05 ha3 attrd[4380]: notice: get_node_name: Could not obtain a node name for corosync nodeid 168427534</font><br>
<font size="2" face="sans-serif">Jun 12 21:28:05 ha3 attrd[4380]: notice: crm_update_peer_state: attrd_peer_change_cb: Node (null)[168427534] - state is now member (was (null))</font><br>
<font size="2" face="sans-serif">Jun 12 21:28:05 ha3 attrd[4380]: notice: corosync_node_name: Unable to get node name for nodeid 168427534</font><br>
<font size="2" face="sans-serif">Jun 12 21:28:05 ha3 attrd[4380]: notice: get_node_name: Defaulting to uname -n for the local corosync node name</font><br>
<font size="2" face="sans-serif">Jun 12 21:28:05 ha3 stonith-ng[4378]: notice: corosync_node_name: Unable to get node name for nodeid 168427534</font><br>
<font size="2" face="sans-serif">Jun 12 21:28:05 ha3 stonith-ng[4378]: notice: get_node_name: Could not obtain a node name for corosync nodeid 168427534</font><br>
<font size="2" face="sans-serif">Jun 12 21:28:05 ha3 stonith-ng[4378]: notice: corosync_node_name: Unable to get node name for nodeid 168427534</font><br>
<font size="2" face="sans-serif">Jun 12 21:28:05 ha3 stonith-ng[4378]: notice: get_node_name: Defaulting to uname -n for the local corosync node name</font><br>
<font size="2" face="sans-serif">Jun 12 21:28:05 ha3 cib[4377]: notice: corosync_node_name: Unable to get node name for nodeid 168427534</font><br>
<font size="2" face="sans-serif">Jun 12 21:28:05 ha3 cib[4377]: notice: get_node_name: Could not obtain a node name for corosync nodeid 168427534</font><br>
<font size="2" face="sans-serif">Jun 12 21:28:05 ha3 cib[4377]: notice: corosync_node_name: Unable to get node name for nodeid 168427534</font><br>
<font size="2" face="sans-serif">Jun 12 21:28:05 ha3 cib[4377]: notice: get_node_name: Defaulting to uname -n for the local corosync node name</font><br>
<font size="2" face="sans-serif">Jun 12 21:28:06 ha3 crmd[4382]: notice: crm_cluster_connect: Connecting to cluster infrastructure: corosync</font><br>
<font size="2" face="sans-serif">Jun 12 21:28:06 ha3 crmd[4382]: notice: corosync_node_name: Unable to get node name for nodeid 168427534</font><br>
<font size="2" face="sans-serif">Jun 12 21:28:06 ha3 crmd[4382]: notice: get_node_name: Could not obtain a node name for corosync nodeid 168427534</font><br>
<font size="2" face="sans-serif">Jun 12 21:28:06 ha3 crmd[4382]: notice: corosync_node_name: Unable to get node name for nodeid 168427534</font><br>
<font size="2" face="sans-serif">Jun 12 21:28:06 ha3 crmd[4382]: notice: get_node_name: Defaulting to uname -n for the local corosync node name</font><br>
<font size="2" face="sans-serif">Jun 12 21:28:06 ha3 stonith-ng[4378]: notice: setup_cib: Watching for stonith topology changes</font><br>
<font size="2" face="sans-serif">Jun 12 21:28:06 ha3 stonith-ng[4378]: notice: unpack_config: On loss of CCM Quorum: Ignore</font><br>
<font size="2" face="sans-serif">Jun 12 21:28:06 ha3 crmd[4382]: notice: cluster_connect_quorum: Quorum lost</font><br>
<font size="2" face="sans-serif">Jun 12 21:28:06 ha3 crmd[4382]: notice: crm_update_peer_state: pcmk_quorum_notification: Node ha3[168427534] - state is now member (was (null))</font><br>
<font size="2" face="sans-serif">Jun 12 21:28:06 ha3 crmd[4382]: notice: corosync_node_name: Unable to get node name for nodeid 168427534</font><br>
<font size="2" face="sans-serif">Jun 12 21:28:06 ha3 crmd[4382]: notice: get_node_name: Defaulting to uname -n for the local corosync node name</font><br>
<font size="2" face="sans-serif">Jun 12 21:28:06 ha3 crmd[4382]: notice: do_started: The local CRM is operational</font><br>
<font size="2" face="sans-serif">Jun 12 21:28:06 ha3 crmd[4382]: notice: do_state_transition: State transition S_STARTING -> S_PENDING [ input=I_PENDING cause=C_FSA_INTERNAL origin=do_started ]</font><br>
<font size="2" face="sans-serif">Jun 12 21:28:07 ha3 stonith-ng[4378]: notice: stonith_device_register: Added 'fencing_route_to_ha4' to the device list (1 active devices)</font><br>
<font size="2" face="sans-serif">Jun 12 21:28:10 ha3 pacemaker: Starting Pacemaker Cluster Manager[ OK ]</font><br>
<font size="2" face="sans-serif">Jun 12 21:28:10 ha3 systemd: Started LSB: Starts and stops Pacemaker Cluster Manager..</font><br>
<font size="2" face="sans-serif">Jun 12 21:28:27 ha3 crmd[4382]: warning: do_log: FSA: Input I_DC_TIMEOUT from crm_timer_popped() received in state S_PENDING</font><br>
<font size="2" face="sans-serif">Jun 12 21:28:27 ha3 crmd[4382]: notice: do_state_transition: State transition S_ELECTION -> S_INTEGRATION [ input=I_ELECTION_DC cause=C_TIMER_POPPED origin=election_timeout_popped ]</font><br>
<font size="2" face="sans-serif">Jun 12 21:28:27 ha3 crmd[4382]: warning: do_log: FSA: Input I_ELECTION_DC from do_election_check() received in state S_INTEGRATION</font><br>
<font size="2" face="sans-serif">Jun 12 21:28:27 ha3 cib[4377]: notice: corosync_node_name: Unable to get node name for nodeid 168427534</font><br>
<font size="2" face="sans-serif">Jun 12 21:28:27 ha3 cib[4377]: notice: get_node_name: Defaulting to uname -n for the local corosync node name</font><br>
<font size="2" face="sans-serif">Jun 12 21:28:27 ha3 attrd[4380]: notice: corosync_node_name: Unable to get node name for nodeid 168427534</font><br>
<font size="2" face="sans-serif">Jun 12 21:28:27 ha3 attrd[4380]: notice: get_node_name: Defaulting to uname -n for the local corosync node name</font><br>
<font size="2" face="sans-serif">Jun 12 21:28:27 ha3 attrd[4380]: notice: write_attribute: Sent update 2 with 1 changes for terminate, id=<n/a>, set=(null)</font><br>
<font size="2" face="sans-serif">Jun 12 21:28:27 ha3 attrd[4380]: notice: write_attribute: Sent update 3 with 1 changes for shutdown, id=<n/a>, set=(null)</font><br>
<font size="2" face="sans-serif">Jun 12 21:28:27 ha3 attrd[4380]: notice: attrd_cib_callback: Update 2 for terminate[ha3]=(null): OK (0)</font><br>
<font size="2" face="sans-serif">Jun 12 21:28:27 ha3 attrd[4380]: notice: attrd_cib_callback: Update 3 for shutdown[ha3]=0: OK (0)</font><br>
<font size="2" face="sans-serif">Jun 12 21:28:27 ha3 pengine[4381]: notice: unpack_config: On loss of CCM Quorum: Ignore</font><br>
<font size="2" face="sans-serif">Jun 12 21:28:27 ha3 pengine[4381]: warning: stage6: Scheduling Node ha4 for STONITH</font><br>
<font size="2" face="sans-serif">Jun 12 21:28:27 ha3 pengine[4381]: notice: LogActions: Start ha3_fabric_ping        (ha3)</font><br>
<font size="2" face="sans-serif">Jun 12 21:28:27 ha3 pengine[4381]: notice: LogActions: Start fencing_route_to_ha4        (ha3)</font><br>
<font size="2" face="sans-serif">Jun 12 21:28:27 ha3 pengine[4381]: warning: process_pe_message: Calculated Transition 0: /var/lib/pacemaker/pengine/pe-warn-89.bz2</font><br>
<font size="2" face="sans-serif">Jun 12 21:28:27 ha3 pengine[4381]: notice: unpack_config: On loss of CCM Quorum: Ignore</font><br>
<font size="2" face="sans-serif">Jun 12 21:28:27 ha3 pengine[4381]: warning: stage6: Scheduling Node ha4 for STONITH</font><br>
<font size="2" face="sans-serif">Jun 12 21:28:27 ha3 pengine[4381]: notice: LogActions: Start ha3_fabric_ping        (ha3)</font><br>
<font size="2" face="sans-serif">Jun 12 21:28:27 ha3 pengine[4381]: notice: LogActions: Start fencing_route_to_ha4        (ha3)</font><br>
<font size="2" face="sans-serif">Jun 12 21:28:27 ha3 pengine[4381]: warning: process_pe_message: Calculated Transition 1: /var/lib/pacemaker/pengine/pe-warn-90.bz2</font><br>
<font size="2" face="sans-serif">Jun 12 21:28:27 ha3 crmd[4382]: notice: te_rsc_command: Initiating action 4: monitor ha3_fabric_ping_monitor_0 on ha3 (local)</font><br>
<font size="2" face="sans-serif">Jun 12 21:28:27 ha3 crmd[4382]: notice: te_fence_node: Executing reboot fencing operation (12) on ha4 (timeout=60000)</font><br>
<font size="2" face="sans-serif">Jun 12 21:28:27 ha3 stonith-ng[4378]: notice: handle_request: Client crmd.4382.407ee05a wants to fence (reboot) 'ha4' with device '(any)'</font><br>
<font size="2" face="sans-serif">Jun 12 21:28:27 ha3 stonith-ng[4378]: notice: initiate_remote_stonith_op: Initiating remote operation reboot for ha4: eefd3564-988e-408a-a423-1be83ef5bdbc (0)</font><br>
<font size="2" face="sans-serif">Jun 12 21:28:27 ha3 stonith-ng[4378]: notice: corosync_node_name: Unable to get node name for nodeid 168427534</font><br>
<font size="2" face="sans-serif">Jun 12 21:28:27 ha3 stonith-ng[4378]: notice: get_node_name: Defaulting to uname -n for the local corosync node name</font><br>
<font size="2" face="sans-serif">Jun 12 21:28:27 ha3 stonith: [4393]: info: parse config info info=ha4</font><br>
<font size="2" face="sans-serif">Jun 12 21:28:27 ha3 stonith-ng[4378]: notice: can_fence_host_with_device: fencing_route_to_ha4 can fence ha4: dynamic-list</font><br>
<font size="2" face="sans-serif">Jun 12 21:28:27 ha3 stonith: [4398]: info: parse config info info=ha4</font><br>
<font size="2" face="sans-serif">Jun 12 21:28:27 ha3 stonith: [4398]: CRIT: OPERATOR INTERVENTION REQUIRED to reset ha4.</font><br>
<font size="2" face="sans-serif">Jun 12 21:28:27 ha3 stonith: [4398]: CRIT: Run "meatclient -c ha4" AFTER power-cycling the machine.</font><br>
<font size="2" face="sans-serif">Jun 12 21:28:27 ha3 crmd[4382]: notice: process_lrm_event: LRM operation ha3_fabric_ping_monitor_0 (call=5, rc=7, cib-update=25, confirmed=true) not running</font><br>
<font size="2" face="sans-serif">Jun 12 21:28:27 ha3 crmd[4382]: notice: te_rsc_command: Initiating action 5: monitor ha4_fabric_ping_monitor_0 on ha3 (local)</font><br>
<font size="2" face="sans-serif">Jun 12 21:28:27 ha3 crmd[4382]: notice: process_lrm_event: LRM operation ha4_fabric_ping_monitor_0 (call=9, rc=7, cib-update=26, confirmed=true) not running</font><br>
<font size="2" face="sans-serif">Jun 12 21:28:27 ha3 crmd[4382]: notice: te_rsc_command: Initiating action 6: monitor fencing_route_to_ha3_monitor_0 on ha3 (local)</font><br>
<font size="2" face="sans-serif">Jun 12 21:28:27 ha3 crmd[4382]: notice: te_rsc_command: Initiating action 7: monitor fencing_route_to_ha4_monitor_0 on ha3 (local)</font><br>
<font size="2" face="sans-serif">Jun 12 21:28:27 ha3 crmd[4382]: notice: te_rsc_command: Initiating action 3: probe_complete probe_complete on ha3 (local) - no waiting</font><br>
<font size="2" face="sans-serif">Jun 12 21:28:27 ha3 attrd[4380]: notice: write_attribute: Sent update 4 with 1 changes for probe_complete, id=<n/a>, set=(null)</font><br>
<font size="2" face="sans-serif">Jun 12 21:28:27 ha3 attrd[4380]: notice: attrd_cib_callback: Update 4 for probe_complete[ha3]=true: OK (0)</font><br>
<font size="2" face="sans-serif">Jun 12 21:29:27 ha3 stonith-ng[4378]: notice: stonith_action_async_done: Child process 4395 performing action 'reboot' timed out with signal 15</font><br>
<font size="2" face="sans-serif">Jun 12 21:29:27 ha3 stonith-ng[4378]: error: log_operation: Operation 'reboot' [4395] (call 2 from crmd.4382) for host 'ha4' with device 'fencing_route_to_ha4' returned: -62 (Timer expired)</font><br>
<font size="2" face="sans-serif">Jun 12 21:29:27 ha3 stonith-ng[4378]: warning: log_operation: fencing_route_to_ha4:4395 [ Performing: stonith -t meatware -T reset ha4 ]</font><br>
<font size="2" face="sans-serif">Jun 12 21:29:27 ha3 stonith-ng[4378]: notice: stonith_choose_peer: Couldn't find anyone to fence ha4 with <any></font><br>
<font size="2" face="sans-serif">Jun 12 21:29:27 ha3 stonith-ng[4378]: error: remote_op_done: Operation reboot of ha4 by ha3 for crmd.4382@ha3.eefd3564: No route to host</font><br>
<font size="2" face="sans-serif">Jun 12 21:29:27 ha3 crmd[4382]: notice: tengine_stonith_callback: Stonith operation 2/12:1:0:a2fa2eff-30ee-4f05-a458-7d23b0fa4c95: No route to host (-113)</font><br>
<font size="2" face="sans-serif">Jun 12 21:29:27 ha3 crmd[4382]: notice: tengine_stonith_callback: Stonith operation 2 for ha4 failed (No route to host): aborting transition.</font><br>
<font size="2" face="sans-serif">Jun 12 21:29:27 ha3 crmd[4382]: notice: tengine_stonith_notify: Peer ha4 was not terminated (reboot) by ha3 for ha3: No route to host (ref=eefd3564-988e-408a-a423-1be83ef5bdbc) by client crmd.4382</font><br>
<font size="2" face="sans-serif">Jun 12 21:29:27 ha3 crmd[4382]: notice: run_graph: Transition 1 (Complete=7, Pending=0, Fired=0, Skipped=5, Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-warn-90.bz2): Stopped</font><br>
<font size="2" face="sans-serif">Jun 12 21:29:27 ha3 pengine[4381]: notice: unpack_config: On loss of CCM Quorum: Ignore</font><br>
<font size="2" face="sans-serif">Jun 12 21:29:27 ha3 pengine[4381]: warning: stage6: Scheduling Node ha4 for STONITH</font><br>
<font size="2" face="sans-serif">Jun 12 21:29:27 ha3 pengine[4381]: notice: LogActions: Start ha3_fabric_ping        (ha3)</font><br>
<font size="2" face="sans-serif">Jun 12 21:29:27 ha3 pengine[4381]: notice: LogActions: Start fencing_route_to_ha4        (ha3)</font><br>
<font size="2" face="sans-serif">Jun 12 21:29:27 ha3 pengine[4381]: warning: process_pe_message: Calculated Transition 2: /var/lib/pacemaker/pengine/pe-warn-91.bz2</font><br>
<font size="2" face="sans-serif">Jun 12 21:29:27 ha3 crmd[4382]: notice: te_fence_node: Executing reboot fencing operation (8) on ha4 (timeout=60000)</font><br>
<font size="2" face="sans-serif">Jun 12 21:29:27 ha3 stonith-ng[4378]: notice: handle_request: Client crmd.4382.407ee05a wants to fence (reboot) 'ha4' with device '(any)'</font><br>
<font size="2" face="sans-serif">Jun 12 21:29:27 ha3 stonith-ng[4378]: notice: initiate_remote_stonith_op: Initiating remote operation reboot for ha4: e5bee870-55de-4f22-b104-74556075cc99 (0)</font><br>
<font size="2" face="sans-serif">Jun 12 21:29:27 ha3 stonith-ng[4378]: notice: can_fence_host_with_device: fencing_route_to_ha4 can fence ha4: dynamic-list</font><br>
<font size="2" face="sans-serif">Jun 12 21:29:27 ha3 stonith-ng[4378]: notice: can_fence_host_with_device: fencing_route_to_ha4 can fence ha4: dynamic-list</font><br>
<font size="2" face="sans-serif">Jun 12 21:29:27 ha3 stonith: [4426]: info: parse config info info=ha4</font><br>
<font size="2" face="sans-serif">Jun 12 21:29:27 ha3 stonith: [4426]: CRIT: OPERATOR INTERVENTION REQUIRED to reset ha4.</font><br>
<font size="2" face="sans-serif">Jun 12 21:29:27 ha3 stonith: [4426]: CRIT: Run "meatclient -c ha4" AFTER power-cycling the machine.</font><br>
<font size="2" face="sans-serif">Jun 12 21:30:16 ha3 stonith: [4426]: info: node Meatware-reset: ha4</font><br>
<font size="2" face="sans-serif">Jun 12 21:30:16 ha3 stonith-ng[4378]: notice: log_operation: Operation 'reboot' [4425] (call 3 from crmd.4382) for host 'ha4' with device 'fencing_route_to_ha4' returned: 0 (OK)</font><br>
<font size="2" face="sans-serif">Jun 12 21:30:16 ha3 stonith-ng[4378]: notice: remote_op_done: Operation reboot of ha4 by ha3 for crmd.4382@ha3.e5bee870: OK</font><br>
<font size="2" face="sans-serif">Jun 12 21:30:16 ha3 crmd[4382]: notice: tengine_stonith_callback: Stonith operation 3/8:2:0:a2fa2eff-30ee-4f05-a458-7d23b0fa4c95: OK (0)</font><br>
<font size="2" face="sans-serif">Jun 12 21:30:16 ha3 crmd[4382]: notice: crm_update_peer_state: send_stonith_update: Node ha4[0] - state is now lost (was (null))</font><br>
<font size="2" face="sans-serif">Jun 12 21:30:16 ha3 crmd[4382]: notice: tengine_stonith_notify: Peer ha4 was terminated (reboot) by ha3 for ha3: OK (ref=e5bee870-55de-4f22-b104-74556075cc99) by client crmd.4382</font><br>
<font size="2" face="sans-serif">Jun 12 21:30:16 ha3 crmd[4382]: notice: te_rsc_command: Initiating action 4: start ha3_fabric_ping_start_0 on ha3 (local)</font><br>
<font size="2" face="sans-serif">Jun 12 21:30:36 ha3 attrd[4380]: notice: write_attribute: Sent update 5 with 1 changes for pingd, id=<n/a>, set=(null)</font><br>
<font size="2" face="sans-serif">Jun 12 21:30:36 ha3 attrd[4380]: notice: attrd_cib_callback: Update 5 for pingd[ha3]=0: OK (0)</font><br>
<font size="2" face="sans-serif">Jun 12 21:30:36 ha3 ping(ha3_fabric_ping)[4429]: WARNING: pingd is less than failure_score(1)</font><br>
<font size="2" face="sans-serif">Jun 12 21:30:36 ha3 crmd[4382]: notice: process_lrm_event: LRM operation ha3_fabric_ping_start_0 (call=18, rc=1, cib-update=37, confirmed=true) unknown error</font><br>
<font size="2" face="sans-serif">Jun 12 21:30:36 ha3 crmd[4382]: warning: status_from_rc: Action 4 (ha3_fabric_ping_start_0) on ha3 failed (target: 0 vs. rc: 1): Error</font><br>
<font size="2" face="sans-serif">Jun 12 21:30:36 ha3 crmd[4382]: warning: update_failcount: Updating failcount for ha3_fabric_ping on ha3 after failed start: rc=1 (update=INFINITY, time=1402626636)</font><br>
<font size="2" face="sans-serif">Jun 12 21:30:36 ha3 crmd[4382]: warning: update_failcount: Updating failcount for ha3_fabric_ping on ha3 after failed start: rc=1 (update=INFINITY, time=1402626636)</font><br>
<font size="2" face="sans-serif">Jun 12 21:30:36 ha3 crmd[4382]: notice: run_graph: Transition 2 (Complete=4, Pending=0, Fired=0, Skipped=2, Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-warn-91.bz2): Stopped</font><br>
<font size="2" face="sans-serif">Jun 12 21:30:36 ha3 attrd[4380]: notice: write_attribute: Sent update 6 with 1 changes for fail-count-ha3_fabric_ping, id=<n/a>, set=(null)</font><br>
<font size="2" face="sans-serif">Jun 12 21:30:36 ha3 attrd[4380]: notice: write_attribute: Sent update 7 with 1 changes for last-failure-ha3_fabric_ping, id=<n/a>, set=(null)</font><br>
<font size="2" face="sans-serif">Jun 12 21:30:36 ha3 pengine[4381]: notice: unpack_config: On loss of CCM Quorum: Ignore</font><br>
<font size="2" face="sans-serif">Jun 12 21:30:36 ha3 pengine[4381]: warning: unpack_rsc_op_failure: Processing failed op start for ha3_fabric_ping on ha3: unknown error (1)</font><br>
<font size="2" face="sans-serif">Jun 12 21:30:36 ha3 pengine[4381]: notice: LogActions: Recover ha3_fabric_ping        (Started ha3)</font><br>
<font size="2" face="sans-serif">Jun 12 21:30:36 ha3 pengine[4381]: notice: LogActions: Start fencing_route_to_ha4        (ha3)</font><br>
<font size="2" face="sans-serif">Jun 12 21:30:36 ha3 pengine[4381]: notice: process_pe_message: Calculated Transition 3: /var/lib/pacemaker/pengine/pe-input-350.bz2</font><br>
<font size="2" face="sans-serif">Jun 12 21:30:36 ha3 attrd[4380]: notice: attrd_cib_callback: Update 6 for fail-count-ha3_fabric_ping[ha3]=INFINITY: OK (0)</font><br>
<font size="2" face="sans-serif">Jun 12 21:30:36 ha3 attrd[4380]: notice: attrd_cib_callback: Update 7 for last-failure-ha3_fabric_ping[ha3]=1402626636: OK (0)</font><br>
<font size="2" face="sans-serif">Jun 12 21:30:36 ha3 pengine[4381]: notice: unpack_config: On loss of CCM Quorum: Ignore</font><br>
<font size="2" face="sans-serif">Jun 12 21:30:36 ha3 pengine[4381]: warning: unpack_rsc_op_failure: Processing failed op start for ha3_fabric_ping on ha3: unknown error (1)</font><br>
<font size="2" face="sans-serif">Jun 12 21:30:36 ha3 pengine[4381]: notice: LogActions: Recover ha3_fabric_ping        (Started ha3)</font><br>
<font size="2" face="sans-serif">Jun 12 21:30:36 ha3 pengine[4381]: notice: LogActions: Start fencing_route_to_ha4        (ha3)</font><br>
<font size="2" face="sans-serif">Jun 12 21:30:36 ha3 pengine[4381]: notice: process_pe_message: Calculated Transition 4: /var/lib/pacemaker/pengine/pe-input-351.bz2</font><br>
<font size="2" face="sans-serif">Jun 12 21:30:36 ha3 crmd[4382]: notice: te_rsc_command: Initiating action 1: stop ha3_fabric_ping_stop_0 on ha3 (local)</font><br>
<font size="2" face="sans-serif">Jun 12 21:30:36 ha3 crmd[4382]: notice: process_lrm_event: LRM operation ha3_fabric_ping_stop_0 (call=19, rc=0, cib-update=41, confirmed=true) ok</font><br>
<font size="2" face="sans-serif">Jun 12 21:30:36 ha3 crmd[4382]: notice: te_rsc_command: Initiating action 5: start ha3_fabric_ping_start_0 on ha3 (local)</font><br>
<font size="2" face="sans-serif">Jun 12 21:30:41 ha3 attrd[4380]: notice: write_attribute: Sent update 8 with 1 changes for pingd, id=<n/a>, set=(null)</font><br>
<font size="2" face="sans-serif">Jun 12 21:30:41 ha3 attrd[4380]: notice: attrd_cib_callback: Update 8 for pingd[ha3]=(null): OK (0)</font><br>
<font size="2" face="sans-serif">Jun 12 21:30:56 ha3 ping(ha3_fabric_ping)[4476]: WARNING: pingd is less than failure_score(1)</font><br>
<font size="2" face="sans-serif">Jun 12 21:30:56 ha3 crmd[4382]: notice: process_lrm_event: LRM operation ha3_fabric_ping_start_0 (call=20, rc=1, cib-update=42, confirmed=true) unknown error</font><br>
<font size="2" face="sans-serif">Jun 12 21:30:56 ha3 crmd[4382]: warning: status_from_rc: Action 5 (ha3_fabric_ping_start_0) on ha3 failed (target: 0 vs. rc: 1): Error</font><br>
<font size="2" face="sans-serif">Jun 12 21:30:56 ha3 crmd[4382]: warning: update_failcount: Updating failcount for ha3_fabric_ping on ha3 after failed start: rc=1 (update=INFINITY, time=1402626656)</font><br>
<font size="2" face="sans-serif">Jun 12 21:30:56 ha3 crmd[4382]: warning: update_failcount: Updating failcount for ha3_fabric_ping on ha3 after failed start: rc=1 (update=INFINITY, time=1402626656)</font><br>
<font size="2" face="sans-serif">Jun 12 21:30:56 ha3 crmd[4382]: notice: run_graph: Transition 4 (Complete=3, Pending=0, Fired=0, Skipped=2, Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-input-351.bz2): Stopped</font><br>
<font size="2" face="sans-serif">Jun 12 21:30:56 ha3 attrd[4380]: notice: write_attribute: Sent update 9 with 1 changes for last-failure-ha3_fabric_ping, id=<n/a>, set=(null)</font><br>
<font size="2" face="sans-serif">Jun 12 21:30:56 ha3 pengine[4381]: notice: unpack_config: On loss of CCM Quorum: Ignore</font><br>
<font size="2" face="sans-serif">Jun 12 21:30:56 ha3 pengine[4381]: warning: unpack_rsc_op_failure: Processing failed op start for ha3_fabric_ping on ha3: unknown error (1)</font><br>
<font size="2" face="sans-serif">Jun 12 21:30:56 ha3 pengine[4381]: notice: LogActions: Recover ha3_fabric_ping        (Started ha3)</font><br>
<font size="2" face="sans-serif">Jun 12 21:30:56 ha3 pengine[4381]: notice: LogActions: Start fencing_route_to_ha4        (ha3)</font><br>
<font size="2" face="sans-serif">Jun 12 21:30:56 ha3 pengine[4381]: notice: process_pe_message: Calculated Transition 5: /var/lib/pacemaker/pengine/pe-input-352.bz2</font><br>
<font size="2" face="sans-serif">Jun 12 21:30:56 ha3 attrd[4380]: notice: attrd_cib_callback: Update 9 for last-failure-ha3_fabric_ping[ha3]=1402626656: OK (0)</font><br>
<font size="2" face="sans-serif">Jun 12 21:30:56 ha3 pengine[4381]: notice: unpack_config: On loss of CCM Quorum: Ignore</font><br>
<font size="2" face="sans-serif">Jun 12 21:30:56 ha3 pengine[4381]: warning: unpack_rsc_op_failure: Processing failed op start for ha3_fabric_ping on ha3: unknown error (1)</font><br>
<font size="2" face="sans-serif">Jun 12 21:30:56 ha3 pengine[4381]: notice: LogActions: Recover ha3_fabric_ping        (Started ha3)</font><br>
<font size="2" face="sans-serif">Jun 12 21:30:56 ha3 pengine[4381]: notice: LogActions: Start fencing_route_to_ha4        (ha3)</font><br>
<font size="2" face="sans-serif">Jun 12 21:30:56 ha3 pengine[4381]: notice: process_pe_message: Calculated Transition 6: /var/lib/pacemaker/pengine/pe-input-353.bz2</font><br>
<font size="2" face="sans-serif">Jun 12 21:30:56 ha3 crmd[4382]: notice: te_rsc_command: Initiating action 1: stop ha3_fabric_ping_stop_0 on ha3 (local)</font><br>
<font size="2" face="sans-serif">Jun 12 21:30:56 ha3 crmd[4382]: notice: process_lrm_event: LRM operation ha3_fabric_ping_stop_0 (call=21, rc=0, cib-update=45, confirmed=true) ok</font><br>
<font size="2" face="sans-serif">Jun 12 21:30:56 ha3 crmd[4382]: notice: te_rsc_command: Initiating action 5: start ha3_fabric_ping_start_0 on ha3 (local)</font><br>
<font size="2" face="sans-serif">Jun 12 21:31:01 ha3 attrd[4380]: notice: write_attribute: Sent update 10 with 1 changes for pingd, id=<n/a>, set=(null)</font><br>
<font size="2" face="sans-serif">Jun 12 21:31:01 ha3 attrd[4380]: notice: attrd_cib_callback: Update 10 for pingd[ha3]=(null): OK (0)</font><br>
<font size="2" face="sans-serif">Jun 12 21:31:16 ha3 ping(ha3_fabric_ping)[4522]: WARNING: pingd is less than failure_score(1)</font><br>
<font size="2" face="sans-serif">Jun 12 21:31:16 ha3 crmd[4382]: notice: process_lrm_event: LRM operation ha3_fabric_ping_start_0 (call=22, rc=1, cib-update=46, confirmed=true) unknown error</font><br>
<font size="2" face="sans-serif">Jun 12 21:31:16 ha3 crmd[4382]: warning: status_from_rc: Action 5 (ha3_fabric_ping_start_0) on ha3 failed (target: 0 vs. rc: 1): Error</font><br>
<font size="2" face="sans-serif">Jun 12 21:31:16 ha3 crmd[4382]: warning: update_failcount: Updating failcount for ha3_fabric_ping on ha3 after failed start: rc=1 (update=INFINITY, time=1402626676)</font><br>
<font size="2" face="sans-serif">Jun 12 21:31:16 ha3 crmd[4382]: warning: update_failcount: Updating failcount for ha3_fabric_ping on ha3 after failed start: rc=1 (update=INFINITY, time=1402626676)</font><br>
<font size="2" face="sans-serif">Jun 12 21:31:16 ha3 crmd[4382]: notice: run_graph: Transition 6 (Complete=3, Pending=0, Fired=0, Skipped=2, Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-input-353.bz2): Stopped</font><br>
<font size="2" face="sans-serif">Jun 12 21:31:16 ha3 attrd[4380]: notice: write_attribute: Sent update 11 with 1 changes for last-failure-ha3_fabric_ping, id=<n/a>, set=(null)</font><br>
<font size="2" face="sans-serif">Jun 12 21:31:16 ha3 pengine[4381]: notice: unpack_config: On loss of CCM Quorum: Ignore</font><br>
<font size="2" face="sans-serif">Jun 12 21:31:16 ha3 pengine[4381]: warning: unpack_rsc_op_failure: Processing failed op start for ha3_fabric_ping on ha3: unknown error (1)</font><br>
<font size="2" face="sans-serif">Jun 12 21:31:16 ha3 pengine[4381]: notice: LogActions: Recover ha3_fabric_ping        (Started ha3)</font><br>
<font size="2" face="sans-serif">Jun 12 21:31:16 ha3 pengine[4381]: notice: LogActions: Start fencing_route_to_ha4        (ha3)</font><br>
<font size="2" face="sans-serif">Jun 12 21:31:16 ha3 pengine[4381]: notice: process_pe_message: Calculated Transition 7: /var/lib/pacemaker/pengine/pe-input-354.bz2</font><br>
<font size="2" face="sans-serif">Jun 12 21:31:16 ha3 attrd[4380]: notice: attrd_cib_callback: Update 11 for last-failure-ha3_fabric_ping[ha3]=1402626676: OK (0)</font><br>
<font size="2" face="sans-serif">Jun 12 21:31:16 ha3 pengine[4381]: notice: unpack_config: On loss of CCM Quorum: Ignore</font><br>
<font size="2" face="sans-serif">Jun 12 21:31:16 ha3 pengine[4381]: warning: unpack_rsc_op_failure: Processing failed op start for ha3_fabric_ping on ha3: unknown error (1)</font><br>
<font size="2" face="sans-serif">Jun 12 21:31:16 ha3 pengine[4381]: notice: LogActions: Recover ha3_fabric_ping        (Started ha3)</font><br>
<font size="2" face="sans-serif">Jun 12 21:31:16 ha3 pengine[4381]: notice: LogActions: Start fencing_route_to_ha4        (ha3)</font><br>
<font size="2" face="sans-serif">Jun 12 21:31:16 ha3 pengine[4381]: notice: process_pe_message: Calculated Transition 8: /var/lib/pacemaker/pengine/pe-input-355.bz2</font><br>
<font size="2" face="sans-serif">Jun 12 21:31:16 ha3 crmd[4382]: notice: te_rsc_command: Initiating action 1: stop ha3_fabric_ping_stop_0 on ha3 (local)</font><br>
<font size="2" face="sans-serif">Jun 12 21:31:16 ha3 crmd[4382]: notice: process_lrm_event: LRM operation ha3_fabric_ping_stop_0 (call=23, rc=0, cib-update=49, confirmed=true) ok</font><br>
<font size="2" face="sans-serif">Jun 12 21:31:16 ha3 crmd[4382]: notice: te_rsc_command: Initiating action 5: start ha3_fabric_ping_start_0 on ha3 (local)</font><br>
<font size="2" face="sans-serif">Jun 12 21:31:21 ha3 attrd[4380]: notice: write_attribute: Sent update 12 with 1 changes for pingd, id=<n/a>, set=(null)</font><br>
<font size="2" face="sans-serif">Jun 12 21:31:21 ha3 attrd[4380]: notice: attrd_cib_callback: Update 12 for pingd[ha3]=(null): OK (0)</font><br>
<font size="2" face="sans-serif">Jun 12 21:31:36 ha3 ping(ha3_fabric_ping)[4568]: WARNING: pingd is less than failure_score(1)</font><br>
<font size="2" face="sans-serif">Jun 12 21:31:36 ha3 crmd[4382]: notice: process_lrm_event: LRM operation ha3_fabric_ping_start_0 (call=24, rc=1, cib-update=50, confirmed=true) unknown error</font><br>
<font size="2" face="sans-serif">Jun 12 21:31:36 ha3 crmd[4382]: warning: status_from_rc: Action 5 (ha3_fabric_ping_start_0) on ha3 failed (target: 0 vs. rc: 1): Error</font><br>
<font size="2" face="sans-serif">Jun 12 21:31:36 ha3 crmd[4382]: warning: update_failcount: Updating failcount for ha3_fabric_ping on ha3 after failed start: rc=1 (update=INFINITY, time=1402626696)</font><br>
<font size="2" face="sans-serif">Jun 12 21:31:36 ha3 crmd[4382]: warning: update_failcount: Updating failcount for ha3_fabric_ping on ha3 after failed start: rc=1 (update=INFINITY, time=1402626696)</font><br>
<font size="2" face="sans-serif">Jun 12 21:31:36 ha3 crmd[4382]: notice: run_graph: Transition 8 (Complete=3, Pending=0, Fired=0, Skipped=2, Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-input-355.bz2): Stopped</font><br>
<font size="2" face="sans-serif">Jun 12 21:31:36 ha3 attrd[4380]: notice: write_attribute: Sent update 13 with 1 changes for last-failure-ha3_fabric_ping, id=<n/a>, set=(null)</font><br>
<font size="2" face="sans-serif">Jun 12 21:31:36 ha3 pengine[4381]: notice: unpack_config: On loss of CCM Quorum: Ignore</font><br>
<font size="2" face="sans-serif">Jun 12 21:31:36 ha3 pengine[4381]: warning: unpack_rsc_op_failure: Processing failed op start for ha3_fabric_ping on ha3: unknown error (1)</font><br>
<font size="2" face="sans-serif">Jun 12 21:31:36 ha3 pengine[4381]: notice: LogActions: Recover ha3_fabric_ping        (Started ha3)</font><br>
<font size="2" face="sans-serif">Jun 12 21:31:36 ha3 pengine[4381]: notice: LogActions: Start fencing_route_to_ha4        (ha3)</font><br>
<font size="2" face="sans-serif">Jun 12 21:31:36 ha3 pengine[4381]: notice: process_pe_message: Calculated Transition 9: /var/lib/pacemaker/pengine/pe-input-356.bz2</font><br>
<font size="2" face="sans-serif">Jun 12 21:31:36 ha3 attrd[4380]: notice: attrd_cib_callback: Update 13 for last-failure-ha3_fabric_ping[ha3]=1402626696: OK (0)</font><br>
<font size="2" face="sans-serif">Jun 12 21:31:36 ha3 pengine[4381]: notice: unpack_config: On loss of CCM Quorum: Ignore</font><br>
<font size="2" face="sans-serif">Jun 12 21:31:36 ha3 pengine[4381]: warning: unpack_rsc_op_failure: Processing failed op start for ha3_fabric_ping on ha3: unknown error (1)</font><br>
<font size="2" face="sans-serif">Jun 12 21:31:36 ha3 pengine[4381]: notice: LogActions: Recover ha3_fabric_ping        (Started ha3)</font><br>
<font size="2" face="sans-serif">Jun 12 21:31:36 ha3 pengine[4381]: notice: LogActions: Start fencing_route_to_ha4        (ha3)</font><br>
<font size="2" face="sans-serif">Jun 12 21:31:36 ha3 pengine[4381]: notice: process_pe_message: Calculated Transition 10: /var/lib/pacemaker/pengine/pe-input-357.bz2</font><br>
<font size="2" face="sans-serif">Jun 12 21:31:36 ha3 crmd[4382]: notice: te_rsc_command: Initiating action 1: stop ha3_fabric_ping_stop_0 on ha3 (local)</font><br>
<font size="2" face="sans-serif">Jun 12 21:31:36 ha3 crmd[4382]: notice: process_lrm_event: LRM operation ha3_fabric_ping_stop_0 (call=25, rc=0, cib-update=53, confirmed=true) ok</font><br>
<br>
<font size="2" face="sans-serif">Paul Cain</font><br>
<br>
<br>
<font size="1" color="#5F5F5F" face="sans-serif">From:        </font><font size="1" face="sans-serif">Andrew Beekhof <andrew@beekhof.net></font><br>
<font size="1" color="#5F5F5F" face="sans-serif">To:        </font><font size="1" face="sans-serif">The Pacemaker cluster resource manager <pacemaker@oss.clusterlabs.org></font><br>
<font size="1" color="#5F5F5F" face="sans-serif">Date:        </font><font size="1" face="sans-serif">06/12/2014 06:53 PM</font><br>
<font size="1" color="#5F5F5F" face="sans-serif">Subject:        </font><font size="1" face="sans-serif">Re: [Pacemaker] When stonith is enabled,        resources won't start until after stonith,        even though requires="nothing" and prereq="nothing" on RHEL 7        with        pacemaker-1.1.11 compiled from source.</font><br>
<hr width="100%" size="2" align="left" noshade style="color:#8091A5; "><br>
<br>
<br>
<br>
<tt><font size="2">> > </crm_config><br>
> > <nodes><br>
> > <node id="168427534" uname="ha3"/><br>
> > <node id="168427535" uname="ha4"/><br>
> > </nodes><br>
> > <resources><br>
> > <primitive id="ha3_fabric_ping" class="ocf" provider="pacemaker" type="ping"><br>
> > <instance_attributes id="ha3_fabric_ping-instance_attributes"><br>
> > <nvpair name="host_list" value="10.10.0.1" id="ha3_fabric_ping-instance_attributes-host_list"/><br>
> > <nvpair name="failure_score" value="1" id="ha3_fabric_ping-instance_attributes-failure_score"/><br>
> > </instance_attributes><br>
> > <operations><br>
> > <op name="start" timeout="60s" requires="nothing" on-fail="standby" interval="0" id="ha3_fabric_ping-start-0"><br>
> > <instance_attributes id="ha3_fabric_ping-start-0-instance_attributes"><br>
> > <nvpair name="prereq" value="nothing" id="ha3_fabric_ping-start-0-instance_attributes-prereq"/><br>
> > </instance_attributes><br>
> > </op><br>
> > <op name="monitor" interval="15s" requires="nothing" on-fail="standby" timeout="15s" id="ha3_fabric_ping-monitor-15s"><br>
> > <instance_attributes id="ha3_fabric_ping-monitor-15s-instance_attributes"><br>
> > <nvpair name="prereq" value="nothing" id="ha3_fabric_ping-monitor-15s-instance_attributes-prereq"/><br>
> > </instance_attributes><br>
> > </op><br>
> > <op name="stop" on-fail="fence" requires="nothing" interval="0" id="ha3_fabric_ping-stop-0"><br>
> > <instance_attributes id="ha3_fabric_ping-stop-0-instance_attributes"><br>
> > <nvpair name="prereq" value="nothing" id="ha3_fabric_ping-stop-0-instance_attributes-prereq"/><br>
> > </instance_attributes><br>
> > </op><br>
> > </operations><br>
> > <meta_attributes id="ha3_fabric_ping-meta_attributes"><br>
> > <nvpair id="ha3_fabric_ping-meta_attributes-requires" name="requires" value="nothing"/><br>
> > </meta_attributes><br>
> > </primitive><br>
> > <primitive id="ha4_fabric_ping" class="ocf" provider="pacemaker" type="ping"><br>
> > <instance_attributes id="ha4_fabric_ping-instance_attributes"><br>
> > <nvpair name="host_list" value="10.10.0.1" id="ha4_fabric_ping-instance_attributes-host_list"/><br>
> > <nvpair name="failure_score" value="1" id="ha4_fabric_ping-instance_attributes-failure_score"/><br>
> > </instance_attributes><br>
> > <operations><br>
> > <op name="start" timeout="60s" requires="nothing" on-fail="standby" interval="0" id="ha4_fabric_ping-start-0"><br>
> > <instance_attributes id="ha4_fabric_ping-start-0-instance_attributes"><br>
> > <nvpair name="prereq" value="nothing" id="ha4_fabric_ping-start-0-instance_attributes-prereq"/><br>
> > </instance_attributes><br>
> > </op><br>
> > <op name="monitor" interval="15s" requires="nothing" on-fail="standby" timeout="15s" id="ha4_fabric_ping-monitor-15s"><br>
> > <instance_attributes id="ha4_fabric_ping-monitor-15s-instance_attributes"><br>
> > <nvpair name="prereq" value="nothing" id="ha4_fabric_ping-monitor-15s-instance_attributes-prereq"/><br>
> > </instance_attributes><br>
> > </op><br>
> > <op name="stop" on-fail="fence" requires="nothing" interval="0" id="ha4_fabric_ping-stop-0"><br>
> > <instance_attributes id="ha4_fabric_ping-stop-0-instance_attributes"><br>
> > <nvpair name="prereq" value="nothing" id="ha4_fabric_ping-stop-0-instance_attributes-prereq"/><br>
> > </instance_attributes><br>
> > </op><br>
> > </operations><br>
> > <meta_attributes id="ha4_fabric_ping-meta_attributes"><br>
> > <nvpair id="ha4_fabric_ping-meta_attributes-requires" name="requires" value="nothing"/><br>
> > </meta_attributes><br>
> > </primitive><br>
> > <primitive id="fencing_route_to_ha3" class="stonith" type="meatware"><br>
> > <instance_attributes id="fencing_route_to_ha3-instance_attributes"><br>
> > <nvpair name="hostlist" value="ha3" id="fencing_route_to_ha3-instance_attributes-hostlist"/><br>
> > </instance_attributes><br>
> > <operations><br>
> > <op name="start" requires="nothing" interval="0" id="fencing_route_to_ha3-start-0"><br>
> > <instance_attributes id="fencing_route_to_ha3-start-0-instance_attributes"><br>
> > <nvpair name="prereq" value="nothing" id="fencing_route_to_ha3-start-0-instance_attributes-prereq"/><br>
> > </instance_attributes><br>
> > </op><br>
> > <op name="monitor" requires="nothing" interval="0" id="fencing_route_to_ha3-monitor-0"><br>
> > <instance_attributes id="fencing_route_to_ha3-monitor-0-instance_attributes"><br>
> > <nvpair name="prereq" value="nothing" id="fencing_route_to_ha3-monitor-0-instance_attributes-prereq"/><br>
> > </instance_attributes><br>
> > </op><br>
> > </operations><br>
> > </primitive><br>
> > <primitive id="fencing_route_to_ha4" class="stonith" type="meatware"><br>
> > <instance_attributes id="fencing_route_to_ha4-instance_attributes"><br>
> > <nvpair name="hostlist" value="ha4" id="fencing_route_to_ha4-instance_attributes-hostlist"/><br>
> > </instance_attributes><br>
> > <operations><br>
> > <op name="start" requires="nothing" interval="0" id="fencing_route_to_ha4-start-0"><br>
> > <instance_attributes id="fencing_route_to_ha4-start-0-instance_attributes"><br>
> > <nvpair name="prereq" value="nothing" id="fencing_route_to_ha4-start-0-instance_attributes-prereq"/><br>
> > </instance_attributes><br>
> > </op><br>
> > <op name="monitor" requires="nothing" interval="0" id="fencing_route_to_ha4-monitor-0"><br>
> > <instance_attributes id="fencing_route_to_ha4-monitor-0-instance_attributes"><br>
> > <nvpair name="prereq" value="nothing" id="fencing_route_to_ha4-monitor-0-instance_attributes-prereq"/><br>
> > </instance_attributes><br>
> > </op><br>
> > </operations><br>
> > </primitive><br>
> > </resources><br>
> > <constraints><br>
> > <rsc_location id="ha3_fabric_ping_location" rsc="ha3_fabric_ping" score="INFINITY" node="ha3"/><br>
> > <rsc_location id="ha3_fabric_ping_not_location" rsc="ha3_fabric_ping" score="-INFINITY" node="ha4"/><br>
> > <rsc_location id="ha4_fabric_ping_location" rsc="ha4_fabric_ping" score="INFINITY" node="ha4"/><br>
> > <rsc_location id="ha4_fabric_ping_not_location" rsc="ha4_fabric_ping" score="-INFINITY" node="ha3"/><br>
> > <rsc_location id="fencing_route_to_ha4_location" rsc="fencing_route_to_ha4" score="INFINITY" node="ha3"/><br>
> > <rsc_location id="fencing_route_to_ha4_not_location" rsc="fencing_route_to_ha4" score="-INFINITY" node="ha4"/><br>
> > <rsc_location id="fencing_route_to_ha3_location" rsc="fencing_route_to_ha3" score="INFINITY" node="ha4"/><br>
> > <rsc_location id="fencing_route_to_ha3_not_location" rsc="fencing_route_to_ha3" score="-INFINITY" node="ha3"/><br>
> > <rsc_order id="ha3_fabric_ping_before_fencing_route_to_ha4" score="INFINITY" first="ha3_fabric_ping" first-action="start" then="fencing_route_to_ha4" then-action="start"/><br>
> > <rsc_order id="ha4_fabric_ping_before_fencing_route_to_ha3" score="INFINITY" first="ha4_fabric_ping" first-action="start" then="fencing_route_to_ha3" then-action="start"/><br>
> > </constraints><br>
> > <rsc_defaults><br>
> > <meta_attributes id="rsc-options"><br>
> > <nvpair name="resource-stickiness" value="INFINITY" id="rsc-options-resource-stickiness"/><br>
> > <nvpair name="migration-threshold" value="0" id="rsc-options-migration-threshold"/><br>
> > <nvpair name="is-managed" value="true" id="rsc-options-is-managed"/><br>
> > </meta_attributes><br>
> > </rsc_defaults><br>
> > </configuration><br>
> > <status><br>
> > <node_state id="168427534" uname="ha3" in_ccm="true" crmd="online" crm-debug-origin="do_update_resource" join="member" expected="member"><br>
> > <lrm id="168427534"><br>
> > <lrm_resources><br>
> > <lrm_resource id="ha3_fabric_ping" type="ping" class="ocf" provider="pacemaker"><br>
> > <lrm_rsc_op id="ha3_fabric_ping_last_0" operation_key="ha3_fabric_ping_stop_0" operation="stop" crm-debug-origin="do_update_resource" crm_feature_set="3.0.8" transition-key="4:3:0:0ebf14dc-cfcf-425a-a507-65ed0ee060aa" transition-magic="0:0;4:3:0:0ebf14dc-cfcf-425a-a507-65ed0ee060aa" call-id="19" rc-code="0" op-status="0" interval="0" last-run="1402509661" last-rc-change="1402509661" exec-time="12" queue-time="0" op-digest="91b00b3fe95f23582466d18e42c4fd58"/><br>
> > <lrm_rsc_op id="ha3_fabric_ping_last_failure_0" operation_key="ha3_fabric_ping_start_0" operation="start" crm-debug-origin="do_update_resource" crm_feature_set="3.0.8" transition-key="4:1:0:0ebf14dc-cfcf-425a-a507-65ed0ee060aa" transition-magic="0:1;4:1:0:0ebf14dc-cfcf-425a-a507-65ed0ee060aa" call-id="18" rc-code="1" op-status="0" interval="0" last-run="1402509641" last-rc-change="1402509641" exec-time="20043" queue-time="0" op-digest="ddf4bee6852a62c7efcf52cf7471d629"/><br>
> > </lrm_resource><br>
> > <lrm_resource id="ha4_fabric_ping" type="ping" class="ocf" provider="pacemaker"><br>
> > <lrm_rsc_op id="ha4_fabric_ping_last_0" operation_key="ha4_fabric_ping_monitor_0" operation="monitor" crm-debug-origin="do_update_resource" crm_feature_set="3.0.8" transition-key="5:0:7:0ebf14dc-cfcf-425a-a507-65ed0ee060aa" transition-magic="0:7;5:0:7:0ebf14dc-cfcf-425a-a507-65ed0ee060aa" call-id="9" rc-code="7" op-status="0" interval="0" last-run="1402509565" last-rc-change="1402509565" exec-time="10" queue-time="0" op-digest="91b00b3fe95f23582466d18e42c4fd58"/><br>
> > </lrm_resource><br>
> > <lrm_resource id="fencing_route_to_ha3" type="meatware" class="stonith"><br>
> > <lrm_rsc_op id="fencing_route_to_ha3_last_0" operation_key="fencing_route_to_ha3_monitor_0" operation="monitor" crm-debug-origin="do_update_resource" crm_feature_set="3.0.8" transition-key="6:0:7:0ebf14dc-cfcf-425a-a507-65ed0ee060aa" transition-magic="0:7;6:0:7:0ebf14dc-cfcf-425a-a507-65ed0ee060aa" call-id="13" rc-code="7" op-status="0" interval="0" last-run="1402509565" last-rc-change="1402509565" exec-time="1" queue-time="0" op-digest="502fbd7a2366c2be772d7fbecc9e0351"/><br>
> > </lrm_resource><br>
> > <lrm_resource id="fencing_route_to_ha4" type="meatware" class="stonith"><br>
> > <lrm_rsc_op id="fencing_route_to_ha4_last_0" operation_key="fencing_route_to_ha4_monitor_0" operation="monitor" crm-debug-origin="do_update_resource" crm_feature_set="3.0.8" transition-key="7:0:7:0ebf14dc-cfcf-425a-a507-65ed0ee060aa" transition-magic="0:7;7:0:7:0ebf14dc-cfcf-425a-a507-65ed0ee060aa" call-id="17" rc-code="7" op-status="0" interval="0" last-run="1402509565" last-rc-change="1402509565" exec-time="0" queue-time="0" op-digest="5be26fbcfd648e3d545d0115645dde76"/><br>
> > </lrm_resource><br>
> > </lrm_resources><br>
> > </lrm><br>
> > <transient_attributes id="168427534"><br>
> > <instance_attributes id="status-168427534"><br>
> > <nvpair id="status-168427534-shutdown" name="shutdown" value="0"/><br>
> > <nvpair id="status-168427534-probe_complete" name="probe_complete" value="true"/><br>
> > <nvpair id="status-168427534-fail-count-ha3_fabric_ping" name="fail-count-ha3_fabric_ping" value="INFINITY"/><br>
> > <nvpair id="status-168427534-last-failure-ha3_fabric_ping" name="last-failure-ha3_fabric_ping" value="1402509661"/><br>
> > </instance_attributes><br>
> > </transient_attributes><br>
> > </node_state><br>
> > <node_state id="168427535" in_ccm="false" crmd="offline" join="down" crm-debug-origin="send_stonith_update" uname="ha4" expected="down"/><br>
> > </status><br>
> > </cib><br>
> > [root@ha3 ~]# <br>
> > <br>
> > <br>
> > /var/log/messages from when pacemaker started on ha3 to when ha3_fabric_ping failed.<br>
> > Jun 11 12:59:01 ha3 systemd: Starting LSB: Starts and stops Pacemaker Cluster Manager....<br>
> > Jun 11 12:59:01 ha3 pacemaker: Starting Pacemaker Cluster Manager<br>
> > Jun 11 12:59:01 ha3 pacemakerd[5007]: notice: mcp_read_config: Configured corosync to accept connections from group 1000: OK (1)<br>
> > Jun 11 12:59:01 ha3 pacemakerd[5007]: notice: main: Starting Pacemaker 1.1.10 (Build: 9d39a6b): agent-manpages ncurses libqb-logging libqb-ipc lha-fencing nagios corosync-native libesmtp<br>
> > Jun 11 12:59:01 ha3 pacemakerd[5007]: notice: corosync_node_name: Unable to get node name for nodeid 168427534<br>
> > Jun 11 12:59:01 ha3 pacemakerd[5007]: notice: get_node_name: Could not obtain a node name for corosync nodeid 168427534<br>
> > Jun 11 12:59:01 ha3 pacemakerd[5007]: notice: cluster_connect_quorum: Quorum acquired<br>
> > Jun 11 12:59:01 ha3 pacemakerd[5007]: notice: corosync_node_name: Unable to get node name for nodeid 168427534<br>
> > Jun 11 12:59:01 ha3 pacemakerd[5007]: notice: get_node_name: Defaulting to uname -n for the local corosync node name<br>
> > Jun 11 12:59:01 ha3 pacemakerd[5007]: notice: crm_update_peer_state: pcmk_quorum_notification: Node ha3[168427534] - state is now member (was (null))<br>
> > Jun 11 12:59:01 ha3 pacemakerd[5007]: notice: corosync_node_name: Unable to get node name for nodeid 168427535<br>
> > Jun 11 12:59:01 ha3 pacemakerd[5007]: notice: get_node_name: Could not obtain a node name for corosync nodeid 168427535<br>
> > Jun 11 12:59:01 ha3 pacemakerd[5007]: notice: corosync_node_name: Unable to get node name for nodeid 168427535<br>
> > Jun 11 12:59:01 ha3 pacemakerd[5007]: notice: corosync_node_name: Unable to get node name for nodeid 168427535<br>
> > Jun 11 12:59:01 ha3 pacemakerd[5007]: notice: get_node_name: Could not obtain a node name for corosync nodeid 168427535<br>
> > Jun 11 12:59:01 ha3 pacemakerd[5007]: notice: crm_update_peer_state: pcmk_quorum_notification: Node (null)[168427535] - state is now member (was (null))<br>
> > Jun 11 12:59:02 ha3 pengine[5013]: warning: crm_is_writable: /var/lib/pacemaker/pengine should be owned and r/w by group haclient<br>
> > Jun 11 12:59:02 ha3 cib[5009]: warning: crm_is_writable: /var/lib/pacemaker/cib should be owned and r/w by group haclient<br>
> > Jun 11 12:59:02 ha3 cib[5009]: notice: crm_cluster_connect: Connecting to cluster infrastructure: corosync<br>
> > Jun 11 12:59:02 ha3 stonith-ng[5010]: notice: crm_cluster_connect: Connecting to cluster infrastructure: corosync<br>
> > Jun 11 12:59:02 ha3 crmd[5014]: notice: main: CRM Git Version: 9d39a6b<br>
> > Jun 11 12:59:02 ha3 crmd[5014]: warning: crm_is_writable: /var/lib/pacemaker/pengine should be owned and r/w by group haclient<br>
> > Jun 11 12:59:02 ha3 crmd[5014]: warning: crm_is_writable: /var/lib/pacemaker/cib should be owned and r/w by group haclient<br>
> > Jun 11 12:59:02 ha3 attrd[5012]: notice: crm_cluster_connect: Connecting to cluster infrastructure: corosync<br>
> > Jun 11 12:59:02 ha3 attrd[5012]: notice: corosync_node_name: Unable to get node name for nodeid 168427534<br>
> > Jun 11 12:59:02 ha3 attrd[5012]: notice: get_node_name: Could not obtain a node name for corosync nodeid 168427534<br>
> > Jun 11 12:59:02 ha3 attrd[5012]: notice: crm_update_peer_state: attrd_peer_change_cb: Node (null)[168427534] - state is now member (was (null))<br>
> > Jun 11 12:59:02 ha3 attrd[5012]: notice: corosync_node_name: Unable to get node name for nodeid 168427534<br>
> > Jun 11 12:59:02 ha3 attrd[5012]: notice: get_node_name: Defaulting to uname -n for the local corosync node name<br>
> > Jun 11 12:59:02 ha3 stonith-ng[5010]: notice: corosync_node_name: Unable to get node name for nodeid 168427534<br>
> > Jun 11 12:59:02 ha3 stonith-ng[5010]: notice: get_node_name: Could not obtain a node name for corosync nodeid 168427534<br>
> > Jun 11 12:59:02 ha3 stonith-ng[5010]: notice: corosync_node_name: Unable to get node name for nodeid 168427534<br>
> > Jun 11 12:59:02 ha3 stonith-ng[5010]: notice: get_node_name: Defaulting to uname -n for the local corosync node name<br>
> > Jun 11 12:59:02 ha3 cib[5009]: notice: corosync_node_name: Unable to get node name for nodeid 168427534<br>
> > Jun 11 12:59:02 ha3 cib[5009]: notice: get_node_name: Could not obtain a node name for corosync nodeid 168427534<br>
> > Jun 11 12:59:02 ha3 cib[5009]: notice: corosync_node_name: Unable to get node name for nodeid 168427534<br>
> > Jun 11 12:59:02 ha3 cib[5009]: notice: get_node_name: Defaulting to uname -n for the local corosync node name<br>
> > Jun 11 12:59:03 ha3 crmd[5014]: notice: crm_cluster_connect: Connecting to cluster infrastructure: corosync<br>
> > Jun 11 12:59:03 ha3 crmd[5014]: notice: corosync_node_name: Unable to get node name for nodeid 168427534<br>
> > Jun 11 12:59:03 ha3 crmd[5014]: notice: get_node_name: Could not obtain a node name for corosync nodeid 168427534<br>
> > Jun 11 12:59:03 ha3 stonith-ng[5010]: notice: setup_cib: Watching for stonith topology changes<br>
> > Jun 11 12:59:03 ha3 stonith-ng[5010]: notice: unpack_config: On loss of CCM Quorum: Ignore<br>
> > Jun 11 12:59:03 ha3 crmd[5014]: notice: corosync_node_name: Unable to get node name for nodeid 168427534<br>
> > Jun 11 12:59:03 ha3 crmd[5014]: notice: get_node_name: Defaulting to uname -n for the local corosync node name<br>
> > Jun 11 12:59:03 ha3 crmd[5014]: notice: cluster_connect_quorum: Quorum acquired<br>
> > Jun 11 12:59:03 ha3 crmd[5014]: notice: crm_update_peer_state: pcmk_quorum_notification: Node ha3[168427534] - state is now member (was (null))<br>
> > Jun 11 12:59:03 ha3 crmd[5014]: notice: corosync_node_name: Unable to get node name for nodeid 168427535<br>
> > Jun 11 12:59:03 ha3 crmd[5014]: notice: get_node_name: Could not obtain a node name for corosync nodeid 168427535<br>
> > Jun 11 12:59:03 ha3 crmd[5014]: notice: corosync_node_name: Unable to get node name for nodeid 168427535<br>
> > Jun 11 12:59:03 ha3 crmd[5014]: notice: corosync_node_name: Unable to get node name for nodeid 168427535<br>
> > Jun 11 12:59:03 ha3 crmd[5014]: notice: get_node_name: Could not obtain a node name for corosync nodeid 168427535<br>
> > Jun 11 12:59:03 ha3 crmd[5014]: notice: crm_update_peer_state: pcmk_quorum_notification: Node (null)[168427535] - state is now member (was (null))<br>
> > Jun 11 12:59:03 ha3 crmd[5014]: notice: corosync_node_name: Unable to get node name for nodeid 168427534<br>
> > Jun 11 12:59:03 ha3 crmd[5014]: notice: get_node_name: Defaulting to uname -n for the local corosync node name<br>
> > Jun 11 12:59:03 ha3 crmd[5014]: notice: do_started: The local CRM is operational<br>
> > Jun 11 12:59:03 ha3 crmd[5014]: notice: do_state_transition: State transition S_STARTING -> S_PENDING [ input=I_PENDING cause=C_FSA_INTERNAL origin=do_started ]<br>
> > Jun 11 12:59:04 ha3 stonith-ng[5010]: notice: stonith_device_register: Added 'fencing_route_to_ha4' to the device list (1 active devices)<br>
> > Jun 11 12:59:06 ha3 pacemaker: Starting Pacemaker Cluster Manager[ OK ]<br>
> > Jun 11 12:59:06 ha3 systemd: Started LSB: Starts and stops Pacemaker Cluster Manager..<br>
> > Jun 11 12:59:24 ha3 crmd[5014]: warning: do_log: FSA: Input I_DC_TIMEOUT from crm_timer_popped() received in state S_PENDING<br>
> > Jun 11 12:59:24 ha3 crmd[5014]: notice: do_state_transition: State transition S_ELECTION -> S_INTEGRATION [ input=I_ELECTION_DC cause=C_TIMER_POPPED origin=election_timeout_popped ]<br>
> > Jun 11 12:59:24 ha3 crmd[5014]: warning: do_log: FSA: Input I_ELECTION_DC from do_election_check() received in state S_INTEGRATION<br>
> > Jun 11 12:59:24 ha3 cib[5009]: notice: corosync_node_name: Unable to get node name for nodeid 168427534<br>
> > Jun 11 12:59:24 ha3 cib[5009]: notice: get_node_name: Defaulting to uname -n for the local corosync node name<br>
> > Jun 11 12:59:24 ha3 attrd[5012]: notice: corosync_node_name: Unable to get node name for nodeid 168427534<br>
> > Jun 11 12:59:24 ha3 attrd[5012]: notice: get_node_name: Defaulting to uname -n for the local corosync node name<br>
> > Jun 11 12:59:24 ha3 attrd[5012]: notice: write_attribute: Sent update 2 with 1 changes for terminate, id=<n/a>, set=(null)<br>
> > Jun 11 12:59:24 ha3 attrd[5012]: notice: write_attribute: Sent update 3 with 1 changes for shutdown, id=<n/a>, set=(null)<br>
> > Jun 11 12:59:24 ha3 attrd[5012]: notice: attrd_cib_callback: Update 2 for terminate[ha3]=(null): OK (0)<br>
> > Jun 11 12:59:24 ha3 attrd[5012]: notice: attrd_cib_callback: Update 3 for shutdown[ha3]=0: OK (0)<br>
> > Jun 11 12:59:25 ha3 pengine[5013]: notice: unpack_config: On loss of CCM Quorum: Ignore<br>
> > Jun 11 12:59:25 ha3 pengine[5013]: warning: stage6: Scheduling Node ha4 for STONITH<br>
> > Jun 11 12:59:25 ha3 pengine[5013]: notice: LogActions: Start ha3_fabric_ping                 (ha3)<br>
> > Jun 11 12:59:25 ha3 pengine[5013]: notice: LogActions: Start fencing_route_to_ha4                 (ha3)<br>
> > Jun 11 12:59:25 ha3 pengine[5013]: warning: process_pe_message: Calculated Transition 0: /var/lib/pacemaker/pengine/pe-warn-80.bz2<br>
> > Jun 11 12:59:25 ha3 crmd[5014]: notice: te_rsc_command: Initiating action 4: monitor ha3_fabric_ping_monitor_0 on ha3 (local)<br>
> > Jun 11 12:59:25 ha3 crmd[5014]: notice: te_fence_node: Executing reboot fencing operation (12) on ha4 (timeout=60000)<br>
> > Jun 11 12:59:25 ha3 stonith-ng[5010]: notice: handle_request: Client crmd.5014.dbbbf194 wants to fence (reboot) 'ha4' with device '(any)'<br>
> > Jun 11 12:59:25 ha3 stonith-ng[5010]: notice: initiate_remote_stonith_op: Initiating remote operation reboot for ha4: b3ab6141-9612-4024-82b2-350e74bbb33d (0)<br>
> > Jun 11 12:59:25 ha3 stonith-ng[5010]: notice: corosync_node_name: Unable to get node name for nodeid 168427534<br>
> > Jun 11 12:59:25 ha3 stonith-ng[5010]: notice: get_node_name: Defaulting to uname -n for the local corosync node name<br>
> > Jun 11 12:59:25 ha3 stonith: [5027]: info: parse config info info=ha4<br>
> > Jun 11 12:59:25 ha3 stonith-ng[5010]: notice: can_fence_host_with_device: fencing_route_to_ha4 can fence ha4: dynamic-list<br>
> > Jun 11 12:59:25 ha3 stonith: [5031]: info: parse config info info=ha4<br>
> > Jun 11 12:59:25 ha3 stonith: [5031]: CRIT: OPERATOR INTERVENTION REQUIRED to reset ha4.<br>
> > Jun 11 12:59:25 ha3 stonith: [5031]: CRIT: Run "meatclient -c ha4" AFTER power-cycling the machine.<br>
> > Jun 11 12:59:25 ha3 crmd[5014]: notice: process_lrm_event: LRM operation ha3_fabric_ping_monitor_0 (call=5, rc=7, cib-update=25, confirmed=true) not running<br>
> > Jun 11 12:59:25 ha3 crmd[5014]: notice: te_rsc_command: Initiating action 5: monitor ha4_fabric_ping_monitor_0 on ha3 (local)<br>
> > Jun 11 12:59:25 ha3 crmd[5014]: notice: process_lrm_event: LRM operation ha4_fabric_ping_monitor_0 (call=9, rc=7, cib-update=26, confirmed=true) not running<br>
> > Jun 11 12:59:25 ha3 crmd[5014]: notice: te_rsc_command: Initiating action 6: monitor fencing_route_to_ha3_monitor_0 on ha3 (local)<br>
> > Jun 11 12:59:25 ha3 crmd[5014]: notice: te_rsc_command: Initiating action 7: monitor fencing_route_to_ha4_monitor_0 on ha3 (local)<br>
> > Jun 11 12:59:25 ha3 crmd[5014]: notice: te_rsc_command: Initiating action 3: probe_complete probe_complete on ha3 (local) - no waiting<br>
> > Jun 11 12:59:25 ha3 attrd[5012]: notice: write_attribute: Sent update 4 with 1 changes for probe_complete, id=<n/a>, set=(null)<br>
> > Jun 11 12:59:25 ha3 attrd[5012]: notice: attrd_cib_callback: Update 4 for probe_complete[ha3]=true: OK (0)<br>
> > Jun 11 13:00:25 ha3 stonith-ng[5010]: notice: stonith_action_async_done: Child process 5030 performing action 'reboot' timed out with signal 15<br>
> > Jun 11 13:00:25 ha3 stonith-ng[5010]: error: log_operation: Operation 'reboot' [5030] (call 2 from crmd.5014) for host 'ha4' with device 'fencing_route_to_ha4' returned: -62 (Timer expired)<br>
> > Jun 11 13:00:25 ha3 stonith-ng[5010]: warning: log_operation: fencing_route_to_ha4:5030 [ Performing: stonith -t meatware -T reset ha4 ]<br>
> > Jun 11 13:00:25 ha3 stonith-ng[5010]: notice: stonith_choose_peer: Couldn't find anyone to fence ha4 with <any><br>
> > Jun 11 13:00:25 ha3 stonith-ng[5010]: error: remote_op_done: Operation reboot of ha4 by ha3 for crmd.5014@ha3.b3ab6141: No route to host<br>
> > Jun 11 13:00:25 ha3 crmd[5014]: notice: tengine_stonith_callback: Stonith operation 2/12:0:0:0ebf14dc-cfcf-425a-a507-65ed0ee060aa: No route to host (-113)<br>
> > Jun 11 13:00:25 ha3 crmd[5014]: notice: tengine_stonith_callback: Stonith operation 2 for ha4 failed (No route to host): aborting transition.<br>
> > Jun 11 13:00:25 ha3 crmd[5014]: notice: tengine_stonith_notify: Peer ha4 was not terminated (reboot) by ha3 for ha3: No route to host (ref=b3ab6141-9612-4024-82b2-350e74bbb33d) by client crmd.5014<br>
> > Jun 11 13:00:25 ha3 crmd[5014]: notice: run_graph: Transition 0 (Complete=7, Pending=0, Fired=0, Skipped=5, Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-warn-80.bz2): Stopped<br>
> > Jun 11 13:00:25 ha3 pengine[5013]: notice: unpack_config: On loss of CCM Quorum: Ignore<br>
> > Jun 11 13:00:25 ha3 pengine[5013]: warning: stage6: Scheduling Node ha4 for STONITH<br>
> > Jun 11 13:00:25 ha3 pengine[5013]: notice: LogActions: Start ha3_fabric_ping                 (ha3)<br>
> > Jun 11 13:00:25 ha3 pengine[5013]: notice: LogActions: Start fencing_route_to_ha4                 (ha3)<br>
> > Jun 11 13:00:25 ha3 pengine[5013]: warning: process_pe_message: Calculated Transition 1: /var/lib/pacemaker/pengine/pe-warn-81.bz2<br>
> > Jun 11 13:00:25 ha3 crmd[5014]: notice: te_fence_node: Executing reboot fencing operation (8) on ha4 (timeout=60000)<br>
> > Jun 11 13:00:25 ha3 stonith-ng[5010]: notice: handle_request: Client crmd.5014.dbbbf194 wants to fence (reboot) 'ha4' with device '(any)'<br>
> > Jun 11 13:00:25 ha3 stonith-ng[5010]: notice: initiate_remote_stonith_op: Initiating remote operation reboot for ha4: eae78d4c-8d80-47fe-93e9-1a9261ec38a4 (0)<br>
> > Jun 11 13:00:25 ha3 stonith-ng[5010]: notice: can_fence_host_with_device: fencing_route_to_ha4 can fence ha4: dynamic-list<br>
> > Jun 11 13:00:25 ha3 stonith-ng[5010]: notice: can_fence_host_with_device: fencing_route_to_ha4 can fence ha4: dynamic-list<br>
> > Jun 11 13:00:25 ha3 stonith: [5057]: info: parse config info info=ha4<br>
> > Jun 11 13:00:25 ha3 stonith: [5057]: CRIT: OPERATOR INTERVENTION REQUIRED to reset ha4.<br>
> > Jun 11 13:00:25 ha3 stonith: [5057]: CRIT: Run "meatclient -c ha4" AFTER power-cycling the machine.<br>
> > Jun 11 13:00:41 ha3 stonith: [5057]: info: node Meatware-reset: ha4<br>
> > Jun 11 13:00:41 ha3 stonith-ng[5010]: notice: log_operation: Operation 'reboot' [5056] (call 3 from crmd.5014) for host 'ha4' with device 'fencing_route_to_ha4' returned: 0 (OK)<br>
> > Jun 11 13:00:41 ha3 stonith-ng[5010]: notice: remote_op_done: Operation reboot of ha4 by ha3 for crmd.5014@ha3.eae78d4c: OK<br>
> > Jun 11 13:00:41 ha3 crmd[5014]: notice: tengine_stonith_callback: Stonith operation 3/8:1:0:0ebf14dc-cfcf-425a-a507-65ed0ee060aa: OK (0)<br>
> > Jun 11 13:00:41 ha3 crmd[5014]: notice: crm_update_peer_state: send_stonith_update: Node ha4[0] - state is now lost (was (null))<br>
> > Jun 11 13:00:41 ha3 crmd[5014]: notice: tengine_stonith_notify: Peer ha4 was terminated (reboot) by ha3 for ha3: OK (ref=eae78d4c-8d80-47fe-93e9-1a9261ec38a4) by client crmd.5014<br>
> > Jun 11 13:00:41 ha3 crmd[5014]: notice: te_rsc_command: Initiating action 4: start ha3_fabric_ping_start_0 on ha3 (local)<br>
> > Jun 11 13:01:01 ha3 systemd: Starting Session 22 of user root.<br>
> > Jun 11 13:01:01 ha3 systemd: Started Session 22 of user root.<br>
> > Jun 11 13:01:01 ha3 attrd[5012]: notice: write_attribute: Sent update 5 with 1 changes for pingd, id=<n/a>, set=(null)<br>
> > Jun 11 13:01:01 ha3 attrd[5012]: notice: attrd_cib_callback: Update 5 for pingd[ha3]=0: OK (0)<br>
> > Jun 11 13:01:01 ha3 ping(ha3_fabric_ping)[5060]: WARNING: pingd is less than failure_score(1)<br>
> > Jun 11 13:01:01 ha3 crmd[5014]: notice: process_lrm_event: LRM operation ha3_fabric_ping_start_0 (call=18, rc=1, cib-update=37, confirmed=true) unknown error<br>
> > Jun 11 13:01:01 ha3 crmd[5014]: warning: status_from_rc: Action 4 (ha3_fabric_ping_start_0) on ha3 failed (target: 0 vs. rc: 1): Error<br>
> > Jun 11 13:01:01 ha3 crmd[5014]: warning: update_failcount: Updating failcount for ha3_fabric_ping on ha3 after failed start: rc=1 (update=INFINITY, time=1402509661)<br>
> > Jun 11 13:01:01 ha3 crmd[5014]: warning: update_failcount: Updating failcount for ha3_fabric_ping on ha3 after failed start: rc=1 (update=INFINITY, time=1402509661)<br>
> > Jun 11 13:01:01 ha3 crmd[5014]: notice: run_graph: Transition 1 (Complete=4, Pending=0, Fired=0, Skipped=2, Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-warn-81.bz2): Stopped<br>
> > Jun 11 13:01:01 ha3 attrd[5012]: notice: write_attribute: Sent update 6 with 1 changes for fail-count-ha3_fabric_ping, id=<n/a>, set=(null)<br>
> > Jun 11 13:01:01 ha3 attrd[5012]: notice: write_attribute: Sent update 7 with 1 changes for last-failure-ha3_fabric_ping, id=<n/a>, set=(null)<br>
> > Jun 11 13:01:01 ha3 pengine[5013]: notice: unpack_config: On loss of CCM Quorum: Ignore<br>
> > Jun 11 13:01:01 ha3 pengine[5013]: warning: unpack_rsc_op_failure: Processing failed op start for ha3_fabric_ping on ha3: unknown error (1)<br>
> > Jun 11 13:01:01 ha3 pengine[5013]: notice: LogActions: Stop ha3_fabric_ping                 (ha3)<br>
> > Jun 11 13:01:01 ha3 pengine[5013]: notice: process_pe_message: Calculated Transition 2: /var/lib/pacemaker/pengine/pe-input-304.bz2<br>
> > Jun 11 13:01:01 ha3 attrd[5012]: notice: attrd_cib_callback: Update 6 for fail-count-ha3_fabric_ping[ha3]=INFINITY: OK (0)<br>
> > Jun 11 13:01:01 ha3 attrd[5012]: notice: attrd_cib_callback: Update 7 for last-failure-ha3_fabric_ping[ha3]=1402509661: OK (0)<br>
> > Jun 11 13:01:01 ha3 pengine[5013]: notice: unpack_config: On loss of CCM Quorum: Ignore<br>
> > Jun 11 13:01:01 ha3 pengine[5013]: warning: unpack_rsc_op_failure: Processing failed op start for ha3_fabric_ping on ha3: unknown error (1)<br>
> > Jun 11 13:01:01 ha3 pengine[5013]: notice: LogActions: Stop ha3_fabric_ping                 (ha3)<br>
> > Jun 11 13:01:01 ha3 pengine[5013]: notice: process_pe_message: Calculated Transition 3: /var/lib/pacemaker/pengine/pe-input-305.bz2<br>
> > Jun 11 13:01:01 ha3 crmd[5014]: notice: te_rsc_command: Initiating action 4: stop ha3_fabric_ping_stop_0 on ha3 (local)<br>
> > Jun 11 13:01:01 ha3 crmd[5014]: notice: process_lrm_event: LRM operation ha3_fabric_ping_stop_0 (call=19, rc=0, cib-update=41, confirmed=true) ok<br>
> > Jun 11 13:01:01 ha3 crmd[5014]: notice: run_graph: Transition 3 (Complete=2, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-input-305.bz2): Complete<br>
> > Jun 11 13:01:01 ha3 crmd[5014]: notice: do_state_transition: State transition S_TRANSITION_ENGINE -> S_IDLE [ input=I_TE_SUCCESS cause=C_FSA_INTERNAL origin=notify_crmd ]<br>
> > Jun 11 13:01:06 ha3 attrd[5012]: notice: write_attribute: Sent update 8 with 1 changes for pingd, id=<n/a>, set=(null)<br>
> > Jun 11 13:01:06 ha3 crmd[5014]: notice: do_state_transition: State transition S_IDLE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=abort_transition_graph ]<br>
> > Jun 11 13:01:06 ha3 pengine[5013]: notice: unpack_config: On loss of CCM Quorum: Ignore<br>
> > Jun 11 13:01:06 ha3 pengine[5013]: warning: unpack_rsc_op_failure: Processing failed op start for ha3_fabric_ping on ha3: unknown error (1)<br>
> > Jun 11 13:01:06 ha3 pengine[5013]: notice: process_pe_message: Calculated Transition 4: /var/lib/pacemaker/pengine/pe-input-306.bz2<br>
> > Jun 11 13:01:06 ha3 crmd[5014]: notice: run_graph: Transition 4 (Complete=0, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-input-306.bz2): Complete<br>
> > Jun 11 13:01:06 ha3 crmd[5014]: notice: do_state_transition: State transition S_TRANSITION_ENGINE -> S_IDLE [ input=I_TE_SUCCESS cause=C_FSA_INTERNAL origin=notify_crmd ]<br>
> > Jun 11 13:01:06 ha3 attrd[5012]: notice: attrd_cib_callback: Update 8 for pingd[ha3]=(null): OK (0)<br>
> > <br>
> > /etc/corosync/corosync.conf<br>
> > # Please read the corosync.conf.5 manual page<br>
> > totem {<br>
> > version: 2<br>
> > <br>
> > crypto_cipher: none<br>
> > crypto_hash: none<br>
> > <br>
> > interface {<br>
> > ringnumber: 0<br>
> > bindnetaddr: 10.10.0.0<br>
> > mcastport: 5405<br>
> > ttl: 1<br>
> > }<br>
> > transport: udpu<br>
> > }<br>
> > <br>
> > logging {<br>
> > fileline: off<br>
> > to_logfile: no<br>
> > to_syslog: yes<br>
> > #logfile: /var/log/cluster/corosync.log<br>
> > debug: off<br>
> > timestamp: on<br>
> > logger_subsys {<br>
> > subsys: QUORUM<br>
> > debug: off<br>
> > }<br>
> > }<br>
> > <br>
> > nodelist {<br>
> > node {<br>
> > ring0_addr: 10.10.0.14<br>
> > }<br>
> > <br>
> > node {<br>
> > ring0_addr: 10.10.0.15<br>
> > }<br>
> > }<br>
> > <br>
> > quorum {<br>
> > # Enable and configure quorum subsystem (default: off)<br>
> > # see also corosync.conf.5 and votequorum.5<br>
> > provider: corosync_votequorum<br>
> > expected_votes: 2<br>
> > }<br>
> > [root@ha3 ~]# <br>
> > <br>
> > Paul Cain<br>
> > <br>
> > _______________________________________________<br>
> > Pacemaker mailing list: Pacemaker@oss.clusterlabs.org<br>
> > </font></tt><tt><font size="2"><a href="http://oss.clusterlabs.org/mailman/listinfo/pacemaker">http://oss.clusterlabs.org/mailman/listinfo/pacemaker</a></font></tt><tt><font size="2"><br>
> > <br>
> > Project Home: </font></tt><tt><font size="2"><a href="http://www.clusterlabs.org">http://www.clusterlabs.org</a></font></tt><tt><font size="2"><br>
> > Getting started: </font></tt><tt><font size="2"><a href="http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf">http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf</a></font></tt><tt><font size="2"><br>
> > Bugs: </font></tt><tt><font size="2"><a href="http://bugs.clusterlabs.org">http://bugs.clusterlabs.org</a></font></tt><tt><font size="2"><br>
</font></tt><br>
</body></html>