[Pacemaker] "crm_node -f -R NodeID" remove the wrong node?
Caizhifeng
bluewindow at h3c.com
Sun Apr 13 01:36:24 UTC 2014
Hi all,
I'm fairly new to Pacemaker and have a question I hope someone can help me with; any ideas would be highly appreciated, thank you.
I'm building an HA cluster with corosync 2.3.3 + cluster-glue + pacemaker 1.1.11-rc5. The problem is that when I try to remove a node from the cluster with "crm_node -f -R nodeID", it seems to remove the wrong node.
My setup is as follows (and the problem is reproducible):
1. The cluster is healthy, with 2 resources and 2 nodes, and with stonith-enabled="false":
root@h1:/opt/bin# crm status
Last updated: Sat Apr 12 16:00:51 2014
Last change: Sat Apr 12 16:00:43 2014 via cibadmin on h1
Stack: corosync
Current DC: h1 (1084752017) - partition with quorum
Version: 1.1.10-33f9d09
2 Nodes configured
2 Resources configured
Online: [ h0 h1 ]
VMdos-1 (ocf::heartbeat:VirtualDomain): Started h1
root@h1:/opt/bin#
root@h1:/opt/bin# crm_node --list
1084752017 h1
1084752016 h0
root@h1:/opt/bin#
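(For reference: stonith-enabled="false" was set with the usual crmsh property command, something like the line below -- the exact syntax is reproduced from memory, not from my shell history.)
crm configure property stonith-enabled=false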
2. Remove node h0 from the cluster:
(1) Stop the pacemakerd and corosync services on node h0.
(2) Run the following commands on node h1 to remove the node's info from the CIB:
cibadmin --delete --obj_type status --crm_xml "<node_state id=\"1084752016\"/>"
cibadmin --delete --obj_type nodes --crm_xml "<node id=\"1084752016\"/>"
At this point, "crm status" shows node h0 as offline and there is no information about h0 left in the CIB, but "crm_node --list" still includes node h0.
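This can be double-checked with something like the following (the comments describe what I expect to see, not captured output):
cibadmin --query --obj_type nodes   # only <node id="1084752017" uname="h1"/> is left
crm_node --list                     # h0 still shows up (membership cache?)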
3. crm_node -f -R 1084752016
After this command, node h1 is removed instead, and "crm status" shows the following (0 nodes configured):
root@h1:/opt/bin# crm status
Last updated: Sat Apr 12 15:59:42 2014
Last change: Sat Apr 12 15:59:37 2014 via crm_node on h1
Stack: corosync
Current DC: NONE
0 Nodes configured
2 Resources configured
root@h1:/opt/bin#
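(As a side note, I have not yet tried passing the node name instead of the numeric ID, e.g. the command below; I don't know whether that would avoid the problem.)
crm_node --force --remove h0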
The relevant part of corosync.conf is as follows:
.............
quorum {
provider: corosync_votequorum
expected_votes: 2
allow_downscale: 1
two_node: 1
}
...........
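(If it helps, the quorum state before the removal can be inspected with corosync-quorumtool; I have not pasted its output here:)
corosync-quorumtool -s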
The corosync.log is as follows; it seems node h1 was removed due to fencing!
Apr 12 16:22:35 [25003] h1 corosync debug [QB ] ringbuffer.c:299 Free'ing ringbuffer: /dev/shm/qb-cmap-request-25004-14152-27-header
Apr 12 16:22:35 [25026] h1 cib: debug: activateCibXml: Triggering CIB write for cib_delete op
Apr 12 16:22:35 [25031] h1 crmd: notice: crm_reap_dead_member: Removing h0/1084752016 from the membership list
Apr 12 16:22:35 [25031] h1 crmd: notice: reap_crm_member: Purged 1 peers with id=1084752016 and/or uname=(null) from the membership cache
Apr 12 16:22:35 [25027] h1 stonith-ng: debug: log_cib_diff: Config update: Local-only Change: 0.12.1
Apr 12 16:22:35 [25027] h1 stonith-ng: debug: Config update: - <cib admin_epoch="0" epoch="11" num_updates="1">
Apr 12 16:22:35 [25027] h1 stonith-ng: debug: Config update: - <configuration>
Apr 12 16:22:35 [25027] h1 stonith-ng: debug: Config update: - <nodes>
Apr 12 16:22:35 [25027] h1 stonith-ng: debug: Config update: -- <node id="1084752017" uname="h1"/>
Apr 12 16:22:35 [25027] h1 stonith-ng: debug: Config update: - </nodes>
Apr 12 16:22:35 [25027] h1 stonith-ng: debug: Config update: - </configuration>
Apr 12 16:22:35 [25027] h1 stonith-ng: debug: Config update: - </cib>
Apr 12 16:22:35 [25027] h1 stonith-ng: debug: Config update: ++ <cib epoch="12" num_updates="1" admin_epoch="0" validate-with="pacemaker-1.2" crm_feature_set="3.0.8" cib-last-written="Sat Apr 12 16:22:35 2014" update-origin="h1" update-client="crm_node" have-quorum="1" dc-uuid="1084752017"/>
Apr 12 16:22:35 [25026] h1 cib: notice: log_cib_diff: cib:diff: Local-only Change: 0.12.1
Apr 12 16:22:35 [25026] h1 cib: notice: cib:diff: -- <node id="1084752017" uname="h1"/>
Apr 12 16:22:35 [25026] h1 cib: notice: cib:diff: ++ <cib epoch="12" num_updates="1" admin_epoch="0" validate-with="pacemaker-1.2" crm_feature_set="3.0.8" cib-last-written="Sat Apr 12 16:22:35 2014" update-origin="h1" update-client="crm_node" have-quorum="1" dc-uuid="1084752017"/>
Apr 12 16:22:35 [25026] h1 cib: info: cib_process_request: Completed cib_delete operation for section nodes: OK (rc=0, origin=local/crm_node/2, version=0.12.1)
Apr 12 16:22:35 [25031] h1 crmd: debug: te_update_diff: Processing diff (cib_delete): 0.11.1 -> 0.12.1 (S_IDLE)
Apr 12 16:22:35 [25031] h1 crmd: info: abort_transition_graph: te_update_diff:126 - Triggered transition abort (complete=1, node=, tag=diff, id=(null), magic=NA, cib=0.12.1) : Non-status change
Apr 12 16:22:35 [25031] h1 crmd: debug: abort_transition_graph: Cause <diff crm_feature_set="3.0.8" digest="3cccef06483ac4dfeadfb562f6f8293a">
Apr 12 16:22:35 [25031] h1 crmd: debug: abort_transition_graph: Cause <diff-removed admin_epoch="0" epoch="11" num_updates="1">
Apr 12 16:22:35 [25031] h1 crmd: debug: abort_transition_graph: Cause <cib admin_epoch="0" epoch="11" num_updates="1">
Apr 12 16:22:35 [25031] h1 crmd: debug: abort_transition_graph: Cause <configuration>
Apr 12 16:22:35 [25031] h1 crmd: debug: abort_transition_graph: Cause <nodes>
Apr 12 16:22:35 [25031] h1 crmd: debug: abort_transition_graph: Cause <node id="1084752017" uname="h1" __crm_diff_marker__="removed:top"/>
Apr 12 16:22:35 [25031] h1 crmd: debug: abort_transition_graph: Cause </nodes>
Apr 12 16:22:35 [25031] h1 crmd: debug: abort_transition_graph: Cause </configuration>
Apr 12 16:22:35 [25031] h1 crmd: debug: abort_transition_graph: Cause </cib>
Apr 12 16:22:35 [25031] h1 crmd: debug: abort_transition_graph: Cause </diff-removed>
Apr 12 16:22:35 [25031] h1 crmd: debug: abort_transition_graph: Cause <diff-added>
Apr 12 16:22:35 [25031] h1 crmd: debug: abort_transition_graph: Cause <cib epoch="12" num_updates="1" admin_epoch="0" validate-with="pacemaker-1.2" crm_feature_set="3.0.8" cib-last-written="Sat Apr 12 16:22:35 2014" update-origin="h1" update-client="crm_node" have-quorum="1" dc-uuid="1084752017"/>
Apr 12 16:22:35 [25031] h1 crmd: debug: abort_transition_graph: Cause </diff-added>
Apr 12 16:22:35 [25031] h1 crmd: debug: abort_transition_graph: Cause </diff>
Apr 12 16:22:35 [25031] h1 crmd: debug: s_crmd_fsa: Processing I_PE_CALC: [ state=S_IDLE cause=C_FSA_INTERNAL origin=abort_transition_graph ]
Apr 12 16:22:35 [25031] h1 crmd: notice: do_state_transition: State transition S_IDLE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=abort_transition_graph ]
Apr 12 16:22:35 [25031] h1 crmd: debug: do_state_transition: All 1 cluster nodes are eligible to run resources.
Apr 12 16:22:35 [25031] h1 crmd: debug: do_pe_invoke: Query 38: Requesting the current CIB: S_POLICY_ENGINE
Apr 12 16:22:35 [25026] h1 cib: info: cib_process_request: Completed cib_query operation for section 'all': OK (rc=0, origin=local/crmd/38, version=0.12.1)
Apr 12 16:22:35 [25031] h1 crmd: debug: do_pe_invoke_callback: Invoking the PE: query=38, ref=pe_calc-dc-1397290955-21, seq=1024, quorate=1
Apr 12 16:22:35 [25027] h1 stonith-ng: debug: Config update: Diff: --- 0.12.1
Apr 12 16:22:35 [25027] h1 stonith-ng: debug: Config update: Diff: +++ 0.12.2 3d673c27c3c92939b41c7207edee9f46
Apr 12 16:22:35 [25027] h1 stonith-ng: debug: Config update: - <cib num_updates="1">
Apr 12 16:22:35 [25027] h1 stonith-ng: debug: Config update: - <status>
Apr 12 16:22:35 [25027] h1 stonith-ng: debug: Config update: -- <node_state id="1084752017" uname="h1" in_ccm="true" crmd="online" crm-debug-origin="post_cache_update" join="member" expected="member">
Apr 12 16:22:35 [25027] h1 stonith-ng: debug: Config update: -- <transient_attributes id="1084752017">
Apr 12 16:22:35 [25027] h1 stonith-ng: debug: Config update: -- <instance_attributes id="status-1084752017">
Apr 12 16:22:35 [25027] h1 stonith-ng: debug: Config update: -- <nvpair id="status-1084752017-shutdown" name="shutdown" value="0"/>
Apr 12 16:22:35 [25027] h1 stonith-ng: debug: Config update: -- <nvpair id="status-1084752017-probe_complete" name="probe_complete" value="true"/>
Apr 12 16:22:35 [25027] h1 stonith-ng: debug: Config update: -- </instance_attributes>
Apr 12 16:22:35 [25027] h1 stonith-ng: debug: Config update: -- </transient_attributes>
Apr 12 16:22:35 [25027] h1 stonith-ng: debug: Config update: -- <lrm id="1084752017">
Apr 12 16:22:35 [25027] h1 stonith-ng: debug: Config update: -- <lrm_resources>
Apr 12 16:22:35 [25027] h1 stonith-ng: debug: Config update: -- <lrm_resource id="VMdos-1" type="VirtualDomain" class="ocf" provider="heartbeat">
Apr 12 16:22:35 [25027] h1 stonith-ng: debug: Config update: -- <lrm_rsc_op id="VMdos-1_last_failure_0" operation_key="VMdos-1_monitor_0" operation="monitor" crm-debug-origin="build_active_RAs" crm_feature_set="3.0.8" transition-key="6:4:7:affc1dba-30eb-458d-8f86-c37d0268e52c" transition-magic="0:0;6:4:7:affc1dba-30eb-458d-8f86-c37d0268e52c" call-id="5" rc-code="0" op-status="0" interval="0" last-run="1397290607" last-rc-change="1397290607" exec-time="92" queue-time="0" op-digest="
Apr 12 16:22:35 [25027] h1 stonith-ng: debug: Config update: -- <lrm_rsc_op id="VMdos-1_monitor_30000" operation_key="VMdos-1_monitor_30000" operation="monitor" crm-debug-origin="build_active_RAs" crm_feature_set="3.0.8" transition-key="7:5:0:affc1dba-30eb-458d-8f86-c37d0268e52c" transition-magic="0:0;7:5:0:affc1dba-30eb-458d-8f86-c37d0268e52c" call-id="6" rc-code="0" op-status="0" interval="30000" last-rc-change="1397290607" exec-time="67" queue-time="0" op-digest="0874c7ce5f61a12
Apr 12 16:22:35 [25027] h1 stonith-ng: debug: Config update: -- </lrm_resource>
Apr 12 16:22:35 [25027] h1 stonith-ng: debug: Config update: -- <lrm_resource id="VMdos-2" type="VirtualDomain" class="ocf" provider="heartbeat">
Apr 12 16:22:35 [25027] h1 stonith-ng: debug: Config update: -- <lrm_rsc_op id="VMdos-2_last_0" operation_key="VMdos-2_monitor_0" operation="monitor" crm-debug-origin="build_active_RAs" crm_feature_set="3.0.8" transition-key="7:6:7:affc1dba-30eb-458d-8f86-c37d0268e52c" transition-magic="0:7;7:6:7:affc1dba-30eb-458d-8f86-c37d0268e52c" call-id="10" rc-code="7" op-status="0" interval="0" last-run="1397290608" last-rc-change="1397290608" exec-time="61" queue-time="0" op-digest="c7d22be
Apr 12 16:22:35 [25027] h1 stonith-ng: debug: Config update: -- </lrm_resource>
Apr 12 16:22:35 [25027] h1 stonith-ng: debug: Config update: -- </lrm_resources>
Apr 12 16:22:35 [25027] h1 stonith-ng: debug: Config update: -- </lrm>
Apr 12 16:22:35 [25027] h1 stonith-ng: debug: Config update: -- </node_state>
Apr 12 16:22:35 [25027] h1 stonith-ng: debug: Config update: - </status>
Apr 12 16:22:35 [25027] h1 stonith-ng: debug: Config update: - </cib>
Apr 12 16:22:35 [25027] h1 stonith-ng: debug: Config update: ++ <cib epoch="12" num_updates="2" admin_epoch="0" validate-with="pacemaker-1.2" crm_feature_set="3.0.8" cib-last-written="Sat Apr 12 16:22:35 2014" update-origin="h1" update-client="crm_node" have-quorum="1" dc-uuid="1084752017"/>
Apr 12 16:22:35 [25030] h1 pengine: debug: unpack_config: STONITH timeout: 60000
Apr 12 16:22:35 [25030] h1 pengine: debug: unpack_config: STONITH of failed nodes is disabled
Apr 12 16:22:35 [25030] h1 pengine: debug: unpack_config: Stop all active resources: false
Apr 12 16:22:35 [25030] h1 pengine: debug: unpack_config: Cluster is symmetric - resources can run anywhere by default
Apr 12 16:22:35 [25030] h1 pengine: debug: unpack_config: Default stickiness: 0
Apr 12 16:22:35 [25030] h1 pengine: debug: unpack_config: On loss of CCM Quorum: Stop ALL resources
Apr 12 16:22:35 [25030] h1 pengine: debug: unpack_config: Node scores: 'red' = -INFINITY, 'yellow' = 0, 'green' = 0
Apr 12 16:22:35 [25030] h1 pengine: debug: unpack_domains: Unpacking domains
Apr 12 16:22:35 [25030] h1 pengine: warning: unpack_status: Node h1 in status section no longer exists
Apr 12 16:22:35 [25030] h1 pengine: info: unpack_status: Node 1084752017 is unknown
Apr 12 16:22:35 [25030] h1 pengine: info: native_print: VMdos-1 (ocf::heartbeat:VirtualDomain): Stopped
Apr 12 16:22:35 [25030] h1 pengine: info: native_print: VMdos-2 (ocf::heartbeat:VirtualDomain): Stopped
Apr 12 16:22:35 [25030] h1 pengine: debug: native_assign_node: Could not allocate a node for VMdos-1
Apr 12 16:22:35 [25030] h1 pengine: info: native_color: Resource VMdos-1 cannot run anywhere
Apr 12 16:22:35 [25030] h1 pengine: debug: native_assign_node: Could not allocate a node for VMdos-2
Apr 12 16:22:35 [25030] h1 pengine: info: native_color: Resource VMdos-2 cannot run anywhere
Apr 12 16:22:35 [25030] h1 pengine: info: LogActions: Leave VMdos-1 (Stopped)
Apr 12 16:22:35 [25030] h1 pengine: info: LogActions: Leave VMdos-2 (Stopped)
Apr 12 16:22:35 [25030] h1 pengine: notice: process_pe_message: Calculated Transition 4: /var/lib/pacemaker/pengine/pe-input-4.bz2
Apr 12 16:22:35 [25031] h1 crmd: debug: s_crmd_fsa: Processing I_PE_SUCCESS: [ state=S_POLICY_ENGINE cause=C_IPC_MESSAGE origin=handle_response ]
Apr 12 16:22:35 [25031] h1 crmd: info: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
Apr 12 16:22:35 [25031] h1 crmd: debug: unpack_graph: Unpacked transition 4: 0 actions in 0 synapses
Apr 12 16:22:35 [25031] h1 crmd: info: do_te_invoke: Processing graph 4 (ref=pe_calc-dc-1397290955-21) derived from /var/lib/pacemaker/pengine/pe-input-4.bz2
Apr 12 16:22:35 [25031] h1 crmd: debug: print_graph: Empty transition graph
Apr 12 16:22:35 [25026] h1 cib: info: cib_process_request: Completed cib_delete operation for section status: OK (rc=0, origin=local/crm_node/3, version=0.12.2)
Apr 12 16:22:35 [25031] h1 crmd: debug: te_update_diff: Processing diff (cib_delete): 0.12.1 -> 0.12.2 (S_TRANSITION_ENGINE)
Apr 12 16:22:35 [25031] h1 crmd: info: abort_transition_graph: te_update_diff:188 - Triggered transition abort (complete=0, node=h1, tag=transient_attributes, id=1084752017, magic=NA, cib=0.12.2) : Transient attribute: removal
Apr 12 16:22:35 [25031] h1 crmd: debug: abort_transition_graph: Cause <transient_attributes id="1084752017">
Apr 12 16:22:35 [25031] h1 crmd: debug: abort_transition_graph: Cause <instance_attributes id="status-1084752017">
Apr 12 16:22:35 [25031] h1 crmd: debug: abort_transition_graph: Cause <nvpair id="status-1084752017-shutdown" name="shutdown" value="0"/>
Apr 12 16:22:35 [25031] h1 crmd: debug: abort_transition_graph: Cause <nvpair id="status-1084752017-probe_complete" name="probe_complete" value="true"/>
Apr 12 16:22:35 [25031] h1 crmd: debug: abort_transition_graph: Cause </instance_attributes>
Apr 12 16:22:35 [25031] h1 crmd: debug: abort_transition_graph: Cause </transient_attributes>
Apr 12 16:22:35 [25031] h1 crmd: debug: update_abort_priority: Abort priority upgraded from 0 to 1000000
Apr 12 16:22:35 [25031] h1 crmd: debug: update_abort_priority: Abort action done superceeded by restart
Apr 12 16:22:35 [25031] h1 crmd: notice: run_graph: Transition 4 (Complete=0, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-input-4.bz2): Complete
Apr 12 16:22:35 [25031] h1 crmd: debug: print_graph: Empty transition graph
Apr 12 16:22:35 [25031] h1 crmd: debug: te_graph_trigger: Transition 4 is now complete
Apr 12 16:22:35 [25031] h1 crmd: debug: notify_crmd: Processing transition completion in state S_TRANSITION_ENGINE
Apr 12 16:22:35 [25031] h1 crmd: debug: notify_crmd: Transition 4 status: restart - Transient attribute: removal
Apr 12 16:22:35 [25031] h1 crmd: debug: s_crmd_fsa: Processing I_PE_CALC: [ state=S_TRANSITION_ENGINE cause=C_FSA_INTERNAL origin=notify_crmd ]
Apr 12 16:22:35 [25031] h1 crmd: info: do_state_transition: State transition S_TRANSITION_ENGINE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=notify_crmd ]
Apr 12 16:22:35 [25031] h1 crmd: debug: do_state_transition: All 1 cluster nodes are eligible to run resources.
Apr 12 16:22:35 [25031] h1 crmd: debug: do_pe_invoke: Query 39: Requesting the current CIB: S_POLICY_ENGINE
Apr 12 16:22:35 [25024] h1 pacemakerd: info: crm_client_new: Connecting 0x25b4ea0 for uid=0 gid=0 pid=14152 id=f3612e17-0806-4355-a3fc-2cf1feda1e6d
Apr 12 16:22:35 [25024] h1 pacemakerd: debug: handle_new_connection: IPC credentials authenticated (25024-14152-10)
Apr 12 16:22:35 [25024] h1 pacemakerd: debug: qb_ipcs_shm_connect: connecting to client [14152]
Apr 12 16:22:35 [25026] h1 cib: info: cib_process_request: Completed cib_query operation for section 'all': OK (rc=0, origin=local/crmd/39, version=0.12.2)
Apr 12 16:22:35 [25024] h1 pacemakerd: debug: qb_rb_open_2: shm size:131085; real_size:135168; rb->word_size:33792
Apr 12 16:22:35 [25026] h1 cib: debug: qb_ipcs_dispatch_connection_request: HUP conn (25026-14152-13)
Apr 12 16:22:35 [25026] h1 cib: debug: qb_ipcs_disconnect: qb_ipcs_disconnect(25026-14152-13) state:2
Apr 12 16:22:35 [25026] h1 cib: info: crm_client_destroy: Destroying 0 events
Apr 12 16:22:35 [25026] h1 cib: debug: qb_rb_close: Free'ing ringbuffer: /dev/shm/qb-cib_rw-response-25026-14152-13-header
Apr 12 16:22:35 [25026] h1 cib: debug: qb_rb_close: Free'ing ringbuffer: /dev/shm/qb-cib_rw-event-25026-14152-13-header
Apr 12 16:22:35 [25024] h1 pacemakerd: debug: qb_rb_open_2: shm size:131085; real_size:135168; rb->word_size:33792
Apr 12 16:22:35 [25024] h1 pacemakerd: debug: qb_rb_open_2: shm size:131085; real_size:135168; rb->word_size:33792
Apr 12 16:22:35 [25003] h1 corosync debug [QB ] ipc_setup.c:478 IPC credentials authenticated (25004-14152-27)
Apr 12 16:22:35 [25003] h1 corosync debug [QB ] ipc_shm.c:294 connecting to client [14152]
Apr 12 16:22:35 [25003] h1 corosync debug [QB ] ringbuffer.c:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Apr 12 16:22:35 [25031] h1 crmd: debug: do_pe_invoke_callback: Invoking the PE: query=39, ref=pe_calc-dc-1397290955-22, seq=1024, quorate=1
Apr 12 16:22:35 [25031] h1 crmd: debug: qb_ipcs_dispatch_connection_request: HUP conn (25031-14152-14)
Apr 12 16:22:35 [25031] h1 crmd: debug: qb_ipcs_disconnect: qb_ipcs_disconnect(25031-14152-14) state:2
Apr 12 16:22:35 [25031] h1 crmd: info: crm_client_destroy: Destroying 0 events
Apr 12 16:22:35 [25031] h1 crmd: debug: qb_rb_close: Free'ing ringbuffer: /dev/shm/qb-crmd-response-25031-14152-14-header
...
Apr 12 16:22:35 [25030] h1 pengine: debug: unpack_config: STONITH timeout: 60000
Apr 12 16:22:35 [25030] h1 pengine: debug: unpack_config: STONITH of failed nodes is disabled
Apr 12 16:22:35 [25030] h1 pengine: debug: unpack_config: Stop all active resources: false
Apr 12 16:22:35 [25030] h1 pengine: debug: unpack_config: Cluster is symmetric - resources can run anywhere by default
Apr 12 16:22:35 [25030] h1 pengine: debug: unpack_config: Default stickiness: 0
Apr 12 16:22:35 [25030] h1 pengine: debug: unpack_config: On loss of CCM Quorum: Stop ALL resources
Apr 12 16:22:35 [25030] h1 pengine: debug: unpack_config: Node scores: 'red' = -INFINITY, 'yellow' = 0, 'green' = 0
Apr 12 16:22:35 [25030] h1 pengine: debug: unpack_domains: Unpacking domains
Apr 12 16:22:35 [25030] h1 pengine: info: native_print: VMdos-1 (ocf::heartbeat:VirtualDomain): Stopped
Apr 12 16:22:35 [25030] h1 pengine: info: native_print: VMdos-2 (ocf::heartbeat:VirtualDomain): Stopped
Apr 12 16:22:35 [25030] h1 pengine: debug: native_assign_node: Could not allocate a node for VMdos-1
Apr 12 16:22:35 [25030] h1 pengine: info: native_color: Resource VMdos-1 cannot run anywhere
Apr 12 16:22:35 [25030] h1 pengine: debug: native_assign_node: Could not allocate a node for VMdos-2
Apr 12 16:22:35 [25030] h1 pengine: info: native_color: Resource VMdos-2 cannot run anywhere
Apr 12 16:22:35 [25030] h1 pengine: info: LogActions: Leave VMdos-1 (Stopped)
Apr 12 16:22:35 [25030] h1 pengine: info: LogActions: Leave VMdos-2 (Stopped)
Apr 12 16:22:35 [25030] h1 pengine: notice: process_pe_message: Calculated Transition 5: /var/lib/pacemaker/pengine/pe-input-5.bz2
Apr 12 16:22:35 [25031] h1 crmd: debug: s_crmd_fsa: Processing I_PE_SUCCESS: [ state=S_POLICY_ENGINE cause=C_IPC_MESSAGE origin=handle_response ]
Apr 12 16:22:35 [25031] h1 crmd: info: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
Apr 12 16:22:35 [25031] h1 crmd: debug: unpack_graph: Unpacked transition 5: 0 actions in 0 synapses
Apr 12 16:22:35 [25031] h1 crmd: info: do_te_invoke: Processing graph 5 (ref=pe_calc-dc-1397290955-22) derived from /var/lib/pacemaker/pengine/pe-input-5.bz2
Apr 12 16:22:35 [25031] h1 crmd: debug: print_graph: Empty transition graph
Apr 12 16:22:35 [25031] h1 crmd: notice: run_graph: Transition 5 (Complete=0, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-input-5.bz2): Complete
Apr 12 16:22:35 [25031] h1 crmd: debug: print_graph: Empty transition graph
Apr 12 16:22:35 [25031] h1 crmd: debug: te_graph_trigger: Transition 5 is now complete
Apr 12 16:22:35 [25031] h1 crmd: debug: notify_crmd: Processing transition completion in state S_TRANSITION_ENGINE
Apr 12 16:22:35 [25031] h1 crmd: debug: notify_crmd: Transition 5 status: done - <null>
Apr 12 16:22:35 [25031] h1 crmd: debug: s_crmd_fsa: Processing I_TE_SUCCESS: [ state=S_TRANSITION_ENGINE cause=C_FSA_INTERNAL origin=notify_crmd ]
Apr 12 16:22:35 [25031] h1 crmd: info: do_log: FSA: Input I_TE_SUCCESS from notify_crmd() received in state S_TRANSITION_ENGINE
Apr 12 16:22:35 [25031] h1 crmd: notice: do_state_transition: State transition S_TRANSITION_ENGINE -> S_IDLE [ input=I_TE_SUCCESS cause=C_FSA_INTERNAL origin=notify_crmd ]
Apr 12 16:22:35 [25031] h1 crmd: debug: do_state_transition: Starting PEngine Recheck Timer
Apr 12 16:22:35 [25031] h1 crmd: debug: crm_timer_start: Started PEngine Recheck Timer (I_PE_CALC:900000ms), src=62
Apr 12 16:22:35 [25003] h1 corosync debug [QB ] ringbuffer.c:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Apr 12 16:22:35 [25026] h1 cib: info: write_cib_contents: Archived previous version as /var/lib/pacemaker/cib/cib-9.raw
Apr 12 16:22:35 [25026] h1 cib: debug: write_cib_contents: Writing CIB to disk
Apr 12 16:22:35 [25026] h1 cib: debug: qb_rb_close: Free'ing ringbuffer: /dev/shm/qb-cib_rw-request-25026-14152-13-header
Apr 12 16:22:35 [25026] h1 cib: info: write_cib_contents: Wrote version 0.12.0 of the CIB to disk (digest: bd7d26226d6aa75f28b9eb670a67e944)
Apr 12 16:22:35 [25003] h1 corosync debug [QB ] ringbuffer.c:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Apr 12 16:22:35 [25003] h1 corosync debug [MAIN ] ipc_glue.c:272 connection created
Apr 12 16:22:35 [25003] h1 corosync debug [QB ] cmap.c:306 lib_init_fn: conn=0x7f32d0f4f4b0
Apr 12 16:22:35 [25024] h1 pacemakerd: notice: pcmk_ipc_dispatch: Instructing peers to remove references to node (null)/1084752016
Apr 12 16:22:35 [25026] h1 cib: info: crm_client_new: Connecting 0xa23770 for uid=0 gid=0 pid=14152 id=701dfa3c-d585-49d2-bd1e-f44636823e1b
Apr 12 16:22:35 [25026] h1 cib: debug: handle_new_connection: IPC credentials authenticated (25026-14152-13)
Apr 12 16:22:35 [25026] h1 cib: debug: qb_ipcs_shm_connect: connecting to client [14152]
Apr 12 16:22:35 [25026] h1 cib: debug: write_cib_contents: Wrote digest bd7d26226d6aa75f28b9eb670a67e944 to disk
Apr 12 16:22:35 [25026] h1 cib: info: retrieveCib: Reading cluster configuration from: /var/lib/pacemaker/cib/cib.y6wnAg (digest: /var/lib/pacemaker/cib/cib.ncXFvs)
...
Apr 12 16:22:35 [25026] h1 cib: debug: write_cib_contents: Activating /var/lib/pacemaker/cib/cib.y6wnAg
Apr 12 16:22:35 [25026] h1 cib: info: cib_process_request: Completed cib_delete operation for section nodes: OK (rc=0, origin=local/crm_node/2, version=0.12.2)
Apr 12 16:22:35 [25003] h1 corosync debug [QB ] ipcs.c:757 HUP conn (25004-14152-27)
Apr 12 16:22:35 [25003] h1 corosync debug [QB ] ipcs.c:605 qb_ipcs_disconnect(25004-14152-27) state:2
Apr 12 16:22:35 [25003] h1 corosync debug [QB ] loop_poll_epoll.c:117 epoll_ctl(del): Bad file descriptor (9)
Apr 12 16:22:35 [25003] h1 corosync debug [MAIN ] ipc_glue.c:417 cs_ipcs_connection_closed()
Apr 12 16:22:35 [25003] h1 corosync debug [QB ] cmap.c:325 exit_fn for conn=0x7f32d0f4f4b0
Apr 12 16:22:35 [25003] h1 corosync debug [MAIN ] ipc_glue.c:390 cs_ipcs_connection_destroyed()
Apr 12 16:22:35 [25026] h1 cib: info: cib_process_request: Completed cib_delete operation for section status: OK (rc=0, origin=local/crm_node/3, version=0.12.2)
...
Apr 12 16:22:35 [25024] h1 pacemakerd: debug: qb_ipcs_dispatch_connection_request: HUP conn (25024-14152-10)
Apr 12 16:22:35 [25024] h1 pacemakerd: debug: qb_ipcs_disconnect: qb_ipcs_disconnect(25024-14152-10) state:2
Apr 12 16:22:35 [25024] h1 pacemakerd: info: crm_client_destroy: Destroying 0 events
...
Apr 12 16:22:35 [25026] h1 cib: debug: qb_ipcs_dispatch_connection_request: HUP conn (25026-14152-13)
Apr 12 16:22:35 [25026] h1 cib: debug: qb_ipcs_disconnect: qb_ipcs_disconnect(25026-14152-13) state:2
Apr 12 16:22:35 [25026] h1 cib: info: crm_client_destroy: Destroying 0 events
...
Apr 12 16:22:35 [25024] h1 pacemakerd: notice: crm_reap_dead_member: Removing h0/1084752016 from the membership list
Apr 12 16:22:35 [25024] h1 pacemakerd: notice: reap_crm_member: Purged 1 peers with id=1084752016 and/or uname=(null) from the membership cache