[Pacemaker] Removing a cluster node leads to another node being rebooted
jiaju liu
liujiaju86 at yahoo.com.cn
Tue Dec 7 06:55:29 UTC 2010
There are three nodes in my cluster: oss1, oss2, and oss3.
I removed node oss1 following the procedure in Pacemaker Explained (the full command sequence is recapped below):
On oss1:
1. Find and record the node's Corosync id: crm_node -i
The id is 1678456074.
2. Stop the cluster: /etc/init.d/corosync stop
On oss3:
1. Tell the cluster to forget about the removed host: crm_node -R 1678456074 (the COROSYNC_ID recorded above)
2. Only now is it safe to delete the node from the CIB with:
cibadmin --delete --obj_type nodes --crm_xml '<node uname="oss1"/>'
cibadmin --delete --obj_type status --crm_xml '<node_state uname="oss1"/>'
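To recap, the removal sequence I ran (the same commands as above, using oss1's Corosync id 1678456074) was:

    # on oss1, the node being removed
    crm_node -i                        # record the Corosync id (1678456074)
    /etc/init.d/corosync stop          # stop the cluster stack on oss1

    # on oss3, a remaining node
    crm_node -R 1678456074             # tell the cluster to forget the removed host
    cibadmin --delete --obj_type nodes --crm_xml '<node uname="oss1"/>'
    cibadmin --delete --obj_type status --crm_xml '<node_state uname="oss1"/>'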
oss1 was removed from the cluster; however, oss2 was then rebooted by STONITH. The log is as follows:
Dec 07 10:41:01 corosync [pcmk ] info: pcmk_peer_update: memb: oss3 1728787722
Dec 07 10:41:01 corosync [pcmk ] info: pcmk_peer_update: memb: oss3 1728787722
Dec 07 10:41:01 corosync [pcmk ] info: pcmk_peer_update: lost: oss1 1678456074
Dec 07 10:41:01 corosync [pcmk ] notice: pcmk_peer_update: Stable membership event on ring 2520: memb=2, new=0, lost=0
Dec 07 10:41:01 corosync [pcmk ] info: pcmk_peer_update: MEMB: oss2 1712010506
Dec 07 10:41:01 corosync [pcmk ] info: pcmk_peer_update: MEMB: oss3 1728787722
Dec 07 10:41:01 corosync [pcmk ] info: ais_mark_unseen_peer_dead: Node oss1 was not seen in the previous transition
Dec 07 10:41:01 corosync [pcmk ] info: update_member: Node 1678456074/oss1 is now: lost
Dec 07 10:41:01 corosync [pcmk ] info: send_member_notification: Sending membership update 2520 to 2 children
Dec 07 10:41:01 oss3 crmd: [6631]: info: ais_dispatch: Membership 2520: quorum retained
Dec 07 10:41:01 corosync [TOTEM ] A processor joined or left the membership and a new membership was formed.
Dec 07 10:41:01 oss3 crmd: [6631]: info: crm_update_peer: Node oss1: id=1678456074 state=lost (new) addr=r(0) ip(10.53.11.100) votes=1 born=2496 seen=2516 proc=00000000000000000000000000000002
Dec 07 10:41:01 corosync [MAIN ] Completed service synchronization, ready to provide service.
Dec 07 10:41:01 oss3 crmd: [6631]: WARN: check_dead_member: Our DC node (oss1) left the cluster
Dec 07 10:41:01 oss3 crmd: [6631]: info: do_state_transition: State transition S_NOT_DC -> S_ELECTION [ input=I_ELECTION cause=C_FSA_INTERNAL origin=check_dead_member ]
Dec 07 10:41:01 oss3 crmd: [6631]: info: update_dc: Unset DC oss1
Dec 07 10:41:01 oss3 crmd: [6631]: info: do_election_count_vote: Election 2 (owner: oss2) pass: vote from oss2 (Age)
Dec 07 10:41:01 oss3 crmd: [6631]: info: do_state_transition: State transition S_ELECTION -> S_INTEGRATION [ input=I_ELECTION_DC cause=C_FSA_INTERNAL origin=do_election_check ]
Dec 07 10:41:01 oss3 crmd: [6631]: info: do_te_control: Registering TE UUID: 8e7e177a-8f4c-4e58-8d8c-1e53e1972183
Dec 07 10:41:01 oss3 crmd: [6631]: WARN: cib_client_add_notify_callback: Callback already present
Dec 07 10:41:01 oss3 crmd: [6631]: info: set_graph_functions: Setting custom graph functions
Dec 07 10:41:01 oss3 crmd: [6631]: info: unpack_graph: Unpacked transition -1: 0 actions in 0 synapses
Dec 07 10:41:01 oss3 crmd: [6631]: info: do_dc_takeover: Taking over DC status for this partition
Dec 07 10:41:01 oss3 cib: [6627]: info: cib_process_readwrite: We are now in R/W mode
Dec 07 10:41:01 oss3 cib: [6627]: info: cib_process_request: Operation complete: op cib_master for section 'all' (origin=local/crmd/66, version=0.44.14): ok (rc=0)
Dec 07 10:41:01 oss3 cib: [6627]: info: cib_process_request: Operation complete: op cib_modify for section cib (origin=local/crmd/67, version=0.44.14): ok (rc=0)
Dec 07 10:41:01 oss3 cib: [6627]: info: log_data_element: cib:diff: - <cib admin_epoch="0" epoch="44" num_updates="14" >
Dec 07 10:41:01 oss3 cib: [6627]: info: log_data_element: cib:diff: - <configuration >
Dec 07 10:41:01 oss3 cib: [6627]: info: log_data_element: cib:diff: - <crm_config >
Dec 07 10:41:01 oss3 cib: [6627]: info: log_data_element: cib:diff: - <cluster_property_set id="cib-bootstrap-options" >
Dec 07 10:41:01 oss3 cib: [6627]: info: log_data_element: cib:diff: - <nvpair value="1.0.9-89bd754939df5150de7cd76835f98fe90851b677" id="cib-bootstrap-options-dc-version" />
Dec 07 10:41:01 oss3 cib: [6627]: info: log_data_element: cib:diff: - </cluster_property_set>
Dec 07 10:41:01 oss3 cib: [6627]: info: log_data_element: cib:diff: - </crm_config>
Dec 07 10:41:02 oss3 cib: [6627]: info: log_data_element: cib:diff: - </configuration>
Dec 07 10:41:02 oss3 cib: [6627]: info: log_data_element: cib:diff: - </cib>
Dec 07 10:41:02 oss3 cib: [6627]: info: log_data_element: cib:diff: + <cib admin_epoch="0" epoch="45" num_updates="1" >
Dec 07 10:41:02 oss3 cib: [6627]: info: log_data_element: cib:diff: + <configuration >
Dec 07 10:41:02 oss3 cib: [6627]: info: log_data_element: cib:diff: + <crm_config >
Dec 07 10:41:02 oss3 cib: [6627]: info: log_data_element: cib:diff: + <cluster_property_set id="cib-bootstrap-options" >
Dec 07 10:41:02 oss3 cib: [6627]: info: log_data_element: cib:diff: + <nvpair value="1.0.8-9881a7350d6182bae9e8e557cf20a3cc5dac3ee7" id="cib-bootstrap-options-dc-version" />
Dec 07 10:41:02 oss3 cib: [6627]: info: log_data_element: cib:diff: + </cluster_property_set>
Dec 07 10:41:02 oss3 cib: [6627]: info: log_data_element: cib:diff: + </crm_config>
Dec 07 10:41:02 oss3 cib: [6627]: info: log_data_element: cib:diff: + </configuration>
Dec 07 10:41:02 oss3 cib: [6627]: info: log_data_element: cib:diff: + </cib>
Dec 07 10:41:02 oss3 cib: [6627]: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/69, version=0.45.1): ok (rc=0)
Dec 07 10:41:02 oss3 crmd: [6631]: info: join_make_offer: Making join offers based on membership 2520
Dec 07 10:41:02 oss3 crmd: [6631]: info: do_dc_join_offer_all: join-1: Waiting on 2 outstanding join acks
Dec 07 10:41:02 oss3 cib: [9644]: info: write_cib_contents: Archived previous version as /var/lib/heartbeat/crm/cib-30.raw
Dec 07 10:41:02 oss3 crmd: [6631]: info: ais_dispatch: Membership 2520: quorum retained
Dec 07 10:41:02 oss3 cib: [6627]: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/71, version=0.45.1): ok (rc=0)
Dec 07 10:41:02 oss3 cib: [9644]: info: write_cib_contents: Wrote version 0.45.0 of the CIB to disk (digest: c246873396157776474a2218c8dab13e)
Dec 07 10:41:02 oss3 crmd: [6631]: info: crm_ais_dispatch: Setting expected votes to 3
Dec 07 10:41:02 oss3 cib: [6627]: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/74, version=0.45.1): ok (rc=0)
Dec 07 10:41:02 oss3 cib: [9644]: info: retrieveCib: Reading cluster configuration from: /var/lib/heartbeat/crm/cib.ruwNJZ (digest: /var/lib/heartbeat/crm/cib.1TMs5J)
Dec 07 10:41:02 oss3 crmd: [6631]: info: config_query_callback: Checking for expired actions every 900000ms
Dec 07 10:41:02 oss3 crmd: [6631]: info: config_query_callback: Sending expected-votes=3 to corosync
Dec 07 10:41:02 oss3 crmd: [6631]: info: update_dc: Set DC to oss3 (3.0.1)
Dec 07 10:41:02 oss3 crmd: [6631]: info: ais_dispatch: Membership 2520: quorum retained
Dec 07 10:41:02 oss3 crmd: [6631]: info: crm_ais_dispatch: Setting expected votes to 3
Dec 07 10:41:02 oss3 crmd: [6631]: info: te_connect_stonith: Attempting connection to fencing daemon...
Dec 07 10:41:02 oss3 cib: [6627]: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/77, version=0.45.1): ok (rc=0)
Dec 07 10:41:03 oss3 crmd: [6631]: info: te_connect_stonith: Connected
Dec 07 10:41:03 oss3 crmd: [6631]: info: do_state_transition: State transition S_INTEGRATION -> S_FINALIZE_JOIN [ input=I_INTEGRATED cause=C_FSA_INTERNAL origin=check_join_state ]
Dec 07 10:41:03 oss3 crmd: [6631]: info: do_state_transition: All 2 cluster nodes responded to the join offer.
Dec 07 10:41:03 oss3 crmd: [6631]: info: do_dc_join_finalize: join-1: Syncing the CIB from oss3 to the rest of the cluster
Dec 07 10:41:03 oss3 cib: [6627]: info: cib_process_request: Operation complete: op cib_sync for section 'all' (origin=local/crmd/78, version=0.45.1): ok (rc=0)
Dec 07 10:41:03 oss3 cib: [6627]: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/79, version=0.45.1): ok (rc=0)
Dec 07 10:41:03 oss3 cib: [6627]: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/80, version=0.45.1): ok (rc=0)
Dec 07 10:41:03 oss3 crmd: [6631]: info: do_dc_join_ack: join-1: Updating node state to member for oss3
Dec 07 10:41:03 oss3 crmd: [6631]: info: do_dc_join_ack: join-1: Updating node state to member for oss2
Dec 07 10:41:03 oss3 cib: [6627]: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='oss3']/lrm (origin=local/crmd/81, version=0.45.2): ok (rc=0)
Dec 07 10:41:03 oss3 crmd: [6631]: info: erase_xpath_callback: Deletion of "//node_state[@uname='oss3']/lrm": ok (rc=0)
Dec 07 10:41:03 oss3 crmd: [6631]: info: do_state_transition: State transition S_FINALIZE_JOIN -> S_POLICY_ENGINE [ input=I_FINALIZED cause=C_FSA_INTERNAL origin=check_join_state ]
Dec 07 10:41:03 oss3 crmd: [6631]: info: do_state_transition: All 2 cluster nodes are eligible to run resources.
Dec 07 10:41:03 oss3 crmd: [6631]: info: do_dc_join_final: Ensuring DC, quorum and node attributes are up-to-date
Dec 07 10:41:03 oss3 crmd: [6631]: info: crm_update_quorum: Updating quorum status to true (call=87)
Dec 07 10:41:03 oss3 crmd: [6631]: info: abort_transition_graph: do_te_invoke:191 - Triggered transition abort (complete=1) : Peer Cancelled
Dec 07 10:41:03 oss3 crmd: [6631]: info: do_pe_invoke: Query 88: Requesting the current CIB: S_POLICY_ENGINE
Dec 07 10:41:03 oss3 cib: [6627]: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='oss2']/lrm (origin=local/crmd/83, version=0.45.4): ok (rc=0)
Dec 07 10:41:03 oss3 crmd: [6631]: info: abort_transition_graph: te_update_diff:267 - Triggered transition abort (complete=1, tag=lrm_rsc_op, id=pingd_manage:1_monitor_0, magic=0:7;16:23:7:7d309d23-fbb2-4706-b661-26193b37b3d5, cib=0.45.4) : Resource op removal
Dec 07 10:41:03 oss3 crmd: [6631]: info: erase_xpath_callback: Deletion of "//node_state[@uname='oss2']/lrm": ok (rc=0)
Dec 07 10:41:03 oss3 crmd: [6631]: info: te_update_diff: Detected LRM refresh - 4 resources updated: Skipping all resource events
Dec 07 10:41:03 oss3 cib: [6627]: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/85, version=0.45.5): ok (rc=0)
Dec 07 10:41:03 oss3 crmd: [6631]: info: abort_transition_graph: te_update_diff:227 - Triggered transition abort (complete=1, tag=diff, id=(null), magic=NA, cib=0.45.5) : LRM Refresh
Dec 07 10:41:03 oss3 crmd: [6631]: info: do_pe_invoke: Query 89: Requesting the current CIB: S_POLICY_ENGINE
Dec 07 10:41:03 oss3 crmd: [6631]: info: do_pe_invoke: Query 90: Requesting the current CIB: S_POLICY_ENGINE
Dec 07 10:41:03 oss3 crmd: [6631]: WARN: match_down_event: No match for shutdown action on oss1
Dec 07 10:41:03 oss3 crmd: [6631]: info: te_update_diff: Stonith/shutdown of oss1 not matched
Dec 07 10:41:03 oss3 crmd: [6631]: info: abort_transition_graph: te_update_diff:191 - Triggered transition abort (complete=1, tag=node_state, id=oss1, magic=NA, cib=0.45.6) : Node failure
Dec 07 10:41:03 oss3 crmd: [6631]: info: do_pe_invoke: Query 91: Requesting the current CIB: S_POLICY_ENGINE
Dec 07 10:41:03 oss3 cib: [6627]: info: log_data_element: cib:diff: - <cib dc-uuid="oss1" admin_epoch="0" epoch="45" num_updates="6" />
Dec 07 10:41:03 oss3 crmd: [6631]: info: abort_transition_graph: need_abort:59 - Triggered transition abort (complete=1) : Non-status change
Dec 07 10:41:03 oss3 cib: [6627]: info: log_data_element: cib:diff: + <cib dc-uuid="oss3" admin_epoch="0" epoch="46" num_updates="1" />
Dec 07 10:41:03 oss3 crmd: [6631]: info: need_abort: Aborting on change to admin_epoch
Dec 07 10:41:03 oss3 cib: [6627]: info: cib_process_request: Operation complete: op cib_modify for section cib (origin=local/crmd/87, version=0.46.1): ok (rc=0)
Dec 07 10:41:03 oss3 crmd: [6631]: info: do_pe_invoke: Query 92: Requesting the current CIB: S_POLICY_ENGINE
Dec 07 10:41:03 oss3 crmd: [6631]: info: do_pe_invoke_callback: Invoking the PE: query=92, ref=pe_calc-dc-1291689663-56, seq=2520, quorate=1
Dec 07 10:41:03 oss3 pengine: [6630]: info: unpack_config: Node scores: 'red' = -INFINITY, 'yellow' = 0, 'green' = 0
Dec 07 10:41:03 oss3 pengine: [6630]: info: determine_online_status: Node oss2 is online
Dec 07 10:41:03 oss3 pengine: [6630]: info: determine_online_status: Node oss3 is online
Dec 07 10:41:03 oss3 pengine: [6630]: info: determine_online_status_fencing: Node oss1 is down
Dec 07 10:41:03 oss3 cib: [9652]: info: write_cib_contents: Archived previous version as /var/lib/heartbeat/crm/cib-31.raw
Dec 07 10:41:03 oss3 pengine: [6630]: info: find_clone: Internally renamed pingd_data:0 on oss1 to pingd_data:2
Dec 07 10:41:03 oss3 attrd: [6629]: info: attrd_local_callback: Sending full refresh (origin=crmd)
Dec 07 10:41:03 oss3 cib: [9652]: info: write_cib_contents: Wrote version 0.46.0 of the CIB to disk (digest: 8eca0f6b62912f869b1f23a62b2f3862)
Dec 07 10:41:03 oss3 pengine: [6630]: notice: native_print: ipmi_oss2 (stonith:external/ipmi): Started oss3
Dec 07 10:41:03 oss3 attrd: [6629]: info: attrd_trigger_update: Sending flush op to all hosts for: pingd_manage (1)
Dec 07 10:41:03 oss3 pengine: [6630]: notice: native_print: ipmi_oss3 (stonith:external/ipmi): Started oss2
Dec 07 10:41:03 oss3 cib: [9652]: info: retrieveCib: Reading cluster configuration from: /var/lib/heartbeat/crm/cib.BMzXQ2 (digest: /var/lib/heartbeat/crm/cib.rLOMjQ)
Dec 07 10:41:03 oss3 attrd: [6629]: info: attrd_trigger_update: Sending flush op to all hosts for: probe_complete (true)
Dec 07 10:41:03 oss3 pengine: [6630]: notice: clone_print: Clone Set: pingd_data_net
Dec 07 10:41:03 oss3 attrd: [6629]: info: attrd_trigger_update: Sending flush op to all hosts for: fail-count-ipmi_oss1 (<null>)
Dec 07 10:41:03 oss3 pengine: [6630]: notice: short_print: Started: [ oss3 oss2 ]
Dec 07 10:41:03 oss3 pengine: [6630]: notice: short_print: Stopped: [ pingd_data:2 ]
Dec 07 10:41:03 oss3 attrd: [6629]: info: attrd_trigger_update: Sending flush op to all hosts for: terminate (<null>)
Dec 07 10:41:03 oss3 pengine: [6630]: notice: clone_print: Clone Set: pingd_manage_net
Dec 07 10:41:03 oss3 attrd: [6629]: info: attrd_trigger_update: Sending flush op to all hosts for: shutdown (<null>)
Dec 07 10:41:03 oss3 pengine: [6630]: notice: short_print: Started: [ oss3 oss2 ]
Dec 07 10:41:03 oss3 attrd: [6629]: info: attrd_trigger_update: Sending flush op to all hosts for: pingd_data (1)
Dec 07 10:41:03 oss3 pengine: [6630]: notice: short_print: Stopped: [ pingd_manage:2 ]
Dec 07 10:41:03 oss3 pengine: [6630]: WARN: native_color: Resource pingd_data:2 cannot run anywhere
Dec 07 10:41:03 oss3 pengine: [6630]: WARN: native_color: Resource pingd_manage:2 cannot run anywhere
Dec 07 10:41:03 oss3 pengine: [6630]: notice: LogActions: Leave resource ipmi_oss2 (Started oss3)
Dec 07 10:41:03 oss3 pengine: [6630]: notice: LogActions: Leave resource ipmi_oss3 (Started oss2)
Dec 07 10:41:03 oss3 pengine: [6630]: notice: LogActions: Leave resource pingd_data:0(Started oss3)
Dec 07 10:41:03 oss3 pengine: [6630]: notice: LogActions: Leave resource pingd_data:1(Started oss2)
Dec 07 10:41:04 oss3 pengine: [6630]: notice: LogActions: Leave resource pingd_data:2(Stopped)
Dec 07 10:41:04 oss3 pengine: [6630]: notice: LogActions: Leave resource pingd_manage:0 (Started oss3)
Dec 07 10:41:04 oss3 pengine: [6630]: notice: LogActions: Leave resource pingd_manage:1 (Started oss2)
Dec 07 10:41:04 oss3 pengine: [6630]: notice: LogActions: Leave resource pingd_manage:2 (Stopped)
Dec 07 10:41:04 oss3 crmd: [6631]: info: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
Dec 07 10:41:04 oss3 crmd: [6631]: info: unpack_graph: Unpacked transition 0: 0 actions in 0 synapses
Dec 07 10:41:04 oss3 crmd: [6631]: info: do_te_invoke: Processing graph 0 (ref=pe_calc-dc-1291689663-56) derived from /var/lib/pengine/pe-warn-931.bz2
Dec 07 10:41:04 oss3 crmd: [6631]: info: run_graph: ====================================================
Dec 07 10:41:04 oss3 pengine: [6630]: WARN: process_pe_message: Transition 0: WARNINGs found during PE processing. PEngine Input stored in: /var/lib/pengine/pe-warn-931.bz2
Dec 07 10:41:04 oss3 crmd: [6631]: notice: run_graph: Transition 0 (Complete=0, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pengine/pe-warn-931.bz2): Complete
Dec 07 10:41:04 oss3 pengine: [6630]: info: process_pe_message: Configuration WARNINGs found during PE processing. Please run "crm_verify -L" to identify issues.
Dec 07 10:41:04 oss3 crmd: [6631]: info: te_graph_trigger: Transition 0 is now complete
Dec 07 10:41:04 oss3 crmd: [6631]: info: notify_crmd: Transition 0 status: done - <null>
Dec 07 10:41:04 oss3 crmd: [6631]: info: do_state_transition: State transition S_TRANSITION_ENGINE -> S_IDLE [ input=I_TE_SUCCESS cause=C_FSA_INTERNAL origin=notify_crmd ]
Dec 07 10:41:04 oss3 crmd: [6631]: info: do_state_transition: Starting PEngine Recheck Timer
Dec 07 10:42:14 corosync [pcmk ] info: pcmk_remove_member: Sent: remove-peer:1678456074
Dec 07 10:42:14 corosync [pcmk ] info: destroy_ais_node: Destroying entry for node 1678456074
Dec 07 10:42:14 corosync [pcmk ] notice: ais_remove_peer: Removed dead peer 1678456074 from the membership list
Dec 07 10:42:14 corosync [pcmk ] info: ais_remove_peer: Sending removal of 1678456074 to 2 children
Dec 07 10:42:14 oss3 crmd: [6631]: info: ais_dispatch: Membership 2520: quorum retained
Dec 07 10:42:14 oss3 cib: [6627]: info: ais_dispatch: Membership 2520: quorum retained
Dec 07 10:42:14 oss3 cib: [6627]: info: ais_dispatch: Removing peer 1678456074/1678456074
Dec 07 10:42:14 oss3 cib: [6627]: notice: reap_crm_member: Removed dead peer 1678456074 from the uuid cache
Dec 07 10:42:14 oss3 cib: [6627]: notice: crm_reap_dead_member: Removing oss1/1678456074 from the membership list
Dec 07 10:42:14 oss3 cib: [6627]: notice: reap_crm_member: Removed 1 dead peers with id=1678456074 from the membership list
Dec 07 10:42:14 oss3 crmd: [6631]: info: crm_ais_dispatch: Setting expected votes to 2
Dec 07 10:42:14 oss3 crmd: [6631]: info: ais_dispatch: Removing peer 1678456074/1678456074
Dec 07 10:42:14 oss3 crmd: [6631]: notice: reap_crm_member: Removed dead peer 1678456074 from the uuid cache
Dec 07 10:42:14 oss3 cib: [6627]: info: log_data_element: cib:diff: - <cib admin_epoch="0" epoch="46" num_updates="1" >
Dec 07 10:42:14 oss3 crmd: [6631]: notice: crm_reap_dead_member: Removing oss1/1678456074 from the membership list
Dec 07 10:42:14 oss3 cib: [6627]: info: log_data_element: cib:diff: - <configuration >
Dec 07 10:42:14 oss3 crmd: [6631]: notice: reap_crm_member: Removed 1 dead peers with id=1678456074 from the membership list
Dec 07 10:42:14 oss3 cib: [6627]: info: log_data_element: cib:diff: - <crm_config >
Dec 07 10:42:14 oss3 cib: [6627]: info: log_data_element: cib:diff: - <cluster_property_set id="cib-bootstrap-options" >
Dec 07 10:42:14 oss3 cib: [6627]: info: log_data_element: cib:diff: - <nvpair value="3" id="cib-bootstrap-options-expected-quorum-votes" />
Dec 07 10:42:14 oss3 cib: [6627]: info: log_data_element: cib:diff: - </cluster_property_set>
Dec 07 10:42:14 oss3 cib: [6627]: info: log_data_element: cib:diff: - </crm_config>
Dec 07 10:42:14 oss3 cib: [6627]: info: log_data_element: cib:diff: - </configuration>
Dec 07 10:42:14 oss3 cib: [6627]: info: log_data_element: cib:diff: - </cib>
Dec 07 10:42:14 oss3 cib: [6627]: info: log_data_element: cib:diff: + <cib admin_epoch="0" epoch="47" num_updates="1" >
Dec 07 10:42:14 oss3 cib: [6627]: info: log_data_element: cib:diff: + <configuration >
Dec 07 10:42:14 oss3 cib: [6627]: info: log_data_element: cib:diff: + <crm_config >
Dec 07 10:42:14 oss3 cib: [6627]: info: log_data_element: cib:diff: + <cluster_property_set id="cib-bootstrap-options" >
Dec 07 10:42:14 oss3 cib: [6627]: info: log_data_element: cib:diff: + <nvpair value="2" id="cib-bootstrap-options-expected-quorum-votes" />
Dec 07 10:42:14 oss3 cib: [6627]: info: log_data_element: cib:diff: + </cluster_property_set>
Dec 07 10:42:14 oss3 cib: [6627]: info: log_data_element: cib:diff: + </crm_config>
Dec 07 10:42:14 oss3 cib: [6627]: info: log_data_element: cib:diff: + </configuration>
Dec 07 10:42:14 oss3 cib: [6627]: info: log_data_element: cib:diff: + </cib>
Dec 07 10:42:14 oss3 cib: [6627]: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/94, version=0.47.1): ok (rc=0)
Dec 07 10:42:14 oss3 cib: [9765]: info: write_cib_contents: Archived previous version as /var/lib/heartbeat/crm/cib-32.raw
Dec 07 10:42:14 oss3 cib: [9765]: info: write_cib_contents: Wrote version 0.47.0 of the CIB to disk (digest: 778635a548436cbadf3a4f203d64027d)
Dec 07 10:42:14 oss3 cib: [9765]: info: retrieveCib: Reading cluster configuration from: /var/lib/heartbeat/crm/cib.sZ7QGn (digest: /var/lib/heartbeat/crm/cib.xVOzZv)
Dec 07 10:43:00 oss3 cib: [6627]: info: cib_stats: Processed 160 operations (6312.00us average, 0% utilization) in the last 10min
Dec 07 10:43:19 oss3 cib: [6627]: info: log_data_element: cib:diff: - <cib admin_epoch="0" epoch="47" num_updates="1" >
Dec 07 10:43:19 oss3 cib: [6627]: info: log_data_element: cib:diff: - <configuration >
Dec 07 10:43:19 oss3 cib: [6627]: info: log_data_element: cib:diff: - <nodes >
Dec 07 10:43:19 oss3 cib: [6627]: info: log_data_element: cib:diff: - <node id="oss1" uname="oss1" type="normal" __crm_diff_marker__="removed:top" />
Dec 07 10:43:19 oss3 cib: [6627]: info: log_data_element: cib:diff: - </nodes>
Dec 07 10:43:19 oss3 crmd: [6631]: info: abort_transition_graph: need_abort:59 - Triggered transition abort (complete=1) : Non-status change
Dec 07 10:43:19 oss3 cib: [6627]: info: log_data_element: cib:diff: - </configuration>
Dec 07 10:43:19 oss3 crmd: [6631]: info: need_abort: Aborting on change to admin_epoch
Dec 07 10:43:19 oss3 cib: [6627]: info: log_data_element: cib:diff: - </cib>
Dec 07 10:43:19 oss3 cib: [6627]: info: log_data_element: cib:diff: + <cib admin_epoch="0" epoch="48" num_updates="1" />
Dec 07 10:43:19 oss3 crmd: [6631]: info: do_state_transition: State transition S_IDLE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=abort_transition_graph ]
Dec 07 10:43:19 oss3 cib: [6627]: info: cib_process_request: Operation complete: op cib_delete for section nodes (origin=local/cibadmin/2, version=0.48.1): ok (rc=0)
Dec 07 10:43:19 oss3 crmd: [6631]: info: do_state_transition: All 2 cluster nodes are eligible to run resources.
Dec 07 10:43:19 oss3 crmd: [6631]: info: do_pe_invoke: Query 95: Requesting the current CIB: S_POLICY_ENGINE
Dec 07 10:43:19 oss3 cib: [9846]: info: write_cib_contents: Archived previous version as /var/lib/heartbeat/crm/cib-33.raw
Dec 07 10:43:19 oss3 crmd: [6631]: info: do_pe_invoke_callback: Invoking the PE: query=95, ref=pe_calc-dc-1291689799-57, seq=2520, quorate=1
Dec 07 10:43:19 oss3 cib: [9846]: info: write_cib_contents: Wrote version 0.48.0 of the CIB to disk (digest: 492d665cd3fbd4dacbd7bb15b66e9c61)
Dec 07 10:43:19 oss3 pengine: [6630]: info: unpack_config: Node scores: 'red' = -INFINITY, 'yellow' = 0, 'green' = 0
Dec 07 10:43:19 oss3 pengine: [6630]: info: determine_online_status: Node oss2 is online
Dec 07 10:43:19 oss3 cib: [9846]: info: retrieveCib: Reading cluster configuration from: /var/lib/heartbeat/crm/cib.BE51Uu (digest: /var/lib/heartbeat/crm/cib.BXQVrK)
Dec 07 10:43:19 oss3 pengine: [6630]: info: determine_online_status: Node oss3 is online
Dec 07 10:43:19 oss3 pengine: [6630]: WARN: unpack_status: Node oss1 in status section no longer exists
Dec 07 10:43:19 oss3 pengine: [6630]: notice: native_print: ipmi_oss2 (stonith:external/ipmi): Started oss3
Dec 07 10:43:19 oss3 pengine: [6630]: notice: native_print: ipmi_oss3 (stonith:external/ipmi): Started oss2
Dec 07 10:43:19 oss3 pengine: [6630]: notice: clone_print: Clone Set: pingd_data_net
Dec 07 10:43:19 oss3 pengine: [6630]: notice: short_print: Started: [ oss3 oss2 ]
Dec 07 10:43:19 oss3 pengine: [6630]: notice: clone_print: Clone Set: pingd_manage_net
Dec 07 10:43:19 oss3 pengine: [6630]: notice: short_print: Started: [ oss3 oss2 ]
Dec 07 10:43:19 oss3 pengine: [6630]: notice: LogActions: Leave resource ipmi_oss2 (Started oss3)
Dec 07 10:43:19 oss3 pengine: [6630]: notice: LogActions: Leave resource ipmi_oss3 (Started oss2)
Dec 07 10:43:19 oss3 pengine: [6630]: notice: LogActions: Leave resource pingd_data:0(Started oss3)
Dec 07 10:43:19 oss3 pengine: [6630]: notice: LogActions: Leave resource pingd_data:1(Started oss2)
Dec 07 10:43:19 oss3 pengine: [6630]: notice: LogActions: Leave resource pingd_manage:0 (Started oss3)
Dec 07 10:43:19 oss3 pengine: [6630]: notice: LogActions: Leave resource pingd_manage:1 (Started oss2)
Dec 07 10:43:19 oss3 crmd: [6631]: info: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
Dec 07 10:43:19 oss3 crmd: [6631]: info: unpack_graph: Unpacked transition 1: 0 actions in 0 synapses
Dec 07 10:43:19 oss3 crmd: [6631]: info: do_te_invoke: Processing graph 1 (ref=pe_calc-dc-1291689799-57) derived from /var/lib/pengine/pe-input-374.bz2
Dec 07 10:43:19 oss3 crmd: [6631]: info: run_graph: ====================================================
Dec 07 10:43:19 oss3 pengine: [6630]: info: process_pe_message: Transition 1: PEngine Input stored in: /var/lib/pengine/pe-input-374.bz2
Dec 07 10:43:19 oss3 crmd: [6631]: notice: run_graph: Transition 1 (Complete=0, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pengine/pe-input-374.bz2): Complete
Dec 07 10:43:19 oss3 pengine: [6630]: info: process_pe_message: Configuration WARNINGs found during PE processing. Please run "crm_verify -L" to identify issues.
Dec 07 10:43:19 oss3 crmd: [6631]: info: te_graph_trigger: Transition 1 is now complete
Dec 07 10:43:19 oss3 crmd: [6631]: info: notify_crmd: Transition 1 status: done - <null>
Dec 07 10:43:19 oss3 crmd: [6631]: info: do_state_transition: State transition S_TRANSITION_ENGINE -> S_IDLE [ input=I_TE_SUCCESS cause=C_FSA_INTERNAL origin=notify_crmd ]
Dec 07 10:43:19 oss3 crmd: [6631]: info: do_state_transition: Starting PEngine Recheck Timer
Dec 07 10:43:55 oss3 crmd: [6631]: info: abort_transition_graph: te_update_diff:157 - Triggered transition abort (complete=1, tag=transient_attributes, id=oss2, magic=NA, cib=0.48.2) : Transient attribute: removal
Dec 07 10:43:55 oss3 crmd: [6631]: info: do_state_transition: State transition S_IDLE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=abort_transition_graph ]
Dec 07 10:43:55 oss3 crmd: [6631]: info: do_state_transition: All 2 cluster nodes are eligible to run resources.
Dec 07 10:43:55 oss3 crmd: [6631]: info: do_pe_invoke: Query 96: Requesting the current CIB: S_POLICY_ENGINE
Dec 07 10:43:55 oss3 crmd: [6631]: info: do_pe_invoke_callback: Invoking the PE: query=96, ref=pe_calc-dc-1291689835-58, seq=2520, quorate=1
Dec 07 10:43:55 oss3 pengine: [6630]: info: unpack_config: Node scores: 'red' = -INFINITY, 'yellow' = 0, 'green' = 0
Dec 07 10:43:55 oss3 pengine: [6630]: info: determine_online_status: Node oss3 is online
Dec 07 10:43:55 oss3 pengine: [6630]: WARN: unpack_status: Node oss1 in status section no longer exists
Dec 07 10:43:55 oss3 pengine: [6630]: notice: native_print: ipmi_oss2 (stonith:external/ipmi): Started oss3
Dec 07 10:43:55 oss3 pengine: [6630]: notice: native_print: ipmi_oss3 (stonith:external/ipmi): Stopped
Dec 07 10:43:55 oss3 pengine: [6630]: notice: clone_print: Clone Set: pingd_data_net
Dec 07 10:43:55 oss3 pengine: [6630]: notice: short_print: Started: [ oss3 ]
Dec 07 10:43:55 oss3 pengine: [6630]: notice: short_print: Stopped: [ pingd_data:1 ]
Dec 07 10:43:55 oss3 pengine: [6630]: notice: clone_print: Clone Set: pingd_manage_net
Dec 07 10:43:55 oss3 pengine: [6630]: notice: short_print: Started: [ oss3 ]
Dec 07 10:43:55 oss3 pengine: [6630]: notice: short_print: Stopped: [ pingd_manage:1 ]
Dec 07 10:43:55 oss3 pengine: [6630]: WARN: native_color: Resource ipmi_oss3 cannot run anywhere
Dec 07 10:43:55 oss3 pengine: [6630]: WARN: native_color: Resource pingd_data:1 cannot run anywhere
Dec 07 10:43:55 oss3 pengine: [6630]: WARN: native_color: Resource pingd_manage:1 cannot run anywhere
Dec 07 10:43:55 oss3 pengine: [6630]: WARN: stage6: Scheduling Node oss2 for STONITH
Dec 07 10:43:55 oss3 pengine: [6630]: info: native_start_constraints: Ordering pingd_data:0_start_0 after oss2 recovery
Dec 07 10:43:55 oss3 pengine: [6630]: info: native_start_constraints: Ordering pingd_manage:0_start_0 after oss2 recovery
Dec 07 10:43:55 oss3 pengine: [6630]: notice: LogActions: Leave resource ipmi_oss2 (Started oss3)
Dec 07 10:43:55 oss3 pengine: [6630]: notice: LogActions: Leave resource ipmi_oss3 (Stopped)
Dec 07 10:43:55 oss3 pengine: [6630]: notice: LogActions: Leave resource pingd_data:0(Started oss3)
Dec 07 10:43:56 oss3 pengine: [6630]: notice: LogActions: Leave resource pingd_data:1(Stopped)
Dec 07 10:43:56 oss3 pengine: [6630]: notice: LogActions: Leave resource pingd_manage:0 (Started oss3)
Dec 07 10:43:56 oss3 pengine: [6630]: notice: LogActions: Leave resource pingd_manage:1 (Stopped)
Dec 07 10:43:56 oss3 crmd: [6631]: info: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
Dec 07 10:43:56 oss3 crmd: [6631]: info: unpack_graph: Unpacked transition 2: 4 actions in 4 synapses
Dec 07 10:43:56 oss3 crmd: [6631]: info: do_te_invoke: Processing graph 2 (ref=pe_calc-dc-1291689835-58) derived from /var/lib/pengine/pe-warn-932.bz2
Dec 07 10:43:56 oss3 pengine: [6630]: WARN: process_pe_message: Transition 2: WARNINGs found during PE processing. PEngine Input stored in: /var/lib/pengine/pe-warn-932.bz2
Dec 07 10:43:56 oss3 crmd: [6631]: info: te_pseudo_action: Pseudo action 21 fired and confirmed
Dec 07 10:43:56 oss3 pengine: [6630]: info: process_pe_message: Configuration WARNINGs found during PE processing. Please run "crm_verify -L" to identify issues.
Dec 07 10:43:56 oss3 crmd: [6631]: info: te_fence_node: Executing reboot fencing operation (23) on oss2 (timeout=60000)
Dec 07 10:43:56 oss3 stonithd: [6626]: info: client tengine [pid: 6631] requests a STONITH operation RESET on node oss2
Dec 07 10:43:56 oss3 stonithd: [6626]: info: stonith_operate_locally::2713: sending fencing op RESET for oss2 to ipmi_oss2 (external/ipmi) (pid=9921)
Dec 07 10:43:57 oss3 stonithd: [6626]: info: Succeeded to STONITH the node oss2: optype=RESET. whodoit: oss3
Dec 07 10:43:57 oss3 crmd: [6631]: info: tengine_stonith_callback: call=9921, optype=1, node_name=oss2, result=0, node_list=oss3, action=23:2:0:8e7e177a-8f4c-4e58-8d8c-1e53e1972183
Dec 07 10:43:57 oss3 crmd: [6631]: info: te_pseudo_action: Pseudo action 4 fired and confirmed
Dec 07 10:43:57 oss3 crmd: [6631]: info: te_pseudo_action: Pseudo action 22 fired and confirmed
Dec 07 10:43:57 oss3 crmd: [6631]: info: run_graph: ====================================================
Dec 07 10:43:57 oss3 crmd: [6631]: notice: run_graph: Transition 2 (Complete=4, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pengine/pe-warn-932.bz2): Complete
Dec 07 10:43:57 oss3 crmd: [6631]: info: te_graph_trigger: Transition 2 is now complete
Dec 07 10:43:57 oss3 crmd: [6631]: info: notify_crmd: Transition 2 status: done - <null>
Dec 07 10:43:57 oss3 crmd: [6631]: info: do_state_transition: State transition S_TRANSITION_ENGINE -> S_IDLE [ input=I_TE_SUCCESS cause=C_FSA_INTERNAL origin=notify_crmd ]
Dec 07 10:43:57 oss3 crmd: [6631]: info: do_state_transition: Starting PEngine Recheck Timer
Dec 07 10:43:57 corosync [TOTEM ] A processor failed, forming new configuration.
Dec 07 10:43:57 oss3 cib: [6627]: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='oss2']/lrm (origin=local/crmd/98, version=0.48.3): ok (rc=0)
Dec 07 10:43:57 oss3 crmd: [6631]: info: erase_xpath_callback: Deletion of "//node_state[@uname='oss2']/lrm": ok (rc=0)
Dec 07 10:43:57 oss3 cib: [6627]: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='oss2']/transient_attributes (origin=local/crmd/99, version=0.48.3): ok (rc=0)
Dec 07 10:43:57 oss3 crmd: [6631]: info: erase_xpath_callback: Deletion of "//node_state[@uname='oss2']/transient_attributes": ok (rc=0)
Dec 07 10:43:59 oss3 crmd: [6631]: notice: ais_dispatch: Membership 2524: quorum lost
Dec 07 10:44:00 oss3 crmd: [6631]: info: ais_status_callback: status: oss2 is now lost (was member)
Dec 07 10:44:00 oss3 crmd: [6631]: info: crm_update_peer: Node oss2: id=1712010506 state=lost (new) addr=r(0) ip(10.53.11.102) votes=1 born=2516 seen=2520 proc=00000000000000000000000000013312
Dec 07 10:44:00 oss3 crmd: [6631]: info: erase_node_from_join: Removed node oss2 from join calculations: welcomed=0 itegrated=0 finalized=0 confirmed=1
Dec 07 10:44:00 oss3 crmd: [6631]: info: crm_update_quorum: Updating quorum status to false (call=102)
Dec 07 10:43:59 corosync [pcmk ] notice: pcmk_peer_update: Transitional membership event on ring 2524: memb=1, new=0, lost=1
Dec 07 10:44:00 corosync [pcmk ] info: pcmk_peer_update: memb: oss3 1728787722
Dec 07 10:44:00 corosync [pcmk ] info: pcmk_peer_update: lost: oss2 1712010506
Dec 07 10:44:00 corosync [pcmk ] notice: pcmk_peer_update: Stable membership event on ring 2524: memb=1, new=0, lost=0
Dec 07 10:44:00 corosync [pcmk ] info: pcmk_peer_update: MEMB: oss3 1728787722
Dec 07 10:44:00 corosync [pcmk ] info: ais_mark_unseen_peer_dead: Node oss2 was not seen in the previous transition
Dec 07 10:44:00 oss3 cib: [6627]: notice: ais_dispatch: Membership 2524: quorum lost
Dec 07 10:44:00 corosync [pcmk ] info: update_member: Node 1712010506/oss2 is now: lost
Dec 07 10:44:00 oss3 cib: [6627]: info: crm_update_peer: Node oss2: id=1712010506 state=lost (new) addr=r(0) ip(10.53.11.102) votes=1 born=2516 seen=2520 proc=00000000000000000000000000013312
Dec 07 10:44:00 corosync [pcmk ] info: send_member_notification: Sending membership update 2524 to 2 children
Dec 07 10:44:00 corosync [TOTEM ] A processor joined or left the membership and a new membership was formed.
Dec 07 10:44:00 oss3 cib: [6627]: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/100, version=0.48.3): ok (rc=0)
Dec 07 10:44:00 corosync [MAIN ] Completed service synchronization, ready to provide service.
Dec 07 10:44:00 oss3 cib: [6627]: info: log_data_element: cib:diff: - <cib have-quorum="1" admin_epoch="0" epoch="48" num_updates="4" />
Dec 07 10:44:00 oss3 cib: [6627]: info: log_data_element: cib:diff: + <cib have-quorum="0" admin_epoch="0" epoch="49" num_updates="1" />
Dec 07 10:44:00 oss3 cib: [6627]: info: cib_process_request: Operation complete: op cib_modify for section cib (origin=local/crmd/102, version=0.49.1): ok (rc=0)
Dec 07 10:44:00 oss3 crmd: [6631]: info: crm_ais_dispatch: Setting expected votes to 2
Dec 07 10:44:00 oss3 cib: [9942]: info: write_cib_contents: Archived previous version as /var/lib/heartbeat/crm/cib-34.raw
Dec 07 10:44:00 oss3 crmd: [6631]: info: abort_transition_graph: need_abort:59 - Triggered transition abort (complete=1) : Non-status change
Dec 07 10:44:00 oss3 cib: [6627]: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/104, version=0.49.1): ok (rc=0)
Dec 07 10:44:00 oss3 cib: [9942]: info: write_cib_contents: Wrote version 0.49.0 of the CIB to disk (digest: 9df6fe36fe77af2a282c1f1b23561461)
Dec 07 10:44:00 oss3 crmd: [6631]: info: need_abort: Aborting on change to have-quorum
Dec 07 10:44:00 oss3 cib: [9942]: info: retrieveCib: Reading cluster configuration from: /var/lib/heartbeat/crm/cib.dty6BQ (digest: /var/lib/heartbeat/crm/cib.p4H4Pr)
Dec 07 10:44:00 oss3 crmd: [6631]: info: do_state_transition: State transition S_IDLE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=abort_transition_graph ]
Dec 07 10:44:00 oss3 crmd: [6631]: info: do_state_transition: All 1 cluster nodes are eligible to run resources.
Dec 07 10:44:00 oss3 crmd: [6631]: info: do_pe_invoke: Query 105: Requesting the current CIB: S_POLICY_ENGINE
Dec 07 10:44:00 oss3 crmd: [6631]: info: do_pe_invoke_callback: Invoking the PE: query=105, ref=pe_calc-dc-1291689840-60, seq=2524, quorate=0
Dec 07 10:44:00 oss3 pengine: [6630]: info: unpack_config: Node scores: 'red' = -INFINITY, 'yellow' = 0, 'green' = 0
Dec 07 10:44:00 oss3 pengine: [6630]: WARN: cluster_status: We do not have quorum - fencing and resource management disabled
Dec 07 10:44:00 oss3 pengine: [6630]: info: determine_online_status: Node oss3 is online
Dec 07 10:44:00 oss3 pengine: [6630]: WARN: unpack_status: Node oss1 in status section no longer exists
Dec 07 10:44:00 oss3 pengine: [6630]: info: determine_online_status_fencing: Node oss2 is down
Dec 07 10:44:00 oss3 pengine: [6630]: notice: native_print: ipmi_oss2 (stonith:external/ipmi): Started oss3
Dec 07 10:44:00 oss3 pengine: [6630]: notice: native_print: ipmi_oss3 (stonith:external/ipmi): Stopped
Dec 07 10:44:00 oss3 pengine: [6630]: notice: clone_print: Clone Set: pingd_data_net
Dec 07 10:44:00 oss3 pengine: [6630]: notice: short_print: Started: [ oss3 ]
Dec 07 10:44:00 oss3 pengine: [6630]: notice: short_print: Stopped: [ pingd_data:1 ]
Dec 07 10:44:00 oss3 pengine: [6630]: notice: clone_print: Clone Set: pingd_manage_net
Dec 07 10:44:00 oss3 pengine: [6630]: notice: short_print: Started: [ oss3 ]
Dec 07 10:44:00 oss3 pengine: [6630]: notice: short_print: Stopped: [ pingd_manage:1 ]
Dec 07 10:44:00 oss3 pengine: [6630]: WARN: native_color: Resource ipmi_oss3 cannot run anywhere
Dec 07 10:44:00 oss3 pengine: [6630]: WARN: native_color: Resource pingd_data:1 cannot run anywhere
Dec 07 10:44:00 oss3 pengine: [6630]: WARN: native_color: Resource pingd_manage:1 cannot run anywhere
Dec 07 10:44:00 oss3 pengine: [6630]: notice: LogActions: Leave resource ipmi_oss2 (Started oss3)
Dec 07 10:44:00 oss3 pengine: [6630]: notice: LogActions: Leave resource ipmi_oss3 (Stopped)
Dec 07 10:44:00 oss3 pengine: [6630]: notice: LogActions: Stop resource pingd_data:0 (oss3)
Dec 07 10:44:00 oss3 pengine: [6630]: notice: LogActions: Leave resource pingd_data:1(Stopped)
Dec 07 10:44:00 oss3 pengine: [6630]: notice: LogActions: Stop resource pingd_manage:(oss3)
Dec 07 10:44:00 oss3 pengine: [6630]: notice: LogActions: Leave resource pingd_manage:1 (Stopped)
Dec 07 10:44:00 oss3 crmd: [6631]: info: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
Dec 07 10:44:00 oss3 crmd: [6631]: info: unpack_graph: Unpacked transition 3: 11 actions in 11 synapses
Dec 07 10:44:00 oss3 crmd: [6631]: info: do_te_invoke: Processing graph 3 (ref=pe_calc-dc-1291689840-60) derived from /var/lib/pengine/pe-warn-933.bz2
Dec 07 10:44:00 oss3 pengine: [6630]: WARN: process_pe_message: Transition 3: WARNINGs found during PE processing. PEngine Input stored in: /var/lib/pengine/pe-warn-933.bz2
Dec 07 10:44:00 oss3 crmd: [6631]: info: te_pseudo_action: Pseudo action 13 fired and confirmed
Dec 07 10:44:00 oss3 pengine: [6630]: info: process_pe_message: Configuration WARNINGs found during PE processing. Please run "crm_verify -L" to identify issues.
Dec 07 10:44:00 oss3 crmd: [6631]: info: te_pseudo_action: Pseudo action 19 fired and confirmed
Dec 07 10:44:00 oss3 crmd: [6631]: info: te_rsc_command: Initiating action 9: stop pingd_data:0_stop_0 on oss3 (local)
Dec 07 10:44:00 oss3 lrmd: [6628]: info: cancel_op: operation monitor[8] on ocf::ping::pingd_data:0 for client 6631, its parameters: CRM_meta_interval=[200000] CRM_meta_timeout=[200000] name=[pingd_data] CRM_meta_clone_max=[3] crm_feature_set=[3.0.1] host_list=[ 192.168.10.104] CRM_meta_globally_unique=[false] CRM_meta_name=[monitor] CRM_meta_clone=[0] CRM_meta_clone_node_max=[1] CRM_meta_notify=[false] cancelled
Dec 07 10:44:00 oss3 crmd: [6631]: info: do_lrm_rsc_op: Performing key=9:3:0:8e7e177a-8f4c-4e58-8d8c-1e53e1972183 op=pingd_data:0_stop_0 )
Dec 07 10:44:00 oss3 lrmd: [6628]: info: rsc:pingd_data:0:19: stop
Dec 07 10:44:00 oss3 crmd: [6631]: info: te_rsc_command: Initiating action 15: stop pingd_manage:0_stop_0 on oss3 (local)
Dec 07 10:44:00 oss3 lrmd: [6628]: info: cancel_op: operation monitor[12] on ocf::ping::pingd_manage:0 for client 6631, its parameters: CRM_meta_interval=[200000] CRM_meta_timeout=[200000] name=[pingd_manage] CRM_meta_clone_max=[3] crm_feature_set=[3.0.1] host_list=[ 10.53.11.104] CRM_meta_globally_unique=[false] CRM_meta_name=[monitor] CRM_meta_clone=[0] CRM_meta_clone_node_max=[1] CRM_meta_notify=[false] cancelled
Dec 07 10:44:00 oss3 crmd: [6631]: info: do_lrm_rsc_op: Performing key=15:3:0:8e7e177a-8f4c-4e58-8d8c-1e53e1972183 op=pingd_manage:0_stop_0 )
Dec 07 10:44:00 oss3 lrmd: [6628]: info: rsc:pingd_manage:0:20: stop
Dec 07 10:44:00 oss3 crmd: [6631]: info: process_lrm_event: LRM operation pingd_data:0_monitor_200000 (call=8, status=1, cib-update=0, confirmed=true) Cancelled
Dec 07 10:44:00 oss3 crmd: [6631]: info: process_lrm_event: LRM operation pingd_manage:0_monitor_200000 (call=12, status=1, cib-update=0, confirmed=true) Cancelled
Dec 07 10:44:00 oss3 crmd: [6631]: info: process_lrm_event: LRM operation pingd_data:0_stop_0 (call=19, rc=0, cib-update=106, confirmed=true) ok
Dec 07 10:44:00 oss3 crmd: [6631]: info: process_lrm_event: LRM operation pingd_manage:0_stop_0 (call=20, rc=0, cib-update=107, confirmed=true) ok
Dec 07 10:44:00 oss3 crmd: [6631]: info: match_graph_event: Action pingd_data:0_stop_0 (9) confirmed on oss3 (rc=0)
Dec 07 10:44:00 oss3 crmd: [6631]: info: match_graph_event: Action pingd_manage:0_stop_0 (15) confirmed on oss3 (rc=0)
Dec 07 10:44:00 oss3 crmd: [6631]: info: te_pseudo_action: Pseudo action 14 fired and confirmed
Dec 07 10:44:00 oss3 crmd: [6631]: info: te_pseudo_action: Pseudo action 20 fired and confirmed
Dec 07 10:44:00 oss3 crmd: [6631]: info: te_pseudo_action: Pseudo action 4 fired and confirmed
Dec 07 10:44:00 oss3 crmd: [6631]: info: te_pseudo_action: Pseudo action 11 fired and confirmed
Dec 07 10:44:00 oss3 crmd: [6631]: info: te_pseudo_action: Pseudo action 12 fired and confirmed
Dec 07 10:44:00 oss3 crmd: [6631]: info: te_pseudo_action: Pseudo action 17 fired and confirmed
Dec 07 10:44:00 oss3 crmd: [6631]: info: te_pseudo_action: Pseudo action 18 fired and confirmed