Nov 13 13:44:17 [31444] vm2 corosync notice  [MAIN  ] main.c:main:1171 Corosync Cluster Engine ('2.3.2.7-a911'): started and ready to provide service.
Nov 13 13:44:17 [31444] vm2 corosync info    [MAIN  ] main.c:main:1172 Corosync built-in features: watchdog upstart snmp pie relro bindnow
Nov 13 13:44:17 [31444] vm2 corosync debug   [TOTEM ] totempg.c:totempg_waiting_trans_ack_cb:285 waiting_trans_ack changed to 1
Nov 13 13:44:17 [31444] vm2 corosync debug   [TOTEM ] totemsrp.c:totemsrp_initialize:901 Token Timeout (1000 ms) retransmit timeout (238 ms)
Nov 13 13:44:17 [31444] vm2 corosync debug   [TOTEM ] totemsrp.c:totemsrp_initialize:904 token hold (180 ms) retransmits before loss (4 retrans)
Nov 13 13:44:17 [31444] vm2 corosync debug   [TOTEM ] totemsrp.c:totemsrp_initialize:911 join (50 ms) send_join (0 ms) consensus (1200 ms) merge (200 ms)
Nov 13 13:44:17 [31444] vm2 corosync debug   [TOTEM ] totemsrp.c:totemsrp_initialize:914 downcheck (1000 ms) fail to recv const (2500 msgs)
Nov 13 13:44:17 [31444] vm2 corosync debug   [TOTEM ] totemsrp.c:totemsrp_initialize:916 seqno unchanged const (30 rotations) Maximum network MTU 1401
Nov 13 13:44:17 [31444] vm2 corosync debug   [TOTEM ] totemsrp.c:totemsrp_initialize:920 window size per rotation (50 messages) maximum messages per rotation (17 messages)
Nov 13 13:44:17 [31444] vm2 corosync debug   [TOTEM ] totemsrp.c:totemsrp_initialize:924 missed count const (5 messages)
Nov 13 13:44:17 [31444] vm2 corosync debug   [TOTEM ] totemsrp.c:totemsrp_initialize:927 send threads (0 threads)
Nov 13 13:44:17 [31444] vm2 corosync debug   [TOTEM ] totemsrp.c:totemsrp_initialize:930 RRP token expired timeout (238 ms)
Nov 13 13:44:17 [31444] vm2 corosync debug   [TOTEM ] totemsrp.c:totemsrp_initialize:933 RRP token problem counter (10000 ms)
Nov 13 13:44:17 [31444] vm2 corosync debug   [TOTEM ] totemsrp.c:totemsrp_initialize:936 RRP threshold (10 problem count)
Nov 13 13:44:17 [31444] vm2 corosync debug   [TOTEM ] totemsrp.c:totemsrp_initialize:939 RRP multicast threshold (100 problem count)
Nov 13 13:44:17 [31444] vm2 corosync debug   [TOTEM ] totemsrp.c:totemsrp_initialize:942 RRP automatic recovery check timeout (1000 ms)
Nov 13 13:44:17 [31444] vm2 corosync debug   [TOTEM ] totemsrp.c:totemsrp_initialize:944 RRP mode set to active.
Nov 13 13:44:17 [31444] vm2 corosync debug   [TOTEM ] totemsrp.c:totemsrp_initialize:947 heartbeat_failures_allowed (0)
Nov 13 13:44:17 [31444] vm2 corosync debug   [TOTEM ] totemsrp.c:totemsrp_initialize:949 max_network_delay (50 ms)
Nov 13 13:44:17 [31444] vm2 corosync debug   [TOTEM ] totemsrp.c:totemsrp_initialize:972 HeartBeat is Disabled. To enable set heartbeat_failures_allowed > 0
Nov 13 13:44:17 [31444] vm2 corosync notice  [TOTEM ] totemnet.c:totemnet_instance_initialize:242 Initializing transport (UDP/IP Multicast).
Nov 13 13:44:17 [31444] vm2 corosync notice  [TOTEM ] totemcrypto.c:init_nss:579 Initializing transmit/receive security (NSS) crypto: aes256 hash: sha1
Nov 13 13:44:17 [31444] vm2 corosync notice  [TOTEM ] totemnet.c:totemnet_instance_initialize:242 Initializing transport (UDP/IP Multicast).
Nov 13 13:44:17 [31444] vm2 corosync notice  [TOTEM ] totemcrypto.c:init_nss:579 Initializing transmit/receive security (NSS) crypto: aes256 hash: sha1
Nov 13 13:44:17 [31444] vm2 corosync debug   [TOTEM ] totemudp.c:totemudp_build_sockets_ip:905 Receive multicast socket recv buffer size (320000 bytes).
Nov 13 13:44:17 [31444] vm2 corosync debug   [TOTEM ] totemudp.c:totemudp_build_sockets_ip:911 Transmit multicast socket send buffer size (320000 bytes).
Nov 13 13:44:17 [31444] vm2 corosync debug   [TOTEM ] totemudp.c:totemudp_build_sockets_ip:917 Local receive multicast loop socket recv buffer size (320000 bytes).
Nov 13 13:44:17 [31444] vm2 corosync debug   [TOTEM ] totemudp.c:totemudp_build_sockets_ip:923 Local transmit multicast loop socket send buffer size (320000 bytes).
Nov 13 13:44:17 [31444] vm2 corosync notice  [TOTEM ] totemudp.c:timer_function_netif_check_timeout:670 The network interface [192.168.101.142] is now up.
Nov 13 13:44:17 [31444] vm2 corosync debug   [TOTEM ] totemsrp.c:main_iface_change_fn:4637 Created or loaded sequence id 0.192.168.101.142 for this ring.
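
The TOTEM values logged above (1000 ms token, 1200 ms consensus, 238 ms retransmit, active RRP over two rings, AES-256 cipher with SHA1 hashing, MTU 1401) map directly onto the totem section of corosync.conf; most of them are corosync 2.x defaults. A configuration consistent with this log might look roughly like the sketch below (the bindnetaddr, mcastaddr and mcastport values are placeholders, not taken from the log):

    totem {
        version: 2
        rrp_mode: active
        crypto_cipher: aes256
        crypto_hash: sha1
        interface {
            ringnumber: 0
            bindnetaddr: 192.168.101.0
            mcastaddr: 239.255.1.1        # placeholder
            mcastport: 5405               # placeholder
        }
        interface {
            ringnumber: 1
            bindnetaddr: 192.168.102.0
            mcastaddr: 239.255.2.1        # placeholder
            mcastport: 5405               # placeholder
        }
    }
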
Nov 13 13:44:17 [31444] vm2 corosync notice  [SERV  ] service.c:corosync_service_link_and_init:174 Service engine loaded: corosync configuration map access [0]
Nov 13 13:44:17 [31444] vm2 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_service_init:865 Initializing IPC on cmap [0]
Nov 13 13:44:17 [31444] vm2 corosync debug   [MAIN  ] ipc_glue.c:cs_get_ipc_type:811 No configured qb.ipc_type. Using native ipc
Nov 13 13:44:17 [31444] vm2 corosync info    [QB    ] ipc_setup.c:qb_ipcs_us_publish:377 server name: cmap
Nov 13 13:44:17 [31444] vm2 corosync notice  [SERV  ] service.c:corosync_service_link_and_init:174 Service engine loaded: corosync configuration service [1]
Nov 13 13:44:17 [31444] vm2 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_service_init:865 Initializing IPC on cfg [1]
Nov 13 13:44:17 [31444] vm2 corosync debug   [MAIN  ] ipc_glue.c:cs_get_ipc_type:811 No configured qb.ipc_type. Using native ipc
Nov 13 13:44:17 [31444] vm2 corosync info    [QB    ] ipc_setup.c:qb_ipcs_us_publish:377 server name: cfg
Nov 13 13:44:17 [31444] vm2 corosync notice  [SERV  ] service.c:corosync_service_link_and_init:174 Service engine loaded: corosync cluster closed process group service v1.01 [2]
Nov 13 13:44:17 [31444] vm2 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_service_init:865 Initializing IPC on cpg [2]
Nov 13 13:44:17 [31444] vm2 corosync debug   [MAIN  ] ipc_glue.c:cs_get_ipc_type:811 No configured qb.ipc_type. Using native ipc
Nov 13 13:44:17 [31444] vm2 corosync info    [QB    ] ipc_setup.c:qb_ipcs_us_publish:377 server name: cpg
Nov 13 13:44:17 [31444] vm2 corosync notice  [SERV  ] service.c:corosync_service_link_and_init:174 Service engine loaded: corosync profile loading service [4]
Nov 13 13:44:17 [31444] vm2 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_service_init:851 NOT Initializing IPC on pload [4]
Nov 13 13:44:17 [31444] vm2 corosync warning [WD    ] wd.c:setup_watchdog:631 No Watchdog, try modprobe <a watchdog>
Nov 13 13:44:17 [31444] vm2 corosync info    [WD    ] wd.c:wd_scan_resources:580 no resources configured.
Nov 13 13:44:17 [31444] vm2 corosync notice  [SERV  ] service.c:corosync_service_link_and_init:174 Service engine loaded: corosync watchdog service [7]
Nov 13 13:44:17 [31444] vm2 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_service_init:851 NOT Initializing IPC on wd [7]
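
The watchdog engine loads but finds no watchdog device, hence the "try modprobe <a watchdog>" hint; with no watchdog resources configured these [WD] messages are harmless. If watchdog integration is actually wanted, the usual quick test is to load the generic software watchdog before starting corosync (a hardware-specific module is preferable where one exists):

    modprobe softdog    # generic software watchdog; use the module matching your hardware if available
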
Nov 13 13:44:17 [31444] vm2 corosync notice  [QUORUM] vsf_quorum.c:quorum_exec_init_fn:274 Using quorum provider corosync_votequorum
Nov 13 13:44:17 [31444] vm2 corosync debug   [VOTEQ ] votequorum.c:votequorum_readconfig:967 Reading configuration (runtime: 0)
Nov 13 13:44:17 [31444] vm2 corosync debug   [VOTEQ ] votequorum.c:votequorum_read_nodelist_configuration:886 No nodelist defined or our node is not in the nodelist
Nov 13 13:44:17 [31444] vm2 corosync debug   [VOTEQ ] votequorum.c:recalculate_quorum:851 total_votes=1, expected_votes=3
Nov 13 13:44:17 [31444] vm2 corosync debug   [VOTEQ ] votequorum.c:calculate_quorum:670 node 3232261518 state=1, votes=1, expected=3
Nov 13 13:44:17 [31444] vm2 corosync debug   [VOTEQ ] votequorum.c:decode_flags:587 flags: quorate: No Leaving: No WFA Status: No First: Yes Qdevice: No QdeviceAlive: No QdeviceCastVote: No QdeviceMasterWins: No
Nov 13 13:44:17 [31444] vm2 corosync notice  [SERV  ] service.c:corosync_service_link_and_init:174 Service engine loaded: corosync vote quorum service v1.0 [5]
Nov 13 13:44:17 [31444] vm2 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_service_init:865 Initializing IPC on votequorum [5]
Nov 13 13:44:17 [31444] vm2 corosync debug   [MAIN  ] ipc_glue.c:cs_get_ipc_type:811 No configured qb.ipc_type. Using native ipc
Nov 13 13:44:17 [31444] vm2 corosync info    [QB    ] ipc_setup.c:qb_ipcs_us_publish:377 server name: votequorum
Nov 13 13:44:17 [31444] vm2 corosync notice  [SERV  ] service.c:corosync_service_link_and_init:174 Service engine loaded: corosync cluster quorum service v0.1 [3]
Nov 13 13:44:17 [31444] vm2 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_service_init:865 Initializing IPC on quorum [3]
Nov 13 13:44:17 [31444] vm2 corosync debug   [MAIN  ] ipc_glue.c:cs_get_ipc_type:811 No configured qb.ipc_type. Using native ipc
Nov 13 13:44:17 [31444] vm2 corosync info    [QB    ] ipc_setup.c:qb_ipcs_us_publish:377 server name: quorum
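
All service engines are now registered. The quorum provider is corosync_votequorum with expected_votes=3 and no nodelist ("No nodelist defined or our node is not in the nodelist"), so at this point the node only counts its own vote. A quorum section consistent with these messages would be roughly:

    quorum {
        provider: corosync_votequorum
        expected_votes: 3
    }
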
Nov 13 13:44:17 [31444] vm2 corosync debug   [TOTEM ] totemudp.c:totemudp_build_sockets_ip:905 Receive multicast socket recv buffer size (320000 bytes).
Nov 13 13:44:17 [31444] vm2 corosync debug   [TOTEM ] totemudp.c:totemudp_build_sockets_ip:911 Transmit multicast socket send buffer size (320000 bytes).
Nov 13 13:44:17 [31444] vm2 corosync debug   [TOTEM ] totemudp.c:totemudp_build_sockets_ip:917 Local receive multicast loop socket recv buffer size (320000 bytes).
Nov 13 13:44:17 [31444] vm2 corosync debug   [TOTEM ] totemudp.c:totemudp_build_sockets_ip:923 Local transmit multicast loop socket send buffer size (320000 bytes).
Nov 13 13:44:17 [31444] vm2 corosync notice  [TOTEM ] totemudp.c:timer_function_netif_check_timeout:670 The network interface [192.168.102.142] is now up.
Nov 13 13:44:17 [31444] vm2 corosync debug   [TOTEM ] totemsrp.c:memb_state_gather_enter:2087 entering GATHER state from 15(interface change).
Nov 13 13:44:17 [31444] vm2 corosync debug   [TOTEM ] totemsrp.c:memb_state_commit_token_create:3138 Creating commit token because I am the rep.
Nov 13 13:44:17 [31444] vm2 corosync debug   [TOTEM ] totemsrp.c:old_ring_state_save:1550 Saving state aru 0 high seq received 0
Nov 13 13:44:17 [31444] vm2 corosync debug   [TOTEM ] totemsrp.c:memb_ring_id_set_and_store:3383 Storing new sequence id for ring 4
Nov 13 13:44:17 [31444] vm2 corosync debug   [TOTEM ] totemsrp.c:memb_state_commit_enter:2135 entering COMMIT state.
Nov 13 13:44:17 [31444] vm2 corosync debug   [TOTEM ] totemsrp.c:message_handler_memb_commit_token:4484 got commit token
Nov 13 13:44:17 [31444] vm2 corosync debug   [TOTEM ] totemsrp.c:memb_state_recovery_enter:2172 entering RECOVERY state.
Nov 13 13:44:17 [31444] vm2 corosync debug   [TOTEM ] totemsrp.c:memb_state_recovery_enter:2218 position [0] member 192.168.101.142:
Nov 13 13:44:17 [31444] vm2 corosync debug   [TOTEM ] totemsrp.c:memb_state_recovery_enter:2222 previous ring seq 0 rep 192.168.101.142
Nov 13 13:44:17 [31444] vm2 corosync debug   [TOTEM ] totemsrp.c:memb_state_recovery_enter:2228 aru 0 high delivered 0 received flag 1
Nov 13 13:44:17 [31444] vm2 corosync debug   [TOTEM ] totemsrp.c:memb_state_recovery_enter:2326 Did not need to originate any messages in recovery.
Nov 13 13:44:17 [31444] vm2 corosync debug   [TOTEM ] totemsrp.c:message_handler_memb_commit_token:4484 got commit token
Nov 13 13:44:17 [31444] vm2 corosync debug   [TOTEM ] totemsrp.c:message_handler_memb_commit_token:4537 Sending initial ORF token
Nov 13 13:44:17 [31444] vm2 corosync debug   [TOTEM ] totemsrp.c:message_handler_memb_commit_token:4484 got commit token
Nov 13 13:44:17 [31444] vm2 corosync debug   [TOTEM ] totemsrp.c:message_handler_memb_commit_token:4537 Sending initial ORF token
Nov 13 13:44:17 [31444] vm2 corosync debug   [TOTEM ] totemsrp.c:message_handler_memb_commit_token:4484 got commit token
Nov 13 13:44:17 [31444] vm2 corosync debug   [TOTEM ] totemsrp.c:message_handler_memb_commit_token:4537 Sending initial ORF token
Nov 13 13:44:17 [31444] vm2 corosync debug   [TOTEM ] totemsrp.c:message_handler_memb_commit_token:4484 got commit token
Nov 13 13:44:17 [31444] vm2 corosync debug   [TOTEM ] totemsrp.c:message_handler_memb_commit_token:4537 Sending initial ORF token
Nov 13 13:44:17 [31444] vm2 corosync debug   [TOTEM ] totemsrp.c:message_handler_memb_commit_token:4484 got commit token
Nov 13 13:44:17 [31444] vm2 corosync debug   [TOTEM ] totemsrp.c:message_handler_memb_commit_token:4537 Sending initial ORF token
Nov 13 13:44:17 [31444] vm2 corosync debug   [TOTEM ] totemsrp.c:message_handler_orf_token:3799 token retrans flag is 0 my set retrans flag0 retrans queue empty 1 count 0, aru 0
Nov 13 13:44:17 [31444] vm2 corosync debug   [TOTEM ] totemsrp.c:message_handler_orf_token:3810 install seq 0 aru 0 high seq received 0
Nov 13 13:44:18 [31444] vm2 corosync debug   [TOTEM ] totemsrp.c:message_handler_orf_token:3799 token retrans flag is 0 my set retrans flag0 retrans queue empty 1 count 1, aru 0
Nov 13 13:44:18 [31444] vm2 corosync debug   [TOTEM ] totemsrp.c:message_handler_orf_token:3810 install seq 0 aru 0 high seq received 0
Nov 13 13:44:18 [31444] vm2 corosync debug   [TOTEM ] totemsrp.c:message_handler_orf_token:3799 token retrans flag is 0 my set retrans flag0 retrans queue empty 1 count 2, aru 0
Nov 13 13:44:18 [31444] vm2 corosync debug   [TOTEM ] totemsrp.c:message_handler_orf_token:3810 install seq 0 aru 0 high seq received 0
Nov 13 13:44:18 [31444] vm2 corosync debug   [TOTEM ] totemsrp.c:message_handler_orf_token:3799 token retrans flag is 0 my set retrans flag0 retrans queue empty 1 count 3, aru 0
Nov 13 13:44:18 [31444] vm2 corosync debug   [TOTEM ] totemsrp.c:message_handler_orf_token:3810 install seq 0 aru 0 high seq received 0
Nov 13 13:44:18 [31444] vm2 corosync debug   [TOTEM ] totemsrp.c:message_handler_orf_token:3829 retrans flag count 4 token aru 0 install seq 0 aru 0 0
Nov 13 13:44:18 [31444] vm2 corosync debug   [TOTEM ] totemsrp.c:old_ring_state_reset:1566 Resetting old ring state
Nov 13 13:44:18 [31444] vm2 corosync debug   [TOTEM ] totemsrp.c:deliver_messages_from_recovery_to_regular:1772 recovery to regular 1-0
Nov 13 13:44:18 [31444] vm2 corosync debug   [TOTEM ] totempg.c:totempg_waiting_trans_ack_cb:285 waiting_trans_ack changed to 1
Nov 13 13:44:18 [31444] vm2 corosync debug   [MAIN  ] main.c:member_object_joined:333 Member joined: r(0) ip(192.168.101.142) r(1) ip(192.168.102.142) 
Nov 13 13:44:18 [31444] vm2 corosync debug   [TOTEM ] totemsrp.c:memb_state_operational_enter:2010 entering OPERATIONAL state.
Nov 13 13:44:18 [31444] vm2 corosync notice  [TOTEM ] totemsrp.c:memb_state_operational_enter:2016 A new membership (192.168.101.142:4) was formed. Members joined: -1062705778
Nov 13 13:44:18 [31444] vm2 corosync debug   [VOTEQ ] votequorum.c:message_handler_req_exec_votequorum_nodeinfo:1604 got nodeinfo message from cluster node 3232261518
Nov 13 13:44:18 [31444] vm2 corosync debug   [VOTEQ ] votequorum.c:message_handler_req_exec_votequorum_nodeinfo:1609 nodeinfo message[3232261518]: votes: 1, expected: 3 flags: 8
Nov 13 13:44:18 [31444] vm2 corosync debug   [VOTEQ ] votequorum.c:decode_flags:587 flags: quorate: No Leaving: No WFA Status: No First: Yes Qdevice: No QdeviceAlive: No QdeviceCastVote: No QdeviceMasterWins: No
Nov 13 13:44:18 [31444] vm2 corosync debug   [VOTEQ ] votequorum.c:recalculate_quorum:851 total_votes=1, expected_votes=3
Nov 13 13:44:18 [31444] vm2 corosync debug   [VOTEQ ] votequorum.c:calculate_quorum:670 node 3232261518 state=1, votes=1, expected=3
Nov 13 13:44:18 [31444] vm2 corosync debug   [SYNC  ] sync.c:sync_barrier_handler:232 Committing synchronization for corosync configuration map access
Nov 13 13:44:18 [31444] vm2 corosync debug   [CMAP  ] cmap.c:cmap_sync_activate:386 Single node sync -> no action
Nov 13 13:44:18 [31444] vm2 corosync debug   [CPG   ] cpg.c:downlist_log:776 comparing: sender r(0) ip(192.168.101.142) r(1) ip(192.168.102.142) ; members(old:0 left:0)
Nov 13 13:44:18 [31444] vm2 corosync debug   [CPG   ] cpg.c:downlist_log:776 chosen downlist: sender r(0) ip(192.168.101.142) r(1) ip(192.168.102.142) ; members(old:0 left:0)
Nov 13 13:44:18 [31444] vm2 corosync debug   [SYNC  ] sync.c:sync_barrier_handler:232 Committing synchronization for corosync cluster closed process group service v1.01
Nov 13 13:44:18 [31444] vm2 corosync debug   [VOTEQ ] votequorum.c:decode_flags:587 flags: quorate: No Leaving: No WFA Status: No First: Yes Qdevice: No QdeviceAlive: No QdeviceCastVote: No QdeviceMasterWins: No
Nov 13 13:44:18 [31444] vm2 corosync debug   [VOTEQ ] votequorum.c:message_handler_req_exec_votequorum_nodeinfo:1604 got nodeinfo message from cluster node 3232261518
Nov 13 13:44:18 [31444] vm2 corosync debug   [VOTEQ ] votequorum.c:message_handler_req_exec_votequorum_nodeinfo:1609 nodeinfo message[3232261518]: votes: 1, expected: 3 flags: 8
Nov 13 13:44:18 [31444] vm2 corosync debug   [VOTEQ ] votequorum.c:decode_flags:587 flags: quorate: No Leaving: No WFA Status: No First: Yes Qdevice: No QdeviceAlive: No QdeviceCastVote: No QdeviceMasterWins: No
Nov 13 13:44:18 [31444] vm2 corosync debug   [VOTEQ ] votequorum.c:recalculate_quorum:851 total_votes=1, expected_votes=3
Nov 13 13:44:18 [31444] vm2 corosync debug   [VOTEQ ] votequorum.c:calculate_quorum:670 node 3232261518 state=1, votes=1, expected=3
Nov 13 13:44:18 [31444] vm2 corosync debug   [VOTEQ ] votequorum.c:message_handler_req_exec_votequorum_nodeinfo:1604 got nodeinfo message from cluster node 3232261518
Nov 13 13:44:18 [31444] vm2 corosync debug   [VOTEQ ] votequorum.c:message_handler_req_exec_votequorum_nodeinfo:1609 nodeinfo message[0]: votes: 0, expected: 0 flags: 0
Nov 13 13:44:18 [31444] vm2 corosync debug   [SYNC  ] sync.c:sync_barrier_handler:232 Committing synchronization for corosync vote quorum service v1.0
Nov 13 13:44:18 [31444] vm2 corosync debug   [VOTEQ ] votequorum.c:recalculate_quorum:851 total_votes=1, expected_votes=3
Nov 13 13:44:18 [31444] vm2 corosync debug   [VOTEQ ] votequorum.c:calculate_quorum:670 node 3232261518 state=1, votes=1, expected=3
Nov 13 13:44:18 [31444] vm2 corosync notice  [QUORUM] vsf_quorum.c:log_view_list:132 Members[1]: -1062705778
Nov 13 13:44:18 [31444] vm2 corosync debug   [QUORUM] vsf_quorum.c:send_library_notification:359 sending quorum notification to (nil), length = 52
Nov 13 13:44:18 [31444] vm2 corosync notice  [MAIN  ] main.c:corosync_sync_completed:276 Completed service synchronization, ready to provide service.
Nov 13 13:44:18 [31444] vm2 corosync debug   [TOTEM ] totempg.c:totempg_waiting_trans_ack_cb:285 waiting_trans_ack changed to 0
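
The first (single-node) membership is complete. The member ID that looks negative, -1062705778, is simply the unsigned 32-bit node ID printed through a signed format: with no nodeid configured, corosync derives the ID from the ring0 address, so 192.168.101.142 becomes

    192*2^24 + 168*2^16 + 101*2^8 + 142 = 3232261518
    3232261518 - 2^32                   = -1062705778

The same applies to -1062705779 (.141) and -1062705777 (.143) later in the log.
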
Nov 13 13:44:18 [31444] vm2 corosync debug   [TOTEM ] totemsrp.c:memb_state_gather_enter:2087 entering GATHER state from 9(merge during operational state).
Nov 13 13:44:18 [31444] vm2 corosync debug   [TOTEM ] totemsrp.c:message_handler_memb_commit_token:4484 got commit token
Nov 13 13:44:18 [31444] vm2 corosync debug   [TOTEM ] totemsrp.c:old_ring_state_save:1550 Saving state aru 6 high seq received 6
Nov 13 13:44:18 [31444] vm2 corosync debug   [TOTEM ] totemsrp.c:memb_ring_id_set_and_store:3383 Storing new sequence id for ring 8
Nov 13 13:44:18 [31444] vm2 corosync debug   [TOTEM ] totemsrp.c:memb_state_commit_enter:2135 entering COMMIT state.
Nov 13 13:44:18 [31444] vm2 corosync debug   [TOTEM ] totemsrp.c:message_handler_memb_commit_token:4484 got commit token
Nov 13 13:44:18 [31444] vm2 corosync debug   [TOTEM ] totemsrp.c:message_handler_memb_commit_token:4484 got commit token
Nov 13 13:44:18 [31444] vm2 corosync debug   [TOTEM ] totemsrp.c:memb_state_recovery_enter:2172 entering RECOVERY state.
Nov 13 13:44:18 [31444] vm2 corosync debug   [TOTEM ] totemsrp.c:memb_state_recovery_enter:2214 TRANS [0] member 192.168.101.142:
Nov 13 13:44:18 [31444] vm2 corosync debug   [TOTEM ] totemsrp.c:memb_state_recovery_enter:2218 position [0] member 192.168.101.141:
Nov 13 13:44:18 [31444] vm2 corosync debug   [TOTEM ] totemsrp.c:memb_state_recovery_enter:2222 previous ring seq 4 rep 192.168.101.141
Nov 13 13:44:18 [31444] vm2 corosync debug   [TOTEM ] totemsrp.c:memb_state_recovery_enter:2228 aru 6 high delivered 6 received flag 1
Nov 13 13:44:18 [31444] vm2 corosync debug   [TOTEM ] totemsrp.c:memb_state_recovery_enter:2218 position [1] member 192.168.101.142:
Nov 13 13:44:18 [31444] vm2 corosync debug   [TOTEM ] totemsrp.c:memb_state_recovery_enter:2222 previous ring seq 4 rep 192.168.101.142
Nov 13 13:44:18 [31444] vm2 corosync debug   [TOTEM ] totemsrp.c:memb_state_recovery_enter:2228 aru 6 high delivered 6 received flag 1
Nov 13 13:44:18 [31444] vm2 corosync debug   [TOTEM ] totemsrp.c:memb_state_recovery_enter:2326 Did not need to originate any messages in recovery.
Nov 13 13:44:18 [31444] vm2 corosync debug   [TOTEM ] totemsrp.c:message_handler_memb_commit_token:4484 got commit token
Nov 13 13:44:18 [31444] vm2 corosync debug   [TOTEM ] totemsrp.c:message_handler_memb_commit_token:4484 got commit token
Nov 13 13:44:18 [31444] vm2 corosync debug   [TOTEM ] totemsrp.c:message_handler_memb_commit_token:4484 got commit token
Nov 13 13:44:18 [31444] vm2 corosync debug   [TOTEM ] totemsrp.c:message_handler_orf_token:3799 token retrans flag is 0 my set retrans flag0 retrans queue empty 1 count 0, aru ffffffff
Nov 13 13:44:18 [31444] vm2 corosync debug   [TOTEM ] totemsrp.c:message_handler_orf_token:3810 install seq 0 aru 0 high seq received 0
Nov 13 13:44:18 [31444] vm2 corosync debug   [TOTEM ] totemsrp.c:message_handler_orf_token:3799 token retrans flag is 0 my set retrans flag0 retrans queue empty 1 count 1, aru 0
Nov 13 13:44:18 [31444] vm2 corosync debug   [TOTEM ] totemsrp.c:message_handler_orf_token:3810 install seq 0 aru 0 high seq received 0
Nov 13 13:44:18 [31444] vm2 corosync debug   [TOTEM ] totemsrp.c:message_handler_orf_token:3799 token retrans flag is 0 my set retrans flag0 retrans queue empty 1 count 2, aru 0
Nov 13 13:44:18 [31444] vm2 corosync debug   [TOTEM ] totemsrp.c:message_handler_orf_token:3810 install seq 0 aru 0 high seq received 0
Nov 13 13:44:18 [31444] vm2 corosync debug   [TOTEM ] totemsrp.c:message_handler_orf_token:3799 token retrans flag is 0 my set retrans flag0 retrans queue empty 1 count 3, aru 0
Nov 13 13:44:18 [31444] vm2 corosync debug   [TOTEM ] totemsrp.c:message_handler_orf_token:3810 install seq 0 aru 0 high seq received 0
Nov 13 13:44:18 [31444] vm2 corosync debug   [TOTEM ] totemsrp.c:message_handler_orf_token:3829 retrans flag count 4 token aru 0 install seq 0 aru 0 0
Nov 13 13:44:18 [31444] vm2 corosync debug   [TOTEM ] totemsrp.c:old_ring_state_reset:1566 Resetting old ring state
Nov 13 13:44:18 [31444] vm2 corosync debug   [TOTEM ] totemsrp.c:deliver_messages_from_recovery_to_regular:1772 recovery to regular 1-0
Nov 13 13:44:18 [31444] vm2 corosync debug   [TOTEM ] totempg.c:totempg_waiting_trans_ack_cb:285 waiting_trans_ack changed to 1
Nov 13 13:44:18 [31444] vm2 corosync debug   [MAIN  ] main.c:member_object_joined:333 Member joined: r(0) ip(192.168.101.141) r(1) ip(192.168.102.141) 
Nov 13 13:44:18 [31444] vm2 corosync debug   [TOTEM ] totemsrp.c:memb_state_operational_enter:2010 entering OPERATIONAL state.
Nov 13 13:44:18 [31444] vm2 corosync notice  [TOTEM ] totemsrp.c:memb_state_operational_enter:2016 A new membership (192.168.101.141:8) was formed. Members joined: -1062705779
Nov 13 13:44:18 [31444] vm2 corosync debug   [SYNC  ] sync.c:sync_barrier_handler:232 Committing synchronization for corosync configuration map access
Nov 13 13:44:18 [31444] vm2 corosync debug   [CPG   ] cpg.c:downlist_log:776 comparing: sender r(0) ip(192.168.101.141) r(1) ip(192.168.102.141) ; members(old:1 left:0)
Nov 13 13:44:18 [31444] vm2 corosync debug   [CPG   ] cpg.c:downlist_log:776 comparing: sender r(0) ip(192.168.101.142) r(1) ip(192.168.102.142) ; members(old:1 left:0)
Nov 13 13:44:18 [31444] vm2 corosync debug   [CPG   ] cpg.c:downlist_log:776 chosen downlist: sender r(0) ip(192.168.101.141) r(1) ip(192.168.102.141) ; members(old:1 left:0)
Nov 13 13:44:18 [31444] vm2 corosync debug   [SYNC  ] sync.c:sync_barrier_handler:232 Committing synchronization for corosync cluster closed process group service v1.01
Nov 13 13:44:18 [31444] vm2 corosync debug   [VOTEQ ] votequorum.c:decode_flags:587 flags: quorate: No Leaving: No WFA Status: No First: No Qdevice: No QdeviceAlive: No QdeviceCastVote: No QdeviceMasterWins: No
Nov 13 13:44:18 [31444] vm2 corosync debug   [VOTEQ ] votequorum.c:message_handler_req_exec_votequorum_nodeinfo:1604 got nodeinfo message from cluster node 3232261517
Nov 13 13:44:18 [31444] vm2 corosync debug   [VOTEQ ] votequorum.c:message_handler_req_exec_votequorum_nodeinfo:1609 nodeinfo message[3232261517]: votes: 1, expected: 3 flags: 0
Nov 13 13:44:18 [31444] vm2 corosync debug   [VOTEQ ] votequorum.c:decode_flags:587 flags: quorate: No Leaving: No WFA Status: No First: No Qdevice: No QdeviceAlive: No QdeviceCastVote: No QdeviceMasterWins: No
Nov 13 13:44:18 [31444] vm2 corosync debug   [VOTEQ ] votequorum.c:recalculate_quorum:851 total_votes=2, expected_votes=3
Nov 13 13:44:18 [31444] vm2 corosync debug   [VOTEQ ] votequorum.c:calculate_quorum:670 node 3232261517 state=1, votes=1, expected=3
Nov 13 13:44:18 [31444] vm2 corosync debug   [VOTEQ ] votequorum.c:calculate_quorum:670 node 3232261518 state=1, votes=1, expected=3
Nov 13 13:44:18 [31444] vm2 corosync debug   [VOTEQ ] votequorum.c:get_lowest_node_id:527 lowest node id: -1062705779 us: -1062705778
Nov 13 13:44:18 [31444] vm2 corosync debug   [VOTEQ ] votequorum.c:are_we_quorate:777 quorum regained, resuming activity
Nov 13 13:44:18 [31444] vm2 corosync debug   [VOTEQ ] votequorum.c:message_handler_req_exec_votequorum_nodeinfo:1604 got nodeinfo message from cluster node 3232261517
Nov 13 13:44:18 [31444] vm2 corosync debug   [VOTEQ ] votequorum.c:message_handler_req_exec_votequorum_nodeinfo:1609 nodeinfo message[0]: votes: 0, expected: 0 flags: 0
Nov 13 13:44:18 [31444] vm2 corosync debug   [VOTEQ ] votequorum.c:message_handler_req_exec_votequorum_nodeinfo:1604 got nodeinfo message from cluster node 3232261518
Nov 13 13:44:18 [31444] vm2 corosync debug   [VOTEQ ] votequorum.c:message_handler_req_exec_votequorum_nodeinfo:1609 nodeinfo message[3232261518]: votes: 1, expected: 3 flags: 0
Nov 13 13:44:18 [31444] vm2 corosync debug   [VOTEQ ] votequorum.c:decode_flags:587 flags: quorate: No Leaving: No WFA Status: No First: No Qdevice: No QdeviceAlive: No QdeviceCastVote: No QdeviceMasterWins: No
Nov 13 13:44:18 [31444] vm2 corosync debug   [VOTEQ ] votequorum.c:recalculate_quorum:851 total_votes=2, expected_votes=3
Nov 13 13:44:18 [31444] vm2 corosync debug   [VOTEQ ] votequorum.c:calculate_quorum:670 node 3232261517 state=1, votes=1, expected=3
Nov 13 13:44:18 [31444] vm2 corosync debug   [VOTEQ ] votequorum.c:calculate_quorum:670 node 3232261518 state=1, votes=1, expected=3
Nov 13 13:44:18 [31444] vm2 corosync debug   [VOTEQ ] votequorum.c:get_lowest_node_id:527 lowest node id: -1062705779 us: -1062705778
Nov 13 13:44:18 [31444] vm2 corosync debug   [VOTEQ ] votequorum.c:message_handler_req_exec_votequorum_nodeinfo:1604 got nodeinfo message from cluster node 3232261518
Nov 13 13:44:18 [31444] vm2 corosync debug   [VOTEQ ] votequorum.c:message_handler_req_exec_votequorum_nodeinfo:1609 nodeinfo message[0]: votes: 0, expected: 0 flags: 0
Nov 13 13:44:18 [31444] vm2 corosync debug   [SYNC  ] sync.c:sync_barrier_handler:232 Committing synchronization for corosync vote quorum service v1.0
Nov 13 13:44:18 [31444] vm2 corosync debug   [VOTEQ ] votequorum.c:recalculate_quorum:851 total_votes=2, expected_votes=3
Nov 13 13:44:18 [31444] vm2 corosync debug   [VOTEQ ] votequorum.c:calculate_quorum:670 node 3232261517 state=1, votes=1, expected=3
Nov 13 13:44:18 [31444] vm2 corosync debug   [VOTEQ ] votequorum.c:calculate_quorum:670 node 3232261518 state=1, votes=1, expected=3
Nov 13 13:44:18 [31444] vm2 corosync debug   [VOTEQ ] votequorum.c:get_lowest_node_id:527 lowest node id: -1062705779 us: -1062705778
Nov 13 13:44:18 [31444] vm2 corosync notice  [QUORUM] vsf_quorum.c:quorum_api_set_quorum:148 This node is within the primary component and will provide service.
Nov 13 13:44:18 [31444] vm2 corosync notice  [QUORUM] vsf_quorum.c:log_view_list:132 Members[2]: -1062705779 -1062705778
Nov 13 13:44:18 [31444] vm2 corosync debug   [QUORUM] vsf_quorum.c:send_library_notification:359 sending quorum notification to (nil), length = 56
Nov 13 13:44:18 [31444] vm2 corosync notice  [MAIN  ] main.c:corosync_sync_completed:276 Completed service synchronization, ready to provide service.
Nov 13 13:44:18 [31444] vm2 corosync debug   [TOTEM ] totempg.c:totempg_waiting_trans_ack_cb:285 waiting_trans_ack changed to 0
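
Quorum arithmetic for the two-node view: votequorum needs floor(expected_votes/2) + 1 = floor(3/2) + 1 = 2 votes, so total_votes=2 is exactly enough. That is why the log shows "quorum regained, resuming activity" and "This node is within the primary component and will provide service."
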
Nov 13 13:44:18 [31444] vm2 corosync debug   [QB    ] ipc_setup.c:handle_new_connection:484 IPC credentials authenticated (31445-31451-25)
Nov 13 13:44:18 [31444] vm2 corosync debug   [QB    ] ipc_shm.c:qb_ipcs_shm_connect:295 connecting to client [31451]
Nov 13 13:44:18 [31444] vm2 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:18 [31444] vm2 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:18 [31444] vm2 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:18 [31444] vm2 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_created:272 connection created
Nov 13 13:44:18 [31444] vm2 corosync debug   [QB    ] ipcs.c:qb_ipcs_dispatch_connection_request:757 HUP conn (31445-31451-25)
Nov 13 13:44:18 [31444] vm2 corosync debug   [QB    ] ipcs.c:qb_ipcs_disconnect:605 qb_ipcs_disconnect(31445-31451-25) state:2
Nov 13 13:44:18 [31444] vm2 corosync debug   [QB    ] loop_poll_epoll.c:_del:117 epoll_ctl(del): Bad file descriptor (9)
Nov 13 13:44:18 [31444] vm2 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_closed:417 cs_ipcs_connection_closed() 
Nov 13 13:44:18 [31444] vm2 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_destroyed:390 cs_ipcs_connection_destroyed() 
Nov 13 13:44:18 [31444] vm2 corosync debug   [QB    ] ringbuffer.c:qb_rb_close:299 Free'ing ringbuffer: /dev/shm/qb-cfg-response-31445-31451-25-header
Nov 13 13:44:18 [31444] vm2 corosync debug   [QB    ] ringbuffer.c:qb_rb_close:299 Free'ing ringbuffer: /dev/shm/qb-cfg-event-31445-31451-25-header
Nov 13 13:44:18 [31444] vm2 corosync debug   [QB    ] ringbuffer.c:qb_rb_close:299 Free'ing ringbuffer: /dev/shm/qb-cfg-request-31445-31451-25-header
Nov 13 13:44:18 [31444] vm2 corosync debug   [TOTEM ] totemsrp.c:memb_state_gather_enter:2087 entering GATHER state from 11(merge during join).
Nov 13 13:44:18 [31444] vm2 corosync debug   [TOTEM ] totemsrp.c:message_handler_memb_commit_token:4484 got commit token
Nov 13 13:44:18 [31444] vm2 corosync debug   [TOTEM ] totemsrp.c:old_ring_state_save:1550 Saving state aru a high seq received a
Nov 13 13:44:18 [31444] vm2 corosync debug   [TOTEM ] totemsrp.c:memb_ring_id_set_and_store:3383 Storing new sequence id for ring c
Nov 13 13:44:18 [31444] vm2 corosync debug   [TOTEM ] totemsrp.c:memb_state_commit_enter:2135 entering COMMIT state.
Nov 13 13:44:18 [31444] vm2 corosync debug   [TOTEM ] totemsrp.c:message_handler_memb_commit_token:4484 got commit token
Nov 13 13:44:18 [31444] vm2 corosync debug   [TOTEM ] totemsrp.c:message_handler_memb_commit_token:4484 got commit token
Nov 13 13:44:18 [31444] vm2 corosync debug   [TOTEM ] totemsrp.c:message_handler_memb_commit_token:4484 got commit token
Nov 13 13:44:18 [31444] vm2 corosync debug   [TOTEM ] totemsrp.c:message_handler_memb_commit_token:4484 got commit token
Nov 13 13:44:18 [31444] vm2 corosync debug   [TOTEM ] totemsrp.c:memb_state_recovery_enter:2172 entering RECOVERY state.
Nov 13 13:44:18 [31444] vm2 corosync debug   [TOTEM ] totemsrp.c:memb_state_recovery_enter:2214 TRANS [0] member 192.168.101.141:
Nov 13 13:44:18 [31444] vm2 corosync debug   [TOTEM ] totemsrp.c:memb_state_recovery_enter:2214 TRANS [1] member 192.168.101.142:
Nov 13 13:44:18 [31444] vm2 corosync debug   [TOTEM ] totemsrp.c:memb_state_recovery_enter:2218 position [0] member 192.168.101.141:
Nov 13 13:44:18 [31444] vm2 corosync debug   [TOTEM ] totemsrp.c:memb_state_recovery_enter:2222 previous ring seq 8 rep 192.168.101.141
Nov 13 13:44:18 [31444] vm2 corosync debug   [TOTEM ] totemsrp.c:memb_state_recovery_enter:2228 aru a high delivered a received flag 1
Nov 13 13:44:18 [31444] vm2 corosync debug   [TOTEM ] totemsrp.c:memb_state_recovery_enter:2218 position [1] member 192.168.101.142:
Nov 13 13:44:18 [31444] vm2 corosync debug   [TOTEM ] totemsrp.c:memb_state_recovery_enter:2222 previous ring seq 8 rep 192.168.101.141
Nov 13 13:44:18 [31444] vm2 corosync debug   [TOTEM ] totemsrp.c:memb_state_recovery_enter:2228 aru a high delivered a received flag 1
Nov 13 13:44:18 [31444] vm2 corosync debug   [TOTEM ] totemsrp.c:memb_state_recovery_enter:2218 position [2] member 192.168.101.143:
Nov 13 13:44:18 [31444] vm2 corosync debug   [TOTEM ] totemsrp.c:memb_state_recovery_enter:2222 previous ring seq 4 rep 192.168.101.143
Nov 13 13:44:18 [31444] vm2 corosync debug   [TOTEM ] totemsrp.c:memb_state_recovery_enter:2228 aru 6 high delivered 6 received flag 1
Nov 13 13:44:18 [31444] vm2 corosync debug   [TOTEM ] totemsrp.c:memb_state_recovery_enter:2326 Did not need to originate any messages in recovery.
Nov 13 13:44:18 [31444] vm2 corosync debug   [TOTEM ] totemsrp.c:message_handler_memb_commit_token:4484 got commit token
Nov 13 13:44:18 [31444] vm2 corosync debug   [TOTEM ] totemsrp.c:message_handler_orf_token:3799 token retrans flag is 0 my set retrans flag0 retrans queue empty 1 count 0, aru ffffffff
Nov 13 13:44:18 [31444] vm2 corosync debug   [TOTEM ] totemsrp.c:message_handler_orf_token:3810 install seq 0 aru 0 high seq received 0
Nov 13 13:44:18 [31444] vm2 corosync debug   [TOTEM ] totemsrp.c:message_handler_orf_token:3799 token retrans flag is 0 my set retrans flag0 retrans queue empty 1 count 1, aru 0
Nov 13 13:44:18 [31444] vm2 corosync debug   [TOTEM ] totemsrp.c:message_handler_orf_token:3810 install seq 0 aru 0 high seq received 0
Nov 13 13:44:18 [31444] vm2 corosync debug   [TOTEM ] totemsrp.c:message_handler_orf_token:3799 token retrans flag is 0 my set retrans flag0 retrans queue empty 1 count 2, aru 0
Nov 13 13:44:18 [31444] vm2 corosync debug   [TOTEM ] totemsrp.c:message_handler_orf_token:3810 install seq 0 aru 0 high seq received 0
Nov 13 13:44:18 [31444] vm2 corosync debug   [TOTEM ] totemsrp.c:message_handler_orf_token:3799 token retrans flag is 0 my set retrans flag0 retrans queue empty 1 count 3, aru 0
Nov 13 13:44:18 [31444] vm2 corosync debug   [TOTEM ] totemsrp.c:message_handler_orf_token:3810 install seq 0 aru 0 high seq received 0
Nov 13 13:44:18 [31444] vm2 corosync debug   [TOTEM ] totemsrp.c:message_handler_orf_token:3829 retrans flag count 4 token aru 0 install seq 0 aru 0 0
Nov 13 13:44:18 [31444] vm2 corosync debug   [TOTEM ] totemsrp.c:old_ring_state_reset:1566 Resetting old ring state
Nov 13 13:44:18 [31444] vm2 corosync debug   [TOTEM ] totemsrp.c:deliver_messages_from_recovery_to_regular:1772 recovery to regular 1-0
Nov 13 13:44:18 [31444] vm2 corosync debug   [TOTEM ] totempg.c:totempg_waiting_trans_ack_cb:285 waiting_trans_ack changed to 1
Nov 13 13:44:18 [31444] vm2 corosync debug   [MAIN  ] main.c:member_object_joined:333 Member joined: r(0) ip(192.168.101.143) r(1) ip(192.168.102.143) 
Nov 13 13:44:18 [31444] vm2 corosync debug   [TOTEM ] totemsrp.c:memb_state_operational_enter:2010 entering OPERATIONAL state.
Nov 13 13:44:18 [31444] vm2 corosync notice  [TOTEM ] totemsrp.c:memb_state_operational_enter:2016 A new membership (192.168.101.141:12) was formed. Members joined: -1062705777
Nov 13 13:44:18 [31444] vm2 corosync debug   [SYNC  ] sync.c:sync_barrier_handler:232 Committing synchronization for corosync configuration map access
Nov 13 13:44:18 [31444] vm2 corosync debug   [CMAP  ] cmap.c:cmap_sync_activate:394 Not first sync -> no action
Nov 13 13:44:18 [31444] vm2 corosync debug   [CPG   ] cpg.c:downlist_log:776 comparing: sender r(0) ip(192.168.101.142) r(1) ip(192.168.102.142) ; members(old:2 left:0)
Nov 13 13:44:18 [31444] vm2 corosync debug   [CPG   ] cpg.c:downlist_log:776 comparing: sender r(0) ip(192.168.101.141) r(1) ip(192.168.102.141) ; members(old:2 left:0)
Nov 13 13:44:18 [31444] vm2 corosync debug   [CPG   ] cpg.c:downlist_log:776 comparing: sender r(0) ip(192.168.101.143) r(1) ip(192.168.102.143) ; members(old:1 left:0)
Nov 13 13:44:18 [31444] vm2 corosync debug   [CPG   ] cpg.c:downlist_log:776 chosen downlist: sender r(0) ip(192.168.101.141) r(1) ip(192.168.102.141) ; members(old:2 left:0)
Nov 13 13:44:18 [31444] vm2 corosync debug   [SYNC  ] sync.c:sync_barrier_handler:232 Committing synchronization for corosync cluster closed process group service v1.01
Nov 13 13:44:18 [31444] vm2 corosync debug   [VOTEQ ] votequorum.c:decode_flags:587 flags: quorate: Yes Leaving: No WFA Status: No First: No Qdevice: No QdeviceAlive: No QdeviceCastVote: No QdeviceMasterWins: No
Nov 13 13:44:18 [31444] vm2 corosync debug   [VOTEQ ] votequorum.c:message_handler_req_exec_votequorum_nodeinfo:1604 got nodeinfo message from cluster node 3232261518
Nov 13 13:44:18 [31444] vm2 corosync debug   [VOTEQ ] votequorum.c:message_handler_req_exec_votequorum_nodeinfo:1609 nodeinfo message[3232261518]: votes: 1, expected: 3 flags: 1
Nov 13 13:44:18 [31444] vm2 corosync debug   [VOTEQ ] votequorum.c:decode_flags:587 flags: quorate: Yes Leaving: No WFA Status: No First: No Qdevice: No QdeviceAlive: No QdeviceCastVote: No QdeviceMasterWins: No
Nov 13 13:44:18 [31444] vm2 corosync debug   [VOTEQ ] votequorum.c:recalculate_quorum:851 total_votes=2, expected_votes=3
Nov 13 13:44:18 [31444] vm2 corosync debug   [VOTEQ ] votequorum.c:calculate_quorum:670 node 3232261517 state=1, votes=1, expected=3
Nov 13 13:44:18 [31444] vm2 corosync debug   [VOTEQ ] votequorum.c:calculate_quorum:670 node 3232261518 state=1, votes=1, expected=3
Nov 13 13:44:18 [31444] vm2 corosync debug   [VOTEQ ] votequorum.c:get_lowest_node_id:527 lowest node id: -1062705779 us: -1062705778
Nov 13 13:44:18 [31444] vm2 corosync debug   [VOTEQ ] votequorum.c:message_handler_req_exec_votequorum_nodeinfo:1604 got nodeinfo message from cluster node 3232261518
Nov 13 13:44:18 [31444] vm2 corosync debug   [VOTEQ ] votequorum.c:message_handler_req_exec_votequorum_nodeinfo:1609 nodeinfo message[0]: votes: 0, expected: 0 flags: 0
Nov 13 13:44:18 [31444] vm2 corosync debug   [VOTEQ ] votequorum.c:message_handler_req_exec_votequorum_nodeinfo:1604 got nodeinfo message from cluster node 3232261519
Nov 13 13:44:18 [31444] vm2 corosync debug   [VOTEQ ] votequorum.c:message_handler_req_exec_votequorum_nodeinfo:1609 nodeinfo message[3232261519]: votes: 1, expected: 3 flags: 0
Nov 13 13:44:18 [31444] vm2 corosync debug   [VOTEQ ] votequorum.c:decode_flags:587 flags: quorate: No Leaving: No WFA Status: No First: No Qdevice: No QdeviceAlive: No QdeviceCastVote: No QdeviceMasterWins: No
Nov 13 13:44:18 [31444] vm2 corosync debug   [VOTEQ ] votequorum.c:recalculate_quorum:851 total_votes=3, expected_votes=3
Nov 13 13:44:18 [31444] vm2 corosync debug   [VOTEQ ] votequorum.c:calculate_quorum:670 node 3232261517 state=1, votes=1, expected=3
Nov 13 13:44:18 [31444] vm2 corosync debug   [VOTEQ ] votequorum.c:calculate_quorum:670 node 3232261518 state=1, votes=1, expected=3
Nov 13 13:44:18 [31444] vm2 corosync debug   [VOTEQ ] votequorum.c:calculate_quorum:670 node 3232261519 state=1, votes=1, expected=3
Nov 13 13:44:18 [31444] vm2 corosync debug   [VOTEQ ] votequorum.c:get_lowest_node_id:527 lowest node id: -1062705779 us: -1062705778
Nov 13 13:44:18 [31444] vm2 corosync debug   [VOTEQ ] votequorum.c:message_handler_req_exec_votequorum_nodeinfo:1604 got nodeinfo message from cluster node 3232261519
Nov 13 13:44:18 [31444] vm2 corosync debug   [VOTEQ ] votequorum.c:message_handler_req_exec_votequorum_nodeinfo:1609 nodeinfo message[0]: votes: 0, expected: 0 flags: 0
Nov 13 13:44:18 [31444] vm2 corosync debug   [VOTEQ ] votequorum.c:message_handler_req_exec_votequorum_nodeinfo:1604 got nodeinfo message from cluster node 3232261517
Nov 13 13:44:18 [31444] vm2 corosync debug   [VOTEQ ] votequorum.c:message_handler_req_exec_votequorum_nodeinfo:1609 nodeinfo message[3232261517]: votes: 1, expected: 3 flags: 1
Nov 13 13:44:18 [31444] vm2 corosync debug   [VOTEQ ] votequorum.c:decode_flags:587 flags: quorate: Yes Leaving: No WFA Status: No First: No Qdevice: No QdeviceAlive: No QdeviceCastVote: No QdeviceMasterWins: No
Nov 13 13:44:18 [31444] vm2 corosync debug   [VOTEQ ] votequorum.c:recalculate_quorum:851 total_votes=3, expected_votes=3
Nov 13 13:44:18 [31444] vm2 corosync debug   [VOTEQ ] votequorum.c:calculate_quorum:670 node 3232261517 state=1, votes=1, expected=3
Nov 13 13:44:18 [31444] vm2 corosync debug   [VOTEQ ] votequorum.c:calculate_quorum:670 node 3232261518 state=1, votes=1, expected=3
Nov 13 13:44:18 [31444] vm2 corosync debug   [VOTEQ ] votequorum.c:calculate_quorum:670 node 3232261519 state=1, votes=1, expected=3
Nov 13 13:44:18 [31444] vm2 corosync debug   [VOTEQ ] votequorum.c:get_lowest_node_id:527 lowest node id: -1062705779 us: -1062705778
Nov 13 13:44:18 [31444] vm2 corosync debug   [VOTEQ ] votequorum.c:message_handler_req_exec_votequorum_nodeinfo:1604 got nodeinfo message from cluster node 3232261517
Nov 13 13:44:18 [31444] vm2 corosync debug   [VOTEQ ] votequorum.c:message_handler_req_exec_votequorum_nodeinfo:1609 nodeinfo message[0]: votes: 0, expected: 0 flags: 0
Nov 13 13:44:18 [31444] vm2 corosync debug   [SYNC  ] sync.c:sync_barrier_handler:232 Committing synchronization for corosync vote quorum service v1.0
Nov 13 13:44:18 [31444] vm2 corosync debug   [VOTEQ ] votequorum.c:recalculate_quorum:851 total_votes=3, expected_votes=3
Nov 13 13:44:18 [31444] vm2 corosync debug   [VOTEQ ] votequorum.c:calculate_quorum:670 node 3232261517 state=1, votes=1, expected=3
Nov 13 13:44:18 [31444] vm2 corosync debug   [VOTEQ ] votequorum.c:calculate_quorum:670 node 3232261518 state=1, votes=1, expected=3
Nov 13 13:44:18 [31444] vm2 corosync debug   [VOTEQ ] votequorum.c:calculate_quorum:670 node 3232261519 state=1, votes=1, expected=3
Nov 13 13:44:18 [31444] vm2 corosync debug   [VOTEQ ] votequorum.c:get_lowest_node_id:527 lowest node id: -1062705779 us: -1062705778
Nov 13 13:44:18 [31444] vm2 corosync notice  [QUORUM] vsf_quorum.c:log_view_list:132 Members[3]: -1062705779 -1062705778 -1062705777
Nov 13 13:44:18 [31444] vm2 corosync debug   [QUORUM] vsf_quorum.c:send_library_notification:359 sending quorum notification to (nil), length = 60
Nov 13 13:44:18 [31444] vm2 corosync notice  [MAIN  ] main.c:corosync_sync_completed:276 Completed service synchronization, ready to provide service.
Nov 13 13:44:18 [31444] vm2 corosync debug   [TOTEM ] totempg.c:totempg_waiting_trans_ack_cb:285 waiting_trans_ack changed to 0
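
All three expected votes are now present (total_votes=3, expected_votes=3) and ring 192.168.101.141:12 has three members. On a live system this state can be cross-checked with the standard corosync tools (commands only; output omitted here):

    corosync-quorumtool -s    # vote and quorum summary
    corosync-cfgtool -s       # ring status for both interfaces
    corosync-cmapctl          # dump the runtime configuration map
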
Nov 13 13:44:20 [31455] vm2 pacemakerd: (   logging.c:761   )    info: crm_log_init: 	Changed active directory to /var/lib/heartbeat/cores/root
Nov 13 13:44:20 [31455] vm2 pacemakerd: ( pacemaker.c:896   )   debug: main: 	Checking for old instances of pacemakerd
Nov 13 13:44:20 [31455] vm2 pacemakerd: (       ipc.c:781   )    info: crm_ipc_connect: 	Could not establish pacemakerd connection: Connection refused (111)
Nov 13 13:44:20 [31444] vm2 corosync debug   [QB    ] ipc_setup.c:handle_new_connection:484 IPC credentials authenticated (31445-31455-25)
Nov 13 13:44:20 [31444] vm2 corosync debug   [QB    ] ipc_shm.c:qb_ipcs_shm_connect:295 connecting to client [31455]
Nov 13 13:44:20 [31444] vm2 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:20 [31444] vm2 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:20 [31444] vm2 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:20 [31455] vm2 pacemakerd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:20 [31455] vm2 pacemakerd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:20 [31455] vm2 pacemakerd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:20 [31455] vm2 pacemakerd: (   cluster.c:526   )   debug: get_cluster_type: 	Testing with Corosync
Nov 13 13:44:20 [31444] vm2 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_created:272 connection created
Nov 13 13:44:20 [31444] vm2 corosync debug   [CMAP  ] cmap.c:cmap_lib_init_fn:306 lib_init_fn: conn=0x7f5148303a50
Nov 13 13:44:20 [31444] vm2 corosync debug   [QB    ] ipc_setup.c:handle_new_connection:484 IPC credentials authenticated (31445-31455-26)
Nov 13 13:44:20 [31444] vm2 corosync debug   [QB    ] ipc_shm.c:qb_ipcs_shm_connect:295 connecting to client [31455]
Nov 13 13:44:20 [31444] vm2 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:20 [31444] vm2 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:20 [31444] vm2 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:20 [31455] vm2 pacemakerd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:20 [31455] vm2 pacemakerd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:20 [31455] vm2 pacemakerd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:20 [31455] vm2 pacemakerd: (      ipcc.c:378   )   debug: qb_ipcc_disconnect: 	qb_ipcc_disconnect()
Nov 13 13:44:20 [31455] vm2 pacemakerd: (ringbuffer.c:302   )   debug: qb_rb_close: 	Closing ringbuffer: /dev/shm/qb-cmap-request-31445-31455-26-header
Nov 13 13:44:20 [31455] vm2 pacemakerd: (ringbuffer.c:302   )   debug: qb_rb_close: 	Closing ringbuffer: /dev/shm/qb-cmap-response-31445-31455-26-header
Nov 13 13:44:20 [31455] vm2 pacemakerd: (ringbuffer.c:302   )   debug: qb_rb_close: 	Closing ringbuffer: /dev/shm/qb-cmap-event-31445-31455-26-header
Nov 13 13:44:20 [31455] vm2 pacemakerd: (   cluster.c:573   )    info: get_cluster_type: 	Detected an active 'corosync' cluster
Nov 13 13:44:20 [31455] vm2 pacemakerd: (  corosync.c:326   )    info: mcp_read_config: 	Reading configure for stack: corosync
Nov 13 13:44:20 [31444] vm2 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_created:272 connection created
Nov 13 13:44:20 [31444] vm2 corosync debug   [CMAP  ] cmap.c:cmap_lib_init_fn:306 lib_init_fn: conn=0x7f5148305490
Nov 13 13:44:20 [31444] vm2 corosync debug   [CPG   ] cpg.c:message_handler_req_exec_cpg_procjoin:1260 got procjoin message from cluster node -1062705779 (r(0) ip(192.168.101.141) r(1) ip(192.168.102.141) ) for pid 15874
Nov 13 13:44:20 [31444] vm2 corosync debug   [QB    ] ipcs.c:qb_ipcs_dispatch_connection_request:757 HUP conn (31445-31455-26)
Nov 13 13:44:20 [31444] vm2 corosync debug   [QB    ] ipcs.c:qb_ipcs_disconnect:605 qb_ipcs_disconnect(31445-31455-26) state:2
Nov 13 13:44:20 [31444] vm2 corosync debug   [QB    ] loop_poll_epoll.c:_del:117 epoll_ctl(del): Bad file descriptor (9)
Nov 13 13:44:20 [31444] vm2 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_closed:417 cs_ipcs_connection_closed() 
Nov 13 13:44:20 [31444] vm2 corosync debug   [CMAP  ] cmap.c:cmap_lib_exit_fn:325 exit_fn for conn=0x7f5148305490
Nov 13 13:44:20 [31444] vm2 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_destroyed:390 cs_ipcs_connection_destroyed() 
Nov 13 13:44:20 [31444] vm2 corosync debug   [QB    ] ringbuffer.c:qb_rb_close:299 Free'ing ringbuffer: /dev/shm/qb-cmap-response-31445-31455-26-header
Nov 13 13:44:20 [31444] vm2 corosync debug   [QB    ] ringbuffer.c:qb_rb_close:299 Free'ing ringbuffer: /dev/shm/qb-cmap-event-31445-31455-26-header
Nov 13 13:44:20 [31444] vm2 corosync debug   [QB    ] ringbuffer.c:qb_rb_close:299 Free'ing ringbuffer: /dev/shm/qb-cmap-request-31445-31455-26-header
Nov 13 13:44:20 [31455] vm2 pacemakerd: (  corosync.c:426   )  notice: mcp_read_config: 	Configured corosync to accept connections from group 492: OK (1)
Nov 13 13:44:20 [31455] vm2 pacemakerd: (      ipcc.c:378   )   debug: qb_ipcc_disconnect: 	qb_ipcc_disconnect()
Nov 13 13:44:20 [31455] vm2 pacemakerd: (ringbuffer.c:302   )   debug: qb_rb_close: 	Closing ringbuffer: /dev/shm/qb-cmap-request-31445-31455-25-header
Nov 13 13:44:20 [31455] vm2 pacemakerd: (ringbuffer.c:302   )   debug: qb_rb_close: 	Closing ringbuffer: /dev/shm/qb-cmap-response-31445-31455-25-header
Nov 13 13:44:20 [31455] vm2 pacemakerd: (ringbuffer.c:302   )   debug: qb_rb_close: 	Closing ringbuffer: /dev/shm/qb-cmap-event-31445-31455-25-header
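
The "accept connections from group 492" message is pacemakerd granting its group (492 is typically the haclient GID) access to corosync IPC by setting a uidgid entry in cmap at runtime; the static equivalent in corosync.conf would be along these lines (the numeric GID is taken from the log, everything else is an assumption):

    uidgid {
        gid: 492
    }
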
Nov 13 13:44:20 [31455] vm2 pacemakerd: (   logging.c:314   )  notice: crm_add_logfile: 	Additional logging available in /var/log/ha-debug
Nov 13 13:44:20 [31455] vm2 pacemakerd: ( pacemaker.c:931   )  notice: main: 	Starting Pacemaker 1.1.10 (Build: 2383f6c):  ncurses libqb-logging libqb-ipc lha-fencing nagios  corosync-native snmp
Nov 13 13:44:20 [31455] vm2 pacemakerd: ( pacemaker.c:941   )    info: main: 	Maximum core file size is: 18446744073709551615
Nov 13 13:44:20 [31455] vm2 pacemakerd: ( ipc_setup.c:377   )    info: qb_ipcs_us_publish: 	server name: pacemakerd
Nov 13 13:44:20 [31444] vm2 corosync debug   [QB    ] ipcs.c:qb_ipcs_dispatch_connection_request:757 HUP conn (31445-31455-25)
Nov 13 13:44:20 [31444] vm2 corosync debug   [QB    ] ipcs.c:qb_ipcs_disconnect:605 qb_ipcs_disconnect(31445-31455-25) state:2
Nov 13 13:44:20 [31444] vm2 corosync debug   [QB    ] loop_poll_epoll.c:_del:117 epoll_ctl(del): Bad file descriptor (9)
Nov 13 13:44:20 [31444] vm2 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_closed:417 cs_ipcs_connection_closed() 
Nov 13 13:44:20 [31444] vm2 corosync debug   [CMAP  ] cmap.c:cmap_lib_exit_fn:325 exit_fn for conn=0x7f5148303a50
Nov 13 13:44:20 [31444] vm2 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_destroyed:390 cs_ipcs_connection_destroyed() 
Nov 13 13:44:20 [31444] vm2 corosync debug   [QB    ] ringbuffer.c:qb_rb_close:299 Free'ing ringbuffer: /dev/shm/qb-cmap-response-31445-31455-25-header
Nov 13 13:44:20 [31444] vm2 corosync debug   [QB    ] ringbuffer.c:qb_rb_close:299 Free'ing ringbuffer: /dev/shm/qb-cmap-event-31445-31455-25-header
Nov 13 13:44:20 [31444] vm2 corosync debug   [QB    ] ringbuffer.c:qb_rb_close:299 Free'ing ringbuffer: /dev/shm/qb-cmap-request-31445-31455-25-header
Nov 13 13:44:20 [31444] vm2 corosync debug   [QB    ] ipc_setup.c:handle_new_connection:484 IPC credentials authenticated (31445-31455-25)
Nov 13 13:44:20 [31444] vm2 corosync debug   [QB    ] ipc_shm.c:qb_ipcs_shm_connect:295 connecting to client [31455]
Nov 13 13:44:20 [31444] vm2 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:20 [31444] vm2 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:20 [31444] vm2 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:20 [31455] vm2 pacemakerd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:20 [31455] vm2 pacemakerd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:20 [31455] vm2 pacemakerd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:20 [31444] vm2 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_created:272 connection created
Nov 13 13:44:20 [31455] vm2 pacemakerd: (  corosync.c:142   )   debug: cluster_connect_cfg: 	Our nodeid: -1062705778
Nov 13 13:44:20 [31444] vm2 corosync debug   [QB    ] ipc_setup.c:handle_new_connection:484 IPC credentials authenticated (31445-31455-26)
Nov 13 13:44:20 [31444] vm2 corosync debug   [QB    ] ipc_shm.c:qb_ipcs_shm_connect:295 connecting to client [31455]
Nov 13 13:44:20 [31444] vm2 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:20 [31444] vm2 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:20 [31444] vm2 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:20 [31455] vm2 pacemakerd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:20 [31455] vm2 pacemakerd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:20 [31455] vm2 pacemakerd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:20 [31444] vm2 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_created:272 connection created
Nov 13 13:44:20 [31444] vm2 corosync debug   [CPG   ] cpg.c:cpg_lib_init_fn:1459 lib_init_fn: conn=0x7f5148304830, cpd=0x7f5148304f84
Nov 13 13:44:20 [31455] vm2 pacemakerd: (       cpg.c:110   )   debug: get_local_nodeid: 	Local nodeid is 3232261518
Nov 13 13:44:20 [31455] vm2 pacemakerd: (membership.c:399   )    info: crm_get_peer: 	Created entry 3233eac7-ee3c-4168-b5a5-d00bb986e0ef/0x1d8e030 for node (null)/3232261518 (1 total)
Nov 13 13:44:20 [31455] vm2 pacemakerd: (membership.c:432   )    info: crm_get_peer: 	Node 3232261518 has uuid 3232261518
Nov 13 13:44:20 [31455] vm2 pacemakerd: (membership.c:550   )    info: crm_update_peer_proc: 	cluster_connect_cpg: Node (null)[3232261518] - corosync-cpg is now online
Nov 13 13:44:20 [31455] vm2 pacemakerd: (  corosync.c:255   )   debug: cluster_connect_quorum: 	Configuring Pacemaker to obtain quorum from Corosync
Nov 13 13:44:20 [31444] vm2 corosync debug   [CPG   ] cpg.c:message_handler_req_exec_cpg_procjoin:1260 got procjoin message from cluster node -1062705778 (r(0) ip(192.168.101.142) r(1) ip(192.168.102.142) ) for pid 31455
Nov 13 13:44:20 [31444] vm2 corosync debug   [QB    ] ipc_setup.c:handle_new_connection:484 IPC credentials authenticated (31445-31455-27)
Nov 13 13:44:20 [31444] vm2 corosync debug   [QB    ] ipc_shm.c:qb_ipcs_shm_connect:295 connecting to client [31455]
Nov 13 13:44:20 [31444] vm2 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:20 [31444] vm2 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:20 [31444] vm2 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:20 [31455] vm2 pacemakerd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:20 [31455] vm2 pacemakerd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:20 [31455] vm2 pacemakerd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:20 [31444] vm2 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_created:272 connection created
Nov 13 13:44:20 [31444] vm2 corosync debug   [QUORUM] vsf_quorum.c:quorum_lib_init_fn:316 lib_init_fn: conn=0x7f514850ce30
Nov 13 13:44:20 [31444] vm2 corosync debug   [QUORUM] vsf_quorum.c:message_handler_req_lib_quorum_gettype:471 got quorum_type request on 0x7f514850ce30
Nov 13 13:44:20 [31444] vm2 corosync debug   [QUORUM] vsf_quorum.c:message_handler_req_lib_quorum_getquorate:395 got quorate request on 0x7f514850ce30
Nov 13 13:44:20 [31455] vm2 pacemakerd: (  corosync.c:273   )  notice: cluster_connect_quorum: 	Quorum acquired
Nov 13 13:44:20 [31444] vm2 corosync debug   [QUORUM] vsf_quorum.c:message_handler_req_lib_quorum_trackstart:412 got trackstart request on 0x7f514850ce30
Nov 13 13:44:20 [31444] vm2 corosync debug   [QUORUM] vsf_quorum.c:message_handler_req_lib_quorum_trackstart:420 sending initial status to 0x7f514850ce30
Nov 13 13:44:20 [31444] vm2 corosync debug   [QUORUM] vsf_quorum.c:send_library_notification:359 sending quorum notification to 0x7f514850ce30, length = 60
Nov 13 13:44:20 [31444] vm2 corosync debug   [QB    ] ipc_setup.c:handle_new_connection:484 IPC credentials authenticated (31445-31455-28)
Nov 13 13:44:20 [31444] vm2 corosync debug   [QB    ] ipc_shm.c:qb_ipcs_shm_connect:295 connecting to client [31455]
Nov 13 13:44:20 [31444] vm2 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:20 [31444] vm2 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:20 [31444] vm2 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:20 [31455] vm2 pacemakerd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:20 [31455] vm2 pacemakerd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:20 [31455] vm2 pacemakerd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:20 [31444] vm2 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_created:272 connection created
Nov 13 13:44:20 [31444] vm2 corosync debug   [CMAP  ] cmap.c:cmap_lib_init_fn:306 lib_init_fn: conn=0x7f514850e8d0
Nov 13 13:44:20 [31455] vm2 pacemakerd: (      ipcc.c:378   )   debug: qb_ipcc_disconnect: 	qb_ipcc_disconnect()
Nov 13 13:44:20 [31455] vm2 pacemakerd: (ringbuffer.c:302   )   debug: qb_rb_close: 	Closing ringbuffer: /dev/shm/qb-cmap-request-31445-31455-28-header
Nov 13 13:44:20 [31455] vm2 pacemakerd: (ringbuffer.c:302   )   debug: qb_rb_close: 	Closing ringbuffer: /dev/shm/qb-cmap-response-31445-31455-28-header
Nov 13 13:44:20 [31455] vm2 pacemakerd: (ringbuffer.c:302   )   debug: qb_rb_close: 	Closing ringbuffer: /dev/shm/qb-cmap-event-31445-31455-28-header
Nov 13 13:44:20 [31444] vm2 corosync debug   [QB    ] ipcs.c:qb_ipcs_dispatch_connection_request:757 HUP conn (31445-31455-28)
Nov 13 13:44:20 [31444] vm2 corosync debug   [QB    ] ipcs.c:qb_ipcs_disconnect:605 qb_ipcs_disconnect(31445-31455-28) state:2
Nov 13 13:44:20 [31444] vm2 corosync debug   [QB    ] loop_poll_epoll.c:_del:117 epoll_ctl(del): Bad file descriptor (9)
Nov 13 13:44:20 [31444] vm2 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_closed:417 cs_ipcs_connection_closed() 
Nov 13 13:44:20 [31444] vm2 corosync debug   [CMAP  ] cmap.c:cmap_lib_exit_fn:325 exit_fn for conn=0x7f514850e8d0
Nov 13 13:44:20 [31444] vm2 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_destroyed:390 cs_ipcs_connection_destroyed() 
Nov 13 13:44:20 [31444] vm2 corosync debug   [QB    ] ringbuffer.c:qb_rb_close:299 Free'ing ringbuffer: /dev/shm/qb-cmap-response-31445-31455-28-header
Nov 13 13:44:20 [31444] vm2 corosync debug   [QB    ] ringbuffer.c:qb_rb_close:299 Free'ing ringbuffer: /dev/shm/qb-cmap-event-31445-31455-28-header
Nov 13 13:44:20 [31444] vm2 corosync debug   [QB    ] ringbuffer.c:qb_rb_close:299 Free'ing ringbuffer: /dev/shm/qb-cmap-request-31445-31455-28-header
Nov 13 13:44:20 [31444] vm2 corosync debug   [QB    ] ipc_setup.c:handle_new_connection:484 IPC credentials authenticated (31445-31455-28)
Nov 13 13:44:20 [31444] vm2 corosync debug   [QB    ] ipc_shm.c:qb_ipcs_shm_connect:295 connecting to client [31455]
Nov 13 13:44:20 [31444] vm2 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:20 [31444] vm2 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:20 [31444] vm2 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:20 [31455] vm2 pacemakerd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:20 [31455] vm2 pacemakerd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:20 [31455] vm2 pacemakerd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:20 [31444] vm2 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_created:272 connection created
Nov 13 13:44:20 [31444] vm2 corosync debug   [CMAP  ] cmap.c:cmap_lib_init_fn:306 lib_init_fn: conn=0x7f514850e8d0
Nov 13 13:44:20 [31455] vm2 pacemakerd: (      ipcc.c:378   )   debug: qb_ipcc_disconnect: 	qb_ipcc_disconnect()
Nov 13 13:44:20 [31455] vm2 pacemakerd: (ringbuffer.c:302   )   debug: qb_rb_close: 	Closing ringbuffer: /dev/shm/qb-cmap-request-31445-31455-28-header
Nov 13 13:44:20 [31455] vm2 pacemakerd: (ringbuffer.c:302   )   debug: qb_rb_close: 	Closing ringbuffer: /dev/shm/qb-cmap-response-31445-31455-28-header
Nov 13 13:44:20 [31455] vm2 pacemakerd: (ringbuffer.c:302   )   debug: qb_rb_close: 	Closing ringbuffer: /dev/shm/qb-cmap-event-31445-31455-28-header
Nov 13 13:44:20 [31455] vm2 pacemakerd: (  corosync.c:134   )  notice: corosync_node_name: 	Unable to get node name for nodeid 3232261518
Nov 13 13:44:20 [31455] vm2 pacemakerd: (   cluster.c:338   )  notice: get_node_name: 	Defaulting to uname -n for the local corosync node name
Nov 13 13:44:20 [31455] vm2 pacemakerd: (membership.c:404   )    info: crm_get_peer: 	Node 3232261518 is now known as vm2
Nov 13 13:44:20 [31444] vm2 corosync debug   [QB    ] ipcs.c:qb_ipcs_dispatch_connection_request:757 HUP conn (31445-31455-28)
Nov 13 13:44:20 [31444] vm2 corosync debug   [QB    ] ipcs.c:qb_ipcs_disconnect:605 qb_ipcs_disconnect(31445-31455-28) state:2
Nov 13 13:44:20 [31444] vm2 corosync debug   [QB    ] loop_poll_epoll.c:_del:117 epoll_ctl(del): Bad file descriptor (9)
Nov 13 13:44:20 [31444] vm2 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_closed:417 cs_ipcs_connection_closed() 
Nov 13 13:44:20 [31444] vm2 corosync debug   [CMAP  ] cmap.c:cmap_lib_exit_fn:325 exit_fn for conn=0x7f514850e8d0
Nov 13 13:44:20 [31444] vm2 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_destroyed:390 cs_ipcs_connection_destroyed() 
Nov 13 13:44:20 [31444] vm2 corosync debug   [QB    ] ringbuffer.c:qb_rb_close:299 Free'ing ringbuffer: /dev/shm/qb-cmap-response-31445-31455-28-header
Nov 13 13:44:20 [31444] vm2 corosync debug   [QB    ] ringbuffer.c:qb_rb_close:299 Free'ing ringbuffer: /dev/shm/qb-cmap-event-31445-31455-28-header
Nov 13 13:44:20 [31444] vm2 corosync debug   [QB    ] ringbuffer.c:qb_rb_close:299 Free'ing ringbuffer: /dev/shm/qb-cmap-request-31445-31455-28-header
Nov 13 13:44:20 [31455] vm2 pacemakerd: ( pacemaker.c:270   )    info: start_child: 	Using uid=496 and group=492 for process cib
Nov 13 13:44:20 [31455] vm2 pacemakerd: ( pacemaker.c:281   )    info: start_child: 	Forked child 31459 for process cib
Nov 13 13:44:20 [31455] vm2 pacemakerd: ( pacemaker.c:586   )   debug: update_node_processes: 	Node vm2 now has process list: 00000000000000000000000000000100 (was 00000000000000000000000004000000)
Nov 13 13:44:20 [31455] vm2 pacemakerd: ( pacemaker.c:281   )    info: start_child: 	Forked child 31460 for process stonith-ng
Nov 13 13:44:20 [31455] vm2 pacemakerd: ( pacemaker.c:586   )   debug: update_node_processes: 	Node vm2 now has process list: 00000000000000000000000000100100 (was 00000000000000000000000000000100)
Nov 13 13:44:20 [31455] vm2 pacemakerd: ( pacemaker.c:281   )    info: start_child: 	Forked child 31461 for process lrmd
Nov 13 13:44:20 [31455] vm2 pacemakerd: ( pacemaker.c:586   )   debug: update_node_processes: 	Node vm2 now has process list: 00000000000000000000000000100110 (was 00000000000000000000000000100100)
Nov 13 13:44:20 [31455] vm2 pacemakerd: ( pacemaker.c:270   )    info: start_child: 	Using uid=496 and group=492 for process attrd
Nov 13 13:44:20 [31455] vm2 pacemakerd: ( pacemaker.c:281   )    info: start_child: 	Forked child 31462 for process attrd
Nov 13 13:44:20 [31455] vm2 pacemakerd: ( pacemaker.c:586   )   debug: update_node_processes: 	Node vm2 now has process list: 00000000000000000000000000101110 (was 00000000000000000000000000100110)
Nov 13 13:44:20 [31455] vm2 pacemakerd: ( pacemaker.c:270   )    info: start_child: 	Using uid=496 and group=492 for process pengine
Nov 13 13:44:20 [31455] vm2 pacemakerd: ( pacemaker.c:281   )    info: start_child: 	Forked child 31463 for process pengine
Nov 13 13:44:20 [31455] vm2 pacemakerd: ( pacemaker.c:586   )   debug: update_node_processes: 	Node vm2 now has process list: 00000000000000000000000000111110 (was 00000000000000000000000000101110)
Nov 13 13:44:20 [31455] vm2 pacemakerd: ( pacemaker.c:270   )    info: start_child: 	Using uid=496 and group=492 for process crmd
Nov 13 13:44:20 [31455] vm2 pacemakerd: ( pacemaker.c:281   )    info: start_child: 	Forked child 31464 for process crmd
Nov 13 13:44:20 [31455] vm2 pacemakerd: ( pacemaker.c:586   )   debug: update_node_processes: 	Node vm2 now has process list: 00000000000000000000000000111310 (was 00000000000000000000000000111110)
Nov 13 13:44:20 [31455] vm2 pacemakerd: ( pacemaker.c:1023  )    info: main: 	Starting mainloop
Nov 13 13:44:20 [31455] vm2 pacemakerd: (  corosync.c:191   )    info: pcmk_quorum_notification: 	Membership 12: quorum retained (3)
Nov 13 13:44:20 [31455] vm2 pacemakerd: (  corosync.c:210   )   debug: pcmk_quorum_notification: 	Member[0] 3232261517 
Nov 13 13:44:20 [31455] vm2 pacemakerd: (membership.c:399   )    info: crm_get_peer: 	Created entry 1e2d1f4f-c6fd-4255-8cbe-137571cb45e8/0x1e903d0 for node (null)/3232261517 (2 total)
Nov 13 13:44:20 [31455] vm2 pacemakerd: (membership.c:432   )    info: crm_get_peer: 	Node 3232261517 has uuid 3232261517
Nov 13 13:44:20 [31455] vm2 pacemakerd: (  corosync.c:214   )    info: pcmk_quorum_notification: 	Obtaining name for new node 3232261517
Nov 13 13:44:20 [31444] vm2 corosync debug   [QB    ] ipc_setup.c:handle_new_connection:484 IPC credentials authenticated (31445-31455-28)
Nov 13 13:44:20 [31444] vm2 corosync debug   [QB    ] ipc_shm.c:qb_ipcs_shm_connect:295 connecting to client [31455]
Nov 13 13:44:20 [31444] vm2 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:20 [31444] vm2 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:20 [31461] vm2       lrmd: (   logging.c:761   )    info: crm_log_init: 	Changed active directory to /var/lib/heartbeat/cores/root
Nov 13 13:44:20 [31462] vm2      attrd: (   logging.c:761   )    info: crm_log_init: 	Changed active directory to /var/lib/heartbeat/cores/hacluster
Nov 13 13:44:20 [31462] vm2      attrd: (      main.c:307   )    info: main: 	Starting up
Nov 13 13:44:20 [31459] vm2        cib: (   logging.c:761   )    info: crm_log_init: 	Changed active directory to /var/lib/heartbeat/cores/hacluster
Nov 13 13:44:20 [31459] vm2        cib: (      main.c:230   )  notice: main: 	Using new config location: /var/lib/pacemaker/cib
Nov 13 13:44:20 [31459] vm2        cib: (   cluster.c:536   )    info: get_cluster_type: 	Verifying cluster type: 'corosync'
Nov 13 13:44:20 [31461] vm2       lrmd: ( ipc_setup.c:377   )    info: qb_ipcs_us_publish: 	server name: lrmd
Nov 13 13:44:20 [31461] vm2       lrmd: (      main.c:313   )    info: main: 	Starting
Nov 13 13:44:20 [31462] vm2      attrd: (   cluster.c:536   )    info: get_cluster_type: 	Verifying cluster type: 'corosync'
Nov 13 13:44:20 [31462] vm2      attrd: (   cluster.c:573   )    info: get_cluster_type: 	Assuming an active 'corosync' cluster
Nov 13 13:44:20 [31462] vm2      attrd: (   cluster.c:179   )  notice: crm_cluster_connect: 	Connecting to cluster infrastructure: corosync
Nov 13 13:44:20 [31459] vm2        cib: (   cluster.c:573   )    info: get_cluster_type: 	Assuming an active 'corosync' cluster
Nov 13 13:44:20 [31459] vm2        cib: (        io.c:259   )    info: retrieveCib: 	Reading cluster configuration from: /var/lib/pacemaker/cib/cib.xml (digest: /var/lib/pacemaker/cib/cib.xml.sig)
Nov 13 13:44:20 [31459] vm2        cib: (        io.c:262   ) warning: retrieveCib: 	Cluster configuration not found: /var/lib/pacemaker/cib/cib.xml
Nov 13 13:44:20 [31459] vm2        cib: (        io.c:380   ) warning: readCibXmlFile: 	Primary configuration corrupt or unusable, trying backups in /var/lib/pacemaker/cib
Nov 13 13:44:20 [31459] vm2        cib: (        io.c:412   ) warning: readCibXmlFile: 	Continuing with an empty configuration.
Nov 13 13:44:20 [31460] vm2 stonith-ng: (   logging.c:761   )    info: crm_log_init: 	Changed active directory to /var/lib/heartbeat/cores/root
Nov 13 13:44:20 [31460] vm2 stonith-ng: (   cluster.c:536   )    info: get_cluster_type: 	Verifying cluster type: 'corosync'
Nov 13 13:44:20 [31460] vm2 stonith-ng: (   cluster.c:573   )    info: get_cluster_type: 	Assuming an active 'corosync' cluster
Nov 13 13:44:20 [31460] vm2 stonith-ng: (   cluster.c:179   )  notice: crm_cluster_connect: 	Connecting to cluster infrastructure: corosync
Nov 13 13:44:20 [31444] vm2 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:20 [31459] vm2        cib: (       xml.c:2627  )    info: validate_with_relaxng: 	Creating RNG parser context
Nov 13 13:44:20 [31444] vm2 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_created:272 connection created
Nov 13 13:44:20 [31444] vm2 corosync debug   [CMAP  ] cmap.c:cmap_lib_init_fn:306 lib_init_fn: conn=0x7f514850e8d0
Nov 13 13:44:20 [31455] vm2 pacemakerd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:20 [31455] vm2 pacemakerd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:20 [31455] vm2 pacemakerd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:20 [31463] vm2    pengine: (   logging.c:761   )    info: crm_log_init: 	Changed active directory to /var/lib/heartbeat/cores/hacluster
Nov 13 13:44:20 [31463] vm2    pengine: (      main.c:168   )   debug: main: 	Init server comms
Nov 13 13:44:20 [31463] vm2    pengine: ( ipc_setup.c:377   )    info: qb_ipcs_us_publish: 	server name: pengine
Nov 13 13:44:20 [31463] vm2    pengine: (      main.c:176   )    info: main: 	Starting pengine
Nov 13 13:44:20 [31444] vm2 corosync debug   [QB    ] ipc_setup.c:handle_new_connection:484 IPC credentials authenticated (31445-31462-29)
Nov 13 13:44:20 [31444] vm2 corosync debug   [QB    ] ipc_shm.c:qb_ipcs_shm_connect:295 connecting to client [31462]
Nov 13 13:44:20 [31444] vm2 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:20 [31444] vm2 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:20 [31464] vm2       crmd: (   logging.c:761   )    info: crm_log_init: 	Changed active directory to /var/lib/heartbeat/cores/hacluster
Nov 13 13:44:20 [31464] vm2       crmd: (      main.c:97    )  notice: main: 	CRM Git Version: 2383f6c
Nov 13 13:44:20 [31464] vm2       crmd: (      main.c:134   )   debug: crmd_init: 	Starting crmd
Nov 13 13:44:20 [31464] vm2       crmd: (       fsa.c:193   )   debug: s_crmd_fsa: 	Processing I_STARTUP: [ state=S_STARTING cause=C_STARTUP origin=crmd_init ]
Nov 13 13:44:20 [31464] vm2       crmd: (      misc.c:47    )    info: do_log: 	FSA: Input I_STARTUP from crmd_init() received in state S_STARTING
Nov 13 13:44:20 [31464] vm2       crmd: (   control.c:488   )   debug: do_startup: 	Registering Signal Handlers
Nov 13 13:44:20 [31464] vm2       crmd: (   control.c:495   )   debug: do_startup: 	Creating CIB and LRM objects
Nov 13 13:44:20 [31464] vm2       crmd: (   cluster.c:536   )    info: get_cluster_type: 	Verifying cluster type: 'corosync'
Nov 13 13:44:20 [31464] vm2       crmd: (   cluster.c:573   )    info: get_cluster_type: 	Assuming an active 'corosync' cluster
Nov 13 13:44:20 [31464] vm2       crmd: (       ipc.c:781   )    info: crm_ipc_connect: 	Could not establish cib_shm connection: Connection refused (111)
Nov 13 13:44:20 [31464] vm2       crmd: (cib_native.c:229   )   debug: cib_native_signon_raw: 	Connection unsuccessful (0 (nil))
Nov 13 13:44:20 [31464] vm2       crmd: (cib_native.c:272   )   debug: cib_native_signon_raw: 	Connection to CIB failed: Transport endpoint is not connected
Nov 13 13:44:20 [31464] vm2       crmd: (cib_native.c:282   )   debug: cib_native_signoff: 	Signing out of the CIB Service
Nov 13 13:44:20 [31459] vm2        cib: (        io.c:596   )   debug: activateCibXml: 	Triggering CIB write for start op
Nov 13 13:44:20 [31444] vm2 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:20 [31462] vm2      attrd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:20 [31462] vm2      attrd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:20 [31462] vm2      attrd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:20 [31459] vm2        cib: (      main.c:586   )    info: startCib: 	CIB Initialization completed successfully
Nov 13 13:44:20 [31459] vm2        cib: (   cluster.c:179   )  notice: crm_cluster_connect: 	Connecting to cluster infrastructure: corosync
Nov 13 13:44:20 [31444] vm2 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_created:272 connection created
Nov 13 13:44:20 [31444] vm2 corosync debug   [CPG   ] cpg.c:cpg_lib_init_fn:1459 lib_init_fn: conn=0x7f51486108b0, cpd=0x7f51486115f4
Nov 13 13:44:20 [31455] vm2 pacemakerd: (      ipcc.c:378   )   debug: qb_ipcc_disconnect: 	qb_ipcc_disconnect()
Nov 13 13:44:20 [31455] vm2 pacemakerd: (ringbuffer.c:302   )   debug: qb_rb_close: 	Closing ringbuffer: /dev/shm/qb-cmap-request-31445-31455-28-header
Nov 13 13:44:20 [31455] vm2 pacemakerd: (ringbuffer.c:302   )   debug: qb_rb_close: 	Closing ringbuffer: /dev/shm/qb-cmap-response-31445-31455-28-header
Nov 13 13:44:20 [31455] vm2 pacemakerd: (ringbuffer.c:302   )   debug: qb_rb_close: 	Closing ringbuffer: /dev/shm/qb-cmap-event-31445-31455-28-header
Nov 13 13:44:20 [31455] vm2 pacemakerd: (  corosync.c:134   )  notice: corosync_node_name: 	Unable to get node name for nodeid 3232261517
Nov 13 13:44:20 [31455] vm2 pacemakerd: (membership.c:607   )  notice: crm_update_peer_state: 	pcmk_quorum_notification: Node (null)[3232261517] - state is now member (was (null))
Nov 13 13:44:20 [31455] vm2 pacemakerd: (  corosync.c:210   )   debug: pcmk_quorum_notification: 	Member[1] 3232261518 
Nov 13 13:44:20 [31455] vm2 pacemakerd: (membership.c:607   )  notice: crm_update_peer_state: 	pcmk_quorum_notification: Node vm2[3232261518] - state is now member (was (null))
Nov 13 13:44:20 [31455] vm2 pacemakerd: (  corosync.c:210   )   debug: pcmk_quorum_notification: 	Member[2] 3232261519 
Nov 13 13:44:20 [31455] vm2 pacemakerd: (membership.c:399   )    info: crm_get_peer: 	Created entry 191e0eaf-2935-49e5-8ffe-b876b3112438/0x1e8f7d0 for node (null)/3232261519 (3 total)
Nov 13 13:44:20 [31455] vm2 pacemakerd: (membership.c:432   )    info: crm_get_peer: 	Node 3232261519 has uuid 3232261519
Nov 13 13:44:20 [31455] vm2 pacemakerd: (  corosync.c:214   )    info: pcmk_quorum_notification: 	Obtaining name for new node 3232261519
Nov 13 13:44:20 [31462] vm2      attrd: (       cpg.c:110   )   debug: get_local_nodeid: 	Local nodeid is 3232261518
Nov 13 13:44:20 [31444] vm2 corosync debug   [QB    ] ipc_setup.c:handle_new_connection:484 IPC credentials authenticated (31445-31460-30)
Nov 13 13:44:20 [31444] vm2 corosync debug   [QB    ] ipc_shm.c:qb_ipcs_shm_connect:295 connecting to client [31460]
Nov 13 13:44:20 [31444] vm2 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:20 [31444] vm2 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:20 [31444] vm2 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:20 [31460] vm2 stonith-ng: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:20 [31460] vm2 stonith-ng: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:20 [31460] vm2 stonith-ng: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:20 [31444] vm2 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_created:272 connection created
Nov 13 13:44:20 [31444] vm2 corosync debug   [CPG   ] cpg.c:cpg_lib_init_fn:1459 lib_init_fn: conn=0x7f514871b690, cpd=0x7f5148612b74
Nov 13 13:44:20 [31444] vm2 corosync debug   [QB    ] ipcs.c:qb_ipcs_dispatch_connection_request:757 HUP conn (31445-31455-28)
Nov 13 13:44:20 [31444] vm2 corosync debug   [QB    ] ipcs.c:qb_ipcs_disconnect:605 qb_ipcs_disconnect(31445-31455-28) state:2
Nov 13 13:44:20 [31444] vm2 corosync debug   [QB    ] loop_poll_epoll.c:_del:117 epoll_ctl(del): Bad file descriptor (9)
Nov 13 13:44:20 [31444] vm2 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_closed:417 cs_ipcs_connection_closed() 
Nov 13 13:44:20 [31444] vm2 corosync debug   [CMAP  ] cmap.c:cmap_lib_exit_fn:325 exit_fn for conn=0x7f514850e8d0
Nov 13 13:44:20 [31444] vm2 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_destroyed:390 cs_ipcs_connection_destroyed() 
Nov 13 13:44:20 [31444] vm2 corosync debug   [QB    ] ringbuffer.c:qb_rb_close:299 Free'ing ringbuffer: /dev/shm/qb-cmap-response-31445-31455-28-header
Nov 13 13:44:20 [31444] vm2 corosync debug   [QB    ] ringbuffer.c:qb_rb_close:299 Free'ing ringbuffer: /dev/shm/qb-cmap-event-31445-31455-28-header
Nov 13 13:44:20 [31444] vm2 corosync debug   [QB    ] ringbuffer.c:qb_rb_close:299 Free'ing ringbuffer: /dev/shm/qb-cmap-request-31445-31455-28-header
Nov 13 13:44:20 [31462] vm2      attrd: (membership.c:399   )    info: crm_get_peer: 	Created entry 82a2ec4c-d6d9-47d4-9c8f-01d206912a35/0x871130 for node (null)/3232261518 (1 total)
Nov 13 13:44:20 [31462] vm2      attrd: (membership.c:432   )    info: crm_get_peer: 	Node 3232261518 has uuid 3232261518
Nov 13 13:44:20 [31462] vm2      attrd: (membership.c:550   )    info: crm_update_peer_proc: 	cluster_connect_cpg: Node (null)[3232261518] - corosync-cpg is now online
Nov 13 13:44:20 [31462] vm2      attrd: (membership.c:607   )  notice: crm_update_peer_state: 	attrd_peer_change_cb: Node (null)[3232261518] - state is now member (was (null))
Nov 13 13:44:20 [31462] vm2      attrd: (  corosync.c:345   )    info: init_cs_connection_once: 	Connection to 'corosync': established
Nov 13 13:44:20 [31460] vm2 stonith-ng: (       cpg.c:110   )   debug: get_local_nodeid: 	Local nodeid is 3232261518
Nov 13 13:44:20 [31460] vm2 stonith-ng: (membership.c:399   )    info: crm_get_peer: 	Created entry 45e8e5d2-7778-4e42-95b0-f4f07035d0c8/0xfe96a0 for node (null)/3232261518 (1 total)
Nov 13 13:44:20 [31460] vm2 stonith-ng: (membership.c:432   )    info: crm_get_peer: 	Node 3232261518 has uuid 3232261518
Nov 13 13:44:20 [31460] vm2 stonith-ng: (membership.c:550   )    info: crm_update_peer_proc: 	cluster_connect_cpg: Node (null)[3232261518] - corosync-cpg is now online
Nov 13 13:44:20 [31460] vm2 stonith-ng: (  corosync.c:345   )    info: init_cs_connection_once: 	Connection to 'corosync': established
Nov 13 13:44:20 [31444] vm2 corosync debug   [CPG   ] cpg.c:message_handler_req_exec_cpg_procjoin:1260 got procjoin message from cluster node -1062705777 (r(0) ip(192.168.101.143) r(1) ip(192.168.102.143) ) for pid 460
Nov 13 13:44:20 [31444] vm2 corosync debug   [CPG   ] cpg.c:message_handler_req_exec_cpg_procjoin:1260 got procjoin message from cluster node -1062705779 (r(0) ip(192.168.101.141) r(1) ip(192.168.102.141) ) for pid 15881
Nov 13 13:44:20 [31444] vm2 corosync debug   [CPG   ] cpg.c:message_handler_req_exec_cpg_procjoin:1260 got procjoin message from cluster node -1062705779 (r(0) ip(192.168.101.141) r(1) ip(192.168.102.141) ) for pid 15879
Nov 13 13:44:20 [31444] vm2 corosync debug   [CPG   ] cpg.c:message_handler_req_exec_cpg_procjoin:1260 got procjoin message from cluster node -1062705779 (r(0) ip(192.168.101.141) r(1) ip(192.168.102.141) ) for pid 15878
Nov 13 13:44:20 [31444] vm2 corosync debug   [CPG   ] cpg.c:message_handler_req_exec_cpg_procjoin:1260 got procjoin message from cluster node -1062705778 (r(0) ip(192.168.101.142) r(1) ip(192.168.102.142) ) for pid 31462
Nov 13 13:44:20 [31444] vm2 corosync debug   [CPG   ] cpg.c:message_handler_req_exec_cpg_procjoin:1260 got procjoin message from cluster node -1062705778 (r(0) ip(192.168.101.142) r(1) ip(192.168.102.142) ) for pid 31460
Nov 13 13:44:20 [31444] vm2 corosync debug   [QB    ] ipc_setup.c:handle_new_connection:484 IPC credentials authenticated (31445-31459-28)
Nov 13 13:44:20 [31444] vm2 corosync debug   [QB    ] ipc_shm.c:qb_ipcs_shm_connect:295 connecting to client [31459]
Nov 13 13:44:20 [31444] vm2 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:20 [31444] vm2 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:20 [31444] vm2 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:20 [31459] vm2        cib: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:20 [31459] vm2        cib: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:20 [31459] vm2        cib: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:20 [31444] vm2 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_created:272 connection created
Nov 13 13:44:20 [31444] vm2 corosync debug   [CPG   ] cpg.c:cpg_lib_init_fn:1459 lib_init_fn: conn=0x7f5148717e20, cpd=0x7f514850ed44
Nov 13 13:44:20 [31459] vm2        cib: (       cpg.c:110   )   debug: get_local_nodeid: 	Local nodeid is 3232261518
Nov 13 13:44:20 [31444] vm2 corosync debug   [QB    ] ipc_setup.c:handle_new_connection:484 IPC credentials authenticated (31445-31455-31)
Nov 13 13:44:20 [31444] vm2 corosync debug   [QB    ] ipc_shm.c:qb_ipcs_shm_connect:295 connecting to client [31455]
Nov 13 13:44:20 [31444] vm2 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:20 [31444] vm2 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:20 [31444] vm2 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:20 [31455] vm2 pacemakerd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:20 [31455] vm2 pacemakerd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:20 [31455] vm2 pacemakerd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:20 [31444] vm2 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_created:272 connection created
Nov 13 13:44:20 [31444] vm2 corosync debug   [CMAP  ] cmap.c:cmap_lib_init_fn:306 lib_init_fn: conn=0x7f51486131b0
Nov 13 13:44:20 [31459] vm2        cib: (membership.c:399   )    info: crm_get_peer: 	Created entry ab16e495-b8ed-4471-b158-1e5d2ae60c1a/0xbc9360 for node (null)/3232261518 (1 total)
Nov 13 13:44:20 [31459] vm2        cib: (membership.c:432   )    info: crm_get_peer: 	Node 3232261518 has uuid 3232261518
Nov 13 13:44:20 [31459] vm2        cib: (membership.c:550   )    info: crm_update_peer_proc: 	cluster_connect_cpg: Node (null)[3232261518] - corosync-cpg is now online
Nov 13 13:44:20 [31459] vm2        cib: (  corosync.c:345   )    info: init_cs_connection_once: 	Connection to 'corosync': established
Nov 13 13:44:20 [31455] vm2 pacemakerd: (      ipcc.c:378   )   debug: qb_ipcc_disconnect: 	qb_ipcc_disconnect()
Nov 13 13:44:20 [31455] vm2 pacemakerd: (ringbuffer.c:302   )   debug: qb_rb_close: 	Closing ringbuffer: /dev/shm/qb-cmap-request-31445-31455-31-header
Nov 13 13:44:20 [31455] vm2 pacemakerd: (ringbuffer.c:302   )   debug: qb_rb_close: 	Closing ringbuffer: /dev/shm/qb-cmap-response-31445-31455-31-header
Nov 13 13:44:20 [31455] vm2 pacemakerd: (ringbuffer.c:302   )   debug: qb_rb_close: 	Closing ringbuffer: /dev/shm/qb-cmap-event-31445-31455-31-header
Nov 13 13:44:20 [31455] vm2 pacemakerd: (  corosync.c:134   )  notice: corosync_node_name: 	Unable to get node name for nodeid 3232261519
Nov 13 13:44:20 [31455] vm2 pacemakerd: (membership.c:607   )  notice: crm_update_peer_state: 	pcmk_quorum_notification: Node (null)[3232261519] - state is now member (was (null))
Nov 13 13:44:20 [31455] vm2 pacemakerd: (membership.c:404   )    info: crm_get_peer: 	Node 3232261519 is now known as vm3
Nov 13 13:44:20 [31455] vm2 pacemakerd: ( pacemaker.c:586   )   debug: update_node_processes: 	Node vm3 now has process list: 00000000000000000000000000000100 (was 00000000000000000000000000000000)
Nov 13 13:44:20 [31455] vm2 pacemakerd: ( pacemaker.c:586   )   debug: update_node_processes: 	Node vm3 now has process list: 00000000000000000000000000100100 (was 00000000000000000000000000000100)
Nov 13 13:44:20 [31455] vm2 pacemakerd: ( pacemaker.c:586   )   debug: update_node_processes: 	Node vm3 now has process list: 00000000000000000000000000100110 (was 00000000000000000000000000100100)
Nov 13 13:44:20 [31455] vm2 pacemakerd: ( pacemaker.c:586   )   debug: update_node_processes: 	Node vm3 now has process list: 00000000000000000000000000101110 (was 00000000000000000000000000100110)
Nov 13 13:44:20 [31444] vm2 corosync debug   [QB    ] ipcs.c:qb_ipcs_dispatch_connection_request:757 HUP conn (31445-31455-31)
Nov 13 13:44:20 [31444] vm2 corosync debug   [QB    ] ipcs.c:qb_ipcs_disconnect:605 qb_ipcs_disconnect(31445-31455-31) state:2
Nov 13 13:44:20 [31444] vm2 corosync debug   [QB    ] loop_poll_epoll.c:_del:117 epoll_ctl(del): Bad file descriptor (9)
Nov 13 13:44:20 [31444] vm2 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_closed:417 cs_ipcs_connection_closed() 
Nov 13 13:44:20 [31444] vm2 corosync debug   [CMAP  ] cmap.c:cmap_lib_exit_fn:325 exit_fn for conn=0x7f51486131b0
Nov 13 13:44:20 [31444] vm2 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_destroyed:390 cs_ipcs_connection_destroyed() 
Nov 13 13:44:20 [31444] vm2 corosync debug   [QB    ] ringbuffer.c:qb_rb_close:299 Free'ing ringbuffer: /dev/shm/qb-cmap-response-31445-31455-31-header
Nov 13 13:44:20 [31444] vm2 corosync debug   [QB    ] ringbuffer.c:qb_rb_close:299 Free'ing ringbuffer: /dev/shm/qb-cmap-event-31445-31455-31-header
Nov 13 13:44:20 [31455] vm2 pacemakerd: (membership.c:404   )    info: crm_get_peer: 	Node 3232261517 is now known as vm1
Nov 13 13:44:20 [31455] vm2 pacemakerd: ( pacemaker.c:586   )   debug: update_node_processes: 	Node vm1 now has process list: 00000000000000000000000000111310 (was 00000000000000000000000000000000)
Nov 13 13:44:20 [31444] vm2 corosync debug   [QB    ] ringbuffer.c:qb_rb_close:299 Free'ing ringbuffer: /dev/shm/qb-cmap-request-31445-31455-31-header
Nov 13 13:44:20 [31444] vm2 corosync debug   [QB    ] ipc_setup.c:handle_new_connection:484 IPC credentials authenticated (31445-31462-31)
Nov 13 13:44:20 [31444] vm2 corosync debug   [QB    ] ipc_shm.c:qb_ipcs_shm_connect:295 connecting to client [31462]
Nov 13 13:44:20 [31444] vm2 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:20 [31444] vm2 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:20 [31444] vm2 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:20 [31462] vm2      attrd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:20 [31462] vm2      attrd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:20 [31462] vm2      attrd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:20 [31444] vm2 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_created:272 connection created
Nov 13 13:44:20 [31444] vm2 corosync debug   [CMAP  ] cmap.c:cmap_lib_init_fn:306 lib_init_fn: conn=0x7f5148614d00
Nov 13 13:44:20 [31462] vm2      attrd: (      ipcc.c:378   )   debug: qb_ipcc_disconnect: 	qb_ipcc_disconnect()
Nov 13 13:44:20 [31462] vm2      attrd: (ringbuffer.c:302   )   debug: qb_rb_close: 	Closing ringbuffer: /dev/shm/qb-cmap-request-31445-31462-31-header
Nov 13 13:44:20 [31462] vm2      attrd: (ringbuffer.c:302   )   debug: qb_rb_close: 	Closing ringbuffer: /dev/shm/qb-cmap-response-31445-31462-31-header
Nov 13 13:44:20 [31462] vm2      attrd: (ringbuffer.c:302   )   debug: qb_rb_close: 	Closing ringbuffer: /dev/shm/qb-cmap-event-31445-31462-31-header
Nov 13 13:44:20 [31462] vm2      attrd: (  corosync.c:134   )  notice: corosync_node_name: 	Unable to get node name for nodeid 3232261518
Nov 13 13:44:20 [31462] vm2      attrd: (   cluster.c:338   )  notice: get_node_name: 	Defaulting to uname -n for the local corosync node name
Nov 13 13:44:20 [31462] vm2      attrd: (membership.c:404   )    info: crm_get_peer: 	Node 3232261518 is now known as vm2
Nov 13 13:44:20 [31462] vm2      attrd: (      main.c:323   )    info: main: 	Cluster connection active
Nov 13 13:44:20 [31462] vm2      attrd: ( ipc_setup.c:377   )    info: qb_ipcs_us_publish: 	server name: attrd
Nov 13 13:44:20 [31462] vm2      attrd: (      main.c:327   )    info: main: 	Accepting attribute updates
Nov 13 13:44:20 [31462] vm2      attrd: (      main.c:149   )   debug: attrd_cib_connect: 	CIB signon attempt 1
Nov 13 13:44:20 [31462] vm2      attrd: (       ipc.c:781   )    info: crm_ipc_connect: 	Could not establish cib_rw connection: Connection refused (111)
Nov 13 13:44:20 [31462] vm2      attrd: (cib_native.c:229   )   debug: cib_native_signon_raw: 	Connection unsuccessful (0 (nil))
Nov 13 13:44:20 [31462] vm2      attrd: (cib_native.c:272   )   debug: cib_native_signon_raw: 	Connection to CIB failed: Transport endpoint is not connected
Nov 13 13:44:20 [31462] vm2      attrd: (cib_native.c:282   )   debug: cib_native_signoff: 	Signing out of the CIB Service
Nov 13 13:44:20 [31444] vm2 corosync debug   [QB    ] ipc_setup.c:handle_new_connection:484 IPC credentials authenticated (31445-31460-32)
Nov 13 13:44:20 [31444] vm2 corosync debug   [QB    ] ipc_shm.c:qb_ipcs_shm_connect:295 connecting to client [31460]
Nov 13 13:44:20 [31444] vm2 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:20 [31444] vm2 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:20 [31444] vm2 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:20 [31460] vm2 stonith-ng: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:20 [31460] vm2 stonith-ng: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:20 [31460] vm2 stonith-ng: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:20 [31444] vm2 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_created:272 connection created
Nov 13 13:44:20 [31444] vm2 corosync debug   [CMAP  ] cmap.c:cmap_lib_init_fn:306 lib_init_fn: conn=0x7f5148614620
Nov 13 13:44:20 [31444] vm2 corosync debug   [QB    ] ipcs.c:qb_ipcs_dispatch_connection_request:757 HUP conn (31445-31462-31)
Nov 13 13:44:20 [31444] vm2 corosync debug   [QB    ] ipcs.c:qb_ipcs_disconnect:605 qb_ipcs_disconnect(31445-31462-31) state:2
Nov 13 13:44:20 [31444] vm2 corosync debug   [QB    ] loop_poll_epoll.c:_del:117 epoll_ctl(del): Bad file descriptor (9)
Nov 13 13:44:20 [31444] vm2 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_closed:417 cs_ipcs_connection_closed() 
Nov 13 13:44:20 [31444] vm2 corosync debug   [CMAP  ] cmap.c:cmap_lib_exit_fn:325 exit_fn for conn=0x7f5148614d00
Nov 13 13:44:20 [31444] vm2 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_destroyed:390 cs_ipcs_connection_destroyed() 
Nov 13 13:44:20 [31444] vm2 corosync debug   [QB    ] ringbuffer.c:qb_rb_close:299 Free'ing ringbuffer: /dev/shm/qb-cmap-response-31445-31462-31-header
Nov 13 13:44:20 [31444] vm2 corosync debug   [QB    ] ringbuffer.c:qb_rb_close:299 Free'ing ringbuffer: /dev/shm/qb-cmap-event-31445-31462-31-header
Nov 13 13:44:20 [31444] vm2 corosync debug   [QB    ] ringbuffer.c:qb_rb_close:299 Free'ing ringbuffer: /dev/shm/qb-cmap-request-31445-31462-31-header
Nov 13 13:44:20 [31460] vm2 stonith-ng: (      ipcc.c:378   )   debug: qb_ipcc_disconnect: 	qb_ipcc_disconnect()
Nov 13 13:44:20 [31460] vm2 stonith-ng: (ringbuffer.c:302   )   debug: qb_rb_close: 	Closing ringbuffer: /dev/shm/qb-cmap-request-31445-31460-32-header
Nov 13 13:44:20 [31460] vm2 stonith-ng: (ringbuffer.c:302   )   debug: qb_rb_close: 	Closing ringbuffer: /dev/shm/qb-cmap-response-31445-31460-32-header
Nov 13 13:44:20 [31460] vm2 stonith-ng: (ringbuffer.c:302   )   debug: qb_rb_close: 	Closing ringbuffer: /dev/shm/qb-cmap-event-31445-31460-32-header
Nov 13 13:44:20 [31460] vm2 stonith-ng: (  corosync.c:134   )  notice: corosync_node_name: 	Unable to get node name for nodeid 3232261518
Nov 13 13:44:20 [31460] vm2 stonith-ng: (   cluster.c:338   )  notice: get_node_name: 	Defaulting to uname -n for the local corosync node name
Nov 13 13:44:20 [31460] vm2 stonith-ng: (membership.c:404   )    info: crm_get_peer: 	Node 3232261518 is now known as vm2
Nov 13 13:44:20 [31460] vm2 stonith-ng: (       ipc.c:781   )    info: crm_ipc_connect: 	Could not establish cib_rw connection: Connection refused (111)
Nov 13 13:44:20 [31460] vm2 stonith-ng: (cib_native.c:229   )   debug: cib_native_signon_raw: 	Connection unsuccessful (0 (nil))
Nov 13 13:44:20 [31460] vm2 stonith-ng: (cib_native.c:272   )   debug: cib_native_signon_raw: 	Connection to CIB failed: Transport endpoint is not connected
Nov 13 13:44:20 [31460] vm2 stonith-ng: (cib_native.c:282   )   debug: cib_native_signoff: 	Signing out of the CIB Service
Nov 13 13:44:20 [31444] vm2 corosync debug   [QB    ] ipcs.c:qb_ipcs_dispatch_connection_request:757 HUP conn (31445-31460-32)
Nov 13 13:44:20 [31444] vm2 corosync debug   [QB    ] ipcs.c:qb_ipcs_disconnect:605 qb_ipcs_disconnect(31445-31460-32) state:2
Nov 13 13:44:20 [31444] vm2 corosync debug   [QB    ] loop_poll_epoll.c:_del:117 epoll_ctl(del): Bad file descriptor (9)
Nov 13 13:44:20 [31444] vm2 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_closed:417 cs_ipcs_connection_closed() 
Nov 13 13:44:20 [31444] vm2 corosync debug   [CMAP  ] cmap.c:cmap_lib_exit_fn:325 exit_fn for conn=0x7f5148614620
Nov 13 13:44:20 [31444] vm2 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_destroyed:390 cs_ipcs_connection_destroyed() 
Nov 13 13:44:20 [31444] vm2 corosync debug   [QB    ] ringbuffer.c:qb_rb_close:299 Free'ing ringbuffer: /dev/shm/qb-cmap-response-31445-31460-32-header
Nov 13 13:44:20 [31444] vm2 corosync debug   [QB    ] ringbuffer.c:qb_rb_close:299 Free'ing ringbuffer: /dev/shm/qb-cmap-event-31445-31460-32-header
Nov 13 13:44:20 [31444] vm2 corosync debug   [QB    ] ringbuffer.c:qb_rb_close:299 Free'ing ringbuffer: /dev/shm/qb-cmap-request-31445-31460-32-header
Nov 13 13:44:20 [31444] vm2 corosync debug   [QB    ] ipc_setup.c:handle_new_connection:484 IPC credentials authenticated (31445-31459-31)
Nov 13 13:44:20 [31444] vm2 corosync debug   [QB    ] ipc_shm.c:qb_ipcs_shm_connect:295 connecting to client [31459]
Nov 13 13:44:20 [31444] vm2 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:20 [31444] vm2 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:20 [31444] vm2 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:20 [31459] vm2        cib: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:20 [31459] vm2        cib: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:20 [31459] vm2        cib: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:20 [31444] vm2 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_created:272 connection created
Nov 13 13:44:20 [31444] vm2 corosync debug   [CMAP  ] cmap.c:cmap_lib_init_fn:306 lib_init_fn: conn=0x7f5148614c90
Nov 13 13:44:20 [31459] vm2        cib: (      ipcc.c:378   )   debug: qb_ipcc_disconnect: 	qb_ipcc_disconnect()
Nov 13 13:44:20 [31459] vm2        cib: (ringbuffer.c:302   )   debug: qb_rb_close: 	Closing ringbuffer: /dev/shm/qb-cmap-request-31445-31459-31-header
Nov 13 13:44:20 [31459] vm2        cib: (ringbuffer.c:302   )   debug: qb_rb_close: 	Closing ringbuffer: /dev/shm/qb-cmap-response-31445-31459-31-header
Nov 13 13:44:20 [31459] vm2        cib: (ringbuffer.c:302   )   debug: qb_rb_close: 	Closing ringbuffer: /dev/shm/qb-cmap-event-31445-31459-31-header
Nov 13 13:44:20 [31459] vm2        cib: (  corosync.c:134   )  notice: corosync_node_name: 	Unable to get node name for nodeid 3232261518
Nov 13 13:44:20 [31459] vm2        cib: (   cluster.c:338   )  notice: get_node_name: 	Defaulting to uname -n for the local corosync node name
Nov 13 13:44:20 [31459] vm2        cib: (membership.c:404   )    info: crm_get_peer: 	Node 3232261518 is now known as vm2
Nov 13 13:44:20 [31459] vm2        cib: ( ipc_setup.c:377   )    info: qb_ipcs_us_publish: 	server name: cib_ro
Nov 13 13:44:20 [31459] vm2        cib: ( ipc_setup.c:377   )    info: qb_ipcs_us_publish: 	server name: cib_rw
Nov 13 13:44:20 [31459] vm2        cib: ( ipc_setup.c:377   )    info: qb_ipcs_us_publish: 	server name: cib_shm
Nov 13 13:44:20 [31459] vm2        cib: (      main.c:550   )    info: cib_init: 	Starting cib mainloop
Nov 13 13:44:20 [31455] vm2 pacemakerd: ( pacemaker.c:586   )   debug: update_node_processes: 	Node vm3 now has process list: 00000000000000000000000000111110 (was 00000000000000000000000000101110)
Nov 13 13:44:20 [31455] vm2 pacemakerd: ( pacemaker.c:586   )   debug: update_node_processes: 	Node vm3 now has process list: 00000000000000000000000000111310 (was 00000000000000000000000000111110)
Nov 13 13:44:20 [31444] vm2 corosync debug   [CPG   ] cpg.c:message_handler_req_exec_cpg_procjoin:1260 got procjoin message from cluster node -1062705778 (r(0) ip(192.168.101.142) r(1) ip(192.168.102.142) ) for pid 31459
Nov 13 13:44:20 [31444] vm2 corosync debug   [QB    ] ipcs.c:qb_ipcs_dispatch_connection_request:757 HUP conn (31445-31459-31)
Nov 13 13:44:20 [31444] vm2 corosync debug   [QB    ] ipcs.c:qb_ipcs_disconnect:605 qb_ipcs_disconnect(31445-31459-31) state:2
Nov 13 13:44:20 [31444] vm2 corosync debug   [QB    ] loop_poll_epoll.c:_del:117 epoll_ctl(del): Bad file descriptor (9)
Nov 13 13:44:20 [31444] vm2 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_closed:417 cs_ipcs_connection_closed() 
Nov 13 13:44:20 [31444] vm2 corosync debug   [CMAP  ] cmap.c:cmap_lib_exit_fn:325 exit_fn for conn=0x7f5148614c90
Nov 13 13:44:20 [31444] vm2 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_destroyed:390 cs_ipcs_connection_destroyed() 
Nov 13 13:44:20 [31444] vm2 corosync debug   [QB    ] ringbuffer.c:qb_rb_close:299 Free'ing ringbuffer: /dev/shm/qb-cmap-response-31445-31459-31-header
Nov 13 13:44:20 [31459] vm2        cib: (       cpg.c:378   )    info: pcmk_cpg_membership: 	Joined[0.0] cib.3232261518 
Nov 13 13:44:20 [31444] vm2 corosync debug   [QB    ] ringbuffer.c:qb_rb_close:299 Free'ing ringbuffer: /dev/shm/qb-cmap-event-31445-31459-31-header
Nov 13 13:44:20 [31459] vm2        cib: (membership.c:399   )    info: crm_get_peer: 	Created entry b2901563-7693-4187-84ce-efa76e0af4c1/0xbcbbc0 for node (null)/3232261517 (2 total)
Nov 13 13:44:20 [31459] vm2        cib: (membership.c:432   )    info: crm_get_peer: 	Node 3232261517 has uuid 3232261517
Nov 13 13:44:20 [31459] vm2        cib: (       cpg.c:384   )    info: pcmk_cpg_membership: 	Member[0.0] cib.3232261517 
Nov 13 13:44:20 [31459] vm2        cib: (membership.c:550   )    info: crm_update_peer_proc: 	pcmk_cpg_membership: Node (null)[3232261517] - corosync-cpg is now online
Nov 13 13:44:20 [31459] vm2        cib: (       cpg.c:384   )    info: pcmk_cpg_membership: 	Member[0.1] cib.3232261518 
Nov 13 13:44:20 [31444] vm2 corosync debug   [QB    ] ringbuffer.c:qb_rb_close:299 Free'ing ringbuffer: /dev/shm/qb-cmap-request-31445-31459-31-header
Nov 13 13:44:20 [31459] vm2        cib: (     utils.c:1216  )   debug: get_last_sequence: 	Series file /var/lib/pacemaker/cib/cib.last does not exist
Nov 13 13:44:20 [31459] vm2        cib: (        io.c:748   )   debug: write_cib_contents: 	Writing CIB to disk
Nov 13 13:44:20 [31459] vm2        cib: (        io.c:773   )    info: write_cib_contents: 	Wrote version 0.0.0 of the CIB to disk (digest: 978cb58a57d1ff0f3e53e793331143d7)
Nov 13 13:44:20 [31459] vm2        cib: (        io.c:781   )   debug: write_cib_contents: 	Wrote digest 978cb58a57d1ff0f3e53e793331143d7 to disk
Nov 13 13:44:20 [31459] vm2        cib: (        io.c:259   )    info: retrieveCib: 	Reading cluster configuration from: /var/lib/pacemaker/cib/cib.TidZSC (digest: /var/lib/pacemaker/cib/cib.g2LB8U)
Nov 13 13:44:21 [31459] vm2        cib: (        io.c:786   )   debug: write_cib_contents: 	Activating /var/lib/pacemaker/cib/cib.TidZSC
Nov 13 13:44:21 [31444] vm2 corosync debug   [CPG   ] cpg.c:message_handler_req_exec_cpg_procjoin:1260 got procjoin message from cluster node -1062705777 (r(0) ip(192.168.101.143) r(1) ip(192.168.102.143) ) for pid 467
Nov 13 13:44:21 [31444] vm2 corosync debug   [CPG   ] cpg.c:message_handler_req_exec_cpg_procjoin:1260 got procjoin message from cluster node -1062705777 (r(0) ip(192.168.101.143) r(1) ip(192.168.102.143) ) for pid 465
Nov 13 13:44:21 [31444] vm2 corosync debug   [CPG   ] cpg.c:message_handler_req_exec_cpg_procjoin:1260 got procjoin message from cluster node -1062705777 (r(0) ip(192.168.101.143) r(1) ip(192.168.102.143) ) for pid 464
Nov 13 13:44:21 [31459] vm2        cib: (       cpg.c:378   )    info: pcmk_cpg_membership: 	Joined[1.0] cib.3232261519 
Nov 13 13:44:21 [31459] vm2        cib: (       cpg.c:384   )    info: pcmk_cpg_membership: 	Member[1.0] cib.3232261517 
Nov 13 13:44:21 [31459] vm2        cib: (       cpg.c:384   )    info: pcmk_cpg_membership: 	Member[1.1] cib.3232261518 
Nov 13 13:44:21 [31459] vm2        cib: (membership.c:399   )    info: crm_get_peer: 	Created entry ac162f3c-00db-47b5-97aa-07743db1cf87/0xbcc370 for node (null)/3232261519 (3 total)
Nov 13 13:44:21 [31459] vm2        cib: (membership.c:432   )    info: crm_get_peer: 	Node 3232261519 has uuid 3232261519
Nov 13 13:44:21 [31459] vm2        cib: (       cpg.c:384   )    info: pcmk_cpg_membership: 	Member[1.2] cib.3232261519 
Nov 13 13:44:21 [31459] vm2        cib: (membership.c:550   )    info: crm_update_peer_proc: 	pcmk_cpg_membership: Node (null)[3232261519] - corosync-cpg is now online
Nov 13 13:44:21 [31444] vm2 corosync debug   [CPG   ] cpg.c:message_handler_req_exec_cpg_procjoin:1260 got procjoin message from cluster node -1062705779 (r(0) ip(192.168.101.141) r(1) ip(192.168.102.141) ) for pid 15883
Nov 13 13:44:21 [31459] vm2        cib: (       ipc.c:327   )    info: crm_client_new: 	Connecting 0xbcc3e0 for uid=496 gid=492 pid=31464 id=75bfca14-389a-44dd-baa6-e2249c558c46
Nov 13 13:44:21 [31459] vm2        cib: ( ipc_setup.c:484   )   debug: handle_new_connection: 	IPC credentials authenticated (31459-31464-10)
Nov 13 13:44:21 [31459] vm2        cib: (   ipc_shm.c:295   )   debug: qb_ipcs_shm_connect: 	connecting to client [31464]
Nov 13 13:44:21 [31459] vm2        cib: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:524301; real_size:528384; rb->word_size:132096
Nov 13 13:44:21 [31459] vm2        cib: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:524301; real_size:528384; rb->word_size:132096
Nov 13 13:44:21 [31459] vm2        cib: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:524301; real_size:528384; rb->word_size:132096
Nov 13 13:44:21 [31464] vm2       crmd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:524301; real_size:528384; rb->word_size:132096
Nov 13 13:44:21 [31464] vm2       crmd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:524301; real_size:528384; rb->word_size:132096
Nov 13 13:44:21 [31464] vm2       crmd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:524301; real_size:528384; rb->word_size:132096
Nov 13 13:44:21 [31464] vm2       crmd: (cib_native.c:268   )   debug: cib_native_signon_raw: 	Connection to CIB successful
Nov 13 13:44:21 [31459] vm2        cib: ( callbacks.c:189   )   debug: cib_common_callback_worker: 	Setting cib_refresh_notify callbacks for crmd (75bfca14-389a-44dd-baa6-e2249c558c46): on
Nov 13 13:44:21 [31459] vm2        cib: ( callbacks.c:189   )   debug: cib_common_callback_worker: 	Setting cib_diff_notify callbacks for crmd (75bfca14-389a-44dd-baa6-e2249c558c46): on
Nov 13 13:44:21 [31464] vm2       crmd: (       cib.c:215   )    info: do_cib_control: 	CIB connection established
Nov 13 13:44:21 [31464] vm2       crmd: (   cluster.c:179   )  notice: crm_cluster_connect: 	Connecting to cluster infrastructure: corosync
Nov 13 13:44:21 [31459] vm2        cib: ( callbacks.c:761   )    info: cib_process_request: 	Completed cib_query operation for section 'all': OK (rc=0, origin=local/crmd/2, version=0.0.0)
Nov 13 13:44:21 [31444] vm2 corosync debug   [QB    ] ipc_setup.c:handle_new_connection:484 IPC credentials authenticated (31445-31464-31)
Nov 13 13:44:21 [31444] vm2 corosync debug   [QB    ] ipc_shm.c:qb_ipcs_shm_connect:295 connecting to client [31464]
Nov 13 13:44:21 [31444] vm2 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:21 [31444] vm2 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:21 [31444] vm2 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:21 [31464] vm2       crmd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:21 [31464] vm2       crmd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:21 [31464] vm2       crmd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:21 [31444] vm2 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_created:272 connection created
Nov 13 13:44:21 [31444] vm2 corosync debug   [CPG   ] cpg.c:cpg_lib_init_fn:1459 lib_init_fn: conn=0x7f51487293e0, cpd=0x7f5148614e24
Nov 13 13:44:21 [31464] vm2       crmd: (       cpg.c:110   )   debug: get_local_nodeid: 	Local nodeid is 3232261518
Nov 13 13:44:21 [31464] vm2       crmd: (membership.c:399   )    info: crm_get_peer: 	Created entry fa843376-669b-440e-b3be-9e1b531811cf/0x22a4c50 for node (null)/3232261518 (1 total)
Nov 13 13:44:21 [31464] vm2       crmd: (membership.c:432   )    info: crm_get_peer: 	Node 3232261518 has uuid 3232261518
Nov 13 13:44:21 [31464] vm2       crmd: (membership.c:550   )    info: crm_update_peer_proc: 	cluster_connect_cpg: Node (null)[3232261518] - corosync-cpg is now online
Nov 13 13:44:21 [31464] vm2       crmd: (  corosync.c:345   )    info: init_cs_connection_once: 	Connection to 'corosync': established
Nov 13 13:44:21 [31444] vm2 corosync debug   [QB    ] ipc_setup.c:handle_new_connection:484 IPC credentials authenticated (31445-31464-32)
Nov 13 13:44:21 [31444] vm2 corosync debug   [QB    ] ipc_shm.c:qb_ipcs_shm_connect:295 connecting to client [31464]
Nov 13 13:44:21 [31444] vm2 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:21 [31444] vm2 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:21 [31444] vm2 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:21 [31464] vm2       crmd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:21 [31464] vm2       crmd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:21 [31464] vm2       crmd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:21 [31444] vm2 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_created:272 connection created
Nov 13 13:44:21 [31444] vm2 corosync debug   [CMAP  ] cmap.c:cmap_lib_init_fn:306 lib_init_fn: conn=0x7f5148613770
Nov 13 13:44:21 [31464] vm2       crmd: (      ipcc.c:378   )   debug: qb_ipcc_disconnect: 	qb_ipcc_disconnect()
Nov 13 13:44:21 [31464] vm2       crmd: (ringbuffer.c:302   )   debug: qb_rb_close: 	Closing ringbuffer: /dev/shm/qb-cmap-request-31445-31464-32-header
Nov 13 13:44:21 [31464] vm2       crmd: (ringbuffer.c:302   )   debug: qb_rb_close: 	Closing ringbuffer: /dev/shm/qb-cmap-response-31445-31464-32-header
Nov 13 13:44:21 [31464] vm2       crmd: (ringbuffer.c:302   )   debug: qb_rb_close: 	Closing ringbuffer: /dev/shm/qb-cmap-event-31445-31464-32-header
Nov 13 13:44:21 [31464] vm2       crmd: (  corosync.c:134   )  notice: corosync_node_name: 	Unable to get node name for nodeid 3232261518
Nov 13 13:44:21 [31464] vm2       crmd: (   cluster.c:338   )  notice: get_node_name: 	Defaulting to uname -n for the local corosync node name
Nov 13 13:44:21 [31464] vm2       crmd: (membership.c:404   )    info: crm_get_peer: 	Node 3232261518 is now known as vm2
Nov 13 13:44:21 [31464] vm2       crmd: ( callbacks.c:118   )    info: peer_update_callback: 	vm2 is now (null)
Nov 13 13:44:21 [31464] vm2       crmd: (  corosync.c:255   )   debug: cluster_connect_quorum: 	Configuring Pacemaker to obtain quorum from Corosync
Nov 13 13:44:21 [31444] vm2 corosync debug   [CPG   ] cpg.c:message_handler_req_exec_cpg_procjoin:1260 got procjoin message from cluster node -1062705778 (r(0) ip(192.168.101.142) r(1) ip(192.168.102.142) ) for pid 31464
Nov 13 13:44:21 [31444] vm2 corosync debug   [QB    ] ipcs.c:qb_ipcs_dispatch_connection_request:757 HUP conn (31445-31464-32)
Nov 13 13:44:21 [31444] vm2 corosync debug   [QB    ] ipcs.c:qb_ipcs_disconnect:605 qb_ipcs_disconnect(31445-31464-32) state:2
Nov 13 13:44:21 [31444] vm2 corosync debug   [QB    ] loop_poll_epoll.c:_del:117 epoll_ctl(del): Bad file descriptor (9)
Nov 13 13:44:21 [31444] vm2 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_closed:417 cs_ipcs_connection_closed() 
Nov 13 13:44:21 [31444] vm2 corosync debug   [CMAP  ] cmap.c:cmap_lib_exit_fn:325 exit_fn for conn=0x7f5148613770
Nov 13 13:44:21 [31444] vm2 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_destroyed:390 cs_ipcs_connection_destroyed() 
Nov 13 13:44:21 [31444] vm2 corosync debug   [QB    ] ringbuffer.c:qb_rb_close:299 Free'ing ringbuffer: /dev/shm/qb-cmap-response-31445-31464-32-header
Nov 13 13:44:21 [31444] vm2 corosync debug   [QB    ] ringbuffer.c:qb_rb_close:299 Free'ing ringbuffer: /dev/shm/qb-cmap-event-31445-31464-32-header
Nov 13 13:44:21 [31444] vm2 corosync debug   [QB    ] ringbuffer.c:qb_rb_close:299 Free'ing ringbuffer: /dev/shm/qb-cmap-request-31445-31464-32-header
Nov 13 13:44:21 [31444] vm2 corosync debug   [QB    ] ipc_setup.c:handle_new_connection:484 IPC credentials authenticated (31445-31464-32)
Nov 13 13:44:21 [31444] vm2 corosync debug   [QB    ] ipc_shm.c:qb_ipcs_shm_connect:295 connecting to client [31464]
Nov 13 13:44:21 [31444] vm2 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:21 [31444] vm2 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:21 [31444] vm2 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:21 [31464] vm2       crmd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:21 [31464] vm2       crmd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:21 [31464] vm2       crmd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:21 [31444] vm2 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_created:272 connection created
Nov 13 13:44:21 [31444] vm2 corosync debug   [QUORUM] vsf_quorum.c:quorum_lib_init_fn:316 lib_init_fn: conn=0x7f51486143c0
Nov 13 13:44:21 [31444] vm2 corosync debug   [QUORUM] vsf_quorum.c:message_handler_req_lib_quorum_gettype:471 got quorum_type request on 0x7f51486143c0
Nov 13 13:44:21 [31444] vm2 corosync debug   [QUORUM] vsf_quorum.c:message_handler_req_lib_quorum_getquorate:395 got quorate request on 0x7f51486143c0
Nov 13 13:44:21 [31464] vm2       crmd: (  corosync.c:273   )  notice: cluster_connect_quorum: 	Quorum acquired
Nov 13 13:44:21 [31444] vm2 corosync debug   [QUORUM] vsf_quorum.c:message_handler_req_lib_quorum_trackstart:412 got trackstart request on 0x7f51486143c0
Nov 13 13:44:21 [31444] vm2 corosync debug   [QUORUM] vsf_quorum.c:message_handler_req_lib_quorum_trackstart:420 sending initial status to 0x7f51486143c0
Nov 13 13:44:21 [31444] vm2 corosync debug   [QUORUM] vsf_quorum.c:send_library_notification:359 sending quorum notification to 0x7f51486143c0, length = 60
Nov 13 13:44:21 [31444] vm2 corosync debug   [QB    ] ipc_setup.c:handle_new_connection:484 IPC credentials authenticated (31445-31464-33)
Nov 13 13:44:21 [31444] vm2 corosync debug   [QB    ] ipc_shm.c:qb_ipcs_shm_connect:295 connecting to client [31464]
Nov 13 13:44:21 [31444] vm2 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:21 [31444] vm2 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:21 [31444] vm2 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:21 [31464] vm2       crmd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:21 [31464] vm2       crmd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:21 [31464] vm2       crmd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:21 [31444] vm2 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_created:272 connection created
Nov 13 13:44:21 [31444] vm2 corosync debug   [CMAP  ] cmap.c:cmap_lib_init_fn:306 lib_init_fn: conn=0x7f5148719ca0
Nov 13 13:44:21 [31464] vm2       crmd: (      ipcc.c:378   )   debug: qb_ipcc_disconnect: 	qb_ipcc_disconnect()
Nov 13 13:44:21 [31464] vm2       crmd: (ringbuffer.c:302   )   debug: qb_rb_close: 	Closing ringbuffer: /dev/shm/qb-cmap-request-31445-31464-33-header
Nov 13 13:44:21 [31464] vm2       crmd: (ringbuffer.c:302   )   debug: qb_rb_close: 	Closing ringbuffer: /dev/shm/qb-cmap-response-31445-31464-33-header
Nov 13 13:44:21 [31464] vm2       crmd: (ringbuffer.c:302   )   debug: qb_rb_close: 	Closing ringbuffer: /dev/shm/qb-cmap-event-31445-31464-33-header
Nov 13 13:44:21 [31444] vm2 corosync debug   [QB    ] ipcs.c:qb_ipcs_dispatch_connection_request:757 HUP conn (31445-31464-33)
Nov 13 13:44:21 [31444] vm2 corosync debug   [QB    ] ipcs.c:qb_ipcs_disconnect:605 qb_ipcs_disconnect(31445-31464-33) state:2
Nov 13 13:44:21 [31444] vm2 corosync debug   [QB    ] loop_poll_epoll.c:_del:117 epoll_ctl(del): Bad file descriptor (9)
Nov 13 13:44:21 [31444] vm2 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_closed:417 cs_ipcs_connection_closed() 
Nov 13 13:44:21 [31444] vm2 corosync debug   [CMAP  ] cmap.c:cmap_lib_exit_fn:325 exit_fn for conn=0x7f5148719ca0
Nov 13 13:44:21 [31444] vm2 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_destroyed:390 cs_ipcs_connection_destroyed() 
Nov 13 13:44:21 [31444] vm2 corosync debug   [QB    ] ringbuffer.c:qb_rb_close:299 Free'ing ringbuffer: /dev/shm/qb-cmap-response-31445-31464-33-header
Nov 13 13:44:21 [31444] vm2 corosync debug   [QB    ] ringbuffer.c:qb_rb_close:299 Free'ing ringbuffer: /dev/shm/qb-cmap-event-31445-31464-33-header
Nov 13 13:44:21 [31444] vm2 corosync debug   [QB    ] ringbuffer.c:qb_rb_close:299 Free'ing ringbuffer: /dev/shm/qb-cmap-request-31445-31464-33-header
Nov 13 13:44:21 [31444] vm2 corosync debug   [QB    ] ipc_setup.c:handle_new_connection:484 IPC credentials authenticated (31445-31464-33)
Nov 13 13:44:21 [31444] vm2 corosync debug   [QB    ] ipc_shm.c:qb_ipcs_shm_connect:295 connecting to client [31464]
Nov 13 13:44:21 [31444] vm2 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:21 [31444] vm2 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:21 [31444] vm2 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:21 [31464] vm2       crmd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:21 [31464] vm2       crmd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:21 [31464] vm2       crmd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:21 [31444] vm2 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_created:272 connection created
Nov 13 13:44:21 [31444] vm2 corosync debug   [CMAP  ] cmap.c:cmap_lib_init_fn:306 lib_init_fn: conn=0x7f5148719ca0
Nov 13 13:44:21 [31464] vm2       crmd: (      ipcc.c:378   )   debug: qb_ipcc_disconnect: 	qb_ipcc_disconnect()
Nov 13 13:44:21 [31464] vm2       crmd: (ringbuffer.c:302   )   debug: qb_rb_close: 	Closing ringbuffer: /dev/shm/qb-cmap-request-31445-31464-33-header
Nov 13 13:44:21 [31464] vm2       crmd: (ringbuffer.c:302   )   debug: qb_rb_close: 	Closing ringbuffer: /dev/shm/qb-cmap-response-31445-31464-33-header
Nov 13 13:44:21 [31464] vm2       crmd: (ringbuffer.c:302   )   debug: qb_rb_close: 	Closing ringbuffer: /dev/shm/qb-cmap-event-31445-31464-33-header
Nov 13 13:44:21 [31464] vm2       crmd: (   control.c:146   )    info: do_ha_control: 	Connected to the cluster
Nov 13 13:44:21 [31464] vm2       crmd: (       lrm.c:299   )   debug: do_lrm_control: 	Connecting to the LRM
Nov 13 13:44:21 [31464] vm2       crmd: (lrmd_client.:938   )    info: lrmd_ipc_connect: 	Connecting to lrmd
Nov 13 13:44:21 [31459] vm2        cib: ( callbacks.c:761   )    info: cib_process_request: 	Completed cib_modify operation for section nodes: OK (rc=0, origin=local/crmd/3, version=0.0.0)
Nov 13 13:44:21 [31462] vm2      attrd: (      main.c:149   )   debug: attrd_cib_connect: 	CIB signon attempt 2
Nov 13 13:44:21 [31459] vm2        cib: (       ipc.c:327   )    info: crm_client_new: 	Connecting 0xa1abe0 for uid=496 gid=492 pid=31462 id=d0c3e73b-a852-467e-b374-c879c3a17325
Nov 13 13:44:21 [31459] vm2        cib: ( ipc_setup.c:484   )   debug: handle_new_connection: 	IPC credentials authenticated (31459-31462-11)
Nov 13 13:44:21 [31459] vm2        cib: (   ipc_shm.c:295   )   debug: qb_ipcs_shm_connect: 	connecting to client [31462]
Nov 13 13:44:21 [31459] vm2        cib: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:524301; real_size:528384; rb->word_size:132096
Nov 13 13:44:21 [31461] vm2       lrmd: (      main.c:89    )   trace: lrmd_ipc_accept: 	Connection 0x9a1c10
Nov 13 13:44:21 [31461] vm2       lrmd: (       ipc.c:327   )    info: crm_client_new: 	Connecting 0x9a1c10 for uid=496 gid=492 pid=31464 id=a8bb49c2-413e-47b0-9c1f-7223841526a5
Nov 13 13:44:21 [31461] vm2       lrmd: ( ipc_setup.c:484   )   debug: handle_new_connection: 	IPC credentials authenticated (31461-31464-6)
Nov 13 13:44:21 [31461] vm2       lrmd: (   ipc_shm.c:295   )   debug: qb_ipcs_shm_connect: 	connecting to client [31464]
Nov 13 13:44:21 [31461] vm2       lrmd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:131085; real_size:135168; rb->word_size:33792
Nov 13 13:44:21 [31461] vm2       lrmd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:131085; real_size:135168; rb->word_size:33792
Nov 13 13:44:21 [31461] vm2       lrmd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:131085; real_size:135168; rb->word_size:33792
Nov 13 13:44:21 [31444] vm2 corosync debug   [QB    ] ipcs.c:qb_ipcs_dispatch_connection_request:757 HUP conn (31445-31464-33)
Nov 13 13:44:21 [31464] vm2       crmd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:131085; real_size:135168; rb->word_size:33792
Nov 13 13:44:21 [31464] vm2       crmd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:131085; real_size:135168; rb->word_size:33792
Nov 13 13:44:21 [31464] vm2       crmd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:131085; real_size:135168; rb->word_size:33792
Nov 13 13:44:21 [31444] vm2 corosync debug   [QB    ] ipcs.c:qb_ipcs_disconnect:605 qb_ipcs_disconnect(31445-31464-33) state:2
Nov 13 13:44:21 [31444] vm2 corosync debug   [QB    ] loop_poll_epoll.c:_del:117 epoll_ctl(del): Bad file descriptor (9)
Nov 13 13:44:21 [31444] vm2 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_closed:417 cs_ipcs_connection_closed() 
Nov 13 13:44:21 [31444] vm2 corosync debug   [CMAP  ] cmap.c:cmap_lib_exit_fn:325 exit_fn for conn=0x7f5148719ca0
Nov 13 13:44:21 [31444] vm2 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_destroyed:390 cs_ipcs_connection_destroyed() 
Nov 13 13:44:21 [31444] vm2 corosync debug   [QB    ] ringbuffer.c:qb_rb_close:299 Free'ing ringbuffer: /dev/shm/qb-cmap-response-31445-31464-33-header
Nov 13 13:44:21 [31444] vm2 corosync debug   [QB    ] ringbuffer.c:qb_rb_close:299 Free'ing ringbuffer: /dev/shm/qb-cmap-event-31445-31464-33-header
Nov 13 13:44:21 [31461] vm2       lrmd: (      main.c:99    )   trace: lrmd_ipc_created: 	Connection 0x9a1c10
Nov 13 13:44:21 [31461] vm2       lrmd: (      main.c:179   )   trace: lrmd_server_send_reply: 	sending reply to client (a8bb49c2-413e-47b0-9c1f-7223841526a5) with msg id 6
Nov 13 13:44:21 [31464] vm2       crmd: (       lrm.c:321   )    info: do_lrm_control: 	LRM connection established
Nov 13 13:44:21 [31464] vm2       crmd: (   control.c:768   )    info: do_started: 	Delaying start, no membership data (0000000000100000)
Nov 13 13:44:21 [31464] vm2       crmd: (  messages.c:90    )   debug: register_fsa_input_adv: 	Stalling the FSA pending further input: source=do_started cause=C_FSA_INTERNAL data=(nil) queue=0
Nov 13 13:44:21 [31464] vm2       crmd: (       fsa.c:240   )   debug: s_crmd_fsa: 	Exiting the FSA: queue=0, fsa_actions=0x2, stalled=true
Nov 13 13:44:21 [31464] vm2       crmd: (      main.c:142   )   trace: crmd_init: 	Starting crmd's mainloop
Nov 13 13:44:21 [31464] vm2       crmd: (  corosync.c:191   )    info: pcmk_quorum_notification: 	Membership 12: quorum retained (3)
Nov 13 13:44:21 [31464] vm2       crmd: (  corosync.c:210   )   debug: pcmk_quorum_notification: 	Member[0] 3232261517 
Nov 13 13:44:21 [31464] vm2       crmd: (membership.c:399   )    info: crm_get_peer: 	Created entry 985ec8e9-63b0-489b-a16f-50c5e51157f8/0x23e9520 for node (null)/3232261517 (2 total)
Nov 13 13:44:21 [31464] vm2       crmd: (membership.c:432   )    info: crm_get_peer: 	Node 3232261517 has uuid 3232261517
Nov 13 13:44:21 [31464] vm2       crmd: (  corosync.c:214   )    info: pcmk_quorum_notification: 	Obtaining name for new node 3232261517
Nov 13 13:44:21 [31459] vm2        cib: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:524301; real_size:528384; rb->word_size:132096
Nov 13 13:44:21 [31459] vm2        cib: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:524301; real_size:528384; rb->word_size:132096
Nov 13 13:44:21 [31444] vm2 corosync debug   [QB    ] ringbuffer.c:qb_rb_close:299 Free'ing ringbuffer: /dev/shm/qb-cmap-request-31445-31464-33-header
Nov 13 13:44:21 [31444] vm2 corosync debug   [QB    ] ipc_setup.c:handle_new_connection:484 IPC credentials authenticated (31445-31464-33)
Nov 13 13:44:21 [31461] vm2       lrmd: (      lrmd.c:1313  )   debug: process_lrmd_message: 	Processed register operation from a8bb49c2-413e-47b0-9c1f-7223841526a5: rc=0, reply=0, notify=0, exit=4201920
Nov 13 13:44:21 [31444] vm2 corosync debug   [QB    ] ipc_shm.c:qb_ipcs_shm_connect:295 connecting to client [31464]
Nov 13 13:44:21 [31444] vm2 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:21 [31462] vm2      attrd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:524301; real_size:528384; rb->word_size:132096
Nov 13 13:44:21 [31462] vm2      attrd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:524301; real_size:528384; rb->word_size:132096
Nov 13 13:44:21 [31462] vm2      attrd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:524301; real_size:528384; rb->word_size:132096
Nov 13 13:44:21 [31459] vm2        cib: (       ipc.c:327   )    info: crm_client_new: 	Connecting 0xc4d380 for uid=0 gid=0 pid=31460 id=33d0d3e6-9fa9-402f-8c59-704260f2b8eb
Nov 13 13:44:21 [31459] vm2        cib: ( ipc_setup.c:484   )   debug: handle_new_connection: 	IPC credentials authenticated (31459-31460-12)
Nov 13 13:44:21 [31459] vm2        cib: (   ipc_shm.c:295   )   debug: qb_ipcs_shm_connect: 	connecting to client [31460]
Nov 13 13:44:21 [31459] vm2        cib: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:524301; real_size:528384; rb->word_size:132096
Nov 13 13:44:21 [31459] vm2        cib: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:524301; real_size:528384; rb->word_size:132096
Nov 13 13:44:21 [31444] vm2 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:21 [31459] vm2        cib: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:524301; real_size:528384; rb->word_size:132096
Nov 13 13:44:21 [31460] vm2 stonith-ng: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:524301; real_size:528384; rb->word_size:132096
Nov 13 13:44:21 [31460] vm2 stonith-ng: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:524301; real_size:528384; rb->word_size:132096
Nov 13 13:44:21 [31460] vm2 stonith-ng: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:524301; real_size:528384; rb->word_size:132096
Nov 13 13:44:21 [31444] vm2 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:21 [31459] vm2        cib: ( callbacks.c:761   )    info: cib_process_request: 	Completed cib_query operation for section crm_config: OK (rc=0, origin=local/crmd/4, version=0.0.0)
Nov 13 13:44:21 [31462] vm2      attrd: (cib_native.c:268   )   debug: cib_native_signon_raw: 	Connection to CIB successful
Nov 13 13:44:21 [31462] vm2      attrd: (      main.c:159   )    info: attrd_cib_connect: 	Connected to the CIB after 2 attempts
Nov 13 13:44:21 [31459] vm2        cib: ( callbacks.c:189   )   debug: cib_common_callback_worker: 	Setting cib_refresh_notify callbacks for attrd (d0c3e73b-a852-467e-b374-c879c3a17325): on
Nov 13 13:44:21 [31462] vm2      attrd: (      main.c:335   )    info: main: 	CIB connection active
Nov 13 13:44:21 [31462] vm2      attrd: (       cpg.c:378   )    info: pcmk_cpg_membership: 	Joined[0.0] attrd.3232261518 
Nov 13 13:44:21 [31462] vm2      attrd: (membership.c:399   )    info: crm_get_peer: 	Created entry a03a649d-0e44-4422-9c29-a7150e3a3abd/0x876f00 for node (null)/3232261517 (2 total)
Nov 13 13:44:21 [31462] vm2      attrd: (membership.c:432   )    info: crm_get_peer: 	Node 3232261517 has uuid 3232261517
Nov 13 13:44:21 [31462] vm2      attrd: (       cpg.c:384   )    info: pcmk_cpg_membership: 	Member[0.0] attrd.3232261517 
Nov 13 13:44:21 [31462] vm2      attrd: (membership.c:550   )    info: crm_update_peer_proc: 	pcmk_cpg_membership: Node (null)[3232261517] - corosync-cpg is now online
Nov 13 13:44:21 [31462] vm2      attrd: (membership.c:607   )  notice: crm_update_peer_state: 	attrd_peer_change_cb: Node (null)[3232261517] - state is now member (was (null))
Nov 13 13:44:21 [31462] vm2      attrd: (       cpg.c:384   )    info: pcmk_cpg_membership: 	Member[0.1] attrd.3232261518 
Nov 13 13:44:21 [31462] vm2      attrd: (       cpg.c:378   )    info: pcmk_cpg_membership: 	Joined[1.0] attrd.3232261519 
Nov 13 13:44:21 [31462] vm2      attrd: (       cpg.c:384   )    info: pcmk_cpg_membership: 	Member[1.0] attrd.3232261517 
Nov 13 13:44:21 [31462] vm2      attrd: (       cpg.c:384   )    info: pcmk_cpg_membership: 	Member[1.1] attrd.3232261518 
Nov 13 13:44:21 [31462] vm2      attrd: (membership.c:399   )    info: crm_get_peer: 	Created entry 24830da6-ef8e-47a4-ba48-af3015946da6/0x876f70 for node (null)/3232261519 (3 total)
Nov 13 13:44:21 [31462] vm2      attrd: (membership.c:432   )    info: crm_get_peer: 	Node 3232261519 has uuid 3232261519
Nov 13 13:44:21 [31462] vm2      attrd: (       cpg.c:384   )    info: pcmk_cpg_membership: 	Member[1.2] attrd.3232261519 
Nov 13 13:44:21 [31462] vm2      attrd: (membership.c:550   )    info: crm_update_peer_proc: 	pcmk_cpg_membership: Node (null)[3232261519] - corosync-cpg is now online
Nov 13 13:44:21 [31462] vm2      attrd: (membership.c:607   )  notice: crm_update_peer_state: 	attrd_peer_change_cb: Node (null)[3232261519] - state is now member (was (null))
Nov 13 13:44:21 [31460] vm2 stonith-ng: (cib_native.c:268   )   debug: cib_native_signon_raw: 	Connection to CIB successful
Nov 13 13:44:21 [31459] vm2        cib: ( callbacks.c:189   )   debug: cib_common_callback_worker: 	Setting cib_diff_notify callbacks for crmd (33d0d3e6-9fa9-402f-8c59-704260f2b8eb): on
Nov 13 13:44:21 [31460] vm2 stonith-ng: (      main.c:978   )  notice: setup_cib: 	Watching for stonith topology changes
Nov 13 13:44:21 [31460] vm2 stonith-ng: ( ipc_setup.c:377   )    info: qb_ipcs_us_publish: 	server name: stonith-ng
Nov 13 13:44:21 [31460] vm2 stonith-ng: (      main.c:1208  )    info: main: 	Starting stonith-ng mainloop
Nov 13 13:44:21 [31460] vm2 stonith-ng: (       cpg.c:378   )    info: pcmk_cpg_membership: 	Joined[0.0] stonith-ng.3232261518 
Nov 13 13:44:21 [31460] vm2 stonith-ng: (membership.c:399   )    info: crm_get_peer: 	Created entry 2fc2a3ca-4e97-4f23-9299-b0de2ef1d581/0xfed790 for node (null)/3232261517 (2 total)
Nov 13 13:44:21 [31460] vm2 stonith-ng: (membership.c:432   )    info: crm_get_peer: 	Node 3232261517 has uuid 3232261517
Nov 13 13:44:21 [31460] vm2 stonith-ng: (       cpg.c:384   )    info: pcmk_cpg_membership: 	Member[0.0] stonith-ng.3232261517 
Nov 13 13:44:21 [31460] vm2 stonith-ng: (membership.c:550   )    info: crm_update_peer_proc: 	pcmk_cpg_membership: Node (null)[3232261517] - corosync-cpg is now online
Nov 13 13:44:21 [31460] vm2 stonith-ng: (      main.c:1003  )   debug: st_peer_update_callback: 	Broadcasting our uname because of node 3232261517
Nov 13 13:44:21 [31464] vm2       crmd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:21 [31464] vm2       crmd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:21 [31464] vm2       crmd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:21 [31459] vm2        cib: ( callbacks.c:761   )    info: cib_process_request: 	Completed cib_query operation for section 'all': OK (rc=0, origin=local/crmd/2, version=0.0.0)
Nov 13 13:44:21 [31444] vm2 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_created:272 connection created
Nov 13 13:44:21 [31444] vm2 corosync debug   [CMAP  ] cmap.c:cmap_lib_init_fn:306 lib_init_fn: conn=0x7f5148719ca0
Nov 13 13:44:21 [31464] vm2       crmd: (      ipcc.c:378   )   debug: qb_ipcc_disconnect: 	qb_ipcc_disconnect()
Nov 13 13:44:21 [31464] vm2       crmd: (ringbuffer.c:302   )   debug: qb_rb_close: 	Closing ringbuffer: /dev/shm/qb-cmap-request-31445-31464-33-header
Nov 13 13:44:21 [31464] vm2       crmd: (ringbuffer.c:302   )   debug: qb_rb_close: 	Closing ringbuffer: /dev/shm/qb-cmap-response-31445-31464-33-header
Nov 13 13:44:21 [31464] vm2       crmd: (ringbuffer.c:302   )   debug: qb_rb_close: 	Closing ringbuffer: /dev/shm/qb-cmap-event-31445-31464-33-header
Nov 13 13:44:21 [31464] vm2       crmd: (  corosync.c:134   )  notice: corosync_node_name: 	Unable to get node name for nodeid 3232261517
Nov 13 13:44:21 [31464] vm2       crmd: (membership.c:607   )  notice: crm_update_peer_state: 	pcmk_quorum_notification: Node (null)[3232261517] - state is now member (was (null))
Nov 13 13:44:21 [31464] vm2       crmd: (  corosync.c:210   )   debug: pcmk_quorum_notification: 	Member[1] 3232261518 
Nov 13 13:44:21 [31464] vm2       crmd: (membership.c:607   )  notice: crm_update_peer_state: 	pcmk_quorum_notification: Node vm2[3232261518] - state is now member (was (null))
Nov 13 13:44:21 [31464] vm2       crmd: ( callbacks.c:124   )    info: peer_update_callback: 	vm2 is now member (was (null))
Nov 13 13:44:21 [31464] vm2       crmd: (  corosync.c:210   )   debug: pcmk_quorum_notification: 	Member[2] 3232261519 
Nov 13 13:44:21 [31464] vm2       crmd: (membership.c:399   )    info: crm_get_peer: 	Created entry 356c3337-e8fa-4139-8e23-ed1f23d8a87e/0x23ebdb0 for node (null)/3232261519 (3 total)
Nov 13 13:44:21 [31464] vm2       crmd: (membership.c:432   )    info: crm_get_peer: 	Node 3232261519 has uuid 3232261519
Nov 13 13:44:21 [31464] vm2       crmd: (  corosync.c:214   )    info: pcmk_quorum_notification: 	Obtaining name for new node 3232261519
Nov 13 13:44:21 [31444] vm2 corosync debug   [QB    ] ipc_setup.c:handle_new_connection:484 IPC credentials authenticated (31445-31460-34)
Nov 13 13:44:21 [31444] vm2 corosync debug   [QB    ] ipc_shm.c:qb_ipcs_shm_connect:295 connecting to client [31460]
Nov 13 13:44:21 [31444] vm2 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:21 [31444] vm2 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:22 [31444] vm2 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:22 [31460] vm2 stonith-ng: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:22 [31460] vm2 stonith-ng: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:22 [31460] vm2 stonith-ng: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:22 [31444] vm2 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_created:272 connection created
Nov 13 13:44:22 [31444] vm2 corosync debug   [CMAP  ] cmap.c:cmap_lib_init_fn:306 lib_init_fn: conn=0x7f5148715700
Nov 13 13:44:22 [31444] vm2 corosync debug   [QB    ] ipcs.c:qb_ipcs_dispatch_connection_request:757 HUP conn (31445-31464-33)
Nov 13 13:44:22 [31444] vm2 corosync debug   [QB    ] ipcs.c:qb_ipcs_disconnect:605 qb_ipcs_disconnect(31445-31464-33) state:2
Nov 13 13:44:22 [31444] vm2 corosync debug   [QB    ] loop_poll_epoll.c:_del:117 epoll_ctl(del): Bad file descriptor (9)
Nov 13 13:44:22 [31444] vm2 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_closed:417 cs_ipcs_connection_closed() 
Nov 13 13:44:22 [31444] vm2 corosync debug   [CMAP  ] cmap.c:cmap_lib_exit_fn:325 exit_fn for conn=0x7f5148719ca0
Nov 13 13:44:22 [31444] vm2 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_destroyed:390 cs_ipcs_connection_destroyed() 
Nov 13 13:44:22 [31444] vm2 corosync debug   [QB    ] ringbuffer.c:qb_rb_close:299 Free'ing ringbuffer: /dev/shm/qb-cmap-response-31445-31464-33-header
Nov 13 13:44:22 [31444] vm2 corosync debug   [QB    ] ringbuffer.c:qb_rb_close:299 Free'ing ringbuffer: /dev/shm/qb-cmap-event-31445-31464-33-header
Nov 13 13:44:22 [31444] vm2 corosync debug   [QB    ] ringbuffer.c:qb_rb_close:299 Free'ing ringbuffer: /dev/shm/qb-cmap-request-31445-31464-33-header
Nov 13 13:44:22 [31460] vm2 stonith-ng: (      ipcc.c:378   )   debug: qb_ipcc_disconnect: 	qb_ipcc_disconnect()
Nov 13 13:44:22 [31460] vm2 stonith-ng: (ringbuffer.c:302   )   debug: qb_rb_close: 	Closing ringbuffer: /dev/shm/qb-cmap-request-31445-31460-34-header
Nov 13 13:44:22 [31460] vm2 stonith-ng: (ringbuffer.c:302   )   debug: qb_rb_close: 	Closing ringbuffer: /dev/shm/qb-cmap-response-31445-31460-34-header
Nov 13 13:44:22 [31460] vm2 stonith-ng: (ringbuffer.c:302   )   debug: qb_rb_close: 	Closing ringbuffer: /dev/shm/qb-cmap-event-31445-31460-34-header
Nov 13 13:44:22 [31460] vm2 stonith-ng: (  corosync.c:134   )  notice: corosync_node_name: 	Unable to get node name for nodeid 3232261518
Nov 13 13:44:22 [31460] vm2 stonith-ng: (   cluster.c:338   )  notice: get_node_name: 	Defaulting to uname -n for the local corosync node name
Nov 13 13:44:22 [31460] vm2 stonith-ng: (       cpg.c:384   )    info: pcmk_cpg_membership: 	Member[0.1] stonith-ng.3232261518 
Nov 13 13:44:22 [31460] vm2 stonith-ng: (      main.c:878   )    info: init_cib_cache_cb: 	Updating device list from the cib: init
Nov 13 13:44:22 [31460] vm2 stonith-ng: (      main.c:568   )   trace: fencing_topology_init: 	Pushing in stonith topology
Nov 13 13:44:22 [31460] vm2 stonith-ng: (    unpack.c:98    )   debug: unpack_config: 	STONITH timeout: 60000
Nov 13 13:44:22 [31460] vm2 stonith-ng: (    unpack.c:102   )   debug: unpack_config: 	STONITH of failed nodes is enabled
Nov 13 13:44:22 [31460] vm2 stonith-ng: (    unpack.c:109   )   debug: unpack_config: 	Stop all active resources: false
Nov 13 13:44:22 [31460] vm2 stonith-ng: (    unpack.c:113   )   debug: unpack_config: 	Cluster is symmetric - resources can run anywhere by default
Nov 13 13:44:22 [31460] vm2 stonith-ng: (    unpack.c:118   )   debug: unpack_config: 	Default stickiness: 0
Nov 13 13:44:22 [31460] vm2 stonith-ng: (    unpack.c:155   )   debug: unpack_config: 	On loss of CCM Quorum: Stop ALL resources
Nov 13 13:44:22 [31460] vm2 stonith-ng: (    unpack.c:201   )   debug: unpack_config: 	Node scores: 'red' = -INFINITY, 'yellow' = 0, 'green' = 0
Nov 13 13:44:22 [31460] vm2 stonith-ng: (    unpack.c:486   )    info: unpack_nodes: 	Creating a fake local node
Nov 13 13:44:22 [31460] vm2 stonith-ng: (    unpack.c:509   )   debug: unpack_domains: 	Unpacking domains
Nov 13 13:44:22 [31444] vm2 corosync debug   [QB    ] ipcs.c:qb_ipcs_dispatch_connection_request:757 HUP conn (31445-31460-34)
Nov 13 13:44:22 [31444] vm2 corosync debug   [QB    ] ipcs.c:qb_ipcs_disconnect:605 qb_ipcs_disconnect(31445-31460-34) state:2
Nov 13 13:44:22 [31444] vm2 corosync debug   [QB    ] loop_poll_epoll.c:_del:117 epoll_ctl(del): Bad file descriptor (9)
Nov 13 13:44:22 [31444] vm2 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_closed:417 cs_ipcs_connection_closed() 
Nov 13 13:44:22 [31444] vm2 corosync debug   [CMAP  ] cmap.c:cmap_lib_exit_fn:325 exit_fn for conn=0x7f5148715700
Nov 13 13:44:22 [31444] vm2 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_destroyed:390 cs_ipcs_connection_destroyed() 
Nov 13 13:44:22 [31444] vm2 corosync debug   [QB    ] ringbuffer.c:qb_rb_close:299 Free'ing ringbuffer: /dev/shm/qb-cmap-response-31445-31460-34-header
Nov 13 13:44:22 [31444] vm2 corosync debug   [QB    ] ringbuffer.c:qb_rb_close:299 Free'ing ringbuffer: /dev/shm/qb-cmap-event-31445-31460-34-header
Nov 13 13:44:22 [31460] vm2 stonith-ng: (       cpg.c:378   )    info: pcmk_cpg_membership: 	Joined[1.0] stonith-ng.3232261519 
Nov 13 13:44:22 [31460] vm2 stonith-ng: (       cpg.c:384   )    info: pcmk_cpg_membership: 	Member[1.0] stonith-ng.3232261517 
Nov 13 13:44:22 [31460] vm2 stonith-ng: (       cpg.c:384   )    info: pcmk_cpg_membership: 	Member[1.1] stonith-ng.3232261518 
Nov 13 13:44:22 [31460] vm2 stonith-ng: (membership.c:399   )    info: crm_get_peer: 	Created entry d625e56e-67d3-4188-b48e-b500c95d4cbb/0xff2770 for node (null)/3232261519 (3 total)
Nov 13 13:44:22 [31460] vm2 stonith-ng: (membership.c:432   )    info: crm_get_peer: 	Node 3232261519 has uuid 3232261519
Nov 13 13:44:22 [31460] vm2 stonith-ng: (       cpg.c:384   )    info: pcmk_cpg_membership: 	Member[1.2] stonith-ng.3232261519 
Nov 13 13:44:22 [31460] vm2 stonith-ng: (membership.c:550   )    info: crm_update_peer_proc: 	pcmk_cpg_membership: Node (null)[3232261519] - corosync-cpg is now online
Nov 13 13:44:22 [31460] vm2 stonith-ng: (      main.c:1003  )   debug: st_peer_update_callback: 	Broadcasting our uname because of node 3232261519
Nov 13 13:44:22 [31444] vm2 corosync debug   [QB    ] ringbuffer.c:qb_rb_close:299 Free'ing ringbuffer: /dev/shm/qb-cmap-request-31445-31460-34-header
Nov 13 13:44:22 [31460] vm2 stonith-ng: (membership.c:404   )    info: crm_get_peer: 	Node 3232261517 is now known as vm1
Nov 13 13:44:22 [31460] vm2 stonith-ng: (      main.c:1003  )   debug: st_peer_update_callback: 	Broadcasting our uname because of node 3232261517
Nov 13 13:44:22 [31444] vm2 corosync debug   [QB    ] ipc_setup.c:handle_new_connection:484 IPC credentials authenticated (31445-31464-33)
Nov 13 13:44:22 [31444] vm2 corosync debug   [QB    ] ipc_shm.c:qb_ipcs_shm_connect:295 connecting to client [31464]
Nov 13 13:44:22 [31444] vm2 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:22 [31444] vm2 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:22 [31444] vm2 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:22 [31464] vm2       crmd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:22 [31464] vm2       crmd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:22 [31464] vm2       crmd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:22 [31444] vm2 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_created:272 connection created
Nov 13 13:44:22 [31444] vm2 corosync debug   [CMAP  ] cmap.c:cmap_lib_init_fn:306 lib_init_fn: conn=0x7f5148719ca0
Nov 13 13:44:22 [31464] vm2       crmd: (      ipcc.c:378   )   debug: qb_ipcc_disconnect: 	qb_ipcc_disconnect()
Nov 13 13:44:22 [31464] vm2       crmd: (ringbuffer.c:302   )   debug: qb_rb_close: 	Closing ringbuffer: /dev/shm/qb-cmap-request-31445-31464-33-header
Nov 13 13:44:22 [31464] vm2       crmd: (ringbuffer.c:302   )   debug: qb_rb_close: 	Closing ringbuffer: /dev/shm/qb-cmap-response-31445-31464-33-header
Nov 13 13:44:22 [31464] vm2       crmd: (ringbuffer.c:302   )   debug: qb_rb_close: 	Closing ringbuffer: /dev/shm/qb-cmap-event-31445-31464-33-header
Nov 13 13:44:22 [31464] vm2       crmd: (  corosync.c:134   )  notice: corosync_node_name: 	Unable to get node name for nodeid 3232261519
Nov 13 13:44:22 [31464] vm2       crmd: (membership.c:607   )  notice: crm_update_peer_state: 	pcmk_quorum_notification: Node (null)[3232261519] - state is now member (was (null))
Nov 13 13:44:22 [31464] vm2       crmd: (membership.c:81    )   debug: post_cache_update: 	Updated cache after membership event 12.
Nov 13 13:44:22 [31464] vm2       crmd: (membership.c:95    )   debug: post_cache_update: 	post_cache_update added action A_ELECTION_CHECK to the FSA
Nov 13 13:44:22 [31444] vm2 corosync debug   [QB    ] ipcs.c:qb_ipcs_dispatch_connection_request:757 HUP conn (31445-31464-33)
Nov 13 13:44:22 [31444] vm2 corosync debug   [QB    ] ipcs.c:qb_ipcs_disconnect:605 qb_ipcs_disconnect(31445-31464-33) state:2
Nov 13 13:44:22 [31444] vm2 corosync debug   [QB    ] loop_poll_epoll.c:_del:117 epoll_ctl(del): Bad file descriptor (9)
Nov 13 13:44:22 [31444] vm2 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_closed:417 cs_ipcs_connection_closed() 
Nov 13 13:44:22 [31444] vm2 corosync debug   [CMAP  ] cmap.c:cmap_lib_exit_fn:325 exit_fn for conn=0x7f5148719ca0
Nov 13 13:44:22 [31444] vm2 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_destroyed:390 cs_ipcs_connection_destroyed() 
Nov 13 13:44:22 [31444] vm2 corosync debug   [QB    ] ringbuffer.c:qb_rb_close:299 Free'ing ringbuffer: /dev/shm/qb-cmap-response-31445-31464-33-header
Nov 13 13:44:22 [31444] vm2 corosync debug   [QB    ] ringbuffer.c:qb_rb_close:299 Free'ing ringbuffer: /dev/shm/qb-cmap-event-31445-31464-33-header
Nov 13 13:44:22 [31444] vm2 corosync debug   [QB    ] ringbuffer.c:qb_rb_close:299 Free'ing ringbuffer: /dev/shm/qb-cmap-request-31445-31464-33-header
Nov 13 13:44:22 [31444] vm2 corosync debug   [QB    ] ipc_setup.c:handle_new_connection:484 IPC credentials authenticated (31445-31464-33)
Nov 13 13:44:22 [31444] vm2 corosync debug   [QB    ] ipc_shm.c:qb_ipcs_shm_connect:295 connecting to client [31464]
Nov 13 13:44:22 [31444] vm2 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:22 [31444] vm2 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:22 [31444] vm2 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:22 [31464] vm2       crmd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:22 [31464] vm2       crmd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:22 [31464] vm2       crmd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:22 [31444] vm2 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_created:272 connection created
Nov 13 13:44:22 [31444] vm2 corosync debug   [CMAP  ] cmap.c:cmap_lib_init_fn:306 lib_init_fn: conn=0x7f5148715700
Nov 13 13:44:22 [31464] vm2       crmd: (      ipcc.c:378   )   debug: qb_ipcc_disconnect: 	qb_ipcc_disconnect()
Nov 13 13:44:22 [31464] vm2       crmd: (ringbuffer.c:302   )   debug: qb_rb_close: 	Closing ringbuffer: /dev/shm/qb-cmap-request-31445-31464-33-header
Nov 13 13:44:22 [31464] vm2       crmd: (ringbuffer.c:302   )   debug: qb_rb_close: 	Closing ringbuffer: /dev/shm/qb-cmap-response-31445-31464-33-header
Nov 13 13:44:22 [31464] vm2       crmd: (ringbuffer.c:302   )   debug: qb_rb_close: 	Closing ringbuffer: /dev/shm/qb-cmap-event-31445-31464-33-header
Nov 13 13:44:22 [31464] vm2       crmd: (  corosync.c:134   )  notice: corosync_node_name: 	Unable to get node name for nodeid 3232261518
Nov 13 13:44:22 [31464] vm2       crmd: (   cluster.c:338   )  notice: get_node_name: 	Defaulting to uname -n for the local corosync node name
Nov 13 13:44:22 [31464] vm2       crmd: (   control.c:786   )    info: do_started: 	Delaying start, Config not read (0000000000000040)
Nov 13 13:44:22 [31464] vm2       crmd: (  messages.c:90    )   debug: register_fsa_input_adv: 	Stalling the FSA pending further input: source=do_started cause=C_FSA_INTERNAL data=(nil) queue=0
Nov 13 13:44:22 [31464] vm2       crmd: (       fsa.c:240   )   debug: s_crmd_fsa: 	Exiting the FSA: queue=0, fsa_actions=0x200000002, stalled=true
Nov 13 13:44:22 [31464] vm2       crmd: (   control.c:915   )   debug: config_query_callback: 	Call 4 : Parsing CIB options
Nov 13 13:44:22 [31464] vm2       crmd: (   control.c:939   )   debug: config_query_callback: 	Shutdown escalation occurs after: 1200000ms
Nov 13 13:44:22 [31464] vm2       crmd: (   control.c:946   )   debug: config_query_callback: 	Checking for expired actions every 900000ms
Nov 13 13:44:22 [31464] vm2       crmd: (   control.c:812   )   debug: do_started: 	Init server comms
Nov 13 13:44:22 [31464] vm2       crmd: ( ipc_setup.c:377   )    info: qb_ipcs_us_publish: 	server name: crmd
Nov 13 13:44:22 [31464] vm2       crmd: (   control.c:827   )  notice: do_started: 	The local CRM is operational
Nov 13 13:44:22 [31464] vm2       crmd: (  election.c:91    )   debug: do_election_check: 	Ignore election check: we not in an election
Nov 13 13:44:22 [31464] vm2       crmd: (       fsa.c:193   )   debug: s_crmd_fsa: 	Processing I_PENDING: [ state=S_STARTING cause=C_FSA_INTERNAL origin=do_started ]
Nov 13 13:44:22 [31464] vm2       crmd: (      misc.c:47    )    info: do_log: 	FSA: Input I_PENDING from do_started() received in state S_STARTING
Nov 13 13:44:22 [31444] vm2 corosync debug   [QB    ] ipcs.c:qb_ipcs_dispatch_connection_request:757 HUP conn (31445-31464-33)
Nov 13 13:44:22 [31464] vm2       crmd: (       fsa.c:502   )  notice: do_state_transition: 	State transition S_STARTING -> S_PENDING [ input=I_PENDING cause=C_FSA_INTERNAL origin=do_started ]
Nov 13 13:44:22 [31459] vm2        cib: ( callbacks.c:761   )    info: cib_process_request: 	Completed cib_slave operation for section 'all': OK (rc=0, origin=local/crmd/5, version=0.0.0)
Nov 13 13:44:22 [31444] vm2 corosync debug   [QB    ] ipcs.c:qb_ipcs_disconnect:605 qb_ipcs_disconnect(31445-31464-33) state:2
Nov 13 13:44:22 [31444] vm2 corosync debug   [QB    ] loop_poll_epoll.c:_del:117 epoll_ctl(del): Bad file descriptor (9)
Nov 13 13:44:22 [31444] vm2 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_closed:417 cs_ipcs_connection_closed() 
Nov 13 13:44:22 [31444] vm2 corosync debug   [CMAP  ] cmap.c:cmap_lib_exit_fn:325 exit_fn for conn=0x7f5148715700
Nov 13 13:44:22 [31444] vm2 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_destroyed:390 cs_ipcs_connection_destroyed() 
Nov 13 13:44:22 [31444] vm2 corosync debug   [QB    ] ringbuffer.c:qb_rb_close:299 Free'ing ringbuffer: /dev/shm/qb-cmap-response-31445-31464-33-header
Nov 13 13:44:22 [31444] vm2 corosync debug   [QB    ] ringbuffer.c:qb_rb_close:299 Free'ing ringbuffer: /dev/shm/qb-cmap-event-31445-31464-33-header
Nov 13 13:44:22 [31444] vm2 corosync debug   [QB    ] ringbuffer.c:qb_rb_close:299 Free'ing ringbuffer: /dev/shm/qb-cmap-request-31445-31464-33-header
Nov 13 13:44:22 [31444] vm2 corosync debug   [CPG   ] cpg.c:message_handler_req_exec_cpg_procjoin:1260 got procjoin message from cluster node -1062705777 (r(0) ip(192.168.101.143) r(1) ip(192.168.102.143) ) for pid 469
Nov 13 13:44:22 [31460] vm2 stonith-ng: (membership.c:404   )    info: crm_get_peer: 	Node 3232261519 is now known as vm3
Nov 13 13:44:22 [31460] vm2 stonith-ng: (      main.c:1003  )   debug: st_peer_update_callback: 	Broadcasting our uname because of node 3232261519
Nov 13 13:44:23 [31464] vm2       crmd: (join_client.:46    )   debug: do_cl_join_query: 	Querying for a DC
Nov 13 13:44:23 [31464] vm2       crmd: (     utils.c:192   )   debug: crm_timer_start: 	Started Election Trigger (I_DC_TIMEOUT:20000ms), src=17
Nov 13 13:44:23 [31464] vm2       crmd: (       cpg.c:378   )    info: pcmk_cpg_membership: 	Joined[0.0] crmd.3232261518 
Nov 13 13:44:23 [31464] vm2       crmd: (       cpg.c:384   )    info: pcmk_cpg_membership: 	Member[0.0] crmd.3232261517 
Nov 13 13:44:23 [31464] vm2       crmd: (membership.c:550   )    info: crm_update_peer_proc: 	pcmk_cpg_membership: Node (null)[3232261517] - corosync-cpg is now online
Nov 13 13:44:23 [31464] vm2       crmd: (       cpg.c:384   )    info: pcmk_cpg_membership: 	Member[0.1] crmd.3232261518 
Nov 13 13:44:23 [31464] vm2       crmd: (       cpg.c:378   )    info: pcmk_cpg_membership: 	Joined[1.0] crmd.3232261519 
Nov 13 13:44:23 [31464] vm2       crmd: (       cpg.c:384   )    info: pcmk_cpg_membership: 	Member[1.0] crmd.3232261517 
Nov 13 13:44:23 [31464] vm2       crmd: (       cpg.c:384   )    info: pcmk_cpg_membership: 	Member[1.1] crmd.3232261518 
Nov 13 13:44:23 [31464] vm2       crmd: (       cpg.c:384   )    info: pcmk_cpg_membership: 	Member[1.2] crmd.3232261519 
Nov 13 13:44:23 [31464] vm2       crmd: (membership.c:550   )    info: crm_update_peer_proc: 	pcmk_cpg_membership: Node (null)[3232261519] - corosync-cpg is now online
Nov 13 13:44:23 [31464] vm2       crmd: (membership.c:404   )    info: crm_get_peer: 	Node 3232261519 is now known as vm3
Nov 13 13:44:23 [31464] vm2       crmd: ( callbacks.c:118   )    info: peer_update_callback: 	vm3 is now member
Nov 13 13:44:23 [31464] vm2       crmd: (membership.c:404   )    info: crm_get_peer: 	Node 3232261517 is now known as vm1
Nov 13 13:44:23 [31464] vm2       crmd: ( callbacks.c:118   )    info: peer_update_callback: 	vm1 is now member
Nov 13 13:44:23 [31464] vm2       crmd: (  te_utils.c:248   )   debug: te_connect_stonith: 	Attempting connection to fencing daemon...
Nov 13 13:44:24 [31460] vm2 stonith-ng: (       ipc.c:327   )    info: crm_client_new: 	Connecting 0xff4900 for uid=496 gid=492 pid=31464 id=d20e141c-69f8-4300-ba16-268f3c5cae0c
Nov 13 13:44:24 [31460] vm2 stonith-ng: ( ipc_setup.c:484   )   debug: handle_new_connection: 	IPC credentials authenticated (31460-31464-9)
Nov 13 13:44:24 [31460] vm2 stonith-ng: (   ipc_shm.c:295   )   debug: qb_ipcs_shm_connect: 	connecting to client [31464]
Nov 13 13:44:24 [31460] vm2 stonith-ng: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:131085; real_size:135168; rb->word_size:33792
Nov 13 13:44:24 [31460] vm2 stonith-ng: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:131085; real_size:135168; rb->word_size:33792
Nov 13 13:44:24 [31460] vm2 stonith-ng: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:131085; real_size:135168; rb->word_size:33792
Nov 13 13:44:24 [31464] vm2       crmd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:131085; real_size:135168; rb->word_size:33792
Nov 13 13:44:24 [31464] vm2       crmd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:131085; real_size:135168; rb->word_size:33792
Nov 13 13:44:24 [31464] vm2       crmd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:131085; real_size:135168; rb->word_size:33792
Nov 13 13:44:24 [31460] vm2 stonith-ng: (      main.c:87    )   trace: st_ipc_created: 	Connection created for 0xff4900
Nov 13 13:44:24 [31460] vm2 stonith-ng: (      main.c:121   )   trace: st_ipc_dispatch: 	Flags 512/0 for command 9 from crmd.31464
Nov 13 13:44:24 [31460] vm2 stonith-ng: (      main.c:133   )   trace: st_ipc_dispatch: 	Client[inbound]   <stonith_command t="stonith-ng" st_op="register" st_clientname="crmd.31464" st_clientid="d20e141c-69f8-4300-ba16-268f3c5cae0c" st_clientnode="vm2"/>
Nov 13 13:44:24 [31460] vm2 stonith-ng: (  commands.c:2049  )   debug: stonith_command: 	Processing register 9 from crmd.31464 (               0)
Nov 13 13:44:24 [31464] vm2       crmd: ( st_client.c:1639  )   debug: stonith_api_signon: 	Connection to STONITH successful
Nov 13 13:44:24 [31460] vm2 stonith-ng: (  commands.c:2063  )    info: stonith_command: 	Processed register from crmd.31464: OK (0)
Nov 13 13:44:24 [31460] vm2 stonith-ng: (      main.c:121   )   trace: st_ipc_dispatch: 	Flags 512/0 for command 10 from crmd.31464
Nov 13 13:44:24 [31460] vm2 stonith-ng: (      main.c:133   )   trace: st_ipc_dispatch: 	Client[inbound]   <stonith_set_notification st_op="st_notify" st_notify_activate="st_notify_disconnect" st_clientid="d20e141c-69f8-4300-ba16-268f3c5cae0c" st_clientname="crmd.31464" st_clientnode="vm2"/>
Nov 13 13:44:24 [31460] vm2 stonith-ng: (  commands.c:2049  )   debug: stonith_command: 	Processing st_notify 10 from crmd.31464 (               0)
Nov 13 13:44:24 [31460] vm2 stonith-ng: (  commands.c:1822  )   debug: handle_request: 	Setting st_notify_disconnect callbacks for crmd.31464 (d20e141c-69f8-4300-ba16-268f3c5cae0c): ON
Nov 13 13:44:24 [31460] vm2 stonith-ng: (  commands.c:2063  )    info: stonith_command: 	Processed st_notify from crmd.31464: OK (0)
Nov 13 13:44:24 [31460] vm2 stonith-ng: (      main.c:121   )   trace: st_ipc_dispatch: 	Flags 512/0 for command 11 from crmd.31464
Nov 13 13:44:24 [31460] vm2 stonith-ng: (      main.c:133   )   trace: st_ipc_dispatch: 	Client[inbound]   <stonith_set_notification st_op="st_notify" st_notify_activate="st_notify_fence" st_clientid="d20e141c-69f8-4300-ba16-268f3c5cae0c" st_clientname="crmd.31464" st_clientnode="vm2"/>
Nov 13 13:44:24 [31460] vm2 stonith-ng: (  commands.c:2049  )   debug: stonith_command: 	Processing st_notify 11 from crmd.31464 (               0)
Nov 13 13:44:24 [31460] vm2 stonith-ng: (  commands.c:1822  )   debug: handle_request: 	Setting st_notify_fence callbacks for crmd.31464 (d20e141c-69f8-4300-ba16-268f3c5cae0c): ON
Nov 13 13:44:24 [31460] vm2 stonith-ng: (  commands.c:2063  )    info: stonith_command: 	Processed st_notify from crmd.31464: OK (0)
Nov 13 13:44:42 [31464] vm2       crmd: (  election.c:352   )   debug: election_count_vote: 	Created voted hash
Nov 13 13:44:42 [31464] vm2       crmd: (  election.c:172   )   debug: crm_uptime: 	Current CPU usage is: 0s, 28995us
Nov 13 13:44:42 [31464] vm2       crmd: (  election.c:204   )   debug: crm_compare_age: 	Loose: 0.28995 vs 0.31995 (usec)
Nov 13 13:44:42 [31464] vm2       crmd: (  election.c:511   )    info: election_count_vote: 	Election 1 (owner: 3232261517) lost: vote from vm1 (Uptime)
Nov 13 13:44:42 [31464] vm2       crmd: (  election.c:91    )   debug: do_election_check: 	Ignore election check: we not in an election
Nov 13 13:44:42 [31464] vm2       crmd: (       fsa.c:193   )   debug: s_crmd_fsa: 	Processing I_PENDING: [ state=S_PENDING cause=C_FSA_INTERNAL origin=do_election_count_vote ]
Nov 13 13:44:42 [31464] vm2       crmd: (      misc.c:47    )    info: do_log: 	FSA: Input I_PENDING from do_election_count_vote() received in state S_PENDING
Nov 13 13:44:42 [31464] vm2       crmd: (     utils.c:192   )   debug: crm_timer_start: 	Started Election Trigger (I_DC_TIMEOUT:20000ms), src=19
Nov 13 13:44:43 [31444] vm2 corosync debug   [QB    ] ipc_setup.c:handle_new_connection:484 IPC credentials authenticated (31445-31459-33)
Nov 13 13:44:43 [31444] vm2 corosync debug   [QB    ] ipc_shm.c:qb_ipcs_shm_connect:295 connecting to client [31459]
Nov 13 13:44:43 [31444] vm2 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:43 [31444] vm2 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:43 [31444] vm2 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:43 [31459] vm2        cib: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:43 [31459] vm2        cib: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:43 [31459] vm2        cib: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:43 [31444] vm2 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_created:272 connection created
Nov 13 13:44:43 [31444] vm2 corosync debug   [CMAP  ] cmap.c:cmap_lib_init_fn:306 lib_init_fn: conn=0x7f5148715700
Nov 13 13:44:43 [31459] vm2        cib: (      ipcc.c:378   )   debug: qb_ipcc_disconnect: 	qb_ipcc_disconnect()
Nov 13 13:44:43 [31459] vm2        cib: (ringbuffer.c:302   )   debug: qb_rb_close: 	Closing ringbuffer: /dev/shm/qb-cmap-request-31445-31459-33-header
Nov 13 13:44:43 [31459] vm2        cib: (ringbuffer.c:302   )   debug: qb_rb_close: 	Closing ringbuffer: /dev/shm/qb-cmap-response-31445-31459-33-header
Nov 13 13:44:43 [31459] vm2        cib: (ringbuffer.c:302   )   debug: qb_rb_close: 	Closing ringbuffer: /dev/shm/qb-cmap-event-31445-31459-33-header
Nov 13 13:44:43 [31459] vm2        cib: (  corosync.c:134   )  notice: corosync_node_name: 	Unable to get node name for nodeid 3232261518
Nov 13 13:44:43 [31459] vm2        cib: (   cluster.c:338   )  notice: get_node_name: 	Defaulting to uname -n for the local corosync node name
Nov 13 13:44:43 [31459] vm2        cib: (membership.c:404   )    info: crm_get_peer: 	Node 3232261517 is now known as vm1
Nov 13 13:44:43 [31460] vm2 stonith-ng: ( cib_utils.c:167   )   debug: Config update: 	Diff: --- 0.0.0
Nov 13 13:44:43 [31460] vm2 stonith-ng: ( cib_utils.c:169   )   debug: Config update: 	Diff: +++ 0.0.1 335eff11d8e47ed96126ba44f4ec45e7
Nov 13 13:44:43 [31460] vm2 stonith-ng: (       xml.c:1496  )   debug: Config update: 	-- <cib num_updates="0"/>
Nov 13 13:44:43 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++ <cib epoch="0" num_updates="1" admin_epoch="0" validate-with="pacemaker-1.2" crm_feature_set="3.0.8"/>
Nov 13 13:44:43 [31459] vm2        cib: ( callbacks.c:761   )    info: cib_process_request: 	Completed cib_apply_diff operation for section cib: OK (rc=0, origin=vm1/crmd/7, version=0.0.1)
Nov 13 13:44:43 [31459] vm2        cib: (        io.c:596   )   debug: activateCibXml: 	Triggering CIB write for cib_apply_diff op
Nov 13 13:44:43 [31464] vm2       crmd: (  messages.c:729   )   debug: handle_request: 	Raising I_JOIN_OFFER: join-1
Nov 13 13:44:43 [31464] vm2       crmd: (       fsa.c:193   )   debug: s_crmd_fsa: 	Processing I_JOIN_OFFER: [ state=S_PENDING cause=C_HA_MESSAGE origin=route_message ]
Nov 13 13:44:43 [31464] vm2       crmd: (     utils.c:981   )    info: update_dc: 	Set DC to vm1 (3.0.8)
Nov 13 13:44:43 [31464] vm2       crmd: (join_client.:135   )   debug: do_cl_join_offer_respond: 	do_cl_join_offer_respond added action A_DC_TIMER_STOP to the FSA
Nov 13 13:44:43 [31464] vm2       crmd: (  election.c:204   )   debug: crm_compare_age: 	Loose: 0.28995 vs 0.31995 (usec)
Nov 13 13:44:43 [31464] vm2       crmd: (  election.c:511   )    info: election_count_vote: 	Election 2 (owner: 3232261517) lost: vote from vm1 (Uptime)
Nov 13 13:44:43 [31464] vm2       crmd: (     utils.c:984   )    info: update_dc: 	Unset DC. Was vm1
Nov 13 13:44:43 [31464] vm2       crmd: (  election.c:91    )   debug: do_election_check: 	Ignore election check: we not in an election
Nov 13 13:44:43 [31464] vm2       crmd: (       fsa.c:193   )   debug: s_crmd_fsa: 	Processing I_PENDING: [ state=S_PENDING cause=C_FSA_INTERNAL origin=do_election_count_vote ]
Nov 13 13:44:43 [31464] vm2       crmd: (      misc.c:47    )    info: do_log: 	FSA: Input I_PENDING from do_election_count_vote() received in state S_PENDING
Nov 13 13:44:43 [31464] vm2       crmd: (     utils.c:192   )   debug: crm_timer_start: 	Started Election Trigger (I_DC_TIMEOUT:20000ms), src=22
Nov 13 13:44:43 [31460] vm2 stonith-ng: ( cib_utils.c:174   )   debug: log_cib_diff: 	Config update: Local-only Change: 0.1.1
Nov 13 13:44:43 [31460] vm2 stonith-ng: (       xml.c:1496  )   debug: Config update: 	-- <cib epoch="0" admin_epoch="0" num_updates="1"/>
Nov 13 13:44:43 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+  <cib epoch="1" num_updates="1" admin_epoch="0" validate-with="pacemaker-1.2" crm_feature_set="3.0.8" cib-last-written="Wed Nov 13 13:44:43 2013" update-origin="vm1" update-client="crmd">
Nov 13 13:44:43 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+    <configuration>
Nov 13 13:44:43 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+      <crm_config>
Nov 13 13:44:43 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++       <cluster_property_set id="cib-bootstrap-options">
Nov 13 13:44:43 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++         <nvpair id="cib-bootstrap-options-dc-version" name="dc-version" value="1.1.10-2383f6c"/>
Nov 13 13:44:43 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++       </cluster_property_set>
Nov 13 13:44:43 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+      </crm_config>
Nov 13 13:44:43 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+    </configuration>
Nov 13 13:44:43 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+  </cib>
Nov 13 13:44:43 [31464] vm2       crmd: (  messages.c:729   )   debug: handle_request: 	Raising I_JOIN_OFFER: join-2
Nov 13 13:44:43 [31464] vm2       crmd: (       fsa.c:193   )   debug: s_crmd_fsa: 	Processing I_JOIN_OFFER: [ state=S_PENDING cause=C_HA_MESSAGE origin=route_message ]
Nov 13 13:44:43 [31464] vm2       crmd: (     utils.c:981   )    info: update_dc: 	Set DC to vm1 (3.0.8)
Nov 13 13:44:43 [31464] vm2       crmd: (join_client.:135   )   debug: do_cl_join_offer_respond: 	do_cl_join_offer_respond added action A_DC_TIMER_STOP to the FSA
Nov 13 13:44:43 [31459] vm2        cib: ( callbacks.c:761   )    info: cib_process_request: 	Completed cib_apply_diff operation for section crm_config: OK (rc=0, origin=vm1/crmd/9, version=0.1.1)
Nov 13 13:44:43 [31444] vm2 corosync debug   [QB    ] ipcs.c:qb_ipcs_dispatch_connection_request:757 HUP conn (31445-31459-33)
Nov 13 13:44:43 [31459] vm2        cib: ( callbacks.c:761   )    info: cib_process_request: 	Completed cib_query operation for section crm_config: OK (rc=0, origin=local/crmd/6, version=0.1.1)
Nov 13 13:44:43 [31464] vm2       crmd: (   control.c:915   )   debug: config_query_callback: 	Call 6 : Parsing CIB options
Nov 13 13:44:43 [31464] vm2       crmd: (   control.c:939   )   debug: config_query_callback: 	Shutdown escalation occurs after: 1200000ms
Nov 13 13:44:43 [31464] vm2       crmd: (   control.c:946   )   debug: config_query_callback: 	Checking for expired actions every 900000ms
Nov 13 13:44:43 [31459] vm2        cib: ( callbacks.c:761   )    info: cib_process_request: 	Completed cib_query operation for section 'all': OK (rc=0, origin=local/crmd/7, version=0.1.1)
Nov 13 13:44:43 [31459] vm2        cib: ( callbacks.c:761   )    info: cib_process_request: 	Completed cib_query operation for section 'all': OK (rc=0, origin=local/crmd/8, version=0.1.1)
Nov 13 13:44:43 [31464] vm2       crmd: (join_client.:157   )   debug: join_query_callback: 	Respond to join offer join-2
Nov 13 13:44:43 [31464] vm2       crmd: (join_client.:158   )   debug: join_query_callback: 	Acknowledging vm1 as our DC
Nov 13 13:44:43 [31459] vm2        cib: (        io.c:596   )   debug: activateCibXml: 	Triggering CIB write for cib_apply_diff op
Nov 13 13:44:43 [31460] vm2 stonith-ng: ( cib_utils.c:174   )   debug: log_cib_diff: 	Config update: Local-only Change: 0.2.1
Nov 13 13:44:43 [31460] vm2 stonith-ng: (       xml.c:1496  )   debug: Config update: 	-- <cib epoch="1" admin_epoch="0" num_updates="1"/>
Nov 13 13:44:43 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+  <cib epoch="2" num_updates="1" admin_epoch="0" validate-with="pacemaker-1.2" crm_feature_set="3.0.8" cib-last-written="Wed Nov 13 13:44:43 2013" update-origin="vm1" update-client="crmd">
Nov 13 13:44:43 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+    <configuration>
Nov 13 13:44:43 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+      <crm_config>
Nov 13 13:44:43 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+        <cluster_property_set id="cib-bootstrap-options">
Nov 13 13:44:43 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++         <nvpair id="cib-bootstrap-options-cluster-infrastructure" name="cluster-infrastructure" value="corosync"/>
Nov 13 13:44:43 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+        </cluster_property_set>
Nov 13 13:44:43 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+      </crm_config>
Nov 13 13:44:43 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+    </configuration>
Nov 13 13:44:43 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+  </cib>
Nov 13 13:44:43 [31459] vm2        cib: ( callbacks.c:761   )    info: cib_process_request: 	Completed cib_apply_diff operation for section crm_config: OK (rc=0, origin=vm1/crmd/11, version=0.2.1)
Nov 13 13:44:43 [31459] vm2        cib: ( callbacks.c:761   )    info: cib_process_request: 	Completed cib_query operation for section crm_config: OK (rc=0, origin=local/crmd/9, version=0.2.1)
Nov 13 13:44:43 [31464] vm2       crmd: (   control.c:915   )   debug: config_query_callback: 	Call 9 : Parsing CIB options
Nov 13 13:44:43 [31464] vm2       crmd: (   control.c:939   )   debug: config_query_callback: 	Shutdown escalation occurs after: 1200000ms
Nov 13 13:44:43 [31464] vm2       crmd: (   control.c:946   )   debug: config_query_callback: 	Checking for expired actions every 900000ms
Nov 13 13:44:43 [31444] vm2 corosync debug   [QB    ] ipcs.c:qb_ipcs_disconnect:605 qb_ipcs_disconnect(31445-31459-33) state:2
Nov 13 13:44:43 [31444] vm2 corosync debug   [QB    ] loop_poll_epoll.c:_del:117 epoll_ctl(del): Bad file descriptor (9)
Nov 13 13:44:43 [31444] vm2 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_closed:417 cs_ipcs_connection_closed() 
Nov 13 13:44:43 [31444] vm2 corosync debug   [CMAP  ] cmap.c:cmap_lib_exit_fn:325 exit_fn for conn=0x7f5148715700
Nov 13 13:44:43 [31444] vm2 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_destroyed:390 cs_ipcs_connection_destroyed() 
Nov 13 13:44:43 [31444] vm2 corosync debug   [QB    ] ringbuffer.c:qb_rb_close:299 Free'ing ringbuffer: /dev/shm/qb-cmap-response-31445-31459-33-header
Nov 13 13:44:43 [31444] vm2 corosync debug   [QB    ] ringbuffer.c:qb_rb_close:299 Free'ing ringbuffer: /dev/shm/qb-cmap-event-31445-31459-33-header
Nov 13 13:44:43 [31444] vm2 corosync debug   [QB    ] ringbuffer.c:qb_rb_close:299 Free'ing ringbuffer: /dev/shm/qb-cmap-request-31445-31459-33-header
Nov 13 13:44:43 [31459] vm2        cib: (     utils.c:1216  )   debug: get_last_sequence: 	Series file /var/lib/pacemaker/cib/cib.last does not exist
Nov 13 13:44:43 [31459] vm2        cib: (        io.c:738   )    info: write_cib_contents: 	Archived previous version as /var/lib/pacemaker/cib/cib-0.raw
Nov 13 13:44:43 [31459] vm2        cib: (        io.c:748   )   debug: write_cib_contents: 	Writing CIB to disk
Nov 13 13:44:43 [31459] vm2        cib: (   cib_ops.c:222   )    info: cib_process_replace: 	Digest matched on replace from vm1: 360fde11e7cf93696f974eea17cffd9b
Nov 13 13:44:43 [31459] vm2        cib: (   cib_ops.c:258   )    info: cib_process_replace: 	Replaced 0.2.1 with 0.2.1 from vm1
Nov 13 13:44:43 [31459] vm2        cib: (        io.c:596   )   debug: activateCibXml: 	Triggering CIB write for cib_replace op
Nov 13 13:44:43 [31459] vm2        cib: ( callbacks.c:761   )    info: cib_process_request: 	Completed cib_replace operation for section 'all': OK (rc=0, origin=vm1/crmd/17, version=0.2.1)
Nov 13 13:44:43 [31464] vm2       crmd: (  messages.c:733   )   debug: handle_request: 	Raising I_JOIN_RESULT: join-2
Nov 13 13:44:43 [31464] vm2       crmd: (       fsa.c:193   )   debug: s_crmd_fsa: 	Processing I_JOIN_RESULT: [ state=S_PENDING cause=C_HA_MESSAGE origin=route_message ]
Nov 13 13:44:43 [31464] vm2       crmd: (join_client.:231   )   debug: do_cl_join_finalize_respond: 	Confirming join join-2: join_ack_nack
Nov 13 13:44:43 [31464] vm2       crmd: (join_client.:240   )   debug: do_cl_join_finalize_respond: 	join-2: Join complete.  Sending local LRM status to vm1
Nov 13 13:44:43 [31464] vm2       crmd: (     utils.c:1011  )    info: erase_status_tag: 	Deleting xpath: //node_state[@uname='vm2']/transient_attributes
Nov 13 13:44:43 [31464] vm2       crmd: (     utils.c:1032  )    info: update_attrd_helper: 	Connecting to attrd... 5 retries remaining
Nov 13 13:44:43 [31459] vm2        cib: (        io.c:596   )   debug: activateCibXml: 	Triggering CIB write for cib_apply_diff op
Nov 13 13:44:43 [31460] vm2 stonith-ng: ( cib_utils.c:174   )   debug: log_cib_diff: 	Config update: Local-only Change: 0.3.1
Nov 13 13:44:43 [31460] vm2 stonith-ng: (       xml.c:1496  )   debug: Config update: 	-- <cib epoch="2" admin_epoch="0" num_updates="1"/>
Nov 13 13:44:43 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+  <cib epoch="3" num_updates="1" admin_epoch="0" validate-with="pacemaker-1.2" crm_feature_set="3.0.8" cib-last-written="Wed Nov 13 13:44:43 2013" update-origin="vm1" update-client="crmd">
Nov 13 13:44:43 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+    <configuration>
Nov 13 13:44:43 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+      <nodes>
Nov 13 13:44:43 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++       <node id="3232261519" uname="vm3"/>
Nov 13 13:44:43 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+      </nodes>
Nov 13 13:44:43 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+    </configuration>
Nov 13 13:44:43 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+  </cib>
Nov 13 13:44:43 [31462] vm2      attrd: (      main.c:186   )   trace: attrd_ipc_accept: 	Connection 0x874000
Nov 13 13:44:43 [31462] vm2      attrd: (       ipc.c:327   )    info: crm_client_new: 	Connecting 0x874000 for uid=496 gid=492 pid=31464 id=f389c2a1-b596-496a-9b25-1c3be7932149
Nov 13 13:44:43 [31462] vm2      attrd: ( ipc_setup.c:484   )   debug: handle_new_connection: 	IPC credentials authenticated (31462-31464-9)
Nov 13 13:44:43 [31462] vm2      attrd: (   ipc_shm.c:295   )   debug: qb_ipcs_shm_connect: 	connecting to client [31464]
Nov 13 13:44:43 [31462] vm2      attrd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:131085; real_size:135168; rb->word_size:33792
Nov 13 13:44:43 [31462] vm2      attrd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:131085; real_size:135168; rb->word_size:33792
Nov 13 13:44:43 [31459] vm2        cib: ( callbacks.c:761   )    info: cib_process_request: 	Completed cib_apply_diff operation for section nodes: OK (rc=0, origin=vm1/crmd/18, version=0.3.1)
Nov 13 13:44:43 [31459] vm2        cib: (        io.c:596   )   debug: activateCibXml: 	Triggering CIB write for cib_apply_diff op
Nov 13 13:44:43 [31460] vm2 stonith-ng: ( cib_utils.c:174   )   debug: log_cib_diff: 	Config update: Local-only Change: 0.4.1
Nov 13 13:44:43 [31460] vm2 stonith-ng: (       xml.c:1496  )   debug: Config update: 	-- <cib epoch="3" admin_epoch="0" num_updates="1"/>
Nov 13 13:44:43 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+  <cib epoch="4" num_updates="1" admin_epoch="0" validate-with="pacemaker-1.2" crm_feature_set="3.0.8" cib-last-written="Wed Nov 13 13:44:43 2013" update-origin="vm1" update-client="crmd">
Nov 13 13:44:43 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+    <configuration>
Nov 13 13:44:43 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+      <nodes>
Nov 13 13:44:43 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++       <node id="3232261517" uname="vm1"/>
Nov 13 13:44:43 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+      </nodes>
Nov 13 13:44:43 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+    </configuration>
Nov 13 13:44:43 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+  </cib>
Nov 13 13:44:43 [31459] vm2        cib: (        io.c:773   )    info: write_cib_contents: 	Wrote version 0.2.0 of the CIB to disk (digest: 7c397f6c57041145e23f3494e809aec1)
Nov 13 13:44:43 [31459] vm2        cib: ( callbacks.c:761   )    info: cib_process_request: 	Completed cib_apply_diff operation for section nodes: OK (rc=0, origin=vm1/crmd/19, version=0.4.1)
Nov 13 13:44:43 [31459] vm2        cib: ( callbacks.c:688   )    info: cib_process_request: 	Forwarding cib_delete operation for section //node_state[@uname='vm2']/transient_attributes to master (origin=local/crmd/10)
Nov 13 13:44:43 [31459] vm2        cib: (        io.c:781   )   debug: write_cib_contents: 	Wrote digest 7c397f6c57041145e23f3494e809aec1 to disk
Nov 13 13:44:43 [31459] vm2        cib: (        io.c:259   )    info: retrieveCib: 	Reading cluster configuration from: /var/lib/pacemaker/cib/cib.4OH9Nt (digest: /var/lib/pacemaker/cib/cib.PwoEkK)
Nov 13 13:44:43 [31462] vm2      attrd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:131085; real_size:135168; rb->word_size:33792
Nov 13 13:44:43 [31459] vm2        cib: (        io.c:596   )   debug: activateCibXml: 	Triggering CIB write for cib_apply_diff op
Nov 13 13:44:43 [31459] vm2        cib: (        io.c:786   )   debug: write_cib_contents: 	Activating /var/lib/pacemaker/cib/cib.4OH9Nt
Nov 13 13:44:43 [31460] vm2 stonith-ng: ( cib_utils.c:174   )   debug: log_cib_diff: 	Config update: Local-only Change: 0.5.1
Nov 13 13:44:43 [31460] vm2 stonith-ng: (       xml.c:1496  )   debug: Config update: 	-- <cib epoch="4" admin_epoch="0" num_updates="1"/>
Nov 13 13:44:43 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+  <cib epoch="5" num_updates="1" admin_epoch="0" validate-with="pacemaker-1.2" crm_feature_set="3.0.8" cib-last-written="Wed Nov 13 13:44:43 2013" update-origin="vm1" update-client="crmd">
Nov 13 13:44:43 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+    <configuration>
Nov 13 13:44:43 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+      <nodes>
Nov 13 13:44:43 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++       <node id="3232261518" uname="vm2"/>
Nov 13 13:44:43 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+      </nodes>
Nov 13 13:44:43 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+    </configuration>
Nov 13 13:44:43 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+  </cib>
Nov 13 13:44:43 [31464] vm2       crmd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:131085; real_size:135168; rb->word_size:33792
Nov 13 13:44:43 [31464] vm2       crmd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:131085; real_size:135168; rb->word_size:33792
Nov 13 13:44:43 [31464] vm2       crmd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:131085; real_size:135168; rb->word_size:33792
Nov 13 13:44:43 [31464] vm2       crmd: (     utils.c:2001  )   debug: attrd_update_delegate: 	Sent update: terminate=(null) for vm2
Nov 13 13:44:43 [31464] vm2       crmd: (     utils.c:2001  )   debug: attrd_update_delegate: 	Sent update: shutdown=(null) for vm2
Nov 13 13:44:43 [31464] vm2       crmd: (     utils.c:2001  )   debug: attrd_update_delegate: 	Sent update: (null)=(null) for localhost
Nov 13 13:44:43 [31464] vm2       crmd: (       fsa.c:193   )   debug: s_crmd_fsa: 	Processing I_NOT_DC: [ state=S_PENDING cause=C_HA_MESSAGE origin=do_cl_join_finalize_respond ]
Nov 13 13:44:43 [31464] vm2       crmd: (      misc.c:47    )    info: do_log: 	FSA: Input I_NOT_DC from do_cl_join_finalize_respond() received in state S_PENDING
Nov 13 13:44:43 [31464] vm2       crmd: (       fsa.c:502   )  notice: do_state_transition: 	State transition S_PENDING -> S_NOT_DC [ input=I_NOT_DC cause=C_HA_MESSAGE origin=do_cl_join_finalize_respond ]
Nov 13 13:44:43 [31459] vm2        cib: ( callbacks.c:761   )    info: cib_process_request: 	Completed cib_apply_diff operation for section nodes: OK (rc=0, origin=vm1/crmd/20, version=0.5.1)
Nov 13 13:44:43 [31462] vm2      attrd: (      main.c:201   )   trace: attrd_ipc_created: 	Connection 0x874000
Nov 13 13:44:43 [31462] vm2      attrd: (      main.c:221   )   trace: attrd_ipc_dispatch: 	Processing msg from 31464 (0x874000)
Nov 13 13:44:43 [31462] vm2      attrd: (      main.c:222   )   trace: attrd_ipc_dispatch: 	attrd_ipc_dispatch   <attrd_update_delegate t="attrd" src="crmd" task="update" attr_name="terminate" attr_section="status" attr_host="vm2" attr_is_remote="0"/>
Nov 13 13:44:43 [31462] vm2      attrd: (  commands.c:227   )    info: attrd_client_message: 	Starting an election to determine the writer
Nov 13 13:44:43 [31462] vm2      attrd: (  election.c:172   )   debug: crm_uptime: 	Current CPU usage is: 0s, 20996us
Nov 13 13:44:43 [31444] vm2 corosync debug   [QB    ] ipc_setup.c:handle_new_connection:484 IPC credentials authenticated (31445-31462-33)
Nov 13 13:44:43 [31444] vm2 corosync debug   [QB    ] ipc_shm.c:qb_ipcs_shm_connect:295 connecting to client [31462]
Nov 13 13:44:43 [31444] vm2 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:43 [31444] vm2 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:43 [31444] vm2 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:43 [31444] vm2 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_created:272 connection created
Nov 13 13:44:43 [31444] vm2 corosync debug   [CMAP  ] cmap.c:cmap_lib_init_fn:306 lib_init_fn: conn=0x7f5148715700
Nov 13 13:44:43 [31462] vm2      attrd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:43 [31462] vm2      attrd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:43 [31462] vm2      attrd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:43 [31459] vm2        cib: (membership.c:404   )    info: crm_get_peer: 	Node 3232261519 is now known as vm3
Nov 13 13:44:43 [31459] vm2        cib: (        io.c:738   )    info: write_cib_contents: 	Archived previous version as /var/lib/pacemaker/cib/cib-1.raw
Nov 13 13:44:43 [31459] vm2        cib: (        io.c:748   )   debug: write_cib_contents: 	Writing CIB to disk
Nov 13 13:44:43 [31460] vm2 stonith-ng: ( cib_utils.c:167   )   debug: Config update: 	Diff: --- 0.5.1
Nov 13 13:44:43 [31460] vm2 stonith-ng: ( cib_utils.c:169   )   debug: Config update: 	Diff: +++ 0.5.2 96afa5ea158751708b6aaa2afbd9266e
Nov 13 13:44:43 [31460] vm2 stonith-ng: (       xml.c:1496  )   debug: Config update: 	-- <cib num_updates="1"/>
Nov 13 13:44:43 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+  <cib epoch="5" num_updates="2" admin_epoch="0" validate-with="pacemaker-1.2" crm_feature_set="3.0.8" cib-last-written="Wed Nov 13 13:44:43 2013" update-origin="vm1" update-client="crmd">
Nov 13 13:44:43 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+    <status>
Nov 13 13:44:43 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++     <node_state id="3232261519" uname="vm3" in_ccm="true" crmd="online" crm-debug-origin="do_lrm_query_internal" join="member" expected="member">
Nov 13 13:44:43 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++       <lrm id="3232261519">
Nov 13 13:44:43 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++         <lrm_resources/>
Nov 13 13:44:43 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++       </lrm>
Nov 13 13:44:43 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++     </node_state>
Nov 13 13:44:43 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+    </status>
Nov 13 13:44:43 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+  </cib>
Nov 13 13:44:43 [31459] vm2        cib: ( callbacks.c:761   )    info: cib_process_request: 	Completed cib_apply_diff operation for section status: OK (rc=0, origin=vm1/crmd/23, version=0.5.2)
Nov 13 13:44:43 [31462] vm2      attrd: (      ipcc.c:378   )   debug: qb_ipcc_disconnect: 	qb_ipcc_disconnect()
Nov 13 13:44:43 [31462] vm2      attrd: (ringbuffer.c:302   )   debug: qb_rb_close: 	Closing ringbuffer: /dev/shm/qb-cmap-request-31445-31462-33-header
Nov 13 13:44:43 [31462] vm2      attrd: (ringbuffer.c:302   )   debug: qb_rb_close: 	Closing ringbuffer: /dev/shm/qb-cmap-response-31445-31462-33-header
Nov 13 13:44:43 [31462] vm2      attrd: (ringbuffer.c:302   )   debug: qb_rb_close: 	Closing ringbuffer: /dev/shm/qb-cmap-event-31445-31462-33-header
Nov 13 13:44:43 [31462] vm2      attrd: (  corosync.c:134   )  notice: corosync_node_name: 	Unable to get node name for nodeid 3232261518
Nov 13 13:44:43 [31462] vm2      attrd: (   cluster.c:338   )  notice: get_node_name: 	Defaulting to uname -n for the local corosync node name
Nov 13 13:44:43 [31462] vm2      attrd: (  election.c:242   )   debug: election_vote: 	Started election 1
Nov 13 13:44:43 [31462] vm2      attrd: (  commands.c:231   )    info: attrd_client_message: 	Broadcasting terminate[vm2] = (null)
Nov 13 13:44:43 [31462] vm2      attrd: (      main.c:221   )   trace: attrd_ipc_dispatch: 	Processing msg from 31464 (0x874000)
Nov 13 13:44:43 [31462] vm2      attrd: (      main.c:222   )   trace: attrd_ipc_dispatch: 	attrd_ipc_dispatch   <attrd_update_delegate t="attrd" src="crmd" task="update" attr_name="shutdown" attr_section="status" attr_host="vm2" attr_is_remote="0"/>
Nov 13 13:44:43 [31462] vm2      attrd: (  commands.c:231   )    info: attrd_client_message: 	Broadcasting shutdown[vm2] = (null)
Nov 13 13:44:43 [31462] vm2      attrd: (      main.c:221   )   trace: attrd_ipc_dispatch: 	Processing msg from 31464 (0x874000)
Nov 13 13:44:43 [31462] vm2      attrd: (      main.c:222   )   trace: attrd_ipc_dispatch: 	attrd_ipc_dispatch   <attrd_update_delegate t="attrd" src="crmd" task="refresh" attr_section="status" attr_is_remote="0"/>
Nov 13 13:44:43 [31462] vm2      attrd: (membership.c:404   )    info: crm_get_peer: 	Node 3232261517 is now known as vm1
Nov 13 13:44:43 [31462] vm2      attrd: (  election.c:352   )   debug: election_count_vote: 	Created voted hash
Nov 13 13:44:43 [31462] vm2      attrd: (  election.c:200   )   debug: crm_compare_age: 	Win: 0.20996 vs 0.16997 (usec)
Nov 13 13:44:43 [31462] vm2      attrd: (  election.c:490   )    info: election_count_vote: 	Election 1 (owner: 3232261517) pass: vote from vm1 (Uptime)
Nov 13 13:44:43 [31462] vm2      attrd: (  election.c:242   )   debug: election_vote: 	Started election 2
Nov 13 13:44:43 [31462] vm2      attrd: (  commands.c:146   )   trace: create_attribute: 	Created attribute terminate with no delay
Nov 13 13:44:43 [31462] vm2      attrd: (  commands.c:382   )   trace: attrd_peer_update: 	Setting terminate[vm1] to (null) from vm1
Nov 13 13:44:43 [31462] vm2      attrd: (  commands.c:459   )   trace: write_or_elect_attribute: 	Election in progress to determine who will write out terminate
Nov 13 13:44:43 [31462] vm2      attrd: (  commands.c:146   )   trace: create_attribute: 	Created attribute shutdown with no delay
Nov 13 13:44:43 [31462] vm2      attrd: (  commands.c:382   )   trace: attrd_peer_update: 	Setting shutdown[vm1] to (null) from vm1
Nov 13 13:44:43 [31462] vm2      attrd: (  commands.c:459   )   trace: write_or_elect_attribute: 	Election in progress to determine who will write out shutdown
Nov 13 13:44:43 [31462] vm2      attrd: (membership.c:404   )    info: crm_get_peer: 	Node 3232261519 is now known as vm3
Nov 13 13:44:43 [31462] vm2      attrd: (  election.c:352   )   debug: election_count_vote: 	Created voted hash
Nov 13 13:44:43 [31462] vm2      attrd: (  election.c:200   )   debug: crm_compare_age: 	Win: 0.20996 vs 0.14997 (usec)
Nov 13 13:44:43 [31462] vm2      attrd: (  election.c:490   )    info: election_count_vote: 	Election 1 (owner: 3232261519) pass: vote from vm3 (Uptime)
Nov 13 13:44:43 [31462] vm2      attrd: (  election.c:242   )   debug: election_vote: 	Started election 3
Nov 13 13:44:43 [31462] vm2      attrd: (  commands.c:382   )   trace: attrd_peer_update: 	Setting terminate[vm3] to (null) from vm3
Nov 13 13:44:43 [31462] vm2      attrd: (  commands.c:459   )   trace: write_or_elect_attribute: 	Election in progress to determine who will write out terminate
Nov 13 13:44:43 [31462] vm2      attrd: (  commands.c:382   )   trace: attrd_peer_update: 	Setting shutdown[vm3] to (null) from vm3
Nov 13 13:44:43 [31462] vm2      attrd: (  commands.c:459   )   trace: write_or_elect_attribute: 	Election in progress to determine who will write out shutdown
Nov 13 13:44:43 [31462] vm2      attrd: (  election.c:352   )   debug: election_count_vote: 	Created voted hash
Nov 13 13:44:43 [31462] vm2      attrd: (  election.c:200   )   debug: crm_compare_age: 	Win: 0.20996 vs 0.16997 (usec)
Nov 13 13:44:43 [31459] vm2        cib: (        io.c:773   )    info: write_cib_contents: 	Wrote version 0.5.0 of the CIB to disk (digest: 630d79f602055b52fd2ea79fdbd1baf8)
Nov 13 13:44:43 [31462] vm2      attrd: (  election.c:490   )    info: election_count_vote: 	Election 2 (owner: 3232261517) pass: vote from vm1 (Uptime)
Nov 13 13:44:43 [31462] vm2      attrd: (  election.c:242   )   debug: election_vote: 	Started election 4
Nov 13 13:44:43 [31460] vm2 stonith-ng: ( cib_utils.c:167   )   debug: Config update: 	Diff: --- 0.5.2
Nov 13 13:44:43 [31460] vm2 stonith-ng: ( cib_utils.c:169   )   debug: Config update: 	Diff: +++ 0.5.3 dd02c1675f04ba6ab7d94c1f96067ad9
Nov 13 13:44:43 [31460] vm2 stonith-ng: (       xml.c:1496  )   debug: Config update: 	-- <cib num_updates="2"/>
Nov 13 13:44:43 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+  <cib epoch="5" num_updates="3" admin_epoch="0" validate-with="pacemaker-1.2" crm_feature_set="3.0.8" cib-last-written="Wed Nov 13 13:44:43 2013" update-origin="vm1" update-client="crmd">
Nov 13 13:44:43 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+    <status>
Nov 13 13:44:43 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++     <node_state id="3232261517" uname="vm1" in_ccm="true" crmd="online" crm-debug-origin="do_lrm_query_internal" join="member" expected="member">
Nov 13 13:44:43 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++       <lrm id="3232261517">
Nov 13 13:44:43 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++         <lrm_resources/>
Nov 13 13:44:43 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++       </lrm>
Nov 13 13:44:43 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++     </node_state>
Nov 13 13:44:43 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+    </status>
Nov 13 13:44:43 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+  </cib>
Nov 13 13:44:43 [31459] vm2        cib: ( callbacks.c:761   )    info: cib_process_request: 	Completed cib_apply_diff operation for section status: OK (rc=0, origin=vm1/crmd/25, version=0.5.3)
Nov 13 13:44:43 [31464] vm2       crmd: (     utils.c:998   )   debug: erase_xpath_callback: 	Deletion of "//node_state[@uname='vm2']/transient_attributes": OK (rc=0)
Nov 13 13:44:43 [31459] vm2        cib: (        io.c:781   )   debug: write_cib_contents: 	Wrote digest 630d79f602055b52fd2ea79fdbd1baf8 to disk
Nov 13 13:44:43 [31459] vm2        cib: (        io.c:259   )    info: retrieveCib: 	Reading cluster configuration from: /var/lib/pacemaker/cib/cib.29KSZA (digest: /var/lib/pacemaker/cib/cib.pyh1KR)
Nov 13 13:44:43 [31460] vm2 stonith-ng: ( cib_utils.c:167   )   debug: Config update: 	Diff: --- 0.5.3
Nov 13 13:44:43 [31460] vm2 stonith-ng: ( cib_utils.c:169   )   debug: Config update: 	Diff: +++ 0.5.4 f4a55aa279b990bb05b0a588767e25f0
Nov 13 13:44:43 [31460] vm2 stonith-ng: (       xml.c:1496  )   debug: Config update: 	-- <cib num_updates="3"/>
Nov 13 13:44:43 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+  <cib epoch="5" num_updates="4" admin_epoch="0" validate-with="pacemaker-1.2" crm_feature_set="3.0.8" cib-last-written="Wed Nov 13 13:44:43 2013" update-origin="vm1" update-client="crmd">
Nov 13 13:44:43 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+    <status>
Nov 13 13:44:43 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++     <node_state id="3232261518" uname="vm2" in_ccm="true" crmd="online" crm-debug-origin="do_lrm_query_internal" join="member" expected="member">
Nov 13 13:44:43 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++       <lrm id="3232261518">
Nov 13 13:44:43 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++         <lrm_resources/>
Nov 13 13:44:43 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++       </lrm>
Nov 13 13:44:43 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++     </node_state>
Nov 13 13:44:43 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+    </status>
Nov 13 13:44:43 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+  </cib>
Nov 13 13:44:43 [31459] vm2        cib: (        io.c:786   )   debug: write_cib_contents: 	Activating /var/lib/pacemaker/cib/cib.29KSZA
Nov 13 13:44:43 [31459] vm2        cib: ( callbacks.c:761   )    info: cib_process_request: 	Completed cib_apply_diff operation for section status: OK (rc=0, origin=vm1/crmd/27, version=0.5.4)
Nov 13 13:44:43 [31444] vm2 corosync debug   [QB    ] ipcs.c:qb_ipcs_dispatch_connection_request:757 HUP conn (31445-31462-33)
Nov 13 13:44:43 [31444] vm2 corosync debug   [QB    ] ipcs.c:qb_ipcs_disconnect:605 qb_ipcs_disconnect(31445-31462-33) state:2
Nov 13 13:44:43 [31444] vm2 corosync debug   [QB    ] loop_poll_epoll.c:_del:117 epoll_ctl(del): Bad file descriptor (9)
Nov 13 13:44:43 [31444] vm2 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_closed:417 cs_ipcs_connection_closed() 
Nov 13 13:44:43 [31444] vm2 corosync debug   [CMAP  ] cmap.c:cmap_lib_exit_fn:325 exit_fn for conn=0x7f5148715700
Nov 13 13:44:43 [31444] vm2 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_destroyed:390 cs_ipcs_connection_destroyed() 
Nov 13 13:44:43 [31444] vm2 corosync debug   [QB    ] ringbuffer.c:qb_rb_close:299 Free'ing ringbuffer: /dev/shm/qb-cmap-response-31445-31462-33-header
Nov 13 13:44:43 [31444] vm2 corosync debug   [QB    ] ringbuffer.c:qb_rb_close:299 Free'ing ringbuffer: /dev/shm/qb-cmap-event-31445-31462-33-header
Nov 13 13:44:43 [31444] vm2 corosync debug   [QB    ] ringbuffer.c:qb_rb_close:299 Free'ing ringbuffer: /dev/shm/qb-cmap-request-31445-31462-33-header
Nov 13 13:44:43 [31459] vm2        cib: ( callbacks.c:761   )    info: cib_process_request: 	Completed cib_apply_diff operation for section cib: OK (rc=0, origin=vm1/crmd/30, version=0.5.5)
Nov 13 13:44:43 [31460] vm2 stonith-ng: ( cib_utils.c:167   )   debug: Config update: 	Diff: --- 0.5.4
Nov 13 13:44:43 [31460] vm2 stonith-ng: ( cib_utils.c:169   )   debug: Config update: 	Diff: +++ 0.5.5 3118925a5f456f332e09aade04800ea0
Nov 13 13:44:43 [31460] vm2 stonith-ng: (       xml.c:1496  )   debug: Config update: 	-- <cib num_updates="4"/>
Nov 13 13:44:43 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++ <cib epoch="5" num_updates="5" admin_epoch="0" validate-with="pacemaker-1.2" crm_feature_set="3.0.8" cib-last-written="Wed Nov 13 13:44:43 2013" update-origin="vm1" update-client="crmd" have-quorum="1" dc-uuid="3232261517"/>
Nov 13 13:44:43 [31462] vm2      attrd: (  election.c:352   )   debug: election_count_vote: 	Created voted hash
Nov 13 13:44:43 [31462] vm2      attrd: (  election.c:297   )   debug: election_check: 	Still waiting on 3 non-votes (3 total)
Nov 13 13:44:43 [31462] vm2      attrd: (  commands.c:382   )   trace: attrd_peer_update: 	Setting terminate[vm2] to (null) from vm2
Nov 13 13:44:43 [31462] vm2      attrd: (  commands.c:459   )   trace: write_or_elect_attribute: 	Election in progress to determine who will write out terminate
Nov 13 13:44:43 [31462] vm2      attrd: (  commands.c:382   )   trace: attrd_peer_update: 	Setting shutdown[vm2] to (null) from vm2
Nov 13 13:44:43 [31462] vm2      attrd: (  commands.c:459   )   trace: write_or_elect_attribute: 	Election in progress to determine who will write out shutdown
Nov 13 13:44:43 [31462] vm2      attrd: (  election.c:297   )   debug: election_check: 	Still waiting on 3 non-votes (3 total)
Nov 13 13:44:43 [31462] vm2      attrd: (  election.c:297   )   debug: election_check: 	Still waiting on 3 non-votes (3 total)
Nov 13 13:44:43 [31462] vm2      attrd: (  election.c:485   )   debug: election_count_vote: 	Election 4 (current: 4, owner: 3232261518): Processed vote from vm2 (Recorded)
Nov 13 13:44:43 [31462] vm2      attrd: (  election.c:297   )   debug: election_check: 	Still waiting on 2 non-votes (3 total)
Nov 13 13:44:43 [31462] vm2      attrd: (  election.c:297   )   debug: election_check: 	Still waiting on 2 non-votes (3 total)
Nov 13 13:44:43 [31462] vm2      attrd: (  election.c:297   )   debug: election_check: 	Still waiting on 2 non-votes (3 total)
Nov 13 13:44:43 [31462] vm2      attrd: (  election.c:297   )   debug: election_check: 	Still waiting on 2 non-votes (3 total)
Nov 13 13:44:43 [31462] vm2      attrd: (  election.c:297   )   debug: election_check: 	Still waiting on 2 non-votes (3 total)
Nov 13 13:44:43 [31462] vm2      attrd: (  election.c:485   )   debug: election_count_vote: 	Election 4 (current: 4, owner: 3232261518): Processed no-vote from vm1 (Recorded)
Nov 13 13:44:43 [31462] vm2      attrd: (  election.c:297   )   debug: election_check: 	Still waiting on 1 non-votes (3 total)
Nov 13 13:44:43 [31462] vm2      attrd: (  election.c:297   )   debug: election_check: 	Still waiting on 1 non-votes (3 total)
Nov 13 13:44:43 [31462] vm2      attrd: (  election.c:297   )   debug: election_check: 	Still waiting on 1 non-votes (3 total)
Nov 13 13:44:43 [31462] vm2      attrd: (  election.c:485   )   debug: election_count_vote: 	Election 4 (current: 4, owner: 3232261518): Processed no-vote from vm3 (Recorded)
Nov 13 13:44:43 [31462] vm2      attrd: (  election.c:50    )    info: election_timer_cb: 	Election election-attrd complete
Nov 13 13:44:43 [31462] vm2      attrd: (  commands.c:330   )   debug: attrd_peer_sync: 	Syncing shutdown[vm1] = (null) to everyone
Nov 13 13:44:43 [31462] vm2      attrd: (  commands.c:330   )   debug: attrd_peer_sync: 	Syncing shutdown[vm2] = (null) to everyone
Nov 13 13:44:43 [31462] vm2      attrd: (  commands.c:330   )   debug: attrd_peer_sync: 	Syncing shutdown[vm3] = (null) to everyone
Nov 13 13:44:43 [31462] vm2      attrd: (  commands.c:330   )   debug: attrd_peer_sync: 	Syncing terminate[vm1] = (null) to everyone
Nov 13 13:44:43 [31462] vm2      attrd: (  commands.c:330   )   debug: attrd_peer_sync: 	Syncing terminate[vm2] = (null) to everyone
Nov 13 13:44:43 [31462] vm2      attrd: (  commands.c:330   )   debug: attrd_peer_sync: 	Syncing terminate[vm3] = (null) to everyone
Nov 13 13:44:43 [31462] vm2      attrd: (  commands.c:335   )   debug: attrd_peer_sync: 	Syncing values to everyone
Nov 13 13:44:43 [31462] vm2      attrd: (  commands.c:683   )   trace: write_attribute: 	Updating value's nodeid
Nov 13 13:44:43 [31462] vm2      attrd: (  commands.c:697   )   debug: write_attribute: 	Update: vm1[shutdown]=(null) (3232261517 3232261517 3232261517 vm1)
Nov 13 13:44:43 [31462] vm2      attrd: (  commands.c:683   )   trace: write_attribute: 	Updating value's nodeid
Nov 13 13:44:43 [31462] vm2      attrd: (  commands.c:697   )   debug: write_attribute: 	Update: vm2[shutdown]=(null) (3232261518 3232261518 3232261518 vm2)
Nov 13 13:44:43 [31462] vm2      attrd: (  commands.c:683   )   trace: write_attribute: 	Updating value's nodeid
Nov 13 13:44:43 [31462] vm2      attrd: (  commands.c:697   )   debug: write_attribute: 	Update: vm3[shutdown]=(null) (3232261519 3232261519 3232261519 vm3)
Nov 13 13:44:43 [31462] vm2      attrd: (  commands.c:714   )   trace: write_attribute: 	write_attribute   <status>
Nov 13 13:44:43 [31462] vm2      attrd: (  commands.c:714   )   trace: write_attribute: 	write_attribute     <node_state id="3232261517">
Nov 13 13:44:43 [31462] vm2      attrd: (  commands.c:714   )   trace: write_attribute: 	write_attribute       <transient_attributes id="3232261517">
Nov 13 13:44:43 [31462] vm2      attrd: (  commands.c:714   )   trace: write_attribute: 	write_attribute         <instance_attributes id="status-3232261517">
Nov 13 13:44:43 [31462] vm2      attrd: (  commands.c:714   )   trace: write_attribute: 	write_attribute           <nvpair id="status-3232261517-shutdown" name="shutdown" value="" __delete__="value"/>
Nov 13 13:44:43 [31462] vm2      attrd: (  commands.c:714   )   trace: write_attribute: 	write_attribute         </instance_attributes>
Nov 13 13:44:43 [31462] vm2      attrd: (  commands.c:714   )   trace: write_attribute: 	write_attribute       </transient_attributes>
Nov 13 13:44:43 [31462] vm2      attrd: (  commands.c:714   )   trace: write_attribute: 	write_attribute     </node_state>
Nov 13 13:44:43 [31462] vm2      attrd: (  commands.c:714   )   trace: write_attribute: 	write_attribute     <node_state id="3232261518">
Nov 13 13:44:43 [31462] vm2      attrd: (  commands.c:714   )   trace: write_attribute: 	write_attribute       <transient_attributes id="3232261518">
Nov 13 13:44:43 [31462] vm2      attrd: (  commands.c:714   )   trace: write_attribute: 	write_attribute         <instance_attributes id="status-3232261518">
Nov 13 13:44:43 [31462] vm2      attrd: (  commands.c:714   )   trace: write_attribute: 	write_attribute           <nvpair id="status-3232261518-shutdown" name="shutdown" value="" __delete__="value"/>
Nov 13 13:44:43 [31462] vm2      attrd: (  commands.c:714   )   trace: write_attribute: 	write_attribute         </instance_attributes>
Nov 13 13:44:43 [31462] vm2      attrd: (  commands.c:714   )   trace: write_attribute: 	write_attribute       </transient_attributes>
Nov 13 13:44:43 [31462] vm2      attrd: (  commands.c:714   )   trace: write_attribute: 	write_attribute     </node_state>
Nov 13 13:44:43 [31462] vm2      attrd: (  commands.c:714   )   trace: write_attribute: 	write_attribute     <node_state id="3232261519">
Nov 13 13:44:43 [31462] vm2      attrd: (  commands.c:714   )   trace: write_attribute: 	write_attribute       <transient_attributes id="3232261519">
Nov 13 13:44:43 [31462] vm2      attrd: (  commands.c:714   )   trace: write_attribute: 	write_attribute         <instance_attributes id="status-3232261519">
Nov 13 13:44:43 [31462] vm2      attrd: (  commands.c:714   )   trace: write_attribute: 	write_attribute           <nvpair id="status-3232261519-shutdown" name="shutdown" value="" __delete__="value"/>
Nov 13 13:44:43 [31462] vm2      attrd: (  commands.c:714   )   trace: write_attribute: 	write_attribute         </instance_attributes>
Nov 13 13:44:43 [31462] vm2      attrd: (  commands.c:714   )   trace: write_attribute: 	write_attribute       </transient_attributes>
Nov 13 13:44:43 [31462] vm2      attrd: (  commands.c:714   )   trace: write_attribute: 	write_attribute     </node_state>
Nov 13 13:44:43 [31462] vm2      attrd: (  commands.c:714   )   trace: write_attribute: 	write_attribute   </status>
Nov 13 13:44:43 [31459] vm2        cib: ( callbacks.c:688   )    info: cib_process_request: 	Forwarding cib_modify operation for section status to master (origin=local/attrd/2)
Nov 13 13:44:43 [31462] vm2      attrd: (  commands.c:720   )  notice: write_attribute: 	Sent update 2 with 3 changes for shutdown, id=<n/a>, set=(null)
Nov 13 13:44:43 [31462] vm2      attrd: (  commands.c:683   )   trace: write_attribute: 	Updating value's nodeid
Nov 13 13:44:43 [31462] vm2      attrd: (  commands.c:697   )   debug: write_attribute: 	Update: vm1[terminate]=(null) (3232261517 3232261517 3232261517 vm1)
Nov 13 13:44:43 [31462] vm2      attrd: (  commands.c:683   )   trace: write_attribute: 	Updating value's nodeid
Nov 13 13:44:43 [31462] vm2      attrd: (  commands.c:697   )   debug: write_attribute: 	Update: vm2[terminate]=(null) (3232261518 3232261518 3232261518 vm2)
Nov 13 13:44:43 [31462] vm2      attrd: (  commands.c:683   )   trace: write_attribute: 	Updating value's nodeid
Nov 13 13:44:43 [31462] vm2      attrd: (  commands.c:697   )   debug: write_attribute: 	Update: vm3[terminate]=(null) (3232261519 3232261519 3232261519 vm3)
Nov 13 13:44:43 [31462] vm2      attrd: (  commands.c:714   )   trace: write_attribute: 	write_attribute   <status>
Nov 13 13:44:43 [31462] vm2      attrd: (  commands.c:714   )   trace: write_attribute: 	write_attribute     <node_state id="3232261517">
Nov 13 13:44:43 [31462] vm2      attrd: (  commands.c:714   )   trace: write_attribute: 	write_attribute       <transient_attributes id="3232261517">
Nov 13 13:44:43 [31462] vm2      attrd: (  commands.c:714   )   trace: write_attribute: 	write_attribute         <instance_attributes id="status-3232261517">
Nov 13 13:44:43 [31462] vm2      attrd: (  commands.c:714   )   trace: write_attribute: 	write_attribute           <nvpair id="status-3232261517-terminate" name="terminate" value="" __delete__="value"/>
Nov 13 13:44:43 [31462] vm2      attrd: (  commands.c:714   )   trace: write_attribute: 	write_attribute         </instance_attributes>
Nov 13 13:44:43 [31462] vm2      attrd: (  commands.c:714   )   trace: write_attribute: 	write_attribute       </transient_attributes>
Nov 13 13:44:43 [31462] vm2      attrd: (  commands.c:714   )   trace: write_attribute: 	write_attribute     </node_state>
Nov 13 13:44:43 [31462] vm2      attrd: (  commands.c:714   )   trace: write_attribute: 	write_attribute     <node_state id="3232261518">
Nov 13 13:44:43 [31462] vm2      attrd: (  commands.c:714   )   trace: write_attribute: 	write_attribute       <transient_attributes id="3232261518">
Nov 13 13:44:43 [31462] vm2      attrd: (  commands.c:714   )   trace: write_attribute: 	write_attribute         <instance_attributes id="status-3232261518">
Nov 13 13:44:43 [31462] vm2      attrd: (  commands.c:714   )   trace: write_attribute: 	write_attribute           <nvpair id="status-3232261518-terminate" name="terminate" value="" __delete__="value"/>
Nov 13 13:44:43 [31462] vm2      attrd: (  commands.c:714   )   trace: write_attribute: 	write_attribute         </instance_attributes>
Nov 13 13:44:43 [31462] vm2      attrd: (  commands.c:714   )   trace: write_attribute: 	write_attribute       </transient_attributes>
Nov 13 13:44:43 [31462] vm2      attrd: (  commands.c:714   )   trace: write_attribute: 	write_attribute     </node_state>
Nov 13 13:44:43 [31462] vm2      attrd: (  commands.c:714   )   trace: write_attribute: 	write_attribute     <node_state id="3232261519">
Nov 13 13:44:43 [31462] vm2      attrd: (  commands.c:714   )   trace: write_attribute: 	write_attribute       <transient_attributes id="3232261519">
Nov 13 13:44:43 [31462] vm2      attrd: (  commands.c:714   )   trace: write_attribute: 	write_attribute         <instance_attributes id="status-3232261519">
Nov 13 13:44:43 [31462] vm2      attrd: (  commands.c:714   )   trace: write_attribute: 	write_attribute           <nvpair id="status-3232261519-terminate" name="terminate" value="" __delete__="value"/>
Nov 13 13:44:43 [31462] vm2      attrd: (  commands.c:714   )   trace: write_attribute: 	write_attribute         </instance_attributes>
Nov 13 13:44:43 [31462] vm2      attrd: (  commands.c:714   )   trace: write_attribute: 	write_attribute       </transient_attributes>
Nov 13 13:44:43 [31462] vm2      attrd: (  commands.c:714   )   trace: write_attribute: 	write_attribute     </node_state>
Nov 13 13:44:43 [31462] vm2      attrd: (  commands.c:714   )   trace: write_attribute: 	write_attribute   </status>
Nov 13 13:44:43 [31459] vm2        cib: ( callbacks.c:688   )    info: cib_process_request: 	Forwarding cib_modify operation for section status to master (origin=local/attrd/3)
Nov 13 13:44:43 [31462] vm2      attrd: (  commands.c:720   )  notice: write_attribute: 	Sent update 3 with 3 changes for terminate, id=<n/a>, set=(null)
Nov 13 13:44:43 [31464] vm2       crmd: (     utils.c:2001  )   debug: attrd_update_delegate: 	Sent update: probe_complete=true for vm2
Nov 13 13:44:43 [31462] vm2      attrd: (      main.c:221   )   trace: attrd_ipc_dispatch: 	Processing msg from 31464 (0x874000)
Nov 13 13:44:43 [31462] vm2      attrd: (      main.c:222   )   trace: attrd_ipc_dispatch: 	attrd_ipc_dispatch   <attrd_update_delegate t="attrd" src="crmd" task="update" attr_name="probe_complete" attr_value="true" attr_section="status" attr_host="vm2" attr_is_remote="0"/>
Nov 13 13:44:43 [31462] vm2      attrd: (  commands.c:231   )    info: attrd_client_message: 	Broadcasting probe_complete[vm2] = true (writer)
Nov 13 13:44:43 [31462] vm2      attrd: (  commands.c:146   )   trace: create_attribute: 	Created attribute probe_complete with no delay
Nov 13 13:44:43 [31462] vm2      attrd: (  commands.c:382   )   trace: attrd_peer_update: 	Setting probe_complete[vm3] to true from vm3
Nov 13 13:44:43 [31462] vm2      attrd: (  commands.c:683   )   trace: write_attribute: 	Updating value's nodeid
Nov 13 13:44:43 [31462] vm2      attrd: (  commands.c:697   )   debug: write_attribute: 	Update: vm3[probe_complete]=true (3232261519 3232261519 3232261519 vm3)
Nov 13 13:44:43 [31462] vm2      attrd: (  commands.c:714   )   trace: write_attribute: 	write_attribute   <status>
Nov 13 13:44:43 [31462] vm2      attrd: (  commands.c:714   )   trace: write_attribute: 	write_attribute     <node_state id="3232261519">
Nov 13 13:44:43 [31462] vm2      attrd: (  commands.c:714   )   trace: write_attribute: 	write_attribute       <transient_attributes id="3232261519">
Nov 13 13:44:43 [31462] vm2      attrd: (  commands.c:714   )   trace: write_attribute: 	write_attribute         <instance_attributes id="status-3232261519">
Nov 13 13:44:43 [31462] vm2      attrd: (  commands.c:714   )   trace: write_attribute: 	write_attribute           <nvpair id="status-3232261519-probe_complete" name="probe_complete" value="true"/>
Nov 13 13:44:43 [31462] vm2      attrd: (  commands.c:714   )   trace: write_attribute: 	write_attribute         </instance_attributes>
Nov 13 13:44:43 [31462] vm2      attrd: (  commands.c:714   )   trace: write_attribute: 	write_attribute       </transient_attributes>
Nov 13 13:44:43 [31462] vm2      attrd: (  commands.c:714   )   trace: write_attribute: 	write_attribute     </node_state>
Nov 13 13:44:43 [31462] vm2      attrd: (  commands.c:714   )   trace: write_attribute: 	write_attribute   </status>
Nov 13 13:44:43 [31462] vm2      attrd: (  commands.c:720   )  notice: write_attribute: 	Sent update 4 with 1 changes for probe_complete, id=<n/a>, set=(null)
Nov 13 13:44:43 [31459] vm2        cib: ( callbacks.c:688   )    info: cib_process_request: 	Forwarding cib_modify operation for section status to master (origin=local/attrd/4)
Nov 13 13:44:43 [31462] vm2      attrd: (  commands.c:382   )   trace: attrd_peer_update: 	Setting probe_complete[vm1] to true from vm1
Nov 13 13:44:43 [31462] vm2      attrd: (  commands.c:666   )    info: write_attribute: 	Write out of probe_complete delayed: update 4 in progress
Nov 13 13:44:43 [31460] vm2 stonith-ng: ( cib_utils.c:167   )   debug: Config update: 	Diff: --- 0.5.5
Nov 13 13:44:43 [31460] vm2 stonith-ng: ( cib_utils.c:169   )   debug: Config update: 	Diff: +++ 0.5.6 1eda82bbbd77cf8a880d3d455765d8d6
Nov 13 13:44:43 [31460] vm2 stonith-ng: (       xml.c:1496  )   debug: Config update: 	-- <cib num_updates="5"/>
Nov 13 13:44:43 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+  <cib epoch="5" num_updates="6" admin_epoch="0" validate-with="pacemaker-1.2" crm_feature_set="3.0.8" cib-last-written="Wed Nov 13 13:44:43 2013" update-origin="vm1" update-client="crmd" have-quorum="1" dc-uuid="3232261517">
Nov 13 13:44:43 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+    <status>
Nov 13 13:44:43 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+      <node_state id="3232261519" uname="vm3" in_ccm="true" crmd="online" crm-debug-origin="do_state_transition" join="member" expected="member">
Nov 13 13:44:43 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++       <transient_attributes id="3232261519">
Nov 13 13:44:43 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++         <instance_attributes id="status-3232261519"/>
Nov 13 13:44:43 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++       </transient_attributes>
Nov 13 13:44:43 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+      </node_state>
Nov 13 13:44:43 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+      <node_state id="3232261517" uname="vm1" in_ccm="true" crmd="online" crm-debug-origin="do_state_transition" join="member" expected="member">
Nov 13 13:44:43 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++       <transient_attributes id="3232261517">
Nov 13 13:44:43 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++         <instance_attributes id="status-3232261517"/>
Nov 13 13:44:43 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++       </transient_attributes>
Nov 13 13:44:43 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+      </node_state>
Nov 13 13:44:43 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+      <node_state id="3232261518" uname="vm2" in_ccm="true" crmd="online" crm-debug-origin="do_state_transition" join="member" expected="member">
Nov 13 13:44:43 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++       <transient_attributes id="3232261518">
Nov 13 13:44:43 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++         <instance_attributes id="status-3232261518"/>
Nov 13 13:44:43 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++       </transient_attributes>
Nov 13 13:44:43 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+      </node_state>
Nov 13 13:44:43 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+    </status>
Nov 13 13:44:43 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+  </cib>
Nov 13 13:44:43 [31459] vm2        cib: ( callbacks.c:761   )    info: cib_process_request: 	Completed cib_apply_diff operation for section status: OK (rc=0, origin=vm1/attrd/2, version=0.5.6)
Nov 13 13:44:43 [31462] vm2      attrd: (  commands.c:548   )    info: attrd_cib_callback: 	Update 2 for shutdown: OK (0)
Nov 13 13:44:43 [31462] vm2      attrd: (  commands.c:552   )  notice: attrd_cib_callback: 	Update 2 for shutdown[vm1]=(null): OK (0)
Nov 13 13:44:43 [31462] vm2      attrd: (  commands.c:552   )  notice: attrd_cib_callback: 	Update 2 for shutdown[vm2]=(null): OK (0)
Nov 13 13:44:43 [31462] vm2      attrd: (  commands.c:552   )  notice: attrd_cib_callback: 	Update 2 for shutdown[vm3]=(null): OK (0)
Nov 13 13:44:43 [31462] vm2      attrd: (  commands.c:382   )   trace: attrd_peer_update: 	Setting probe_complete[vm2] to true from vm2
Nov 13 13:44:43 [31462] vm2      attrd: (  commands.c:666   )    info: write_attribute: 	Write out of probe_complete delayed: update 4 in progress
Nov 13 13:44:43 [31462] vm2      attrd: (  commands.c:548   )    info: attrd_cib_callback: 	Update 3 for terminate: OK (0)
Nov 13 13:44:43 [31462] vm2      attrd: (  commands.c:552   )  notice: attrd_cib_callback: 	Update 3 for terminate[vm1]=(null): OK (0)
Nov 13 13:44:43 [31462] vm2      attrd: (  commands.c:552   )  notice: attrd_cib_callback: 	Update 3 for terminate[vm2]=(null): OK (0)
Nov 13 13:44:43 [31462] vm2      attrd: (  commands.c:552   )  notice: attrd_cib_callback: 	Update 3 for terminate[vm3]=(null): OK (0)
Nov 13 13:44:43 [31459] vm2        cib: ( callbacks.c:761   )    info: cib_process_request: 	Completed cib_apply_diff operation for section status: OK (rc=0, origin=vm1/attrd/4, version=0.5.7)
Nov 13 13:44:43 [31460] vm2 stonith-ng: ( cib_utils.c:167   )   debug: Config update: 	Diff: --- 0.5.6
Nov 13 13:44:43 [31460] vm2 stonith-ng: ( cib_utils.c:169   )   debug: Config update: 	Diff: +++ 0.5.7 ff617ff8b610f67d2056a9b012bdfc03
Nov 13 13:44:43 [31460] vm2 stonith-ng: (       xml.c:1496  )   debug: Config update: 	-- <cib num_updates="6"/>
Nov 13 13:44:43 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+  <cib epoch="5" num_updates="7" admin_epoch="0" validate-with="pacemaker-1.2" crm_feature_set="3.0.8" cib-last-written="Wed Nov 13 13:44:43 2013" update-origin="vm1" update-client="crmd" have-quorum="1" dc-uuid="3232261517">
Nov 13 13:44:43 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+    <status>
Nov 13 13:44:43 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+      <node_state id="3232261519" uname="vm3" in_ccm="true" crmd="online" crm-debug-origin="do_state_transition" join="member" expected="member">
Nov 13 13:44:43 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+        <transient_attributes id="3232261519">
Nov 13 13:44:43 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+          <instance_attributes id="status-3232261519">
Nov 13 13:44:43 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++           <nvpair id="status-3232261519-probe_complete" name="probe_complete" value="true"/>
Nov 13 13:44:43 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+          </instance_attributes>
Nov 13 13:44:43 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+        </transient_attributes>
Nov 13 13:44:43 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+      </node_state>
Nov 13 13:44:43 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+    </status>
Nov 13 13:44:43 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+  </cib>
Nov 13 13:44:43 [31462] vm2      attrd: (  commands.c:548   )    info: attrd_cib_callback: 	Update 4 for probe_complete: OK (0)
Nov 13 13:44:43 [31462] vm2      attrd: (  commands.c:552   )  notice: attrd_cib_callback: 	Update 4 for probe_complete[vm1]=(null): OK (0)
Nov 13 13:44:43 [31462] vm2      attrd: (  commands.c:552   )  notice: attrd_cib_callback: 	Update 4 for probe_complete[vm2]=(null): OK (0)
Nov 13 13:44:43 [31462] vm2      attrd: (  commands.c:552   )  notice: attrd_cib_callback: 	Update 4 for probe_complete[vm3]=true: OK (0)
Nov 13 13:44:43 [31462] vm2      attrd: (  commands.c:683   )   trace: write_attribute: 	Updating value's nodeid
Nov 13 13:44:43 [31462] vm2      attrd: (  commands.c:697   )   debug: write_attribute: 	Update: vm1[probe_complete]=true (3232261517 3232261517 3232261517 vm1)
Nov 13 13:44:43 [31462] vm2      attrd: (  commands.c:683   )   trace: write_attribute: 	Updating value's nodeid
Nov 13 13:44:43 [31462] vm2      attrd: (  commands.c:697   )   debug: write_attribute: 	Update: vm2[probe_complete]=true (3232261518 3232261518 3232261518 vm2)
Nov 13 13:44:43 [31462] vm2      attrd: (  commands.c:697   )   debug: write_attribute: 	Update: vm3[probe_complete]=true (3232261519 3232261519 3232261519 vm3)
Nov 13 13:44:43 [31462] vm2      attrd: (  commands.c:714   )   trace: write_attribute: 	write_attribute   <status>
Nov 13 13:44:43 [31462] vm2      attrd: (  commands.c:714   )   trace: write_attribute: 	write_attribute     <node_state id="3232261517">
Nov 13 13:44:43 [31462] vm2      attrd: (  commands.c:714   )   trace: write_attribute: 	write_attribute       <transient_attributes id="3232261517">
Nov 13 13:44:43 [31462] vm2      attrd: (  commands.c:714   )   trace: write_attribute: 	write_attribute         <instance_attributes id="status-3232261517">
Nov 13 13:44:43 [31462] vm2      attrd: (  commands.c:714   )   trace: write_attribute: 	write_attribute           <nvpair id="status-3232261517-probe_complete" name="probe_complete" value="true"/>
Nov 13 13:44:43 [31462] vm2      attrd: (  commands.c:714   )   trace: write_attribute: 	write_attribute         </instance_attributes>
Nov 13 13:44:43 [31462] vm2      attrd: (  commands.c:714   )   trace: write_attribute: 	write_attribute       </transient_attributes>
Nov 13 13:44:43 [31462] vm2      attrd: (  commands.c:714   )   trace: write_attribute: 	write_attribute     </node_state>
Nov 13 13:44:43 [31462] vm2      attrd: (  commands.c:714   )   trace: write_attribute: 	write_attribute     <node_state id="3232261518">
Nov 13 13:44:43 [31462] vm2      attrd: (  commands.c:714   )   trace: write_attribute: 	write_attribute       <transient_attributes id="3232261518">
Nov 13 13:44:43 [31462] vm2      attrd: (  commands.c:714   )   trace: write_attribute: 	write_attribute         <instance_attributes id="status-3232261518">
Nov 13 13:44:43 [31462] vm2      attrd: (  commands.c:714   )   trace: write_attribute: 	write_attribute           <nvpair id="status-3232261518-probe_complete" name="probe_complete" value="true"/>
Nov 13 13:44:43 [31462] vm2      attrd: (  commands.c:714   )   trace: write_attribute: 	write_attribute         </instance_attributes>
Nov 13 13:44:43 [31462] vm2      attrd: (  commands.c:714   )   trace: write_attribute: 	write_attribute       </transient_attributes>
Nov 13 13:44:43 [31462] vm2      attrd: (  commands.c:714   )   trace: write_attribute: 	write_attribute     </node_state>
Nov 13 13:44:43 [31462] vm2      attrd: (  commands.c:714   )   trace: write_attribute: 	write_attribute     <node_state id="3232261519">
Nov 13 13:44:43 [31462] vm2      attrd: (  commands.c:714   )   trace: write_attribute: 	write_attribute       <transient_attributes id="3232261519">
Nov 13 13:44:43 [31462] vm2      attrd: (  commands.c:714   )   trace: write_attribute: 	write_attribute         <instance_attributes id="status-3232261519">
Nov 13 13:44:43 [31462] vm2      attrd: (  commands.c:714   )   trace: write_attribute: 	write_attribute           <nvpair id="status-3232261519-probe_complete" name="probe_complete" value="true"/>
Nov 13 13:44:43 [31462] vm2      attrd: (  commands.c:714   )   trace: write_attribute: 	write_attribute         </instance_attributes>
Nov 13 13:44:43 [31462] vm2      attrd: (  commands.c:714   )   trace: write_attribute: 	write_attribute       </transient_attributes>
Nov 13 13:44:43 [31462] vm2      attrd: (  commands.c:714   )   trace: write_attribute: 	write_attribute     </node_state>
Nov 13 13:44:43 [31462] vm2      attrd: (  commands.c:714   )   trace: write_attribute: 	write_attribute   </status>
Nov 13 13:44:43 [31462] vm2      attrd: (  commands.c:720   )  notice: write_attribute: 	Sent update 5 with 3 changes for probe_complete, id=<n/a>, set=(null)
Nov 13 13:44:43 [31459] vm2        cib: ( callbacks.c:688   )    info: cib_process_request: 	Forwarding cib_modify operation for section status to master (origin=local/attrd/5)
Nov 13 13:44:43 [31459] vm2        cib: ( callbacks.c:761   )    info: cib_process_request: 	Completed cib_apply_diff operation for section status: OK (rc=0, origin=vm1/attrd/5, version=0.5.8)
Nov 13 13:44:43 [31460] vm2 stonith-ng: ( cib_utils.c:167   )   debug: Config update: 	Diff: --- 0.5.7
Nov 13 13:44:43 [31460] vm2 stonith-ng: ( cib_utils.c:169   )   debug: Config update: 	Diff: +++ 0.5.8 f2fe37326dd3c20276f6447b1667415b
Nov 13 13:44:43 [31460] vm2 stonith-ng: (       xml.c:1496  )   debug: Config update: 	-- <cib num_updates="7"/>
Nov 13 13:44:43 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+  <cib epoch="5" num_updates="8" admin_epoch="0" validate-with="pacemaker-1.2" crm_feature_set="3.0.8" cib-last-written="Wed Nov 13 13:44:43 2013" update-origin="vm1" update-client="crmd" have-quorum="1" dc-uuid="3232261517">
Nov 13 13:44:43 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+    <status>
Nov 13 13:44:43 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+      <node_state id="3232261517" uname="vm1" in_ccm="true" crmd="online" crm-debug-origin="do_state_transition" join="member" expected="member">
Nov 13 13:44:43 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+        <transient_attributes id="3232261517">
Nov 13 13:44:43 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+          <instance_attributes id="status-3232261517">
Nov 13 13:44:43 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++           <nvpair id="status-3232261517-probe_complete" name="probe_complete" value="true"/>
Nov 13 13:44:43 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+          </instance_attributes>
Nov 13 13:44:43 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+        </transient_attributes>
Nov 13 13:44:43 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+      </node_state>
Nov 13 13:44:43 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+      <node_state id="3232261518" uname="vm2" in_ccm="true" crmd="online" crm-debug-origin="do_state_transition" join="member" expected="member">
Nov 13 13:44:43 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+        <transient_attributes id="3232261518">
Nov 13 13:44:43 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+          <instance_attributes id="status-3232261518">
Nov 13 13:44:43 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++           <nvpair id="status-3232261518-probe_complete" name="probe_complete" value="true"/>
Nov 13 13:44:43 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+          </instance_attributes>
Nov 13 13:44:43 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+        </transient_attributes>
Nov 13 13:44:43 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+      </node_state>
Nov 13 13:44:43 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+    </status>
Nov 13 13:44:43 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+  </cib>
Nov 13 13:44:43 [31462] vm2      attrd: (  commands.c:548   )    info: attrd_cib_callback: 	Update 5 for probe_complete: OK (0)
Nov 13 13:44:43 [31462] vm2      attrd: (  commands.c:552   )  notice: attrd_cib_callback: 	Update 5 for probe_complete[vm1]=true: OK (0)
Nov 13 13:44:43 [31462] vm2      attrd: (  commands.c:552   )  notice: attrd_cib_callback: 	Update 5 for probe_complete[vm2]=true: OK (0)
Nov 13 13:44:43 [31462] vm2      attrd: (  commands.c:552   )  notice: attrd_cib_callback: 	Update 5 for probe_complete[vm3]=true: OK (0)
Nov 13 13:44:51 [31464] vm2       crmd: (  throttle.c:651   )   debug: throttle_update: 	Host vm1 supports a maximum of 2 jobs and throttle mode 0000.  New job limit is 2
Nov 13 13:44:51 [31464] vm2       crmd: (  throttle.c:259   )   debug: throttle_cib_load: 	Init 5 + 6 ticks at 1384317891 (100 tps)
Nov 13 13:44:51 [31464] vm2       crmd: (  throttle.c:298   )   debug: throttle_load_avg: 	Current load is 0.100000 (full: 0.10 0.07 0.02 1/108 31474)
Nov 13 13:44:51 [31464] vm2       crmd: (  throttle.c:378   )   debug: throttle_io_load: 	Current IO load is 0.000000
Nov 13 13:44:51 [31464] vm2       crmd: (  throttle.c:520   )   debug: throttle_timer_cb: 	New throttle mode: 0000 (was 0000)
Nov 13 13:44:51 [31464] vm2       crmd: (  throttle.c:499   )    info: throttle_send_command: 	Updated throttle state to 0000
Nov 13 13:44:51 [31464] vm2       crmd: (  throttle.c:651   )   debug: throttle_update: 	Host vm2 supports a maximum of 2 jobs and throttle mode 0000.  New job limit is 2
Nov 13 13:44:52 [31464] vm2       crmd: (  throttle.c:651   )   debug: throttle_update: 	Host vm3 supports a maximum of 2 jobs and throttle mode 0000.  New job limit is 2
Nov 13 13:45:21 [31464] vm2       crmd: (  throttle.c:256   )   debug: throttle_cib_load: 	Current CIB load from 0 ticks in 30s is 0.000000 (@100 tps)
Nov 13 13:45:21 [31464] vm2       crmd: (  throttle.c:298   )   debug: throttle_load_avg: 	Current load is 0.060000 (full: 0.06 0.06 0.02 1/108 31474)
Nov 13 13:45:21 [31464] vm2       crmd: (  throttle.c:378   )   debug: throttle_io_load: 	Current IO load is 0.000000
Nov 13 13:45:33 [31464] vm2       crmd: (  election.c:172   )   debug: crm_uptime: 	Current CPU usage is: 0s, 38994us
Nov 13 13:45:33 [31464] vm2       crmd: (  election.c:204   )   debug: crm_compare_age: 	Lose: 0.38994 vs 0.53991 (usec)
Nov 13 13:45:33 [31464] vm2       crmd: (  election.c:511   )    info: election_count_vote: 	Election 3 (owner: 3232261517) lost: vote from vm1 (Uptime)
Nov 13 13:45:33 [31464] vm2       crmd: (     utils.c:984   )    info: update_dc: 	Unset DC. Was vm1
Nov 13 13:45:33 [31464] vm2       crmd: (  election.c:91    )   debug: do_election_check: 	Ignore election check: we are not in an election
Nov 13 13:45:33 [31464] vm2       crmd: (       fsa.c:193   )   debug: s_crmd_fsa: 	Processing I_PENDING: [ state=S_NOT_DC cause=C_FSA_INTERNAL origin=do_election_count_vote ]
Nov 13 13:45:33 [31464] vm2       crmd: (      misc.c:47    )    info: do_log: 	FSA: Input I_PENDING from do_election_count_vote() received in state S_NOT_DC
Nov 13 13:45:33 [31464] vm2       crmd: (       fsa.c:502   )  notice: do_state_transition: 	State transition S_NOT_DC -> S_PENDING [ input=I_PENDING cause=C_FSA_INTERNAL origin=do_election_count_vote ]
Nov 13 13:45:33 [31459] vm2        cib: ( callbacks.c:761   )    info: cib_process_request: 	Completed cib_slave operation for section 'all': OK (rc=0, origin=local/crmd/11, version=0.5.8)
Nov 13 13:45:33 [31464] vm2       crmd: (     utils.c:192   )   debug: crm_timer_start: 	Started Election Trigger (I_DC_TIMEOUT:20000ms), src=26
Nov 13 13:45:33 [31459] vm2        cib: (        io.c:596   )   debug: activateCibXml: 	Triggering CIB write for cib_apply_diff op
Nov 13 13:45:33 [31460] vm2 stonith-ng: ( cib_utils.c:167   )   debug: Config update: 	Diff: --- 0.5.8
Nov 13 13:45:33 [31460] vm2 stonith-ng: ( cib_utils.c:169   )   debug: Config update: 	Diff: +++ 0.6.1 e65af88559035840dce69eaec2069fba
Nov 13 13:45:33 [31460] vm2 stonith-ng: (       xml.c:1496  )   debug: Config update: 	-  <cib epoch="5" num_updates="8" admin_epoch="0">
Nov 13 13:45:33 [31460] vm2 stonith-ng: (       xml.c:1496  )   debug: Config update: 	-    <configuration>
Nov 13 13:45:33 [31460] vm2 stonith-ng: (       xml.c:1496  )   debug: Config update: 	-      <crm_config>
Nov 13 13:45:33 [31460] vm2 stonith-ng: (       xml.c:1496  )   debug: Config update: 	-        <cluster_property_set id="cib-bootstrap-options">
Nov 13 13:45:33 [31460] vm2 stonith-ng: (       xml.c:1496  )   debug: Config update: 	--         <nvpair id="cib-bootstrap-options-dc-version" name="dc-version" value="1.1.10-2383f6c"/>
Nov 13 13:45:33 [31460] vm2 stonith-ng: (       xml.c:1496  )   debug: Config update: 	--         <nvpair id="cib-bootstrap-options-cluster-infrastructure" name="cluster-infrastructure" value="corosync"/>
Nov 13 13:45:33 [31460] vm2 stonith-ng: (       xml.c:1496  )   debug: Config update: 	-        </cluster_property_set>
Nov 13 13:45:33 [31460] vm2 stonith-ng: (       xml.c:1496  )   debug: Config update: 	-      </crm_config>
Nov 13 13:45:33 [31460] vm2 stonith-ng: (       xml.c:1496  )   debug: Config update: 	-    </configuration>
Nov 13 13:45:33 [31460] vm2 stonith-ng: (       xml.c:1496  )   debug: Config update: 	-  </cib>
Nov 13 13:45:33 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+  <cib epoch="6" num_updates="1" admin_epoch="0" validate-with="pacemaker-1.2" crm_feature_set="3.0.8" cib-last-written="Wed Nov 13 13:45:33 2013" update-origin="vm1" update-client="cibadmin" have-quorum="1" dc-uuid="3232261517">
Nov 13 13:45:33 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+    <configuration>
Nov 13 13:45:33 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+      <crm_config>
Nov 13 13:45:33 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+        <cluster_property_set id="cib-bootstrap-options">
Nov 13 13:45:33 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++         <nvpair name="no-quorum-policy" value="freeze" id="cib-bootstrap-options-no-quorum-policy"/>
Nov 13 13:45:33 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++         <nvpair name="stonith-enabled" value="true" id="cib-bootstrap-options-stonith-enabled"/>
Nov 13 13:45:33 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++         <nvpair name="startup-fencing" value="false" id="cib-bootstrap-options-startup-fencing"/>
Nov 13 13:45:33 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++         <nvpair name="stonith-timeout" value="60s" id="cib-bootstrap-options-stonith-timeout"/>
Nov 13 13:45:33 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++         <nvpair name="crmd-transition-delay" value="2s" id="cib-bootstrap-options-crmd-transition-delay"/>
Nov 13 13:45:33 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+        </cluster_property_set>
Nov 13 13:45:33 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+      </crm_config>
Nov 13 13:45:33 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+      <resources>
Nov 13 13:45:33 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++       <primitive id="F1" class="stonith" type="external/libvirt">
Nov 13 13:45:33 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++         <instance_attributes id="F1-instance_attributes">
Nov 13 13:45:33 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++           <nvpair name="hostlist" value="vm3" id="F1-instance_attributes-hostlist"/>
Nov 13 13:45:33 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++           <nvpair name="hypervisor_uri" value="qemu+ssh://bl460g1n6/system" id="F1-instance_attributes-hypervisor_uri"/>
Nov 13 13:45:33 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++         </instance_attributes>
Nov 13 13:45:33 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++         <operations>
Nov 13 13:45:33 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++           <op name="start" interval="0s" timeout="60s" id="F1-start-0s"/>
Nov 13 13:45:33 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++           <op name="monitor" interval="3600s" timeout="60s" id="F1-monitor-3600s"/>
Nov 13 13:45:33 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++           <op name="stop" interval="0s" timeout="60s" id="F1-stop-0s"/>
Nov 13 13:45:33 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++         </operations>
Nov 13 13:45:33 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++       </primitive>
Nov 13 13:45:33 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++       <primitive id="pDummy" class="ocf" provider="pacemaker" type="Dummy">
Nov 13 13:45:33 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++         <operations>
Nov 13 13:45:33 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++           <op name="monitor" interval="10s" timeout="300s" on-fail="fence" id="pDummy-monitor-10s"/>
Nov 13 13:45:33 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++         </operations>
Nov 13 13:45:33 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++       </primitive>
Nov 13 13:45:33 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+      </resources>
Nov 13 13:45:33 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+      <constraints>
Nov 13 13:45:33 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++       <rsc_location id="l1" rsc="pDummy">
Nov 13 13:45:33 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++         <rule score="100" id="l1-rule">
Nov 13 13:45:33 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++           <expression attribute="#uname" operation="eq" value="vm3" id="l1-expression"/>
Nov 13 13:45:33 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++         </rule>
Nov 13 13:45:33 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++       </rsc_location>
Nov 13 13:45:33 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++       <rsc_location id="l2" rsc="F1">
Nov 13 13:45:33 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++         <rule score="100" id="l2-rule">
Nov 13 13:45:33 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++           <expression attribute="#uname" operation="eq" value="vm1" id="l2-expression"/>
Nov 13 13:45:33 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++         </rule>
Nov 13 13:45:33 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++         <rule score="100" id="l2-rule-0">
Nov 13 13:45:33 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++           <expression attribute="#uname" operation="eq" value="vm2" id="l2-expression-0"/>
Nov 13 13:45:33 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++         </rule>
Nov 13 13:45:33 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++         <rule score="-INFINITY" id="l2-rule-1">
Nov 13 13:45:33 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++           <expression attribute="#uname" operation="eq" value="vm3" id="l2-expression-1"/>
Nov 13 13:45:33 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++         </rule>
Nov 13 13:45:33 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++       </rsc_location>
Nov 13 13:45:33 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+      </constraints>
Nov 13 13:45:33 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++     <fencing-topology>
Nov 13 13:45:33 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++       <fencing-level target="vm3" devices="F1" index="1" id="fencing"/>
Nov 13 13:45:33 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++     </fencing-topology>
Nov 13 13:45:33 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++     <rsc_defaults>
Nov 13 13:45:33 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++       <meta_attributes id="rsc-options">
Nov 13 13:45:33 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++         <nvpair name="resource-stickiness" value="INFINITY" id="rsc-options-resource-stickiness"/>
Nov 13 13:45:33 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++         <nvpair name="migration-threshold" value="1" id="rsc-options-migration-threshold"/>
Nov 13 13:45:33 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++       </meta_attributes>
Nov 13 13:45:33 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++     </rsc_defaults>
Nov 13 13:45:33 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+    </configuration>
Nov 13 13:45:33 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+  </cib>
Nov 13 13:45:33 [31460] vm2 stonith-ng: (      main.c:521   )   trace: register_fencing_topology: 	Updating vm3[1] (fencing) to F1
Nov 13 13:45:33 [31459] vm2        cib: ( callbacks.c:761   )    info: cib_process_request: 	Completed cib_apply_diff operation for section 'all': OK (rc=0, origin=vm1/cibadmin/2, version=0.6.1)
Nov 13 13:45:33 [31459] vm2        cib: ( callbacks.c:761   )    info: cib_process_request: 	Completed cib_query operation for section crm_config: OK (rc=0, origin=local/crmd/12, version=0.6.1)
Nov 13 13:45:33 [31464] vm2       crmd: (   control.c:915   )   debug: config_query_callback: 	Call 12 : Parsing CIB options
Nov 13 13:45:33 [31464] vm2       crmd: (   control.c:939   )   debug: config_query_callback: 	Shutdown escalation occurs after: 1200000ms
Nov 13 13:45:33 [31464] vm2       crmd: (   control.c:946   )   debug: config_query_callback: 	Checking for expired actions every 900000ms
Nov 13 13:45:33 [31460] vm2 stonith-ng: (  commands.c:970   )    info: stonith_level_remove: 	Node vm3 not found (0 active entries)
Nov 13 13:45:33 [31460] vm2 stonith-ng: (      main.c:369   )   trace: do_stonith_notify: 	Notifying clients
Nov 13 13:45:33 [31460] vm2 stonith-ng: (      main.c:372   )   trace: do_stonith_notify: 	Notify complete
Nov 13 13:45:33 [31460] vm2 stonith-ng: (  commands.c:937   )   trace: stonith_level_register: 	Added vm3 to the topology (1 active entries)
Nov 13 13:45:33 [31460] vm2 stonith-ng: (  commands.c:948   )   trace: stonith_level_register: 	Adding device 'F1' for vm3 (1)
Nov 13 13:45:33 [31460] vm2 stonith-ng: (  commands.c:952   )    info: stonith_level_register: 	Node vm3 has 1 active fencing levels
Nov 13 13:45:33 [31460] vm2 stonith-ng: (      main.c:369   )   trace: do_stonith_notify: 	Notifying clients
Nov 13 13:45:33 [31460] vm2 stonith-ng: (      main.c:372   )   trace: do_stonith_notify: 	Notify complete
Nov 13 13:45:33 [31460] vm2 stonith-ng: (      main.c:758   )   trace: update_cib_stonith_devices: 	new constraint   <rsc_location id="l1" rsc="pDummy" __crm_diff_marker__="added:top">
Nov 13 13:45:33 [31460] vm2 stonith-ng: (      main.c:758   )   trace: update_cib_stonith_devices: 	new constraint     <rule score="100" id="l1-rule">
Nov 13 13:45:33 [31460] vm2 stonith-ng: (      main.c:758   )   trace: update_cib_stonith_devices: 	new constraint       <expression attribute="#uname" operation="eq" value="vm3" id="l1-expression"/>
Nov 13 13:45:33 [31460] vm2 stonith-ng: (      main.c:758   )   trace: update_cib_stonith_devices: 	new constraint     </rule>
Nov 13 13:45:33 [31460] vm2 stonith-ng: (      main.c:758   )   trace: update_cib_stonith_devices: 	new constraint   </rsc_location>
Nov 13 13:45:33 [31460] vm2 stonith-ng: (      main.c:758   )   trace: update_cib_stonith_devices: 	new constraint   <rsc_location id="l2" rsc="F1" __crm_diff_marker__="added:top">
Nov 13 13:45:33 [31460] vm2 stonith-ng: (      main.c:758   )   trace: update_cib_stonith_devices: 	new constraint     <rule score="100" id="l2-rule">
Nov 13 13:45:33 [31460] vm2 stonith-ng: (      main.c:758   )   trace: update_cib_stonith_devices: 	new constraint       <expression attribute="#uname" operation="eq" value="vm1" id="l2-expression"/>
Nov 13 13:45:33 [31460] vm2 stonith-ng: (      main.c:758   )   trace: update_cib_stonith_devices: 	new constraint     </rule>
Nov 13 13:45:33 [31460] vm2 stonith-ng: (      main.c:758   )   trace: update_cib_stonith_devices: 	new constraint     <rule score="100" id="l2-rule-0">
Nov 13 13:45:33 [31460] vm2 stonith-ng: (      main.c:758   )   trace: update_cib_stonith_devices: 	new constraint       <expression attribute="#uname" operation="eq" value="vm2" id="l2-expression-0"/>
Nov 13 13:45:33 [31460] vm2 stonith-ng: (      main.c:758   )   trace: update_cib_stonith_devices: 	new constraint     </rule>
Nov 13 13:45:33 [31460] vm2 stonith-ng: (      main.c:758   )   trace: update_cib_stonith_devices: 	new constraint     <rule score="-INFINITY" id="l2-rule-1">
Nov 13 13:45:33 [31460] vm2 stonith-ng: (      main.c:758   )   trace: update_cib_stonith_devices: 	new constraint       <expression attribute="#uname" operation="eq" value="vm3" id="l2-expression-1"/>
Nov 13 13:45:33 [31460] vm2 stonith-ng: (      main.c:758   )   trace: update_cib_stonith_devices: 	new constraint     </rule>
Nov 13 13:45:33 [31460] vm2 stonith-ng: (      main.c:758   )   trace: update_cib_stonith_devices: 	new constraint   </rsc_location>
Nov 13 13:45:33 [31460] vm2 stonith-ng: (      main.c:787   )   trace: update_cib_stonith_devices: 	Fencing resource F1 was added or modified
Nov 13 13:45:33 [31460] vm2 stonith-ng: (      main.c:795   )    info: update_cib_stonith_devices: 	Updating device list from the cib: new resource
Nov 13 13:45:33 [31460] vm2 stonith-ng: (    unpack.c:98    )   debug: unpack_config: 	STONITH timeout: 60000
Nov 13 13:45:33 [31460] vm2 stonith-ng: (    unpack.c:102   )   debug: unpack_config: 	STONITH of failed nodes is enabled
Nov 13 13:45:33 [31460] vm2 stonith-ng: (    unpack.c:109   )   debug: unpack_config: 	Stop all active resources: false
Nov 13 13:45:33 [31460] vm2 stonith-ng: (    unpack.c:113   )   debug: unpack_config: 	Cluster is symmetric - resources can run anywhere by default
Nov 13 13:45:33 [31460] vm2 stonith-ng: (    unpack.c:118   )   debug: unpack_config: 	Default stickiness: 0
Nov 13 13:45:33 [31460] vm2 stonith-ng: (    unpack.c:152   )   debug: unpack_config: 	On loss of CCM Quorum: Freeze resources
Nov 13 13:45:33 [31460] vm2 stonith-ng: (    unpack.c:201   )   debug: unpack_config: 	Node scores: 'red' = -INFINITY, 'yellow' = 0, 'green' = 0
Nov 13 13:45:33 [31460] vm2 stonith-ng: (    unpack.c:418   ) warning: handle_startup_fencing: 	Blind faith: not fencing unseen nodes
Nov 13 13:45:33 [31460] vm2 stonith-ng: (    unpack.c:509   )   debug: unpack_domains: 	Unpacking domains
Nov 13 13:45:33 [31460] vm2 stonith-ng: (      main.c:666   )    info: cib_device_update: 	Device F1 is allowed on vm2: score=100
Nov 13 13:45:33 [31460] vm2 stonith-ng: (      main.c:675   )   trace: cib_device_update: 	 hostlist=vm3
Nov 13 13:45:33 [31460] vm2 stonith-ng: (      main.c:675   )   trace: cib_device_update: 	 hypervisor_uri=qemu+ssh://bl460g1n6/system
Nov 13 13:45:33 [31460] vm2 stonith-ng: ( st_client.c:576   )    info: stonith_action_create: 	Initiating action metadata for agent fence_legacy (target=(null))
Nov 13 13:45:33 [31460] vm2 stonith-ng: ( st_client.c:739   )   debug: internal_stonith_action_execute: 	forking
Nov 13 13:45:33 [31464] vm2       crmd: (  messages.c:729   )   debug: handle_request: 	Raising I_JOIN_OFFER: join-3
Nov 13 13:45:33 [31464] vm2       crmd: (       fsa.c:193   )   debug: s_crmd_fsa: 	Processing I_JOIN_OFFER: [ state=S_PENDING cause=C_HA_MESSAGE origin=route_message ]
Nov 13 13:45:33 [31464] vm2       crmd: (     utils.c:981   )    info: update_dc: 	Set DC to vm1 (3.0.8)
Nov 13 13:45:33 [31464] vm2       crmd: (join_client.:135   )   debug: do_cl_join_offer_respond: 	do_cl_join_offer_respond added action A_DC_TIMER_STOP to the FSA
Nov 13 13:45:33 [31464] vm2       crmd: (  election.c:204   )   debug: crm_compare_age: 	Lose: 0.38994 vs 0.53991 (usec)
Nov 13 13:45:33 [31464] vm2       crmd: (  election.c:511   )    info: election_count_vote: 	Election 4 (owner: 3232261517) lost: vote from vm1 (Uptime)
Nov 13 13:45:33 [31464] vm2       crmd: (     utils.c:984   )    info: update_dc: 	Unset DC. Was vm1
Nov 13 13:45:33 [31464] vm2       crmd: (  election.c:91    )   debug: do_election_check: 	Ignore election check: we are not in an election
Nov 13 13:45:33 [31464] vm2       crmd: (       fsa.c:193   )   debug: s_crmd_fsa: 	Processing I_PENDING: [ state=S_PENDING cause=C_FSA_INTERNAL origin=do_election_count_vote ]
Nov 13 13:45:33 [31464] vm2       crmd: (      misc.c:47    )    info: do_log: 	FSA: Input I_PENDING from do_election_count_vote() received in state S_PENDING
Nov 13 13:45:33 [31464] vm2       crmd: (     utils.c:192   )   debug: crm_timer_start: 	Started Election Trigger (I_DC_TIMEOUT:20000ms), src=29
Nov 13 13:45:33 [31459] vm2        cib: (        io.c:596   )   debug: activateCibXml: 	Triggering CIB write for cib_apply_diff op
Nov 13 13:45:33 [31460] vm2 stonith-ng: ( st_client.c:785   )   debug: internal_stonith_action_execute: 	sending args
Nov 13 13:45:33 [31464] vm2       crmd: (  messages.c:729   )   debug: handle_request: 	Raising I_JOIN_OFFER: join-4
Nov 13 13:45:33 [31464] vm2       crmd: (       fsa.c:193   )   debug: s_crmd_fsa: 	Processing I_JOIN_OFFER: [ state=S_PENDING cause=C_HA_MESSAGE origin=route_message ]
Nov 13 13:45:33 [31464] vm2       crmd: (     utils.c:981   )    info: update_dc: 	Set DC to vm1 (3.0.8)
Nov 13 13:45:33 [31464] vm2       crmd: (join_client.:135   )   debug: do_cl_join_offer_respond: 	do_cl_join_offer_respond added action A_DC_TIMER_STOP to the FSA
Nov 13 13:45:33 [31459] vm2        cib: ( callbacks.c:761   )    info: cib_process_request: 	Completed cib_apply_diff operation for section crm_config: OK (rc=0, origin=vm1/crmd/39, version=0.7.1)
Nov 13 13:45:33 [31459] vm2        cib: (        io.c:596   )   debug: activateCibXml: 	Triggering CIB write for cib_apply_diff op
Nov 13 13:45:33 [31459] vm2        cib: ( callbacks.c:761   )    info: cib_process_request: 	Completed cib_apply_diff operation for section crm_config: OK (rc=0, origin=vm1/crmd/41, version=0.8.1)
Nov 13 13:45:33 [31459] vm2        cib: ( callbacks.c:761   )    info: cib_process_request: 	Completed cib_query operation for section 'all': OK (rc=0, origin=local/crmd/13, version=0.8.1)
Nov 13 13:45:33 [31459] vm2        cib: ( callbacks.c:761   )    info: cib_process_request: 	Completed cib_query operation for section 'all': OK (rc=0, origin=local/crmd/14, version=0.8.1)
Nov 13 13:45:33 [31464] vm2       crmd: (join_client.:157   )   debug: join_query_callback: 	Respond to join offer join-4
Nov 13 13:45:33 [31464] vm2       crmd: (join_client.:158   )   debug: join_query_callback: 	Acknowledging vm1 as our DC
Nov 13 13:45:33 [31459] vm2        cib: ( callbacks.c:761   )    info: cib_process_request: 	Completed cib_query operation for section crm_config: OK (rc=0, origin=local/crmd/15, version=0.8.1)
Nov 13 13:45:33 [31464] vm2       crmd: (   control.c:915   )   debug: config_query_callback: 	Call 15 : Parsing CIB options
Nov 13 13:45:33 [31464] vm2       crmd: (   control.c:939   )   debug: config_query_callback: 	Shutdown escalation occurs after: 1200000ms
Nov 13 13:45:33 [31464] vm2       crmd: (   control.c:946   )   debug: config_query_callback: 	Checking for expired actions every 900000ms
Nov 13 13:45:33 [31459] vm2        cib: ( callbacks.c:761   )    info: cib_process_request: 	Completed cib_query operation for section crm_config: OK (rc=0, origin=local/crmd/16, version=0.8.1)
Nov 13 13:45:33 [31464] vm2       crmd: (   control.c:915   )   debug: config_query_callback: 	Call 16 : Parsing CIB options
Nov 13 13:45:33 [31464] vm2       crmd: (   control.c:939   )   debug: config_query_callback: 	Shutdown escalation occurs after: 1200000ms
Nov 13 13:45:33 [31464] vm2       crmd: (   control.c:946   )   debug: config_query_callback: 	Checking for expired actions every 900000ms
Nov 13 13:45:33 [31464] vm2       crmd: (  messages.c:733   )   debug: handle_request: 	Raising I_JOIN_RESULT: join-4
Nov 13 13:45:33 [31464] vm2       crmd: (       fsa.c:193   )   debug: s_crmd_fsa: 	Processing I_JOIN_RESULT: [ state=S_PENDING cause=C_HA_MESSAGE origin=route_message ]
Nov 13 13:45:33 [31464] vm2       crmd: (join_client.:231   )   debug: do_cl_join_finalize_respond: 	Confirming join join-4: join_ack_nack
Nov 13 13:45:33 [31464] vm2       crmd: (join_client.:240   )   debug: do_cl_join_finalize_respond: 	join-4: Join complete.  Sending local LRM status to vm1
Nov 13 13:45:33 [31464] vm2       crmd: (     utils.c:2001  )   debug: attrd_update_delegate: 	Sent update: (null)=(null) for localhost
Nov 13 13:45:33 [31464] vm2       crmd: (       fsa.c:193   )   debug: s_crmd_fsa: 	Processing I_NOT_DC: [ state=S_PENDING cause=C_HA_MESSAGE origin=do_cl_join_finalize_respond ]
Nov 13 13:45:33 [31462] vm2      attrd: (      main.c:221   )   trace: attrd_ipc_dispatch: 	Processing msg from 31464 (0x874000)
Nov 13 13:45:33 [31462] vm2      attrd: (      main.c:222   )   trace: attrd_ipc_dispatch: 	attrd_ipc_dispatch   <attrd_update_delegate t="attrd" src="crmd" task="refresh" attr_section="status" attr_is_remote="0"/>
Nov 13 13:45:33 [31464] vm2       crmd: (      misc.c:47    )    info: do_log: 	FSA: Input I_NOT_DC from do_cl_join_finalize_respond() received in state S_PENDING
Nov 13 13:45:33 [31464] vm2       crmd: (       fsa.c:502   )  notice: do_state_transition: 	State transition S_PENDING -> S_NOT_DC [ input=I_NOT_DC cause=C_HA_MESSAGE origin=do_cl_join_finalize_respond ]
Nov 13 13:45:33 [31459] vm2        cib: (   cib_ops.c:222   )    info: cib_process_replace: 	Digest matched on replace from vm1: b65668c649a0f8a465a42db6c017bc19
Nov 13 13:45:33 [31459] vm2        cib: (   cib_ops.c:258   )    info: cib_process_replace: 	Replaced 0.8.1 with 0.8.1 from vm1
Nov 13 13:45:33 [31459] vm2        cib: (        io.c:596   )   debug: activateCibXml: 	Triggering CIB write for cib_replace op
Nov 13 13:45:33 [31459] vm2        cib: ( callbacks.c:761   )    info: cib_process_request: 	Completed cib_replace operation for section 'all': OK (rc=0, origin=vm1/crmd/47, version=0.8.1)
Nov 13 13:45:33 [31459] vm2        cib: (        io.c:738   )    info: write_cib_contents: 	Archived previous version as /var/lib/pacemaker/cib/cib-2.raw
Nov 13 13:45:33 [31459] vm2        cib: (        io.c:748   )   debug: write_cib_contents: 	Writing CIB to disk
Nov 13 13:45:33 [31459] vm2        cib: (        io.c:773   )    info: write_cib_contents: 	Wrote version 0.6.0 of the CIB to disk (digest: 2db643db6cb3c3f1825600265949deb4)
Nov 13 13:45:33 [31459] vm2        cib: (        io.c:781   )   debug: write_cib_contents: 	Wrote digest 2db643db6cb3c3f1825600265949deb4 to disk
Nov 13 13:45:33 [31459] vm2        cib: (        io.c:259   )    info: retrieveCib: 	Reading cluster configuration from: /var/lib/pacemaker/cib/cib.3jZz9s (digest: /var/lib/pacemaker/cib/cib.c9VpRX)
Nov 13 13:45:33 [31459] vm2        cib: (        io.c:786   )   debug: write_cib_contents: 	Activating /var/lib/pacemaker/cib/cib.3jZz9s
Nov 13 13:45:33 [31459] vm2        cib: ( callbacks.c:761   )    info: cib_process_request: 	Completed cib_apply_diff operation for section //node_state[@uname='vm3']/lrm: OK (rc=0, origin=vm1/crmd/51, version=0.8.2)
Nov 13 13:45:33 [31459] vm2        cib: (        io.c:738   )    info: write_cib_contents: 	Archived previous version as /var/lib/pacemaker/cib/cib-3.raw
Nov 13 13:45:33 [31459] vm2        cib: (        io.c:748   )   debug: write_cib_contents: 	Writing CIB to disk
Nov 13 13:45:33 [31459] vm2        cib: ( callbacks.c:761   )    info: cib_process_request: 	Completed cib_apply_diff operation for section status: OK (rc=0, origin=vm1/crmd/52, version=0.8.3)
Nov 13 13:45:33 [31459] vm2        cib: ( callbacks.c:761   )    info: cib_process_request: 	Completed cib_apply_diff operation for section //node_state[@uname='vm1']/lrm: OK (rc=0, origin=vm1/crmd/53, version=0.8.4)
Nov 13 13:45:33 [31459] vm2        cib: (        io.c:773   )    info: write_cib_contents: 	Wrote version 0.8.0 of the CIB to disk (digest: 9db35554f5ac4e48336f1bae33d89abc)
Nov 13 13:45:33 [31459] vm2        cib: ( callbacks.c:761   )    info: cib_process_request: 	Completed cib_apply_diff operation for section status: OK (rc=0, origin=vm1/crmd/54, version=0.8.5)
Nov 13 13:45:33 [31459] vm2        cib: (        io.c:781   )   debug: write_cib_contents: 	Wrote digest 9db35554f5ac4e48336f1bae33d89abc to disk
Nov 13 13:45:33 [31459] vm2        cib: (        io.c:259   )    info: retrieveCib: 	Reading cluster configuration from: /var/lib/pacemaker/cib/cib.GKA9cx (digest: /var/lib/pacemaker/cib/cib.XzSBg2)
Nov 13 13:45:33 [31459] vm2        cib: (        io.c:786   )   debug: write_cib_contents: 	Activating /var/lib/pacemaker/cib/cib.GKA9cx
Nov 13 13:45:33 [31459] vm2        cib: ( callbacks.c:761   )    info: cib_process_request: 	Completed cib_apply_diff operation for section //node_state[@uname='vm2']/lrm: OK (rc=0, origin=vm1/crmd/55, version=0.8.6)
Nov 13 13:45:33 [31459] vm2        cib: ( callbacks.c:761   )    info: cib_process_request: 	Completed cib_apply_diff operation for section status: OK (rc=0, origin=vm1/crmd/56, version=0.8.7)
Nov 13 13:45:34 [31460] vm2 stonith-ng: ( st_client.c:867   )   debug: internal_stonith_action_execute: 	result = 0
Nov 13 13:45:34 [31460] vm2 stonith-ng: (  commands.c:781   )   trace: device_has_duplicate: 	No match for F1
Nov 13 13:45:34 [31460] vm2 stonith-ng: (  commands.c:843   )  notice: stonith_device_register: 	Added 'F1' to the device list (1 active devices)
Nov 13 13:45:34 [31460] vm2 stonith-ng: ( cib_utils.c:174   )   debug: log_cib_diff: 	Config update: Local-only Change: 0.7.1
Nov 13 13:45:34 [31460] vm2 stonith-ng: (       xml.c:1496  )   debug: Config update: 	-- <cib epoch="6" admin_epoch="0" num_updates="1"/>
Nov 13 13:45:34 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+  <cib epoch="7" num_updates="1" admin_epoch="0" validate-with="pacemaker-1.2" crm_feature_set="3.0.8" cib-last-written="Wed Nov 13 13:45:33 2013" update-origin="vm1" update-client="crmd" have-quorum="1" dc-uuid="3232261517">
Nov 13 13:45:34 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+    <configuration>
Nov 13 13:45:34 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+      <crm_config>
Nov 13 13:45:34 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+        <cluster_property_set id="cib-bootstrap-options">
Nov 13 13:45:34 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++         <nvpair id="cib-bootstrap-options-dc-version" name="dc-version" value="1.1.10-2383f6c"/>
Nov 13 13:45:34 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+        </cluster_property_set>
Nov 13 13:45:34 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+      </crm_config>
Nov 13 13:45:34 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+    </configuration>
Nov 13 13:45:34 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+  </cib>
Nov 13 13:45:34 [31460] vm2 stonith-ng: ( cib_utils.c:174   )   debug: log_cib_diff: 	Config update: Local-only Change: 0.8.1
Nov 13 13:45:34 [31460] vm2 stonith-ng: (       xml.c:1496  )   debug: Config update: 	-- <cib epoch="7" admin_epoch="0" num_updates="1"/>
Nov 13 13:45:34 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+  <cib epoch="8" num_updates="1" admin_epoch="0" validate-with="pacemaker-1.2" crm_feature_set="3.0.8" cib-last-written="Wed Nov 13 13:45:33 2013" update-origin="vm1" update-client="crmd" have-quorum="1" dc-uuid="3232261517">
Nov 13 13:45:34 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+    <configuration>
Nov 13 13:45:34 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+      <crm_config>
Nov 13 13:45:34 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+        <cluster_property_set id="cib-bootstrap-options">
Nov 13 13:45:34 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++         <nvpair id="cib-bootstrap-options-cluster-infrastructure" name="cluster-infrastructure" value="corosync"/>
Nov 13 13:45:34 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+        </cluster_property_set>
Nov 13 13:45:34 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+      </crm_config>
Nov 13 13:45:34 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+    </configuration>
Nov 13 13:45:34 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+  </cib>
Nov 13 13:45:34 [31460] vm2 stonith-ng: ( cib_utils.c:167   )   debug: Config update: 	Diff: --- 0.8.1
Nov 13 13:45:34 [31460] vm2 stonith-ng: ( cib_utils.c:169   )   debug: Config update: 	Diff: +++ 0.8.2 761ef3207c9a00ca3a190046e551df6b
Nov 13 13:45:34 [31460] vm2 stonith-ng: (       xml.c:1496  )   debug: Config update: 	-  <cib num_updates="1">
Nov 13 13:45:34 [31460] vm2 stonith-ng: (       xml.c:1496  )   debug: Config update: 	-    <status>
Nov 13 13:45:34 [31460] vm2 stonith-ng: (       xml.c:1496  )   debug: Config update: 	-      <node_state id="3232261519">
Nov 13 13:45:34 [31460] vm2 stonith-ng: (       xml.c:1496  )   debug: Config update: 	--       <lrm id="3232261519">
Nov 13 13:45:34 [31460] vm2 stonith-ng: (       xml.c:1496  )   debug: Config update: 	--         <lrm_resources/>
Nov 13 13:45:34 [31460] vm2 stonith-ng: (       xml.c:1496  )   debug: Config update: 	--       </lrm>
Nov 13 13:45:34 [31460] vm2 stonith-ng: (       xml.c:1496  )   debug: Config update: 	-      </node_state>
Nov 13 13:45:34 [31460] vm2 stonith-ng: (       xml.c:1496  )   debug: Config update: 	-    </status>
Nov 13 13:45:34 [31460] vm2 stonith-ng: (       xml.c:1496  )   debug: Config update: 	-  </cib>
Nov 13 13:45:34 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++ <cib epoch="8" num_updates="2" admin_epoch="0" validate-with="pacemaker-1.2" crm_feature_set="3.0.8" cib-last-written="Wed Nov 13 13:45:33 2013" update-origin="vm1" update-client="crmd" have-quorum="1" dc-uuid="3232261517"/>
Nov 13 13:45:34 [31460] vm2 stonith-ng: ( cib_utils.c:167   )   debug: Config update: 	Diff: --- 0.8.2
Nov 13 13:45:34 [31460] vm2 stonith-ng: ( cib_utils.c:169   )   debug: Config update: 	Diff: +++ 0.8.3 70b3476c40002d2b8afe79070f45ed65
Nov 13 13:45:34 [31460] vm2 stonith-ng: (       xml.c:1496  )   debug: Config update: 	-- <cib num_updates="2"/>
Nov 13 13:45:34 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+  <cib epoch="8" num_updates="3" admin_epoch="0" validate-with="pacemaker-1.2" crm_feature_set="3.0.8" cib-last-written="Wed Nov 13 13:45:33 2013" update-origin="vm1" update-client="crmd" have-quorum="1" dc-uuid="3232261517">
Nov 13 13:45:34 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+    <status>
Nov 13 13:45:34 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+      <node_state id="3232261519" uname="vm3" in_ccm="true" crmd="online" crm-debug-origin="do_lrm_query_internal" join="member" expected="member">
Nov 13 13:45:34 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++       <lrm id="3232261519">
Nov 13 13:45:34 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++         <lrm_resources/>
Nov 13 13:45:34 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++       </lrm>
Nov 13 13:45:34 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+      </node_state>
Nov 13 13:45:34 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+    </status>
Nov 13 13:45:34 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+  </cib>
Nov 13 13:45:34 [31460] vm2 stonith-ng: ( cib_utils.c:167   )   debug: Config update: 	Diff: --- 0.8.3
Nov 13 13:45:34 [31460] vm2 stonith-ng: ( cib_utils.c:169   )   debug: Config update: 	Diff: +++ 0.8.4 6223769ad880e2cfd731d4ae34ea4603
Nov 13 13:45:34 [31460] vm2 stonith-ng: (       xml.c:1496  )   debug: Config update: 	-  <cib num_updates="3">
Nov 13 13:45:34 [31460] vm2 stonith-ng: (       xml.c:1496  )   debug: Config update: 	-    <status>
Nov 13 13:45:34 [31460] vm2 stonith-ng: (       xml.c:1496  )   debug: Config update: 	-      <node_state id="3232261517">
Nov 13 13:45:34 [31460] vm2 stonith-ng: (       xml.c:1496  )   debug: Config update: 	--       <lrm id="3232261517">
Nov 13 13:45:34 [31460] vm2 stonith-ng: (       xml.c:1496  )   debug: Config update: 	--         <lrm_resources/>
Nov 13 13:45:34 [31460] vm2 stonith-ng: (       xml.c:1496  )   debug: Config update: 	--       </lrm>
Nov 13 13:45:34 [31460] vm2 stonith-ng: (       xml.c:1496  )   debug: Config update: 	-      </node_state>
Nov 13 13:45:34 [31460] vm2 stonith-ng: (       xml.c:1496  )   debug: Config update: 	-    </status>
Nov 13 13:45:34 [31460] vm2 stonith-ng: (       xml.c:1496  )   debug: Config update: 	-  </cib>
Nov 13 13:45:34 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++ <cib epoch="8" num_updates="4" admin_epoch="0" validate-with="pacemaker-1.2" crm_feature_set="3.0.8" cib-last-written="Wed Nov 13 13:45:33 2013" update-origin="vm1" update-client="crmd" have-quorum="1" dc-uuid="3232261517"/>
Nov 13 13:45:34 [31460] vm2 stonith-ng: ( cib_utils.c:167   )   debug: Config update: 	Diff: --- 0.8.4
Nov 13 13:45:34 [31460] vm2 stonith-ng: ( cib_utils.c:169   )   debug: Config update: 	Diff: +++ 0.8.5 ec848f93df58e6ea8292c890ceeba4d9
Nov 13 13:45:34 [31460] vm2 stonith-ng: (       xml.c:1496  )   debug: Config update: 	-- <cib num_updates="4"/>
Nov 13 13:45:34 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+  <cib epoch="8" num_updates="5" admin_epoch="0" validate-with="pacemaker-1.2" crm_feature_set="3.0.8" cib-last-written="Wed Nov 13 13:45:33 2013" update-origin="vm1" update-client="crmd" have-quorum="1" dc-uuid="3232261517">
Nov 13 13:45:34 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+    <status>
Nov 13 13:45:34 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+      <node_state id="3232261517" uname="vm1" in_ccm="true" crmd="online" crm-debug-origin="do_lrm_query_internal" join="member" expected="member">
Nov 13 13:45:34 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++       <lrm id="3232261517">
Nov 13 13:45:34 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++         <lrm_resources/>
Nov 13 13:45:34 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++       </lrm>
Nov 13 13:45:34 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+      </node_state>
Nov 13 13:45:34 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+    </status>
Nov 13 13:45:34 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+  </cib>
Nov 13 13:45:34 [31460] vm2 stonith-ng: ( cib_utils.c:167   )   debug: Config update: 	Diff: --- 0.8.5
Nov 13 13:45:34 [31460] vm2 stonith-ng: ( cib_utils.c:169   )   debug: Config update: 	Diff: +++ 0.8.6 b9a94f1abf0121139641067408b3dbe0
Nov 13 13:45:34 [31460] vm2 stonith-ng: (       xml.c:1496  )   debug: Config update: 	-  <cib num_updates="5">
Nov 13 13:45:34 [31460] vm2 stonith-ng: (       xml.c:1496  )   debug: Config update: 	-    <status>
Nov 13 13:45:34 [31460] vm2 stonith-ng: (       xml.c:1496  )   debug: Config update: 	-      <node_state id="3232261518">
Nov 13 13:45:34 [31460] vm2 stonith-ng: (       xml.c:1496  )   debug: Config update: 	--       <lrm id="3232261518">
Nov 13 13:45:34 [31460] vm2 stonith-ng: (       xml.c:1496  )   debug: Config update: 	--         <lrm_resources/>
Nov 13 13:45:34 [31460] vm2 stonith-ng: (       xml.c:1496  )   debug: Config update: 	--       </lrm>
Nov 13 13:45:34 [31460] vm2 stonith-ng: (       xml.c:1496  )   debug: Config update: 	-      </node_state>
Nov 13 13:45:34 [31460] vm2 stonith-ng: (       xml.c:1496  )   debug: Config update: 	-    </status>
Nov 13 13:45:34 [31460] vm2 stonith-ng: (       xml.c:1496  )   debug: Config update: 	-  </cib>
Nov 13 13:45:34 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++ <cib epoch="8" num_updates="6" admin_epoch="0" validate-with="pacemaker-1.2" crm_feature_set="3.0.8" cib-last-written="Wed Nov 13 13:45:33 2013" update-origin="vm1" update-client="crmd" have-quorum="1" dc-uuid="3232261517"/>
Nov 13 13:45:34 [31460] vm2 stonith-ng: ( cib_utils.c:167   )   debug: Config update: 	Diff: --- 0.8.6
Nov 13 13:45:34 [31460] vm2 stonith-ng: ( cib_utils.c:169   )   debug: Config update: 	Diff: +++ 0.8.7 8afde5af943ecd2a85a00f392002038c
Nov 13 13:45:34 [31460] vm2 stonith-ng: (       xml.c:1496  )   debug: Config update: 	-- <cib num_updates="6"/>
Nov 13 13:45:34 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+  <cib epoch="8" num_updates="7" admin_epoch="0" validate-with="pacemaker-1.2" crm_feature_set="3.0.8" cib-last-written="Wed Nov 13 13:45:33 2013" update-origin="vm1" update-client="crmd" have-quorum="1" dc-uuid="3232261517">
Nov 13 13:45:34 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+    <status>
Nov 13 13:45:34 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+      <node_state id="3232261518" uname="vm2" in_ccm="true" crmd="online" crm-debug-origin="do_lrm_query_internal" join="member" expected="member">
Nov 13 13:45:34 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++       <lrm id="3232261518">
Nov 13 13:45:34 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++         <lrm_resources/>
Nov 13 13:45:34 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++       </lrm>
Nov 13 13:45:34 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+      </node_state>
Nov 13 13:45:34 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+    </status>
Nov 13 13:45:34 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+  </cib>
Nov 13 13:45:35 [31461] vm2       lrmd: (      lrmd.c:1072  )    info: process_lrmd_get_rsc_info: 	Resource 'F1' not found (0 active resources)
Nov 13 13:45:35 [31461] vm2       lrmd: (      main.c:179   )   trace: lrmd_server_send_reply: 	sending reply to client (a8bb49c2-413e-47b0-9c1f-7223841526a5) with msg id 28
Nov 13 13:45:35 [31461] vm2       lrmd: (      lrmd.c:1313  )   debug: process_lrmd_message: 	Processed lrmd_rsc_info operation from a8bb49c2-413e-47b0-9c1f-7223841526a5: rc=0, reply=0, notify=0, exit=4201920
Nov 13 13:45:35 [31461] vm2       lrmd: (      lrmd.c:1047  )    info: process_lrmd_rsc_register: 	Added 'F1' to the rsc list (1 active resources)
Nov 13 13:45:35 [31461] vm2       lrmd: (      lrmd.c:1313  )   debug: process_lrmd_message: 	Processed lrmd_rsc_register operation from a8bb49c2-413e-47b0-9c1f-7223841526a5: rc=0, reply=1, notify=1, exit=4201920
Nov 13 13:45:35 [31461] vm2       lrmd: (      main.c:179   )   trace: lrmd_server_send_reply: 	sending reply to client (a8bb49c2-413e-47b0-9c1f-7223841526a5) with msg id 29
Nov 13 13:45:35 [31461] vm2       lrmd: (      main.c:196   )   trace: lrmd_server_send_notify: 	sending notify to client (a8bb49c2-413e-47b0-9c1f-7223841526a5)
Nov 13 13:45:35 [31461] vm2       lrmd: (      main.c:179   )   trace: lrmd_server_send_reply: 	sending reply to client (a8bb49c2-413e-47b0-9c1f-7223841526a5) with msg id 30
Nov 13 13:45:35 [31461] vm2       lrmd: (      lrmd.c:1313  )   debug: process_lrmd_message: 	Processed lrmd_rsc_info operation from a8bb49c2-413e-47b0-9c1f-7223841526a5: rc=0, reply=0, notify=0, exit=4201920
Nov 13 13:45:35 [31464] vm2       crmd: (       lrm.c:1784  )    info: do_lrm_rsc_op: 	Performing key=7:1:7:154fb289-24e8-407e-9a03-69a510480b60 op=F1_monitor_0
Nov 13 13:45:35 [31461] vm2       lrmd: (      lrmd.c:1313  )   debug: process_lrmd_message: 	Processed lrmd_rsc_exec operation from a8bb49c2-413e-47b0-9c1f-7223841526a5: rc=5, reply=1, notify=0, exit=4201920
Nov 13 13:45:35 [31461] vm2       lrmd: (      main.c:179   )   trace: lrmd_server_send_reply: 	sending reply to client (a8bb49c2-413e-47b0-9c1f-7223841526a5) with msg id 31
Nov 13 13:45:35 [31461] vm2       lrmd: (      lrmd.c:122   )   debug: log_execute: 	executing - rsc:F1 action:monitor call_id:5
Nov 13 13:45:35 [31460] vm2 stonith-ng: (       ipc.c:327   )    info: crm_client_new: 	Connecting 0x103cfc0 for uid=0 gid=0 pid=31461 id=5ac4cd86-62c4-4dd0-83b0-0320ceb1dac6
Nov 13 13:45:35 [31460] vm2 stonith-ng: ( ipc_setup.c:484   )   debug: handle_new_connection: 	IPC credentials authenticated (31460-31461-10)
Nov 13 13:45:35 [31460] vm2 stonith-ng: (   ipc_shm.c:295   )   debug: qb_ipcs_shm_connect: 	connecting to client [31461]
Nov 13 13:45:35 [31460] vm2 stonith-ng: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:131085; real_size:135168; rb->word_size:33792
Nov 13 13:45:35 [31460] vm2 stonith-ng: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:131085; real_size:135168; rb->word_size:33792
Nov 13 13:45:35 [31460] vm2 stonith-ng: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:131085; real_size:135168; rb->word_size:33792
Nov 13 13:45:35 [31461] vm2       lrmd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:131085; real_size:135168; rb->word_size:33792
Nov 13 13:45:35 [31461] vm2       lrmd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:131085; real_size:135168; rb->word_size:33792
Nov 13 13:45:35 [31461] vm2       lrmd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:131085; real_size:135168; rb->word_size:33792
Nov 13 13:45:35 [31460] vm2 stonith-ng: (      main.c:87    )   trace: st_ipc_created: 	Connection created for 0x103cfc0
Nov 13 13:45:35 [31460] vm2 stonith-ng: (      main.c:121   )   trace: st_ipc_dispatch: 	Flags 512/0 for command 1 from lrmd.31461
Nov 13 13:45:35 [31460] vm2 stonith-ng: (      main.c:133   )   trace: st_ipc_dispatch: 	Client[inbound]   <stonith_command t="stonith-ng" st_op="register" st_clientname="lrmd.31461" st_clientid="5ac4cd86-62c4-4dd0-83b0-0320ceb1dac6" st_clientnode="vm2"/>
Nov 13 13:45:35 [31460] vm2 stonith-ng: (  commands.c:2049  )   debug: stonith_command: 	Processing register 1 from lrmd.31461 (               0)
Nov 13 13:45:35 [31461] vm2       lrmd: ( st_client.c:1639  )   debug: stonith_api_signon: 	Connection to STONITH successful
Nov 13 13:45:35 [31460] vm2 stonith-ng: (  commands.c:2063  )    info: stonith_command: 	Processed register from lrmd.31461: OK (0)
Nov 13 13:45:35 [31460] vm2 stonith-ng: (      main.c:121   )   trace: st_ipc_dispatch: 	Flags 512/0 for command 2 from lrmd.31461
Nov 13 13:45:35 [31460] vm2 stonith-ng: (      main.c:133   )   trace: st_ipc_dispatch: 	Client[inbound]   <stonith_set_notification st_op="st_notify" st_notify_activate="st_notify_disconnect" st_clientid="5ac4cd86-62c4-4dd0-83b0-0320ceb1dac6" st_clientname="lrmd.31461" st_clientnode="vm2"/>
Nov 13 13:45:35 [31460] vm2 stonith-ng: (  commands.c:2049  )   debug: stonith_command: 	Processing st_notify 2 from lrmd.31461 (               0)
Nov 13 13:45:35 [31460] vm2 stonith-ng: (  commands.c:1822  )   debug: handle_request: 	Setting st_notify_disconnect callbacks for lrmd.31461 (5ac4cd86-62c4-4dd0-83b0-0320ceb1dac6): ON
Nov 13 13:45:35 [31461] vm2       lrmd: (      lrmd.c:104   )   debug: log_finished: 	finished - rsc:F1 action:monitor call_id:5  exit-code:7 exec-time:13ms queue-time:0ms
Nov 13 13:45:35 [31461] vm2       lrmd: (      main.c:196   )   trace: lrmd_server_send_notify: 	sending notify to client (a8bb49c2-413e-47b0-9c1f-7223841526a5)
Nov 13 13:45:35 [31461] vm2       lrmd: (      lrmd.c:1072  )    info: process_lrmd_get_rsc_info: 	Resource 'pDummy' not found (1 active resources)
Nov 13 13:45:35 [31461] vm2       lrmd: (      main.c:179   )   trace: lrmd_server_send_reply: 	sending reply to client (a8bb49c2-413e-47b0-9c1f-7223841526a5) with msg id 32
Nov 13 13:45:35 [31461] vm2       lrmd: (      lrmd.c:1313  )   debug: process_lrmd_message: 	Processed lrmd_rsc_info operation from a8bb49c2-413e-47b0-9c1f-7223841526a5: rc=0, reply=0, notify=0, exit=4201920
Nov 13 13:45:35 [31461] vm2       lrmd: (      lrmd.c:1047  )    info: process_lrmd_rsc_register: 	Added 'pDummy' to the rsc list (2 active resources)
Nov 13 13:45:35 [31461] vm2       lrmd: (      lrmd.c:1313  )   debug: process_lrmd_message: 	Processed lrmd_rsc_register operation from a8bb49c2-413e-47b0-9c1f-7223841526a5: rc=0, reply=1, notify=1, exit=4201920
Nov 13 13:45:35 [31461] vm2       lrmd: (      main.c:179   )   trace: lrmd_server_send_reply: 	sending reply to client (a8bb49c2-413e-47b0-9c1f-7223841526a5) with msg id 33
Nov 13 13:45:35 [31461] vm2       lrmd: (      main.c:196   )   trace: lrmd_server_send_notify: 	sending notify to client (a8bb49c2-413e-47b0-9c1f-7223841526a5)
Nov 13 13:45:35 [31461] vm2       lrmd: (      main.c:179   )   trace: lrmd_server_send_reply: 	sending reply to client (a8bb49c2-413e-47b0-9c1f-7223841526a5) with msg id 34
Nov 13 13:45:35 [31464] vm2       crmd: (       lrm.c:1784  )    info: do_lrm_rsc_op: 	Performing key=8:1:7:154fb289-24e8-407e-9a03-69a510480b60 op=pDummy_monitor_0
Nov 13 13:45:35 [31461] vm2       lrmd: (      lrmd.c:1313  )   debug: process_lrmd_message: 	Processed lrmd_rsc_info operation from a8bb49c2-413e-47b0-9c1f-7223841526a5: rc=0, reply=0, notify=0, exit=4201920
Nov 13 13:45:35 [31461] vm2       lrmd: (      lrmd.c:1313  )   debug: process_lrmd_message: 	Processed lrmd_rsc_exec operation from a8bb49c2-413e-47b0-9c1f-7223841526a5: rc=9, reply=1, notify=0, exit=4201920
Nov 13 13:45:35 [31461] vm2       lrmd: (      main.c:179   )   trace: lrmd_server_send_reply: 	sending reply to client (a8bb49c2-413e-47b0-9c1f-7223841526a5) with msg id 35
Nov 13 13:45:35 [31464] vm2       crmd: (     utils.c:2104  )   debug: create_operation_update: 	do_update_resource: Updating resource F1 after monitor op complete (interval=0)
Nov 13 13:45:35 [31461] vm2       lrmd: (      lrmd.c:122   )   debug: log_execute: 	executing - rsc:pDummy action:monitor call_id:9
Nov 13 13:45:35 [31460] vm2 stonith-ng: (  commands.c:2063  )    info: stonith_command: 	Processed st_notify from lrmd.31461: OK (0)
Dummy(pDummy)[31485]:	2013/11/13_13:45:35 DEBUG: pDummy monitor : 7
Nov 13 13:45:35 [31461] vm2       lrmd: (services_lin:301   )   debug: operation_finished: 	pDummy_monitor_0:31485 - exited with rc=7
Nov 13 13:45:35 [31461] vm2       lrmd: (services_lin:306   )   debug: operation_finished: 	pDummy_monitor_0:31485:stderr [ -- empty -- ]
Nov 13 13:45:35 [31461] vm2       lrmd: (services_lin:310   )   debug: operation_finished: 	pDummy_monitor_0:31485:stdout [ -- empty -- ]
Nov 13 13:45:35 [31461] vm2       lrmd: (      lrmd.c:104   )   debug: log_finished: 	finished - rsc:pDummy action:monitor call_id:9 pid:31485 exit-code:7 exec-time:120ms queue-time:2ms
Nov 13 13:45:35 [31461] vm2       lrmd: (      main.c:196   )   trace: lrmd_server_send_notify: 	sending notify to client (a8bb49c2-413e-47b0-9c1f-7223841526a5)
Nov 13 13:45:36 [31459] vm2        cib: ( callbacks.c:688   )    info: cib_process_request: 	Forwarding cib_modify operation for section status to master (origin=local/crmd/17)
Nov 13 13:45:36 [31464] vm2       crmd: (       lrm.c:2101  )    info: process_lrm_event: 	LRM operation F1_monitor_0 (call=5, rc=7, cib-update=17, confirmed=true) not running
Nov 13 13:45:36 [31464] vm2       crmd: (       lrm.c:122   )   debug: update_history_cache: 	Updating history for 'F1' with monitor op
Nov 13 13:45:36 [31464] vm2       crmd: (     utils.c:2104  )   debug: create_operation_update: 	do_update_resource: Updating resource pDummy after monitor op complete (interval=0)
Nov 13 13:45:36 [31460] vm2 stonith-ng: ( cib_utils.c:167   )   debug: Config update: 	Diff: --- 0.8.7
Nov 13 13:45:36 [31460] vm2 stonith-ng: ( cib_utils.c:169   )   debug: Config update: 	Diff: +++ 0.8.8 26ed7a0b48fc4ae623a5aee9a3d14dcf
Nov 13 13:45:36 [31460] vm2 stonith-ng: (       xml.c:1496  )   debug: Config update: 	-- <cib num_updates="7"/>
Nov 13 13:45:36 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+  <cib epoch="8" num_updates="8" admin_epoch="0" validate-with="pacemaker-1.2" crm_feature_set="3.0.8" cib-last-written="Wed Nov 13 13:45:33 2013" update-origin="vm1" update-client="crmd" have-quorum="1" dc-uuid="3232261517">
Nov 13 13:45:36 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+    <status>
Nov 13 13:45:36 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+      <node_state id="3232261518" uname="vm2" in_ccm="true" crmd="online" crm-debug-origin="do_update_resource" join="member" expected="member">
Nov 13 13:45:36 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+        <lrm id="3232261518">
Nov 13 13:45:36 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+          <lrm_resources>
Nov 13 13:45:36 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++           <lrm_resource id="F1" type="external/libvirt" class="stonith">
Nov 13 13:45:36 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++             <lrm_rsc_op id="F1_last_0" operation_key="F1_monitor_0" operation="monitor" crm-debug-origin="do_update_resource" crm_feature_set="3.0.8" transition-key="7:1:7:154fb289-24e8-407e-9a03-69a510480b60" transition-magic="0:7;7:1:7:154fb289-24e8-407e-9a03-69a510480b60" call-id="5" rc-code="7" op-status="0" interval="0" last-run="1384317935" last-rc-change="1384317935" exec-time="13" queue-time="0" op-digest="288
Nov 13 13:45:36 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++           </lrm_resource>
Nov 13 13:45:36 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+          </lrm_resources>
Nov 13 13:45:36 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+        </lrm>
Nov 13 13:45:36 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+      </node_state>
Nov 13 13:45:36 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+    </status>
Nov 13 13:45:36 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+  </cib>
Nov 13 13:45:36 [31459] vm2        cib: ( callbacks.c:761   )    info: cib_process_request: 	Completed cib_apply_diff operation for section status: OK (rc=0, origin=vm1/crmd/17, version=0.8.8)
Nov 13 13:45:36 [31460] vm2 stonith-ng: ( cib_utils.c:167   )   debug: Config update: 	Diff: --- 0.8.8
Nov 13 13:45:36 [31460] vm2 stonith-ng: ( cib_utils.c:169   )   debug: Config update: 	Diff: +++ 0.8.9 a620ba287b7786990c988e5680eea772
Nov 13 13:45:36 [31460] vm2 stonith-ng: (       xml.c:1496  )   debug: Config update: 	-- <cib num_updates="8"/>
Nov 13 13:45:36 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+  <cib epoch="8" num_updates="9" admin_epoch="0" validate-with="pacemaker-1.2" crm_feature_set="3.0.8" cib-last-written="Wed Nov 13 13:45:33 2013" update-origin="vm1" update-client="crmd" have-quorum="1" dc-uuid="3232261517">
Nov 13 13:45:36 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+    <status>
Nov 13 13:45:36 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+      <node_state id="3232261517" uname="vm1" in_ccm="true" crmd="online" crm-debug-origin="do_update_resource" join="member" expected="member">
Nov 13 13:45:36 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+        <lrm id="3232261517">
Nov 13 13:45:36 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+          <lrm_resources>
Nov 13 13:45:36 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++           <lrm_resource id="F1" type="external/libvirt" class="stonith">
Nov 13 13:45:36 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++             <lrm_rsc_op id="F1_last_0" operation_key="F1_monitor_0" operation="monitor" crm-debug-origin="do_update_resource" crm_feature_set="3.0.8" transition-key="4:1:7:154fb289-24e8-407e-9a03-69a510480b60" transition-magic="0:7;4:1:7:154fb289-24e8-407e-9a03-69a510480b60" call-id="5" rc-code="7" op-status="0" interval="0" last-run="1384317935" last-rc-change="1384317935" exec-time="23" queue-time="0" op-digest="288
Nov 13 13:45:36 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++           </lrm_resource>
Nov 13 13:45:36 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+          </lrm_resources>
Nov 13 13:45:36 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+        </lrm>
Nov 13 13:45:36 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+      </node_state>
Nov 13 13:45:36 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+    </status>
Nov 13 13:45:36 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+  </cib>
Nov 13 13:45:36 [31459] vm2        cib: ( callbacks.c:761   )    info: cib_process_request: 	Completed cib_apply_diff operation for section status: OK (rc=0, origin=vm1/crmd/61, version=0.8.9)
Nov 13 13:45:36 [31460] vm2 stonith-ng: ( cib_utils.c:167   )   debug: Config update: 	Diff: --- 0.8.9
Nov 13 13:45:36 [31460] vm2 stonith-ng: ( cib_utils.c:169   )   debug: Config update: 	Diff: +++ 0.8.10 1bf63487cfb2465e5f9305b2b310410c
Nov 13 13:45:36 [31460] vm2 stonith-ng: (       xml.c:1496  )   debug: Config update: 	-- <cib num_updates="9"/>
Nov 13 13:45:36 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+  <cib epoch="8" num_updates="10" admin_epoch="0" validate-with="pacemaker-1.2" crm_feature_set="3.0.8" cib-last-written="Wed Nov 13 13:45:33 2013" update-origin="vm1" update-client="crmd" have-quorum="1" dc-uuid="3232261517">
Nov 13 13:45:36 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+    <status>
Nov 13 13:45:36 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+      <node_state id="3232261519" uname="vm3" in_ccm="true" crmd="online" crm-debug-origin="do_update_resource" join="member" expected="member">
Nov 13 13:45:36 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+        <lrm id="3232261519">
Nov 13 13:45:36 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+          <lrm_resources>
Nov 13 13:45:36 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++           <lrm_resource id="F1" type="external/libvirt" class="stonith">
Nov 13 13:45:36 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++             <lrm_rsc_op id="F1_last_0" operation_key="F1_monitor_0" operation="monitor" crm-debug-origin="do_update_resource" crm_feature_set="3.0.8" transition-key="10:1:7:154fb289-24e8-407e-9a03-69a510480b60" transition-magic="0:7;10:1:7:154fb289-24e8-407e-9a03-69a510480b60" call-id="5" rc-code="7" op-status="0" interval="0" last-run="1384317935" last-rc-change="1384317935" exec-time="15" queue-time="0" op-digest="2
Nov 13 13:45:36 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++           </lrm_resource>
Nov 13 13:45:36 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+          </lrm_resources>
Nov 13 13:45:36 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+        </lrm>
Nov 13 13:45:36 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+      </node_state>
Nov 13 13:45:36 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+    </status>
Nov 13 13:45:36 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+  </cib>
Nov 13 13:45:36 [31459] vm2        cib: ( callbacks.c:761   )    info: cib_process_request: 	Completed cib_apply_diff operation for section status: OK (rc=0, origin=vm1/crmd/17, version=0.8.10)
Nov 13 13:45:36 [31464] vm2       crmd: (services_lin:604   )    info: services_os_action_execute: 	Managed Dummy_meta-data_0 process 31506 exited with rc=0
Nov 13 13:45:36 [31464] vm2       crmd: (       lrm.c:565   )   debug: get_rsc_restart_list: 	Attr state is not reloadable
Nov 13 13:45:36 [31464] vm2       crmd: (       lrm.c:565   )   debug: get_rsc_restart_list: 	Attr op_sleep is not reloadable
Nov 13 13:45:36 [31459] vm2        cib: ( callbacks.c:688   )    info: cib_process_request: 	Forwarding cib_modify operation for section status to master (origin=local/crmd/18)
Nov 13 13:45:36 [31464] vm2       crmd: (       lrm.c:2101  )  notice: process_lrm_event: 	LRM operation pDummy_monitor_0 (call=9, rc=7, cib-update=18, confirmed=true) not running
Nov 13 13:45:36 [31464] vm2       crmd: (       lrm.c:122   )   debug: update_history_cache: 	Updating history for 'pDummy' with monitor op
Nov 13 13:45:36 [31460] vm2 stonith-ng: ( cib_utils.c:167   )   debug: Config update: 	Diff: --- 0.8.10
Nov 13 13:45:36 [31460] vm2 stonith-ng: ( cib_utils.c:169   )   debug: Config update: 	Diff: +++ 0.8.11 66f83af99c165cdf0a74a520b9474f1b
Nov 13 13:45:36 [31460] vm2 stonith-ng: (       xml.c:1496  )   debug: Config update: 	-- <cib num_updates="10"/>
Nov 13 13:45:36 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+  <cib epoch="8" num_updates="11" admin_epoch="0" validate-with="pacemaker-1.2" crm_feature_set="3.0.8" cib-last-written="Wed Nov 13 13:45:33 2013" update-origin="vm1" update-client="crmd" have-quorum="1" dc-uuid="3232261517">
Nov 13 13:45:36 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+    <status>
Nov 13 13:45:36 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+      <node_state id="3232261518" uname="vm2" in_ccm="true" crmd="online" crm-debug-origin="do_update_resource" join="member" expected="member">
Nov 13 13:45:36 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+        <lrm id="3232261518">
Nov 13 13:45:36 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+          <lrm_resources>
Nov 13 13:45:36 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++           <lrm_resource id="pDummy" type="Dummy" class="ocf" provider="pacemaker">
Nov 13 13:45:36 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++             <lrm_rsc_op id="pDummy_last_0" operation_key="pDummy_monitor_0" operation="monitor" crm-debug-origin="do_update_resource" crm_feature_set="3.0.8" transition-key="8:1:7:154fb289-24e8-407e-9a03-69a510480b60" transition-magic="0:7;8:1:7:154fb289-24e8-407e-9a03-69a510480b60" call-id="9" rc-code="7" op-status="0" interval="0" last-run="1384317935" last-rc-change="1384317935" exec-time="120" queue-time="2" op-di
Nov 13 13:45:36 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++           </lrm_resource>
Nov 13 13:45:36 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+          </lrm_resources>
Nov 13 13:45:36 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+        </lrm>
Nov 13 13:45:36 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+      </node_state>
Nov 13 13:45:36 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+    </status>
Nov 13 13:45:36 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+  </cib>
Nov 13 13:45:36 [31459] vm2        cib: ( callbacks.c:761   )    info: cib_process_request: 	Completed cib_apply_diff operation for section status: OK (rc=0, origin=vm1/crmd/18, version=0.8.11)
Nov 13 13:45:36 [31460] vm2 stonith-ng: ( cib_utils.c:167   )   debug: Config update: 	Diff: --- 0.8.11
Nov 13 13:45:36 [31460] vm2 stonith-ng: ( cib_utils.c:169   )   debug: Config update: 	Diff: +++ 0.8.12 69b87249349fec166963d00574c1e8d9
Nov 13 13:45:36 [31460] vm2 stonith-ng: (       xml.c:1496  )   debug: Config update: 	-- <cib num_updates="11"/>
Nov 13 13:45:36 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+  <cib epoch="8" num_updates="12" admin_epoch="0" validate-with="pacemaker-1.2" crm_feature_set="3.0.8" cib-last-written="Wed Nov 13 13:45:33 2013" update-origin="vm1" update-client="crmd" have-quorum="1" dc-uuid="3232261517">
Nov 13 13:45:36 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+    <status>
Nov 13 13:45:36 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+      <node_state id="3232261517" uname="vm1" in_ccm="true" crmd="online" crm-debug-origin="do_update_resource" join="member" expected="member">
Nov 13 13:45:36 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+        <lrm id="3232261517">
Nov 13 13:45:36 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+          <lrm_resources>
Nov 13 13:45:36 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++           <lrm_resource id="pDummy" type="Dummy" class="ocf" provider="pacemaker">
Nov 13 13:45:36 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++             <lrm_rsc_op id="pDummy_last_0" operation_key="pDummy_monitor_0" operation="monitor" crm-debug-origin="do_update_resource" crm_feature_set="3.0.8" transition-key="5:1:7:154fb289-24e8-407e-9a03-69a510480b60" transition-magic="0:7;5:1:7:154fb289-24e8-407e-9a03-69a510480b60" call-id="9" rc-code="7" op-status="0" interval="0" last-run="1384317935" last-rc-change="1384317935" exec-time="121" queue-time="2" op-di
Nov 13 13:45:36 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++           </lrm_resource>
Nov 13 13:45:36 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+          </lrm_resources>
Nov 13 13:45:36 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+        </lrm>
Nov 13 13:45:36 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+      </node_state>
Nov 13 13:45:36 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+    </status>
Nov 13 13:45:36 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+  </cib>
Nov 13 13:45:36 [31459] vm2        cib: ( callbacks.c:761   )    info: cib_process_request: 	Completed cib_apply_diff operation for section status: OK (rc=0, origin=vm1/crmd/62, version=0.8.12)
Nov 13 13:45:36 [31464] vm2       crmd: (     utils.c:2001  )   debug: attrd_update_delegate: 	Sent update: probe_complete=true for vm2
Nov 13 13:45:36 [31462] vm2      attrd: (      main.c:221   )   trace: attrd_ipc_dispatch: 	Processing msg from 31464 (0x874000)
Nov 13 13:45:36 [31462] vm2      attrd: (      main.c:222   )   trace: attrd_ipc_dispatch: 	attrd_ipc_dispatch   <attrd_update_delegate t="attrd" src="crmd" task="update" attr_name="probe_complete" attr_value="true" attr_section="status" attr_host="vm2" attr_is_remote="0"/>
Nov 13 13:45:36 [31462] vm2      attrd: (  commands.c:231   )    info: attrd_client_message: 	Broadcasting probe_complete[vm2] = true (writer)
Nov 13 13:45:36 [31462] vm2      attrd: (  commands.c:423   )   trace: attrd_peer_update: 	Unchanged probe_complete[vm1] from vm1 is true
Nov 13 13:45:36 [31462] vm2      attrd: (  commands.c:423   )   trace: attrd_peer_update: 	Unchanged probe_complete[vm2] from vm2 is true
Nov 13 13:45:36 [31460] vm2 stonith-ng: ( cib_utils.c:167   )   debug: Config update: 	Diff: --- 0.8.12
Nov 13 13:45:36 [31460] vm2 stonith-ng: ( cib_utils.c:169   )   debug: Config update: 	Diff: +++ 0.8.13 92a008f0ef62d500c66d49ae54262e4e
Nov 13 13:45:36 [31460] vm2 stonith-ng: (       xml.c:1496  )   debug: Config update: 	-- <cib num_updates="12"/>
Nov 13 13:45:36 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+  <cib epoch="8" num_updates="13" admin_epoch="0" validate-with="pacemaker-1.2" crm_feature_set="3.0.8" cib-last-written="Wed Nov 13 13:45:33 2013" update-origin="vm1" update-client="crmd" have-quorum="1" dc-uuid="3232261517">
Nov 13 13:45:36 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+    <status>
Nov 13 13:45:36 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+      <node_state id="3232261519" uname="vm3" in_ccm="true" crmd="online" crm-debug-origin="do_update_resource" join="member" expected="member">
Nov 13 13:45:36 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+        <lrm id="3232261519">
Nov 13 13:45:36 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+          <lrm_resources>
Nov 13 13:45:36 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++           <lrm_resource id="pDummy" type="Dummy" class="ocf" provider="pacemaker">
Nov 13 13:45:36 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++             <lrm_rsc_op id="pDummy_last_0" operation_key="pDummy_monitor_0" operation="monitor" crm-debug-origin="do_update_resource" crm_feature_set="3.0.8" transition-key="11:1:7:154fb289-24e8-407e-9a03-69a510480b60" transition-magic="0:7;11:1:7:154fb289-24e8-407e-9a03-69a510480b60" call-id="9" rc-code="7" op-status="0" interval="0" last-run="1384317935" last-rc-change="1384317935" exec-time="130" queue-time="2" op-
Nov 13 13:45:36 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++           </lrm_resource>
Nov 13 13:45:36 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+          </lrm_resources>
Nov 13 13:45:36 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+        </lrm>
Nov 13 13:45:36 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+      </node_state>
Nov 13 13:45:36 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+    </status>
Nov 13 13:45:36 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+  </cib>
Nov 13 13:45:36 [31459] vm2        cib: ( callbacks.c:761   )    info: cib_process_request: 	Completed cib_apply_diff operation for section status: OK (rc=0, origin=vm1/crmd/18, version=0.8.13)
Nov 13 13:45:36 [31462] vm2      attrd: (  commands.c:423   )   trace: attrd_peer_update: 	Unchanged probe_complete[vm3] from vm3 is true
Nov 13 13:45:36 [31460] vm2 stonith-ng: ( cib_utils.c:167   )   debug: Config update: 	Diff: --- 0.8.13
Nov 13 13:45:36 [31460] vm2 stonith-ng: ( cib_utils.c:169   )   debug: Config update: 	Diff: +++ 0.8.14 5fa7d2824d05d801e557350c4ebe869b
Nov 13 13:45:36 [31460] vm2 stonith-ng: (       xml.c:1496  )   debug: Config update: 	-  <cib num_updates="13">
Nov 13 13:45:36 [31460] vm2 stonith-ng: (       xml.c:1496  )   debug: Config update: 	-    <status>
Nov 13 13:45:36 [31460] vm2 stonith-ng: (       xml.c:1496  )   debug: Config update: 	-      <node_state id="3232261519">
Nov 13 13:45:36 [31460] vm2 stonith-ng: (       xml.c:1496  )   debug: Config update: 	-        <lrm id="3232261519">
Nov 13 13:45:36 [31460] vm2 stonith-ng: (       xml.c:1496  )   debug: Config update: 	-          <lrm_resources>
Nov 13 13:45:36 [31460] vm2 stonith-ng: (       xml.c:1496  )   debug: Config update: 	-            <lrm_resource id="pDummy">
Nov 13 13:45:36 [31460] vm2 stonith-ng: (       xml.c:1496  )   debug: Config update: 	--             <lrm_rsc_op operation_key="pDummy_monitor_0" operation="monitor" transition-key="11:1:7:154fb289-24e8-407e-9a03-69a510480b60" transition-magic="0:7;11:1:7:154fb289-24e8-407e-9a03-69a510480b60" call-id="9" rc-code="7" last-run="1384317935" last-rc-change="1384317935" exec-time="130" queue-time="2" id="pDummy_last_0"/>
Nov 13 13:45:36 [31460] vm2 stonith-ng: (       xml.c:1496  )   debug: Config update: 	-            </lrm_resource>
Nov 13 13:45:36 [31460] vm2 stonith-ng: (       xml.c:1496  )   debug: Config update: 	-          </lrm_resources>
Nov 13 13:45:36 [31460] vm2 stonith-ng: (       xml.c:1496  )   debug: Config update: 	-        </lrm>
Nov 13 13:45:36 [31460] vm2 stonith-ng: (       xml.c:1496  )   debug: Config update: 	-      </node_state>
Nov 13 13:45:36 [31460] vm2 stonith-ng: (       xml.c:1496  )   debug: Config update: 	-    </status>
Nov 13 13:45:36 [31460] vm2 stonith-ng: (       xml.c:1496  )   debug: Config update: 	-  </cib>
Nov 13 13:45:36 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+  <cib epoch="8" num_updates="14" admin_epoch="0" validate-with="pacemaker-1.2" crm_feature_set="3.0.8" cib-last-written="Wed Nov 13 13:45:33 2013" update-origin="vm1" update-client="crmd" have-quorum="1" dc-uuid="3232261517">
Nov 13 13:45:36 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+    <status>
Nov 13 13:45:36 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+      <node_state id="3232261519" uname="vm3" in_ccm="true" crmd="online" crm-debug-origin="do_update_resource" join="member" expected="member">
Nov 13 13:45:36 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+        <lrm id="3232261519">
Nov 13 13:45:36 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+          <lrm_resources>
Nov 13 13:45:36 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+            <lrm_resource id="pDummy" type="Dummy" class="ocf" provider="pacemaker">
Nov 13 13:45:36 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++             <lrm_rsc_op id="pDummy_last_0" operation_key="pDummy_start_0" operation="start" crm-debug-origin="do_update_resource" crm_feature_set="3.0.8" transition-key="14:1:0:154fb289-24e8-407e-9a03-69a510480b60" transition-magic="0:0;14:1:0:154fb289-24e8-407e-9a03-69a510480b60" call-id="10" rc-code="0" op-status="0" interval="0" last-run="1384317936" last-rc-change="1384317936" exec-time="51" queue-time="0" op-dige
Nov 13 13:45:36 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+            </lrm_resource>
Nov 13 13:45:36 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+          </lrm_resources>
Nov 13 13:45:36 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+        </lrm>
Nov 13 13:45:36 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+      </node_state>
Nov 13 13:45:36 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+    </status>
Nov 13 13:45:36 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+  </cib>
Nov 13 13:45:36 [31459] vm2        cib: ( callbacks.c:761   )    info: cib_process_request: 	Completed cib_apply_diff operation for section status: OK (rc=0, origin=vm1/crmd/19, version=0.8.14)
Nov 13 13:45:36 [31460] vm2 stonith-ng: ( cib_utils.c:167   )   debug: Config update: 	Diff: --- 0.8.14
Nov 13 13:45:36 [31460] vm2 stonith-ng: ( cib_utils.c:169   )   debug: Config update: 	Diff: +++ 0.8.15 c644ff6f35c372b7784ca430760c5a21
Nov 13 13:45:36 [31460] vm2 stonith-ng: (       xml.c:1496  )   debug: Config update: 	-- <cib num_updates="14"/>
Nov 13 13:45:36 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+  <cib epoch="8" num_updates="15" admin_epoch="0" validate-with="pacemaker-1.2" crm_feature_set="3.0.8" cib-last-written="Wed Nov 13 13:45:33 2013" update-origin="vm1" update-client="crmd" have-quorum="1" dc-uuid="3232261517">
Nov 13 13:45:36 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+    <status>
Nov 13 13:45:36 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+      <node_state id="3232261519" uname="vm3" in_ccm="true" crmd="online" crm-debug-origin="do_update_resource" join="member" expected="member">
Nov 13 13:45:36 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+        <lrm id="3232261519">
Nov 13 13:45:36 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+          <lrm_resources>
Nov 13 13:45:36 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+            <lrm_resource id="pDummy" type="Dummy" class="ocf" provider="pacemaker">
Nov 13 13:45:36 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++             <lrm_rsc_op id="pDummy_monitor_10000" operation_key="pDummy_monitor_10000" operation="monitor" crm-debug-origin="do_update_resource" crm_feature_set="3.0.8" transition-key="15:1:0:154fb289-24e8-407e-9a03-69a510480b60" transition-magic="0:0;15:1:0:154fb289-24e8-407e-9a03-69a510480b60" call-id="11" rc-code="0" op-status="0" interval="10000" last-rc-change="1384317936" exec-time="48" queue-time="1" op-digest=
Nov 13 13:45:36 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+            </lrm_resource>
Nov 13 13:45:36 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+          </lrm_resources>
Nov 13 13:45:36 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+        </lrm>
Nov 13 13:45:36 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+      </node_state>
Nov 13 13:45:36 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+    </status>
Nov 13 13:45:36 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+  </cib>
Nov 13 13:45:36 [31459] vm2        cib: ( callbacks.c:761   )    info: cib_process_request: 	Completed cib_apply_diff operation for section status: OK (rc=0, origin=vm1/crmd/20, version=0.8.15)
Nov 13 13:45:38 [31460] vm2 stonith-ng: ( cib_utils.c:167   )   debug: Config update: 	Diff: --- 0.8.15
Nov 13 13:45:38 [31460] vm2 stonith-ng: ( cib_utils.c:169   )   debug: Config update: 	Diff: +++ 0.8.16 d2057dbbd6d1a45d7f5bc3432ef649f3
Nov 13 13:45:38 [31460] vm2 stonith-ng: (       xml.c:1496  )   debug: Config update: 	-  <cib num_updates="15">
Nov 13 13:45:38 [31460] vm2 stonith-ng: (       xml.c:1496  )   debug: Config update: 	-    <status>
Nov 13 13:45:38 [31460] vm2 stonith-ng: (       xml.c:1496  )   debug: Config update: 	-      <node_state id="3232261517">
Nov 13 13:45:38 [31460] vm2 stonith-ng: (       xml.c:1496  )   debug: Config update: 	-        <lrm id="3232261517">
Nov 13 13:45:38 [31460] vm2 stonith-ng: (       xml.c:1496  )   debug: Config update: 	-          <lrm_resources>
Nov 13 13:45:38 [31460] vm2 stonith-ng: (       xml.c:1496  )   debug: Config update: 	-            <lrm_resource id="F1">
Nov 13 13:45:38 [31460] vm2 stonith-ng: (       xml.c:1496  )   debug: Config update: 	--             <lrm_rsc_op operation_key="F1_monitor_0" operation="monitor" transition-key="4:1:7:154fb289-24e8-407e-9a03-69a510480b60" transition-magic="0:7;4:1:7:154fb289-24e8-407e-9a03-69a510480b60" call-id="5" rc-code="7" last-run="1384317935" last-rc-change="1384317935" exec-time="23" id="F1_last_0"/>
Nov 13 13:45:38 [31460] vm2 stonith-ng: (       xml.c:1496  )   debug: Config update: 	-            </lrm_resource>
Nov 13 13:45:38 [31460] vm2 stonith-ng: (       xml.c:1496  )   debug: Config update: 	-          </lrm_resources>
Nov 13 13:45:38 [31460] vm2 stonith-ng: (       xml.c:1496  )   debug: Config update: 	-        </lrm>
Nov 13 13:45:38 [31460] vm2 stonith-ng: (       xml.c:1496  )   debug: Config update: 	-      </node_state>
Nov 13 13:45:38 [31460] vm2 stonith-ng: (       xml.c:1496  )   debug: Config update: 	-    </status>
Nov 13 13:45:38 [31460] vm2 stonith-ng: (       xml.c:1496  )   debug: Config update: 	-  </cib>
Nov 13 13:45:38 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+  <cib epoch="8" num_updates="16" admin_epoch="0" validate-with="pacemaker-1.2" crm_feature_set="3.0.8" cib-last-written="Wed Nov 13 13:45:33 2013" update-origin="vm1" update-client="crmd" have-quorum="1" dc-uuid="3232261517">
Nov 13 13:45:38 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+    <status>
Nov 13 13:45:38 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+      <node_state id="3232261517" uname="vm1" in_ccm="true" crmd="online" crm-debug-origin="do_update_resource" join="member" expected="member">
Nov 13 13:45:38 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+        <lrm id="3232261517">
Nov 13 13:45:38 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+          <lrm_resources>
Nov 13 13:45:38 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+            <lrm_resource id="F1" type="external/libvirt" class="stonith">
Nov 13 13:45:38 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++             <lrm_rsc_op id="F1_last_0" operation_key="F1_start_0" operation="start" crm-debug-origin="do_update_resource" crm_feature_set="3.0.8" transition-key="12:1:0:154fb289-24e8-407e-9a03-69a510480b60" transition-magic="0:0;12:1:0:154fb289-24e8-407e-9a03-69a510480b60" call-id="10" rc-code="0" op-status="0" interval="0" last-run="1384317936" last-rc-change="1384317936" exec-time="2305" queue-time="0" op-digest="28
Nov 13 13:45:38 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+            </lrm_resource>
Nov 13 13:45:38 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+          </lrm_resources>
Nov 13 13:45:38 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+        </lrm>
Nov 13 13:45:38 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+      </node_state>
Nov 13 13:45:38 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+    </status>
Nov 13 13:45:38 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+  </cib>
Nov 13 13:45:38 [31459] vm2        cib: ( callbacks.c:761   )    info: cib_process_request: 	Completed cib_apply_diff operation for section status: OK (rc=0, origin=vm1/crmd/63, version=0.8.16)
Nov 13 13:45:39 [31460] vm2 stonith-ng: ( cib_utils.c:167   )   debug: Config update: 	Diff: --- 0.8.16
Nov 13 13:45:39 [31460] vm2 stonith-ng: ( cib_utils.c:169   )   debug: Config update: 	Diff: +++ 0.8.17 a97e80b9595cae69da19fce0899b09d9
Nov 13 13:45:39 [31460] vm2 stonith-ng: (       xml.c:1496  )   debug: Config update: 	-- <cib num_updates="16"/>
Nov 13 13:45:39 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+  <cib epoch="8" num_updates="17" admin_epoch="0" validate-with="pacemaker-1.2" crm_feature_set="3.0.8" cib-last-written="Wed Nov 13 13:45:33 2013" update-origin="vm1" update-client="crmd" have-quorum="1" dc-uuid="3232261517">
Nov 13 13:45:39 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+    <status>
Nov 13 13:45:39 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+      <node_state id="3232261517" uname="vm1" in_ccm="true" crmd="online" crm-debug-origin="do_update_resource" join="member" expected="member">
Nov 13 13:45:39 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+        <lrm id="3232261517">
Nov 13 13:45:39 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+          <lrm_resources>
Nov 13 13:45:39 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+            <lrm_resource id="F1" type="external/libvirt" class="stonith">
Nov 13 13:45:39 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++             <lrm_rsc_op id="F1_monitor_3600000" operation_key="F1_monitor_3600000" operation="monitor" crm-debug-origin="do_update_resource" crm_feature_set="3.0.8" transition-key="13:1:0:154fb289-24e8-407e-9a03-69a510480b60" transition-magic="0:0;13:1:0:154fb289-24e8-407e-9a03-69a510480b60" call-id="11" rc-code="0" op-status="0" interval="3600000" last-rc-change="1384317938" exec-time="1312" queue-time="0" op-digest=
Nov 13 13:45:39 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+            </lrm_resource>
Nov 13 13:45:39 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+          </lrm_resources>
Nov 13 13:45:39 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+        </lrm>
Nov 13 13:45:39 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+      </node_state>
Nov 13 13:45:39 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+    </status>
Nov 13 13:45:39 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+  </cib>
Nov 13 13:45:39 [31459] vm2        cib: ( callbacks.c:761   )    info: cib_process_request: 	Completed cib_apply_diff operation for section status: OK (rc=0, origin=vm1/crmd/64, version=0.8.17)
Nov 13 13:45:51 [31464] vm2       crmd: (  throttle.c:256   )   debug: throttle_cib_load: 	Current CIB load from 7 ticks in 30s is 0.002333 (@100 tps)
Nov 13 13:45:51 [31464] vm2       crmd: (  throttle.c:298   )   debug: throttle_load_avg: 	Current load is 0.030000 (full: 0.03 0.06 0.02 1/108 31529)
Nov 13 13:45:51 [31464] vm2       crmd: (  throttle.c:378   )   debug: throttle_io_load: 	Current IO load is 0.000000
Nov 13 13:46:21 [31464] vm2       crmd: (  throttle.c:256   )   debug: throttle_cib_load: 	Current CIB load from 0 ticks in 30s is 0.000000 (@100 tps)
Nov 13 13:46:21 [31464] vm2       crmd: (  throttle.c:298   )   debug: throttle_load_avg: 	Current load is 0.020000 (full: 0.02 0.05 0.01 1/108 31529)
Nov 13 13:46:21 [31464] vm2       crmd: (  throttle.c:378   )   debug: throttle_io_load: 	Current IO load is 0.000000
Nov 13 13:46:51 [31464] vm2       crmd: (  throttle.c:256   )   debug: throttle_cib_load: 	Current CIB load from 0 ticks in 30s is 0.000000 (@100 tps)
Nov 13 13:46:51 [31464] vm2       crmd: (  throttle.c:298   )   debug: throttle_load_avg: 	Current load is 0.010000 (full: 0.01 0.04 0.01 1/108 31536)
Nov 13 13:46:51 [31464] vm2       crmd: (  throttle.c:378   )   debug: throttle_io_load: 	Current IO load is 0.000000
Nov 13 13:47:06 [31462] vm2      attrd: (  commands.c:146   )   trace: create_attribute: 	Created attribute fail-count-pDummy with no delay
Nov 13 13:47:06 [31462] vm2      attrd: (  commands.c:382   )   trace: attrd_peer_update: 	Setting fail-count-pDummy[vm3] to 1 from vm1
Nov 13 13:47:06 [31462] vm2      attrd: (  commands.c:683   )   trace: write_attribute: 	Updating value's nodeid
Nov 13 13:47:06 [31462] vm2      attrd: (  commands.c:697   )   debug: write_attribute: 	Update: vm3[fail-count-pDummy]=1 (3232261519 3232261519 3232261519 vm3)
Nov 13 13:47:06 [31462] vm2      attrd: (  commands.c:714   )   trace: write_attribute: 	write_attribute   <status>
Nov 13 13:47:06 [31462] vm2      attrd: (  commands.c:714   )   trace: write_attribute: 	write_attribute     <node_state id="3232261519">
Nov 13 13:47:06 [31462] vm2      attrd: (  commands.c:714   )   trace: write_attribute: 	write_attribute       <transient_attributes id="3232261519">
Nov 13 13:47:06 [31462] vm2      attrd: (  commands.c:714   )   trace: write_attribute: 	write_attribute         <instance_attributes id="status-3232261519">
Nov 13 13:47:06 [31462] vm2      attrd: (  commands.c:714   )   trace: write_attribute: 	write_attribute           <nvpair id="status-3232261519-fail-count-pDummy" name="fail-count-pDummy" value="1"/>
Nov 13 13:47:06 [31462] vm2      attrd: (  commands.c:714   )   trace: write_attribute: 	write_attribute         </instance_attributes>
Nov 13 13:47:06 [31462] vm2      attrd: (  commands.c:714   )   trace: write_attribute: 	write_attribute       </transient_attributes>
Nov 13 13:47:06 [31462] vm2      attrd: (  commands.c:714   )   trace: write_attribute: 	write_attribute     </node_state>
Nov 13 13:47:06 [31462] vm2      attrd: (  commands.c:714   )   trace: write_attribute: 	write_attribute   </status>
Nov 13 13:47:06 [31462] vm2      attrd: (  commands.c:720   )  notice: write_attribute: 	Sent update 6 with 1 changes for fail-count-pDummy, id=<n/a>, set=(null)
Nov 13 13:47:06 [31462] vm2      attrd: (  commands.c:146   )   trace: create_attribute: 	Created attribute last-failure-pDummy with no delay
Nov 13 13:47:06 [31462] vm2      attrd: (  commands.c:382   )   trace: attrd_peer_update: 	Setting last-failure-pDummy[vm3] to 1384318026 from vm1
Nov 13 13:47:06 [31462] vm2      attrd: (  commands.c:683   )   trace: write_attribute: 	Updating value's nodeid
Nov 13 13:47:06 [31462] vm2      attrd: (  commands.c:697   )   debug: write_attribute: 	Update: vm3[last-failure-pDummy]=1384318026 (3232261519 3232261519 3232261519 vm3)
Nov 13 13:47:06 [31462] vm2      attrd: (  commands.c:714   )   trace: write_attribute: 	write_attribute   <status>
Nov 13 13:47:06 [31462] vm2      attrd: (  commands.c:714   )   trace: write_attribute: 	write_attribute     <node_state id="3232261519">
Nov 13 13:47:06 [31462] vm2      attrd: (  commands.c:714   )   trace: write_attribute: 	write_attribute       <transient_attributes id="3232261519">
Nov 13 13:47:06 [31462] vm2      attrd: (  commands.c:714   )   trace: write_attribute: 	write_attribute         <instance_attributes id="status-3232261519">
Nov 13 13:47:06 [31462] vm2      attrd: (  commands.c:714   )   trace: write_attribute: 	write_attribute           <nvpair id="status-3232261519-last-failure-pDummy" name="last-failure-pDummy" value="1384318026"/>
Nov 13 13:47:06 [31462] vm2      attrd: (  commands.c:714   )   trace: write_attribute: 	write_attribute         </instance_attributes>
Nov 13 13:47:06 [31462] vm2      attrd: (  commands.c:714   )   trace: write_attribute: 	write_attribute       </transient_attributes>
Nov 13 13:47:06 [31462] vm2      attrd: (  commands.c:714   )   trace: write_attribute: 	write_attribute     </node_state>
Nov 13 13:47:06 [31462] vm2      attrd: (  commands.c:714   )   trace: write_attribute: 	write_attribute   </status>
Nov 13 13:47:06 [31462] vm2      attrd: (  commands.c:720   )  notice: write_attribute: 	Sent update 7 with 1 changes for last-failure-pDummy, id=<n/a>, set=(null)
Nov 13 13:47:06 [31460] vm2 stonith-ng: ( cib_utils.c:167   )   debug: Config update: 	Diff: --- 0.8.17
Nov 13 13:47:06 [31460] vm2 stonith-ng: ( cib_utils.c:169   )   debug: Config update: 	Diff: +++ 0.8.18 a51e1a3b91717c93641fe986a68f690b
Nov 13 13:47:06 [31460] vm2 stonith-ng: (       xml.c:1496  )   debug: Config update: 	-- <cib num_updates="17"/>
Nov 13 13:47:06 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+  <cib epoch="8" num_updates="18" admin_epoch="0" validate-with="pacemaker-1.2" crm_feature_set="3.0.8" cib-last-written="Wed Nov 13 13:45:33 2013" update-origin="vm1" update-client="crmd" have-quorum="1" dc-uuid="3232261517">
Nov 13 13:47:06 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+    <status>
Nov 13 13:47:06 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+      <node_state id="3232261519" uname="vm3" in_ccm="true" crmd="online" crm-debug-origin="do_update_resource" join="member" expected="member">
Nov 13 13:47:06 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+        <lrm id="3232261519">
Nov 13 13:47:06 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+          <lrm_resources>
Nov 13 13:47:06 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+            <lrm_resource id="pDummy" type="Dummy" class="ocf" provider="pacemaker">
Nov 13 13:47:06 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++             <lrm_rsc_op id="pDummy_last_failure_0" operation_key="pDummy_monitor_10000" operation="monitor" crm-debug-origin="do_update_resource" crm_feature_set="3.0.8" transition-key="15:1:0:154fb289-24e8-407e-9a03-69a510480b60" transition-magic="0:7;15:1:0:154fb289-24e8-407e-9a03-69a510480b60" call-id="11" rc-code="7" op-status="0" interval="10000" last-rc-change="1384318026" exec-time="0" queue-time="0" op-digest=
Nov 13 13:47:06 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+            </lrm_resource>
Nov 13 13:47:06 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+          </lrm_resources>
Nov 13 13:47:06 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+        </lrm>
Nov 13 13:47:06 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+      </node_state>
Nov 13 13:47:06 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+    </status>
Nov 13 13:47:06 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+  </cib>
Nov 13 13:47:06 [31459] vm2        cib: ( callbacks.c:761   )    info: cib_process_request: 	Completed cib_apply_diff operation for section status: OK (rc=0, origin=vm1/crmd/21, version=0.8.18)
Nov 13 13:47:06 [31459] vm2        cib: ( callbacks.c:688   )    info: cib_process_request: 	Forwarding cib_modify operation for section status to master (origin=local/attrd/6)
Nov 13 13:47:06 [31459] vm2        cib: ( callbacks.c:688   )    info: cib_process_request: 	Forwarding cib_modify operation for section status to master (origin=local/attrd/7)
Nov 13 13:47:06 [31460] vm2 stonith-ng: ( cib_utils.c:167   )   debug: Config update: 	Diff: --- 0.8.18
Nov 13 13:47:06 [31460] vm2 stonith-ng: ( cib_utils.c:169   )   debug: Config update: 	Diff: +++ 0.8.19 5197717035f45f7bbfddb1efd89c2360
Nov 13 13:47:06 [31460] vm2 stonith-ng: (       xml.c:1496  )   debug: Config update: 	-- <cib num_updates="18"/>
Nov 13 13:47:06 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+  <cib epoch="8" num_updates="19" admin_epoch="0" validate-with="pacemaker-1.2" crm_feature_set="3.0.8" cib-last-written="Wed Nov 13 13:45:33 2013" update-origin="vm1" update-client="crmd" have-quorum="1" dc-uuid="3232261517">
Nov 13 13:47:06 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+    <status>
Nov 13 13:47:06 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+      <node_state id="3232261519" uname="vm3" in_ccm="true" crmd="online" crm-debug-origin="do_update_resource" join="member" expected="member">
Nov 13 13:47:06 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+        <transient_attributes id="3232261519">
Nov 13 13:47:06 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+          <instance_attributes id="status-3232261519">
Nov 13 13:47:06 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++           <nvpair id="status-3232261519-fail-count-pDummy" name="fail-count-pDummy" value="1"/>
Nov 13 13:47:06 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+          </instance_attributes>
Nov 13 13:47:06 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+        </transient_attributes>
Nov 13 13:47:06 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+      </node_state>
Nov 13 13:47:06 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+    </status>
Nov 13 13:47:06 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+  </cib>
Nov 13 13:47:06 [31459] vm2        cib: ( callbacks.c:761   )    info: cib_process_request: 	Completed cib_apply_diff operation for section status: OK (rc=0, origin=vm1/attrd/6, version=0.8.19)
Nov 13 13:47:06 [31462] vm2      attrd: (  commands.c:548   )    info: attrd_cib_callback: 	Update 6 for fail-count-pDummy: OK (0)
Nov 13 13:47:06 [31462] vm2      attrd: (  commands.c:552   )  notice: attrd_cib_callback: 	Update 6 for fail-count-pDummy[vm3]=1: OK (0)
Nov 13 13:47:06 [31460] vm2 stonith-ng: ( cib_utils.c:167   )   debug: Config update: 	Diff: --- 0.8.19
Nov 13 13:47:06 [31460] vm2 stonith-ng: ( cib_utils.c:169   )   debug: Config update: 	Diff: +++ 0.8.20 20d147ac83adce3d53784ce1a7e6304d
Nov 13 13:47:06 [31460] vm2 stonith-ng: (       xml.c:1496  )   debug: Config update: 	-- <cib num_updates="19"/>
Nov 13 13:47:06 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+  <cib epoch="8" num_updates="20" admin_epoch="0" validate-with="pacemaker-1.2" crm_feature_set="3.0.8" cib-last-written="Wed Nov 13 13:45:33 2013" update-origin="vm1" update-client="crmd" have-quorum="1" dc-uuid="3232261517">
Nov 13 13:47:06 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+    <status>
Nov 13 13:47:06 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+      <node_state id="3232261519" uname="vm3" in_ccm="true" crmd="online" crm-debug-origin="do_update_resource" join="member" expected="member">
Nov 13 13:47:06 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+        <transient_attributes id="3232261519">
Nov 13 13:47:06 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+          <instance_attributes id="status-3232261519">
Nov 13 13:47:06 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++           <nvpair id="status-3232261519-last-failure-pDummy" name="last-failure-pDummy" value="1384318026"/>
Nov 13 13:47:06 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+          </instance_attributes>
Nov 13 13:47:06 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+        </transient_attributes>
Nov 13 13:47:06 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+      </node_state>
Nov 13 13:47:06 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+    </status>
Nov 13 13:47:06 [31460] vm2 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+  </cib>
Nov 13 13:47:06 [31459] vm2        cib: ( callbacks.c:761   )    info: cib_process_request: 	Completed cib_apply_diff operation for section status: OK (rc=0, origin=vm1/attrd/7, version=0.8.20)
Nov 13 13:47:06 [31462] vm2      attrd: (  commands.c:548   )    info: attrd_cib_callback: 	Update 7 for last-failure-pDummy: OK (0)
Nov 13 13:47:06 [31462] vm2      attrd: (  commands.c:552   )  notice: attrd_cib_callback: 	Update 7 for last-failure-pDummy[vm3]=1384318026: OK (0)
Nov 13 13:47:08 [31460] vm2 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]   <stonith_command __name__="stonith_command" t="stonith-ng" st_async_id="696fb2c3-e11a-4124-ba9b-bafc9ab28426" st_op="st_query" st_callid="2" st_callopt="0" st_remote_op="696fb2c3-e11a-4124-ba9b-bafc9ab28426" st_target="vm3" st_device_action="reboot" st_origin="vm1" st_clientid="e6cedc3f-e233-4a66-9291-ce359bd76aad" st_clientname="crmd.15883" st_timeout="60" src="vm1"/>
Nov 13 13:47:08 [31460] vm2 stonith-ng: (  commands.c:2049  )   debug: stonith_command: 	Processing st_query 0 from vm1 (               0)
Nov 13 13:47:08 [31460] vm2 stonith-ng: (    remote.c:593   )   trace: create_remote_stonith_op: 	Created 696fb2c3-e11a-4124-ba9b-bafc9ab28426
Nov 13 13:47:08 [31460] vm2 stonith-ng: (    remote.c:623   )   trace: create_remote_stonith_op: 	Recorded new stonith op: 696fb2c3-e11a-4124-ba9b-bafc9ab28426 - reboot of vm3 for crmd.15883
Nov 13 13:47:08 [31460] vm2 stonith-ng: (    remote.c:474   )   trace: merge_duplicates: 	Must be for different clients: crmd.15883
Nov 13 13:47:08 [31460] vm2 stonith-ng: (  commands.c:1296  )   debug: stonith_query: 	Query   <stonith_command __name__="stonith_command" t="stonith-ng" st_async_id="696fb2c3-e11a-4124-ba9b-bafc9ab28426" st_op="st_query" st_callid="2" st_callopt="0" st_remote_op="696fb2c3-e11a-4124-ba9b-bafc9ab28426" st_target="vm3" st_device_action="reboot" st_origin="vm1" st_clientid="e6cedc3f-e233-4a66-9291-ce359bd76aad" st_clientname="crmd.15883" st_timeout="60" src="vm1"/>
Nov 13 13:47:08 [31460] vm2 stonith-ng: (  commands.c:1650  )   trace: stonith_construct_reply: 	Creating a basic reply
Nov 13 13:47:08 [31460] vm2 stonith-ng: (  commands.c:1199  )   debug: get_capable_devices: 	Searching through 1 devices to see what is capable of action (reboot) for target vm3
Nov 13 13:47:08 [31460] vm2 stonith-ng: (  commands.c:288   )   debug: schedule_stonith_command: 	Scheduling list on F1 for stonith-ng (timeout=60s)
Nov 13 13:47:08 [31460] vm2 stonith-ng: (  commands.c:2063  )    info: stonith_command: 	Processed st_query from vm1: OK (0)
Nov 13 13:47:08 [31460] vm2 stonith-ng: ( st_client.c:576   )    info: stonith_action_create: 	Initiating action list for agent fence_legacy (target=(null))
Nov 13 13:47:08 [31460] vm2 stonith-ng: ( st_client.c:739   )   debug: internal_stonith_action_execute: 	forking
Nov 13 13:47:08 [31460] vm2 stonith-ng: ( st_client.c:785   )   debug: internal_stonith_action_execute: 	sending args
Nov 13 13:47:08 [31460] vm2 stonith-ng: (  commands.c:246   )   debug: stonith_device_execute: 	Operation list on F1 now running with pid=31557, timeout=60s
Nov 13 13:47:08 [31460] vm2 stonith-ng: ( st_client.c:678   )   debug: stonith_action_async_done: 	Child process 31557 performing action 'list' exited with rc 0
Nov 13 13:47:08 [31460] vm2 stonith-ng: (  commands.c:749   )    info: dynamic_list_search_cb: 	Refreshing port list for F1
Nov 13 13:47:08 [31460] vm2 stonith-ng: (  commands.c:409   )   trace: parse_host_line: 	Processing 3 bytes: [vm3]
Nov 13 13:47:08 [31460] vm2 stonith-ng: (  commands.c:436   )   trace: parse_host_line: 	Adding 'vm3'
Nov 13 13:47:08 [31460] vm2 stonith-ng: (  commands.c:409   )   trace: parse_host_line: 	Processing 11 bytes: [success:  0]
Nov 13 13:47:08 [31460] vm2 stonith-ng: (  commands.c:436   )   trace: parse_host_line: 	Adding 'success'
Nov 13 13:47:08 [31460] vm2 stonith-ng: (  commands.c:436   )   trace: parse_host_line: 	Adding '0'
Nov 13 13:47:08 [31460] vm2 stonith-ng: (  commands.c:480   )   trace: parse_host_list: 	Parsed 3 entries from 'vm3\nsuccess:  0\n'
Nov 13 13:47:08 [31460] vm2 stonith-ng: (  commands.c:1037  )   debug: search_devices_record_result: 	Finished Search. 1 devices can perform action (reboot) on node vm3
Nov 13 13:47:08 [31460] vm2 stonith-ng: (  commands.c:1254  )   debug: stonith_query_capable_device_cb: 	Found 1 matching devices for 'vm3'
Nov 13 13:47:08 [31460] vm2 stonith-ng: (  commands.c:1260  )   trace: stonith_query_capable_device_cb: 	Attaching query list output
Nov 13 13:47:08 [31460] vm2 stonith-ng: (  commands.c:213   )   trace: stonith_device_execute: 	Nothing further to do for F1
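To answer the st_query, stonith-ng runs the device's list action through fence_legacy and tokenizes the output; per the parse_host_line traces above, the lines 'vm3' and 'success:  0' yield the three entries vm3, success and 0, and the presence of vm3 is what makes F1 eligible for the reboot. A rough sketch of that tokenization (not the actual parser in commands.c):

    # Rough sketch of the whitespace/colon tokenization suggested by the
    # parse_host_line traces above; not the stonith-ng implementation.
    raw = "vm3\nsuccess:  0\n"        # output of the 'list' action, as logged

    entries = []
    for line in raw.splitlines():
        entries.extend(line.replace(":", " ").split())

    print(len(entries), entries)      # 3 ['vm3', 'success', '0']
    print("F1 can fence vm3:", "vm3" in entries)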
Nov 13 13:47:08 [31460] vm2 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]   <stonith_command __name__="stonith_command" t="stonith-ng" st_async_id="696fb2c3-e11a-4124-ba9b-bafc9ab28426" st_op="st_fence" st_callid="2" st_callopt="0" st_remote_op="696fb2c3-e11a-4124-ba9b-bafc9ab28426" st_target="vm3" st_device_action="reboot" st_origin="vm1" st_clientid="e6cedc3f-e233-4a66-9291-ce359bd76aad" st_clientname="crmd.15883" st_timeout="60" st_device_id="F1" st_mode="slave" src="vm1"/>
Nov 13 13:47:08 [31460] vm2 stonith-ng: (  commands.c:2049  )   debug: stonith_command: 	Processing st_fence 0 from vm1 (               0)
Nov 13 13:47:08 [31460] vm2 stonith-ng: (  commands.c:165   )   trace: create_async_command: 	Command   <stonith_command __name__="stonith_command" t="stonith-ng" st_async_id="696fb2c3-e11a-4124-ba9b-bafc9ab28426" st_op="st_fence" st_callid="2" st_callopt="0" st_remote_op="696fb2c3-e11a-4124-ba9b-bafc9ab28426" st_target="vm3" st_device_action="reboot" st_origin="vm1" st_clientid="e6cedc3f-e233-4a66-9291-ce359bd76aad" st_clientname="crmd.15883" st_timeout="60" st_device_id="F1" st_mode="slave" src="vm1"/>
Nov 13 13:47:08 [31460] vm2 stonith-ng: (  commands.c:285   )   debug: schedule_stonith_command: 	Scheduling reboot on F1 for remote peer vm1 with op id (696fb2c3-e11a-4124-ba9b-bafc9ab28426) (timeout=60s)
Nov 13 13:47:08 [31460] vm2 stonith-ng: (  commands.c:2063  )    info: stonith_command: 	Processed st_fence from vm1: Operation now in progress (-115)
Nov 13 13:47:08 [31460] vm2 stonith-ng: ( st_client.c:576   )    info: stonith_action_create: 	Initiating action reboot for agent fence_legacy (target=vm3)
Nov 13 13:47:08 [31460] vm2 stonith-ng: ( st_client.c:476   )   debug: make_args: 	Performing reboot action for node 'vm3' as 'port=vm3'
Nov 13 13:47:08 [31460] vm2 stonith-ng: ( st_client.c:739   )   debug: internal_stonith_action_execute: 	forking
Nov 13 13:47:08 [31460] vm2 stonith-ng: ( st_client.c:785   )   debug: internal_stonith_action_execute: 	sending args
Nov 13 13:47:08 [31460] vm2 stonith-ng: (  commands.c:246   )   debug: stonith_device_execute: 	Operation reboot for node vm3 on F1 now running with pid=31570, timeout=60s
Nov 13 13:47:09 vm2 stonith: [31571]: CRIT: external_reset_req: 'libvirt reset' for host vm3 failed with rc 1
Nov 13 13:47:09 [31460] vm2 stonith-ng: ( st_client.c:678   )   debug: stonith_action_async_done: 	Child process 31570 performing action 'reboot' exited with rc 1
Nov 13 13:47:09 [31460] vm2 stonith-ng: ( st_client.c:719   )    info: internal_stonith_action_execute: 	Attempt 2 to execute fence_legacy (reboot). remaining timeout is 59
Nov 13 13:47:09 [31460] vm2 stonith-ng: ( st_client.c:739   )   debug: internal_stonith_action_execute: 	forking
Nov 13 13:47:09 [31460] vm2 stonith-ng: ( st_client.c:785   )   debug: internal_stonith_action_execute: 	sending args
Nov 13 13:47:12 vm2 stonith: [31583]: CRIT: external_reset_req: 'libvirt reset' for host vm3 failed with rc 1
Nov 13 13:47:12 [31460] vm2 stonith-ng: ( st_client.c:678   )   debug: stonith_action_async_done: 	Child process 31582 performing action 'reboot' exited with rc 1
Nov 13 13:47:12 [31460] vm2 stonith-ng: ( st_client.c:641   )    info: update_remaining_timeout: 	Attempted to execute agent fence_legacy (reboot) the maximum number of times (2) allowed
Nov 13 13:47:12 [31460] vm2 stonith-ng: (  commands.c:1463  )   trace: st_child_done: 	Operation reboot on F1 completed with rc=1 (0 remaining)
Nov 13 13:47:12 [31460] vm2 stonith-ng: (  commands.c:1677  )   trace: stonith_construct_async_reply: 	Creating a basic reply
Nov 13 13:47:12 [31460] vm2 stonith-ng: (  commands.c:1321  )   error: log_operation: 	Operation 'reboot' [31582] (call 2 from crmd.15883) for host 'vm3' with device 'F1' returned: -201 (Generic Pacemaker error)
Nov 13 13:47:12 [31460] vm2 stonith-ng: (  commands.c:1333  ) warning: log_operation: 	F1:31582 [ Performing: stonith -t external/libvirt -T reset vm3 ]
Nov 13 13:47:12 [31460] vm2 stonith-ng: (  commands.c:1333  ) warning: log_operation: 	F1:31582 [ failed: vm3 5 ]
Nov 13 13:47:12 [31460] vm2 stonith-ng: (  commands.c:1362  )   trace: stonith_send_async_reply: 	Reply   <st-reply st_origin="vm1" t="stonith-ng" st_op="st_fence" st_device_id="F1" st_remote_op="696fb2c3-e11a-4124-ba9b-bafc9ab28426" st_clientid="e6cedc3f-e233-4a66-9291-ce359bd76aad" st_clientname="crmd.15883" st_target="vm3" st_device_action="st_fence" st_callid="2" st_callopt="0" st_rc="-201" st_output="Performing: stonith -t external/libvirt -T reset vm3\nfailed: vm3 5\n"/>
Nov 13 13:47:12 [31460] vm2 stonith-ng: (  commands.c:1369  )   trace: stonith_send_async_reply: 	Directed reply to vm1
Nov 13 13:47:12 [31460] vm2 stonith-ng: (  commands.c:213   )   trace: stonith_device_execute: 	Nothing further to do for F1
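Each st_fence request above forks the fence_legacy agent, whose output shows it running `stonith -t external/libvirt -T reset vm3`; the reset exits non-zero, is retried once within the remaining 60s timeout, and the operation is then reported back to vm1 as -201 (Generic Pacemaker error). A hedged sketch of that retry-within-timeout pattern around the logged command (the wrapper and the two-attempt limit are assumptions based on the update_remaining_timeout message, not the st_client.c code):

    # Sketch of the retry behaviour visible above: run the logged reset command,
    # retry once if it fails and time remains. Not the st_client.c implementation.
    import subprocess, time

    def try_reset(timeout_s=60, max_attempts=2):
        deadline = time.time() + timeout_s
        for attempt in range(1, max_attempts + 1):
            rc = subprocess.call(["stonith", "-t", "external/libvirt",
                                  "-T", "reset", "vm3"])
            if rc == 0:
                return 0           # host was reset
            if attempt == max_attempts or time.time() >= deadline:
                return -201        # give up: reported as Generic Pacemaker error
        return -201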
Nov 13 13:47:12 [31460] vm2 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]   <st-reply t="st_notify" subt="broadcast" st_op="st_notify" count="1" src="vm1">
Nov 13 13:47:12 [31460] vm2 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]     <st_calldata>
Nov 13 13:47:12 [31460] vm2 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]       <st_notify_fence state="4" st_rc="-201" st_target="vm3" st_device_action="reboot" st_delegate="vm2" st_remote_op="696fb2c3-e11a-4124-ba9b-bafc9ab28426" st_origin="vm1" st_clientid="e6cedc3f-e233-4a66-9291-ce359bd76aad" st_clientname="crmd.15883"/>
Nov 13 13:47:12 [31460] vm2 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]     </st_calldata>
Nov 13 13:47:12 [31460] vm2 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]   </st-reply>
Nov 13 13:47:12 [31460] vm2 stonith-ng: (  commands.c:2049  )   debug: stonith_command: 	Processing st_notify reply 0 from vm1 (               0)
Nov 13 13:47:12 [31460] vm2 stonith-ng: (    remote.c:1281  )   debug: process_remote_stonith_exec: 	Marking call to reboot for vm3 on behalf of crmd.15883@696fb2c3-e11a-4124-ba9b-bafc9ab28426.vm1: Generic Pacemaker error (-201)
Nov 13 13:47:12 [31460] vm2 stonith-ng: (    remote.c:297   )  notice: remote_op_done: 	Operation reboot of vm3 by vm1 for crmd.15883@vm1.696fb2c3: Generic Pacemaker error
Nov 13 13:47:12 [31460] vm2 stonith-ng: (  commands.c:1650  )   trace: stonith_construct_reply: 	Creating a basic reply
Nov 13 13:47:12 [31460] vm2 stonith-ng: (  commands.c:1666  )   trace: stonith_construct_reply: 	Attaching reply output
Nov 13 13:47:12 [31460] vm2 stonith-ng: (      main.c:241   )   trace: do_local_reply: 	Sending response
Nov 13 13:47:12 [31460] vm2 stonith-ng: (      main.c:244   )   trace: do_local_reply: 	Sending callback to request originator
Nov 13 13:47:12 [31460] vm2 stonith-ng: (      main.c:247   )   trace: do_local_reply: 	No client to send the response to.  F_STONITH_CLIENTID not set.
Nov 13 13:47:12 [31460] vm2 stonith-ng: (      main.c:369   )   trace: do_stonith_notify: 	Notifying clients
Nov 13 13:47:12 [31464] vm2       crmd: (  te_utils.c:167   )  notice: tengine_stonith_notify: 	Peer vm3 was not terminated (reboot) by vm1 for vm1: Generic Pacemaker error (ref=696fb2c3-e11a-4124-ba9b-bafc9ab28426) by client crmd.15883
Nov 13 13:47:12 [31460] vm2 stonith-ng: (      main.c:318   )   trace: stonith_notify_client: 	Sent st_notify_fence notification to client crmd.31464.d20e14
Nov 13 13:47:12 [31460] vm2 stonith-ng: (      main.c:372   )   trace: do_stonith_notify: 	Notify complete
Nov 13 13:47:12 [31460] vm2 stonith-ng: (  commands.c:2063  )    info: stonith_command: 	Processed st_notify reply from vm1: OK (0)
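The outcome then travels back as an st_notify_fence broadcast (state 4, rc -201), which is what drives the crmd notice that vm3 was not terminated. A small sketch decoding the fields of the broadcast element shown above (attribute values copied from the log):

    # Sketch: decode the st_notify_fence element shown above.
    import xml.etree.ElementTree as ET

    notify = ET.fromstring(
        '<st_notify_fence state="4" st_rc="-201" st_target="vm3" '
        'st_device_action="reboot" st_delegate="vm2" st_origin="vm1"/>')

    rc = int(notify.get("st_rc"))
    verdict = "terminated" if rc == 0 else "was not terminated"
    print("target:", notify.get("st_target"),
          "action:", notify.get("st_device_action"),
          "delegate:", notify.get("st_delegate"),
          "origin:", notify.get("st_origin"))
    print(verdict, "(rc=%d)" % rc)   # was not terminated (rc=-201)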
Nov 13 13:47:14 [31460] vm2 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]   <stonith_command __name__="stonith_command" t="stonith-ng" st_async_id="431c7488-013e-4900-bde7-a3ce154b35a3" st_op="st_query" st_callid="3" st_callopt="0" st_remote_op="431c7488-013e-4900-bde7-a3ce154b35a3" st_target="vm3" st_device_action="reboot" st_origin="vm1" st_clientid="e6cedc3f-e233-4a66-9291-ce359bd76aad" st_clientname="crmd.15883" st_timeout="60" src="vm1"/>
Nov 13 13:47:14 [31460] vm2 stonith-ng: (  commands.c:2049  )   debug: stonith_command: 	Processing st_query 0 from vm1 (               0)
Nov 13 13:47:14 [31460] vm2 stonith-ng: (    remote.c:593   )   trace: create_remote_stonith_op: 	Created 431c7488-013e-4900-bde7-a3ce154b35a3
Nov 13 13:47:14 [31460] vm2 stonith-ng: (    remote.c:623   )   trace: create_remote_stonith_op: 	Recorded new stonith op: 431c7488-013e-4900-bde7-a3ce154b35a3 - reboot of vm3 for crmd.15883
Nov 13 13:47:14 [31460] vm2 stonith-ng: (    remote.c:474   )   trace: merge_duplicates: 	Must be for different clients: crmd.15883
Nov 13 13:47:14 [31460] vm2 stonith-ng: (  commands.c:1296  )   debug: stonith_query: 	Query   <stonith_command __name__="stonith_command" t="stonith-ng" st_async_id="431c7488-013e-4900-bde7-a3ce154b35a3" st_op="st_query" st_callid="3" st_callopt="0" st_remote_op="431c7488-013e-4900-bde7-a3ce154b35a3" st_target="vm3" st_device_action="reboot" st_origin="vm1" st_clientid="e6cedc3f-e233-4a66-9291-ce359bd76aad" st_clientname="crmd.15883" st_timeout="60" src="vm1"/>
Nov 13 13:47:14 [31460] vm2 stonith-ng: (  commands.c:1650  )   trace: stonith_construct_reply: 	Creating a basic reply
Nov 13 13:47:14 [31460] vm2 stonith-ng: (  commands.c:1199  )   debug: get_capable_devices: 	Searching through 1 devices to see what is capable of action (reboot) for target vm3
Nov 13 13:47:14 [31460] vm2 stonith-ng: (  commands.c:1118  )  notice: can_fence_host_with_device: 	F1 can fence vm3: dynamic-list
Nov 13 13:47:14 [31460] vm2 stonith-ng: (  commands.c:1037  )   debug: search_devices_record_result: 	Finished Search. 1 devices can perform action (reboot) on node vm3
Nov 13 13:47:14 [31460] vm2 stonith-ng: (  commands.c:1254  )   debug: stonith_query_capable_device_cb: 	Found 1 matching devices for 'vm3'
Nov 13 13:47:14 [31460] vm2 stonith-ng: (  commands.c:1260  )   trace: stonith_query_capable_device_cb: 	Attaching query list output
Nov 13 13:47:14 [31460] vm2 stonith-ng: (  commands.c:2063  )    info: stonith_command: 	Processed st_query from vm1: OK (0)
Nov 13 13:47:14 [31460] vm2 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]   <stonith_command __name__="stonith_command" t="stonith-ng" st_async_id="431c7488-013e-4900-bde7-a3ce154b35a3" st_op="st_fence" st_callid="3" st_callopt="0" st_remote_op="431c7488-013e-4900-bde7-a3ce154b35a3" st_target="vm3" st_device_action="reboot" st_origin="vm1" st_clientid="e6cedc3f-e233-4a66-9291-ce359bd76aad" st_clientname="crmd.15883" st_timeout="60" st_device_id="F1" st_mode="slave" src="vm1"/>
Nov 13 13:47:14 [31460] vm2 stonith-ng: (  commands.c:2049  )   debug: stonith_command: 	Processing st_fence 0 from vm1 (               0)
Nov 13 13:47:14 [31460] vm2 stonith-ng: (  commands.c:165   )   trace: create_async_command: 	Command   <stonith_command __name__="stonith_command" t="stonith-ng" st_async_id="431c7488-013e-4900-bde7-a3ce154b35a3" st_op="st_fence" st_callid="3" st_callopt="0" st_remote_op="431c7488-013e-4900-bde7-a3ce154b35a3" st_target="vm3" st_device_action="reboot" st_origin="vm1" st_clientid="e6cedc3f-e233-4a66-9291-ce359bd76aad" st_clientname="crmd.15883" st_timeout="60" st_device_id="F1" st_mode="slave" src="vm1"/>
Nov 13 13:47:14 [31460] vm2 stonith-ng: (  commands.c:285   )   debug: schedule_stonith_command: 	Scheduling reboot on F1 for remote peer vm1 with op id (431c7488-013e-4900-bde7-a3ce154b35a3) (timeout=60s)
Nov 13 13:47:14 [31460] vm2 stonith-ng: (  commands.c:2063  )    info: stonith_command: 	Processed st_fence from vm1: Operation now in progress (-115)
Nov 13 13:47:14 [31460] vm2 stonith-ng: ( st_client.c:576   )    info: stonith_action_create: 	Initiating action reboot for agent fence_legacy (target=vm3)
Nov 13 13:47:14 [31460] vm2 stonith-ng: ( st_client.c:476   )   debug: make_args: 	Performing reboot action for node 'vm3' as 'port=vm3'
Nov 13 13:47:14 [31460] vm2 stonith-ng: ( st_client.c:739   )   debug: internal_stonith_action_execute: 	forking
Nov 13 13:47:14 [31460] vm2 stonith-ng: ( st_client.c:785   )   debug: internal_stonith_action_execute: 	sending args
Nov 13 13:47:14 [31460] vm2 stonith-ng: (  commands.c:246   )   debug: stonith_device_execute: 	Operation reboot for node vm3 on F1 now running with pid=31594, timeout=60s
Nov 13 13:47:15 vm2 stonith: [31595]: CRIT: external_reset_req: 'libvirt reset' for host vm3 failed with rc 1
Nov 13 13:47:15 [31460] vm2 stonith-ng: ( st_client.c:678   )   debug: stonith_action_async_done: 	Child process 31594 performing action 'reboot' exited with rc 1
Nov 13 13:47:15 [31460] vm2 stonith-ng: ( st_client.c:719   )    info: internal_stonith_action_execute: 	Attempt 2 to execute fence_legacy (reboot). remaining timeout is 59
Nov 13 13:47:15 [31460] vm2 stonith-ng: ( st_client.c:739   )   debug: internal_stonith_action_execute: 	forking
Nov 13 13:47:15 [31460] vm2 stonith-ng: ( st_client.c:785   )   debug: internal_stonith_action_execute: 	sending args
Nov 13 13:47:17 vm2 stonith: [31607]: CRIT: external_reset_req: 'libvirt reset' for host vm3 failed with rc 1
Nov 13 13:47:17 [31460] vm2 stonith-ng: ( st_client.c:678   )   debug: stonith_action_async_done: 	Child process 31606 performing action 'reboot' exited with rc 1
Nov 13 13:47:17 [31460] vm2 stonith-ng: ( st_client.c:641   )    info: update_remaining_timeout: 	Attempted to execute agent fence_legacy (reboot) the maximum number of times (2) allowed
Nov 13 13:47:17 [31460] vm2 stonith-ng: (  commands.c:1463  )   trace: st_child_done: 	Operation reboot on F1 completed with rc=1 (0 remaining)
Nov 13 13:47:17 [31460] vm2 stonith-ng: (  commands.c:1677  )   trace: stonith_construct_async_reply: 	Creating a basic reply
Nov 13 13:47:17 [31460] vm2 stonith-ng: (  commands.c:1321  )   error: log_operation: 	Operation 'reboot' [31606] (call 3 from crmd.15883) for host 'vm3' with device 'F1' returned: -201 (Generic Pacemaker error)
Nov 13 13:47:17 [31460] vm2 stonith-ng: (  commands.c:1333  ) warning: log_operation: 	F1:31606 [ Performing: stonith -t external/libvirt -T reset vm3 ]
Nov 13 13:47:17 [31460] vm2 stonith-ng: (  commands.c:1333  ) warning: log_operation: 	F1:31606 [ failed: vm3 5 ]
Nov 13 13:47:17 [31460] vm2 stonith-ng: (  commands.c:1362  )   trace: stonith_send_async_reply: 	Reply   <st-reply st_origin="vm1" t="stonith-ng" st_op="st_fence" st_device_id="F1" st_remote_op="431c7488-013e-4900-bde7-a3ce154b35a3" st_clientid="e6cedc3f-e233-4a66-9291-ce359bd76aad" st_clientname="crmd.15883" st_target="vm3" st_device_action="st_fence" st_callid="3" st_callopt="0" st_rc="-201" st_output="Performing: stonith -t external/libvirt -T reset vm3\nfailed: vm3 5\n"/>
Nov 13 13:47:17 [31460] vm2 stonith-ng: (  commands.c:1369  )   trace: stonith_send_async_reply: 	Directed reply to vm1
Nov 13 13:47:17 [31460] vm2 stonith-ng: (  commands.c:213   )   trace: stonith_device_execute: 	Nothing further to do for F1
Nov 13 13:47:17 [31460] vm2 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]   <st-reply t="st_notify" subt="broadcast" st_op="st_notify" count="2" src="vm1">
Nov 13 13:47:17 [31460] vm2 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]     <st_calldata>
Nov 13 13:47:17 [31460] vm2 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]       <st_notify_fence state="4" st_rc="-201" st_target="vm3" st_device_action="reboot" st_delegate="vm2" st_remote_op="431c7488-013e-4900-bde7-a3ce154b35a3" st_origin="vm1" st_clientid="e6cedc3f-e233-4a66-9291-ce359bd76aad" st_clientname="crmd.15883"/>
Nov 13 13:47:17 [31460] vm2 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]     </st_calldata>
Nov 13 13:47:17 [31460] vm2 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]   </st-reply>
Nov 13 13:47:17 [31460] vm2 stonith-ng: (  commands.c:2049  )   debug: stonith_command: 	Processing st_notify reply 0 from vm1 (               0)
Nov 13 13:47:17 [31460] vm2 stonith-ng: (    remote.c:1281  )   debug: process_remote_stonith_exec: 	Marking call to reboot for vm3 on behalf of crmd.15883@431c7488-013e-4900-bde7-a3ce154b35a3.vm1: Generic Pacemaker error (-201)
Nov 13 13:47:17 [31460] vm2 stonith-ng: (    remote.c:297   )  notice: remote_op_done: 	Operation reboot of vm3 by vm1 for crmd.15883@vm1.431c7488: Generic Pacemaker error
Nov 13 13:47:17 [31460] vm2 stonith-ng: (  commands.c:1650  )   trace: stonith_construct_reply: 	Creating a basic reply
Nov 13 13:47:17 [31460] vm2 stonith-ng: (  commands.c:1666  )   trace: stonith_construct_reply: 	Attaching reply output
Nov 13 13:47:17 [31460] vm2 stonith-ng: (      main.c:241   )   trace: do_local_reply: 	Sending response
Nov 13 13:47:17 [31460] vm2 stonith-ng: (      main.c:244   )   trace: do_local_reply: 	Sending callback to request originator
Nov 13 13:47:17 [31460] vm2 stonith-ng: (      main.c:247   )   trace: do_local_reply: 	No client to send the response to.  F_STONITH_CLIENTID not set.
Nov 13 13:47:17 [31460] vm2 stonith-ng: (      main.c:369   )   trace: do_stonith_notify: 	Notifying clients
Nov 13 13:47:17 [31464] vm2       crmd: (  te_utils.c:167   )  notice: tengine_stonith_notify: 	Peer vm3 was not terminated (reboot) by vm1 for vm1: Generic Pacemaker error (ref=431c7488-013e-4900-bde7-a3ce154b35a3) by client crmd.15883
Nov 13 13:47:17 [31460] vm2 stonith-ng: (      main.c:318   )   trace: stonith_notify_client: 	Sent st_notify_fence notification to client crmd.31464.d20e14
Nov 13 13:47:17 [31460] vm2 stonith-ng: (      main.c:372   )   trace: do_stonith_notify: 	Notify complete
Nov 13 13:47:17 [31460] vm2 stonith-ng: (  commands.c:2063  )    info: stonith_command: 	Processed st_notify reply from vm1: OK (0)
Nov 13 13:47:19 [31460] vm2 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]   <stonith_command __name__="stonith_command" t="stonith-ng" st_async_id="682bdc12-35a4-431a-8773-4862cc8c39ef" st_op="st_query" st_callid="4" st_callopt="0" st_remote_op="682bdc12-35a4-431a-8773-4862cc8c39ef" st_target="vm3" st_device_action="reboot" st_origin="vm1" st_clientid="e6cedc3f-e233-4a66-9291-ce359bd76aad" st_clientname="crmd.15883" st_timeout="60" src="vm1"/>
Nov 13 13:47:19 [31460] vm2 stonith-ng: (  commands.c:2049  )   debug: stonith_command: 	Processing st_query 0 from vm1 (               0)
Nov 13 13:47:19 [31460] vm2 stonith-ng: (    remote.c:593   )   trace: create_remote_stonith_op: 	Created 682bdc12-35a4-431a-8773-4862cc8c39ef
Nov 13 13:47:19 [31460] vm2 stonith-ng: (    remote.c:623   )   trace: create_remote_stonith_op: 	Recorded new stonith op: 682bdc12-35a4-431a-8773-4862cc8c39ef - reboot of vm3 for crmd.15883
Nov 13 13:47:19 [31460] vm2 stonith-ng: (    remote.c:474   )   trace: merge_duplicates: 	Must be for different clients: crmd.15883
Nov 13 13:47:19 [31460] vm2 stonith-ng: (  commands.c:1296  )   debug: stonith_query: 	Query   <stonith_command __name__="stonith_command" t="stonith-ng" st_async_id="682bdc12-35a4-431a-8773-4862cc8c39ef" st_op="st_query" st_callid="4" st_callopt="0" st_remote_op="682bdc12-35a4-431a-8773-4862cc8c39ef" st_target="vm3" st_device_action="reboot" st_origin="vm1" st_clientid="e6cedc3f-e233-4a66-9291-ce359bd76aad" st_clientname="crmd.15883" st_timeout="60" src="vm1"/>
Nov 13 13:47:19 [31460] vm2 stonith-ng: (  commands.c:1650  )   trace: stonith_construct_reply: 	Creating a basic reply
Nov 13 13:47:19 [31460] vm2 stonith-ng: (  commands.c:1199  )   debug: get_capable_devices: 	Searching through 1 devices to see what is capable of action (reboot) for target vm3
Nov 13 13:47:19 [31460] vm2 stonith-ng: (  commands.c:1118  )  notice: can_fence_host_with_device: 	F1 can fence vm3: dynamic-list
Nov 13 13:47:19 [31460] vm2 stonith-ng: (  commands.c:1037  )   debug: search_devices_record_result: 	Finished Search. 1 devices can perform action (reboot) on node vm3
Nov 13 13:47:19 [31460] vm2 stonith-ng: (  commands.c:1254  )   debug: stonith_query_capable_device_cb: 	Found 1 matching devices for 'vm3'
Nov 13 13:47:19 [31460] vm2 stonith-ng: (  commands.c:1260  )   trace: stonith_query_capable_device_cb: 	Attaching query list output
Nov 13 13:47:19 [31460] vm2 stonith-ng: (  commands.c:2063  )    info: stonith_command: 	Processed st_query from vm1: OK (0)
Nov 13 13:47:19 [31460] vm2 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]   <stonith_command __name__="stonith_command" t="stonith-ng" st_async_id="682bdc12-35a4-431a-8773-4862cc8c39ef" st_op="st_fence" st_callid="4" st_callopt="0" st_remote_op="682bdc12-35a4-431a-8773-4862cc8c39ef" st_target="vm3" st_device_action="reboot" st_origin="vm1" st_clientid="e6cedc3f-e233-4a66-9291-ce359bd76aad" st_clientname="crmd.15883" st_timeout="60" st_device_id="F1" st_mode="slave" src="vm1"/>
Nov 13 13:47:19 [31460] vm2 stonith-ng: (  commands.c:2049  )   debug: stonith_command: 	Processing st_fence 0 from vm1 (               0)
Nov 13 13:47:19 [31460] vm2 stonith-ng: (  commands.c:165   )   trace: create_async_command: 	Command   <stonith_command __name__="stonith_command" t="stonith-ng" st_async_id="682bdc12-35a4-431a-8773-4862cc8c39ef" st_op="st_fence" st_callid="4" st_callopt="0" st_remote_op="682bdc12-35a4-431a-8773-4862cc8c39ef" st_target="vm3" st_device_action="reboot" st_origin="vm1" st_clientid="e6cedc3f-e233-4a66-9291-ce359bd76aad" st_clientname="crmd.15883" st_timeout="60" st_device_id="F1" st_mode="slave" src="vm1"/>
Nov 13 13:47:19 [31460] vm2 stonith-ng: (  commands.c:285   )   debug: schedule_stonith_command: 	Scheduling reboot on F1 for remote peer vm1 with op id (682bdc12-35a4-431a-8773-4862cc8c39ef) (timeout=60s)
Nov 13 13:47:19 [31460] vm2 stonith-ng: (  commands.c:2063  )    info: stonith_command: 	Processed st_fence from vm1: Operation now in progress (-115)
Nov 13 13:47:19 [31460] vm2 stonith-ng: ( st_client.c:576   )    info: stonith_action_create: 	Initiating action reboot for agent fence_legacy (target=vm3)
Nov 13 13:47:19 [31460] vm2 stonith-ng: ( st_client.c:476   )   debug: make_args: 	Performing reboot action for node 'vm3' as 'port=vm3'
Nov 13 13:47:19 [31460] vm2 stonith-ng: ( st_client.c:739   )   debug: internal_stonith_action_execute: 	forking
Nov 13 13:47:19 [31460] vm2 stonith-ng: ( st_client.c:785   )   debug: internal_stonith_action_execute: 	sending args
Nov 13 13:47:19 [31460] vm2 stonith-ng: (  commands.c:246   )   debug: stonith_device_execute: 	Operation reboot for node vm3 on F1 now running with pid=31618, timeout=60s
Nov 13 13:47:20 vm2 stonith: [31619]: CRIT: external_reset_req: 'libvirt reset' for host vm3 failed with rc 1
Nov 13 13:47:20 [31460] vm2 stonith-ng: ( st_client.c:678   )   debug: stonith_action_async_done: 	Child process 31618 performing action 'reboot' exited with rc 1
Nov 13 13:47:20 [31460] vm2 stonith-ng: ( st_client.c:719   )    info: internal_stonith_action_execute: 	Attempt 2 to execute fence_legacy (reboot). remaining timeout is 59
Nov 13 13:47:20 [31460] vm2 stonith-ng: ( st_client.c:739   )   debug: internal_stonith_action_execute: 	forking
Nov 13 13:47:20 [31460] vm2 stonith-ng: ( st_client.c:785   )   debug: internal_stonith_action_execute: 	sending args
Nov 13 13:47:21 [31464] vm2       crmd: (  throttle.c:256   )   debug: throttle_cib_load: 	Current CIB load from 2 ticks in 30s is 0.000667 (@100 tps)
Nov 13 13:47:21 [31464] vm2       crmd: (  throttle.c:298   )   debug: throttle_load_avg: 	Current load is 0.000000 (full: 0.00 0.04 0.01 1/111 31641)
Nov 13 13:47:21 [31464] vm2       crmd: (  throttle.c:378   )   debug: throttle_io_load: 	Current IO load is 0.000000
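The CIB load figure above appears to be ticks divided by the sampling window times the tick rate: 2 ticks over 30 s at 100 ticks per second gives 0.000667 (this reading is an assumption based on the numbers in the message, not taken from throttle.c):

    # 2 ticks in a 30 s window at 100 ticks/s:
    ticks, window_s, ticks_per_s = 2, 30, 100
    print(round(ticks / (window_s * ticks_per_s), 6))   # 0.000667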
Nov 13 13:47:22 vm2 stonith: [31631]: CRIT: external_reset_req: 'libvirt reset' for host vm3 failed with rc 1
Nov 13 13:47:22 [31460] vm2 stonith-ng: ( st_client.c:678   )   debug: stonith_action_async_done: 	Child process 31630 performing action 'reboot' exited with rc 1
Nov 13 13:47:22 [31460] vm2 stonith-ng: ( st_client.c:641   )    info: update_remaining_timeout: 	Attempted to execute agent fence_legacy (reboot) the maximum number of times (2) allowed
Nov 13 13:47:22 [31460] vm2 stonith-ng: (  commands.c:1463  )   trace: st_child_done: 	Operation reboot on F1 completed with rc=1 (0 remaining)
Nov 13 13:47:22 [31460] vm2 stonith-ng: (  commands.c:1677  )   trace: stonith_construct_async_reply: 	Creating a basic reply
Nov 13 13:47:22 [31460] vm2 stonith-ng: (  commands.c:1321  )   error: log_operation: 	Operation 'reboot' [31630] (call 4 from crmd.15883) for host 'vm3' with device 'F1' returned: -201 (Generic Pacemaker error)
Nov 13 13:47:22 [31460] vm2 stonith-ng: (  commands.c:1333  ) warning: log_operation: 	F1:31630 [ Performing: stonith -t external/libvirt -T reset vm3 ]
Nov 13 13:47:22 [31460] vm2 stonith-ng: (  commands.c:1333  ) warning: log_operation: 	F1:31630 [ failed: vm3 5 ]
Nov 13 13:47:22 [31460] vm2 stonith-ng: (  commands.c:1362  )   trace: stonith_send_async_reply: 	Reply   <st-reply st_origin="vm1" t="stonith-ng" st_op="st_fence" st_device_id="F1" st_remote_op="682bdc12-35a4-431a-8773-4862cc8c39ef" st_clientid="e6cedc3f-e233-4a66-9291-ce359bd76aad" st_clientname="crmd.15883" st_target="vm3" st_device_action="st_fence" st_callid="4" st_callopt="0" st_rc="-201" st_output="Performing: stonith -t external/libvirt -T reset vm3\nfailed: vm3 5\n"/>
Nov 13 13:47:22 [31460] vm2 stonith-ng: (  commands.c:1369  )   trace: stonith_send_async_reply: 	Directed reply to vm1
Nov 13 13:47:22 [31460] vm2 stonith-ng: (  commands.c:213   )   trace: stonith_device_execute: 	Nothing further to do for F1
Nov 13 13:47:22 [31460] vm2 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]   <st-reply t="st_notify" subt="broadcast" st_op="st_notify" count="3" src="vm1">
Nov 13 13:47:22 [31460] vm2 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]     <st_calldata>
Nov 13 13:47:22 [31460] vm2 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]       <st_notify_fence state="4" st_rc="-201" st_target="vm3" st_device_action="reboot" st_delegate="vm2" st_remote_op="682bdc12-35a4-431a-8773-4862cc8c39ef" st_origin="vm1" st_clientid="e6cedc3f-e233-4a66-9291-ce359bd76aad" st_clientname="crmd.15883"/>
Nov 13 13:47:22 [31460] vm2 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]     </st_calldata>
Nov 13 13:47:22 [31460] vm2 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]   </st-reply>
Nov 13 13:47:22 [31460] vm2 stonith-ng: (  commands.c:2049  )   debug: stonith_command: 	Processing st_notify reply 0 from vm1 (               0)
Nov 13 13:47:22 [31460] vm2 stonith-ng: (    remote.c:1281  )   debug: process_remote_stonith_exec: 	Marking call to reboot for vm3 on behalf of crmd.15883@682bdc12-35a4-431a-8773-4862cc8c39ef.vm1: Generic Pacemaker error (-201)
Nov 13 13:47:22 [31460] vm2 stonith-ng: (    remote.c:297   )  notice: remote_op_done: 	Operation reboot of vm3 by vm1 for crmd.15883@vm1.682bdc12: Generic Pacemaker error
Nov 13 13:47:22 [31460] vm2 stonith-ng: (  commands.c:1650  )   trace: stonith_construct_reply: 	Creating a basic reply
Nov 13 13:47:22 [31460] vm2 stonith-ng: (  commands.c:1666  )   trace: stonith_construct_reply: 	Attaching reply output
Nov 13 13:47:22 [31460] vm2 stonith-ng: (      main.c:241   )   trace: do_local_reply: 	Sending response
Nov 13 13:47:22 [31460] vm2 stonith-ng: (      main.c:244   )   trace: do_local_reply: 	Sending callback to request originator
Nov 13 13:47:22 [31460] vm2 stonith-ng: (      main.c:247   )   trace: do_local_reply: 	No client to send the response to.  F_STONITH_CLIENTID not set.
Nov 13 13:47:22 [31460] vm2 stonith-ng: (      main.c:369   )   trace: do_stonith_notify: 	Notifying clients
Nov 13 13:47:22 [31464] vm2       crmd: (  te_utils.c:167   )  notice: tengine_stonith_notify: 	Peer vm3 was not terminated (reboot) by vm1 for vm1: Generic Pacemaker error (ref=682bdc12-35a4-431a-8773-4862cc8c39ef) by client crmd.15883
Nov 13 13:47:22 [31460] vm2 stonith-ng: (      main.c:318   )   trace: stonith_notify_client: 	Sent st_notify_fence notification to client crmd.31464.d20e14
Nov 13 13:47:22 [31460] vm2 stonith-ng: (      main.c:372   )   trace: do_stonith_notify: 	Notify complete
Nov 13 13:47:22 [31460] vm2 stonith-ng: (  commands.c:2063  )    info: stonith_command: 	Processed st_notify reply from vm1: OK (0)
Nov 13 13:47:24 [31460] vm2 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]   <stonith_command __name__="stonith_command" t="stonith-ng" st_async_id="d761e73f-f337-48cc-b2a1-5b2d722d2738" st_op="st_query" st_callid="5" st_callopt="0" st_remote_op="d761e73f-f337-48cc-b2a1-5b2d722d2738" st_target="vm3" st_device_action="reboot" st_origin="vm1" st_clientid="e6cedc3f-e233-4a66-9291-ce359bd76aad" st_clientname="crmd.15883" st_timeout="60" src="vm1"/>
Nov 13 13:47:24 [31460] vm2 stonith-ng: (  commands.c:2049  )   debug: stonith_command: 	Processing st_query 0 from vm1 (               0)
Nov 13 13:47:24 [31460] vm2 stonith-ng: (    remote.c:593   )   trace: create_remote_stonith_op: 	Created d761e73f-f337-48cc-b2a1-5b2d722d2738
Nov 13 13:47:24 [31460] vm2 stonith-ng: (    remote.c:623   )   trace: create_remote_stonith_op: 	Recorded new stonith op: d761e73f-f337-48cc-b2a1-5b2d722d2738 - reboot of vm3 for crmd.15883
Nov 13 13:47:24 [31460] vm2 stonith-ng: (    remote.c:474   )   trace: merge_duplicates: 	Must be for different clients: crmd.15883
Nov 13 13:47:24 [31460] vm2 stonith-ng: (  commands.c:1296  )   debug: stonith_query: 	Query   <stonith_command __name__="stonith_command" t="stonith-ng" st_async_id="d761e73f-f337-48cc-b2a1-5b2d722d2738" st_op="st_query" st_callid="5" st_callopt="0" st_remote_op="d761e73f-f337-48cc-b2a1-5b2d722d2738" st_target="vm3" st_device_action="reboot" st_origin="vm1" st_clientid="e6cedc3f-e233-4a66-9291-ce359bd76aad" st_clientname="crmd.15883" st_timeout="60" src="vm1"/>
Nov 13 13:47:24 [31460] vm2 stonith-ng: (  commands.c:1650  )   trace: stonith_construct_reply: 	Creating a basic reply
Nov 13 13:47:24 [31460] vm2 stonith-ng: (  commands.c:1199  )   debug: get_capable_devices: 	Searching through 1 devices to see what is capable of action (reboot) for target vm3
Nov 13 13:47:24 [31460] vm2 stonith-ng: (  commands.c:1118  )  notice: can_fence_host_with_device: 	F1 can fence vm3: dynamic-list
Nov 13 13:47:24 [31460] vm2 stonith-ng: (  commands.c:1037  )   debug: search_devices_record_result: 	Finished Search. 1 devices can perform action (reboot) on node vm3
Nov 13 13:47:24 [31460] vm2 stonith-ng: (  commands.c:1254  )   debug: stonith_query_capable_device_cb: 	Found 1 matching devices for 'vm3'
Nov 13 13:47:24 [31460] vm2 stonith-ng: (  commands.c:1260  )   trace: stonith_query_capable_device_cb: 	Attaching query list output
Nov 13 13:47:24 [31460] vm2 stonith-ng: (  commands.c:2063  )    info: stonith_command: 	Processed st_query from vm1: OK (0)
Nov 13 13:47:24 [31460] vm2 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]   <stonith_command __name__="stonith_command" t="stonith-ng" st_async_id="d761e73f-f337-48cc-b2a1-5b2d722d2738" st_op="st_fence" st_callid="5" st_callopt="0" st_remote_op="d761e73f-f337-48cc-b2a1-5b2d722d2738" st_target="vm3" st_device_action="reboot" st_origin="vm1" st_clientid="e6cedc3f-e233-4a66-9291-ce359bd76aad" st_clientname="crmd.15883" st_timeout="60" st_device_id="F1" st_mode="slave" src="vm1"/>
Nov 13 13:47:24 [31460] vm2 stonith-ng: (  commands.c:2049  )   debug: stonith_command: 	Processing st_fence 0 from vm1 (               0)
Nov 13 13:47:24 [31460] vm2 stonith-ng: (  commands.c:165   )   trace: create_async_command: 	Command   <stonith_command __name__="stonith_command" t="stonith-ng" st_async_id="d761e73f-f337-48cc-b2a1-5b2d722d2738" st_op="st_fence" st_callid="5" st_callopt="0" st_remote_op="d761e73f-f337-48cc-b2a1-5b2d722d2738" st_target="vm3" st_device_action="reboot" st_origin="vm1" st_clientid="e6cedc3f-e233-4a66-9291-ce359bd76aad" st_clientname="crmd.15883" st_timeout="60" st_device_id="F1" st_mode="slave" src="vm1"/>
Nov 13 13:47:24 [31460] vm2 stonith-ng: (  commands.c:285   )   debug: schedule_stonith_command: 	Scheduling reboot on F1 for remote peer vm1 with op id (d761e73f-f337-48cc-b2a1-5b2d722d2738) (timeout=60s)
Nov 13 13:47:24 [31460] vm2 stonith-ng: (  commands.c:2063  )    info: stonith_command: 	Processed st_fence from vm1: Operation now in progress (-115)
Nov 13 13:47:24 [31460] vm2 stonith-ng: ( st_client.c:576   )    info: stonith_action_create: 	Initiating action reboot for agent fence_legacy (target=vm3)
Nov 13 13:47:24 [31460] vm2 stonith-ng: ( st_client.c:476   )   debug: make_args: 	Performing reboot action for node 'vm3' as 'port=vm3'
Nov 13 13:47:24 [31460] vm2 stonith-ng: ( st_client.c:739   )   debug: internal_stonith_action_execute: 	forking
Nov 13 13:47:24 [31460] vm2 stonith-ng: ( st_client.c:785   )   debug: internal_stonith_action_execute: 	sending args
Nov 13 13:47:24 [31460] vm2 stonith-ng: (  commands.c:246   )   debug: stonith_device_execute: 	Operation reboot for node vm3 on F1 now running with pid=31642, timeout=60s
Nov 13 13:47:25 vm2 stonith: [31643]: CRIT: external_reset_req: 'libvirt reset' for host vm3 failed with rc 1
Nov 13 13:47:25 [31460] vm2 stonith-ng: ( st_client.c:678   )   debug: stonith_action_async_done: 	Child process 31642 performing action 'reboot' exited with rc 1
Nov 13 13:47:25 [31460] vm2 stonith-ng: ( st_client.c:719   )    info: internal_stonith_action_execute: 	Attempt 2 to execute fence_legacy (reboot). remaining timeout is 59
Nov 13 13:47:25 [31460] vm2 stonith-ng: ( st_client.c:739   )   debug: internal_stonith_action_execute: 	forking
Nov 13 13:47:25 [31460] vm2 stonith-ng: ( st_client.c:785   )   debug: internal_stonith_action_execute: 	sending args
Nov 13 13:47:27 vm2 stonith: [31655]: CRIT: external_reset_req: 'libvirt reset' for host vm3 failed with rc 1
Nov 13 13:47:27 [31460] vm2 stonith-ng: ( st_client.c:678   )   debug: stonith_action_async_done: 	Child process 31654 performing action 'reboot' exited with rc 1
Nov 13 13:47:27 [31460] vm2 stonith-ng: ( st_client.c:641   )    info: update_remaining_timeout: 	Attempted to execute agent fence_legacy (reboot) the maximum number of times (2) allowed
Nov 13 13:47:27 [31460] vm2 stonith-ng: (  commands.c:1463  )   trace: st_child_done: 	Operation reboot on F1 completed with rc=1 (0 remaining)
Nov 13 13:47:27 [31460] vm2 stonith-ng: (  commands.c:1677  )   trace: stonith_construct_async_reply: 	Creating a basic reply
Nov 13 13:47:27 [31460] vm2 stonith-ng: (  commands.c:1321  )   error: log_operation: 	Operation 'reboot' [31654] (call 5 from crmd.15883) for host 'vm3' with device 'F1' returned: -201 (Generic Pacemaker error)
Nov 13 13:47:27 [31460] vm2 stonith-ng: (  commands.c:1333  ) warning: log_operation: 	F1:31654 [ Performing: stonith -t external/libvirt -T reset vm3 ]
Nov 13 13:47:27 [31460] vm2 stonith-ng: (  commands.c:1333  ) warning: log_operation: 	F1:31654 [ failed: vm3 5 ]
Nov 13 13:47:27 [31460] vm2 stonith-ng: (  commands.c:1362  )   trace: stonith_send_async_reply: 	Reply   <st-reply st_origin="vm1" t="stonith-ng" st_op="st_fence" st_device_id="F1" st_remote_op="d761e73f-f337-48cc-b2a1-5b2d722d2738" st_clientid="e6cedc3f-e233-4a66-9291-ce359bd76aad" st_clientname="crmd.15883" st_target="vm3" st_device_action="st_fence" st_callid="5" st_callopt="0" st_rc="-201" st_output="Performing: stonith -t external/libvirt -T reset vm3\nfailed: vm3 5\n"/>
Nov 13 13:47:27 [31460] vm2 stonith-ng: (  commands.c:1369  )   trace: stonith_send_async_reply: 	Directed reply to vm1
Nov 13 13:47:27 [31460] vm2 stonith-ng: (  commands.c:213   )   trace: stonith_device_execute: 	Nothing further to do for F1
Nov 13 13:47:27 [31460] vm2 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]   <st-reply t="st_notify" subt="broadcast" st_op="st_notify" count="4" src="vm1">
Nov 13 13:47:27 [31460] vm2 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]     <st_calldata>
Nov 13 13:47:27 [31460] vm2 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]       <st_notify_fence state="4" st_rc="-201" st_target="vm3" st_device_action="reboot" st_delegate="vm2" st_remote_op="d761e73f-f337-48cc-b2a1-5b2d722d2738" st_origin="vm1" st_clientid="e6cedc3f-e233-4a66-9291-ce359bd76aad" st_clientname="crmd.15883"/>
Nov 13 13:47:27 [31460] vm2 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]     </st_calldata>
Nov 13 13:47:27 [31460] vm2 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]   </st-reply>
Nov 13 13:47:27 [31460] vm2 stonith-ng: (  commands.c:2049  )   debug: stonith_command: 	Processing st_notify reply 0 from vm1 (               0)
Nov 13 13:47:27 [31460] vm2 stonith-ng: (    remote.c:1281  )   debug: process_remote_stonith_exec: 	Marking call to reboot for vm3 on behalf of crmd.15883@d761e73f-f337-48cc-b2a1-5b2d722d2738.vm1: Generic Pacemaker error (-201)
Nov 13 13:47:27 [31460] vm2 stonith-ng: (    remote.c:297   )  notice: remote_op_done: 	Operation reboot of vm3 by vm1 for crmd.15883@vm1.d761e73f: Generic Pacemaker error
Nov 13 13:47:27 [31460] vm2 stonith-ng: (  commands.c:1650  )   trace: stonith_construct_reply: 	Creating a basic reply
Nov 13 13:47:27 [31460] vm2 stonith-ng: (  commands.c:1666  )   trace: stonith_construct_reply: 	Attaching reply output
Nov 13 13:47:27 [31460] vm2 stonith-ng: (      main.c:241   )   trace: do_local_reply: 	Sending response
Nov 13 13:47:27 [31460] vm2 stonith-ng: (      main.c:244   )   trace: do_local_reply: 	Sending callback to request originator
Nov 13 13:47:27 [31460] vm2 stonith-ng: (      main.c:247   )   trace: do_local_reply: 	No client to send the response to.  F_STONITH_CLIENTID not set.
Nov 13 13:47:27 [31460] vm2 stonith-ng: (      main.c:369   )   trace: do_stonith_notify: 	Notifying clients
Nov 13 13:47:27 [31464] vm2       crmd: (  te_utils.c:167   )  notice: tengine_stonith_notify: 	Peer vm3 was not terminated (reboot) by vm1 for vm1: Generic Pacemaker error (ref=d761e73f-f337-48cc-b2a1-5b2d722d2738) by client crmd.15883
Nov 13 13:47:27 [31460] vm2 stonith-ng: (      main.c:318   )   trace: stonith_notify_client: 	Sent st_notify_fence notification to client crmd.31464.d20e14
Nov 13 13:47:27 [31460] vm2 stonith-ng: (      main.c:372   )   trace: do_stonith_notify: 	Notify complete
Nov 13 13:47:27 [31460] vm2 stonith-ng: (  commands.c:2063  )    info: stonith_command: 	Processed st_notify reply from vm1: OK (0)
Nov 13 13:47:29 [31460] vm2 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]   <stonith_command __name__="stonith_command" t="stonith-ng" st_async_id="11df91ab-fc81-43aa-941d-ffa1204df1c9" st_op="st_query" st_callid="6" st_callopt="0" st_remote_op="11df91ab-fc81-43aa-941d-ffa1204df1c9" st_target="vm3" st_device_action="reboot" st_origin="vm1" st_clientid="e6cedc3f-e233-4a66-9291-ce359bd76aad" st_clientname="crmd.15883" st_timeout="60" src="vm1"/>
Nov 13 13:47:29 [31460] vm2 stonith-ng: (  commands.c:2049  )   debug: stonith_command: 	Processing st_query 0 from vm1 (               0)
Nov 13 13:47:29 [31460] vm2 stonith-ng: (    remote.c:593   )   trace: create_remote_stonith_op: 	Created 11df91ab-fc81-43aa-941d-ffa1204df1c9
Nov 13 13:47:29 [31460] vm2 stonith-ng: (    remote.c:623   )   trace: create_remote_stonith_op: 	Recorded new stonith op: 11df91ab-fc81-43aa-941d-ffa1204df1c9 - reboot of vm3 for crmd.15883
Nov 13 13:47:29 [31460] vm2 stonith-ng: (    remote.c:474   )   trace: merge_duplicates: 	Must be for different clients: crmd.15883
Nov 13 13:47:29 [31460] vm2 stonith-ng: (  commands.c:1296  )   debug: stonith_query: 	Query   <stonith_command __name__="stonith_command" t="stonith-ng" st_async_id="11df91ab-fc81-43aa-941d-ffa1204df1c9" st_op="st_query" st_callid="6" st_callopt="0" st_remote_op="11df91ab-fc81-43aa-941d-ffa1204df1c9" st_target="vm3" st_device_action="reboot" st_origin="vm1" st_clientid="e6cedc3f-e233-4a66-9291-ce359bd76aad" st_clientname="crmd.15883" st_timeout="60" src="vm1"/>
Nov 13 13:47:29 [31460] vm2 stonith-ng: (  commands.c:1650  )   trace: stonith_construct_reply: 	Creating a basic reply
Nov 13 13:47:29 [31460] vm2 stonith-ng: (  commands.c:1199  )   debug: get_capable_devices: 	Searching through 1 devices to see what is capable of action (reboot) for target vm3
Nov 13 13:47:29 [31460] vm2 stonith-ng: (  commands.c:1118  )  notice: can_fence_host_with_device: 	F1 can fence vm3: dynamic-list
Nov 13 13:47:29 [31460] vm2 stonith-ng: (  commands.c:1037  )   debug: search_devices_record_result: 	Finished Search. 1 devices can perform action (reboot) on node vm3
Nov 13 13:47:29 [31460] vm2 stonith-ng: (  commands.c:1254  )   debug: stonith_query_capable_device_cb: 	Found 1 matching devices for 'vm3'
Nov 13 13:47:29 [31460] vm2 stonith-ng: (  commands.c:1260  )   trace: stonith_query_capable_device_cb: 	Attaching query list output
Nov 13 13:47:29 [31460] vm2 stonith-ng: (  commands.c:2063  )    info: stonith_command: 	Processed st_query from vm1: OK (0)
Nov 13 13:47:33 [31460] vm2 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]   <st-reply t="st_notify" subt="broadcast" st_op="st_notify" count="5" src="vm1">
Nov 13 13:47:33 [31460] vm2 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]     <st_calldata>
Nov 13 13:47:33 [31460] vm2 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]       <st_notify_fence state="4" st_rc="-201" st_target="vm3" st_device_action="reboot" st_delegate="vm1" st_remote_op="11df91ab-fc81-43aa-941d-ffa1204df1c9" st_origin="vm1" st_clientid="e6cedc3f-e233-4a66-9291-ce359bd76aad" st_clientname="crmd.15883"/>
Nov 13 13:47:33 [31460] vm2 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]     </st_calldata>
Nov 13 13:47:33 [31460] vm2 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]   </st-reply>
Nov 13 13:47:33 [31460] vm2 stonith-ng: (  commands.c:2049  )   debug: stonith_command: 	Processing st_notify reply 0 from vm1 (               0)
Nov 13 13:47:33 [31460] vm2 stonith-ng: (    remote.c:1281  )   debug: process_remote_stonith_exec: 	Marking call to reboot for vm3 on behalf of crmd.15883@11df91ab-fc81-43aa-941d-ffa1204df1c9.vm1: Generic Pacemaker error (-201)
Nov 13 13:47:33 [31460] vm2 stonith-ng: (    remote.c:297   )  notice: remote_op_done: 	Operation reboot of vm3 by vm1 for crmd.15883@vm1.11df91ab: Generic Pacemaker error
Nov 13 13:47:33 [31460] vm2 stonith-ng: (  commands.c:1650  )   trace: stonith_construct_reply: 	Creating a basic reply
Nov 13 13:47:33 [31460] vm2 stonith-ng: (  commands.c:1666  )   trace: stonith_construct_reply: 	Attaching reply output
Nov 13 13:47:33 [31460] vm2 stonith-ng: (      main.c:241   )   trace: do_local_reply: 	Sending response
Nov 13 13:47:33 [31460] vm2 stonith-ng: (      main.c:244   )   trace: do_local_reply: 	Sending callback to request originator
Nov 13 13:47:33 [31460] vm2 stonith-ng: (      main.c:247   )   trace: do_local_reply: 	No client to send the response to.  F_STONITH_CLIENTID not set.
Nov 13 13:47:33 [31460] vm2 stonith-ng: (      main.c:369   )   trace: do_stonith_notify: 	Notifying clients
Nov 13 13:47:33 [31460] vm2 stonith-ng: (      main.c:318   )   trace: stonith_notify_client: 	Sent st_notify_fence notification to client crmd.31464.d20e14
Nov 13 13:47:33 [31460] vm2 stonith-ng: (      main.c:372   )   trace: do_stonith_notify: 	Notify complete
Nov 13 13:47:33 [31460] vm2 stonith-ng: (  commands.c:2063  )    info: stonith_command: 	Processed st_notify reply from vm1: OK (0)
Nov 13 13:47:33 [31464] vm2       crmd: (  te_utils.c:167   )  notice: tengine_stonith_notify: 	Peer vm3 was not terminated (reboot) by vm1 for vm1: Generic Pacemaker error (ref=11df91ab-fc81-43aa-941d-ffa1204df1c9) by client crmd.15883
Nov 13 13:47:35 [31460] vm2 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]   <stonith_command __name__="stonith_command" t="stonith-ng" st_async_id="84777767-aa8b-4e04-8dec-b26dae36aaff" st_op="st_query" st_callid="7" st_callopt="0" st_remote_op="84777767-aa8b-4e04-8dec-b26dae36aaff" st_target="vm3" st_device_action="reboot" st_origin="vm1" st_clientid="e6cedc3f-e233-4a66-9291-ce359bd76aad" st_clientname="crmd.15883" st_timeout="60" src="vm1"/>
Nov 13 13:47:35 [31460] vm2 stonith-ng: (  commands.c:2049  )   debug: stonith_command: 	Processing st_query 0 from vm1 (               0)
Nov 13 13:47:35 [31460] vm2 stonith-ng: (    remote.c:593   )   trace: create_remote_stonith_op: 	Created 84777767-aa8b-4e04-8dec-b26dae36aaff
Nov 13 13:47:35 [31460] vm2 stonith-ng: (    remote.c:623   )   trace: create_remote_stonith_op: 	Recorded new stonith op: 84777767-aa8b-4e04-8dec-b26dae36aaff - reboot of vm3 for crmd.15883
Nov 13 13:47:35 [31460] vm2 stonith-ng: (    remote.c:474   )   trace: merge_duplicates: 	Must be for different clients: crmd.15883
Nov 13 13:47:35 [31460] vm2 stonith-ng: (  commands.c:1296  )   debug: stonith_query: 	Query   <stonith_command __name__="stonith_command" t="stonith-ng" st_async_id="84777767-aa8b-4e04-8dec-b26dae36aaff" st_op="st_query" st_callid="7" st_callopt="0" st_remote_op="84777767-aa8b-4e04-8dec-b26dae36aaff" st_target="vm3" st_device_action="reboot" st_origin="vm1" st_clientid="e6cedc3f-e233-4a66-9291-ce359bd76aad" st_clientname="crmd.15883" st_timeout="60" src="vm1"/>
Nov 13 13:47:35 [31460] vm2 stonith-ng: (  commands.c:1650  )   trace: stonith_construct_reply: 	Creating a basic reply
Nov 13 13:47:35 [31460] vm2 stonith-ng: (  commands.c:1199  )   debug: get_capable_devices: 	Searching through 1 devices to see what is capable of action (reboot) for target vm3
Nov 13 13:47:35 [31460] vm2 stonith-ng: (  commands.c:1118  )  notice: can_fence_host_with_device: 	F1 can fence vm3: dynamic-list
Nov 13 13:47:35 [31460] vm2 stonith-ng: (  commands.c:1037  )   debug: search_devices_record_result: 	Finished Search. 1 devices can perform action (reboot) on node vm3
Nov 13 13:47:35 [31460] vm2 stonith-ng: (  commands.c:1254  )   debug: stonith_query_capable_device_cb: 	Found 1 matching devices for 'vm3'
Nov 13 13:47:35 [31460] vm2 stonith-ng: (  commands.c:1260  )   trace: stonith_query_capable_device_cb: 	Attaching query list output
Nov 13 13:47:35 [31460] vm2 stonith-ng: (  commands.c:2063  )    info: stonith_command: 	Processed st_query from vm1: OK (0)
Nov 13 13:47:35 [31460] vm2 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]   <stonith_command __name__="stonith_command" t="stonith-ng" st_async_id="84777767-aa8b-4e04-8dec-b26dae36aaff" st_op="st_fence" st_callid="7" st_callopt="0" st_remote_op="84777767-aa8b-4e04-8dec-b26dae36aaff" st_target="vm3" st_device_action="reboot" st_origin="vm1" st_clientid="e6cedc3f-e233-4a66-9291-ce359bd76aad" st_clientname="crmd.15883" st_timeout="60" st_device_id="F1" st_mode="slave" src="vm1"/>
Nov 13 13:47:35 [31460] vm2 stonith-ng: (  commands.c:2049  )   debug: stonith_command: 	Processing st_fence 0 from vm1 (               0)
Nov 13 13:47:35 [31460] vm2 stonith-ng: (  commands.c:165   )   trace: create_async_command: 	Command   <stonith_command __name__="stonith_command" t="stonith-ng" st_async_id="84777767-aa8b-4e04-8dec-b26dae36aaff" st_op="st_fence" st_callid="7" st_callopt="0" st_remote_op="84777767-aa8b-4e04-8dec-b26dae36aaff" st_target="vm3" st_device_action="reboot" st_origin="vm1" st_clientid="e6cedc3f-e233-4a66-9291-ce359bd76aad" st_clientname="crmd.15883" st_timeout="60" st_device_id="F1" st_mode="slave" src="vm1"/>
Nov 13 13:47:35 [31460] vm2 stonith-ng: (  commands.c:285   )   debug: schedule_stonith_command: 	Scheduling reboot on F1 for remote peer vm1 with op id (84777767-aa8b-4e04-8dec-b26dae36aaff) (timeout=60s)
Nov 13 13:47:35 [31460] vm2 stonith-ng: (  commands.c:2063  )    info: stonith_command: 	Processed st_fence from vm1: Operation now in progress (-115)
Nov 13 13:47:35 [31460] vm2 stonith-ng: ( st_client.c:576   )    info: stonith_action_create: 	Initiating action reboot for agent fence_legacy (target=vm3)
Nov 13 13:47:35 [31460] vm2 stonith-ng: ( st_client.c:476   )   debug: make_args: 	Performing reboot action for node 'vm3' as 'port=vm3'
Nov 13 13:47:35 [31460] vm2 stonith-ng: ( st_client.c:739   )   debug: internal_stonith_action_execute: 	forking
Nov 13 13:47:35 [31460] vm2 stonith-ng: ( st_client.c:785   )   debug: internal_stonith_action_execute: 	sending args
Nov 13 13:47:35 [31460] vm2 stonith-ng: (  commands.c:246   )   debug: stonith_device_execute: 	Operation reboot for node vm3 on F1 now running with pid=31672, timeout=60s
Nov 13 13:47:36 vm2 stonith: [31673]: CRIT: external_reset_req: 'libvirt reset' for host vm3 failed with rc 1
Nov 13 13:47:36 [31460] vm2 stonith-ng: ( st_client.c:678   )   debug: stonith_action_async_done: 	Child process 31672 performing action 'reboot' exited with rc 1
Nov 13 13:47:36 [31460] vm2 stonith-ng: ( st_client.c:719   )    info: internal_stonith_action_execute: 	Attempt 2 to execute fence_legacy (reboot). remaining timeout is 59
Nov 13 13:47:36 [31460] vm2 stonith-ng: ( st_client.c:739   )   debug: internal_stonith_action_execute: 	forking
Nov 13 13:47:36 [31460] vm2 stonith-ng: ( st_client.c:785   )   debug: internal_stonith_action_execute: 	sending args
Nov 13 13:47:38 vm2 stonith: [31685]: CRIT: external_reset_req: 'libvirt reset' for host vm3 failed with rc 1
Nov 13 13:47:38 [31460] vm2 stonith-ng: ( st_client.c:678   )   debug: stonith_action_async_done: 	Child process 31684 performing action 'reboot' exited with rc 1
Nov 13 13:47:38 [31460] vm2 stonith-ng: ( st_client.c:641   )    info: update_remaining_timeout: 	Attempted to execute agent fence_legacy (reboot) the maximum number of times (2) allowed
Nov 13 13:47:38 [31460] vm2 stonith-ng: (  commands.c:1463  )   trace: st_child_done: 	Operation reboot on F1 completed with rc=1 (0 remaining)
Nov 13 13:47:38 [31460] vm2 stonith-ng: (  commands.c:1677  )   trace: stonith_construct_async_reply: 	Creating a basic reply
Nov 13 13:47:38 [31460] vm2 stonith-ng: (  commands.c:1321  )   error: log_operation: 	Operation 'reboot' [31684] (call 7 from crmd.15883) for host 'vm3' with device 'F1' returned: -201 (Generic Pacemaker error)
Nov 13 13:47:38 [31460] vm2 stonith-ng: (  commands.c:1333  ) warning: log_operation: 	F1:31684 [ Performing: stonith -t external/libvirt -T reset vm3 ]
Nov 13 13:47:38 [31460] vm2 stonith-ng: (  commands.c:1333  ) warning: log_operation: 	F1:31684 [ failed: vm3 5 ]
Nov 13 13:47:38 [31460] vm2 stonith-ng: (  commands.c:1362  )   trace: stonith_send_async_reply: 	Reply   <st-reply st_origin="vm1" t="stonith-ng" st_op="st_fence" st_device_id="F1" st_remote_op="84777767-aa8b-4e04-8dec-b26dae36aaff" st_clientid="e6cedc3f-e233-4a66-9291-ce359bd76aad" st_clientname="crmd.15883" st_target="vm3" st_device_action="st_fence" st_callid="7" st_callopt="0" st_rc="-201" st_output="Performing: stonith -t external/libvirt -T reset vm3\nfailed: vm3 5\n"/>
Nov 13 13:47:38 [31460] vm2 stonith-ng: (  commands.c:1369  )   trace: stonith_send_async_reply: 	Directed reply to vm1
Nov 13 13:47:38 [31460] vm2 stonith-ng: (  commands.c:213   )   trace: stonith_device_execute: 	Nothing further to do for F1
Nov 13 13:47:38 [31460] vm2 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]   <st-reply t="st_notify" subt="broadcast" st_op="st_notify" count="6" src="vm1">
Nov 13 13:47:38 [31460] vm2 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]     <st_calldata>
Nov 13 13:47:38 [31460] vm2 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]       <st_notify_fence state="4" st_rc="-201" st_target="vm3" st_device_action="reboot" st_delegate="vm2" st_remote_op="84777767-aa8b-4e04-8dec-b26dae36aaff" st_origin="vm1" st_clientid="e6cedc3f-e233-4a66-9291-ce359bd76aad" st_clientname="crmd.15883"/>
Nov 13 13:47:38 [31460] vm2 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]     </st_calldata>
Nov 13 13:47:38 [31460] vm2 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]   </st-reply>
Nov 13 13:47:38 [31460] vm2 stonith-ng: (  commands.c:2049  )   debug: stonith_command: 	Processing st_notify reply 0 from vm1 (               0)
Nov 13 13:47:38 [31460] vm2 stonith-ng: (    remote.c:1281  )   debug: process_remote_stonith_exec: 	Marking call to reboot for vm3 on behalf of crmd.15883@84777767-aa8b-4e04-8dec-b26dae36aaff.vm1: Generic Pacemaker error (-201)
Nov 13 13:47:38 [31460] vm2 stonith-ng: (    remote.c:297   )  notice: remote_op_done: 	Operation reboot of vm3 by vm1 for crmd.15883@vm1.84777767: Generic Pacemaker error
Nov 13 13:47:38 [31460] vm2 stonith-ng: (  commands.c:1650  )   trace: stonith_construct_reply: 	Creating a basic reply
Nov 13 13:47:38 [31460] vm2 stonith-ng: (  commands.c:1666  )   trace: stonith_construct_reply: 	Attaching reply output
Nov 13 13:47:38 [31460] vm2 stonith-ng: (      main.c:241   )   trace: do_local_reply: 	Sending response
Nov 13 13:47:38 [31460] vm2 stonith-ng: (      main.c:244   )   trace: do_local_reply: 	Sending callback to request originator
Nov 13 13:47:38 [31460] vm2 stonith-ng: (      main.c:247   )   trace: do_local_reply: 	No client to send the response to.  F_STONITH_CLIENTID not set.
Nov 13 13:47:38 [31460] vm2 stonith-ng: (      main.c:369   )   trace: do_stonith_notify: 	Notifying clients
Nov 13 13:47:38 [31464] vm2       crmd: (  te_utils.c:167   )  notice: tengine_stonith_notify: 	Peer vm3 was not terminated (reboot) by vm1 for vm1: Generic Pacemaker error (ref=84777767-aa8b-4e04-8dec-b26dae36aaff) by client crmd.15883
Nov 13 13:47:38 [31460] vm2 stonith-ng: (      main.c:318   )   trace: stonith_notify_client: 	Sent st_notify_fence notification to client crmd.31464.d20e14
Nov 13 13:47:38 [31460] vm2 stonith-ng: (      main.c:372   )   trace: do_stonith_notify: 	Notify complete
Nov 13 13:47:38 [31460] vm2 stonith-ng: (  commands.c:2063  )    info: stonith_command: 	Processed st_notify reply from vm1: OK (0)
Nov 13 13:47:40 [31460] vm2 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]   <stonith_command __name__="stonith_command" t="stonith-ng" st_async_id="588ca7d3-cb01-4e1b-9c7d-5fcdd5b66a27" st_op="st_query" st_callid="8" st_callopt="0" st_remote_op="588ca7d3-cb01-4e1b-9c7d-5fcdd5b66a27" st_target="vm3" st_device_action="reboot" st_origin="vm1" st_clientid="e6cedc3f-e233-4a66-9291-ce359bd76aad" st_clientname="crmd.15883" st_timeout="60" src="vm1"/>
Nov 13 13:47:40 [31460] vm2 stonith-ng: (  commands.c:2049  )   debug: stonith_command: 	Processing st_query 0 from vm1 (               0)
Nov 13 13:47:40 [31460] vm2 stonith-ng: (    remote.c:593   )   trace: create_remote_stonith_op: 	Created 588ca7d3-cb01-4e1b-9c7d-5fcdd5b66a27
Nov 13 13:47:40 [31460] vm2 stonith-ng: (    remote.c:623   )   trace: create_remote_stonith_op: 	Recorded new stonith op: 588ca7d3-cb01-4e1b-9c7d-5fcdd5b66a27 - reboot of vm3 for crmd.15883
Nov 13 13:47:40 [31460] vm2 stonith-ng: (    remote.c:474   )   trace: merge_duplicates: 	Must be for different clients: crmd.15883
Nov 13 13:47:40 [31460] vm2 stonith-ng: (  commands.c:1296  )   debug: stonith_query: 	Query   <stonith_command __name__="stonith_command" t="stonith-ng" st_async_id="588ca7d3-cb01-4e1b-9c7d-5fcdd5b66a27" st_op="st_query" st_callid="8" st_callopt="0" st_remote_op="588ca7d3-cb01-4e1b-9c7d-5fcdd5b66a27" st_target="vm3" st_device_action="reboot" st_origin="vm1" st_clientid="e6cedc3f-e233-4a66-9291-ce359bd76aad" st_clientname="crmd.15883" st_timeout="60" src="vm1"/>
Nov 13 13:47:40 [31460] vm2 stonith-ng: (  commands.c:1650  )   trace: stonith_construct_reply: 	Creating a basic reply
Nov 13 13:47:40 [31460] vm2 stonith-ng: (  commands.c:1199  )   debug: get_capable_devices: 	Searching through 1 devices to see what is capable of action (reboot) for target vm3
Nov 13 13:47:40 [31460] vm2 stonith-ng: (  commands.c:1118  )  notice: can_fence_host_with_device: 	F1 can fence vm3: dynamic-list
Nov 13 13:47:40 [31460] vm2 stonith-ng: (  commands.c:1037  )   debug: search_devices_record_result: 	Finished Search. 1 devices can perform action (reboot) on node vm3
Nov 13 13:47:40 [31460] vm2 stonith-ng: (  commands.c:1254  )   debug: stonith_query_capable_device_cb: 	Found 1 matching devices for 'vm3'
Nov 13 13:47:40 [31460] vm2 stonith-ng: (  commands.c:1260  )   trace: stonith_query_capable_device_cb: 	Attaching query list output
Nov 13 13:47:40 [31460] vm2 stonith-ng: (  commands.c:2063  )    info: stonith_command: 	Processed st_query from vm1: OK (0)
Nov 13 13:47:40 [31460] vm2 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]   <stonith_command __name__="stonith_command" t="stonith-ng" st_async_id="588ca7d3-cb01-4e1b-9c7d-5fcdd5b66a27" st_op="st_fence" st_callid="8" st_callopt="0" st_remote_op="588ca7d3-cb01-4e1b-9c7d-5fcdd5b66a27" st_target="vm3" st_device_action="reboot" st_origin="vm1" st_clientid="e6cedc3f-e233-4a66-9291-ce359bd76aad" st_clientname="crmd.15883" st_timeout="60" st_device_id="F1" st_mode="slave" src="vm1"/>
Nov 13 13:47:40 [31460] vm2 stonith-ng: (  commands.c:2049  )   debug: stonith_command: 	Processing st_fence 0 from vm1 (               0)
Nov 13 13:47:40 [31460] vm2 stonith-ng: (  commands.c:165   )   trace: create_async_command: 	Command   <stonith_command __name__="stonith_command" t="stonith-ng" st_async_id="588ca7d3-cb01-4e1b-9c7d-5fcdd5b66a27" st_op="st_fence" st_callid="8" st_callopt="0" st_remote_op="588ca7d3-cb01-4e1b-9c7d-5fcdd5b66a27" st_target="vm3" st_device_action="reboot" st_origin="vm1" st_clientid="e6cedc3f-e233-4a66-9291-ce359bd76aad" st_clientname="crmd.15883" st_timeout="60" st_device_id="F1" st_mode="slave" src="vm1"/>
Nov 13 13:47:40 [31460] vm2 stonith-ng: (  commands.c:285   )   debug: schedule_stonith_command: 	Scheduling reboot on F1 for remote peer vm1 with op id (588ca7d3-cb01-4e1b-9c7d-5fcdd5b66a27) (timeout=60s)
Nov 13 13:47:40 [31460] vm2 stonith-ng: (  commands.c:2063  )    info: stonith_command: 	Processed st_fence from vm1: Operation now in progress (-115)
Nov 13 13:47:40 [31460] vm2 stonith-ng: ( st_client.c:576   )    info: stonith_action_create: 	Initiating action reboot for agent fence_legacy (target=vm3)
Nov 13 13:47:40 [31460] vm2 stonith-ng: ( st_client.c:476   )   debug: make_args: 	Performing reboot action for node 'vm3' as 'port=vm3'
Nov 13 13:47:40 [31460] vm2 stonith-ng: ( st_client.c:739   )   debug: internal_stonith_action_execute: 	forking
Nov 13 13:47:40 [31460] vm2 stonith-ng: ( st_client.c:785   )   debug: internal_stonith_action_execute: 	sending args
Nov 13 13:47:40 [31460] vm2 stonith-ng: (  commands.c:246   )   debug: stonith_device_execute: 	Operation reboot for node vm3 on F1 now running with pid=31696, timeout=60s
Nov 13 13:47:41 vm2 stonith: [31697]: CRIT: external_reset_req: 'libvirt reset' for host vm3 failed with rc 1
Nov 13 13:47:41 [31460] vm2 stonith-ng: ( st_client.c:678   )   debug: stonith_action_async_done: 	Child process 31696 performing action 'reboot' exited with rc 1
Nov 13 13:47:41 [31460] vm2 stonith-ng: ( st_client.c:719   )    info: internal_stonith_action_execute: 	Attempt 2 to execute fence_legacy (reboot). remaining timeout is 59
Nov 13 13:47:41 [31460] vm2 stonith-ng: ( st_client.c:739   )   debug: internal_stonith_action_execute: 	forking
Nov 13 13:47:41 [31460] vm2 stonith-ng: ( st_client.c:785   )   debug: internal_stonith_action_execute: 	sending args
Nov 13 13:47:43 vm2 stonith: [31709]: CRIT: external_reset_req: 'libvirt reset' for host vm3 failed with rc 1
Nov 13 13:47:43 [31460] vm2 stonith-ng: ( st_client.c:678   )   debug: stonith_action_async_done: 	Child process 31708 performing action 'reboot' exited with rc 1
Nov 13 13:47:43 [31460] vm2 stonith-ng: ( st_client.c:641   )    info: update_remaining_timeout: 	Attempted to execute agent fence_legacy (reboot) the maximum number of times (2) allowed
Nov 13 13:47:43 [31460] vm2 stonith-ng: (  commands.c:1463  )   trace: st_child_done: 	Operation reboot on F1 completed with rc=1 (0 remaining)
Nov 13 13:47:43 [31460] vm2 stonith-ng: (  commands.c:1677  )   trace: stonith_construct_async_reply: 	Creating a basic reply
Nov 13 13:47:43 [31460] vm2 stonith-ng: (  commands.c:1321  )   error: log_operation: 	Operation 'reboot' [31708] (call 8 from crmd.15883) for host 'vm3' with device 'F1' returned: -201 (Generic Pacemaker error)
Nov 13 13:47:43 [31460] vm2 stonith-ng: (  commands.c:1333  ) warning: log_operation: 	F1:31708 [ Performing: stonith -t external/libvirt -T reset vm3 ]
Nov 13 13:47:43 [31460] vm2 stonith-ng: (  commands.c:1333  ) warning: log_operation: 	F1:31708 [ failed: vm3 5 ]
Nov 13 13:47:43 [31460] vm2 stonith-ng: (  commands.c:1362  )   trace: stonith_send_async_reply: 	Reply   <st-reply st_origin="vm1" t="stonith-ng" st_op="st_fence" st_device_id="F1" st_remote_op="588ca7d3-cb01-4e1b-9c7d-5fcdd5b66a27" st_clientid="e6cedc3f-e233-4a66-9291-ce359bd76aad" st_clientname="crmd.15883" st_target="vm3" st_device_action="st_fence" st_callid="8" st_callopt="0" st_rc="-201" st_output="Performing: stonith -t external/libvirt -T reset vm3\nfailed: vm3 5\n"/>
Nov 13 13:47:43 [31460] vm2 stonith-ng: (  commands.c:1369  )   trace: stonith_send_async_reply: 	Directed reply to vm1
Nov 13 13:47:43 [31460] vm2 stonith-ng: (  commands.c:213   )   trace: stonith_device_execute: 	Nothing further to do for F1
Nov 13 13:47:43 [31460] vm2 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]   <st-reply t="st_notify" subt="broadcast" st_op="st_notify" count="7" src="vm1">
Nov 13 13:47:43 [31460] vm2 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]     <st_calldata>
Nov 13 13:47:43 [31460] vm2 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]       <st_notify_fence state="4" st_rc="-201" st_target="vm3" st_device_action="reboot" st_delegate="vm2" st_remote_op="588ca7d3-cb01-4e1b-9c7d-5fcdd5b66a27" st_origin="vm1" st_clientid="e6cedc3f-e233-4a66-9291-ce359bd76aad" st_clientname="crmd.15883"/>
Nov 13 13:47:43 [31460] vm2 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]     </st_calldata>
Nov 13 13:47:43 [31460] vm2 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]   </st-reply>
Nov 13 13:47:43 [31460] vm2 stonith-ng: (  commands.c:2049  )   debug: stonith_command: 	Processing st_notify reply 0 from vm1 (               0)
Nov 13 13:47:43 [31460] vm2 stonith-ng: (    remote.c:1281  )   debug: process_remote_stonith_exec: 	Marking call to reboot for vm3 on behalf of crmd.15883@588ca7d3-cb01-4e1b-9c7d-5fcdd5b66a27.vm1: Generic Pacemaker error (-201)
Nov 13 13:47:43 [31460] vm2 stonith-ng: (    remote.c:297   )  notice: remote_op_done: 	Operation reboot of vm3 by vm1 for crmd.15883@vm1.588ca7d3: Generic Pacemaker error
Nov 13 13:47:43 [31460] vm2 stonith-ng: (  commands.c:1650  )   trace: stonith_construct_reply: 	Creating a basic reply
Nov 13 13:47:43 [31460] vm2 stonith-ng: (  commands.c:1666  )   trace: stonith_construct_reply: 	Attaching reply output
Nov 13 13:47:43 [31460] vm2 stonith-ng: (      main.c:241   )   trace: do_local_reply: 	Sending response
Nov 13 13:47:43 [31460] vm2 stonith-ng: (      main.c:244   )   trace: do_local_reply: 	Sending callback to request originator
Nov 13 13:47:43 [31460] vm2 stonith-ng: (      main.c:247   )   trace: do_local_reply: 	No client to sent the response to.  F_STONITH_CLIENTID not set.
Nov 13 13:47:43 [31460] vm2 stonith-ng: (      main.c:369   )   trace: do_stonith_notify: 	Notifying clients
Nov 13 13:47:43 [31464] vm2       crmd: (  te_utils.c:167   )  notice: tengine_stonith_notify: 	Peer vm3 was not terminated (reboot) by vm1 for vm1: Generic Pacemaker error (ref=588ca7d3-cb01-4e1b-9c7d-5fcdd5b66a27) by client crmd.15883
Nov 13 13:47:43 [31460] vm2 stonith-ng: (      main.c:318   )   trace: stonith_notify_client: 	Sent st_notify_fence notification to client crmd.31464.d20e14
Nov 13 13:47:43 [31460] vm2 stonith-ng: (      main.c:372   )   trace: do_stonith_notify: 	Notify complete
Nov 13 13:47:43 [31460] vm2 stonith-ng: (  commands.c:2063  )    info: stonith_command: 	Processed st_notify reply from vm1: OK (0)
Nov 13 13:47:45 [31460] vm2 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]   <stonith_command __name__="stonith_command" t="stonith-ng" st_async_id="a3379e0c-d206-4ced-9e7e-1c915f08a0ae" st_op="st_query" st_callid="9" st_callopt="0" st_remote_op="a3379e0c-d206-4ced-9e7e-1c915f08a0ae" st_target="vm3" st_device_action="reboot" st_origin="vm1" st_clientid="e6cedc3f-e233-4a66-9291-ce359bd76aad" st_clientname="crmd.15883" st_timeout="60" src="vm1"/>
Nov 13 13:47:45 [31460] vm2 stonith-ng: (  commands.c:2049  )   debug: stonith_command: 	Processing st_query 0 from vm1 (               0)
Nov 13 13:47:45 [31460] vm2 stonith-ng: (    remote.c:593   )   trace: create_remote_stonith_op: 	Created a3379e0c-d206-4ced-9e7e-1c915f08a0ae
Nov 13 13:47:45 [31460] vm2 stonith-ng: (    remote.c:623   )   trace: create_remote_stonith_op: 	Recorded new stonith op: a3379e0c-d206-4ced-9e7e-1c915f08a0ae - reboot of vm3 for crmd.15883
Nov 13 13:47:45 [31460] vm2 stonith-ng: (    remote.c:474   )   trace: merge_duplicates: 	Must be for different clients: crmd.15883
Nov 13 13:47:45 [31460] vm2 stonith-ng: (  commands.c:1296  )   debug: stonith_query: 	Query   <stonith_command __name__="stonith_command" t="stonith-ng" st_async_id="a3379e0c-d206-4ced-9e7e-1c915f08a0ae" st_op="st_query" st_callid="9" st_callopt="0" st_remote_op="a3379e0c-d206-4ced-9e7e-1c915f08a0ae" st_target="vm3" st_device_action="reboot" st_origin="vm1" st_clientid="e6cedc3f-e233-4a66-9291-ce359bd76aad" st_clientname="crmd.15883" st_timeout="60" src="vm1"/>
Nov 13 13:47:45 [31460] vm2 stonith-ng: (  commands.c:1650  )   trace: stonith_construct_reply: 	Creating a basic reply
Nov 13 13:47:45 [31460] vm2 stonith-ng: (  commands.c:1199  )   debug: get_capable_devices: 	Searching through 1 devices to see what is capable of action (reboot) for target vm3
Nov 13 13:47:45 [31460] vm2 stonith-ng: (  commands.c:1118  )  notice: can_fence_host_with_device: 	F1 can fence vm3: dynamic-list
Nov 13 13:47:45 [31460] vm2 stonith-ng: (  commands.c:1037  )   debug: search_devices_record_result: 	Finished Search. 1 devices can perform action (reboot) on node vm3
Nov 13 13:47:45 [31460] vm2 stonith-ng: (  commands.c:1254  )   debug: stonith_query_capable_device_cb: 	Found 1 matching devices for 'vm3'
Nov 13 13:47:45 [31460] vm2 stonith-ng: (  commands.c:1260  )   trace: stonith_query_capable_device_cb: 	Attaching query list output
Nov 13 13:47:45 [31460] vm2 stonith-ng: (  commands.c:2063  )    info: stonith_command: 	Processed st_query from vm1: OK (0)
Nov 13 13:47:45 [31460] vm2 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]   <stonith_command __name__="stonith_command" t="stonith-ng" st_async_id="a3379e0c-d206-4ced-9e7e-1c915f08a0ae" st_op="st_fence" st_callid="9" st_callopt="0" st_remote_op="a3379e0c-d206-4ced-9e7e-1c915f08a0ae" st_target="vm3" st_device_action="reboot" st_origin="vm1" st_clientid="e6cedc3f-e233-4a66-9291-ce359bd76aad" st_clientname="crmd.15883" st_timeout="60" st_device_id="F1" st_mode="slave" src="vm1"/>
Nov 13 13:47:45 [31460] vm2 stonith-ng: (  commands.c:2049  )   debug: stonith_command: 	Processing st_fence 0 from vm1 (               0)
Nov 13 13:47:45 [31460] vm2 stonith-ng: (  commands.c:165   )   trace: create_async_command: 	Command   <stonith_command __name__="stonith_command" t="stonith-ng" st_async_id="a3379e0c-d206-4ced-9e7e-1c915f08a0ae" st_op="st_fence" st_callid="9" st_callopt="0" st_remote_op="a3379e0c-d206-4ced-9e7e-1c915f08a0ae" st_target="vm3" st_device_action="reboot" st_origin="vm1" st_clientid="e6cedc3f-e233-4a66-9291-ce359bd76aad" st_clientname="crmd.15883" st_timeout="60" st_device_id="F1" st_mode="slave" src="vm1"/>
Nov 13 13:47:45 [31460] vm2 stonith-ng: (  commands.c:285   )   debug: schedule_stonith_command: 	Scheduling reboot on F1 for remote peer vm1 with op id (a3379e0c-d206-4ced-9e7e-1c915f08a0ae) (timeout=60s)
Nov 13 13:47:45 [31460] vm2 stonith-ng: (  commands.c:2063  )    info: stonith_command: 	Processed st_fence from vm1: Operation now in progress (-115)
Nov 13 13:47:45 [31460] vm2 stonith-ng: ( st_client.c:576   )    info: stonith_action_create: 	Initiating action reboot for agent fence_legacy (target=vm3)
Nov 13 13:47:45 [31460] vm2 stonith-ng: ( st_client.c:476   )   debug: make_args: 	Performing reboot action for node 'vm3' as 'port=vm3'
Nov 13 13:47:45 [31460] vm2 stonith-ng: ( st_client.c:739   )   debug: internal_stonith_action_execute: 	forking
Nov 13 13:47:45 [31460] vm2 stonith-ng: ( st_client.c:785   )   debug: internal_stonith_action_execute: 	sending args
Nov 13 13:47:45 [31460] vm2 stonith-ng: (  commands.c:246   )   debug: stonith_device_execute: 	Operation reboot for node vm3 on F1 now running with pid=31720, timeout=60s
Nov 13 13:47:46 vm2 stonith: [31721]: CRIT: external_reset_req: 'libvirt reset' for host vm3 failed with rc 1
Nov 13 13:47:46 [31460] vm2 stonith-ng: ( st_client.c:678   )   debug: stonith_action_async_done: 	Child process 31720 performing action 'reboot' exited with rc 1
Nov 13 13:47:46 [31460] vm2 stonith-ng: ( st_client.c:719   )    info: internal_stonith_action_execute: 	Attempt 2 to execute fence_legacy (reboot). remaining timeout is 59
Nov 13 13:47:46 [31460] vm2 stonith-ng: ( st_client.c:739   )   debug: internal_stonith_action_execute: 	forking
Nov 13 13:47:46 [31460] vm2 stonith-ng: ( st_client.c:785   )   debug: internal_stonith_action_execute: 	sending args
Nov 13 13:47:48 vm2 stonith: [31733]: CRIT: external_reset_req: 'libvirt reset' for host vm3 failed with rc 1
Nov 13 13:47:48 [31460] vm2 stonith-ng: ( st_client.c:678   )   debug: stonith_action_async_done: 	Child process 31732 performing action 'reboot' exited with rc 1
Nov 13 13:47:48 [31460] vm2 stonith-ng: ( st_client.c:641   )    info: update_remaining_timeout: 	Attempted to execute agent fence_legacy (reboot) the maximum number of times (2) allowed
Nov 13 13:47:48 [31460] vm2 stonith-ng: (  commands.c:1463  )   trace: st_child_done: 	Operation reboot on F1 completed with rc=1 (0 remaining)
Nov 13 13:47:48 [31460] vm2 stonith-ng: (  commands.c:1677  )   trace: stonith_construct_async_reply: 	Creating a basic reply
Nov 13 13:47:48 [31460] vm2 stonith-ng: (  commands.c:1321  )   error: log_operation: 	Operation 'reboot' [31732] (call 9 from crmd.15883) for host 'vm3' with device 'F1' returned: -201 (Generic Pacemaker error)
Nov 13 13:47:48 [31460] vm2 stonith-ng: (  commands.c:1333  ) warning: log_operation: 	F1:31732 [ Performing: stonith -t external/libvirt -T reset vm3 ]
Nov 13 13:47:48 [31460] vm2 stonith-ng: (  commands.c:1333  ) warning: log_operation: 	F1:31732 [ failed: vm3 5 ]
Nov 13 13:47:48 [31460] vm2 stonith-ng: (  commands.c:1362  )   trace: stonith_send_async_reply: 	Reply   <st-reply st_origin="vm1" t="stonith-ng" st_op="st_fence" st_device_id="F1" st_remote_op="a3379e0c-d206-4ced-9e7e-1c915f08a0ae" st_clientid="e6cedc3f-e233-4a66-9291-ce359bd76aad" st_clientname="crmd.15883" st_target="vm3" st_device_action="st_fence" st_callid="9" st_callopt="0" st_rc="-201" st_output="Performing: stonith -t external/libvirt -T reset vm3\nfailed: vm3 5\n"/>
Nov 13 13:47:48 [31460] vm2 stonith-ng: (  commands.c:1369  )   trace: stonith_send_async_reply: 	Directed reply to vm1
Nov 13 13:47:48 [31460] vm2 stonith-ng: (  commands.c:213   )   trace: stonith_device_execute: 	Nothing further to do for F1
Nov 13 13:47:48 [31460] vm2 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]   <st-reply t="st_notify" subt="broadcast" st_op="st_notify" count="8" src="vm1">
Nov 13 13:47:48 [31460] vm2 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]     <st_calldata>
Nov 13 13:47:48 [31460] vm2 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]       <st_notify_fence state="4" st_rc="-201" st_target="vm3" st_device_action="reboot" st_delegate="vm2" st_remote_op="a3379e0c-d206-4ced-9e7e-1c915f08a0ae" st_origin="vm1" st_clientid="e6cedc3f-e233-4a66-9291-ce359bd76aad" st_clientname="crmd.15883"/>
Nov 13 13:47:48 [31460] vm2 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]     </st_calldata>
Nov 13 13:47:48 [31460] vm2 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]   </st-reply>
Nov 13 13:47:48 [31460] vm2 stonith-ng: (  commands.c:2049  )   debug: stonith_command: 	Processing st_notify reply 0 from vm1 (               0)
Nov 13 13:47:48 [31460] vm2 stonith-ng: (    remote.c:1281  )   debug: process_remote_stonith_exec: 	Marking call to reboot for vm3 on behalf of crmd.15883@a3379e0c-d206-4ced-9e7e-1c915f08a0ae.vm1: Generic Pacemaker error (-201)
Nov 13 13:47:48 [31460] vm2 stonith-ng: (    remote.c:297   )  notice: remote_op_done: 	Operation reboot of vm3 by vm1 for crmd.15883@vm1.a3379e0c: Generic Pacemaker error
Nov 13 13:47:48 [31460] vm2 stonith-ng: (  commands.c:1650  )   trace: stonith_construct_reply: 	Creating a basic reply
Nov 13 13:47:48 [31460] vm2 stonith-ng: (  commands.c:1666  )   trace: stonith_construct_reply: 	Attaching reply output
Nov 13 13:47:48 [31460] vm2 stonith-ng: (      main.c:241   )   trace: do_local_reply: 	Sending response
Nov 13 13:47:48 [31460] vm2 stonith-ng: (      main.c:244   )   trace: do_local_reply: 	Sending callback to request originator
Nov 13 13:47:48 [31460] vm2 stonith-ng: (      main.c:247   )   trace: do_local_reply: 	No client to sent the response to.  F_STONITH_CLIENTID not set.
Nov 13 13:47:48 [31460] vm2 stonith-ng: (      main.c:369   )   trace: do_stonith_notify: 	Notifying clients
Nov 13 13:47:48 [31464] vm2       crmd: (  te_utils.c:167   )  notice: tengine_stonith_notify: 	Peer vm3 was not terminated (reboot) by vm1 for vm1: Generic Pacemaker error (ref=a3379e0c-d206-4ced-9e7e-1c915f08a0ae) by client crmd.15883
Nov 13 13:47:48 [31460] vm2 stonith-ng: (      main.c:318   )   trace: stonith_notify_client: 	Sent st_notify_fence notification to client crmd.31464.d20e14
Nov 13 13:47:48 [31460] vm2 stonith-ng: (      main.c:372   )   trace: do_stonith_notify: 	Notify complete
Nov 13 13:47:48 [31460] vm2 stonith-ng: (  commands.c:2063  )    info: stonith_command: 	Processed st_notify reply from vm1: OK (0)
Nov 13 13:47:50 [31460] vm2 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]   <stonith_command __name__="stonith_command" t="stonith-ng" st_async_id="9ab4c26b-da3e-40cd-ba98-c89017db4953" st_op="st_query" st_callid="10" st_callopt="0" st_remote_op="9ab4c26b-da3e-40cd-ba98-c89017db4953" st_target="vm3" st_device_action="reboot" st_origin="vm1" st_clientid="e6cedc3f-e233-4a66-9291-ce359bd76aad" st_clientname="crmd.15883" st_timeout="60" src="vm1"/>
Nov 13 13:47:50 [31460] vm2 stonith-ng: (  commands.c:2049  )   debug: stonith_command: 	Processing st_query 0 from vm1 (               0)
Nov 13 13:47:50 [31460] vm2 stonith-ng: (    remote.c:593   )   trace: create_remote_stonith_op: 	Created 9ab4c26b-da3e-40cd-ba98-c89017db4953
Nov 13 13:47:50 [31460] vm2 stonith-ng: (    remote.c:623   )   trace: create_remote_stonith_op: 	Recorded new stonith op: 9ab4c26b-da3e-40cd-ba98-c89017db4953 - reboot of vm3 for crmd.15883
Nov 13 13:47:50 [31460] vm2 stonith-ng: (    remote.c:474   )   trace: merge_duplicates: 	Must be for different clients: crmd.15883
Nov 13 13:47:50 [31460] vm2 stonith-ng: (  commands.c:1296  )   debug: stonith_query: 	Query   <stonith_command __name__="stonith_command" t="stonith-ng" st_async_id="9ab4c26b-da3e-40cd-ba98-c89017db4953" st_op="st_query" st_callid="10" st_callopt="0" st_remote_op="9ab4c26b-da3e-40cd-ba98-c89017db4953" st_target="vm3" st_device_action="reboot" st_origin="vm1" st_clientid="e6cedc3f-e233-4a66-9291-ce359bd76aad" st_clientname="crmd.15883" st_timeout="60" src="vm1"/>
Nov 13 13:47:50 [31460] vm2 stonith-ng: (  commands.c:1650  )   trace: stonith_construct_reply: 	Creating a basic reply
Nov 13 13:47:50 [31460] vm2 stonith-ng: (  commands.c:1199  )   debug: get_capable_devices: 	Searching through 1 devices to see what is capable of action (reboot) for target vm3
Nov 13 13:47:50 [31460] vm2 stonith-ng: (  commands.c:1118  )  notice: can_fence_host_with_device: 	F1 can fence vm3: dynamic-list
Nov 13 13:47:50 [31460] vm2 stonith-ng: (  commands.c:1037  )   debug: search_devices_record_result: 	Finished Search. 1 devices can perform action (reboot) on node vm3
Nov 13 13:47:50 [31460] vm2 stonith-ng: (  commands.c:1254  )   debug: stonith_query_capable_device_cb: 	Found 1 matching devices for 'vm3'
Nov 13 13:47:50 [31460] vm2 stonith-ng: (  commands.c:1260  )   trace: stonith_query_capable_device_cb: 	Attaching query list output
Nov 13 13:47:50 [31460] vm2 stonith-ng: (  commands.c:2063  )    info: stonith_command: 	Processed st_query from vm1: OK (0)
Nov 13 13:47:50 [31460] vm2 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]   <stonith_command __name__="stonith_command" t="stonith-ng" st_async_id="9ab4c26b-da3e-40cd-ba98-c89017db4953" st_op="st_fence" st_callid="10" st_callopt="0" st_remote_op="9ab4c26b-da3e-40cd-ba98-c89017db4953" st_target="vm3" st_device_action="reboot" st_origin="vm1" st_clientid="e6cedc3f-e233-4a66-9291-ce359bd76aad" st_clientname="crmd.15883" st_timeout="60" st_device_id="F1" st_mode="slave" src="vm1"/>
Nov 13 13:47:50 [31460] vm2 stonith-ng: (  commands.c:2049  )   debug: stonith_command: 	Processing st_fence 0 from vm1 (               0)
Nov 13 13:47:50 [31460] vm2 stonith-ng: (  commands.c:165   )   trace: create_async_command: 	Command   <stonith_command __name__="stonith_command" t="stonith-ng" st_async_id="9ab4c26b-da3e-40cd-ba98-c89017db4953" st_op="st_fence" st_callid="10" st_callopt="0" st_remote_op="9ab4c26b-da3e-40cd-ba98-c89017db4953" st_target="vm3" st_device_action="reboot" st_origin="vm1" st_clientid="e6cedc3f-e233-4a66-9291-ce359bd76aad" st_clientname="crmd.15883" st_timeout="60" st_device_id="F1" st_mode="slave" src="vm1"/>
Nov 13 13:47:50 [31460] vm2 stonith-ng: (  commands.c:285   )   debug: schedule_stonith_command: 	Scheduling reboot on F1 for remote peer vm1 with op id (9ab4c26b-da3e-40cd-ba98-c89017db4953) (timeout=60s)
Nov 13 13:47:50 [31460] vm2 stonith-ng: (  commands.c:2063  )    info: stonith_command: 	Processed st_fence from vm1: Operation now in progress (-115)
Nov 13 13:47:50 [31460] vm2 stonith-ng: ( st_client.c:576   )    info: stonith_action_create: 	Initiating action reboot for agent fence_legacy (target=vm3)
Nov 13 13:47:50 [31460] vm2 stonith-ng: ( st_client.c:476   )   debug: make_args: 	Performing reboot action for node 'vm3' as 'port=vm3'
Nov 13 13:47:50 [31460] vm2 stonith-ng: ( st_client.c:739   )   debug: internal_stonith_action_execute: 	forking
Nov 13 13:47:50 [31460] vm2 stonith-ng: ( st_client.c:785   )   debug: internal_stonith_action_execute: 	sending args
Nov 13 13:47:50 [31460] vm2 stonith-ng: (  commands.c:246   )   debug: stonith_device_execute: 	Operation reboot for node vm3 on F1 now running with pid=31744, timeout=60s
Nov 13 13:47:51 vm2 stonith: [31745]: CRIT: external_reset_req: 'libvirt reset' for host vm3 failed with rc 1
Nov 13 13:47:51 [31460] vm2 stonith-ng: ( st_client.c:678   )   debug: stonith_action_async_done: 	Child process 31744 performing action 'reboot' exited with rc 1
Nov 13 13:47:51 [31460] vm2 stonith-ng: ( st_client.c:719   )    info: internal_stonith_action_execute: 	Attempt 2 to execute fence_legacy (reboot). remaining timeout is 59
Nov 13 13:47:51 [31460] vm2 stonith-ng: ( st_client.c:739   )   debug: internal_stonith_action_execute: 	forking
Nov 13 13:47:51 [31460] vm2 stonith-ng: ( st_client.c:785   )   debug: internal_stonith_action_execute: 	sending args
Nov 13 13:47:51 [31464] vm2       crmd: (  throttle.c:256   )   debug: throttle_cib_load: 	Current CIB load from 0 ticks in 30s is 0.000000 (@100 tps)
Nov 13 13:47:51 [31464] vm2       crmd: (  throttle.c:298   )   debug: throttle_load_avg: 	Current load is 0.000000 (full: 0.00 0.03 0.01 1/109 31756)
Nov 13 13:47:51 [31464] vm2       crmd: (  throttle.c:378   )   debug: throttle_io_load: 	Current IO load is 0.000000
Nov 13 13:47:54 vm2 stonith: [31757]: CRIT: external_reset_req: 'libvirt reset' for host vm3 failed with rc 1
Nov 13 13:47:54 [31460] vm2 stonith-ng: ( st_client.c:678   )   debug: stonith_action_async_done: 	Child process 31756 performing action 'reboot' exited with rc 1
Nov 13 13:47:54 [31460] vm2 stonith-ng: ( st_client.c:641   )    info: update_remaining_timeout: 	Attempted to execute agent fence_legacy (reboot) the maximum number of times (2) allowed
Nov 13 13:47:54 [31460] vm2 stonith-ng: (  commands.c:1463  )   trace: st_child_done: 	Operation reboot on F1 completed with rc=1 (0 remaining)
Nov 13 13:47:54 [31460] vm2 stonith-ng: (  commands.c:1677  )   trace: stonith_construct_async_reply: 	Creating a basic reply
Nov 13 13:47:54 [31460] vm2 stonith-ng: (  commands.c:1321  )   error: log_operation: 	Operation 'reboot' [31756] (call 10 from crmd.15883) for host 'vm3' with device 'F1' returned: -201 (Generic Pacemaker error)
Nov 13 13:47:54 [31460] vm2 stonith-ng: (  commands.c:1333  ) warning: log_operation: 	F1:31756 [ Performing: stonith -t external/libvirt -T reset vm3 ]
Nov 13 13:47:54 [31460] vm2 stonith-ng: (  commands.c:1333  ) warning: log_operation: 	F1:31756 [ failed: vm3 5 ]
Nov 13 13:47:54 [31460] vm2 stonith-ng: (  commands.c:1362  )   trace: stonith_send_async_reply: 	Reply   <st-reply st_origin="vm1" t="stonith-ng" st_op="st_fence" st_device_id="F1" st_remote_op="9ab4c26b-da3e-40cd-ba98-c89017db4953" st_clientid="e6cedc3f-e233-4a66-9291-ce359bd76aad" st_clientname="crmd.15883" st_target="vm3" st_device_action="st_fence" st_callid="10" st_callopt="0" st_rc="-201" st_output="Performing: stonith -t external/libvirt -T reset vm3\nfailed: vm3 5\n"/>
Nov 13 13:47:54 [31460] vm2 stonith-ng: (  commands.c:1369  )   trace: stonith_send_async_reply: 	Directed reply to vm1
Nov 13 13:47:54 [31460] vm2 stonith-ng: (  commands.c:213   )   trace: stonith_device_execute: 	Nothing further to do for F1
Nov 13 13:47:54 [31460] vm2 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]   <st-reply t="st_notify" subt="broadcast" st_op="st_notify" count="9" src="vm1">
Nov 13 13:47:54 [31460] vm2 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]     <st_calldata>
Nov 13 13:47:54 [31460] vm2 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]       <st_notify_fence state="4" st_rc="-201" st_target="vm3" st_device_action="reboot" st_delegate="vm2" st_remote_op="9ab4c26b-da3e-40cd-ba98-c89017db4953" st_origin="vm1" st_clientid="e6cedc3f-e233-4a66-9291-ce359bd76aad" st_clientname="crmd.15883"/>
Nov 13 13:47:54 [31460] vm2 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]     </st_calldata>
Nov 13 13:47:54 [31460] vm2 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]   </st-reply>
Nov 13 13:47:54 [31460] vm2 stonith-ng: (  commands.c:2049  )   debug: stonith_command: 	Processing st_notify reply 0 from vm1 (               0)
Nov 13 13:47:54 [31460] vm2 stonith-ng: (    remote.c:1281  )   debug: process_remote_stonith_exec: 	Marking call to reboot for vm3 on behalf of crmd.15883@9ab4c26b-da3e-40cd-ba98-c89017db4953.vm1: Generic Pacemaker error (-201)
Nov 13 13:47:54 [31460] vm2 stonith-ng: (    remote.c:297   )  notice: remote_op_done: 	Operation reboot of vm3 by vm1 for crmd.15883@vm1.9ab4c26b: Generic Pacemaker error
Nov 13 13:47:54 [31460] vm2 stonith-ng: (  commands.c:1650  )   trace: stonith_construct_reply: 	Creating a basic reply
Nov 13 13:47:54 [31460] vm2 stonith-ng: (  commands.c:1666  )   trace: stonith_construct_reply: 	Attaching reply output
Nov 13 13:47:54 [31460] vm2 stonith-ng: (      main.c:241   )   trace: do_local_reply: 	Sending response
Nov 13 13:47:54 [31460] vm2 stonith-ng: (      main.c:244   )   trace: do_local_reply: 	Sending callback to request originator
Nov 13 13:47:54 [31460] vm2 stonith-ng: (      main.c:247   )   trace: do_local_reply: 	No client to sent the response to.  F_STONITH_CLIENTID not set.
Nov 13 13:47:54 [31460] vm2 stonith-ng: (      main.c:369   )   trace: do_stonith_notify: 	Notifying clients
Nov 13 13:47:54 [31464] vm2       crmd: (  te_utils.c:167   )  notice: tengine_stonith_notify: 	Peer vm3 was not terminated (reboot) by vm1 for vm1: Generic Pacemaker error (ref=9ab4c26b-da3e-40cd-ba98-c89017db4953) by client crmd.15883
Nov 13 13:47:54 [31460] vm2 stonith-ng: (      main.c:318   )   trace: stonith_notify_client: 	Sent st_notify_fence notification to client crmd.31464.d20e14
Nov 13 13:47:54 [31460] vm2 stonith-ng: (      main.c:372   )   trace: do_stonith_notify: 	Notify complete
Nov 13 13:47:54 [31460] vm2 stonith-ng: (  commands.c:2063  )    info: stonith_command: 	Processed st_notify reply from vm1: OK (0)
Nov 13 13:47:56 [31460] vm2 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]   <stonith_command __name__="stonith_command" t="stonith-ng" st_async_id="1ba836f2-328d-45c7-adbb-1db9b0a1ca4c" st_op="st_query" st_callid="11" st_callopt="0" st_remote_op="1ba836f2-328d-45c7-adbb-1db9b0a1ca4c" st_target="vm3" st_device_action="reboot" st_origin="vm1" st_clientid="e6cedc3f-e233-4a66-9291-ce359bd76aad" st_clientname="crmd.15883" st_timeout="60" src="vm1"/>
Nov 13 13:47:56 [31460] vm2 stonith-ng: (  commands.c:2049  )   debug: stonith_command: 	Processing st_query 0 from vm1 (               0)
Nov 13 13:47:56 [31460] vm2 stonith-ng: (    remote.c:593   )   trace: create_remote_stonith_op: 	Created 1ba836f2-328d-45c7-adbb-1db9b0a1ca4c
Nov 13 13:47:56 [31460] vm2 stonith-ng: (    remote.c:623   )   trace: create_remote_stonith_op: 	Recorded new stonith op: 1ba836f2-328d-45c7-adbb-1db9b0a1ca4c - reboot of vm3 for crmd.15883
Nov 13 13:47:56 [31460] vm2 stonith-ng: (    remote.c:474   )   trace: merge_duplicates: 	Must be for different clients: crmd.15883
Nov 13 13:47:56 [31460] vm2 stonith-ng: (  commands.c:1296  )   debug: stonith_query: 	Query   <stonith_command __name__="stonith_command" t="stonith-ng" st_async_id="1ba836f2-328d-45c7-adbb-1db9b0a1ca4c" st_op="st_query" st_callid="11" st_callopt="0" st_remote_op="1ba836f2-328d-45c7-adbb-1db9b0a1ca4c" st_target="vm3" st_device_action="reboot" st_origin="vm1" st_clientid="e6cedc3f-e233-4a66-9291-ce359bd76aad" st_clientname="crmd.15883" st_timeout="60" src="vm1"/>
Nov 13 13:47:56 [31460] vm2 stonith-ng: (  commands.c:1650  )   trace: stonith_construct_reply: 	Creating a basic reply
Nov 13 13:47:56 [31460] vm2 stonith-ng: (  commands.c:1199  )   debug: get_capable_devices: 	Searching through 1 devices to see what is capable of action (reboot) for target vm3
Nov 13 13:47:56 [31460] vm2 stonith-ng: (  commands.c:1118  )  notice: can_fence_host_with_device: 	F1 can fence vm3: dynamic-list
Nov 13 13:47:56 [31460] vm2 stonith-ng: (  commands.c:1037  )   debug: search_devices_record_result: 	Finished Search. 1 devices can perform action (reboot) on node vm3
Nov 13 13:47:56 [31460] vm2 stonith-ng: (  commands.c:1254  )   debug: stonith_query_capable_device_cb: 	Found 1 matching devices for 'vm3'
Nov 13 13:47:56 [31460] vm2 stonith-ng: (  commands.c:1260  )   trace: stonith_query_capable_device_cb: 	Attaching query list output
Nov 13 13:47:56 [31460] vm2 stonith-ng: (  commands.c:2063  )    info: stonith_command: 	Processed st_query from vm1: OK (0)
Nov 13 13:47:56 [31460] vm2 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]   <stonith_command __name__="stonith_command" t="stonith-ng" st_async_id="1ba836f2-328d-45c7-adbb-1db9b0a1ca4c" st_op="st_fence" st_callid="11" st_callopt="0" st_remote_op="1ba836f2-328d-45c7-adbb-1db9b0a1ca4c" st_target="vm3" st_device_action="reboot" st_origin="vm1" st_clientid="e6cedc3f-e233-4a66-9291-ce359bd76aad" st_clientname="crmd.15883" st_timeout="60" st_device_id="F1" st_mode="slave" src="vm1"/>
Nov 13 13:47:56 [31460] vm2 stonith-ng: (  commands.c:2049  )   debug: stonith_command: 	Processing st_fence 0 from vm1 (               0)
Nov 13 13:47:56 [31460] vm2 stonith-ng: (  commands.c:165   )   trace: create_async_command: 	Command   <stonith_command __name__="stonith_command" t="stonith-ng" st_async_id="1ba836f2-328d-45c7-adbb-1db9b0a1ca4c" st_op="st_fence" st_callid="11" st_callopt="0" st_remote_op="1ba836f2-328d-45c7-adbb-1db9b0a1ca4c" st_target="vm3" st_device_action="reboot" st_origin="vm1" st_clientid="e6cedc3f-e233-4a66-9291-ce359bd76aad" st_clientname="crmd.15883" st_timeout="60" st_device_id="F1" st_mode="slave" src="vm1"/>
Nov 13 13:47:56 [31460] vm2 stonith-ng: (  commands.c:285   )   debug: schedule_stonith_command: 	Scheduling reboot on F1 for remote peer vm1 with op id (1ba836f2-328d-45c7-adbb-1db9b0a1ca4c) (timeout=60s)
Nov 13 13:47:56 [31460] vm2 stonith-ng: (  commands.c:2063  )    info: stonith_command: 	Processed st_fence from vm1: Operation now in progress (-115)
Nov 13 13:47:56 [31460] vm2 stonith-ng: ( st_client.c:576   )    info: stonith_action_create: 	Initiating action reboot for agent fence_legacy (target=vm3)
Nov 13 13:47:56 [31460] vm2 stonith-ng: ( st_client.c:476   )   debug: make_args: 	Performing reboot action for node 'vm3' as 'port=vm3'
Nov 13 13:47:56 [31460] vm2 stonith-ng: ( st_client.c:739   )   debug: internal_stonith_action_execute: 	forking
Nov 13 13:47:56 [31460] vm2 stonith-ng: ( st_client.c:785   )   debug: internal_stonith_action_execute: 	sending args
Nov 13 13:47:56 [31460] vm2 stonith-ng: (  commands.c:246   )   debug: stonith_device_execute: 	Operation reboot for node vm3 on F1 now running with pid=31768, timeout=60s
Nov 13 13:47:57 vm2 stonith: [31769]: CRIT: external_reset_req: 'libvirt reset' for host vm3 failed with rc 1
Nov 13 13:47:57 [31460] vm2 stonith-ng: ( st_client.c:678   )   debug: stonith_action_async_done: 	Child process 31768 performing action 'reboot' exited with rc 1
Nov 13 13:47:57 [31460] vm2 stonith-ng: ( st_client.c:719   )    info: internal_stonith_action_execute: 	Attempt 2 to execute fence_legacy (reboot). remaining timeout is 59
Nov 13 13:47:57 [31460] vm2 stonith-ng: ( st_client.c:739   )   debug: internal_stonith_action_execute: 	forking
Nov 13 13:47:57 [31460] vm2 stonith-ng: ( st_client.c:785   )   debug: internal_stonith_action_execute: 	sending args
Nov 13 13:47:59 vm2 stonith: [31781]: CRIT: external_reset_req: 'libvirt reset' for host vm3 failed with rc 1
Nov 13 13:47:59 [31460] vm2 stonith-ng: ( st_client.c:678   )   debug: stonith_action_async_done: 	Child process 31780 performing action 'reboot' exited with rc 1
Nov 13 13:47:59 [31460] vm2 stonith-ng: ( st_client.c:641   )    info: update_remaining_timeout: 	Attempted to execute agent fence_legacy (reboot) the maximum number of times (2) allowed
Nov 13 13:47:59 [31460] vm2 stonith-ng: (  commands.c:1463  )   trace: st_child_done: 	Operation reboot on F1 completed with rc=1 (0 remaining)
Nov 13 13:47:59 [31460] vm2 stonith-ng: (  commands.c:1677  )   trace: stonith_construct_async_reply: 	Creating a basic reply
Nov 13 13:47:59 [31460] vm2 stonith-ng: (  commands.c:1321  )   error: log_operation: 	Operation 'reboot' [31780] (call 11 from crmd.15883) for host 'vm3' with device 'F1' returned: -201 (Generic Pacemaker error)
Nov 13 13:47:59 [31460] vm2 stonith-ng: (  commands.c:1333  ) warning: log_operation: 	F1:31780 [ Performing: stonith -t external/libvirt -T reset vm3 ]
Nov 13 13:47:59 [31460] vm2 stonith-ng: (  commands.c:1333  ) warning: log_operation: 	F1:31780 [ failed: vm3 5 ]
Nov 13 13:47:59 [31460] vm2 stonith-ng: (  commands.c:1362  )   trace: stonith_send_async_reply: 	Reply   <st-reply st_origin="vm1" t="stonith-ng" st_op="st_fence" st_device_id="F1" st_remote_op="1ba836f2-328d-45c7-adbb-1db9b0a1ca4c" st_clientid="e6cedc3f-e233-4a66-9291-ce359bd76aad" st_clientname="crmd.15883" st_target="vm3" st_device_action="st_fence" st_callid="11" st_callopt="0" st_rc="-201" st_output="Performing: stonith -t external/libvirt -T reset vm3\nfailed: vm3 5\n"/>
Nov 13 13:47:59 [31460] vm2 stonith-ng: (  commands.c:1369  )   trace: stonith_send_async_reply: 	Directed reply to vm1
Nov 13 13:47:59 [31460] vm2 stonith-ng: (  commands.c:213   )   trace: stonith_device_execute: 	Nothing further to do for F1
Nov 13 13:47:59 [31460] vm2 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]   <st-reply t="st_notify" subt="broadcast" st_op="st_notify" count="10" src="vm1">
Nov 13 13:47:59 [31460] vm2 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]     <st_calldata>
Nov 13 13:47:59 [31460] vm2 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]       <st_notify_fence state="4" st_rc="-201" st_target="vm3" st_device_action="reboot" st_delegate="vm2" st_remote_op="1ba836f2-328d-45c7-adbb-1db9b0a1ca4c" st_origin="vm1" st_clientid="e6cedc3f-e233-4a66-9291-ce359bd76aad" st_clientname="crmd.15883"/>
Nov 13 13:47:59 [31460] vm2 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]     </st_calldata>
Nov 13 13:47:59 [31460] vm2 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]   </st-reply>
Nov 13 13:47:59 [31460] vm2 stonith-ng: (  commands.c:2049  )   debug: stonith_command: 	Processing st_notify reply 0 from vm1 (               0)
Nov 13 13:47:59 [31460] vm2 stonith-ng: (    remote.c:1281  )   debug: process_remote_stonith_exec: 	Marking call to reboot for vm3 on behalf of crmd.15883@1ba836f2-328d-45c7-adbb-1db9b0a1ca4c.vm1: Generic Pacemaker error (-201)
Nov 13 13:47:59 [31460] vm2 stonith-ng: (    remote.c:297   )  notice: remote_op_done: 	Operation reboot of vm3 by vm1 for crmd.15883@vm1.1ba836f2: Generic Pacemaker error
Nov 13 13:47:59 [31460] vm2 stonith-ng: (  commands.c:1650  )   trace: stonith_construct_reply: 	Creating a basic reply
Nov 13 13:47:59 [31460] vm2 stonith-ng: (  commands.c:1666  )   trace: stonith_construct_reply: 	Attaching reply output
Nov 13 13:47:59 [31460] vm2 stonith-ng: (      main.c:241   )   trace: do_local_reply: 	Sending response
Nov 13 13:47:59 [31460] vm2 stonith-ng: (      main.c:244   )   trace: do_local_reply: 	Sending callback to request originator
Nov 13 13:47:59 [31460] vm2 stonith-ng: (      main.c:247   )   trace: do_local_reply: 	No client to sent the response to.  F_STONITH_CLIENTID not set.
Nov 13 13:47:59 [31460] vm2 stonith-ng: (      main.c:369   )   trace: do_stonith_notify: 	Notifying clients
Nov 13 13:47:59 [31464] vm2       crmd: (  te_utils.c:167   )  notice: tengine_stonith_notify: 	Peer vm3 was not terminated (reboot) by vm1 for vm1: Generic Pacemaker error (ref=1ba836f2-328d-45c7-adbb-1db9b0a1ca4c) by client crmd.15883
Nov 13 13:47:59 [31460] vm2 stonith-ng: (      main.c:318   )   trace: stonith_notify_client: 	Sent st_notify_fence notification to client crmd.31464.d20e14
Nov 13 13:47:59 [31460] vm2 stonith-ng: (      main.c:372   )   trace: do_stonith_notify: 	Notify complete
Nov 13 13:47:59 [31460] vm2 stonith-ng: (  commands.c:2063  )    info: stonith_command: 	Processed st_notify reply from vm1: OK (0)
Nov 13 13:48:01 [31460] vm2 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]   <stonith_command __name__="stonith_command" t="stonith-ng" st_async_id="00825b71-24e3-4f14-a0b8-6945f050dfd1" st_op="st_query" st_callid="12" st_callopt="0" st_remote_op="00825b71-24e3-4f14-a0b8-6945f050dfd1" st_target="vm3" st_device_action="reboot" st_origin="vm1" st_clientid="e6cedc3f-e233-4a66-9291-ce359bd76aad" st_clientname="crmd.15883" st_timeout="60" src="vm1"/>
Nov 13 13:48:01 [31460] vm2 stonith-ng: (  commands.c:2049  )   debug: stonith_command: 	Processing st_query 0 from vm1 (               0)
Nov 13 13:48:01 [31460] vm2 stonith-ng: (    remote.c:593   )   trace: create_remote_stonith_op: 	Created 00825b71-24e3-4f14-a0b8-6945f050dfd1
Nov 13 13:48:01 [31460] vm2 stonith-ng: (    remote.c:623   )   trace: create_remote_stonith_op: 	Recorded new stonith op: 00825b71-24e3-4f14-a0b8-6945f050dfd1 - reboot of vm3 for crmd.15883
Nov 13 13:48:01 [31460] vm2 stonith-ng: (    remote.c:474   )   trace: merge_duplicates: 	Must be for different clients: crmd.15883
Nov 13 13:48:01 [31460] vm2 stonith-ng: (  commands.c:1296  )   debug: stonith_query: 	Query   <stonith_command __name__="stonith_command" t="stonith-ng" st_async_id="00825b71-24e3-4f14-a0b8-6945f050dfd1" st_op="st_query" st_callid="12" st_callopt="0" st_remote_op="00825b71-24e3-4f14-a0b8-6945f050dfd1" st_target="vm3" st_device_action="reboot" st_origin="vm1" st_clientid="e6cedc3f-e233-4a66-9291-ce359bd76aad" st_clientname="crmd.15883" st_timeout="60" src="vm1"/>
Nov 13 13:48:01 [31460] vm2 stonith-ng: (  commands.c:1650  )   trace: stonith_construct_reply: 	Creating a basic reply
Nov 13 13:48:01 [31460] vm2 stonith-ng: (  commands.c:1199  )   debug: get_capable_devices: 	Searching through 1 devices to see what is capable of action (reboot) for target vm3
Nov 13 13:48:01 [31460] vm2 stonith-ng: (  commands.c:1118  )  notice: can_fence_host_with_device: 	F1 can fence vm3: dynamic-list
Nov 13 13:48:01 [31460] vm2 stonith-ng: (  commands.c:1037  )   debug: search_devices_record_result: 	Finished Search. 1 devices can perform action (reboot) on node vm3
Nov 13 13:48:01 [31460] vm2 stonith-ng: (  commands.c:1254  )   debug: stonith_query_capable_device_cb: 	Found 1 matching devices for 'vm3'
Nov 13 13:48:01 [31460] vm2 stonith-ng: (  commands.c:1260  )   trace: stonith_query_capable_device_cb: 	Attaching query list output
Nov 13 13:48:01 [31460] vm2 stonith-ng: (  commands.c:2063  )    info: stonith_command: 	Processed st_query from vm1: OK (0)
Nov 13 13:48:01 [31460] vm2 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]   <stonith_command __name__="stonith_command" t="stonith-ng" st_async_id="00825b71-24e3-4f14-a0b8-6945f050dfd1" st_op="st_fence" st_callid="12" st_callopt="0" st_remote_op="00825b71-24e3-4f14-a0b8-6945f050dfd1" st_target="vm3" st_device_action="reboot" st_origin="vm1" st_clientid="e6cedc3f-e233-4a66-9291-ce359bd76aad" st_clientname="crmd.15883" st_timeout="60" st_device_id="F1" st_mode="slave" src="vm1"/>
Nov 13 13:48:01 [31460] vm2 stonith-ng: (  commands.c:2049  )   debug: stonith_command: 	Processing st_fence 0 from vm1 (               0)
Nov 13 13:48:01 [31460] vm2 stonith-ng: (  commands.c:165   )   trace: create_async_command: 	Command   <stonith_command __name__="stonith_command" t="stonith-ng" st_async_id="00825b71-24e3-4f14-a0b8-6945f050dfd1" st_op="st_fence" st_callid="12" st_callopt="0" st_remote_op="00825b71-24e3-4f14-a0b8-6945f050dfd1" st_target="vm3" st_device_action="reboot" st_origin="vm1" st_clientid="e6cedc3f-e233-4a66-9291-ce359bd76aad" st_clientname="crmd.15883" st_timeout="60" st_device_id="F1" st_mode="slave" src="vm1"/>
Nov 13 13:48:01 [31460] vm2 stonith-ng: (  commands.c:285   )   debug: schedule_stonith_command: 	Scheduling reboot on F1 for remote peer vm1 with op id (00825b71-24e3-4f14-a0b8-6945f050dfd1) (timeout=60s)
Nov 13 13:48:01 [31460] vm2 stonith-ng: (  commands.c:2063  )    info: stonith_command: 	Processed st_fence from vm1: Operation now in progress (-115)
Nov 13 13:48:01 [31460] vm2 stonith-ng: ( st_client.c:576   )    info: stonith_action_create: 	Initiating action reboot for agent fence_legacy (target=vm3)
Nov 13 13:48:01 [31460] vm2 stonith-ng: ( st_client.c:476   )   debug: make_args: 	Performing reboot action for node 'vm3' as 'port=vm3'
Nov 13 13:48:01 [31460] vm2 stonith-ng: ( st_client.c:739   )   debug: internal_stonith_action_execute: 	forking
Nov 13 13:48:01 [31460] vm2 stonith-ng: ( st_client.c:785   )   debug: internal_stonith_action_execute: 	sending args
Nov 13 13:48:01 [31460] vm2 stonith-ng: (  commands.c:246   )   debug: stonith_device_execute: 	Operation reboot for node vm3 on F1 now running with pid=31792, timeout=60s
Nov 13 13:48:02 vm2 stonith: [31793]: CRIT: external_reset_req: 'libvirt reset' for host vm3 failed with rc 1
Nov 13 13:48:02 [31460] vm2 stonith-ng: ( st_client.c:678   )   debug: stonith_action_async_done: 	Child process 31792 performing action 'reboot' exited with rc 1
Nov 13 13:48:02 [31460] vm2 stonith-ng: ( st_client.c:719   )    info: internal_stonith_action_execute: 	Attempt 2 to execute fence_legacy (reboot). remaining timeout is 59
Nov 13 13:48:02 [31460] vm2 stonith-ng: ( st_client.c:739   )   debug: internal_stonith_action_execute: 	forking
Nov 13 13:48:02 [31460] vm2 stonith-ng: ( st_client.c:785   )   debug: internal_stonith_action_execute: 	sending args
Nov 13 13:48:04 vm2 stonith: [31825]: CRIT: external_reset_req: 'libvirt reset' for host vm3 failed with rc 1
Nov 13 13:48:04 [31460] vm2 stonith-ng: ( st_client.c:678   )   debug: stonith_action_async_done: 	Child process 31823 performing action 'reboot' exited with rc 1
Nov 13 13:48:04 [31460] vm2 stonith-ng: ( st_client.c:641   )    info: update_remaining_timeout: 	Attempted to execute agent fence_legacy (reboot) the maximum number of times (2) allowed
Nov 13 13:48:04 [31460] vm2 stonith-ng: (  commands.c:1463  )   trace: st_child_done: 	Operation reboot on F1 completed with rc=1 (0 remaining)
Nov 13 13:48:04 [31460] vm2 stonith-ng: (  commands.c:1677  )   trace: stonith_construct_async_reply: 	Creating a basic reply
Nov 13 13:48:04 [31460] vm2 stonith-ng: (  commands.c:1321  )   error: log_operation: 	Operation 'reboot' [31823] (call 12 from crmd.15883) for host 'vm3' with device 'F1' returned: -201 (Generic Pacemaker error)
Nov 13 13:48:04 [31460] vm2 stonith-ng: (  commands.c:1333  ) warning: log_operation: 	F1:31823 [ Performing: stonith -t external/libvirt -T reset vm3 ]
Nov 13 13:48:04 [31460] vm2 stonith-ng: (  commands.c:1333  ) warning: log_operation: 	F1:31823 [ failed: vm3 5 ]
Nov 13 13:48:04 [31460] vm2 stonith-ng: (  commands.c:1362  )   trace: stonith_send_async_reply: 	Reply   <st-reply st_origin="vm1" t="stonith-ng" st_op="st_fence" st_device_id="F1" st_remote_op="00825b71-24e3-4f14-a0b8-6945f050dfd1" st_clientid="e6cedc3f-e233-4a66-9291-ce359bd76aad" st_clientname="crmd.15883" st_target="vm3" st_device_action="st_fence" st_callid="12" st_callopt="0" st_rc="-201" st_output="Performing: stonith -t external/libvirt -T reset vm3\nfailed: vm3 5\n"/>
Nov 13 13:48:04 [31460] vm2 stonith-ng: (  commands.c:1369  )   trace: stonith_send_async_reply: 	Directed reply to vm1
Nov 13 13:48:04 [31460] vm2 stonith-ng: (  commands.c:213   )   trace: stonith_device_execute: 	Nothing further to do for F1
Nov 13 13:48:04 [31460] vm2 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]   <st-reply t="st_notify" subt="broadcast" st_op="st_notify" count="11" src="vm1">
Nov 13 13:48:04 [31460] vm2 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]     <st_calldata>
Nov 13 13:48:04 [31460] vm2 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]       <st_notify_fence state="4" st_rc="-201" st_target="vm3" st_device_action="reboot" st_delegate="vm2" st_remote_op="00825b71-24e3-4f14-a0b8-6945f050dfd1" st_origin="vm1" st_clientid="e6cedc3f-e233-4a66-9291-ce359bd76aad" st_clientname="crmd.15883"/>
Nov 13 13:48:04 [31460] vm2 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]     </st_calldata>
Nov 13 13:48:04 [31460] vm2 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]   </st-reply>
Nov 13 13:48:04 [31460] vm2 stonith-ng: (  commands.c:2049  )   debug: stonith_command: 	Processing st_notify reply 0 from vm1 (               0)
Nov 13 13:48:04 [31460] vm2 stonith-ng: (    remote.c:1281  )   debug: process_remote_stonith_exec: 	Marking call to reboot for vm3 on behalf of crmd.15883@00825b71-24e3-4f14-a0b8-6945f050dfd1.vm1: Generic Pacemaker error (-201)
Nov 13 13:48:04 [31460] vm2 stonith-ng: (    remote.c:297   )  notice: remote_op_done: 	Operation reboot of vm3 by vm1 for crmd.15883@vm1.00825b71: Generic Pacemaker error
Nov 13 13:48:04 [31460] vm2 stonith-ng: (  commands.c:1650  )   trace: stonith_construct_reply: 	Creating a basic reply
Nov 13 13:48:04 [31460] vm2 stonith-ng: (  commands.c:1666  )   trace: stonith_construct_reply: 	Attaching reply output
Nov 13 13:48:04 [31460] vm2 stonith-ng: (      main.c:241   )   trace: do_local_reply: 	Sending response
Nov 13 13:48:04 [31460] vm2 stonith-ng: (      main.c:244   )   trace: do_local_reply: 	Sending callback to request originator
Nov 13 13:48:04 [31460] vm2 stonith-ng: (      main.c:247   )   trace: do_local_reply: 	No client to sent the response to.  F_STONITH_CLIENTID not set.
Nov 13 13:48:04 [31460] vm2 stonith-ng: (      main.c:369   )   trace: do_stonith_notify: 	Notifying clients
Nov 13 13:48:04 [31464] vm2       crmd: (  te_utils.c:167   )  notice: tengine_stonith_notify: 	Peer vm3 was not terminated (reboot) by vm1 for vm1: Generic Pacemaker error (ref=00825b71-24e3-4f14-a0b8-6945f050dfd1) by client crmd.15883
Nov 13 13:48:04 [31460] vm2 stonith-ng: (      main.c:318   )   trace: stonith_notify_client: 	Sent st_notify_fence notification to client crmd.31464.d20e14
Nov 13 13:48:04 [31460] vm2 stonith-ng: (      main.c:372   )   trace: do_stonith_notify: 	Notify complete
Nov 13 13:48:04 [31460] vm2 stonith-ng: (  commands.c:2063  )    info: stonith_command: 	Processed st_notify reply from vm1: OK (0)
Nov 13 13:48:21 [31464] vm2       crmd: (  throttle.c:256   )   debug: throttle_cib_load: 	Current CIB load from 0 ticks in 30s is 0.000000 (@100 tps)
Nov 13 13:48:21 [31464] vm2       crmd: (  throttle.c:298   )   debug: throttle_load_avg: 	Current load is 0.000000 (full: 0.00 0.03 0.00 1/108 31855)
Nov 13 13:48:21 [31464] vm2       crmd: (  throttle.c:378   )   debug: throttle_io_load: 	Current IO load is 0.000000
Nov 13 13:48:51 [31464] vm2       crmd: (  throttle.c:256   )   debug: throttle_cib_load: 	Current CIB load from 0 ticks in 30s is 0.000000 (@100 tps)
Nov 13 13:48:51 [31464] vm2       crmd: (  throttle.c:298   )   debug: throttle_load_avg: 	Current load is 0.000000 (full: 0.00 0.03 0.00 1/108 31878)
Nov 13 13:48:51 [31464] vm2       crmd: (  throttle.c:378   )   debug: throttle_io_load: 	Current IO load is 0.000000
Nov 13 13:49:21 [31464] vm2       crmd: (  throttle.c:256   )   debug: throttle_cib_load: 	Current CIB load from 0 ticks in 30s is 0.000000 (@100 tps)
Nov 13 13:49:21 [31464] vm2       crmd: (  throttle.c:298   )   debug: throttle_load_avg: 	Current load is 0.000000 (full: 0.00 0.02 0.00 1/108 31879)
Nov 13 13:49:21 [31464] vm2       crmd: (  throttle.c:378   )   debug: throttle_io_load: 	Current IO load is 0.000000
Nov 13 13:49:51 [31464] vm2       crmd: (  throttle.c:256   )   debug: throttle_cib_load: 	Current CIB load from 0 ticks in 30s is 0.000000 (@100 tps)
Nov 13 13:49:51 [31464] vm2       crmd: (  throttle.c:298   )   debug: throttle_load_avg: 	Current load is 0.000000 (full: 0.00 0.02 0.00 1/108 31886)
Nov 13 13:49:51 [31464] vm2       crmd: (  throttle.c:378   )   debug: throttle_io_load: 	Current IO load is 0.000000
Nov 13 13:50:21 [31464] vm2       crmd: (  throttle.c:256   )   debug: throttle_cib_load: 	Current CIB load from 0 ticks in 30s is 0.000000 (@100 tps)
Nov 13 13:50:21 [31464] vm2       crmd: (  throttle.c:298   )   debug: throttle_load_avg: 	Current load is 0.000000 (full: 0.00 0.02 0.00 1/108 31888)
Nov 13 13:50:21 [31464] vm2       crmd: (  throttle.c:378   )   debug: throttle_io_load: 	Current IO load is 0.000000
Nov 13 13:50:51 [31464] vm2       crmd: (  throttle.c:256   )   debug: throttle_cib_load: 	Current CIB load from 0 ticks in 30s is 0.000000 (@100 tps)
Nov 13 13:50:51 [31464] vm2       crmd: (  throttle.c:298   )   debug: throttle_load_avg: 	Current load is 0.000000 (full: 0.00 0.01 0.00 1/108 31896)
Nov 13 13:50:51 [31464] vm2       crmd: (  throttle.c:378   )   debug: throttle_io_load: 	Current IO load is 0.000000
Nov 13 13:51:21 [31464] vm2       crmd: (  throttle.c:256   )   debug: throttle_cib_load: 	Current CIB load from 0 ticks in 30s is 0.000000 (@100 tps)
Nov 13 13:51:21 [31464] vm2       crmd: (  throttle.c:298   )   debug: throttle_load_avg: 	Current load is 0.000000 (full: 0.00 0.01 0.00 1/108 31896)
Nov 13 13:51:21 [31464] vm2       crmd: (  throttle.c:378   )   debug: throttle_io_load: 	Current IO load is 0.000000
Nov 13 13:51:51 [31464] vm2       crmd: (  throttle.c:256   )   debug: throttle_cib_load: 	Current CIB load from 0 ticks in 30s is 0.000000 (@100 tps)
Nov 13 13:51:51 [31464] vm2       crmd: (  throttle.c:298   )   debug: throttle_load_avg: 	Current load is 0.000000 (full: 0.00 0.01 0.00 1/108 31903)
Nov 13 13:51:51 [31464] vm2       crmd: (  throttle.c:378   )   debug: throttle_io_load: 	Current IO load is 0.000000
Nov 13 13:52:21 [31464] vm2       crmd: (  throttle.c:256   )   debug: throttle_cib_load: 	Current CIB load from 0 ticks in 30s is 0.000000 (@100 tps)
Nov 13 13:52:21 [31464] vm2       crmd: (  throttle.c:298   )   debug: throttle_load_avg: 	Current load is 0.000000 (full: 0.00 0.00 0.00 1/108 31904)
Nov 13 13:52:21 [31464] vm2       crmd: (  throttle.c:378   )   debug: throttle_io_load: 	Current IO load is 0.000000
Nov 13 13:52:51 [31464] vm2       crmd: (  throttle.c:256   )   debug: throttle_cib_load: 	Current CIB load from 0 ticks in 30s is 0.000000 (@100 tps)
Nov 13 13:52:51 [31464] vm2       crmd: (  throttle.c:298   )   debug: throttle_load_avg: 	Current load is 0.000000 (full: 0.00 0.00 0.00 1/108 31910)
Nov 13 13:52:51 [31464] vm2       crmd: (  throttle.c:378   )   debug: throttle_io_load: 	Current IO load is 0.000000
Nov 13 13:53:21 [31464] vm2       crmd: (  throttle.c:256   )   debug: throttle_cib_load: 	Current CIB load from 0 ticks in 30s is 0.000000 (@100 tps)
Nov 13 13:53:21 [31464] vm2       crmd: (  throttle.c:298   )   debug: throttle_load_avg: 	Current load is 0.000000 (full: 0.00 0.00 0.00 1/108 31912)
Nov 13 13:53:21 [31464] vm2       crmd: (  throttle.c:378   )   debug: throttle_io_load: 	Current IO load is 0.000000
Nov 13 13:53:51 [31464] vm2       crmd: (  throttle.c:256   )   debug: throttle_cib_load: 	Current CIB load from 1 ticks in 30s is 0.000333 (@100 tps)
Nov 13 13:53:51 [31464] vm2       crmd: (  throttle.c:298   )   debug: throttle_load_avg: 	Current load is 0.000000 (full: 0.00 0.00 0.00 1/108 31918)
Nov 13 13:53:51 [31464] vm2       crmd: (  throttle.c:378   )   debug: throttle_io_load: 	Current IO load is 0.000000
Nov 13 13:54:21 [31464] vm2       crmd: (  throttle.c:256   )   debug: throttle_cib_load: 	Current CIB load from 0 ticks in 30s is 0.000000 (@100 tps)
Nov 13 13:54:21 [31464] vm2       crmd: (  throttle.c:298   )   debug: throttle_load_avg: 	Current load is 0.000000 (full: 0.00 0.00 0.00 1/108 31919)
Nov 13 13:54:21 [31464] vm2       crmd: (  throttle.c:378   )   debug: throttle_io_load: 	Current IO load is 0.000000
Nov 13 13:54:51 [31464] vm2       crmd: (  throttle.c:256   )   debug: throttle_cib_load: 	Current CIB load from 0 ticks in 30s is 0.000000 (@100 tps)
Nov 13 13:54:51 [31464] vm2       crmd: (  throttle.c:298   )   debug: throttle_load_avg: 	Current load is 0.000000 (full: 0.00 0.00 0.00 1/108 31926)
Nov 13 13:54:51 [31464] vm2       crmd: (  throttle.c:378   )   debug: throttle_io_load: 	Current IO load is 0.000000
Nov 13 13:55:21 [31464] vm2       crmd: (  throttle.c:256   )   debug: throttle_cib_load: 	Current CIB load from 0 ticks in 30s is 0.000000 (@100 tps)
Nov 13 13:55:21 [31464] vm2       crmd: (  throttle.c:298   )   debug: throttle_load_avg: 	Current load is 0.000000 (full: 0.00 0.00 0.00 1/108 31926)
Nov 13 13:55:21 [31464] vm2       crmd: (  throttle.c:378   )   debug: throttle_io_load: 	Current IO load is 0.000000
Nov 13 13:55:51 [31464] vm2       crmd: (  throttle.c:256   )   debug: throttle_cib_load: 	Current CIB load from 0 ticks in 30s is 0.000000 (@100 tps)
Nov 13 13:55:51 [31464] vm2       crmd: (  throttle.c:298   )   debug: throttle_load_avg: 	Current load is 0.000000 (full: 0.00 0.00 0.00 1/108 31934)
Nov 13 13:55:51 [31464] vm2       crmd: (  throttle.c:378   )   debug: throttle_io_load: 	Current IO load is 0.000000
Nov 13 13:56:21 [31464] vm2       crmd: (  throttle.c:256   )   debug: throttle_cib_load: 	Current CIB load from 0 ticks in 30s is 0.000000 (@100 tps)
Nov 13 13:56:21 [31464] vm2       crmd: (  throttle.c:298   )   debug: throttle_load_avg: 	Current load is 0.000000 (full: 0.00 0.00 0.00 1/108 31934)
Nov 13 13:56:21 [31464] vm2       crmd: (  throttle.c:378   )   debug: throttle_io_load: 	Current IO load is 0.000000
Nov 13 13:56:51 [31464] vm2       crmd: (  throttle.c:256   )   debug: throttle_cib_load: 	Current CIB load from 1 ticks in 30s is 0.000333 (@100 tps)
Nov 13 13:56:51 [31464] vm2       crmd: (  throttle.c:298   )   debug: throttle_load_avg: 	Current load is 0.000000 (full: 0.00 0.00 0.00 1/108 31941)
Nov 13 13:56:51 [31464] vm2       crmd: (  throttle.c:378   )   debug: throttle_io_load: 	Current IO load is 0.000000
Nov 13 13:57:21 [31464] vm2       crmd: (  throttle.c:256   )   debug: throttle_cib_load: 	Current CIB load from 0 ticks in 30s is 0.000000 (@100 tps)
Nov 13 13:57:21 [31464] vm2       crmd: (  throttle.c:298   )   debug: throttle_load_avg: 	Current load is 0.000000 (full: 0.00 0.00 0.00 1/108 31942)
Nov 13 13:57:21 [31464] vm2       crmd: (  throttle.c:378   )   debug: throttle_io_load: 	Current IO load is 0.000000
Nov 13 13:57:51 [31464] vm2       crmd: (  throttle.c:256   )   debug: throttle_cib_load: 	Current CIB load from 0 ticks in 30s is 0.000000 (@100 tps)
Nov 13 13:57:51 [31464] vm2       crmd: (  throttle.c:298   )   debug: throttle_load_avg: 	Current load is 0.000000 (full: 0.00 0.00 0.00 1/108 31948)
Nov 13 13:57:51 [31464] vm2       crmd: (  throttle.c:378   )   debug: throttle_io_load: 	Current IO load is 0.000000
Nov 13 13:58:21 [31464] vm2       crmd: (  throttle.c:256   )   debug: throttle_cib_load: 	Current CIB load from 0 ticks in 30s is 0.000000 (@100 tps)
Nov 13 13:58:21 [31464] vm2       crmd: (  throttle.c:298   )   debug: throttle_load_avg: 	Current load is 0.000000 (full: 0.00 0.00 0.00 1/108 31950)
Nov 13 13:58:21 [31464] vm2       crmd: (  throttle.c:378   )   debug: throttle_io_load: 	Current IO load is 0.000000
Nov 13 13:58:51 [31464] vm2       crmd: (  throttle.c:256   )   debug: throttle_cib_load: 	Current CIB load from 0 ticks in 30s is 0.000000 (@100 tps)
Nov 13 13:58:51 [31464] vm2       crmd: (  throttle.c:298   )   debug: throttle_load_avg: 	Current load is 0.000000 (full: 0.00 0.00 0.00 1/108 31956)
Nov 13 13:58:51 [31464] vm2       crmd: (  throttle.c:378   )   debug: throttle_io_load: 	Current IO load is 0.000000
Nov 13 13:59:21 [31464] vm2       crmd: (  throttle.c:256   )   debug: throttle_cib_load: 	Current CIB load from 0 ticks in 30s is 0.000000 (@100 tps)
Nov 13 13:59:21 [31464] vm2       crmd: (  throttle.c:298   )   debug: throttle_load_avg: 	Current load is 0.000000 (full: 0.00 0.00 0.00 1/108 31957)
Nov 13 13:59:21 [31464] vm2       crmd: (  throttle.c:378   )   debug: throttle_io_load: 	Current IO load is 0.000000
Nov 13 13:59:51 [31464] vm2       crmd: (  throttle.c:256   )   debug: throttle_cib_load: 	Current CIB load from 0 ticks in 30s is 0.000000 (@100 tps)
Nov 13 13:59:51 [31464] vm2       crmd: (  throttle.c:298   )   debug: throttle_load_avg: 	Current load is 0.000000 (full: 0.00 0.00 0.00 1/108 31964)
Nov 13 13:59:51 [31464] vm2       crmd: (  throttle.c:378   )   debug: throttle_io_load: 	Current IO load is 0.000000
Nov 13 14:00:21 [31464] vm2       crmd: (  throttle.c:256   )   debug: throttle_cib_load: 	Current CIB load from 0 ticks in 30s is 0.000000 (@100 tps)
Nov 13 14:00:21 [31464] vm2       crmd: (  throttle.c:298   )   debug: throttle_load_avg: 	Current load is 0.000000 (full: 0.00 0.00 0.00 1/108 31966)
Nov 13 14:00:21 [31464] vm2       crmd: (  throttle.c:378   )   debug: throttle_io_load: 	Current IO load is 0.000000
Nov 13 14:00:51 [31464] vm2       crmd: (  throttle.c:256   )   debug: throttle_cib_load: 	Current CIB load from 0 ticks in 30s is 0.000000 (@100 tps)
Nov 13 14:00:51 [31464] vm2       crmd: (  throttle.c:298   )   debug: throttle_load_avg: 	Current load is 0.000000 (full: 0.00 0.00 0.00 1/108 31974)
Nov 13 14:00:51 [31464] vm2       crmd: (  throttle.c:378   )   debug: throttle_io_load: 	Current IO load is 0.000000
Nov 13 14:01:21 [31464] vm2       crmd: (  throttle.c:256   )   debug: throttle_cib_load: 	Current CIB load from 0 ticks in 30s is 0.000000 (@100 tps)
Nov 13 14:01:21 [31464] vm2       crmd: (  throttle.c:298   )   debug: throttle_load_avg: 	Current load is 0.000000 (full: 0.00 0.00 0.00 1/108 31985)
Nov 13 14:01:21 [31464] vm2       crmd: (  throttle.c:378   )   debug: throttle_io_load: 	Current IO load is 0.000000
Nov 13 14:01:51 [31464] vm2       crmd: (  throttle.c:256   )   debug: throttle_cib_load: 	Current CIB load from 0 ticks in 30s is 0.000000 (@100 tps)
Nov 13 14:01:51 [31464] vm2       crmd: (  throttle.c:298   )   debug: throttle_load_avg: 	Current load is 0.000000 (full: 0.00 0.00 0.00 1/108 31992)
Nov 13 14:01:51 [31464] vm2       crmd: (  throttle.c:378   )   debug: throttle_io_load: 	Current IO load is 0.000000
Nov 13 14:02:21 [31464] vm2       crmd: (  throttle.c:256   )   debug: throttle_cib_load: 	Current CIB load from 0 ticks in 30s is 0.000000 (@100 tps)
Nov 13 14:02:21 [31464] vm2       crmd: (  throttle.c:298   )   debug: throttle_load_avg: 	Current load is 0.000000 (full: 0.00 0.00 0.00 1/108 31993)
Nov 13 14:02:21 [31464] vm2       crmd: (  throttle.c:378   )   debug: throttle_io_load: 	Current IO load is 0.000000
Nov 13 14:02:51 [31464] vm2       crmd: (  throttle.c:256   )   debug: throttle_cib_load: 	Current CIB load from 0 ticks in 30s is 0.000000 (@100 tps)
Nov 13 14:02:51 [31464] vm2       crmd: (  throttle.c:298   )   debug: throttle_load_avg: 	Current load is 0.000000 (full: 0.00 0.00 0.00 1/108 31999)
Nov 13 14:02:51 [31464] vm2       crmd: (  throttle.c:378   )   debug: throttle_io_load: 	Current IO load is 0.000000
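Note: the throttle_cib_load figures above are consistent with dividing the reported tick count by the window length times the clock rate printed in the same line: 0 ticks in 30s at 100 tps gives 0.000000, and the single-tick samples at 13:53:51 and 13:56:51 give 1 / (30 * 100) ≈ 0.000333. A minimal sketch of that arithmetic follows; the names are illustrative only and are not identifiers from throttle.c.

    # Sketch only: reproduces the throttle_cib_load figure from the values
    # printed in the log ("<N> ticks in 30s ... @100 tps").  Names are
    # illustrative, not Pacemaker's own.
    def cib_load(cpu_ticks, interval_s, ticks_per_sec=100):
        # Fraction of one CPU consumed over the sampling window.
        return cpu_ticks / (interval_s * ticks_per_sec)

    print(cib_load(0, 30))   # 0.0         -> logged as 0.000000
    print(cib_load(1, 30))   # 0.000333... -> logged as 0.000333 (13:53:51)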
Nov 13 14:03:04 [31460] vm2 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]   <stonith_command __name__="stonith_command" t="stonith-ng" st_async_id="893bcd8c-11ea-4f2c-b5d5-e2c9d3883c1b" st_op="st_query" st_callid="13" st_callopt="0" st_remote_op="893bcd8c-11ea-4f2c-b5d5-e2c9d3883c1b" st_target="vm3" st_device_action="reboot" st_origin="vm1" st_clientid="e6cedc3f-e233-4a66-9291-ce359bd76aad" st_clientname="crmd.15883" st_timeout="60" src="vm1"/>
Nov 13 14:03:04 [31460] vm2 stonith-ng: (  commands.c:2049  )   debug: stonith_command: 	Processing st_query 0 from vm1 (               0)
Nov 13 14:03:04 [31460] vm2 stonith-ng: (    remote.c:593   )   trace: create_remote_stonith_op: 	Created 893bcd8c-11ea-4f2c-b5d5-e2c9d3883c1b
Nov 13 14:03:04 [31460] vm2 stonith-ng: (    remote.c:623   )   trace: create_remote_stonith_op: 	Recorded new stonith op: 893bcd8c-11ea-4f2c-b5d5-e2c9d3883c1b - reboot of vm3 for crmd.15883
Nov 13 14:03:04 [31460] vm2 stonith-ng: (    remote.c:474   )   trace: merge_duplicates: 	Must be for different clients: crmd.15883
Nov 13 14:03:04 [31460] vm2 stonith-ng: (  commands.c:1296  )   debug: stonith_query: 	Query   <stonith_command __name__="stonith_command" t="stonith-ng" st_async_id="893bcd8c-11ea-4f2c-b5d5-e2c9d3883c1b" st_op="st_query" st_callid="13" st_callopt="0" st_remote_op="893bcd8c-11ea-4f2c-b5d5-e2c9d3883c1b" st_target="vm3" st_device_action="reboot" st_origin="vm1" st_clientid="e6cedc3f-e233-4a66-9291-ce359bd76aad" st_clientname="crmd.15883" st_timeout="60" src="vm1"/>
Nov 13 14:03:04 [31460] vm2 stonith-ng: (  commands.c:1650  )   trace: stonith_construct_reply: 	Creating a basic reply
Nov 13 14:03:04 [31460] vm2 stonith-ng: (  commands.c:1199  )   debug: get_capable_devices: 	Searching through 1 devices to see what is capable of action (reboot) for target vm3
Nov 13 14:03:04 [31460] vm2 stonith-ng: (  commands.c:288   )   debug: schedule_stonith_command: 	Scheduling list on F1 for stonith-ng (timeout=60s)
Nov 13 14:03:04 [31460] vm2 stonith-ng: (  commands.c:2063  )    info: stonith_command: 	Processed st_query from vm1: OK (0)
Nov 13 14:03:04 [31460] vm2 stonith-ng: ( st_client.c:576   )    info: stonith_action_create: 	Initiating action list for agent fence_legacy (target=(null))
Nov 13 14:03:04 [31460] vm2 stonith-ng: ( st_client.c:739   )   debug: internal_stonith_action_execute: 	forking
Nov 13 14:03:04 [31460] vm2 stonith-ng: ( st_client.c:785   )   debug: internal_stonith_action_execute: 	sending args
Nov 13 14:03:04 [31460] vm2 stonith-ng: (  commands.c:246   )   debug: stonith_device_execute: 	Operation list on F1 now running with pid=32001, timeout=60s
Nov 13 14:03:04 [31460] vm2 stonith-ng: ( st_client.c:678   )   debug: stonith_action_async_done: 	Child process 32001 performing action 'list' exited with rc 0
Nov 13 14:03:04 [31460] vm2 stonith-ng: (  commands.c:749   )    info: dynamic_list_search_cb: 	Refreshing port list for F1
Nov 13 14:03:04 [31460] vm2 stonith-ng: (  commands.c:409   )   trace: parse_host_line: 	Processing 3 bytes: [vm3]
Nov 13 14:03:04 [31460] vm2 stonith-ng: (  commands.c:436   )   trace: parse_host_line: 	Adding 'vm3'
Nov 13 14:03:04 [31460] vm2 stonith-ng: (  commands.c:409   )   trace: parse_host_line: 	Processing 11 bytes: [success:  0]
Nov 13 14:03:04 [31460] vm2 stonith-ng: (  commands.c:436   )   trace: parse_host_line: 	Adding 'success'
Nov 13 14:03:04 [31460] vm2 stonith-ng: (  commands.c:436   )   trace: parse_host_line: 	Adding '0'
Nov 13 14:03:04 [31460] vm2 stonith-ng: (  commands.c:480   )   trace: parse_host_list: 	Parsed 3 entries from 'vm3
success:  0
'
Nov 13 14:03:04 [31460] vm2 stonith-ng: (  commands.c:1037  )   debug: search_devices_record_result: 	Finished Search. 1 devices can perform action (reboot) on node vm3
Nov 13 14:03:04 [31460] vm2 stonith-ng: (  commands.c:1254  )   debug: stonith_query_capable_device_cb: 	Found 1 matching devices for 'vm3'
Nov 13 14:03:04 [31460] vm2 stonith-ng: (  commands.c:1260  )   trace: stonith_query_capable_device_cb: 	Attaching query list output
Nov 13 14:03:04 [31460] vm2 stonith-ng: (  commands.c:213   )   trace: stonith_device_execute: 	Nothing further to do for F1
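Note: the parse_host_line traces at 14:03:04 show how the dynamic port list returned by the fence_legacy 'list' action ("vm3", then "success:  0") becomes three entries: each output line is split into whitespace-separated tokens, and the trailing colon on 'success:' is not kept. The sketch below reproduces only the behaviour visible in those traces; it is not stonith-ng's actual C parser.

    # Sketch only: mimics the tokenisation shown by the parse_host_line /
    # parse_host_list traces above; the real implementation may differ.
    def parse_host_list(output):
        entries = []
        for line in output.splitlines():
            for token in line.split():
                entries.append(token.rstrip(':'))
        return entries

    # "vm3\nsuccess:  0\n" -> ['vm3', 'success', '0']  (the 3 entries logged)
    print(parse_host_list("vm3\nsuccess:  0\n"))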
Nov 13 14:03:07 [31460] vm2 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]   <st-reply t="st_notify" subt="broadcast" st_op="st_notify" count="12" src="vm1">
Nov 13 14:03:07 [31460] vm2 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]     <st_calldata>
Nov 13 14:03:07 [31460] vm2 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]       <st_notify_fence state="4" st_rc="-201" st_target="vm3" st_device_action="reboot" st_delegate="vm1" st_remote_op="893bcd8c-11ea-4f2c-b5d5-e2c9d3883c1b" st_origin="vm1" st_clientid="e6cedc3f-e233-4a66-9291-ce359bd76aad" st_clientname="crmd.15883"/>
Nov 13 14:03:07 [31460] vm2 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]     </st_calldata>
Nov 13 14:03:07 [31460] vm2 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]   </st-reply>
Nov 13 14:03:07 [31460] vm2 stonith-ng: (  commands.c:2049  )   debug: stonith_command: 	Processing st_notify reply 0 from vm1 (               0)
Nov 13 14:03:07 [31460] vm2 stonith-ng: (    remote.c:1281  )   debug: process_remote_stonith_exec: 	Marking call to reboot for vm3 on behalf of crmd.15883@893bcd8c-11ea-4f2c-b5d5-e2c9d3883c1b.vm1: Generic Pacemaker error (-201)
Nov 13 14:03:07 [31460] vm2 stonith-ng: (    remote.c:297   )  notice: remote_op_done: 	Operation reboot of vm3 by vm1 for crmd.15883@vm1.893bcd8c: Generic Pacemaker error
Nov 13 14:03:07 [31460] vm2 stonith-ng: (  commands.c:1650  )   trace: stonith_construct_reply: 	Creating a basic reply
Nov 13 14:03:07 [31460] vm2 stonith-ng: (  commands.c:1666  )   trace: stonith_construct_reply: 	Attaching reply output
Nov 13 14:03:07 [31460] vm2 stonith-ng: (      main.c:241   )   trace: do_local_reply: 	Sending response
Nov 13 14:03:07 [31460] vm2 stonith-ng: (      main.c:244   )   trace: do_local_reply: 	Sending callback to request originator
Nov 13 14:03:07 [31460] vm2 stonith-ng: (      main.c:247   )   trace: do_local_reply: 	No client to send the response to.  F_STONITH_CLIENTID not set.
Nov 13 14:03:07 [31460] vm2 stonith-ng: (      main.c:369   )   trace: do_stonith_notify: 	Notifying clients
Nov 13 14:03:07 [31464] vm2       crmd: (  te_utils.c:167   )  notice: tengine_stonith_notify: 	Peer vm3 was not terminated (reboot) by vm1 for vm1: Generic Pacemaker error (ref=893bcd8c-11ea-4f2c-b5d5-e2c9d3883c1b) by client crmd.15883
Nov 13 14:03:07 [31460] vm2 stonith-ng: (      main.c:318   )   trace: stonith_notify_client: 	Sent st_notify_fence notification to client crmd.31464.d20e14
Nov 13 14:03:07 [31460] vm2 stonith-ng: (      main.c:372   )   trace: do_stonith_notify: 	Notify complete
Nov 13 14:03:07 [31460] vm2 stonith-ng: (  commands.c:2063  )    info: stonith_command: 	Processed st_notify reply from vm1: OK (0)
Nov 13 14:03:21 [31464] vm2       crmd: (  throttle.c:256   )   debug: throttle_cib_load: 	Current CIB load from 0 ticks in 30s is 0.000000 (@100 tps)
