Nov 13 13:44:17 [15863] vm1 corosync notice  [MAIN  ] main.c:main:1171 Corosync Cluster Engine ('2.3.2.7-a911'): started and ready to provide service.
Nov 13 13:44:17 [15863] vm1 corosync info    [MAIN  ] main.c:main:1172 Corosync built-in features: watchdog upstart snmp pie relro bindnow
Nov 13 13:44:17 [15863] vm1 corosync debug   [TOTEM ] totempg.c:totempg_waiting_trans_ack_cb:285 waiting_trans_ack changed to 1
Nov 13 13:44:17 [15863] vm1 corosync debug   [TOTEM ] totemsrp.c:totemsrp_initialize:901 Token Timeout (1000 ms) retransmit timeout (238 ms)
Nov 13 13:44:17 [15863] vm1 corosync debug   [TOTEM ] totemsrp.c:totemsrp_initialize:904 token hold (180 ms) retransmits before loss (4 retrans)
Nov 13 13:44:17 [15863] vm1 corosync debug   [TOTEM ] totemsrp.c:totemsrp_initialize:911 join (50 ms) send_join (0 ms) consensus (1200 ms) merge (200 ms)
Nov 13 13:44:17 [15863] vm1 corosync debug   [TOTEM ] totemsrp.c:totemsrp_initialize:914 downcheck (1000 ms) fail to recv const (2500 msgs)
Nov 13 13:44:17 [15863] vm1 corosync debug   [TOTEM ] totemsrp.c:totemsrp_initialize:916 seqno unchanged const (30 rotations) Maximum network MTU 1401
Nov 13 13:44:17 [15863] vm1 corosync debug   [TOTEM ] totemsrp.c:totemsrp_initialize:920 window size per rotation (50 messages) maximum messages per rotation (17 messages)
Nov 13 13:44:17 [15863] vm1 corosync debug   [TOTEM ] totemsrp.c:totemsrp_initialize:924 missed count const (5 messages)
Nov 13 13:44:17 [15863] vm1 corosync debug   [TOTEM ] totemsrp.c:totemsrp_initialize:927 send threads (0 threads)
Nov 13 13:44:17 [15863] vm1 corosync debug   [TOTEM ] totemsrp.c:totemsrp_initialize:930 RRP token expired timeout (238 ms)
Nov 13 13:44:17 [15863] vm1 corosync debug   [TOTEM ] totemsrp.c:totemsrp_initialize:933 RRP token problem counter (10000 ms)
Nov 13 13:44:17 [15863] vm1 corosync debug   [TOTEM ] totemsrp.c:totemsrp_initialize:936 RRP threshold (10 problem count)
Nov 13 13:44:17 [15863] vm1 corosync debug   [TOTEM ] totemsrp.c:totemsrp_initialize:939 RRP multicast threshold (100 problem count)
Nov 13 13:44:17 [15863] vm1 corosync debug   [TOTEM ] totemsrp.c:totemsrp_initialize:942 RRP automatic recovery check timeout (1000 ms)
Nov 13 13:44:17 [15863] vm1 corosync debug   [TOTEM ] totemsrp.c:totemsrp_initialize:944 RRP mode set to active.
Nov 13 13:44:17 [15863] vm1 corosync debug   [TOTEM ] totemsrp.c:totemsrp_initialize:947 heartbeat_failures_allowed (0)
Nov 13 13:44:17 [15863] vm1 corosync debug   [TOTEM ] totemsrp.c:totemsrp_initialize:949 max_network_delay (50 ms)
Nov 13 13:44:17 [15863] vm1 corosync debug   [TOTEM ] totemsrp.c:totemsrp_initialize:972 HeartBeat is Disabled. To enable set heartbeat_failures_allowed > 0
Nov 13 13:44:17 [15863] vm1 corosync notice  [TOTEM ] totemnet.c:totemnet_instance_initialize:242 Initializing transport (UDP/IP Multicast).
Nov 13 13:44:17 [15863] vm1 corosync notice  [TOTEM ] totemcrypto.c:init_nss:579 Initializing transmit/receive security (NSS) crypto: aes256 hash: sha1
Nov 13 13:44:17 [15863] vm1 corosync notice  [TOTEM ] totemnet.c:totemnet_instance_initialize:242 Initializing transport (UDP/IP Multicast).
Nov 13 13:44:17 [15863] vm1 corosync notice  [TOTEM ] totemcrypto.c:init_nss:579 Initializing transmit/receive security (NSS) crypto: aes256 hash: sha1
Nov 13 13:44:17 [15863] vm1 corosync debug   [TOTEM ] totemudp.c:totemudp_build_sockets_ip:905 Receive multicast socket recv buffer size (320000 bytes).
Nov 13 13:44:17 [15863] vm1 corosync debug   [TOTEM ] totemudp.c:totemudp_build_sockets_ip:911 Transmit multicast socket send buffer size (320000 bytes).
Nov 13 13:44:17 [15863] vm1 corosync debug   [TOTEM ] totemudp.c:totemudp_build_sockets_ip:917 Local receive multicast loop socket recv buffer size (320000 bytes).
Nov 13 13:44:17 [15863] vm1 corosync debug   [TOTEM ] totemudp.c:totemudp_build_sockets_ip:923 Local transmit multicast loop socket send buffer size (320000 bytes).
Nov 13 13:44:17 [15863] vm1 corosync notice  [TOTEM ] totemudp.c:timer_function_netif_check_timeout:670 The network interface [192.168.101.141] is now up.
Nov 13 13:44:17 [15863] vm1 corosync debug   [TOTEM ] totemsrp.c:main_iface_change_fn:4637 Created or loaded sequence id 0.192.168.101.141 for this ring.
Nov 13 13:44:17 [15863] vm1 corosync notice  [SERV  ] service.c:corosync_service_link_and_init:174 Service engine loaded: corosync configuration map access [0]
Nov 13 13:44:17 [15863] vm1 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_service_init:865 Initializing IPC on cmap [0]
Nov 13 13:44:17 [15863] vm1 corosync debug   [MAIN  ] ipc_glue.c:cs_get_ipc_type:811 No configured qb.ipc_type. Using native ipc
Nov 13 13:44:17 [15863] vm1 corosync info    [QB    ] ipc_setup.c:qb_ipcs_us_publish:377 server name: cmap
Nov 13 13:44:17 [15863] vm1 corosync notice  [SERV  ] service.c:corosync_service_link_and_init:174 Service engine loaded: corosync configuration service [1]
Nov 13 13:44:17 [15863] vm1 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_service_init:865 Initializing IPC on cfg [1]
Nov 13 13:44:17 [15863] vm1 corosync debug   [MAIN  ] ipc_glue.c:cs_get_ipc_type:811 No configured qb.ipc_type. Using native ipc
Nov 13 13:44:17 [15863] vm1 corosync info    [QB    ] ipc_setup.c:qb_ipcs_us_publish:377 server name: cfg
Nov 13 13:44:17 [15863] vm1 corosync notice  [SERV  ] service.c:corosync_service_link_and_init:174 Service engine loaded: corosync cluster closed process group service v1.01 [2]
Nov 13 13:44:17 [15863] vm1 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_service_init:865 Initializing IPC on cpg [2]
Nov 13 13:44:17 [15863] vm1 corosync debug   [MAIN  ] ipc_glue.c:cs_get_ipc_type:811 No configured qb.ipc_type. Using native ipc
Nov 13 13:44:17 [15863] vm1 corosync info    [QB    ] ipc_setup.c:qb_ipcs_us_publish:377 server name: cpg
Nov 13 13:44:17 [15863] vm1 corosync notice  [SERV  ] service.c:corosync_service_link_and_init:174 Service engine loaded: corosync profile loading service [4]
Nov 13 13:44:17 [15863] vm1 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_service_init:851 NOT Initializing IPC on pload [4]
Nov 13 13:44:17 [15863] vm1 corosync warning [WD    ] wd.c:setup_watchdog:631 No Watchdog, try modprobe <a watchdog>
Nov 13 13:44:17 [15863] vm1 corosync info    [WD    ] wd.c:wd_scan_resources:580 no resources configured.
Nov 13 13:44:17 [15863] vm1 corosync notice  [SERV  ] service.c:corosync_service_link_and_init:174 Service engine loaded: corosync watchdog service [7]
Nov 13 13:44:17 [15863] vm1 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_service_init:851 NOT Initializing IPC on wd [7]
Nov 13 13:44:17 [15863] vm1 corosync notice  [QUORUM] vsf_quorum.c:quorum_exec_init_fn:274 Using quorum provider corosync_votequorum
Nov 13 13:44:17 [15863] vm1 corosync debug   [VOTEQ ] votequorum.c:votequorum_readconfig:967 Reading configuration (runtime: 0)
Nov 13 13:44:17 [15863] vm1 corosync debug   [VOTEQ ] votequorum.c:votequorum_read_nodelist_configuration:886 No nodelist defined or our node is not in the nodelist
Nov 13 13:44:17 [15863] vm1 corosync debug   [VOTEQ ] votequorum.c:recalculate_quorum:851 total_votes=1, expected_votes=3
Nov 13 13:44:17 [15863] vm1 corosync debug   [VOTEQ ] votequorum.c:calculate_quorum:670 node 3232261517 state=1, votes=1, expected=3
Nov 13 13:44:17 [15863] vm1 corosync debug   [VOTEQ ] votequorum.c:decode_flags:587 flags: quorate: No Leaving: No WFA Status: No First: Yes Qdevice: No QdeviceAlive: No QdeviceCastVote: No QdeviceMasterWins: No
Nov 13 13:44:17 [15863] vm1 corosync notice  [SERV  ] service.c:corosync_service_link_and_init:174 Service engine loaded: corosync vote quorum service v1.0 [5]
Nov 13 13:44:17 [15863] vm1 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_service_init:865 Initializing IPC on votequorum [5]
Nov 13 13:44:17 [15863] vm1 corosync debug   [MAIN  ] ipc_glue.c:cs_get_ipc_type:811 No configured qb.ipc_type. Using native ipc
Nov 13 13:44:17 [15863] vm1 corosync info    [QB    ] ipc_setup.c:qb_ipcs_us_publish:377 server name: votequorum
Nov 13 13:44:17 [15863] vm1 corosync notice  [SERV  ] service.c:corosync_service_link_and_init:174 Service engine loaded: corosync cluster quorum service v0.1 [3]
Nov 13 13:44:17 [15863] vm1 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_service_init:865 Initializing IPC on quorum [3]
Nov 13 13:44:17 [15863] vm1 corosync debug   [MAIN  ] ipc_glue.c:cs_get_ipc_type:811 No configured qb.ipc_type. Using native ipc
Nov 13 13:44:17 [15863] vm1 corosync info    [QB    ] ipc_setup.c:qb_ipcs_us_publish:377 server name: quorum
Nov 13 13:44:17 [15863] vm1 corosync debug   [TOTEM ] totemudp.c:totemudp_build_sockets_ip:905 Receive multicast socket recv buffer size (320000 bytes).
Nov 13 13:44:17 [15863] vm1 corosync debug   [TOTEM ] totemudp.c:totemudp_build_sockets_ip:911 Transmit multicast socket send buffer size (320000 bytes).
Nov 13 13:44:17 [15863] vm1 corosync debug   [TOTEM ] totemudp.c:totemudp_build_sockets_ip:917 Local receive multicast loop socket recv buffer size (320000 bytes).
Nov 13 13:44:17 [15863] vm1 corosync debug   [TOTEM ] totemudp.c:totemudp_build_sockets_ip:923 Local transmit multicast loop socket send buffer size (320000 bytes).
Nov 13 13:44:17 [15863] vm1 corosync notice  [TOTEM ] totemudp.c:timer_function_netif_check_timeout:670 The network interface [192.168.102.141] is now up.
Nov 13 13:44:17 [15863] vm1 corosync debug   [TOTEM ] totemsrp.c:memb_state_gather_enter:2087 entering GATHER state from 15(interface change).
Nov 13 13:44:17 [15863] vm1 corosync debug   [TOTEM ] totemsrp.c:memb_state_commit_token_create:3138 Creating commit token because I am the rep.
Nov 13 13:44:17 [15863] vm1 corosync debug   [TOTEM ] totemsrp.c:old_ring_state_save:1550 Saving state aru 0 high seq received 0
Nov 13 13:44:17 [15863] vm1 corosync debug   [TOTEM ] totemsrp.c:memb_ring_id_set_and_store:3383 Storing new sequence id for ring 4
Nov 13 13:44:17 [15863] vm1 corosync debug   [TOTEM ] totemsrp.c:memb_state_commit_enter:2135 entering COMMIT state.
Nov 13 13:44:17 [15863] vm1 corosync debug   [TOTEM ] totemsrp.c:message_handler_memb_commit_token:4484 got commit token
Nov 13 13:44:17 [15863] vm1 corosync debug   [TOTEM ] totemsrp.c:memb_state_recovery_enter:2172 entering RECOVERY state.
Nov 13 13:44:17 [15863] vm1 corosync debug   [TOTEM ] totemsrp.c:memb_state_recovery_enter:2218 position [0] member 192.168.101.141:
Nov 13 13:44:17 [15863] vm1 corosync debug   [TOTEM ] totemsrp.c:memb_state_recovery_enter:2222 previous ring seq 0 rep 192.168.101.141
Nov 13 13:44:17 [15863] vm1 corosync debug   [TOTEM ] totemsrp.c:memb_state_recovery_enter:2228 aru 0 high delivered 0 received flag 1
Nov 13 13:44:17 [15863] vm1 corosync debug   [TOTEM ] totemsrp.c:memb_state_recovery_enter:2326 Did not need to originate any messages in recovery.
Nov 13 13:44:17 [15863] vm1 corosync debug   [TOTEM ] totemsrp.c:message_handler_memb_commit_token:4484 got commit token
Nov 13 13:44:17 [15863] vm1 corosync debug   [TOTEM ] totemsrp.c:message_handler_memb_commit_token:4537 Sending initial ORF token
Nov 13 13:44:17 [15863] vm1 corosync debug   [TOTEM ] totemsrp.c:message_handler_memb_commit_token:4484 got commit token
Nov 13 13:44:17 [15863] vm1 corosync debug   [TOTEM ] totemsrp.c:message_handler_memb_commit_token:4537 Sending initial ORF token
Nov 13 13:44:17 [15863] vm1 corosync debug   [TOTEM ] totemsrp.c:message_handler_memb_commit_token:4484 got commit token
Nov 13 13:44:17 [15863] vm1 corosync debug   [TOTEM ] totemsrp.c:message_handler_memb_commit_token:4537 Sending initial ORF token
Nov 13 13:44:17 [15863] vm1 corosync debug   [TOTEM ] totemsrp.c:message_handler_memb_commit_token:4484 got commit token
Nov 13 13:44:17 [15863] vm1 corosync debug   [TOTEM ] totemsrp.c:message_handler_memb_commit_token:4537 Sending initial ORF token
Nov 13 13:44:17 [15863] vm1 corosync debug   [TOTEM ] totemsrp.c:message_handler_memb_commit_token:4484 got commit token
Nov 13 13:44:17 [15863] vm1 corosync debug   [TOTEM ] totemsrp.c:message_handler_memb_commit_token:4537 Sending initial ORF token
Nov 13 13:44:17 [15863] vm1 corosync debug   [TOTEM ] totemsrp.c:message_handler_orf_token:3799 token retrans flag is 0 my set retrans flag0 retrans queue empty 1 count 0, aru 0
Nov 13 13:44:17 [15863] vm1 corosync debug   [TOTEM ] totemsrp.c:message_handler_orf_token:3810 install seq 0 aru 0 high seq received 0
Nov 13 13:44:18 [15863] vm1 corosync debug   [TOTEM ] totemsrp.c:message_handler_orf_token:3799 token retrans flag is 0 my set retrans flag0 retrans queue empty 1 count 1, aru 0
Nov 13 13:44:18 [15863] vm1 corosync debug   [TOTEM ] totemsrp.c:message_handler_orf_token:3810 install seq 0 aru 0 high seq received 0
Nov 13 13:44:18 [15863] vm1 corosync debug   [TOTEM ] totemsrp.c:message_handler_orf_token:3799 token retrans flag is 0 my set retrans flag0 retrans queue empty 1 count 2, aru 0
Nov 13 13:44:18 [15863] vm1 corosync debug   [TOTEM ] totemsrp.c:message_handler_orf_token:3810 install seq 0 aru 0 high seq received 0
Nov 13 13:44:18 [15863] vm1 corosync debug   [TOTEM ] totemsrp.c:message_handler_orf_token:3799 token retrans flag is 0 my set retrans flag0 retrans queue empty 1 count 3, aru 0
Nov 13 13:44:18 [15863] vm1 corosync debug   [TOTEM ] totemsrp.c:message_handler_orf_token:3810 install seq 0 aru 0 high seq received 0
Nov 13 13:44:18 [15863] vm1 corosync debug   [TOTEM ] totemsrp.c:message_handler_orf_token:3829 retrans flag count 4 token aru 0 install seq 0 aru 0 0
Nov 13 13:44:18 [15863] vm1 corosync debug   [TOTEM ] totemsrp.c:old_ring_state_reset:1566 Resetting old ring state
Nov 13 13:44:18 [15863] vm1 corosync debug   [TOTEM ] totemsrp.c:deliver_messages_from_recovery_to_regular:1772 recovery to regular 1-0
Nov 13 13:44:18 [15863] vm1 corosync debug   [TOTEM ] totempg.c:totempg_waiting_trans_ack_cb:285 waiting_trans_ack changed to 1
Nov 13 13:44:18 [15863] vm1 corosync debug   [MAIN  ] main.c:member_object_joined:333 Member joined: r(0) ip(192.168.101.141) r(1) ip(192.168.102.141) 
Nov 13 13:44:18 [15863] vm1 corosync debug   [TOTEM ] totemsrp.c:memb_state_operational_enter:2010 entering OPERATIONAL state.
Nov 13 13:44:18 [15863] vm1 corosync notice  [TOTEM ] totemsrp.c:memb_state_operational_enter:2016 A new membership (192.168.101.141:4) was formed. Members joined: -1062705779
Nov 13 13:44:18 [15863] vm1 corosync debug   [VOTEQ ] votequorum.c:message_handler_req_exec_votequorum_nodeinfo:1604 got nodeinfo message from cluster node 3232261517
Nov 13 13:44:18 [15863] vm1 corosync debug   [VOTEQ ] votequorum.c:message_handler_req_exec_votequorum_nodeinfo:1609 nodeinfo message[3232261517]: votes: 1, expected: 3 flags: 8
Nov 13 13:44:18 [15863] vm1 corosync debug   [VOTEQ ] votequorum.c:decode_flags:587 flags: quorate: No Leaving: No WFA Status: No First: Yes Qdevice: No QdeviceAlive: No QdeviceCastVote: No QdeviceMasterWins: No
Nov 13 13:44:18 [15863] vm1 corosync debug   [VOTEQ ] votequorum.c:recalculate_quorum:851 total_votes=1, expected_votes=3
Nov 13 13:44:18 [15863] vm1 corosync debug   [VOTEQ ] votequorum.c:calculate_quorum:670 node 3232261517 state=1, votes=1, expected=3
Nov 13 13:44:18 [15863] vm1 corosync debug   [SYNC  ] sync.c:sync_barrier_handler:232 Committing synchronization for corosync configuration map access
Nov 13 13:44:18 [15863] vm1 corosync debug   [CMAP  ] cmap.c:cmap_sync_activate:386 Single node sync -> no action
Nov 13 13:44:18 [15863] vm1 corosync debug   [CPG   ] cpg.c:downlist_log:776 comparing: sender r(0) ip(192.168.101.141) r(1) ip(192.168.102.141) ; members(old:0 left:0)
Nov 13 13:44:18 [15863] vm1 corosync debug   [CPG   ] cpg.c:downlist_log:776 chosen downlist: sender r(0) ip(192.168.101.141) r(1) ip(192.168.102.141) ; members(old:0 left:0)
Nov 13 13:44:18 [15863] vm1 corosync debug   [SYNC  ] sync.c:sync_barrier_handler:232 Committing synchronization for corosync cluster closed process group service v1.01
Nov 13 13:44:18 [15863] vm1 corosync debug   [VOTEQ ] votequorum.c:decode_flags:587 flags: quorate: No Leaving: No WFA Status: No First: Yes Qdevice: No QdeviceAlive: No QdeviceCastVote: No QdeviceMasterWins: No
Nov 13 13:44:18 [15863] vm1 corosync debug   [VOTEQ ] votequorum.c:message_handler_req_exec_votequorum_nodeinfo:1604 got nodeinfo message from cluster node 3232261517
Nov 13 13:44:18 [15863] vm1 corosync debug   [VOTEQ ] votequorum.c:message_handler_req_exec_votequorum_nodeinfo:1609 nodeinfo message[3232261517]: votes: 1, expected: 3 flags: 8
Nov 13 13:44:18 [15863] vm1 corosync debug   [VOTEQ ] votequorum.c:decode_flags:587 flags: quorate: No Leaving: No WFA Status: No First: Yes Qdevice: No QdeviceAlive: No QdeviceCastVote: No QdeviceMasterWins: No
Nov 13 13:44:18 [15863] vm1 corosync debug   [VOTEQ ] votequorum.c:recalculate_quorum:851 total_votes=1, expected_votes=3
Nov 13 13:44:18 [15863] vm1 corosync debug   [VOTEQ ] votequorum.c:calculate_quorum:670 node 3232261517 state=1, votes=1, expected=3
Nov 13 13:44:18 [15863] vm1 corosync debug   [VOTEQ ] votequorum.c:message_handler_req_exec_votequorum_nodeinfo:1604 got nodeinfo message from cluster node 3232261517
Nov 13 13:44:18 [15863] vm1 corosync debug   [VOTEQ ] votequorum.c:message_handler_req_exec_votequorum_nodeinfo:1609 nodeinfo message[0]: votes: 0, expected: 0 flags: 0
Nov 13 13:44:18 [15863] vm1 corosync debug   [SYNC  ] sync.c:sync_barrier_handler:232 Committing synchronization for corosync vote quorum service v1.0
Nov 13 13:44:18 [15863] vm1 corosync debug   [VOTEQ ] votequorum.c:recalculate_quorum:851 total_votes=1, expected_votes=3
Nov 13 13:44:18 [15863] vm1 corosync debug   [VOTEQ ] votequorum.c:calculate_quorum:670 node 3232261517 state=1, votes=1, expected=3
Nov 13 13:44:18 [15863] vm1 corosync notice  [QUORUM] vsf_quorum.c:log_view_list:132 Members[1]: -1062705779
Nov 13 13:44:18 [15863] vm1 corosync debug   [QUORUM] vsf_quorum.c:send_library_notification:359 sending quorum notification to (nil), length = 52
Nov 13 13:44:18 [15863] vm1 corosync notice  [MAIN  ] main.c:corosync_sync_completed:276 Completed service synchronization, ready to provide service.
Nov 13 13:44:18 [15863] vm1 corosync debug   [TOTEM ] totempg.c:totempg_waiting_trans_ack_cb:285 waiting_trans_ack changed to 0
Nov 13 13:44:18 [15863] vm1 corosync debug   [QB    ] ipc_setup.c:handle_new_connection:484 IPC credentials authenticated (15865-15870-25)
Nov 13 13:44:18 [15863] vm1 corosync debug   [QB    ] ipc_shm.c:qb_ipcs_shm_connect:295 connecting to client [15870]
Nov 13 13:44:18 [15863] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:18 [15863] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:18 [15863] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:18 [15863] vm1 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_created:272 connection created
Nov 13 13:44:18 [15863] vm1 corosync debug   [QB    ] ipcs.c:qb_ipcs_dispatch_connection_request:757 HUP conn (15865-15870-25)
Nov 13 13:44:18 [15863] vm1 corosync debug   [QB    ] ipcs.c:qb_ipcs_disconnect:605 qb_ipcs_disconnect(15865-15870-25) state:2
Nov 13 13:44:18 [15863] vm1 corosync debug   [QB    ] loop_poll_epoll.c:_del:117 epoll_ctl(del): Bad file descriptor (9)
Nov 13 13:44:18 [15863] vm1 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_closed:417 cs_ipcs_connection_closed() 
Nov 13 13:44:18 [15863] vm1 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_destroyed:390 cs_ipcs_connection_destroyed() 
Nov 13 13:44:18 [15863] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_close:299 Free'ing ringbuffer: /dev/shm/qb-cfg-response-15865-15870-25-header
Nov 13 13:44:18 [15863] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_close:299 Free'ing ringbuffer: /dev/shm/qb-cfg-event-15865-15870-25-header
Nov 13 13:44:18 [15863] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_close:299 Free'ing ringbuffer: /dev/shm/qb-cfg-request-15865-15870-25-header
Nov 13 13:44:18 [15863] vm1 corosync debug   [TOTEM ] totemsrp.c:memb_state_gather_enter:2087 entering GATHER state from 11(merge during join).
Nov 13 13:44:18 [15863] vm1 corosync debug   [TOTEM ] totemsrp.c:memb_state_commit_token_create:3138 Creating commit token because I am the rep.
Nov 13 13:44:18 [15863] vm1 corosync debug   [TOTEM ] totemsrp.c:old_ring_state_save:1550 Saving state aru 6 high seq received 6
Nov 13 13:44:18 [15863] vm1 corosync debug   [TOTEM ] totemsrp.c:memb_ring_id_set_and_store:3383 Storing new sequence id for ring 8
Nov 13 13:44:18 [15863] vm1 corosync debug   [TOTEM ] totemsrp.c:memb_state_commit_enter:2135 entering COMMIT state.
Nov 13 13:44:18 [15863] vm1 corosync debug   [TOTEM ] totemsrp.c:message_handler_memb_commit_token:4484 got commit token
Nov 13 13:44:18 [15863] vm1 corosync debug   [TOTEM ] totemsrp.c:message_handler_memb_commit_token:4484 got commit token
Nov 13 13:44:18 [15863] vm1 corosync debug   [TOTEM ] totemsrp.c:memb_state_recovery_enter:2172 entering RECOVERY state.
Nov 13 13:44:18 [15863] vm1 corosync debug   [TOTEM ] totemsrp.c:memb_state_recovery_enter:2214 TRANS [0] member 192.168.101.141:
Nov 13 13:44:18 [15863] vm1 corosync debug   [TOTEM ] totemsrp.c:memb_state_recovery_enter:2218 position [0] member 192.168.101.141:
Nov 13 13:44:18 [15863] vm1 corosync debug   [TOTEM ] totemsrp.c:memb_state_recovery_enter:2222 previous ring seq 4 rep 192.168.101.141
Nov 13 13:44:18 [15863] vm1 corosync debug   [TOTEM ] totemsrp.c:memb_state_recovery_enter:2228 aru 6 high delivered 6 received flag 1
Nov 13 13:44:18 [15863] vm1 corosync debug   [TOTEM ] totemsrp.c:memb_state_recovery_enter:2218 position [1] member 192.168.101.142:
Nov 13 13:44:18 [15863] vm1 corosync debug   [TOTEM ] totemsrp.c:memb_state_recovery_enter:2222 previous ring seq 4 rep 192.168.101.142
Nov 13 13:44:18 [15863] vm1 corosync debug   [TOTEM ] totemsrp.c:memb_state_recovery_enter:2228 aru 6 high delivered 6 received flag 1
Nov 13 13:44:18 [15863] vm1 corosync debug   [TOTEM ] totemsrp.c:memb_state_recovery_enter:2326 Did not need to originate any messages in recovery.
Nov 13 13:44:18 [15863] vm1 corosync debug   [TOTEM ] totemsrp.c:message_handler_memb_commit_token:4484 got commit token
Nov 13 13:44:18 [15863] vm1 corosync debug   [TOTEM ] totemsrp.c:message_handler_memb_commit_token:4537 Sending initial ORF token
Nov 13 13:44:18 [15863] vm1 corosync debug   [TOTEM ] totemsrp.c:message_handler_memb_commit_token:4484 got commit token
Nov 13 13:44:18 [15863] vm1 corosync debug   [TOTEM ] totemsrp.c:message_handler_memb_commit_token:4537 Sending initial ORF token
Nov 13 13:44:18 [15863] vm1 corosync debug   [TOTEM ] totemsrp.c:message_handler_memb_commit_token:4484 got commit token
Nov 13 13:44:18 [15863] vm1 corosync debug   [TOTEM ] totemsrp.c:message_handler_memb_commit_token:4537 Sending initial ORF token
Nov 13 13:44:18 [15863] vm1 corosync debug   [TOTEM ] totemsrp.c:message_handler_memb_commit_token:4484 got commit token
Nov 13 13:44:18 [15863] vm1 corosync debug   [TOTEM ] totemsrp.c:message_handler_memb_commit_token:4537 Sending initial ORF token
Nov 13 13:44:18 [15863] vm1 corosync debug   [TOTEM ] totemsrp.c:message_handler_orf_token:3799 token retrans flag is 0 my set retrans flag0 retrans queue empty 1 count 0, aru 0
Nov 13 13:44:18 [15863] vm1 corosync debug   [TOTEM ] totemsrp.c:message_handler_orf_token:3810 install seq 0 aru 0 high seq received 0
Nov 13 13:44:18 [15863] vm1 corosync debug   [TOTEM ] totemsrp.c:message_handler_orf_token:3799 token retrans flag is 0 my set retrans flag0 retrans queue empty 1 count 1, aru 0
Nov 13 13:44:18 [15863] vm1 corosync debug   [TOTEM ] totemsrp.c:message_handler_orf_token:3810 install seq 0 aru 0 high seq received 0
Nov 13 13:44:18 [15863] vm1 corosync debug   [TOTEM ] totemsrp.c:message_handler_orf_token:3799 token retrans flag is 0 my set retrans flag0 retrans queue empty 1 count 2, aru 0
Nov 13 13:44:18 [15863] vm1 corosync debug   [TOTEM ] totemsrp.c:message_handler_orf_token:3810 install seq 0 aru 0 high seq received 0
Nov 13 13:44:18 [15863] vm1 corosync debug   [TOTEM ] totemsrp.c:message_handler_orf_token:3799 token retrans flag is 0 my set retrans flag0 retrans queue empty 1 count 3, aru 0
Nov 13 13:44:18 [15863] vm1 corosync debug   [TOTEM ] totemsrp.c:message_handler_orf_token:3810 install seq 0 aru 0 high seq received 0
Nov 13 13:44:18 [15863] vm1 corosync debug   [TOTEM ] totemsrp.c:message_handler_orf_token:3829 retrans flag count 4 token aru 0 install seq 0 aru 0 0
Nov 13 13:44:18 [15863] vm1 corosync debug   [TOTEM ] totemsrp.c:old_ring_state_reset:1566 Resetting old ring state
Nov 13 13:44:18 [15863] vm1 corosync debug   [TOTEM ] totemsrp.c:deliver_messages_from_recovery_to_regular:1772 recovery to regular 1-0
Nov 13 13:44:18 [15863] vm1 corosync debug   [TOTEM ] totempg.c:totempg_waiting_trans_ack_cb:285 waiting_trans_ack changed to 1
Nov 13 13:44:18 [15863] vm1 corosync debug   [MAIN  ] main.c:member_object_joined:333 Member joined: r(0) ip(192.168.101.142) r(1) ip(192.168.102.142) 
Nov 13 13:44:18 [15863] vm1 corosync debug   [TOTEM ] totemsrp.c:memb_state_operational_enter:2010 entering OPERATIONAL state.
Nov 13 13:44:18 [15863] vm1 corosync notice  [TOTEM ] totemsrp.c:memb_state_operational_enter:2016 A new membership (192.168.101.141:8) was formed. Members joined: -1062705778
Nov 13 13:44:18 [15863] vm1 corosync debug   [SYNC  ] sync.c:sync_barrier_handler:232 Committing synchronization for corosync configuration map access
Nov 13 13:44:18 [15863] vm1 corosync debug   [CPG   ] cpg.c:downlist_log:776 comparing: sender r(0) ip(192.168.101.141) r(1) ip(192.168.102.141) ; members(old:1 left:0)
Nov 13 13:44:18 [15863] vm1 corosync debug   [CPG   ] cpg.c:downlist_log:776 comparing: sender r(0) ip(192.168.101.142) r(1) ip(192.168.102.142) ; members(old:1 left:0)
Nov 13 13:44:18 [15863] vm1 corosync debug   [CPG   ] cpg.c:downlist_log:776 chosen downlist: sender r(0) ip(192.168.101.141) r(1) ip(192.168.102.141) ; members(old:1 left:0)
Nov 13 13:44:18 [15863] vm1 corosync debug   [SYNC  ] sync.c:sync_barrier_handler:232 Committing synchronization for corosync cluster closed process group service v1.01
Nov 13 13:44:18 [15863] vm1 corosync debug   [VOTEQ ] votequorum.c:decode_flags:587 flags: quorate: No Leaving: No WFA Status: No First: No Qdevice: No QdeviceAlive: No QdeviceCastVote: No QdeviceMasterWins: No
Nov 13 13:44:18 [15863] vm1 corosync debug   [VOTEQ ] votequorum.c:message_handler_req_exec_votequorum_nodeinfo:1604 got nodeinfo message from cluster node 3232261517
Nov 13 13:44:18 [15863] vm1 corosync debug   [VOTEQ ] votequorum.c:message_handler_req_exec_votequorum_nodeinfo:1609 nodeinfo message[3232261517]: votes: 1, expected: 3 flags: 0
Nov 13 13:44:18 [15863] vm1 corosync debug   [VOTEQ ] votequorum.c:decode_flags:587 flags: quorate: No Leaving: No WFA Status: No First: No Qdevice: No QdeviceAlive: No QdeviceCastVote: No QdeviceMasterWins: No
Nov 13 13:44:18 [15863] vm1 corosync debug   [VOTEQ ] votequorum.c:recalculate_quorum:851 total_votes=1, expected_votes=3
Nov 13 13:44:18 [15863] vm1 corosync debug   [VOTEQ ] votequorum.c:calculate_quorum:670 node 3232261517 state=1, votes=1, expected=3
Nov 13 13:44:18 [15863] vm1 corosync debug   [VOTEQ ] votequorum.c:message_handler_req_exec_votequorum_nodeinfo:1604 got nodeinfo message from cluster node 3232261517
Nov 13 13:44:18 [15863] vm1 corosync debug   [VOTEQ ] votequorum.c:message_handler_req_exec_votequorum_nodeinfo:1609 nodeinfo message[0]: votes: 0, expected: 0 flags: 0
Nov 13 13:44:18 [15863] vm1 corosync debug   [VOTEQ ] votequorum.c:message_handler_req_exec_votequorum_nodeinfo:1604 got nodeinfo message from cluster node 3232261518
Nov 13 13:44:18 [15863] vm1 corosync debug   [VOTEQ ] votequorum.c:message_handler_req_exec_votequorum_nodeinfo:1609 nodeinfo message[3232261518]: votes: 1, expected: 3 flags: 0
Nov 13 13:44:18 [15863] vm1 corosync debug   [VOTEQ ] votequorum.c:decode_flags:587 flags: quorate: No Leaving: No WFA Status: No First: No Qdevice: No QdeviceAlive: No QdeviceCastVote: No QdeviceMasterWins: No
Nov 13 13:44:18 [15863] vm1 corosync debug   [VOTEQ ] votequorum.c:recalculate_quorum:851 total_votes=2, expected_votes=3
Nov 13 13:44:18 [15863] vm1 corosync debug   [VOTEQ ] votequorum.c:calculate_quorum:670 node 3232261517 state=1, votes=1, expected=3
Nov 13 13:44:18 [15863] vm1 corosync debug   [VOTEQ ] votequorum.c:calculate_quorum:670 node 3232261518 state=1, votes=1, expected=3
Nov 13 13:44:18 [15863] vm1 corosync debug   [VOTEQ ] votequorum.c:get_lowest_node_id:527 lowest node id: -1062705779 us: -1062705779
Nov 13 13:44:18 [15863] vm1 corosync debug   [VOTEQ ] votequorum.c:are_we_quorate:777 quorum regained, resuming activity
Nov 13 13:44:18 [15863] vm1 corosync debug   [VOTEQ ] votequorum.c:message_handler_req_exec_votequorum_nodeinfo:1604 got nodeinfo message from cluster node 3232261518
Nov 13 13:44:18 [15863] vm1 corosync debug   [VOTEQ ] votequorum.c:message_handler_req_exec_votequorum_nodeinfo:1609 nodeinfo message[0]: votes: 0, expected: 0 flags: 0
Nov 13 13:44:18 [15863] vm1 corosync debug   [SYNC  ] sync.c:sync_barrier_handler:232 Committing synchronization for corosync vote quorum service v1.0
Nov 13 13:44:18 [15863] vm1 corosync debug   [VOTEQ ] votequorum.c:recalculate_quorum:851 total_votes=2, expected_votes=3
Nov 13 13:44:18 [15863] vm1 corosync debug   [VOTEQ ] votequorum.c:calculate_quorum:670 node 3232261517 state=1, votes=1, expected=3
Nov 13 13:44:18 [15863] vm1 corosync debug   [VOTEQ ] votequorum.c:calculate_quorum:670 node 3232261518 state=1, votes=1, expected=3
Nov 13 13:44:18 [15863] vm1 corosync debug   [VOTEQ ] votequorum.c:get_lowest_node_id:527 lowest node id: -1062705779 us: -1062705779
Nov 13 13:44:18 [15863] vm1 corosync notice  [QUORUM] vsf_quorum.c:quorum_api_set_quorum:148 This node is within the primary component and will provide service.
Nov 13 13:44:18 [15863] vm1 corosync notice  [QUORUM] vsf_quorum.c:log_view_list:132 Members[2]: -1062705779 -1062705778
Nov 13 13:44:18 [15863] vm1 corosync debug   [QUORUM] vsf_quorum.c:send_library_notification:359 sending quorum notification to (nil), length = 56
Nov 13 13:44:18 [15863] vm1 corosync notice  [MAIN  ] main.c:corosync_sync_completed:276 Completed service synchronization, ready to provide service.
Nov 13 13:44:18 [15863] vm1 corosync debug   [TOTEM ] totempg.c:totempg_waiting_trans_ack_cb:285 waiting_trans_ack changed to 0
Nov 13 13:44:18 [15863] vm1 corosync debug   [TOTEM ] totemsrp.c:memb_state_gather_enter:2087 entering GATHER state from 11(merge during join).
Nov 13 13:44:18 [15863] vm1 corosync debug   [TOTEM ] totemsrp.c:memb_state_commit_token_create:3138 Creating commit token because I am the rep.
Nov 13 13:44:18 [15863] vm1 corosync debug   [TOTEM ] totemsrp.c:old_ring_state_save:1550 Saving state aru a high seq received a
Nov 13 13:44:18 [15863] vm1 corosync debug   [TOTEM ] totemsrp.c:memb_ring_id_set_and_store:3383 Storing new sequence id for ring c
Nov 13 13:44:18 [15863] vm1 corosync debug   [TOTEM ] totemsrp.c:memb_state_commit_enter:2135 entering COMMIT state.
Nov 13 13:44:18 [15863] vm1 corosync debug   [TOTEM ] totemsrp.c:message_handler_memb_commit_token:4484 got commit token
Nov 13 13:44:18 [15863] vm1 corosync debug   [TOTEM ] totemsrp.c:message_handler_memb_commit_token:4484 got commit token
Nov 13 13:44:18 [15863] vm1 corosync debug   [TOTEM ] totemsrp.c:memb_state_recovery_enter:2172 entering RECOVERY state.
Nov 13 13:44:18 [15863] vm1 corosync debug   [TOTEM ] totemsrp.c:memb_state_recovery_enter:2214 TRANS [0] member 192.168.101.141:
Nov 13 13:44:18 [15863] vm1 corosync debug   [TOTEM ] totemsrp.c:memb_state_recovery_enter:2214 TRANS [1] member 192.168.101.142:
Nov 13 13:44:18 [15863] vm1 corosync debug   [TOTEM ] totemsrp.c:memb_state_recovery_enter:2218 position [0] member 192.168.101.141:
Nov 13 13:44:18 [15863] vm1 corosync debug   [TOTEM ] totemsrp.c:memb_state_recovery_enter:2222 previous ring seq 8 rep 192.168.101.141
Nov 13 13:44:18 [15863] vm1 corosync debug   [TOTEM ] totemsrp.c:memb_state_recovery_enter:2228 aru a high delivered a received flag 1
Nov 13 13:44:18 [15863] vm1 corosync debug   [TOTEM ] totemsrp.c:memb_state_recovery_enter:2218 position [1] member 192.168.101.142:
Nov 13 13:44:18 [15863] vm1 corosync debug   [TOTEM ] totemsrp.c:memb_state_recovery_enter:2222 previous ring seq 8 rep 192.168.101.141
Nov 13 13:44:18 [15863] vm1 corosync debug   [TOTEM ] totemsrp.c:memb_state_recovery_enter:2228 aru a high delivered a received flag 1
Nov 13 13:44:18 [15863] vm1 corosync debug   [TOTEM ] totemsrp.c:memb_state_recovery_enter:2218 position [2] member 192.168.101.143:
Nov 13 13:44:18 [15863] vm1 corosync debug   [TOTEM ] totemsrp.c:memb_state_recovery_enter:2222 previous ring seq 4 rep 192.168.101.143
Nov 13 13:44:18 [15863] vm1 corosync debug   [TOTEM ] totemsrp.c:memb_state_recovery_enter:2228 aru 6 high delivered 6 received flag 1
Nov 13 13:44:18 [15863] vm1 corosync debug   [TOTEM ] totemsrp.c:memb_state_recovery_enter:2326 Did not need to originate any messages in recovery.
Nov 13 13:44:18 [15863] vm1 corosync debug   [TOTEM ] totemsrp.c:message_handler_memb_commit_token:4484 got commit token
Nov 13 13:44:18 [15863] vm1 corosync debug   [TOTEM ] totemsrp.c:message_handler_memb_commit_token:4537 Sending initial ORF token
Nov 13 13:44:18 [15863] vm1 corosync debug   [TOTEM ] totemsrp.c:message_handler_memb_commit_token:4484 got commit token
Nov 13 13:44:18 [15863] vm1 corosync debug   [TOTEM ] totemsrp.c:message_handler_memb_commit_token:4537 Sending initial ORF token
Nov 13 13:44:18 [15863] vm1 corosync debug   [TOTEM ] totemsrp.c:message_handler_memb_commit_token:4484 got commit token
Nov 13 13:44:18 [15863] vm1 corosync debug   [TOTEM ] totemsrp.c:message_handler_memb_commit_token:4537 Sending initial ORF token
Nov 13 13:44:18 [15863] vm1 corosync debug   [TOTEM ] totemsrp.c:message_handler_memb_commit_token:4484 got commit token
Nov 13 13:44:18 [15863] vm1 corosync debug   [TOTEM ] totemsrp.c:message_handler_memb_commit_token:4537 Sending initial ORF token
Nov 13 13:44:18 [15863] vm1 corosync debug   [TOTEM ] totemsrp.c:message_handler_orf_token:3799 token retrans flag is 0 my set retrans flag0 retrans queue empty 1 count 0, aru 0
Nov 13 13:44:18 [15863] vm1 corosync debug   [TOTEM ] totemsrp.c:message_handler_orf_token:3810 install seq 0 aru 0 high seq received 0
Nov 13 13:44:18 [15863] vm1 corosync debug   [TOTEM ] totemsrp.c:message_handler_orf_token:3799 token retrans flag is 0 my set retrans flag0 retrans queue empty 1 count 1, aru 0
Nov 13 13:44:18 [15863] vm1 corosync debug   [TOTEM ] totemsrp.c:message_handler_orf_token:3810 install seq 0 aru 0 high seq received 0
Nov 13 13:44:18 [15863] vm1 corosync debug   [TOTEM ] totemsrp.c:message_handler_orf_token:3799 token retrans flag is 0 my set retrans flag0 retrans queue empty 1 count 2, aru 0
Nov 13 13:44:18 [15863] vm1 corosync debug   [TOTEM ] totemsrp.c:message_handler_orf_token:3810 install seq 0 aru 0 high seq received 0
Nov 13 13:44:18 [15863] vm1 corosync debug   [TOTEM ] totemsrp.c:message_handler_orf_token:3799 token retrans flag is 0 my set retrans flag0 retrans queue empty 1 count 3, aru 0
Nov 13 13:44:18 [15863] vm1 corosync debug   [TOTEM ] totemsrp.c:message_handler_orf_token:3810 install seq 0 aru 0 high seq received 0
Nov 13 13:44:18 [15863] vm1 corosync debug   [TOTEM ] totemsrp.c:message_handler_orf_token:3829 retrans flag count 4 token aru 0 install seq 0 aru 0 0
Nov 13 13:44:18 [15863] vm1 corosync debug   [TOTEM ] totemsrp.c:old_ring_state_reset:1566 Resetting old ring state
Nov 13 13:44:18 [15863] vm1 corosync debug   [TOTEM ] totemsrp.c:deliver_messages_from_recovery_to_regular:1772 recovery to regular 1-0
Nov 13 13:44:18 [15863] vm1 corosync debug   [TOTEM ] totempg.c:totempg_waiting_trans_ack_cb:285 waiting_trans_ack changed to 1
Nov 13 13:44:18 [15863] vm1 corosync debug   [MAIN  ] main.c:member_object_joined:333 Member joined: r(0) ip(192.168.101.143) r(1) ip(192.168.102.143) 
Nov 13 13:44:18 [15863] vm1 corosync debug   [TOTEM ] totemsrp.c:memb_state_operational_enter:2010 entering OPERATIONAL state.
Nov 13 13:44:18 [15863] vm1 corosync notice  [TOTEM ] totemsrp.c:memb_state_operational_enter:2016 A new membership (192.168.101.141:12) was formed. Members joined: -1062705777
Nov 13 13:44:18 [15863] vm1 corosync debug   [SYNC  ] sync.c:sync_barrier_handler:232 Committing synchronization for corosync configuration map access
Nov 13 13:44:18 [15863] vm1 corosync debug   [CMAP  ] cmap.c:cmap_sync_activate:394 Not first sync -> no action
Nov 13 13:44:18 [15863] vm1 corosync debug   [CPG   ] cpg.c:downlist_log:776 comparing: sender r(0) ip(192.168.101.142) r(1) ip(192.168.102.142) ; members(old:2 left:0)
Nov 13 13:44:18 [15863] vm1 corosync debug   [CPG   ] cpg.c:downlist_log:776 comparing: sender r(0) ip(192.168.101.141) r(1) ip(192.168.102.141) ; members(old:2 left:0)
Nov 13 13:44:18 [15863] vm1 corosync debug   [CPG   ] cpg.c:downlist_log:776 comparing: sender r(0) ip(192.168.101.143) r(1) ip(192.168.102.143) ; members(old:1 left:0)
Nov 13 13:44:18 [15863] vm1 corosync debug   [CPG   ] cpg.c:downlist_log:776 chosen downlist: sender r(0) ip(192.168.101.141) r(1) ip(192.168.102.141) ; members(old:2 left:0)
Nov 13 13:44:18 [15863] vm1 corosync debug   [SYNC  ] sync.c:sync_barrier_handler:232 Committing synchronization for corosync cluster closed process group service v1.01
Nov 13 13:44:18 [15863] vm1 corosync debug   [VOTEQ ] votequorum.c:decode_flags:587 flags: quorate: Yes Leaving: No WFA Status: No First: No Qdevice: No QdeviceAlive: No QdeviceCastVote: No QdeviceMasterWins: No
Nov 13 13:44:18 [15863] vm1 corosync debug   [VOTEQ ] votequorum.c:message_handler_req_exec_votequorum_nodeinfo:1604 got nodeinfo message from cluster node 3232261518
Nov 13 13:44:18 [15863] vm1 corosync debug   [VOTEQ ] votequorum.c:message_handler_req_exec_votequorum_nodeinfo:1609 nodeinfo message[3232261518]: votes: 1, expected: 3 flags: 1
Nov 13 13:44:18 [15863] vm1 corosync debug   [VOTEQ ] votequorum.c:decode_flags:587 flags: quorate: Yes Leaving: No WFA Status: No First: No Qdevice: No QdeviceAlive: No QdeviceCastVote: No QdeviceMasterWins: No
Nov 13 13:44:18 [15863] vm1 corosync debug   [VOTEQ ] votequorum.c:recalculate_quorum:851 total_votes=2, expected_votes=3
Nov 13 13:44:18 [15863] vm1 corosync debug   [VOTEQ ] votequorum.c:calculate_quorum:670 node 3232261517 state=1, votes=1, expected=3
Nov 13 13:44:18 [15863] vm1 corosync debug   [VOTEQ ] votequorum.c:calculate_quorum:670 node 3232261518 state=1, votes=1, expected=3
Nov 13 13:44:18 [15863] vm1 corosync debug   [VOTEQ ] votequorum.c:get_lowest_node_id:527 lowest node id: -1062705779 us: -1062705779
Nov 13 13:44:18 [15863] vm1 corosync debug   [VOTEQ ] votequorum.c:message_handler_req_exec_votequorum_nodeinfo:1604 got nodeinfo message from cluster node 3232261518
Nov 13 13:44:18 [15863] vm1 corosync debug   [VOTEQ ] votequorum.c:message_handler_req_exec_votequorum_nodeinfo:1609 nodeinfo message[0]: votes: 0, expected: 0 flags: 0
Nov 13 13:44:18 [15863] vm1 corosync debug   [VOTEQ ] votequorum.c:message_handler_req_exec_votequorum_nodeinfo:1604 got nodeinfo message from cluster node 3232261519
Nov 13 13:44:18 [15863] vm1 corosync debug   [VOTEQ ] votequorum.c:message_handler_req_exec_votequorum_nodeinfo:1609 nodeinfo message[3232261519]: votes: 1, expected: 3 flags: 0
Nov 13 13:44:18 [15863] vm1 corosync debug   [VOTEQ ] votequorum.c:decode_flags:587 flags: quorate: No Leaving: No WFA Status: No First: No Qdevice: No QdeviceAlive: No QdeviceCastVote: No QdeviceMasterWins: No
Nov 13 13:44:18 [15863] vm1 corosync debug   [VOTEQ ] votequorum.c:recalculate_quorum:851 total_votes=3, expected_votes=3
Nov 13 13:44:18 [15863] vm1 corosync debug   [VOTEQ ] votequorum.c:calculate_quorum:670 node 3232261517 state=1, votes=1, expected=3
Nov 13 13:44:18 [15863] vm1 corosync debug   [VOTEQ ] votequorum.c:calculate_quorum:670 node 3232261518 state=1, votes=1, expected=3
Nov 13 13:44:18 [15863] vm1 corosync debug   [VOTEQ ] votequorum.c:calculate_quorum:670 node 3232261519 state=1, votes=1, expected=3
Nov 13 13:44:18 [15863] vm1 corosync debug   [VOTEQ ] votequorum.c:get_lowest_node_id:527 lowest node id: -1062705779 us: -1062705779
Nov 13 13:44:18 [15863] vm1 corosync debug   [VOTEQ ] votequorum.c:message_handler_req_exec_votequorum_nodeinfo:1604 got nodeinfo message from cluster node 3232261519
Nov 13 13:44:18 [15863] vm1 corosync debug   [VOTEQ ] votequorum.c:message_handler_req_exec_votequorum_nodeinfo:1609 nodeinfo message[0]: votes: 0, expected: 0 flags: 0
Nov 13 13:44:18 [15863] vm1 corosync debug   [VOTEQ ] votequorum.c:message_handler_req_exec_votequorum_nodeinfo:1604 got nodeinfo message from cluster node 3232261517
Nov 13 13:44:18 [15863] vm1 corosync debug   [VOTEQ ] votequorum.c:message_handler_req_exec_votequorum_nodeinfo:1609 nodeinfo message[3232261517]: votes: 1, expected: 3 flags: 1
Nov 13 13:44:18 [15863] vm1 corosync debug   [VOTEQ ] votequorum.c:decode_flags:587 flags: quorate: Yes Leaving: No WFA Status: No First: No Qdevice: No QdeviceAlive: No QdeviceCastVote: No QdeviceMasterWins: No
Nov 13 13:44:18 [15863] vm1 corosync debug   [VOTEQ ] votequorum.c:recalculate_quorum:851 total_votes=3, expected_votes=3
Nov 13 13:44:18 [15863] vm1 corosync debug   [VOTEQ ] votequorum.c:calculate_quorum:670 node 3232261517 state=1, votes=1, expected=3
Nov 13 13:44:18 [15863] vm1 corosync debug   [VOTEQ ] votequorum.c:calculate_quorum:670 node 3232261518 state=1, votes=1, expected=3
Nov 13 13:44:18 [15863] vm1 corosync debug   [VOTEQ ] votequorum.c:calculate_quorum:670 node 3232261519 state=1, votes=1, expected=3
Nov 13 13:44:18 [15863] vm1 corosync debug   [VOTEQ ] votequorum.c:get_lowest_node_id:527 lowest node id: -1062705779 us: -1062705779
Nov 13 13:44:18 [15863] vm1 corosync debug   [VOTEQ ] votequorum.c:message_handler_req_exec_votequorum_nodeinfo:1604 got nodeinfo message from cluster node 3232261517
Nov 13 13:44:18 [15863] vm1 corosync debug   [VOTEQ ] votequorum.c:message_handler_req_exec_votequorum_nodeinfo:1609 nodeinfo message[0]: votes: 0, expected: 0 flags: 0
Nov 13 13:44:18 [15863] vm1 corosync debug   [SYNC  ] sync.c:sync_barrier_handler:232 Committing synchronization for corosync vote quorum service v1.0
Nov 13 13:44:18 [15863] vm1 corosync debug   [VOTEQ ] votequorum.c:recalculate_quorum:851 total_votes=3, expected_votes=3
Nov 13 13:44:18 [15863] vm1 corosync debug   [VOTEQ ] votequorum.c:calculate_quorum:670 node 3232261517 state=1, votes=1, expected=3
Nov 13 13:44:18 [15863] vm1 corosync debug   [VOTEQ ] votequorum.c:calculate_quorum:670 node 3232261518 state=1, votes=1, expected=3
Nov 13 13:44:18 [15863] vm1 corosync debug   [VOTEQ ] votequorum.c:calculate_quorum:670 node 3232261519 state=1, votes=1, expected=3
Nov 13 13:44:18 [15863] vm1 corosync debug   [VOTEQ ] votequorum.c:get_lowest_node_id:527 lowest node id: -1062705779 us: -1062705779
Nov 13 13:44:18 [15863] vm1 corosync notice  [QUORUM] vsf_quorum.c:log_view_list:132 Members[3]: -1062705779 -1062705778 -1062705777
Nov 13 13:44:18 [15863] vm1 corosync debug   [QUORUM] vsf_quorum.c:send_library_notification:359 sending quorum notification to (nil), length = 60
Nov 13 13:44:18 [15863] vm1 corosync notice  [MAIN  ] main.c:corosync_sync_completed:276 Completed service synchronization, ready to provide service.
Nov 13 13:44:18 [15863] vm1 corosync debug   [TOTEM ] totempg.c:totempg_waiting_trans_ack_cb:285 waiting_trans_ack changed to 0
Nov 13 13:44:20 [15874] vm1 pacemakerd: (   logging.c:761   )    info: crm_log_init: 	Changed active directory to /var/lib/heartbeat/cores/root
Nov 13 13:44:20 [15874] vm1 pacemakerd: ( pacemaker.c:896   )   debug: main: 	Checking for old instances of pacemakerd
Nov 13 13:44:20 [15874] vm1 pacemakerd: (       ipc.c:781   )    info: crm_ipc_connect: 	Could not establish pacemakerd connection: Connection refused (111)
Nov 13 13:44:20 [15863] vm1 corosync debug   [QB    ] ipc_setup.c:handle_new_connection:484 IPC credentials authenticated (15865-15874-25)
Nov 13 13:44:20 [15863] vm1 corosync debug   [QB    ] ipc_shm.c:qb_ipcs_shm_connect:295 connecting to client [15874]
Nov 13 13:44:20 [15863] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:20 [15863] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:20 [15863] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:20 [15874] vm1 pacemakerd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:20 [15874] vm1 pacemakerd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:20 [15874] vm1 pacemakerd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:20 [15874] vm1 pacemakerd: (   cluster.c:526   )   debug: get_cluster_type: 	Testing with Corosync
Nov 13 13:44:20 [15863] vm1 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_created:272 connection created
Nov 13 13:44:20 [15863] vm1 corosync debug   [CMAP  ] cmap.c:cmap_lib_init_fn:306 lib_init_fn: conn=0x7fa787d50ae0
Nov 13 13:44:20 [15863] vm1 corosync debug   [QB    ] ipc_setup.c:handle_new_connection:484 IPC credentials authenticated (15865-15874-26)
Nov 13 13:44:20 [15863] vm1 corosync debug   [QB    ] ipc_shm.c:qb_ipcs_shm_connect:295 connecting to client [15874]
Nov 13 13:44:20 [15863] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:20 [15863] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:20 [15863] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:20 [15874] vm1 pacemakerd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:20 [15874] vm1 pacemakerd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:20 [15874] vm1 pacemakerd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:20 [15874] vm1 pacemakerd: (      ipcc.c:378   )   debug: qb_ipcc_disconnect: 	qb_ipcc_disconnect()
Nov 13 13:44:20 [15874] vm1 pacemakerd: (ringbuffer.c:302   )   debug: qb_rb_close: 	Closing ringbuffer: /dev/shm/qb-cmap-request-15865-15874-26-header
Nov 13 13:44:20 [15874] vm1 pacemakerd: (ringbuffer.c:302   )   debug: qb_rb_close: 	Closing ringbuffer: /dev/shm/qb-cmap-response-15865-15874-26-header
Nov 13 13:44:20 [15874] vm1 pacemakerd: (ringbuffer.c:302   )   debug: qb_rb_close: 	Closing ringbuffer: /dev/shm/qb-cmap-event-15865-15874-26-header
Nov 13 13:44:20 [15874] vm1 pacemakerd: (   cluster.c:573   )    info: get_cluster_type: 	Detected an active 'corosync' cluster
Nov 13 13:44:20 [15874] vm1 pacemakerd: (  corosync.c:326   )    info: mcp_read_config: 	Reading configure for stack: corosync
Nov 13 13:44:20 [15863] vm1 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_created:272 connection created
Nov 13 13:44:20 [15863] vm1 corosync debug   [CMAP  ] cmap.c:cmap_lib_init_fn:306 lib_init_fn: conn=0x7fa787d52310
Nov 13 13:44:20 [15863] vm1 corosync debug   [QB    ] ipcs.c:qb_ipcs_dispatch_connection_request:757 HUP conn (15865-15874-26)
Nov 13 13:44:20 [15863] vm1 corosync debug   [QB    ] ipcs.c:qb_ipcs_disconnect:605 qb_ipcs_disconnect(15865-15874-26) state:2
Nov 13 13:44:20 [15863] vm1 corosync debug   [QB    ] loop_poll_epoll.c:_del:117 epoll_ctl(del): Bad file descriptor (9)
Nov 13 13:44:20 [15863] vm1 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_closed:417 cs_ipcs_connection_closed() 
Nov 13 13:44:20 [15863] vm1 corosync debug   [CMAP  ] cmap.c:cmap_lib_exit_fn:325 exit_fn for conn=0x7fa787d52310
Nov 13 13:44:20 [15863] vm1 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_destroyed:390 cs_ipcs_connection_destroyed() 
Nov 13 13:44:20 [15863] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_close:299 Free'ing ringbuffer: /dev/shm/qb-cmap-response-15865-15874-26-header
Nov 13 13:44:20 [15863] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_close:299 Free'ing ringbuffer: /dev/shm/qb-cmap-event-15865-15874-26-header
Nov 13 13:44:20 [15863] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_close:299 Free'ing ringbuffer: /dev/shm/qb-cmap-request-15865-15874-26-header
Nov 13 13:44:20 [15874] vm1 pacemakerd: (  corosync.c:426   )  notice: mcp_read_config: 	Configured corosync to accept connections from group 492: OK (1)
Nov 13 13:44:20 [15874] vm1 pacemakerd: (      ipcc.c:378   )   debug: qb_ipcc_disconnect: 	qb_ipcc_disconnect()
Nov 13 13:44:20 [15874] vm1 pacemakerd: (ringbuffer.c:302   )   debug: qb_rb_close: 	Closing ringbuffer: /dev/shm/qb-cmap-request-15865-15874-25-header
Nov 13 13:44:20 [15874] vm1 pacemakerd: (ringbuffer.c:302   )   debug: qb_rb_close: 	Closing ringbuffer: /dev/shm/qb-cmap-response-15865-15874-25-header
Nov 13 13:44:20 [15874] vm1 pacemakerd: (ringbuffer.c:302   )   debug: qb_rb_close: 	Closing ringbuffer: /dev/shm/qb-cmap-event-15865-15874-25-header
Nov 13 13:44:20 [15874] vm1 pacemakerd: (   logging.c:314   )  notice: crm_add_logfile: 	Additional logging available in /var/log/ha-debug
Nov 13 13:44:20 [15874] vm1 pacemakerd: ( pacemaker.c:931   )  notice: main: 	Starting Pacemaker 1.1.10 (Build: 2383f6c):  ncurses libqb-logging libqb-ipc lha-fencing nagios  corosync-native snmp
Nov 13 13:44:20 [15874] vm1 pacemakerd: ( pacemaker.c:931   )  notice: main: 	Starting Pacemaker 1.1.10 (Build: 2383f6c):  ncurses libqb-logging libqb-ipc lha-fencing nagios  corosync-native snmp
Nov 13 13:44:20 [15874] vm1 pacemakerd: ( pacemaker.c:941   )    info: main: 	Maximum core file size is: 18446744073709551615
Nov 13 13:44:20 [15874] vm1 pacemakerd: ( pacemaker.c:941   )    info: main: 	Maximum core file size is: 18446744073709551615
Nov 13 13:44:20 [15874] vm1 pacemakerd: ( ipc_setup.c:377   )    info: qb_ipcs_us_publish: 	server name: pacemakerd
Nov 13 13:44:20 [15874] vm1 pacemakerd: ( ipc_setup.c:377   )    info: qb_ipcs_us_publish: 	server name: pacemakerd
Nov 13 13:44:20 [15863] vm1 corosync debug   [QB    ] ipcs.c:qb_ipcs_dispatch_connection_request:757 HUP conn (15865-15874-25)
Nov 13 13:44:20 [15863] vm1 corosync debug   [QB    ] ipcs.c:qb_ipcs_disconnect:605 qb_ipcs_disconnect(15865-15874-25) state:2
Nov 13 13:44:20 [15863] vm1 corosync debug   [QB    ] loop_poll_epoll.c:_del:117 epoll_ctl(del): Bad file descriptor (9)
Nov 13 13:44:20 [15863] vm1 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_closed:417 cs_ipcs_connection_closed() 
Nov 13 13:44:20 [15863] vm1 corosync debug   [CMAP  ] cmap.c:cmap_lib_exit_fn:325 exit_fn for conn=0x7fa787d50ae0
Nov 13 13:44:20 [15863] vm1 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_destroyed:390 cs_ipcs_connection_destroyed() 
Nov 13 13:44:20 [15863] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_close:299 Free'ing ringbuffer: /dev/shm/qb-cmap-response-15865-15874-25-header
Nov 13 13:44:20 [15863] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_close:299 Free'ing ringbuffer: /dev/shm/qb-cmap-event-15865-15874-25-header
Nov 13 13:44:20 [15863] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_close:299 Free'ing ringbuffer: /dev/shm/qb-cmap-request-15865-15874-25-header
Nov 13 13:44:20 [15863] vm1 corosync debug   [QB    ] ipc_setup.c:handle_new_connection:484 IPC credentials authenticated (15865-15874-25)
Nov 13 13:44:20 [15863] vm1 corosync debug   [QB    ] ipc_shm.c:qb_ipcs_shm_connect:295 connecting to client [15874]
Nov 13 13:44:20 [15863] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:20 [15863] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:20 [15863] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:20 [15874] vm1 pacemakerd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:20 [15874] vm1 pacemakerd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:20 [15874] vm1 pacemakerd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:20 [15874] vm1 pacemakerd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:20 [15874] vm1 pacemakerd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:20 [15874] vm1 pacemakerd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:20 [15863] vm1 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_created:272 connection created
Nov 13 13:44:20 [15874] vm1 pacemakerd: (  corosync.c:142   )   debug: cluster_connect_cfg: 	Our nodeid: -1062705779
Nov 13 13:44:20 [15874] vm1 pacemakerd: (  corosync.c:142   )   debug: cluster_connect_cfg: 	Our nodeid: -1062705779
Nov 13 13:44:20 [15863] vm1 corosync debug   [QB    ] ipc_setup.c:handle_new_connection:484 IPC credentials authenticated (15865-15874-26)
Nov 13 13:44:20 [15863] vm1 corosync debug   [QB    ] ipc_shm.c:qb_ipcs_shm_connect:295 connecting to client [15874]
Nov 13 13:44:20 [15863] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:20 [15863] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:20 [15863] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:20 [15874] vm1 pacemakerd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:20 [15874] vm1 pacemakerd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:20 [15874] vm1 pacemakerd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:20 [15874] vm1 pacemakerd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:20 [15874] vm1 pacemakerd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:20 [15874] vm1 pacemakerd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:20 [15863] vm1 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_created:272 connection created
Nov 13 13:44:20 [15863] vm1 corosync debug   [CPG   ] cpg.c:cpg_lib_init_fn:1459 lib_init_fn: conn=0x7fa787d517a0, cpd=0x7fa787d51ef4
Nov 13 13:44:20 [15874] vm1 pacemakerd: (       cpg.c:110   )   debug: get_local_nodeid: 	Local nodeid is 3232261517
Nov 13 13:44:20 [15874] vm1 pacemakerd: (       cpg.c:110   )   debug: get_local_nodeid: 	Local nodeid is 3232261517
Nov 13 13:44:20 [15874] vm1 pacemakerd: (membership.c:399   )    info: crm_get_peer: 	Created entry 47e23e4e-8d89-44d7-ade8-b727d086f719/0xff9030 for node (null)/3232261517 (1 total)
Nov 13 13:44:20 [15874] vm1 pacemakerd: (membership.c:399   )    info: crm_get_peer: 	Created entry 47e23e4e-8d89-44d7-ade8-b727d086f719/0xff9030 for node (null)/3232261517 (1 total)
Nov 13 13:44:20 [15874] vm1 pacemakerd: (membership.c:432   )    info: crm_get_peer: 	Node 3232261517 has uuid 3232261517
Nov 13 13:44:20 [15874] vm1 pacemakerd: (membership.c:432   )    info: crm_get_peer: 	Node 3232261517 has uuid 3232261517
Nov 13 13:44:20 [15874] vm1 pacemakerd: (membership.c:550   )    info: crm_update_peer_proc: 	cluster_connect_cpg: Node (null)[3232261517] - corosync-cpg is now online
Nov 13 13:44:20 [15874] vm1 pacemakerd: (membership.c:550   )    info: crm_update_peer_proc: 	cluster_connect_cpg: Node (null)[3232261517] - corosync-cpg is now online
Nov 13 13:44:20 [15874] vm1 pacemakerd: (  corosync.c:255   )   debug: cluster_connect_quorum: 	Configuring Pacemaker to obtain quorum from Corosync
Nov 13 13:44:20 [15874] vm1 pacemakerd: (  corosync.c:255   )   debug: cluster_connect_quorum: 	Configuring Pacemaker to obtain quorum from Corosync
Nov 13 13:44:20 [15863] vm1 corosync debug   [QB    ] ipc_setup.c:handle_new_connection:484 IPC credentials authenticated (15865-15874-27)
Nov 13 13:44:20 [15863] vm1 corosync debug   [QB    ] ipc_shm.c:qb_ipcs_shm_connect:295 connecting to client [15874]
Nov 13 13:44:20 [15863] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:20 [15863] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:20 [15863] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:20 [15874] vm1 pacemakerd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:20 [15874] vm1 pacemakerd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:20 [15874] vm1 pacemakerd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:20 [15874] vm1 pacemakerd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:20 [15874] vm1 pacemakerd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:20 [15874] vm1 pacemakerd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:20 [15863] vm1 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_created:272 connection created
Nov 13 13:44:20 [15863] vm1 corosync debug   [QUORUM] vsf_quorum.c:quorum_lib_init_fn:316 lib_init_fn: conn=0x7fa78815b060
Nov 13 13:44:20 [15863] vm1 corosync debug   [QUORUM] vsf_quorum.c:message_handler_req_lib_quorum_gettype:471 got quorum_type request on 0x7fa78815b060
Nov 13 13:44:20 [15863] vm1 corosync debug   [CPG   ] cpg.c:message_handler_req_exec_cpg_procjoin:1260 got procjoin message from cluster node -1062705779 (r(0) ip(192.168.101.141) r(1) ip(192.168.102.141) ) for pid 15874
Nov 13 13:44:20 [15863] vm1 corosync debug   [QUORUM] vsf_quorum.c:message_handler_req_lib_quorum_getquorate:395 got quorate request on 0x7fa78815b060
Nov 13 13:44:20 [15874] vm1 pacemakerd: (  corosync.c:273   )  notice: cluster_connect_quorum: 	Quorum acquired
Nov 13 13:44:20 [15874] vm1 pacemakerd: (  corosync.c:273   )  notice: cluster_connect_quorum: 	Quorum acquired
Nov 13 13:44:20 [15863] vm1 corosync debug   [QUORUM] vsf_quorum.c:message_handler_req_lib_quorum_trackstart:412 got trackstart request on 0x7fa78815b060
Nov 13 13:44:20 [15863] vm1 corosync debug   [QUORUM] vsf_quorum.c:message_handler_req_lib_quorum_trackstart:420 sending initial status to 0x7fa78815b060
Nov 13 13:44:20 [15863] vm1 corosync debug   [QUORUM] vsf_quorum.c:send_library_notification:359 sending quorum notification to 0x7fa78815b060, length = 60
Nov 13 13:44:20 [15863] vm1 corosync debug   [QB    ] ipc_setup.c:handle_new_connection:484 IPC credentials authenticated (15865-15874-28)
Nov 13 13:44:20 [15863] vm1 corosync debug   [QB    ] ipc_shm.c:qb_ipcs_shm_connect:295 connecting to client [15874]
Nov 13 13:44:20 [15863] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:20 [15863] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:20 [15863] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:20 [15874] vm1 pacemakerd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:20 [15874] vm1 pacemakerd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:20 [15874] vm1 pacemakerd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:20 [15874] vm1 pacemakerd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:20 [15874] vm1 pacemakerd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:20 [15874] vm1 pacemakerd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:20 [15863] vm1 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_created:272 connection created
Nov 13 13:44:20 [15863] vm1 corosync debug   [CMAP  ] cmap.c:cmap_lib_init_fn:306 lib_init_fn: conn=0x7fa78825f500
Nov 13 13:44:20 [15874] vm1 pacemakerd: (      ipcc.c:378   )   debug: qb_ipcc_disconnect: 	qb_ipcc_disconnect()
Nov 13 13:44:20 [15874] vm1 pacemakerd: (      ipcc.c:378   )   debug: qb_ipcc_disconnect: 	qb_ipcc_disconnect()
Nov 13 13:44:20 [15874] vm1 pacemakerd: (ringbuffer.c:302   )   debug: qb_rb_close: 	Closing ringbuffer: /dev/shm/qb-cmap-request-15865-15874-28-header
Nov 13 13:44:20 [15874] vm1 pacemakerd: (ringbuffer.c:302   )   debug: qb_rb_close: 	Closing ringbuffer: /dev/shm/qb-cmap-request-15865-15874-28-header
Nov 13 13:44:20 [15874] vm1 pacemakerd: (ringbuffer.c:302   )   debug: qb_rb_close: 	Closing ringbuffer: /dev/shm/qb-cmap-response-15865-15874-28-header
Nov 13 13:44:20 [15874] vm1 pacemakerd: (ringbuffer.c:302   )   debug: qb_rb_close: 	Closing ringbuffer: /dev/shm/qb-cmap-response-15865-15874-28-header
Nov 13 13:44:20 [15874] vm1 pacemakerd: (ringbuffer.c:302   )   debug: qb_rb_close: 	Closing ringbuffer: /dev/shm/qb-cmap-event-15865-15874-28-header
Nov 13 13:44:20 [15874] vm1 pacemakerd: (ringbuffer.c:302   )   debug: qb_rb_close: 	Closing ringbuffer: /dev/shm/qb-cmap-event-15865-15874-28-header
Nov 13 13:44:20 [15863] vm1 corosync debug   [QB    ] ipcs.c:qb_ipcs_dispatch_connection_request:757 HUP conn (15865-15874-28)
Nov 13 13:44:20 [15863] vm1 corosync debug   [QB    ] ipcs.c:qb_ipcs_disconnect:605 qb_ipcs_disconnect(15865-15874-28) state:2
Nov 13 13:44:20 [15863] vm1 corosync debug   [QB    ] loop_poll_epoll.c:_del:117 epoll_ctl(del): Bad file descriptor (9)
Nov 13 13:44:20 [15863] vm1 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_closed:417 cs_ipcs_connection_closed() 
Nov 13 13:44:20 [15863] vm1 corosync debug   [CMAP  ] cmap.c:cmap_lib_exit_fn:325 exit_fn for conn=0x7fa78825f500
Nov 13 13:44:20 [15863] vm1 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_destroyed:390 cs_ipcs_connection_destroyed() 
Nov 13 13:44:20 [15863] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_close:299 Free'ing ringbuffer: /dev/shm/qb-cmap-response-15865-15874-28-header
Nov 13 13:44:20 [15863] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_close:299 Free'ing ringbuffer: /dev/shm/qb-cmap-event-15865-15874-28-header
Nov 13 13:44:20 [15863] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_close:299 Free'ing ringbuffer: /dev/shm/qb-cmap-request-15865-15874-28-header
Nov 13 13:44:20 [15863] vm1 corosync debug   [QB    ] ipc_setup.c:handle_new_connection:484 IPC credentials authenticated (15865-15874-28)
Nov 13 13:44:20 [15863] vm1 corosync debug   [QB    ] ipc_shm.c:qb_ipcs_shm_connect:295 connecting to client [15874]
Nov 13 13:44:20 [15863] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:20 [15863] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:20 [15863] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:20 [15874] vm1 pacemakerd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:20 [15874] vm1 pacemakerd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:20 [15874] vm1 pacemakerd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:20 [15863] vm1 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_created:272 connection created
Nov 13 13:44:20 [15863] vm1 corosync debug   [CMAP  ] cmap.c:cmap_lib_init_fn:306 lib_init_fn: conn=0x7fa788360010
Nov 13 13:44:20 [15874] vm1 pacemakerd: (      ipcc.c:378   )   debug: qb_ipcc_disconnect: 	qb_ipcc_disconnect()
Nov 13 13:44:20 [15874] vm1 pacemakerd: (ringbuffer.c:302   )   debug: qb_rb_close: 	Closing ringbuffer: /dev/shm/qb-cmap-request-15865-15874-28-header
Nov 13 13:44:20 [15874] vm1 pacemakerd: (ringbuffer.c:302   )   debug: qb_rb_close: 	Closing ringbuffer: /dev/shm/qb-cmap-response-15865-15874-28-header
Nov 13 13:44:20 [15874] vm1 pacemakerd: (ringbuffer.c:302   )   debug: qb_rb_close: 	Closing ringbuffer: /dev/shm/qb-cmap-event-15865-15874-28-header
Nov 13 13:44:20 [15874] vm1 pacemakerd: (  corosync.c:134   )  notice: corosync_node_name: 	Unable to get node name for nodeid 3232261517
Nov 13 13:44:20 [15874] vm1 pacemakerd: (   cluster.c:338   )  notice: get_node_name: 	Defaulting to uname -n for the local corosync node name
Nov 13 13:44:20 [15874] vm1 pacemakerd: (membership.c:404   )    info: crm_get_peer: 	Node 3232261517 is now known as vm1
Nov 13 13:44:20 [15863] vm1 corosync debug   [QB    ] ipcs.c:qb_ipcs_dispatch_connection_request:757 HUP conn (15865-15874-28)
Nov 13 13:44:20 [15863] vm1 corosync debug   [QB    ] ipcs.c:qb_ipcs_disconnect:605 qb_ipcs_disconnect(15865-15874-28) state:2
Nov 13 13:44:20 [15863] vm1 corosync debug   [QB    ] loop_poll_epoll.c:_del:117 epoll_ctl(del): Bad file descriptor (9)
Nov 13 13:44:20 [15863] vm1 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_closed:417 cs_ipcs_connection_closed() 
Nov 13 13:44:20 [15863] vm1 corosync debug   [CMAP  ] cmap.c:cmap_lib_exit_fn:325 exit_fn for conn=0x7fa788360010
Nov 13 13:44:20 [15863] vm1 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_destroyed:390 cs_ipcs_connection_destroyed() 
Nov 13 13:44:20 [15863] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_close:299 Free'ing ringbuffer: /dev/shm/qb-cmap-response-15865-15874-28-header
Nov 13 13:44:20 [15863] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_close:299 Free'ing ringbuffer: /dev/shm/qb-cmap-event-15865-15874-28-header
Nov 13 13:44:20 [15863] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_close:299 Free'ing ringbuffer: /dev/shm/qb-cmap-request-15865-15874-28-header
Nov 13 13:44:20 [15874] vm1 pacemakerd: ( pacemaker.c:270   )    info: start_child: 	Using uid=496 and group=492 for process cib
Nov 13 13:44:20 [15874] vm1 pacemakerd: ( pacemaker.c:281   )    info: start_child: 	Forked child 15878 for process cib
Nov 13 13:44:20 [15874] vm1 pacemakerd: ( pacemaker.c:586   )   debug: update_node_processes: 	Node vm1 now has process list: 00000000000000000000000000000100 (was 00000000000000000000000004000000)
Nov 13 13:44:20 [15874] vm1 pacemakerd: ( pacemaker.c:281   )    info: start_child: 	Forked child 15879 for process stonith-ng
Nov 13 13:44:20 [15874] vm1 pacemakerd: ( pacemaker.c:586   )   debug: update_node_processes: 	Node vm1 now has process list: 00000000000000000000000000100100 (was 00000000000000000000000000000100)
Nov 13 13:44:20 [15874] vm1 pacemakerd: ( pacemaker.c:281   )    info: start_child: 	Forked child 15880 for process lrmd
Nov 13 13:44:20 [15874] vm1 pacemakerd: ( pacemaker.c:586   )   debug: update_node_processes: 	Node vm1 now has process list: 00000000000000000000000000100110 (was 00000000000000000000000000100100)
Nov 13 13:44:20 [15874] vm1 pacemakerd: ( pacemaker.c:270   )    info: start_child: 	Using uid=496 and group=492 for process attrd
Nov 13 13:44:20 [15874] vm1 pacemakerd: ( pacemaker.c:281   )    info: start_child: 	Forked child 15881 for process attrd
Nov 13 13:44:20 [15874] vm1 pacemakerd: ( pacemaker.c:586   )   debug: update_node_processes: 	Node vm1 now has process list: 00000000000000000000000000101110 (was 00000000000000000000000000100110)
Nov 13 13:44:20 [15874] vm1 pacemakerd: ( pacemaker.c:270   )    info: start_child: 	Using uid=496 and group=492 for process pengine
Nov 13 13:44:20 [15874] vm1 pacemakerd: ( pacemaker.c:281   )    info: start_child: 	Forked child 15882 for process pengine
Nov 13 13:44:20 [15874] vm1 pacemakerd: ( pacemaker.c:586   )   debug: update_node_processes: 	Node vm1 now has process list: 00000000000000000000000000111110 (was 00000000000000000000000000101110)
Nov 13 13:44:20 [15874] vm1 pacemakerd: ( pacemaker.c:270   )    info: start_child: 	Using uid=496 and group=492 for process crmd
Nov 13 13:44:20 [15874] vm1 pacemakerd: ( pacemaker.c:281   )    info: start_child: 	Forked child 15883 for process crmd
Nov 13 13:44:20 [15874] vm1 pacemakerd: ( pacemaker.c:586   )   debug: update_node_processes: 	Node vm1 now has process list: 00000000000000000000000000111310 (was 00000000000000000000000000111110)
Nov 13 13:44:20 [15874] vm1 pacemakerd: ( pacemaker.c:1023  )    info: main: 	Starting mainloop
Nov 13 13:44:20 [15874] vm1 pacemakerd: (  corosync.c:191   )    info: pcmk_quorum_notification: 	Membership 12: quorum retained (3)
Nov 13 13:44:20 [15874] vm1 pacemakerd: (  corosync.c:210   )   debug: pcmk_quorum_notification: 	Member[0] 3232261517 
Nov 13 13:44:20 [15874] vm1 pacemakerd: (membership.c:607   )  notice: crm_update_peer_state: 	pcmk_quorum_notification: Node vm1[3232261517] - state is now member (was (null))
Nov 13 13:44:20 [15874] vm1 pacemakerd: (  corosync.c:210   )   debug: pcmk_quorum_notification: 	Member[1] 3232261518 
Nov 13 13:44:20 [15874] vm1 pacemakerd: (membership.c:399   )    info: crm_get_peer: 	Created entry 4b1694e5-d501-4f98-a4b3-82e2729ae396/0x10fa8e0 for node (null)/3232261518 (2 total)
Nov 13 13:44:20 [15874] vm1 pacemakerd: (membership.c:432   )    info: crm_get_peer: 	Node 3232261518 has uuid 3232261518
Nov 13 13:44:20 [15874] vm1 pacemakerd: (  corosync.c:214   )    info: pcmk_quorum_notification: 	Obtaining name for new node 3232261518
Nov 13 13:44:20 [15863] vm1 corosync debug   [QB    ] ipc_setup.c:handle_new_connection:484 IPC credentials authenticated (15865-15874-28)
Nov 13 13:44:20 [15863] vm1 corosync debug   [QB    ] ipc_shm.c:qb_ipcs_shm_connect:295 connecting to client [15874]
Nov 13 13:44:20 [15863] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:20 [15863] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:20 [15863] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:20 [15878] vm1        cib: (   logging.c:761   )    info: crm_log_init: 	Changed active directory to /var/lib/heartbeat/cores/hacluster
Nov 13 13:44:20 [15881] vm1      attrd: (   logging.c:761   )    info: crm_log_init: 	Changed active directory to /var/lib/heartbeat/cores/hacluster
Nov 13 13:44:20 [15881] vm1      attrd: (      main.c:307   )    info: main: 	Starting up
Nov 13 13:44:20 [15881] vm1      attrd: (   cluster.c:536   )    info: get_cluster_type: 	Verifying cluster type: 'corosync'
Nov 13 13:44:20 [15881] vm1      attrd: (   cluster.c:573   )    info: get_cluster_type: 	Assuming an active 'corosync' cluster
Nov 13 13:44:20 [15881] vm1      attrd: (   cluster.c:179   )  notice: crm_cluster_connect: 	Connecting to cluster infrastructure: corosync
Nov 13 13:44:20 [15880] vm1       lrmd: (   logging.c:761   )    info: crm_log_init: 	Changed active directory to /var/lib/heartbeat/cores/root
Nov 13 13:44:20 [15878] vm1        cib: (      main.c:230   )  notice: main: 	Using new config location: /var/lib/pacemaker/cib
Nov 13 13:44:20 [15878] vm1        cib: (   cluster.c:536   )    info: get_cluster_type: 	Verifying cluster type: 'corosync'
Nov 13 13:44:20 [15878] vm1        cib: (   cluster.c:573   )    info: get_cluster_type: 	Assuming an active 'corosync' cluster
Nov 13 13:44:20 [15878] vm1        cib: (        io.c:259   )    info: retrieveCib: 	Reading cluster configuration from: /var/lib/pacemaker/cib/cib.xml (digest: /var/lib/pacemaker/cib/cib.xml.sig)
Nov 13 13:44:20 [15878] vm1        cib: (        io.c:262   ) warning: retrieveCib: 	Cluster configuration not found: /var/lib/pacemaker/cib/cib.xml
Nov 13 13:44:20 [15878] vm1        cib: (        io.c:380   ) warning: readCibXmlFile: 	Primary configuration corrupt or unusable, trying backups in /var/lib/pacemaker/cib
Nov 13 13:44:20 [15863] vm1 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_created:272 connection created
Nov 13 13:44:20 [15863] vm1 corosync debug   [CMAP  ] cmap.c:cmap_lib_init_fn:306 lib_init_fn: conn=0x7fa788360010
Nov 13 13:44:20 [15863] vm1 corosync debug   [CPG   ] cpg.c:message_handler_req_exec_cpg_procjoin:1260 got procjoin message from cluster node -1062705778 (r(0) ip(192.168.101.142) r(1) ip(192.168.102.142) ) for pid 31455
Nov 13 13:44:20 [15879] vm1 stonith-ng: (   logging.c:761   )    info: crm_log_init: 	Changed active directory to /var/lib/heartbeat/cores/root
Nov 13 13:44:20 [15879] vm1 stonith-ng: (   cluster.c:536   )    info: get_cluster_type: 	Verifying cluster type: 'corosync'
Nov 13 13:44:20 [15879] vm1 stonith-ng: (   cluster.c:573   )    info: get_cluster_type: 	Assuming an active 'corosync' cluster
Nov 13 13:44:20 [15879] vm1 stonith-ng: (   cluster.c:179   )  notice: crm_cluster_connect: 	Connecting to cluster infrastructure: corosync
Nov 13 13:44:20 [15880] vm1       lrmd: ( ipc_setup.c:377   )    info: qb_ipcs_us_publish: 	server name: lrmd
Nov 13 13:44:20 [15880] vm1       lrmd: (      main.c:313   )    info: main: 	Starting
Nov 13 13:44:20 [15878] vm1        cib: (        io.c:412   ) warning: readCibXmlFile: 	Continuing with an empty configuration.
Nov 13 13:44:20 [15878] vm1        cib: (       xml.c:2627  )    info: validate_with_relaxng: 	Creating RNG parser context
Nov 13 13:44:20 [15863] vm1 corosync debug   [QB    ] ipc_setup.c:handle_new_connection:484 IPC credentials authenticated (15865-15881-29)
Nov 13 13:44:20 [15874] vm1 pacemakerd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:20 [15874] vm1 pacemakerd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:20 [15874] vm1 pacemakerd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:20 [15863] vm1 corosync debug   [QB    ] ipc_shm.c:qb_ipcs_shm_connect:295 connecting to client [15881]
Nov 13 13:44:20 [15863] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:20 [15882] vm1    pengine: (   logging.c:761   )    info: crm_log_init: 	Changed active directory to /var/lib/heartbeat/cores/hacluster
Nov 13 13:44:20 [15882] vm1    pengine: (      main.c:168   )   debug: main: 	Init server comms
Nov 13 13:44:20 [15882] vm1    pengine: ( ipc_setup.c:377   )    info: qb_ipcs_us_publish: 	server name: pengine
Nov 13 13:44:20 [15882] vm1    pengine: (      main.c:176   )    info: main: 	Starting pengine
Nov 13 13:44:20 [15863] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:20 [15883] vm1       crmd: (   logging.c:761   )    info: crm_log_init: 	Changed active directory to /var/lib/heartbeat/cores/hacluster
Nov 13 13:44:20 [15883] vm1       crmd: (      main.c:97    )  notice: main: 	CRM Git Version: 2383f6c
Nov 13 13:44:20 [15883] vm1       crmd: (      main.c:134   )   debug: crmd_init: 	Starting crmd
Nov 13 13:44:20 [15883] vm1       crmd: (       fsa.c:193   )   debug: s_crmd_fsa: 	Processing I_STARTUP: [ state=S_STARTING cause=C_STARTUP origin=crmd_init ]
Nov 13 13:44:20 [15883] vm1       crmd: (      misc.c:47    )    info: do_log: 	FSA: Input I_STARTUP from crmd_init() received in state S_STARTING
Nov 13 13:44:20 [15883] vm1       crmd: (   control.c:488   )   debug: do_startup: 	Registering Signal Handlers
Nov 13 13:44:20 [15883] vm1       crmd: (   control.c:495   )   debug: do_startup: 	Creating CIB and LRM objects
Nov 13 13:44:20 [15883] vm1       crmd: (   cluster.c:536   )    info: get_cluster_type: 	Verifying cluster type: 'corosync'
Nov 13 13:44:20 [15883] vm1       crmd: (   cluster.c:573   )    info: get_cluster_type: 	Assuming an active 'corosync' cluster
Nov 13 13:44:20 [15883] vm1       crmd: (       ipc.c:781   )    info: crm_ipc_connect: 	Could not establish cib_shm connection: Connection refused (111)
Nov 13 13:44:20 [15883] vm1       crmd: (cib_native.c:229   )   debug: cib_native_signon_raw: 	Connection unsuccessful (0 (nil))
Nov 13 13:44:20 [15883] vm1       crmd: (cib_native.c:272   )   debug: cib_native_signon_raw: 	Connection to CIB failed: Transport endpoint is not connected
Nov 13 13:44:20 [15883] vm1       crmd: (cib_native.c:282   )   debug: cib_native_signoff: 	Signing out of the CIB Service
Nov 13 13:44:20 [15863] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:20 [15881] vm1      attrd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:20 [15881] vm1      attrd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:20 [15881] vm1      attrd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:20 [15863] vm1 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_created:272 connection created
Nov 13 13:44:20 [15863] vm1 corosync debug   [CPG   ] cpg.c:cpg_lib_init_fn:1459 lib_init_fn: conn=0x7fa78825cdb0, cpd=0x7fa78825d4d4
Nov 13 13:44:20 [15874] vm1 pacemakerd: (      ipcc.c:378   )   debug: qb_ipcc_disconnect: 	qb_ipcc_disconnect()
Nov 13 13:44:20 [15874] vm1 pacemakerd: (ringbuffer.c:302   )   debug: qb_rb_close: 	Closing ringbuffer: /dev/shm/qb-cmap-request-15865-15874-28-header
Nov 13 13:44:20 [15874] vm1 pacemakerd: (ringbuffer.c:302   )   debug: qb_rb_close: 	Closing ringbuffer: /dev/shm/qb-cmap-response-15865-15874-28-header
Nov 13 13:44:20 [15874] vm1 pacemakerd: (ringbuffer.c:302   )   debug: qb_rb_close: 	Closing ringbuffer: /dev/shm/qb-cmap-event-15865-15874-28-header
Nov 13 13:44:20 [15874] vm1 pacemakerd: (  corosync.c:134   )  notice: corosync_node_name: 	Unable to get node name for nodeid 3232261518
Nov 13 13:44:20 [15874] vm1 pacemakerd: (membership.c:607   )  notice: crm_update_peer_state: 	pcmk_quorum_notification: Node (null)[3232261518] - state is now member (was (null))
Nov 13 13:44:20 [15874] vm1 pacemakerd: (  corosync.c:210   )   debug: pcmk_quorum_notification: 	Member[2] 3232261519 
Nov 13 13:44:20 [15874] vm1 pacemakerd: (membership.c:399   )    info: crm_get_peer: 	Created entry 8c58deab-c20e-41d3-b0da-8ece9c0ac2f1/0x10fabe0 for node (null)/3232261519 (3 total)
Nov 13 13:44:20 [15874] vm1 pacemakerd: (membership.c:432   )    info: crm_get_peer: 	Node 3232261519 has uuid 3232261519
Nov 13 13:44:20 [15874] vm1 pacemakerd: (  corosync.c:214   )    info: pcmk_quorum_notification: 	Obtaining name for new node 3232261519
Nov 13 13:44:20 [15881] vm1      attrd: (       cpg.c:110   )   debug: get_local_nodeid: 	Local nodeid is 3232261517
Nov 13 13:44:20 [15863] vm1 corosync debug   [QB    ] ipc_setup.c:handle_new_connection:484 IPC credentials authenticated (15865-15879-30)
Nov 13 13:44:20 [15863] vm1 corosync debug   [QB    ] ipc_shm.c:qb_ipcs_shm_connect:295 connecting to client [15879]
Nov 13 13:44:20 [15863] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:20 [15878] vm1        cib: (        io.c:596   )   debug: activateCibXml: 	Triggering CIB write for start op
Nov 13 13:44:20 [15878] vm1        cib: (      main.c:586   )    info: startCib: 	CIB Initialization completed successfully
Nov 13 13:44:20 [15863] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:20 [15878] vm1        cib: (   cluster.c:179   )  notice: crm_cluster_connect: 	Connecting to cluster infrastructure: corosync
Nov 13 13:44:20 [15863] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:20 [15879] vm1 stonith-ng: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:20 [15879] vm1 stonith-ng: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:20 [15879] vm1 stonith-ng: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:20 [15863] vm1 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_created:272 connection created
Nov 13 13:44:20 [15863] vm1 corosync debug   [CPG   ] cpg.c:cpg_lib_init_fn:1459 lib_init_fn: conn=0x7fa78825e8d0, cpd=0x7fa78825f024
Nov 13 13:44:20 [15863] vm1 corosync debug   [QB    ] ipcs.c:qb_ipcs_dispatch_connection_request:757 HUP conn (15865-15874-28)
Nov 13 13:44:20 [15863] vm1 corosync debug   [QB    ] ipcs.c:qb_ipcs_disconnect:605 qb_ipcs_disconnect(15865-15874-28) state:2
Nov 13 13:44:20 [15863] vm1 corosync debug   [QB    ] loop_poll_epoll.c:_del:117 epoll_ctl(del): Bad file descriptor (9)
Nov 13 13:44:20 [15863] vm1 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_closed:417 cs_ipcs_connection_closed() 
Nov 13 13:44:20 [15863] vm1 corosync debug   [CMAP  ] cmap.c:cmap_lib_exit_fn:325 exit_fn for conn=0x7fa788360010
Nov 13 13:44:20 [15863] vm1 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_destroyed:390 cs_ipcs_connection_destroyed() 
Nov 13 13:44:20 [15863] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_close:299 Free'ing ringbuffer: /dev/shm/qb-cmap-response-15865-15874-28-header
Nov 13 13:44:20 [15863] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_close:299 Free'ing ringbuffer: /dev/shm/qb-cmap-event-15865-15874-28-header
Nov 13 13:44:20 [15863] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_close:299 Free'ing ringbuffer: /dev/shm/qb-cmap-request-15865-15874-28-header
Nov 13 13:44:20 [15881] vm1      attrd: (membership.c:399   )    info: crm_get_peer: 	Created entry 4a155865-3b20-4c1b-92ef-fc18f238821f/0x1fed130 for node (null)/3232261517 (1 total)
Nov 13 13:44:20 [15881] vm1      attrd: (membership.c:432   )    info: crm_get_peer: 	Node 3232261517 has uuid 3232261517
Nov 13 13:44:20 [15881] vm1      attrd: (membership.c:550   )    info: crm_update_peer_proc: 	cluster_connect_cpg: Node (null)[3232261517] - corosync-cpg is now online
Nov 13 13:44:20 [15881] vm1      attrd: (membership.c:607   )  notice: crm_update_peer_state: 	attrd_peer_change_cb: Node (null)[3232261517] - state is now member (was (null))
Nov 13 13:44:20 [15881] vm1      attrd: (  corosync.c:345   )    info: init_cs_connection_once: 	Connection to 'corosync': established
Nov 13 13:44:20 [15879] vm1 stonith-ng: (       cpg.c:110   )   debug: get_local_nodeid: 	Local nodeid is 3232261517
Nov 13 13:44:20 [15879] vm1 stonith-ng: (membership.c:399   )    info: crm_get_peer: 	Created entry c18fea60-cb54-4d98-abd8-87fca3b361ad/0x19436a0 for node (null)/3232261517 (1 total)
Nov 13 13:44:20 [15879] vm1 stonith-ng: (membership.c:432   )    info: crm_get_peer: 	Node 3232261517 has uuid 3232261517
Nov 13 13:44:20 [15879] vm1 stonith-ng: (membership.c:550   )    info: crm_update_peer_proc: 	cluster_connect_cpg: Node (null)[3232261517] - corosync-cpg is now online
Nov 13 13:44:20 [15879] vm1 stonith-ng: (  corosync.c:345   )    info: init_cs_connection_once: 	Connection to 'corosync': established
Nov 13 13:44:20 [15863] vm1 corosync debug   [QB    ] ipc_setup.c:handle_new_connection:484 IPC credentials authenticated (15865-15878-28)
Nov 13 13:44:20 [15863] vm1 corosync debug   [QB    ] ipc_shm.c:qb_ipcs_shm_connect:295 connecting to client [15878]
Nov 13 13:44:20 [15863] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:20 [15863] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:20 [15863] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:20 [15878] vm1        cib: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:20 [15878] vm1        cib: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:20 [15878] vm1        cib: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:20 [15863] vm1 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_created:272 connection created
Nov 13 13:44:20 [15863] vm1 corosync debug   [CPG   ] cpg.c:cpg_lib_init_fn:1459 lib_init_fn: conn=0x7fa788361260, cpd=0x7fa78835fc54
Nov 13 13:44:20 [15863] vm1 corosync debug   [QB    ] ipc_setup.c:handle_new_connection:484 IPC credentials authenticated (15865-15874-31)
Nov 13 13:44:20 [15863] vm1 corosync debug   [QB    ] ipc_shm.c:qb_ipcs_shm_connect:295 connecting to client [15874]
Nov 13 13:44:20 [15863] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:20 [15863] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:20 [15863] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:20 [15874] vm1 pacemakerd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:20 [15874] vm1 pacemakerd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:20 [15874] vm1 pacemakerd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:20 [15863] vm1 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_created:272 connection created
Nov 13 13:44:20 [15863] vm1 corosync debug   [CMAP  ] cmap.c:cmap_lib_init_fn:306 lib_init_fn: conn=0x7fa7882604a0
Nov 13 13:44:20 [15878] vm1        cib: (       cpg.c:110   )   debug: get_local_nodeid: 	Local nodeid is 3232261517
Nov 13 13:44:20 [15874] vm1 pacemakerd: (      ipcc.c:378   )   debug: qb_ipcc_disconnect: 	qb_ipcc_disconnect()
Nov 13 13:44:20 [15874] vm1 pacemakerd: (      ipcc.c:378   )   debug: qb_ipcc_disconnect: 	qb_ipcc_disconnect()
Nov 13 13:44:20 [15874] vm1 pacemakerd: (ringbuffer.c:302   )   debug: qb_rb_close: 	Closing ringbuffer: /dev/shm/qb-cmap-request-15865-15874-31-header
Nov 13 13:44:20 [15874] vm1 pacemakerd: (ringbuffer.c:302   )   debug: qb_rb_close: 	Closing ringbuffer: /dev/shm/qb-cmap-request-15865-15874-31-header
Nov 13 13:44:20 [15874] vm1 pacemakerd: (ringbuffer.c:302   )   debug: qb_rb_close: 	Closing ringbuffer: /dev/shm/qb-cmap-response-15865-15874-31-header
Nov 13 13:44:20 [15874] vm1 pacemakerd: (ringbuffer.c:302   )   debug: qb_rb_close: 	Closing ringbuffer: /dev/shm/qb-cmap-response-15865-15874-31-header
Nov 13 13:44:20 [15874] vm1 pacemakerd: (ringbuffer.c:302   )   debug: qb_rb_close: 	Closing ringbuffer: /dev/shm/qb-cmap-event-15865-15874-31-header
Nov 13 13:44:20 [15874] vm1 pacemakerd: (ringbuffer.c:302   )   debug: qb_rb_close: 	Closing ringbuffer: /dev/shm/qb-cmap-event-15865-15874-31-header
Nov 13 13:44:20 [15874] vm1 pacemakerd: (  corosync.c:134   )  notice: corosync_node_name: 	Unable to get node name for nodeid 3232261519
Nov 13 13:44:20 [15874] vm1 pacemakerd: (membership.c:607   )  notice: crm_update_peer_state: 	pcmk_quorum_notification: Node (null)[3232261519] - state is now member (was (null))
Nov 13 13:44:20 [15863] vm1 corosync debug   [QB    ] ipc_setup.c:handle_new_connection:484 IPC credentials authenticated (15865-15881-32)
Nov 13 13:44:20 [15863] vm1 corosync debug   [QB    ] ipc_shm.c:qb_ipcs_shm_connect:295 connecting to client [15881]
Nov 13 13:44:20 [15863] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:20 [15863] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:20 [15863] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:20 [15881] vm1      attrd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:20 [15881] vm1      attrd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:20 [15881] vm1      attrd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:20 [15863] vm1 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_created:272 connection created
Nov 13 13:44:20 [15863] vm1 corosync debug   [CMAP  ] cmap.c:cmap_lib_init_fn:306 lib_init_fn: conn=0x7fa788266d00
Nov 13 13:44:20 [15878] vm1        cib: (membership.c:399   )    info: crm_get_peer: 	Created entry 53f2667c-1243-48e0-b350-5b1413d2c3ab/0x1352360 for node (null)/3232261517 (1 total)
Nov 13 13:44:20 [15878] vm1        cib: (membership.c:432   )    info: crm_get_peer: 	Node 3232261517 has uuid 3232261517
Nov 13 13:44:20 [15878] vm1        cib: (membership.c:550   )    info: crm_update_peer_proc: 	cluster_connect_cpg: Node (null)[3232261517] - corosync-cpg is now online
Nov 13 13:44:20 [15878] vm1        cib: (  corosync.c:345   )    info: init_cs_connection_once: 	Connection to 'corosync': established
Nov 13 13:44:20 [15863] vm1 corosync debug   [QB    ] ipcs.c:qb_ipcs_dispatch_connection_request:757 HUP conn (15865-15874-31)
Nov 13 13:44:20 [15863] vm1 corosync debug   [QB    ] ipcs.c:qb_ipcs_disconnect:605 qb_ipcs_disconnect(15865-15874-31) state:2
Nov 13 13:44:20 [15863] vm1 corosync debug   [QB    ] loop_poll_epoll.c:_del:117 epoll_ctl(del): Bad file descriptor (9)
Nov 13 13:44:20 [15863] vm1 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_closed:417 cs_ipcs_connection_closed() 
Nov 13 13:44:20 [15863] vm1 corosync debug   [CMAP  ] cmap.c:cmap_lib_exit_fn:325 exit_fn for conn=0x7fa7882604a0
Nov 13 13:44:20 [15863] vm1 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_destroyed:390 cs_ipcs_connection_destroyed() 
Nov 13 13:44:20 [15863] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_close:299 Free'ing ringbuffer: /dev/shm/qb-cmap-response-15865-15874-31-header
Nov 13 13:44:20 [15863] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_close:299 Free'ing ringbuffer: /dev/shm/qb-cmap-event-15865-15874-31-header
Nov 13 13:44:20 [15863] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_close:299 Free'ing ringbuffer: /dev/shm/qb-cmap-request-15865-15874-31-header
Nov 13 13:44:20 [15881] vm1      attrd: (      ipcc.c:378   )   debug: qb_ipcc_disconnect: 	qb_ipcc_disconnect()
Nov 13 13:44:20 [15881] vm1      attrd: (ringbuffer.c:302   )   debug: qb_rb_close: 	Closing ringbuffer: /dev/shm/qb-cmap-request-15865-15881-32-header
Nov 13 13:44:20 [15881] vm1      attrd: (ringbuffer.c:302   )   debug: qb_rb_close: 	Closing ringbuffer: /dev/shm/qb-cmap-response-15865-15881-32-header
Nov 13 13:44:20 [15881] vm1      attrd: (ringbuffer.c:302   )   debug: qb_rb_close: 	Closing ringbuffer: /dev/shm/qb-cmap-event-15865-15881-32-header
Nov 13 13:44:20 [15881] vm1      attrd: (  corosync.c:134   )  notice: corosync_node_name: 	Unable to get node name for nodeid 3232261517
Nov 13 13:44:20 [15881] vm1      attrd: (   cluster.c:338   )  notice: get_node_name: 	Defaulting to uname -n for the local corosync node name
Nov 13 13:44:20 [15881] vm1      attrd: (membership.c:404   )    info: crm_get_peer: 	Node 3232261517 is now known as vm1
Nov 13 13:44:20 [15881] vm1      attrd: (      main.c:323   )    info: main: 	Cluster connection active
Nov 13 13:44:20 [15881] vm1      attrd: ( ipc_setup.c:377   )    info: qb_ipcs_us_publish: 	server name: attrd
Nov 13 13:44:20 [15881] vm1      attrd: (      main.c:327   )    info: main: 	Accepting attribute updates
Nov 13 13:44:20 [15881] vm1      attrd: (      main.c:149   )   debug: attrd_cib_connect: 	CIB signon attempt 1
Nov 13 13:44:20 [15881] vm1      attrd: (       ipc.c:781   )    info: crm_ipc_connect: 	Could not establish cib_rw connection: Connection refused (111)
Nov 13 13:44:20 [15881] vm1      attrd: (cib_native.c:229   )   debug: cib_native_signon_raw: 	Connection unsuccessful (0 (nil))
Nov 13 13:44:20 [15881] vm1      attrd: (cib_native.c:272   )   debug: cib_native_signon_raw: 	Connection to CIB failed: Transport endpoint is not connected
Nov 13 13:44:20 [15881] vm1      attrd: (cib_native.c:282   )   debug: cib_native_signoff: 	Signing out of the CIB Service
Nov 13 13:44:20 [15863] vm1 corosync debug   [QB    ] ipcs.c:qb_ipcs_dispatch_connection_request:757 HUP conn (15865-15881-32)
Nov 13 13:44:20 [15863] vm1 corosync debug   [QB    ] ipcs.c:qb_ipcs_disconnect:605 qb_ipcs_disconnect(15865-15881-32) state:2
Nov 13 13:44:20 [15863] vm1 corosync debug   [QB    ] loop_poll_epoll.c:_del:117 epoll_ctl(del): Bad file descriptor (9)
Nov 13 13:44:20 [15863] vm1 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_closed:417 cs_ipcs_connection_closed() 
Nov 13 13:44:20 [15863] vm1 corosync debug   [CMAP  ] cmap.c:cmap_lib_exit_fn:325 exit_fn for conn=0x7fa788266d00
Nov 13 13:44:20 [15863] vm1 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_destroyed:390 cs_ipcs_connection_destroyed() 
Nov 13 13:44:20 [15863] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_close:299 Free'ing ringbuffer: /dev/shm/qb-cmap-response-15865-15881-32-header
Nov 13 13:44:20 [15863] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_close:299 Free'ing ringbuffer: /dev/shm/qb-cmap-event-15865-15881-32-header
Nov 13 13:44:20 [15863] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_close:299 Free'ing ringbuffer: /dev/shm/qb-cmap-request-15865-15881-32-header
Nov 13 13:44:20 [15863] vm1 corosync debug   [QB    ] ipc_setup.c:handle_new_connection:484 IPC credentials authenticated (15865-15879-31)
Nov 13 13:44:20 [15863] vm1 corosync debug   [QB    ] ipc_shm.c:qb_ipcs_shm_connect:295 connecting to client [15879]
Nov 13 13:44:20 [15863] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:20 [15863] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:20 [15863] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:20 [15879] vm1 stonith-ng: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:20 [15879] vm1 stonith-ng: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:20 [15879] vm1 stonith-ng: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:20 [15863] vm1 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_created:272 connection created
Nov 13 13:44:20 [15863] vm1 corosync debug   [CMAP  ] cmap.c:cmap_lib_init_fn:306 lib_init_fn: conn=0x7fa788267a40
Nov 13 13:44:20 [15874] vm1 pacemakerd: (membership.c:404   )    info: crm_get_peer: 	Node 3232261518 is now known as vm2
Nov 13 13:44:20 [15874] vm1 pacemakerd: ( pacemaker.c:586   )   debug: update_node_processes: 	Node vm2 now has process list: 00000000000000000000000000000100 (was 00000000000000000000000000000000)
Nov 13 13:44:20 [15874] vm1 pacemakerd: ( pacemaker.c:586   )   debug: update_node_processes: 	Node vm2 now has process list: 00000000000000000000000000100100 (was 00000000000000000000000000000100)
Nov 13 13:44:20 [15874] vm1 pacemakerd: ( pacemaker.c:586   )   debug: update_node_processes: 	Node vm2 now has process list: 00000000000000000000000000100110 (was 00000000000000000000000000100100)
Nov 13 13:44:20 [15874] vm1 pacemakerd: ( pacemaker.c:586   )   debug: update_node_processes: 	Node vm2 now has process list: 00000000000000000000000000101110 (was 00000000000000000000000000100110)
Nov 13 13:44:20 [15874] vm1 pacemakerd: ( pacemaker.c:586   )   debug: update_node_processes: 	Node vm2 now has process list: 00000000000000000000000000111110 (was 00000000000000000000000000101110)
Nov 13 13:44:20 [15879] vm1 stonith-ng: (      ipcc.c:378   )   debug: qb_ipcc_disconnect: 	qb_ipcc_disconnect()
Nov 13 13:44:20 [15879] vm1 stonith-ng: (ringbuffer.c:302   )   debug: qb_rb_close: 	Closing ringbuffer: /dev/shm/qb-cmap-request-15865-15879-31-header
Nov 13 13:44:20 [15879] vm1 stonith-ng: (ringbuffer.c:302   )   debug: qb_rb_close: 	Closing ringbuffer: /dev/shm/qb-cmap-response-15865-15879-31-header
Nov 13 13:44:20 [15879] vm1 stonith-ng: (ringbuffer.c:302   )   debug: qb_rb_close: 	Closing ringbuffer: /dev/shm/qb-cmap-event-15865-15879-31-header
Nov 13 13:44:20 [15879] vm1 stonith-ng: (  corosync.c:134   )  notice: corosync_node_name: 	Unable to get node name for nodeid 3232261517
Nov 13 13:44:20 [15879] vm1 stonith-ng: (   cluster.c:338   )  notice: get_node_name: 	Defaulting to uname -n for the local corosync node name
Nov 13 13:44:20 [15879] vm1 stonith-ng: (membership.c:404   )    info: crm_get_peer: 	Node 3232261517 is now known as vm1
Nov 13 13:44:20 [15879] vm1 stonith-ng: (       ipc.c:781   )    info: crm_ipc_connect: 	Could not establish cib_rw connection: Connection refused (111)
Nov 13 13:44:20 [15879] vm1 stonith-ng: (cib_native.c:229   )   debug: cib_native_signon_raw: 	Connection unsuccessful (0 (nil))
Nov 13 13:44:20 [15879] vm1 stonith-ng: (cib_native.c:272   )   debug: cib_native_signon_raw: 	Connection to CIB failed: Transport endpoint is not connected
Nov 13 13:44:20 [15879] vm1 stonith-ng: (cib_native.c:282   )   debug: cib_native_signoff: 	Signing out of the CIB Service
Nov 13 13:44:20 [15874] vm1 pacemakerd: ( pacemaker.c:586   )   debug: update_node_processes: 	Node vm2 now has process list: 00000000000000000000000000111310 (was 00000000000000000000000000111110)
Nov 13 13:44:20 [15863] vm1 corosync debug   [CPG   ] cpg.c:message_handler_req_exec_cpg_procjoin:1260 got procjoin message from cluster node -1062705777 (r(0) ip(192.168.101.143) r(1) ip(192.168.102.143) ) for pid 460
Nov 13 13:44:20 [15874] vm1 pacemakerd: (membership.c:404   )    info: crm_get_peer: 	Node 3232261519 is now known as vm3
Nov 13 13:44:20 [15874] vm1 pacemakerd: ( pacemaker.c:586   )   debug: update_node_processes: 	Node vm3 now has process list: 00000000000000000000000000000100 (was 00000000000000000000000000000000)
Nov 13 13:44:20 [15874] vm1 pacemakerd: ( pacemaker.c:586   )   debug: update_node_processes: 	Node vm3 now has process list: 00000000000000000000000000100100 (was 00000000000000000000000000000100)
Nov 13 13:44:20 [15874] vm1 pacemakerd: ( pacemaker.c:586   )   debug: update_node_processes: 	Node vm3 now has process list: 00000000000000000000000000100110 (was 00000000000000000000000000100100)
Nov 13 13:44:20 [15874] vm1 pacemakerd: ( pacemaker.c:586   )   debug: update_node_processes: 	Node vm3 now has process list: 00000000000000000000000000101110 (was 00000000000000000000000000100110)
Nov 13 13:44:20 [15863] vm1 corosync debug   [CPG   ] cpg.c:message_handler_req_exec_cpg_procjoin:1260 got procjoin message from cluster node -1062705779 (r(0) ip(192.168.101.141) r(1) ip(192.168.102.141) ) for pid 15881
Nov 13 13:44:20 [15863] vm1 corosync debug   [CPG   ] cpg.c:message_handler_req_exec_cpg_procjoin:1260 got procjoin message from cluster node -1062705779 (r(0) ip(192.168.101.141) r(1) ip(192.168.102.141) ) for pid 15879
Nov 13 13:44:20 [15863] vm1 corosync debug   [CPG   ] cpg.c:message_handler_req_exec_cpg_procjoin:1260 got procjoin message from cluster node -1062705779 (r(0) ip(192.168.101.141) r(1) ip(192.168.102.141) ) for pid 15878
Nov 13 13:44:20 [15863] vm1 corosync debug   [QB    ] ipc_setup.c:handle_new_connection:484 IPC credentials authenticated (15865-15878-32)
Nov 13 13:44:20 [15863] vm1 corosync debug   [QB    ] ipc_shm.c:qb_ipcs_shm_connect:295 connecting to client [15878]
Nov 13 13:44:20 [15863] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:20 [15863] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:20 [15863] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:20 [15878] vm1        cib: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:20 [15878] vm1        cib: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:20 [15878] vm1        cib: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:20 [15863] vm1 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_created:272 connection created
Nov 13 13:44:20 [15863] vm1 corosync debug   [CMAP  ] cmap.c:cmap_lib_init_fn:306 lib_init_fn: conn=0x7fa788265e40
Nov 13 13:44:20 [15863] vm1 corosync debug   [QB    ] ipcs.c:qb_ipcs_dispatch_connection_request:757 HUP conn (15865-15879-31)
Nov 13 13:44:20 [15863] vm1 corosync debug   [QB    ] ipcs.c:qb_ipcs_disconnect:605 qb_ipcs_disconnect(15865-15879-31) state:2
Nov 13 13:44:20 [15863] vm1 corosync debug   [QB    ] loop_poll_epoll.c:_del:117 epoll_ctl(del): Bad file descriptor (9)
Nov 13 13:44:20 [15863] vm1 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_closed:417 cs_ipcs_connection_closed() 
Nov 13 13:44:20 [15863] vm1 corosync debug   [CMAP  ] cmap.c:cmap_lib_exit_fn:325 exit_fn for conn=0x7fa788267a40
Nov 13 13:44:20 [15863] vm1 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_destroyed:390 cs_ipcs_connection_destroyed() 
Nov 13 13:44:20 [15863] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_close:299 Free'ing ringbuffer: /dev/shm/qb-cmap-response-15865-15879-31-header
Nov 13 13:44:20 [15863] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_close:299 Free'ing ringbuffer: /dev/shm/qb-cmap-event-15865-15879-31-header
Nov 13 13:44:20 [15863] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_close:299 Free'ing ringbuffer: /dev/shm/qb-cmap-request-15865-15879-31-header
Nov 13 13:44:20 [15878] vm1        cib: (      ipcc.c:378   )   debug: qb_ipcc_disconnect: 	qb_ipcc_disconnect()
Nov 13 13:44:20 [15878] vm1        cib: (ringbuffer.c:302   )   debug: qb_rb_close: 	Closing ringbuffer: /dev/shm/qb-cmap-request-15865-15878-32-header
Nov 13 13:44:20 [15878] vm1        cib: (ringbuffer.c:302   )   debug: qb_rb_close: 	Closing ringbuffer: /dev/shm/qb-cmap-response-15865-15878-32-header
Nov 13 13:44:20 [15878] vm1        cib: (ringbuffer.c:302   )   debug: qb_rb_close: 	Closing ringbuffer: /dev/shm/qb-cmap-event-15865-15878-32-header
Nov 13 13:44:20 [15878] vm1        cib: (  corosync.c:134   )  notice: corosync_node_name: 	Unable to get node name for nodeid 3232261517
Nov 13 13:44:20 [15878] vm1        cib: (   cluster.c:338   )  notice: get_node_name: 	Defaulting to uname -n for the local corosync node name
Nov 13 13:44:20 [15878] vm1        cib: (membership.c:404   )    info: crm_get_peer: 	Node 3232261517 is now known as vm1
Nov 13 13:44:20 [15878] vm1        cib: ( ipc_setup.c:377   )    info: qb_ipcs_us_publish: 	server name: cib_ro
Nov 13 13:44:20 [15878] vm1        cib: ( ipc_setup.c:377   )    info: qb_ipcs_us_publish: 	server name: cib_rw
Nov 13 13:44:20 [15878] vm1        cib: ( ipc_setup.c:377   )    info: qb_ipcs_us_publish: 	server name: cib_shm
Nov 13 13:44:20 [15878] vm1        cib: (      main.c:550   )    info: cib_init: 	Starting cib mainloop
Nov 13 13:44:20 [15878] vm1        cib: (       cpg.c:378   )    info: pcmk_cpg_membership: 	Joined[0.0] cib.3232261517 
Nov 13 13:44:20 [15878] vm1        cib: (       cpg.c:384   )    info: pcmk_cpg_membership: 	Member[0.0] cib.3232261517 
Nov 13 13:44:20 [15863] vm1 corosync debug   [QB    ] ipcs.c:qb_ipcs_dispatch_connection_request:757 HUP conn (15865-15878-32)
Nov 13 13:44:20 [15863] vm1 corosync debug   [QB    ] ipcs.c:qb_ipcs_disconnect:605 qb_ipcs_disconnect(15865-15878-32) state:2
Nov 13 13:44:20 [15863] vm1 corosync debug   [QB    ] loop_poll_epoll.c:_del:117 epoll_ctl(del): Bad file descriptor (9)
Nov 13 13:44:20 [15863] vm1 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_closed:417 cs_ipcs_connection_closed() 
Nov 13 13:44:20 [15863] vm1 corosync debug   [CMAP  ] cmap.c:cmap_lib_exit_fn:325 exit_fn for conn=0x7fa788265e40
Nov 13 13:44:20 [15863] vm1 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_destroyed:390 cs_ipcs_connection_destroyed() 
Nov 13 13:44:20 [15863] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_close:299 Free'ing ringbuffer: /dev/shm/qb-cmap-response-15865-15878-32-header
Nov 13 13:44:20 [15863] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_close:299 Free'ing ringbuffer: /dev/shm/qb-cmap-event-15865-15878-32-header
Nov 13 13:44:20 [15863] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_close:299 Free'ing ringbuffer: /dev/shm/qb-cmap-request-15865-15878-32-header
Nov 13 13:44:20 [15878] vm1        cib: (     utils.c:1216  )   debug: get_last_sequence: 	Series file /var/lib/pacemaker/cib/cib.last does not exist
Nov 13 13:44:20 [15878] vm1        cib: (        io.c:748   )   debug: write_cib_contents: 	Writing CIB to disk
Nov 13 13:44:20 [15878] vm1        cib: (        io.c:773   )    info: write_cib_contents: 	Wrote version 0.0.0 of the CIB to disk (digest: 978cb58a57d1ff0f3e53e793331143d7)
Nov 13 13:44:20 [15878] vm1        cib: (        io.c:781   )   debug: write_cib_contents: 	Wrote digest 978cb58a57d1ff0f3e53e793331143d7 to disk
Nov 13 13:44:20 [15878] vm1        cib: (        io.c:259   )    info: retrieveCib: 	Reading cluster configuration from: /var/lib/pacemaker/cib/cib.rjUedC (digest: /var/lib/pacemaker/cib/cib.Ceepd4)
Nov 13 13:44:20 [15878] vm1        cib: (        io.c:786   )   debug: write_cib_contents: 	Activating /var/lib/pacemaker/cib/cib.rjUedC
Nov 13 13:44:20 [15863] vm1 corosync debug   [CPG   ] cpg.c:message_handler_req_exec_cpg_procjoin:1260 got procjoin message from cluster node -1062705778 (r(0) ip(192.168.101.142) r(1) ip(192.168.102.142) ) for pid 31462
Nov 13 13:44:20 [15863] vm1 corosync debug   [CPG   ] cpg.c:message_handler_req_exec_cpg_procjoin:1260 got procjoin message from cluster node -1062705778 (r(0) ip(192.168.101.142) r(1) ip(192.168.102.142) ) for pid 31460
Nov 13 13:44:20 [15874] vm1 pacemakerd: ( pacemaker.c:586   )   debug: update_node_processes: 	Node vm3 now has process list: 00000000000000000000000000111110 (was 00000000000000000000000000101110)
Nov 13 13:44:20 [15874] vm1 pacemakerd: ( pacemaker.c:586   )   debug: update_node_processes: 	Node vm3 now has process list: 00000000000000000000000000111310 (was 00000000000000000000000000111110)
Nov 13 13:44:20 [15863] vm1 corosync debug   [CPG   ] cpg.c:message_handler_req_exec_cpg_procjoin:1260 got procjoin message from cluster node -1062705778 (r(0) ip(192.168.101.142) r(1) ip(192.168.102.142) ) for pid 31459
Nov 13 13:44:20 [15878] vm1        cib: (       cpg.c:378   )    info: pcmk_cpg_membership: 	Joined[1.0] cib.3232261518 
Nov 13 13:44:20 [15878] vm1        cib: (       cpg.c:384   )    info: pcmk_cpg_membership: 	Member[1.0] cib.3232261517 
Nov 13 13:44:20 [15878] vm1        cib: (membership.c:399   )    info: crm_get_peer: 	Created entry ed2a0024-eb7a-4964-8d99-8ae499b14693/0x13552a0 for node (null)/3232261518 (2 total)
Nov 13 13:44:20 [15878] vm1        cib: (membership.c:432   )    info: crm_get_peer: 	Node 3232261518 has uuid 3232261518
Nov 13 13:44:20 [15878] vm1        cib: (       cpg.c:384   )    info: pcmk_cpg_membership: 	Member[1.1] cib.3232261518 
Nov 13 13:44:20 [15878] vm1        cib: (membership.c:550   )    info: crm_update_peer_proc: 	pcmk_cpg_membership: Node (null)[3232261518] - corosync-cpg is now online
Nov 13 13:44:21 [15863] vm1 corosync debug   [CPG   ] cpg.c:message_handler_req_exec_cpg_procjoin:1260 got procjoin message from cluster node -1062705777 (r(0) ip(192.168.101.143) r(1) ip(192.168.102.143) ) for pid 467
Nov 13 13:44:21 [15863] vm1 corosync debug   [CPG   ] cpg.c:message_handler_req_exec_cpg_procjoin:1260 got procjoin message from cluster node -1062705777 (r(0) ip(192.168.101.143) r(1) ip(192.168.102.143) ) for pid 465
Nov 13 13:44:21 [15863] vm1 corosync debug   [CPG   ] cpg.c:message_handler_req_exec_cpg_procjoin:1260 got procjoin message from cluster node -1062705777 (r(0) ip(192.168.101.143) r(1) ip(192.168.102.143) ) for pid 464
Nov 13 13:44:21 [15878] vm1        cib: (       cpg.c:378   )    info: pcmk_cpg_membership: 	Joined[2.0] cib.3232261519 
Nov 13 13:44:21 [15878] vm1        cib: (       cpg.c:384   )    info: pcmk_cpg_membership: 	Member[2.0] cib.3232261517 
Nov 13 13:44:21 [15878] vm1        cib: (       cpg.c:384   )    info: pcmk_cpg_membership: 	Member[2.1] cib.3232261518 
Nov 13 13:44:21 [15878] vm1        cib: (membership.c:399   )    info: crm_get_peer: 	Created entry 1be75549-bb16-43ef-8f89-0ae7866f5052/0x1355310 for node (null)/3232261519 (3 total)
Nov 13 13:44:21 [15878] vm1        cib: (membership.c:432   )    info: crm_get_peer: 	Node 3232261519 has uuid 3232261519
Nov 13 13:44:21 [15878] vm1        cib: (       cpg.c:384   )    info: pcmk_cpg_membership: 	Member[2.2] cib.3232261519 
Nov 13 13:44:21 [15878] vm1        cib: (membership.c:550   )    info: crm_update_peer_proc: 	pcmk_cpg_membership: Node (null)[3232261519] - corosync-cpg is now online
Nov 13 13:44:21 [15878] vm1        cib: (       ipc.c:327   )    info: crm_client_new: 	Connecting 0x1355380 for uid=496 gid=492 pid=15883 id=7bc8a5ad-4bdd-48b2-981b-bfd88861945f
Nov 13 13:44:21 [15878] vm1        cib: ( ipc_setup.c:484   )   debug: handle_new_connection: 	IPC credentials authenticated (15878-15883-10)
Nov 13 13:44:21 [15878] vm1        cib: (   ipc_shm.c:295   )   debug: qb_ipcs_shm_connect: 	connecting to client [15883]
Nov 13 13:44:21 [15878] vm1        cib: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:524301; real_size:528384; rb->word_size:132096
Nov 13 13:44:21 [15878] vm1        cib: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:524301; real_size:528384; rb->word_size:132096
Nov 13 13:44:21 [15878] vm1        cib: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:524301; real_size:528384; rb->word_size:132096
Nov 13 13:44:21 [15883] vm1       crmd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:524301; real_size:528384; rb->word_size:132096
Nov 13 13:44:21 [15883] vm1       crmd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:524301; real_size:528384; rb->word_size:132096
Nov 13 13:44:21 [15883] vm1       crmd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:524301; real_size:528384; rb->word_size:132096
Nov 13 13:44:21 [15883] vm1       crmd: (cib_native.c:268   )   debug: cib_native_signon_raw: 	Connection to CIB successful
Nov 13 13:44:21 [15878] vm1        cib: ( callbacks.c:189   )   debug: cib_common_callback_worker: 	Setting cib_refresh_notify callbacks for crmd (7bc8a5ad-4bdd-48b2-981b-bfd88861945f): on
Nov 13 13:44:21 [15878] vm1        cib: ( callbacks.c:189   )   debug: cib_common_callback_worker: 	Setting cib_diff_notify callbacks for crmd (7bc8a5ad-4bdd-48b2-981b-bfd88861945f): on
Nov 13 13:44:21 [15883] vm1       crmd: (       cib.c:215   )    info: do_cib_control: 	CIB connection established
Nov 13 13:44:21 [15883] vm1       crmd: (   cluster.c:179   )  notice: crm_cluster_connect: 	Connecting to cluster infrastructure: corosync
Nov 13 13:44:21 [15863] vm1 corosync debug   [QB    ] ipc_setup.c:handle_new_connection:484 IPC credentials authenticated (15865-15883-31)
Nov 13 13:44:21 [15863] vm1 corosync debug   [QB    ] ipc_shm.c:qb_ipcs_shm_connect:295 connecting to client [15883]
Nov 13 13:44:21 [15863] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:21 [15878] vm1        cib: ( callbacks.c:761   )    info: cib_process_request: 	Completed cib_query operation for section 'all': OK (rc=0, origin=local/crmd/2, version=0.0.0)
Nov 13 13:44:21 [15863] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:21 [15863] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:21 [15883] vm1       crmd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:21 [15883] vm1       crmd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:21 [15883] vm1       crmd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:21 [15863] vm1 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_created:272 connection created
Nov 13 13:44:21 [15863] vm1 corosync debug   [CPG   ] cpg.c:cpg_lib_init_fn:1459 lib_init_fn: conn=0x7fa788265e40, cpd=0x7fa788267b24
Nov 13 13:44:21 [15883] vm1       crmd: (       cpg.c:110   )   debug: get_local_nodeid: 	Local nodeid is 3232261517
Nov 13 13:44:21 [15883] vm1       crmd: (membership.c:399   )    info: crm_get_peer: 	Created entry 21dfed2a-ea12-4608-98b4-45fc01561e11/0x1414c50 for node (null)/3232261517 (1 total)
Nov 13 13:44:21 [15883] vm1       crmd: (membership.c:432   )    info: crm_get_peer: 	Node 3232261517 has uuid 3232261517
Nov 13 13:44:21 [15883] vm1       crmd: (membership.c:550   )    info: crm_update_peer_proc: 	cluster_connect_cpg: Node (null)[3232261517] - corosync-cpg is now online
Nov 13 13:44:21 [15883] vm1       crmd: (  corosync.c:345   )    info: init_cs_connection_once: 	Connection to 'corosync': established
Nov 13 13:44:21 [15863] vm1 corosync debug   [QB    ] ipc_setup.c:handle_new_connection:484 IPC credentials authenticated (15865-15883-32)
Nov 13 13:44:21 [15863] vm1 corosync debug   [QB    ] ipc_shm.c:qb_ipcs_shm_connect:295 connecting to client [15883]
Nov 13 13:44:21 [15863] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:21 [15863] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:21 [15863] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:21 [15883] vm1       crmd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:21 [15883] vm1       crmd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:21 [15883] vm1       crmd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:21 [15863] vm1 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_created:272 connection created
Nov 13 13:44:21 [15863] vm1 corosync debug   [CMAP  ] cmap.c:cmap_lib_init_fn:306 lib_init_fn: conn=0x7fa7882605d0
Nov 13 13:44:21 [15883] vm1       crmd: (      ipcc.c:378   )   debug: qb_ipcc_disconnect: 	qb_ipcc_disconnect()
Nov 13 13:44:21 [15883] vm1       crmd: (ringbuffer.c:302   )   debug: qb_rb_close: 	Closing ringbuffer: /dev/shm/qb-cmap-request-15865-15883-32-header
Nov 13 13:44:21 [15883] vm1       crmd: (ringbuffer.c:302   )   debug: qb_rb_close: 	Closing ringbuffer: /dev/shm/qb-cmap-response-15865-15883-32-header
Nov 13 13:44:21 [15883] vm1       crmd: (ringbuffer.c:302   )   debug: qb_rb_close: 	Closing ringbuffer: /dev/shm/qb-cmap-event-15865-15883-32-header
Nov 13 13:44:21 [15883] vm1       crmd: (  corosync.c:134   )  notice: corosync_node_name: 	Unable to get node name for nodeid 3232261517
Nov 13 13:44:21 [15883] vm1       crmd: (   cluster.c:338   )  notice: get_node_name: 	Defaulting to uname -n for the local corosync node name
Nov 13 13:44:21 [15883] vm1       crmd: (membership.c:404   )    info: crm_get_peer: 	Node 3232261517 is now known as vm1
Nov 13 13:44:21 [15883] vm1       crmd: ( callbacks.c:118   )    info: peer_update_callback: 	vm1 is now (null)
Nov 13 13:44:21 [15883] vm1       crmd: (  corosync.c:255   )   debug: cluster_connect_quorum: 	Configuring Pacemaker to obtain quorum from Corosync
Nov 13 13:44:21 [15863] vm1 corosync debug   [QB    ] ipcs.c:qb_ipcs_dispatch_connection_request:757 HUP conn (15865-15883-32)
Nov 13 13:44:21 [15863] vm1 corosync debug   [QB    ] ipcs.c:qb_ipcs_disconnect:605 qb_ipcs_disconnect(15865-15883-32) state:2
Nov 13 13:44:21 [15863] vm1 corosync debug   [QB    ] loop_poll_epoll.c:_del:117 epoll_ctl(del): Bad file descriptor (9)
Nov 13 13:44:21 [15863] vm1 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_closed:417 cs_ipcs_connection_closed() 
Nov 13 13:44:21 [15863] vm1 corosync debug   [CMAP  ] cmap.c:cmap_lib_exit_fn:325 exit_fn for conn=0x7fa7882605d0
Nov 13 13:44:21 [15863] vm1 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_destroyed:390 cs_ipcs_connection_destroyed() 
Nov 13 13:44:21 [15863] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_close:299 Free'ing ringbuffer: /dev/shm/qb-cmap-response-15865-15883-32-header
Nov 13 13:44:21 [15863] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_close:299 Free'ing ringbuffer: /dev/shm/qb-cmap-event-15865-15883-32-header
Nov 13 13:44:21 [15863] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_close:299 Free'ing ringbuffer: /dev/shm/qb-cmap-request-15865-15883-32-header
Nov 13 13:44:21 [15863] vm1 corosync debug   [CPG   ] cpg.c:message_handler_req_exec_cpg_procjoin:1260 got procjoin message from cluster node -1062705779 (r(0) ip(192.168.101.141) r(1) ip(192.168.102.141) ) for pid 15883
Nov 13 13:44:21 [15863] vm1 corosync debug   [QB    ] ipc_setup.c:handle_new_connection:484 IPC credentials authenticated (15865-15883-32)
Nov 13 13:44:21 [15863] vm1 corosync debug   [QB    ] ipc_shm.c:qb_ipcs_shm_connect:295 connecting to client [15883]
Nov 13 13:44:21 [15863] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:21 [15863] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:21 [15863] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:21 [15883] vm1       crmd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:21 [15883] vm1       crmd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:21 [15883] vm1       crmd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:21 [15863] vm1 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_created:272 connection created
Nov 13 13:44:21 [15863] vm1 corosync debug   [QUORUM] vsf_quorum.c:quorum_lib_init_fn:316 lib_init_fn: conn=0x7fa788260790
Nov 13 13:44:21 [15863] vm1 corosync debug   [QUORUM] vsf_quorum.c:message_handler_req_lib_quorum_gettype:471 got quorum_type request on 0x7fa788260790
Nov 13 13:44:21 [15863] vm1 corosync debug   [QUORUM] vsf_quorum.c:message_handler_req_lib_quorum_getquorate:395 got quorate request on 0x7fa788260790
Nov 13 13:44:21 [15883] vm1       crmd: (  corosync.c:273   )  notice: cluster_connect_quorum: 	Quorum acquired
Nov 13 13:44:21 [15863] vm1 corosync debug   [QUORUM] vsf_quorum.c:message_handler_req_lib_quorum_trackstart:412 got trackstart request on 0x7fa788260790
Nov 13 13:44:21 [15863] vm1 corosync debug   [QUORUM] vsf_quorum.c:message_handler_req_lib_quorum_trackstart:420 sending initial status to 0x7fa788260790
Nov 13 13:44:21 [15863] vm1 corosync debug   [QUORUM] vsf_quorum.c:send_library_notification:359 sending quorum notification to 0x7fa788260790, length = 60
Nov 13 13:44:21 [15863] vm1 corosync debug   [QB    ] ipc_setup.c:handle_new_connection:484 IPC credentials authenticated (15865-15883-33)
Nov 13 13:44:21 [15863] vm1 corosync debug   [QB    ] ipc_shm.c:qb_ipcs_shm_connect:295 connecting to client [15883]
Nov 13 13:44:21 [15863] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:21 [15863] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:21 [15863] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:21 [15883] vm1       crmd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:21 [15883] vm1       crmd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:21 [15883] vm1       crmd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:21 [15863] vm1 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_created:272 connection created
Nov 13 13:44:21 [15863] vm1 corosync debug   [CMAP  ] cmap.c:cmap_lib_init_fn:306 lib_init_fn: conn=0x7fa788262420
Nov 13 13:44:21 [15883] vm1       crmd: (      ipcc.c:378   )   debug: qb_ipcc_disconnect: 	qb_ipcc_disconnect()
Nov 13 13:44:21 [15883] vm1       crmd: (ringbuffer.c:302   )   debug: qb_rb_close: 	Closing ringbuffer: /dev/shm/qb-cmap-request-15865-15883-33-header
Nov 13 13:44:21 [15883] vm1       crmd: (ringbuffer.c:302   )   debug: qb_rb_close: 	Closing ringbuffer: /dev/shm/qb-cmap-response-15865-15883-33-header
Nov 13 13:44:21 [15883] vm1       crmd: (ringbuffer.c:302   )   debug: qb_rb_close: 	Closing ringbuffer: /dev/shm/qb-cmap-event-15865-15883-33-header
Nov 13 13:44:21 [15863] vm1 corosync debug   [QB    ] ipcs.c:qb_ipcs_dispatch_connection_request:757 HUP conn (15865-15883-33)
Nov 13 13:44:21 [15863] vm1 corosync debug   [QB    ] ipcs.c:qb_ipcs_disconnect:605 qb_ipcs_disconnect(15865-15883-33) state:2
Nov 13 13:44:21 [15863] vm1 corosync debug   [QB    ] loop_poll_epoll.c:_del:117 epoll_ctl(del): Bad file descriptor (9)
Nov 13 13:44:21 [15863] vm1 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_closed:417 cs_ipcs_connection_closed() 
Nov 13 13:44:21 [15863] vm1 corosync debug   [CMAP  ] cmap.c:cmap_lib_exit_fn:325 exit_fn for conn=0x7fa788262420
Nov 13 13:44:21 [15863] vm1 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_destroyed:390 cs_ipcs_connection_destroyed() 
Nov 13 13:44:21 [15863] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_close:299 Free'ing ringbuffer: /dev/shm/qb-cmap-response-15865-15883-33-header
Nov 13 13:44:21 [15863] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_close:299 Free'ing ringbuffer: /dev/shm/qb-cmap-event-15865-15883-33-header
Nov 13 13:44:21 [15863] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_close:299 Free'ing ringbuffer: /dev/shm/qb-cmap-request-15865-15883-33-header
Nov 13 13:44:21 [15863] vm1 corosync debug   [QB    ] ipc_setup.c:handle_new_connection:484 IPC credentials authenticated (15865-15883-33)
Nov 13 13:44:21 [15863] vm1 corosync debug   [QB    ] ipc_shm.c:qb_ipcs_shm_connect:295 connecting to client [15883]
Nov 13 13:44:21 [15863] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:21 [15863] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:21 [15863] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:21 [15883] vm1       crmd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:21 [15883] vm1       crmd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:21 [15883] vm1       crmd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:21 [15863] vm1 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_created:272 connection created
Nov 13 13:44:21 [15863] vm1 corosync debug   [CMAP  ] cmap.c:cmap_lib_init_fn:306 lib_init_fn: conn=0x7fa788262420
Nov 13 13:44:21 [15883] vm1       crmd: (      ipcc.c:378   )   debug: qb_ipcc_disconnect: 	qb_ipcc_disconnect()
Nov 13 13:44:21 [15883] vm1       crmd: (ringbuffer.c:302   )   debug: qb_rb_close: 	Closing ringbuffer: /dev/shm/qb-cmap-request-15865-15883-33-header
Nov 13 13:44:21 [15883] vm1       crmd: (ringbuffer.c:302   )   debug: qb_rb_close: 	Closing ringbuffer: /dev/shm/qb-cmap-response-15865-15883-33-header
Nov 13 13:44:21 [15883] vm1       crmd: (ringbuffer.c:302   )   debug: qb_rb_close: 	Closing ringbuffer: /dev/shm/qb-cmap-event-15865-15883-33-header
Nov 13 13:44:21 [15878] vm1        cib: ( callbacks.c:761   )    info: cib_process_request: 	Completed cib_modify operation for section nodes: OK (rc=0, origin=local/crmd/3, version=0.0.0)
Nov 13 13:44:21 [15883] vm1       crmd: (   control.c:146   )    info: do_ha_control: 	Connected to the cluster
Nov 13 13:44:21 [15883] vm1       crmd: (       lrm.c:299   )   debug: do_lrm_control: 	Connecting to the LRM
Nov 13 13:44:21 [15883] vm1       crmd: (lrmd_client.:938   )    info: lrmd_ipc_connect: 	Connecting to lrmd
Nov 13 13:44:21 [15880] vm1       lrmd: (      main.c:89    )   trace: lrmd_ipc_accept: 	Connection 0x1b86c10
Nov 13 13:44:21 [15880] vm1       lrmd: (       ipc.c:327   )    info: crm_client_new: 	Connecting 0x1b86c10 for uid=496 gid=492 pid=15883 id=b88cb348-8886-42a5-bb3d-9ea70cadc946
Nov 13 13:44:21 [15880] vm1       lrmd: ( ipc_setup.c:484   )   debug: handle_new_connection: 	IPC credentials authenticated (15880-15883-6)
Nov 13 13:44:21 [15880] vm1       lrmd: (   ipc_shm.c:295   )   debug: qb_ipcs_shm_connect: 	connecting to client [15883]
Nov 13 13:44:21 [15880] vm1       lrmd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:131085; real_size:135168; rb->word_size:33792
Nov 13 13:44:21 [15880] vm1       lrmd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:131085; real_size:135168; rb->word_size:33792
Nov 13 13:44:21 [15880] vm1       lrmd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:131085; real_size:135168; rb->word_size:33792
Nov 13 13:44:21 [15883] vm1       crmd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:131085; real_size:135168; rb->word_size:33792
Nov 13 13:44:21 [15883] vm1       crmd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:131085; real_size:135168; rb->word_size:33792
Nov 13 13:44:21 [15883] vm1       crmd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:131085; real_size:135168; rb->word_size:33792
Nov 13 13:44:21 [15863] vm1 corosync debug   [QB    ] ipcs.c:qb_ipcs_dispatch_connection_request:757 HUP conn (15865-15883-33)
Nov 13 13:44:21 [15863] vm1 corosync debug   [QB    ] ipcs.c:qb_ipcs_disconnect:605 qb_ipcs_disconnect(15865-15883-33) state:2
Nov 13 13:44:21 [15863] vm1 corosync debug   [QB    ] loop_poll_epoll.c:_del:117 epoll_ctl(del): Bad file descriptor (9)
Nov 13 13:44:21 [15863] vm1 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_closed:417 cs_ipcs_connection_closed() 
Nov 13 13:44:21 [15863] vm1 corosync debug   [CMAP  ] cmap.c:cmap_lib_exit_fn:325 exit_fn for conn=0x7fa788262420
Nov 13 13:44:21 [15863] vm1 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_destroyed:390 cs_ipcs_connection_destroyed() 
Nov 13 13:44:21 [15863] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_close:299 Free'ing ringbuffer: /dev/shm/qb-cmap-response-15865-15883-33-header
Nov 13 13:44:21 [15863] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_close:299 Free'ing ringbuffer: /dev/shm/qb-cmap-event-15865-15883-33-header
Nov 13 13:44:21 [15863] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_close:299 Free'ing ringbuffer: /dev/shm/qb-cmap-request-15865-15883-33-header
Nov 13 13:44:21 [15880] vm1       lrmd: (      main.c:99    )   trace: lrmd_ipc_created: 	Connection 0x1b86c10
Nov 13 13:44:21 [15880] vm1       lrmd: (      main.c:179   )   trace: lrmd_server_send_reply: 	sending reply to client (b88cb348-8886-42a5-bb3d-9ea70cadc946) with msg id 6
Nov 13 13:44:21 [15883] vm1       crmd: (       lrm.c:321   )    info: do_lrm_control: 	LRM connection established
Nov 13 13:44:21 [15883] vm1       crmd: (   control.c:768   )    info: do_started: 	Delaying start, no membership data (0000000000100000)
Nov 13 13:44:21 [15883] vm1       crmd: (  messages.c:90    )   debug: register_fsa_input_adv: 	Stalling the FSA pending further input: source=do_started cause=C_FSA_INTERNAL data=(nil) queue=0
Nov 13 13:44:21 [15883] vm1       crmd: (       fsa.c:240   )   debug: s_crmd_fsa: 	Exiting the FSA: queue=0, fsa_actions=0x2, stalled=true
Nov 13 13:44:21 [15883] vm1       crmd: (      main.c:142   )   trace: crmd_init: 	Starting crmd's mainloop
Nov 13 13:44:21 [15883] vm1       crmd: (  corosync.c:191   )    info: pcmk_quorum_notification: 	Membership 12: quorum retained (3)
Nov 13 13:44:21 [15883] vm1       crmd: (  corosync.c:210   )   debug: pcmk_quorum_notification: 	Member[0] 3232261517 
Nov 13 13:44:21 [15883] vm1       crmd: (membership.c:607   )  notice: crm_update_peer_state: 	pcmk_quorum_notification: Node vm1[3232261517] - state is now member (was (null))
Nov 13 13:44:21 [15883] vm1       crmd: ( callbacks.c:124   )    info: peer_update_callback: 	vm1 is now member (was (null))
Nov 13 13:44:21 [15883] vm1       crmd: (  corosync.c:210   )   debug: pcmk_quorum_notification: 	Member[1] 3232261518 
Nov 13 13:44:21 [15883] vm1       crmd: (membership.c:399   )    info: crm_get_peer: 	Created entry e2a0f72f-5a96-4feb-ab42-b7ebcea911d0/0x155bdb0 for node (null)/3232261518 (2 total)
Nov 13 13:44:21 [15883] vm1       crmd: (membership.c:432   )    info: crm_get_peer: 	Node 3232261518 has uuid 3232261518
Nov 13 13:44:21 [15883] vm1       crmd: (  corosync.c:214   )    info: pcmk_quorum_notification: 	Obtaining name for new node 3232261518
Nov 13 13:44:21 [15878] vm1        cib: ( callbacks.c:761   )    info: cib_process_request: 	Completed cib_query operation for section crm_config: OK (rc=0, origin=local/crmd/4, version=0.0.0)
Nov 13 13:44:21 [15863] vm1 corosync debug   [QB    ] ipc_setup.c:handle_new_connection:484 IPC credentials authenticated (15865-15883-33)
Nov 13 13:44:21 [15880] vm1       lrmd: (      lrmd.c:1313  )   debug: process_lrmd_message: 	Processed register operation from b88cb348-8886-42a5-bb3d-9ea70cadc946: rc=0, reply=0, notify=0, exit=4201920
Nov 13 13:44:21 [15863] vm1 corosync debug   [QB    ] ipc_shm.c:qb_ipcs_shm_connect:295 connecting to client [15883]
Nov 13 13:44:21 [15863] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:21 [15863] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:21 [15863] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:21 [15883] vm1       crmd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:21 [15883] vm1       crmd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:21 [15883] vm1       crmd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:21 [15863] vm1 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_created:272 connection created
Nov 13 13:44:21 [15863] vm1 corosync debug   [CMAP  ] cmap.c:cmap_lib_init_fn:306 lib_init_fn: conn=0x7fa7882622c0
Nov 13 13:44:21 [15883] vm1       crmd: (      ipcc.c:378   )   debug: qb_ipcc_disconnect: 	qb_ipcc_disconnect()
Nov 13 13:44:21 [15883] vm1       crmd: (ringbuffer.c:302   )   debug: qb_rb_close: 	Closing ringbuffer: /dev/shm/qb-cmap-request-15865-15883-33-header
Nov 13 13:44:21 [15883] vm1       crmd: (ringbuffer.c:302   )   debug: qb_rb_close: 	Closing ringbuffer: /dev/shm/qb-cmap-response-15865-15883-33-header
Nov 13 13:44:21 [15883] vm1       crmd: (ringbuffer.c:302   )   debug: qb_rb_close: 	Closing ringbuffer: /dev/shm/qb-cmap-event-15865-15883-33-header
Nov 13 13:44:21 [15883] vm1       crmd: (  corosync.c:134   )  notice: corosync_node_name: 	Unable to get node name for nodeid 3232261518
Nov 13 13:44:21 [15883] vm1       crmd: (membership.c:607   )  notice: crm_update_peer_state: 	pcmk_quorum_notification: Node (null)[3232261518] - state is now member (was (null))
Nov 13 13:44:21 [15883] vm1       crmd: (  corosync.c:210   )   debug: pcmk_quorum_notification: 	Member[2] 3232261519 
Nov 13 13:44:21 [15883] vm1       crmd: (membership.c:399   )    info: crm_get_peer: 	Created entry cb5f37c1-f59c-4417-88eb-1a18addbcd1a/0x15598e0 for node (null)/3232261519 (3 total)
Nov 13 13:44:21 [15883] vm1       crmd: (membership.c:432   )    info: crm_get_peer: 	Node 3232261519 has uuid 3232261519
Nov 13 13:44:21 [15883] vm1       crmd: (  corosync.c:214   )    info: pcmk_quorum_notification: 	Obtaining name for new node 3232261519
Nov 13 13:44:21 [15881] vm1      attrd: (      main.c:149   )   debug: attrd_cib_connect: 	CIB signon attempt 2
Nov 13 13:44:21 [15878] vm1        cib: (       ipc.c:327   )    info: crm_client_new: 	Connecting 0x11a3be0 for uid=496 gid=492 pid=15881 id=7dadebe1-bb78-4c38-bc59-37c0c7c29b82
Nov 13 13:44:21 [15878] vm1        cib: ( ipc_setup.c:484   )   debug: handle_new_connection: 	IPC credentials authenticated (15878-15881-11)
Nov 13 13:44:21 [15878] vm1        cib: (   ipc_shm.c:295   )   debug: qb_ipcs_shm_connect: 	connecting to client [15881]
Nov 13 13:44:21 [15878] vm1        cib: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:524301; real_size:528384; rb->word_size:132096
Nov 13 13:44:21 [15878] vm1        cib: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:524301; real_size:528384; rb->word_size:132096
Nov 13 13:44:21 [15878] vm1        cib: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:524301; real_size:528384; rb->word_size:132096
Nov 13 13:44:21 [15863] vm1 corosync debug   [QB    ] ipcs.c:qb_ipcs_dispatch_connection_request:757 HUP conn (15865-15883-33)
Nov 13 13:44:21 [15863] vm1 corosync debug   [QB    ] ipcs.c:qb_ipcs_disconnect:605 qb_ipcs_disconnect(15865-15883-33) state:2
Nov 13 13:44:21 [15863] vm1 corosync debug   [QB    ] loop_poll_epoll.c:_del:117 epoll_ctl(del): Bad file descriptor (9)
Nov 13 13:44:21 [15863] vm1 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_closed:417 cs_ipcs_connection_closed() 
Nov 13 13:44:21 [15863] vm1 corosync debug   [CMAP  ] cmap.c:cmap_lib_exit_fn:325 exit_fn for conn=0x7fa7882622c0
Nov 13 13:44:21 [15863] vm1 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_destroyed:390 cs_ipcs_connection_destroyed() 
Nov 13 13:44:21 [15863] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_close:299 Free'ing ringbuffer: /dev/shm/qb-cmap-response-15865-15883-33-header
Nov 13 13:44:21 [15863] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_close:299 Free'ing ringbuffer: /dev/shm/qb-cmap-event-15865-15883-33-header
Nov 13 13:44:21 [15863] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_close:299 Free'ing ringbuffer: /dev/shm/qb-cmap-request-15865-15883-33-header
Nov 13 13:44:21 [15863] vm1 corosync debug   [QB    ] ipc_setup.c:handle_new_connection:484 IPC credentials authenticated (15865-15883-33)
Nov 13 13:44:21 [15881] vm1      attrd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:524301; real_size:528384; rb->word_size:132096
Nov 13 13:44:21 [15881] vm1      attrd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:524301; real_size:528384; rb->word_size:132096
Nov 13 13:44:21 [15881] vm1      attrd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:524301; real_size:528384; rb->word_size:132096
Nov 13 13:44:21 [15863] vm1 corosync debug   [QB    ] ipc_shm.c:qb_ipcs_shm_connect:295 connecting to client [15883]
Nov 13 13:44:21 [15863] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:21 [15881] vm1      attrd: (cib_native.c:268   )   debug: cib_native_signon_raw: 	Connection to CIB successful
Nov 13 13:44:21 [15881] vm1      attrd: (      main.c:159   )    info: attrd_cib_connect: 	Connected to the CIB after 2 attempts
Nov 13 13:44:21 [15878] vm1        cib: ( callbacks.c:189   )   debug: cib_common_callback_worker: 	Setting cib_refresh_notify callbacks for attrd (7dadebe1-bb78-4c38-bc59-37c0c7c29b82): on
Nov 13 13:44:21 [15881] vm1      attrd: (      main.c:335   )    info: main: 	CIB connection active
Nov 13 13:44:21 [15881] vm1      attrd: (       cpg.c:378   )    info: pcmk_cpg_membership: 	Joined[0.0] attrd.3232261517 
Nov 13 13:44:21 [15881] vm1      attrd: (       cpg.c:384   )    info: pcmk_cpg_membership: 	Member[0.0] attrd.3232261517 
Nov 13 13:44:21 [15881] vm1      attrd: (       cpg.c:378   )    info: pcmk_cpg_membership: 	Joined[1.0] attrd.3232261518 
Nov 13 13:44:21 [15881] vm1      attrd: (       cpg.c:384   )    info: pcmk_cpg_membership: 	Member[1.0] attrd.3232261517 
Nov 13 13:44:21 [15881] vm1      attrd: (membership.c:399   )    info: crm_get_peer: 	Created entry c3446606-a271-488b-a38a-a1269f3c03ad/0x1ff2f00 for node (null)/3232261518 (2 total)
Nov 13 13:44:21 [15881] vm1      attrd: (membership.c:432   )    info: crm_get_peer: 	Node 3232261518 has uuid 3232261518
Nov 13 13:44:21 [15881] vm1      attrd: (       cpg.c:384   )    info: pcmk_cpg_membership: 	Member[1.1] attrd.3232261518 
Nov 13 13:44:21 [15881] vm1      attrd: (membership.c:550   )    info: crm_update_peer_proc: 	pcmk_cpg_membership: Node (null)[3232261518] - corosync-cpg is now online
Nov 13 13:44:21 [15881] vm1      attrd: (membership.c:607   )  notice: crm_update_peer_state: 	attrd_peer_change_cb: Node (null)[3232261518] - state is now member (was (null))
Nov 13 13:44:21 [15881] vm1      attrd: (       cpg.c:378   )    info: pcmk_cpg_membership: 	Joined[2.0] attrd.3232261519 
Nov 13 13:44:21 [15881] vm1      attrd: (       cpg.c:384   )    info: pcmk_cpg_membership: 	Member[2.0] attrd.3232261517 
Nov 13 13:44:21 [15881] vm1      attrd: (       cpg.c:384   )    info: pcmk_cpg_membership: 	Member[2.1] attrd.3232261518 
Nov 13 13:44:21 [15881] vm1      attrd: (membership.c:399   )    info: crm_get_peer: 	Created entry c8bbef81-9b84-4c05-b435-f946b89cb77d/0x1ff2f70 for node (null)/3232261519 (3 total)
Nov 13 13:44:21 [15881] vm1      attrd: (membership.c:432   )    info: crm_get_peer: 	Node 3232261519 has uuid 3232261519
Nov 13 13:44:21 [15881] vm1      attrd: (       cpg.c:384   )    info: pcmk_cpg_membership: 	Member[2.2] attrd.3232261519 
Nov 13 13:44:21 [15881] vm1      attrd: (membership.c:550   )    info: crm_update_peer_proc: 	pcmk_cpg_membership: Node (null)[3232261519] - corosync-cpg is now online
Nov 13 13:44:21 [15881] vm1      attrd: (membership.c:607   )  notice: crm_update_peer_state: 	attrd_peer_change_cb: Node (null)[3232261519] - state is now member (was (null))
Nov 13 13:44:21 [15863] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:21 [15878] vm1        cib: (       ipc.c:327   )    info: crm_client_new: 	Connecting 0x13d6370 for uid=0 gid=0 pid=15879 id=6b4f4e55-9d9b-4e29-aa80-b9806663187b
Nov 13 13:44:21 [15878] vm1        cib: ( ipc_setup.c:484   )   debug: handle_new_connection: 	IPC credentials authenticated (15878-15879-12)
Nov 13 13:44:21 [15878] vm1        cib: (   ipc_shm.c:295   )   debug: qb_ipcs_shm_connect: 	connecting to client [15879]
Nov 13 13:44:21 [15878] vm1        cib: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:524301; real_size:528384; rb->word_size:132096
Nov 13 13:44:21 [15878] vm1        cib: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:524301; real_size:528384; rb->word_size:132096
Nov 13 13:44:21 [15878] vm1        cib: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:524301; real_size:528384; rb->word_size:132096
Nov 13 13:44:21 [15863] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:21 [15883] vm1       crmd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:21 [15883] vm1       crmd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:21 [15883] vm1       crmd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:21 [15879] vm1 stonith-ng: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:524301; real_size:528384; rb->word_size:132096
Nov 13 13:44:21 [15879] vm1 stonith-ng: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:524301; real_size:528384; rb->word_size:132096
Nov 13 13:44:21 [15879] vm1 stonith-ng: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:524301; real_size:528384; rb->word_size:132096
Nov 13 13:44:21 [15879] vm1 stonith-ng: (cib_native.c:268   )   debug: cib_native_signon_raw: 	Connection to CIB successful
Nov 13 13:44:21 [15878] vm1        cib: ( callbacks.c:189   )   debug: cib_common_callback_worker: 	Setting cib_diff_notify callbacks for crmd (6b4f4e55-9d9b-4e29-aa80-b9806663187b): on
Nov 13 13:44:21 [15879] vm1 stonith-ng: (      main.c:978   )  notice: setup_cib: 	Watching for stonith topology changes
Nov 13 13:44:21 [15879] vm1 stonith-ng: ( ipc_setup.c:377   )    info: qb_ipcs_us_publish: 	server name: stonith-ng
Nov 13 13:44:21 [15879] vm1 stonith-ng: (      main.c:1208  )    info: main: 	Starting stonith-ng mainloop
Nov 13 13:44:21 [15879] vm1 stonith-ng: (       cpg.c:378   )    info: pcmk_cpg_membership: 	Joined[0.0] stonith-ng.3232261517 
Nov 13 13:44:21 [15879] vm1 stonith-ng: (       cpg.c:384   )    info: pcmk_cpg_membership: 	Member[0.0] stonith-ng.3232261517 
Nov 13 13:44:21 [15879] vm1 stonith-ng: (       cpg.c:378   )    info: pcmk_cpg_membership: 	Joined[1.0] stonith-ng.3232261518 
Nov 13 13:44:21 [15879] vm1 stonith-ng: (       cpg.c:384   )    info: pcmk_cpg_membership: 	Member[1.0] stonith-ng.3232261517 
Nov 13 13:44:21 [15879] vm1 stonith-ng: (membership.c:399   )    info: crm_get_peer: 	Created entry ed84558a-74e8-4b26-b508-558824c85888/0x1947790 for node (null)/3232261518 (2 total)
Nov 13 13:44:21 [15879] vm1 stonith-ng: (membership.c:432   )    info: crm_get_peer: 	Node 3232261518 has uuid 3232261518
Nov 13 13:44:21 [15879] vm1 stonith-ng: (       cpg.c:384   )    info: pcmk_cpg_membership: 	Member[1.1] stonith-ng.3232261518 
Nov 13 13:44:21 [15879] vm1 stonith-ng: (membership.c:550   )    info: crm_update_peer_proc: 	pcmk_cpg_membership: Node (null)[3232261518] - corosync-cpg is now online
Nov 13 13:44:21 [15879] vm1 stonith-ng: (      main.c:1003  )   debug: st_peer_update_callback: 	Broadcasting our uname because of node 3232261518
Nov 13 13:44:21 [15878] vm1        cib: ( callbacks.c:761   )    info: cib_process_request: 	Completed cib_query operation for section 'all': OK (rc=0, origin=local/crmd/2, version=0.0.0)
Nov 13 13:44:21 [15863] vm1 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_created:272 connection created
Nov 13 13:44:21 [15863] vm1 corosync debug   [CMAP  ] cmap.c:cmap_lib_init_fn:306 lib_init_fn: conn=0x7fa7882622c0
Nov 13 13:44:21 [15883] vm1       crmd: (      ipcc.c:378   )   debug: qb_ipcc_disconnect: 	qb_ipcc_disconnect()
Nov 13 13:44:21 [15883] vm1       crmd: (ringbuffer.c:302   )   debug: qb_rb_close: 	Closing ringbuffer: /dev/shm/qb-cmap-request-15865-15883-33-header
Nov 13 13:44:21 [15883] vm1       crmd: (ringbuffer.c:302   )   debug: qb_rb_close: 	Closing ringbuffer: /dev/shm/qb-cmap-response-15865-15883-33-header
Nov 13 13:44:21 [15883] vm1       crmd: (ringbuffer.c:302   )   debug: qb_rb_close: 	Closing ringbuffer: /dev/shm/qb-cmap-event-15865-15883-33-header
Nov 13 13:44:21 [15883] vm1       crmd: (  corosync.c:134   )  notice: corosync_node_name: 	Unable to get node name for nodeid 3232261519
Nov 13 13:44:21 [15883] vm1       crmd: (membership.c:607   )  notice: crm_update_peer_state: 	pcmk_quorum_notification: Node (null)[3232261519] - state is now member (was (null))
Nov 13 13:44:21 [15883] vm1       crmd: (membership.c:81    )   debug: post_cache_update: 	Updated cache after membership event 12.
Nov 13 13:44:21 [15883] vm1       crmd: (membership.c:95    )   debug: post_cache_update: 	post_cache_update added action A_ELECTION_CHECK to the FSA
Nov 13 13:44:21 [15863] vm1 corosync debug   [QB    ] ipcs.c:qb_ipcs_dispatch_connection_request:757 HUP conn (15865-15883-33)
Nov 13 13:44:21 [15863] vm1 corosync debug   [QB    ] ipcs.c:qb_ipcs_disconnect:605 qb_ipcs_disconnect(15865-15883-33) state:2
Nov 13 13:44:21 [15863] vm1 corosync debug   [QB    ] loop_poll_epoll.c:_del:117 epoll_ctl(del): Bad file descriptor (9)
Nov 13 13:44:21 [15863] vm1 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_closed:417 cs_ipcs_connection_closed() 
Nov 13 13:44:21 [15863] vm1 corosync debug   [CMAP  ] cmap.c:cmap_lib_exit_fn:325 exit_fn for conn=0x7fa7882622c0
Nov 13 13:44:21 [15863] vm1 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_destroyed:390 cs_ipcs_connection_destroyed() 
Nov 13 13:44:21 [15863] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_close:299 Free'ing ringbuffer: /dev/shm/qb-cmap-response-15865-15883-33-header
Nov 13 13:44:21 [15863] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_close:299 Free'ing ringbuffer: /dev/shm/qb-cmap-event-15865-15883-33-header
Nov 13 13:44:21 [15863] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_close:299 Free'ing ringbuffer: /dev/shm/qb-cmap-request-15865-15883-33-header
Nov 13 13:44:21 [15863] vm1 corosync debug   [QB    ] ipc_setup.c:handle_new_connection:484 IPC credentials authenticated (15865-15879-33)
Nov 13 13:44:21 [15863] vm1 corosync debug   [QB    ] ipc_shm.c:qb_ipcs_shm_connect:295 connecting to client [15879]
Nov 13 13:44:21 [15863] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:21 [15863] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:21 [15863] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:21 [15879] vm1 stonith-ng: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:21 [15879] vm1 stonith-ng: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:21 [15879] vm1 stonith-ng: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:21 [15863] vm1 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_created:272 connection created
Nov 13 13:44:21 [15863] vm1 corosync debug   [CMAP  ] cmap.c:cmap_lib_init_fn:306 lib_init_fn: conn=0x7fa7882622c0
Nov 13 13:44:21 [15879] vm1 stonith-ng: (      ipcc.c:378   )   debug: qb_ipcc_disconnect: 	qb_ipcc_disconnect()
Nov 13 13:44:21 [15879] vm1 stonith-ng: (ringbuffer.c:302   )   debug: qb_rb_close: 	Closing ringbuffer: /dev/shm/qb-cmap-request-15865-15879-33-header
Nov 13 13:44:21 [15879] vm1 stonith-ng: (ringbuffer.c:302   )   debug: qb_rb_close: 	Closing ringbuffer: /dev/shm/qb-cmap-response-15865-15879-33-header
Nov 13 13:44:21 [15879] vm1 stonith-ng: (ringbuffer.c:302   )   debug: qb_rb_close: 	Closing ringbuffer: /dev/shm/qb-cmap-event-15865-15879-33-header
Nov 13 13:44:21 [15879] vm1 stonith-ng: (  corosync.c:134   )  notice: corosync_node_name: 	Unable to get node name for nodeid 3232261517
Nov 13 13:44:21 [15879] vm1 stonith-ng: (   cluster.c:338   )  notice: get_node_name: 	Defaulting to uname -n for the local corosync node name
Nov 13 13:44:21 [15879] vm1 stonith-ng: (      main.c:878   )    info: init_cib_cache_cb: 	Updating device list from the cib: init
Nov 13 13:44:21 [15879] vm1 stonith-ng: (      main.c:568   )   trace: fencing_topology_init: 	Pushing in stonith topology
Nov 13 13:44:21 [15879] vm1 stonith-ng: (    unpack.c:98    )   debug: unpack_config: 	STONITH timeout: 60000
Nov 13 13:44:21 [15879] vm1 stonith-ng: (    unpack.c:102   )   debug: unpack_config: 	STONITH of failed nodes is enabled
Nov 13 13:44:21 [15879] vm1 stonith-ng: (    unpack.c:109   )   debug: unpack_config: 	Stop all active resources: false
Nov 13 13:44:21 [15879] vm1 stonith-ng: (    unpack.c:113   )   debug: unpack_config: 	Cluster is symmetric - resources can run anywhere by default
Nov 13 13:44:21 [15879] vm1 stonith-ng: (    unpack.c:118   )   debug: unpack_config: 	Default stickiness: 0
Nov 13 13:44:21 [15879] vm1 stonith-ng: (    unpack.c:155   )   debug: unpack_config: 	On loss of CCM Quorum: Stop ALL resources
Nov 13 13:44:21 [15879] vm1 stonith-ng: (    unpack.c:201   )   debug: unpack_config: 	Node scores: 'red' = -INFINITY, 'yellow' = 0, 'green' = 0
Nov 13 13:44:21 [15879] vm1 stonith-ng: (    unpack.c:486   )    info: unpack_nodes: 	Creating a fake local node
Nov 13 13:44:21 [15879] vm1 stonith-ng: (    unpack.c:509   )   debug: unpack_domains: 	Unpacking domains
Nov 13 13:44:21 [15863] vm1 corosync debug   [QB    ] ipc_setup.c:handle_new_connection:484 IPC credentials authenticated (15865-15883-34)
Nov 13 13:44:21 [15879] vm1 stonith-ng: (       cpg.c:378   )    info: pcmk_cpg_membership: 	Joined[2.0] stonith-ng.3232261519 
Nov 13 13:44:21 [15879] vm1 stonith-ng: (       cpg.c:384   )    info: pcmk_cpg_membership: 	Member[2.0] stonith-ng.3232261517 
Nov 13 13:44:21 [15879] vm1 stonith-ng: (       cpg.c:384   )    info: pcmk_cpg_membership: 	Member[2.1] stonith-ng.3232261518 
Nov 13 13:44:21 [15879] vm1 stonith-ng: (membership.c:399   )    info: crm_get_peer: 	Created entry 0e7ba1e2-ac96-414f-8f91-447bb3876068/0x1948a50 for node (null)/3232261519 (3 total)
Nov 13 13:44:21 [15863] vm1 corosync debug   [QB    ] ipc_shm.c:qb_ipcs_shm_connect:295 connecting to client [15883]
Nov 13 13:44:21 [15879] vm1 stonith-ng: (membership.c:432   )    info: crm_get_peer: 	Node 3232261519 has uuid 3232261519
Nov 13 13:44:21 [15879] vm1 stonith-ng: (       cpg.c:384   )    info: pcmk_cpg_membership: 	Member[2.2] stonith-ng.3232261519 
Nov 13 13:44:21 [15863] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:21 [15879] vm1 stonith-ng: (membership.c:550   )    info: crm_update_peer_proc: 	pcmk_cpg_membership: Node (null)[3232261519] - corosync-cpg is now online
Nov 13 13:44:21 [15879] vm1 stonith-ng: (      main.c:1003  )   debug: st_peer_update_callback: 	Broadcasting our uname because of node 3232261519
Nov 13 13:44:21 [15863] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:21 [15863] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:21 [15883] vm1       crmd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:21 [15883] vm1       crmd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:21 [15883] vm1       crmd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:21 [15863] vm1 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_created:272 connection created
Nov 13 13:44:21 [15863] vm1 corosync debug   [CMAP  ] cmap.c:cmap_lib_init_fn:306 lib_init_fn: conn=0x7fa788263e00
Nov 13 13:44:21 [15863] vm1 corosync debug   [QB    ] ipcs.c:qb_ipcs_dispatch_connection_request:757 HUP conn (15865-15879-33)
Nov 13 13:44:21 [15863] vm1 corosync debug   [QB    ] ipcs.c:qb_ipcs_disconnect:605 qb_ipcs_disconnect(15865-15879-33) state:2
Nov 13 13:44:21 [15863] vm1 corosync debug   [QB    ] loop_poll_epoll.c:_del:117 epoll_ctl(del): Bad file descriptor (9)
Nov 13 13:44:21 [15863] vm1 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_closed:417 cs_ipcs_connection_closed() 
Nov 13 13:44:21 [15863] vm1 corosync debug   [CMAP  ] cmap.c:cmap_lib_exit_fn:325 exit_fn for conn=0x7fa7882622c0
Nov 13 13:44:21 [15863] vm1 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_destroyed:390 cs_ipcs_connection_destroyed() 
Nov 13 13:44:21 [15863] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_close:299 Free'ing ringbuffer: /dev/shm/qb-cmap-response-15865-15879-33-header
Nov 13 13:44:21 [15863] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_close:299 Free'ing ringbuffer: /dev/shm/qb-cmap-event-15865-15879-33-header
Nov 13 13:44:21 [15863] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_close:299 Free'ing ringbuffer: /dev/shm/qb-cmap-request-15865-15879-33-header
Nov 13 13:44:21 [15883] vm1       crmd: (      ipcc.c:378   )   debug: qb_ipcc_disconnect: 	qb_ipcc_disconnect()
Nov 13 13:44:21 [15883] vm1       crmd: (ringbuffer.c:302   )   debug: qb_rb_close: 	Closing ringbuffer: /dev/shm/qb-cmap-request-15865-15883-34-header
Nov 13 13:44:21 [15883] vm1       crmd: (ringbuffer.c:302   )   debug: qb_rb_close: 	Closing ringbuffer: /dev/shm/qb-cmap-response-15865-15883-34-header
Nov 13 13:44:21 [15883] vm1       crmd: (ringbuffer.c:302   )   debug: qb_rb_close: 	Closing ringbuffer: /dev/shm/qb-cmap-event-15865-15883-34-header
Nov 13 13:44:21 [15883] vm1       crmd: (  corosync.c:134   )  notice: corosync_node_name: 	Unable to get node name for nodeid 3232261517
Nov 13 13:44:21 [15883] vm1       crmd: (   cluster.c:338   )  notice: get_node_name: 	Defaulting to uname -n for the local corosync node name
Nov 13 13:44:21 [15883] vm1       crmd: (   control.c:786   )    info: do_started: 	Delaying start, Config not read (0000000000000040)
Nov 13 13:44:21 [15883] vm1       crmd: (  messages.c:90    )   debug: register_fsa_input_adv: 	Stalling the FSA pending further input: source=do_started cause=C_FSA_INTERNAL data=(nil) queue=0
Nov 13 13:44:21 [15883] vm1       crmd: (       fsa.c:240   )   debug: s_crmd_fsa: 	Exiting the FSA: queue=0, fsa_actions=0x200000002, stalled=true
Nov 13 13:44:21 [15883] vm1       crmd: (   control.c:915   )   debug: config_query_callback: 	Call 4 : Parsing CIB options
Nov 13 13:44:21 [15883] vm1       crmd: (   control.c:939   )   debug: config_query_callback: 	Shutdown escalation occurs after: 1200000ms
Nov 13 13:44:21 [15883] vm1       crmd: (   control.c:946   )   debug: config_query_callback: 	Checking for expired actions every 900000ms
Nov 13 13:44:21 [15883] vm1       crmd: (   control.c:812   )   debug: do_started: 	Init server comms
Nov 13 13:44:21 [15883] vm1       crmd: ( ipc_setup.c:377   )    info: qb_ipcs_us_publish: 	server name: crmd
Nov 13 13:44:21 [15883] vm1       crmd: (   control.c:827   )  notice: do_started: 	The local CRM is operational
Nov 13 13:44:21 [15883] vm1       crmd: (  election.c:91    )   debug: do_election_check: 	Ignore election check: we not in an election
Nov 13 13:44:21 [15883] vm1       crmd: (       fsa.c:193   )   debug: s_crmd_fsa: 	Processing I_PENDING: [ state=S_STARTING cause=C_FSA_INTERNAL origin=do_started ]
Nov 13 13:44:21 [15883] vm1       crmd: (      misc.c:47    )    info: do_log: 	FSA: Input I_PENDING from do_started() received in state S_STARTING
Nov 13 13:44:21 [15883] vm1       crmd: (       fsa.c:502   )  notice: do_state_transition: 	State transition S_STARTING -> S_PENDING [ input=I_PENDING cause=C_FSA_INTERNAL origin=do_started ]
Nov 13 13:44:21 [15878] vm1        cib: ( callbacks.c:761   )    info: cib_process_request: 	Completed cib_slave operation for section 'all': OK (rc=0, origin=local/crmd/5, version=0.0.0)
Nov 13 13:44:21 [15863] vm1 corosync debug   [QB    ] ipcs.c:qb_ipcs_dispatch_connection_request:757 HUP conn (15865-15883-34)
Nov 13 13:44:21 [15863] vm1 corosync debug   [QB    ] ipcs.c:qb_ipcs_disconnect:605 qb_ipcs_disconnect(15865-15883-34) state:2
Nov 13 13:44:21 [15863] vm1 corosync debug   [QB    ] loop_poll_epoll.c:_del:117 epoll_ctl(del): Bad file descriptor (9)
Nov 13 13:44:21 [15863] vm1 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_closed:417 cs_ipcs_connection_closed() 
Nov 13 13:44:21 [15863] vm1 corosync debug   [CMAP  ] cmap.c:cmap_lib_exit_fn:325 exit_fn for conn=0x7fa788263e00
Nov 13 13:44:21 [15863] vm1 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_destroyed:390 cs_ipcs_connection_destroyed() 
Nov 13 13:44:21 [15863] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_close:299 Free'ing ringbuffer: /dev/shm/qb-cmap-response-15865-15883-34-header
Nov 13 13:44:21 [15863] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_close:299 Free'ing ringbuffer: /dev/shm/qb-cmap-event-15865-15883-34-header
Nov 13 13:44:21 [15863] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_close:299 Free'ing ringbuffer: /dev/shm/qb-cmap-request-15865-15883-34-header
Nov 13 13:44:21 [15863] vm1 corosync debug   [CPG   ] cpg.c:message_handler_req_exec_cpg_procjoin:1260 got procjoin message from cluster node -1062705778 (r(0) ip(192.168.101.142) r(1) ip(192.168.102.142) ) for pid 31464
Nov 13 13:44:22 [15879] vm1 stonith-ng: (membership.c:404   )    info: crm_get_peer: 	Node 3232261518 is now known as vm2
Nov 13 13:44:22 [15879] vm1 stonith-ng: (      main.c:1003  )   debug: st_peer_update_callback: 	Broadcasting our uname because of node 3232261518
Nov 13 13:44:22 [15863] vm1 corosync debug   [CPG   ] cpg.c:message_handler_req_exec_cpg_procjoin:1260 got procjoin message from cluster node -1062705777 (r(0) ip(192.168.101.143) r(1) ip(192.168.102.143) ) for pid 469
Nov 13 13:44:22 [15879] vm1 stonith-ng: (membership.c:404   )    info: crm_get_peer: 	Node 3232261519 is now known as vm3
Nov 13 13:44:22 [15879] vm1 stonith-ng: (      main.c:1003  )   debug: st_peer_update_callback: 	Broadcasting our uname because of node 3232261519
Nov 13 13:44:22 [15883] vm1       crmd: (join_client.:46    )   debug: do_cl_join_query: 	Querying for a DC
Nov 13 13:44:22 [15883] vm1       crmd: (     utils.c:192   )   debug: crm_timer_start: 	Started Election Trigger (I_DC_TIMEOUT:20000ms), src=17
Nov 13 13:44:22 [15883] vm1       crmd: (       cpg.c:378   )    info: pcmk_cpg_membership: 	Joined[0.0] crmd.3232261517 
Nov 13 13:44:22 [15883] vm1       crmd: (       cpg.c:384   )    info: pcmk_cpg_membership: 	Member[0.0] crmd.3232261517 
Nov 13 13:44:22 [15883] vm1       crmd: (       cpg.c:378   )    info: pcmk_cpg_membership: 	Joined[1.0] crmd.3232261518 
Nov 13 13:44:22 [15883] vm1       crmd: (       cpg.c:384   )    info: pcmk_cpg_membership: 	Member[1.0] crmd.3232261517 
Nov 13 13:44:22 [15883] vm1       crmd: (       cpg.c:384   )    info: pcmk_cpg_membership: 	Member[1.1] crmd.3232261518 
Nov 13 13:44:22 [15883] vm1       crmd: (membership.c:550   )    info: crm_update_peer_proc: 	pcmk_cpg_membership: Node (null)[3232261518] - corosync-cpg is now online
Nov 13 13:44:22 [15883] vm1       crmd: (       cpg.c:378   )    info: pcmk_cpg_membership: 	Joined[2.0] crmd.3232261519 
Nov 13 13:44:22 [15883] vm1       crmd: (       cpg.c:384   )    info: pcmk_cpg_membership: 	Member[2.0] crmd.3232261517 
Nov 13 13:44:22 [15883] vm1       crmd: (       cpg.c:384   )    info: pcmk_cpg_membership: 	Member[2.1] crmd.3232261518 
Nov 13 13:44:22 [15883] vm1       crmd: (       cpg.c:384   )    info: pcmk_cpg_membership: 	Member[2.2] crmd.3232261519 
Nov 13 13:44:22 [15883] vm1       crmd: (membership.c:550   )    info: crm_update_peer_proc: 	pcmk_cpg_membership: Node (null)[3232261519] - corosync-cpg is now online
Nov 13 13:44:22 [15883] vm1       crmd: (membership.c:404   )    info: crm_get_peer: 	Node 3232261518 is now known as vm2
Nov 13 13:44:22 [15883] vm1       crmd: ( callbacks.c:118   )    info: peer_update_callback: 	vm2 is now member
Nov 13 13:44:22 [15883] vm1       crmd: (membership.c:404   )    info: crm_get_peer: 	Node 3232261519 is now known as vm3
Nov 13 13:44:22 [15883] vm1       crmd: ( callbacks.c:118   )    info: peer_update_callback: 	vm3 is now member
Nov 13 13:44:22 [15883] vm1       crmd: (  te_utils.c:248   )   debug: te_connect_stonith: 	Attempting connection to fencing daemon...
Nov 13 13:44:23 [15878] vm1        cib: (       ipc.c:327   )    info: crm_client_new: 	Connecting 0x13d68d0 for uid=0 gid=0 pid=14287 id=7e66a1ab-3f5c-494d-ace7-2c089552d090
Nov 13 13:44:23 [15878] vm1        cib: ( ipc_setup.c:484   )   debug: handle_new_connection: 	IPC credentials authenticated (15878-14287-13)
Nov 13 13:44:23 [15878] vm1        cib: (   ipc_shm.c:295   )   debug: qb_ipcs_shm_connect: 	connecting to client [14287]
Nov 13 13:44:23 [15878] vm1        cib: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:524301; real_size:528384; rb->word_size:132096
Nov 13 13:44:23 [15878] vm1        cib: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:524301; real_size:528384; rb->word_size:132096
Nov 13 13:44:23 [15878] vm1        cib: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:524301; real_size:528384; rb->word_size:132096
Nov 13 13:44:23 [15878] vm1        cib: ( callbacks.c:761   )    info: cib_process_request: 	Completed cib_query operation for section 'all': OK (rc=0, origin=local/crm_mon/5, version=0.0.0)
Nov 13 13:44:23 [15878] vm1        cib: ( callbacks.c:189   )   debug: cib_common_callback_worker: 	Setting cib_diff_notify callbacks for crm_mon (7e66a1ab-3f5c-494d-ace7-2c089552d090): off
Nov 13 13:44:23 [15878] vm1        cib: ( callbacks.c:189   )   debug: cib_common_callback_worker: 	Setting cib_diff_notify callbacks for crm_mon (7e66a1ab-3f5c-494d-ace7-2c089552d090): on
Nov 13 13:44:23 [15879] vm1 stonith-ng: (       ipc.c:327   )    info: crm_client_new: 	Connecting 0x194ab50 for uid=496 gid=492 pid=15883 id=e6cedc3f-e233-4a66-9291-ce359bd76aad
Nov 13 13:44:23 [15879] vm1 stonith-ng: ( ipc_setup.c:484   )   debug: handle_new_connection: 	IPC credentials authenticated (15879-15883-9)
Nov 13 13:44:23 [15879] vm1 stonith-ng: (   ipc_shm.c:295   )   debug: qb_ipcs_shm_connect: 	connecting to client [15883]
Nov 13 13:44:23 [15879] vm1 stonith-ng: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:131085; real_size:135168; rb->word_size:33792
Nov 13 13:44:23 [15879] vm1 stonith-ng: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:131085; real_size:135168; rb->word_size:33792
Nov 13 13:44:23 [15879] vm1 stonith-ng: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:131085; real_size:135168; rb->word_size:33792
Nov 13 13:44:23 [15879] vm1 stonith-ng: (      main.c:87    )   trace: st_ipc_created: 	Connection created for 0x194ab50
Nov 13 13:44:23 [15883] vm1       crmd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:131085; real_size:135168; rb->word_size:33792
Nov 13 13:44:23 [15883] vm1       crmd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:131085; real_size:135168; rb->word_size:33792
Nov 13 13:44:23 [15883] vm1       crmd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:131085; real_size:135168; rb->word_size:33792
Nov 13 13:44:23 [15879] vm1 stonith-ng: (      main.c:121   )   trace: st_ipc_dispatch: 	Flags 512/0 for command 9 from crmd.15883
Nov 13 13:44:23 [15879] vm1 stonith-ng: (      main.c:133   )   trace: st_ipc_dispatch: 	Client[inbound]   <stonith_command t="stonith-ng" st_op="register" st_clientname="crmd.15883" st_clientid="e6cedc3f-e233-4a66-9291-ce359bd76aad" st_clientnode="vm1"/>
Nov 13 13:44:23 [15879] vm1 stonith-ng: (  commands.c:2049  )   debug: stonith_command: 	Processing register 9 from crmd.15883 (               0)
Nov 13 13:44:23 [15883] vm1       crmd: ( st_client.c:1639  )   debug: stonith_api_signon: 	Connection to STONITH successful
Nov 13 13:44:23 [15879] vm1 stonith-ng: (  commands.c:2063  )    info: stonith_command: 	Processed register from crmd.15883: OK (0)
Nov 13 13:44:23 [15879] vm1 stonith-ng: (      main.c:121   )   trace: st_ipc_dispatch: 	Flags 512/0 for command 10 from crmd.15883
Nov 13 13:44:23 [15879] vm1 stonith-ng: (      main.c:133   )   trace: st_ipc_dispatch: 	Client[inbound]   <stonith_set_notification st_op="st_notify" st_notify_activate="st_notify_disconnect" st_clientid="e6cedc3f-e233-4a66-9291-ce359bd76aad" st_clientname="crmd.15883" st_clientnode="vm1"/>
Nov 13 13:44:23 [15879] vm1 stonith-ng: (  commands.c:2049  )   debug: stonith_command: 	Processing st_notify 10 from crmd.15883 (               0)
Nov 13 13:44:23 [15879] vm1 stonith-ng: (  commands.c:1822  )   debug: handle_request: 	Setting st_notify_disconnect callbacks for crmd.15883 (e6cedc3f-e233-4a66-9291-ce359bd76aad): ON
Nov 13 13:44:23 [15879] vm1 stonith-ng: (  commands.c:2063  )    info: stonith_command: 	Processed st_notify from crmd.15883: OK (0)
Nov 13 13:44:23 [15879] vm1 stonith-ng: (      main.c:121   )   trace: st_ipc_dispatch: 	Flags 512/0 for command 11 from crmd.15883
Nov 13 13:44:23 [15879] vm1 stonith-ng: (      main.c:133   )   trace: st_ipc_dispatch: 	Client[inbound]   <stonith_set_notification st_op="st_notify" st_notify_activate="st_notify_fence" st_clientid="e6cedc3f-e233-4a66-9291-ce359bd76aad" st_clientname="crmd.15883" st_clientnode="vm1"/>
Nov 13 13:44:23 [15879] vm1 stonith-ng: (  commands.c:2049  )   debug: stonith_command: 	Processing st_notify 11 from crmd.15883 (               0)
Nov 13 13:44:23 [15879] vm1 stonith-ng: (  commands.c:1822  )   debug: handle_request: 	Setting st_notify_fence callbacks for crmd.15883 (e6cedc3f-e233-4a66-9291-ce359bd76aad): ON
Nov 13 13:44:23 [15879] vm1 stonith-ng: (  commands.c:2063  )    info: stonith_command: 	Processed st_notify from crmd.15883: OK (0)
Nov 13 13:44:42 [15883] vm1       crmd: (     utils.c:120   )    info: crm_timer_popped: 	Election Trigger (I_DC_TIMEOUT) just popped (20000ms)
Nov 13 13:44:42 [15883] vm1       crmd: (       fsa.c:193   )   debug: s_crmd_fsa: 	Processing I_DC_TIMEOUT: [ state=S_PENDING cause=C_TIMER_POPPED origin=crm_timer_popped ]
Nov 13 13:44:42 [15883] vm1       crmd: (      misc.c:47    ) warning: do_log: 	FSA: Input I_DC_TIMEOUT from crm_timer_popped() received in state S_PENDING
Nov 13 13:44:42 [15883] vm1       crmd: (       fsa.c:502   )    info: do_state_transition: 	State transition S_PENDING -> S_ELECTION [ input=I_DC_TIMEOUT cause=C_TIMER_POPPED origin=crm_timer_popped ]
Nov 13 13:44:42 [15883] vm1       crmd: (  election.c:172   )   debug: crm_uptime: 	Current CPU usage is: 0s, 31995us
Nov 13 13:44:42 [15883] vm1       crmd: (  election.c:242   )   debug: election_vote: 	Started election 1
Nov 13 13:44:42 [15883] vm1       crmd: (  election.c:352   )   debug: election_count_vote: 	Created voted hash
Nov 13 13:44:42 [15883] vm1       crmd: (  election.c:485   )   debug: election_count_vote: 	Election 1 (current: 1, owner: 3232261517): Processed vote from vm1 (Recorded)
Nov 13 13:44:42 [15883] vm1       crmd: (  election.c:297   )   debug: election_check: 	Still waiting on 2 non-votes (3 total)
Nov 13 13:44:42 [15883] vm1       crmd: (  election.c:485   )   debug: election_count_vote: 	Election 1 (current: 1, owner: 3232261517): Processed no-vote from vm2 (Recorded)
Nov 13 13:44:42 [15883] vm1       crmd: (  election.c:297   )   debug: election_check: 	Still waiting on 1 non-votes (3 total)
Nov 13 13:44:42 [15883] vm1       crmd: (  election.c:485   )   debug: election_count_vote: 	Election 1 (current: 1, owner: 3232261517): Processed no-vote from vm3 (Recorded)
Nov 13 13:44:42 [15883] vm1       crmd: (  election.c:50    )    info: election_timer_cb: 	Election election-0 complete
Nov 13 13:44:42 [15883] vm1       crmd: (   control.c:60    )    info: election_timeout_popped: 	Election failed: Declaring ourselves the winner
Nov 13 13:44:42 [15883] vm1       crmd: (       fsa.c:193   )   debug: s_crmd_fsa: 	Processing I_ELECTION_DC: [ state=S_ELECTION cause=C_TIMER_POPPED origin=election_timeout_popped ]
Nov 13 13:44:42 [15883] vm1       crmd: (      misc.c:47    )    info: do_log: 	FSA: Input I_ELECTION_DC from election_timeout_popped() received in state S_ELECTION
Nov 13 13:44:42 [15883] vm1       crmd: (       fsa.c:502   )  notice: do_state_transition: 	State transition S_ELECTION -> S_INTEGRATION [ input=I_ELECTION_DC cause=C_TIMER_POPPED origin=election_timeout_popped ]
Nov 13 13:44:42 [15883] vm1       crmd: (   tengine.c:107   )    info: do_te_control: 	Registering TE UUID: 154fb289-24e8-407e-9a03-69a510480b60
Nov 13 13:44:42 [15878] vm1        cib: ( callbacks.c:189   )   debug: cib_common_callback_worker: 	Setting cib_diff_notify callbacks for crmd (7bc8a5ad-4bdd-48b2-981b-bfd88861945f): on
Nov 13 13:44:42 [15883] vm1       crmd: (     utils.c:72    )    info: set_graph_functions: 	Setting custom graph functions
Nov 13 13:44:42 [15883] vm1       crmd: (   tengine.c:128   )   debug: do_te_control: 	Transitioner is now active
Nov 13 13:44:42 [15883] vm1       crmd: (    unpack.c:230   )   debug: unpack_graph: 	Unpacked transition -1: 0 actions in 0 synapses
Nov 13 13:44:42 [15882] vm1    pengine: (      main.c:49    )   trace: pe_ipc_accept: 	Connection 0x1caa6f0
Nov 13 13:44:42 [15882] vm1    pengine: (       ipc.c:327   )    info: crm_client_new: 	Connecting 0x1caa6f0 for uid=496 gid=492 pid=15883 id=c9c42a36-df9d-47e0-a006-f20302ba50d7
Nov 13 13:44:42 [15882] vm1    pengine: ( ipc_setup.c:484   )   debug: handle_new_connection: 	IPC credentials authenticated (15882-15883-6)
Nov 13 13:44:42 [15882] vm1    pengine: (   ipc_shm.c:295   )   debug: qb_ipcs_shm_connect: 	connecting to client [15883]
Nov 13 13:44:42 [15882] vm1    pengine: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:5242893; real_size:5246976; rb->word_size:1311744
Nov 13 13:44:42 [15882] vm1    pengine: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:5242893; real_size:5246976; rb->word_size:1311744
Nov 13 13:44:42 [15882] vm1    pengine: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:5242893; real_size:5246976; rb->word_size:1311744
Nov 13 13:44:42 [15883] vm1       crmd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:5242893; real_size:5246976; rb->word_size:1311744
Nov 13 13:44:42 [15883] vm1       crmd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:5242893; real_size:5246976; rb->word_size:1311744
Nov 13 13:44:42 [15883] vm1       crmd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:5242893; real_size:5246976; rb->word_size:1311744
Nov 13 13:44:42 [15883] vm1       crmd: (     utils.c:192   )   debug: crm_timer_start: 	Started Integration Timer (I_INTEGRATED:180000ms), src=21
Nov 13 13:44:42 [15883] vm1       crmd: (  election.c:178   )    info: do_dc_takeover: 	Taking over DC status for this partition
Nov 13 13:44:42 [15878] vm1        cib: (  messages.c:162   )    info: cib_process_readwrite: 	We are now in R/W mode
Nov 13 13:44:42 [15878] vm1        cib: ( callbacks.c:761   )    info: cib_process_request: 	Completed cib_master operation for section 'all': OK (rc=0, origin=local/crmd/6, version=0.0.0)
Nov 13 13:44:42 [15879] vm1 stonith-ng: ( cib_utils.c:167   )   debug: Config update: 	Diff: --- 0.0.0
Nov 13 13:44:42 [15879] vm1 stonith-ng: ( cib_utils.c:169   )   debug: Config update: 	Diff: +++ 0.0.1 335eff11d8e47ed96126ba44f4ec45e7
Nov 13 13:44:42 [15879] vm1 stonith-ng: (       xml.c:1496  )   debug: Config update: 	-- <cib num_updates="0"/>
Nov 13 13:44:42 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++ <cib epoch="0" num_updates="1" admin_epoch="0" validate-with="pacemaker-1.2" crm_feature_set="3.0.8"/>
Nov 13 13:44:42 [15878] vm1        cib: ( callbacks.c:761   )    info: cib_process_request: 	Completed cib_modify operation for section cib: OK (rc=0, origin=local/crmd/7, version=0.0.1)
Nov 13 13:44:42 [15863] vm1 corosync debug   [QB    ] ipc_setup.c:handle_new_connection:484 IPC credentials authenticated (15865-15878-33)
Nov 13 13:44:42 [15863] vm1 corosync debug   [QB    ] ipc_shm.c:qb_ipcs_shm_connect:295 connecting to client [15878]
Nov 13 13:44:42 [15863] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:42 [15882] vm1    pengine: (      main.c:59    )   trace: pe_ipc_created: 	Connection 0x1caa6f0
Nov 13 13:44:42 [15863] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:42 [15863] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:42 [15878] vm1        cib: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:42 [15878] vm1        cib: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:42 [15878] vm1        cib: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:42 [15863] vm1 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_created:272 connection created
Nov 13 13:44:42 [15863] vm1 corosync debug   [CMAP  ] cmap.c:cmap_lib_init_fn:306 lib_init_fn: conn=0x7fa7882647b0
Nov 13 13:44:42 [15878] vm1        cib: (      ipcc.c:378   )   debug: qb_ipcc_disconnect: 	qb_ipcc_disconnect()
Nov 13 13:44:42 [15878] vm1        cib: (ringbuffer.c:302   )   debug: qb_rb_close: 	Closing ringbuffer: /dev/shm/qb-cmap-request-15865-15878-33-header
Nov 13 13:44:42 [15878] vm1        cib: (ringbuffer.c:302   )   debug: qb_rb_close: 	Closing ringbuffer: /dev/shm/qb-cmap-response-15865-15878-33-header
Nov 13 13:44:42 [15878] vm1        cib: (ringbuffer.c:302   )   debug: qb_rb_close: 	Closing ringbuffer: /dev/shm/qb-cmap-event-15865-15878-33-header
Nov 13 13:44:42 [15878] vm1        cib: (  corosync.c:134   )  notice: corosync_node_name: 	Unable to get node name for nodeid 3232261517
Nov 13 13:44:42 [15878] vm1        cib: (   cluster.c:338   )  notice: get_node_name: 	Defaulting to uname -n for the local corosync node name
Nov 13 13:44:42 [15878] vm1        cib: (   cib_ops.c:905   )   debug: cib_process_xpath: 	cib_query: //cib/configuration/crm_config//cluster_property_set//nvpair[@name='dc-version'] does not exist
Nov 13 13:44:42 [15878] vm1        cib: ( callbacks.c:761   )    info: cib_process_request: 	Completed cib_query operation for section //cib/configuration/crm_config//cluster_property_set//nvpair[@name='dc-version']: No such device or address (rc=-6, origin=local/crmd/8, version=0.0.1)
Nov 13 13:44:42 [15878] vm1        cib: (        io.c:596   )   debug: activateCibXml: 	Triggering CIB write for cib_modify op
Nov 13 13:44:42 [15879] vm1 stonith-ng: ( cib_utils.c:174   )   debug: log_cib_diff: 	Config update: Local-only Change: 0.1.1
Nov 13 13:44:42 [15879] vm1 stonith-ng: (       xml.c:1496  )   debug: Config update: 	-- <cib admin_epoch="0" epoch="0" num_updates="1"/>
Nov 13 13:44:42 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+  <cib epoch="1" num_updates="1" admin_epoch="0" validate-with="pacemaker-1.2" crm_feature_set="3.0.8" cib-last-written="Wed Nov 13 13:44:42 2013" update-origin="vm1" update-client="crmd">
Nov 13 13:44:42 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+    <configuration>
Nov 13 13:44:42 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+      <crm_config>
Nov 13 13:44:42 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++       <cluster_property_set id="cib-bootstrap-options">
Nov 13 13:44:42 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++         <nvpair id="cib-bootstrap-options-dc-version" name="dc-version" value="1.1.10-2383f6c"/>
Nov 13 13:44:42 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++       </cluster_property_set>
Nov 13 13:44:42 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+      </crm_config>
Nov 13 13:44:42 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+    </configuration>
Nov 13 13:44:42 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+  </cib>
Nov 13 13:44:42 [15878] vm1        cib: ( cib_utils.c:174   )  notice: log_cib_diff: 	cib:diff: Local-only Change: 0.1.1
Nov 13 13:44:42 [15878] vm1        cib: (       xml.c:1496  )  notice: cib:diff: 	-- <cib admin_epoch="0" epoch="0" num_updates="1"/>
Nov 13 13:44:42 [15878] vm1        cib: (       xml.c:1507  )  notice: cib:diff: 	++       <cluster_property_set id="cib-bootstrap-options">
Nov 13 13:44:42 [15878] vm1        cib: (       xml.c:1507  )  notice: cib:diff: 	++         <nvpair id="cib-bootstrap-options-dc-version" name="dc-version" value="1.1.10-2383f6c"/>
Nov 13 13:44:42 [15878] vm1        cib: (       xml.c:1507  )  notice: cib:diff: 	++       </cluster_property_set>
Nov 13 13:44:42 [15863] vm1 corosync debug   [QB    ] ipcs.c:qb_ipcs_dispatch_connection_request:757 HUP conn (15865-15878-33)
Nov 13 13:44:42 [15863] vm1 corosync debug   [QB    ] ipcs.c:qb_ipcs_disconnect:605 qb_ipcs_disconnect(15865-15878-33) state:2
Nov 13 13:44:42 [15863] vm1 corosync debug   [QB    ] loop_poll_epoll.c:_del:117 epoll_ctl(del): Bad file descriptor (9)
Nov 13 13:44:42 [15878] vm1        cib: ( callbacks.c:761   )    info: cib_process_request: 	Completed cib_modify operation for section crm_config: OK (rc=0, origin=local/crmd/9, version=0.1.1)
Nov 13 13:44:42 [15878] vm1        cib: (   cib_ops.c:905   )   debug: cib_process_xpath: 	cib_query: //cib/configuration/crm_config//cluster_property_set//nvpair[@name='cluster-infrastructure'] does not exist
Nov 13 13:44:42 [15863] vm1 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_closed:417 cs_ipcs_connection_closed() 
Nov 13 13:44:42 [15863] vm1 corosync debug   [CMAP  ] cmap.c:cmap_lib_exit_fn:325 exit_fn for conn=0x7fa7882647b0
Nov 13 13:44:42 [15863] vm1 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_destroyed:390 cs_ipcs_connection_destroyed() 
Nov 13 13:44:42 [15863] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_close:299 Free'ing ringbuffer: /dev/shm/qb-cmap-response-15865-15878-33-header
Nov 13 13:44:42 [15863] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_close:299 Free'ing ringbuffer: /dev/shm/qb-cmap-event-15865-15878-33-header
Nov 13 13:44:42 [15878] vm1        cib: ( callbacks.c:761   )    info: cib_process_request: 	Completed cib_query operation for section //cib/configuration/crm_config//cluster_property_set//nvpair[@name='cluster-infrastructure']: No such device or address (rc=-6, origin=local/crmd/10, version=0.1.1)
Nov 13 13:44:42 [15883] vm1       crmd: (   join_dc.c:82    )   debug: initialize_join: 	join-1: Initializing join data (flag=true)
Nov 13 13:44:42 [15883] vm1       crmd: (   join_dc.c:125   )    info: join_make_offer: 	Making join offers based on membership 12
Nov 13 13:44:42 [15883] vm1       crmd: (   join_dc.c:140   )    info: join_make_offer: 	join-1: Sending offer to vm3
Nov 13 13:44:42 [15883] vm1       crmd: (   join_dc.c:66    )    info: crm_update_peer_join: 	join_make_offer: Node vm3[3232261519] - join-1 phase 0 -> 1
Nov 13 13:44:42 [15883] vm1       crmd: (   join_dc.c:140   )    info: join_make_offer: 	join-1: Sending offer to vm1
Nov 13 13:44:42 [15883] vm1       crmd: (   join_dc.c:66    )    info: crm_update_peer_join: 	join_make_offer: Node vm1[3232261517] - join-1 phase 0 -> 1
Nov 13 13:44:42 [15883] vm1       crmd: (   join_dc.c:140   )    info: join_make_offer: 	join-1: Sending offer to vm2
Nov 13 13:44:42 [15883] vm1       crmd: (   join_dc.c:66    )    info: crm_update_peer_join: 	join_make_offer: Node vm2[3232261518] - join-1 phase 0 -> 1
Nov 13 13:44:42 [15883] vm1       crmd: (   join_dc.c:173   )    info: do_dc_join_offer_all: 	join-1: Waiting on 3 outstanding join acks
Nov 13 13:44:42 [15883] vm1       crmd: (       fsa.c:193   )   debug: s_crmd_fsa: 	Processing I_ELECTION_DC: [ state=S_INTEGRATION cause=C_FSA_INTERNAL origin=do_election_check ]
Nov 13 13:44:42 [15883] vm1       crmd: (      misc.c:47    ) warning: do_log: 	FSA: Input I_ELECTION_DC from do_election_check() received in state S_INTEGRATION
Nov 13 13:44:42 [15883] vm1       crmd: (  election.c:242   )   debug: election_vote: 	Started election 2
Nov 13 13:44:42 [15883] vm1       crmd: (   join_dc.c:82    )   debug: initialize_join: 	join-2: Initializing join data (flag=true)
Nov 13 13:44:42 [15883] vm1       crmd: (   join_dc.c:61    )    info: crm_update_peer_join: 	initialize_join: Node vm3[3232261519] - join-2 phase 1 -> 0
Nov 13 13:44:42 [15883] vm1       crmd: (   join_dc.c:61    )    info: crm_update_peer_join: 	initialize_join: Node vm1[3232261517] - join-2 phase 1 -> 0
Nov 13 13:44:42 [15883] vm1       crmd: (   join_dc.c:61    )    info: crm_update_peer_join: 	initialize_join: Node vm2[3232261518] - join-2 phase 1 -> 0
Nov 13 13:44:42 [15883] vm1       crmd: (   join_dc.c:140   )    info: join_make_offer: 	join-2: Sending offer to vm3
Nov 13 13:44:42 [15883] vm1       crmd: (   join_dc.c:66    )    info: crm_update_peer_join: 	join_make_offer: Node vm3[3232261519] - join-2 phase 0 -> 1
Nov 13 13:44:42 [15883] vm1       crmd: (   join_dc.c:140   )    info: join_make_offer: 	join-2: Sending offer to vm1
Nov 13 13:44:42 [15883] vm1       crmd: (   join_dc.c:66    )    info: crm_update_peer_join: 	join_make_offer: Node vm1[3232261517] - join-2 phase 0 -> 1
Nov 13 13:44:42 [15883] vm1       crmd: (   join_dc.c:140   )    info: join_make_offer: 	join-2: Sending offer to vm2
Nov 13 13:44:42 [15883] vm1       crmd: (   join_dc.c:66    )    info: crm_update_peer_join: 	join_make_offer: Node vm2[3232261518] - join-2 phase 0 -> 1
Nov 13 13:44:42 [15883] vm1       crmd: (   join_dc.c:173   )    info: do_dc_join_offer_all: 	join-2: Waiting on 3 outstanding join acks
Nov 13 13:44:42 [15878] vm1        cib: (        io.c:596   )   debug: activateCibXml: 	Triggering CIB write for cib_modify op
Nov 13 13:44:42 [15879] vm1 stonith-ng: ( cib_utils.c:174   )   debug: log_cib_diff: 	Config update: Local-only Change: 0.2.1
Nov 13 13:44:42 [15879] vm1 stonith-ng: (       xml.c:1496  )   debug: Config update: 	-- <cib admin_epoch="0" epoch="1" num_updates="1"/>
Nov 13 13:44:42 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+  <cib epoch="2" num_updates="1" admin_epoch="0" validate-with="pacemaker-1.2" crm_feature_set="3.0.8" cib-last-written="Wed Nov 13 13:44:42 2013" update-origin="vm1" update-client="crmd">
Nov 13 13:44:42 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+    <configuration>
Nov 13 13:44:42 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+      <crm_config>
Nov 13 13:44:42 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+        <cluster_property_set id="cib-bootstrap-options">
Nov 13 13:44:42 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++         <nvpair id="cib-bootstrap-options-cluster-infrastructure" name="cluster-infrastructure" value="corosync"/>
Nov 13 13:44:42 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+        </cluster_property_set>
Nov 13 13:44:42 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+      </crm_config>
Nov 13 13:44:42 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+    </configuration>
Nov 13 13:44:42 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+  </cib>
Nov 13 13:44:42 [15863] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_close:299 Free'ing ringbuffer: /dev/shm/qb-cmap-request-15865-15878-33-header
Nov 13 13:44:43 [15878] vm1        cib: ( cib_utils.c:174   )  notice: log_cib_diff: 	cib:diff: Local-only Change: 0.2.1
Nov 13 13:44:43 [15878] vm1        cib: (       xml.c:1496  )  notice: cib:diff: 	-- <cib admin_epoch="0" epoch="1" num_updates="1"/>
Nov 13 13:44:43 [15878] vm1        cib: (       xml.c:1507  )  notice: cib:diff: 	++         <nvpair id="cib-bootstrap-options-cluster-infrastructure" name="cluster-infrastructure" value="corosync"/>
Nov 13 13:44:43 [15878] vm1        cib: ( callbacks.c:761   )    info: cib_process_request: 	Completed cib_modify operation for section crm_config: OK (rc=0, origin=local/crmd/11, version=0.2.1)
Nov 13 13:44:43 [15878] vm1        cib: ( callbacks.c:761   )    info: cib_process_request: 	Completed cib_query operation for section crm_config: OK (rc=0, origin=local/crmd/12, version=0.2.1)
Nov 13 13:44:43 [15883] vm1       crmd: (   control.c:915   )   debug: config_query_callback: 	Call 12 : Parsing CIB options
Nov 13 13:44:43 [15883] vm1       crmd: (   control.c:939   )   debug: config_query_callback: 	Shutdown escalation occurs after: 1200000ms
Nov 13 13:44:43 [15883] vm1       crmd: (   control.c:946   )   debug: config_query_callback: 	Checking for expired actions every 900000ms
Nov 13 13:44:43 [15878] vm1        cib: ( callbacks.c:761   )    info: cib_process_request: 	Completed cib_query operation for section crm_config: OK (rc=0, origin=local/crmd/13, version=0.2.1)
Nov 13 13:44:43 [15883] vm1       crmd: (   control.c:915   )   debug: config_query_callback: 	Call 13 : Parsing CIB options
Nov 13 13:44:43 [15883] vm1       crmd: (   control.c:939   )   debug: config_query_callback: 	Shutdown escalation occurs after: 1200000ms
Nov 13 13:44:43 [15883] vm1       crmd: (   control.c:946   )   debug: config_query_callback: 	Checking for expired actions every 900000ms
Nov 13 13:44:43 [15878] vm1        cib: ( callbacks.c:761   )    info: cib_process_request: 	Completed cib_query operation for section crm_config: OK (rc=0, origin=local/crmd/14, version=0.2.1)
Nov 13 13:44:43 [15883] vm1       crmd: (   control.c:915   )   debug: config_query_callback: 	Call 14 : Parsing CIB options
Nov 13 13:44:43 [15883] vm1       crmd: (   control.c:939   )   debug: config_query_callback: 	Shutdown escalation occurs after: 1200000ms
Nov 13 13:44:43 [15883] vm1       crmd: (   control.c:946   )   debug: config_query_callback: 	Checking for expired actions every 900000ms
Nov 13 13:44:43 [15883] vm1       crmd: (  messages.c:729   )   debug: handle_request: 	Raising I_JOIN_OFFER: join-1
Nov 13 13:44:43 [15883] vm1       crmd: (       fsa.c:193   )   debug: s_crmd_fsa: 	Processing I_JOIN_OFFER: [ state=S_INTEGRATION cause=C_HA_MESSAGE origin=route_message ]
Nov 13 13:44:43 [15883] vm1       crmd: (     utils.c:981   )    info: update_dc: 	Set DC to vm1 (3.0.8)
Nov 13 13:44:43 [15883] vm1       crmd: (join_client.:135   )   debug: do_cl_join_offer_respond: 	do_cl_join_offer_respond added action A_DC_TIMER_STOP to the FSA
Nov 13 13:44:43 [15883] vm1       crmd: (  election.c:352   )   debug: election_count_vote: 	Created voted hash
Nov 13 13:44:43 [15883] vm1       crmd: (  election.c:485   )   debug: election_count_vote: 	Election 2 (current: 2, owner: 3232261517): Processed vote from vm1 (Recorded)
Nov 13 13:44:43 [15883] vm1       crmd: (  election.c:91    )   debug: do_election_check: 	Ignore election check: we not in an election
Nov 13 13:44:43 [15883] vm1       crmd: (  messages.c:729   )   debug: handle_request: 	Raising I_JOIN_OFFER: join-2
Nov 13 13:44:43 [15883] vm1       crmd: (       fsa.c:193   )   debug: s_crmd_fsa: 	Processing I_JOIN_OFFER: [ state=S_INTEGRATION cause=C_HA_MESSAGE origin=route_message ]
Nov 13 13:44:43 [15883] vm1       crmd: (join_client.:135   )   debug: do_cl_join_offer_respond: 	do_cl_join_offer_respond added action A_DC_TIMER_STOP to the FSA
Nov 13 13:44:43 [15878] vm1        cib: ( callbacks.c:761   )    info: cib_process_request: 	Completed cib_query operation for section 'all': OK (rc=0, origin=local/crmd/15, version=0.2.1)
Nov 13 13:44:43 [15878] vm1        cib: ( callbacks.c:761   )    info: cib_process_request: 	Completed cib_query operation for section 'all': OK (rc=0, origin=local/crmd/16, version=0.2.1)
Nov 13 13:44:43 [15883] vm1       crmd: (join_client.:157   )   debug: join_query_callback: 	Respond to join offer join-2
Nov 13 13:44:43 [15883] vm1       crmd: (join_client.:158   )   debug: join_query_callback: 	Acknowledging vm1 as our DC
Nov 13 13:44:43 [15878] vm1        cib: (     utils.c:1216  )   debug: get_last_sequence: 	Series file /var/lib/pacemaker/cib/cib.last does not exist
Nov 13 13:44:43 [15878] vm1        cib: (        io.c:738   )    info: write_cib_contents: 	Archived previous version as /var/lib/pacemaker/cib/cib-0.raw
Nov 13 13:44:43 [15878] vm1        cib: (        io.c:748   )   debug: write_cib_contents: 	Writing CIB to disk
Nov 13 13:44:43 [15878] vm1        cib: (        io.c:773   )    info: write_cib_contents: 	Wrote version 0.2.0 of the CIB to disk (digest: 7c397f6c57041145e23f3494e809aec1)
Nov 13 13:44:43 [15883] vm1       crmd: (       fsa.c:193   )   debug: s_crmd_fsa: 	Processing I_JOIN_REQUEST: [ state=S_INTEGRATION cause=C_HA_MESSAGE origin=route_message ]
Nov 13 13:44:43 [15883] vm1       crmd: (   join_dc.c:282   )   debug: do_dc_join_filter_offer: 	Processing req from vm3
Nov 13 13:44:43 [15883] vm1       crmd: (   join_dc.c:303   )   debug: do_dc_join_filter_offer: 	Invalid response from vm3: join-1 vs. join-2
Nov 13 13:44:43 [15883] vm1       crmd: (   join_dc.c:579   )   debug: check_join_state: 	Invoked by do_dc_join_filter_offer in state: S_INTEGRATION
Nov 13 13:44:43 [15883] vm1       crmd: (       fsa.c:193   )   debug: s_crmd_fsa: 	Processing I_JOIN_REQUEST: [ state=S_INTEGRATION cause=C_HA_MESSAGE origin=route_message ]
Nov 13 13:44:43 [15883] vm1       crmd: (   join_dc.c:282   )   debug: do_dc_join_filter_offer: 	Processing req from vm1
Nov 13 13:44:43 [15883] vm1       crmd: (   join_dc.c:341   )   debug: do_dc_join_filter_offer: 	join-2: Welcoming node vm1 (ref join_request-crmd-1384317883-10)
Nov 13 13:44:43 [15883] vm1       crmd: (   join_dc.c:66    )    info: crm_update_peer_join: 	do_dc_join_filter_offer: Node vm1[3232261517] - join-2 phase 1 -> 2
Nov 13 13:44:43 [15883] vm1       crmd: (membership.c:579   )    info: crm_update_peer_expected: 	do_dc_join_filter_offer: Node vm1[3232261517] - expected state is now member (was (null))
Nov 13 13:44:43 [15883] vm1       crmd: (   join_dc.c:348   )   debug: do_dc_join_filter_offer: 	1 nodes have been integrated into join-2
Nov 13 13:44:43 [15883] vm1       crmd: (   join_dc.c:579   )   debug: check_join_state: 	Invoked by do_dc_join_filter_offer in state: S_INTEGRATION
Nov 13 13:44:43 [15883] vm1       crmd: (   join_dc.c:354   )   debug: do_dc_join_filter_offer: 	join-2: Still waiting on 2 outstanding offers
Nov 13 13:44:43 [15878] vm1        cib: (        io.c:781   )   debug: write_cib_contents: 	Wrote digest 7c397f6c57041145e23f3494e809aec1 to disk
Nov 13 13:44:43 [15878] vm1        cib: (        io.c:259   )    info: retrieveCib: 	Reading cluster configuration from: /var/lib/pacemaker/cib/cib.LJIZDB (digest: /var/lib/pacemaker/cib/cib.f4K371)
Nov 13 13:44:43 [15878] vm1        cib: (        io.c:786   )   debug: write_cib_contents: 	Activating /var/lib/pacemaker/cib/cib.LJIZDB
Nov 13 13:44:43 [15883] vm1       crmd: (  election.c:485   )   debug: election_count_vote: 	Election 2 (current: 2, owner: 3232261517): Processed no-vote from vm2 (Recorded)
Nov 13 13:44:43 [15883] vm1       crmd: (  election.c:91    )   debug: do_election_check: 	Ignore election check: we not in an election
Nov 13 13:44:43 [15883] vm1       crmd: (       fsa.c:193   )   debug: s_crmd_fsa: 	Processing I_JOIN_REQUEST: [ state=S_INTEGRATION cause=C_HA_MESSAGE origin=route_message ]
Nov 13 13:44:43 [15883] vm1       crmd: (   join_dc.c:282   )   debug: do_dc_join_filter_offer: 	Processing req from vm2
Nov 13 13:44:43 [15883] vm1       crmd: (   join_dc.c:341   )   debug: do_dc_join_filter_offer: 	join-2: Welcoming node vm2 (ref join_request-crmd-1384317883-4)
Nov 13 13:44:43 [15883] vm1       crmd: (   join_dc.c:66    )    info: crm_update_peer_join: 	do_dc_join_filter_offer: Node vm2[3232261518] - join-2 phase 1 -> 2
Nov 13 13:44:43 [15883] vm1       crmd: (membership.c:579   )    info: crm_update_peer_expected: 	do_dc_join_filter_offer: Node vm2[3232261518] - expected state is now member (was (null))
Nov 13 13:44:43 [15883] vm1       crmd: (   join_dc.c:348   )   debug: do_dc_join_filter_offer: 	2 nodes have been integrated into join-2
Nov 13 13:44:43 [15883] vm1       crmd: (   join_dc.c:579   )   debug: check_join_state: 	Invoked by do_dc_join_filter_offer in state: S_INTEGRATION
Nov 13 13:44:43 [15883] vm1       crmd: (   join_dc.c:354   )   debug: do_dc_join_filter_offer: 	join-2: Still waiting on 1 outstanding offers
Nov 13 13:44:43 [15883] vm1       crmd: (  election.c:485   )   debug: election_count_vote: 	Election 2 (current: 2, owner: 3232261517): Processed no-vote from vm3 (Recorded)
Nov 13 13:44:43 [15883] vm1       crmd: (  election.c:91    )   debug: do_election_check: 	Ignore election check: we not in an election
Nov 13 13:44:43 [15883] vm1       crmd: (       fsa.c:193   )   debug: s_crmd_fsa: 	Processing I_JOIN_REQUEST: [ state=S_INTEGRATION cause=C_HA_MESSAGE origin=route_message ]
Nov 13 13:44:43 [15883] vm1       crmd: (   join_dc.c:282   )   debug: do_dc_join_filter_offer: 	Processing req from vm3
Nov 13 13:44:43 [15883] vm1       crmd: (   join_dc.c:341   )   debug: do_dc_join_filter_offer: 	join-2: Welcoming node vm3 (ref join_request-crmd-1384317883-5)
Nov 13 13:44:43 [15883] vm1       crmd: (   join_dc.c:66    )    info: crm_update_peer_join: 	do_dc_join_filter_offer: Node vm3[3232261519] - join-2 phase 1 -> 2
Nov 13 13:44:43 [15883] vm1       crmd: (membership.c:579   )    info: crm_update_peer_expected: 	do_dc_join_filter_offer: Node vm3[3232261519] - expected state is now member (was (null))
Nov 13 13:44:43 [15883] vm1       crmd: (   join_dc.c:348   )   debug: do_dc_join_filter_offer: 	3 nodes have been integrated into join-2
Nov 13 13:44:43 [15883] vm1       crmd: (   join_dc.c:579   )   debug: check_join_state: 	Invoked by do_dc_join_filter_offer in state: S_INTEGRATION
Nov 13 13:44:43 [15883] vm1       crmd: (   join_dc.c:593   )   debug: check_join_state: 	join-2: Integration of 3 peers complete: do_dc_join_filter_offer
Nov 13 13:44:43 [15883] vm1       crmd: (       fsa.c:193   )   debug: s_crmd_fsa: 	Processing I_INTEGRATED: [ state=S_INTEGRATION cause=C_FSA_INTERNAL origin=check_join_state ]
Nov 13 13:44:43 [15883] vm1       crmd: (       fsa.c:502   )    info: do_state_transition: 	State transition S_INTEGRATION -> S_FINALIZE_JOIN [ input=I_INTEGRATED cause=C_FSA_INTERNAL origin=check_join_state ]
Nov 13 13:44:43 [15883] vm1       crmd: (       fsa.c:590   )   debug: do_state_transition: 	All 3 cluster nodes responded to the join offer.
Nov 13 13:44:43 [15883] vm1       crmd: (     utils.c:192   )   debug: crm_timer_start: 	Started Finalization Timer (I_ELECTION:1800000ms), src=29
Nov 13 13:44:43 [15883] vm1       crmd: (   join_dc.c:372   )   debug: do_dc_join_finalize: 	Finializing join-2 for 3 clients
Nov 13 13:44:43 [15883] vm1       crmd: (   join_dc.c:682   )    info: crmd_join_phase_log: 	join-2: vm3=integrated
Nov 13 13:44:43 [15883] vm1       crmd: (   join_dc.c:682   )    info: crmd_join_phase_log: 	join-2: vm1=integrated
Nov 13 13:44:43 [15883] vm1       crmd: (   join_dc.c:682   )    info: crmd_join_phase_log: 	join-2: vm2=integrated
Nov 13 13:44:43 [15883] vm1       crmd: (   join_dc.c:410   )    info: do_dc_join_finalize: 	join-2: Syncing our CIB to the rest of the cluster
Nov 13 13:44:43 [15883] vm1       crmd: (   join_dc.c:411   )   debug: do_dc_join_finalize: 	Requested version   <generation_tuple epoch="2" num_updates="1" admin_epoch="0" validate-with="pacemaker-1.2" crm_feature_set="3.0.8" cib-last-written="Wed Nov 13 13:44:42 2013" update-origin="vm1" update-client="crmd"/>
Nov 13 13:44:43 [15878] vm1        cib: (  messages.c:435   )   debug: sync_our_cib: 	Syncing CIB to all peers
Nov 13 13:44:43 [15878] vm1        cib: ( callbacks.c:761   )    info: cib_process_request: 	Completed cib_sync operation for section 'all': OK (rc=0, origin=local/crmd/17, version=0.2.1)
Nov 13 13:44:43 [15883] vm1       crmd: (   join_dc.c:579   )   debug: check_join_state: 	Invoked by finalize_sync_callback in state: S_FINALIZE_JOIN
Nov 13 13:44:43 [15883] vm1       crmd: (   join_dc.c:610   )   debug: check_join_state: 	join-2: Still waiting on 3 integrated nodes
Nov 13 13:44:43 [15883] vm1       crmd: (   join_dc.c:682   )   debug: crmd_join_phase_log: 	join-2: vm3=integrated
Nov 13 13:44:43 [15883] vm1       crmd: (   join_dc.c:682   )   debug: crmd_join_phase_log: 	join-2: vm1=integrated
Nov 13 13:44:43 [15883] vm1       crmd: (   join_dc.c:682   )   debug: crmd_join_phase_log: 	join-2: vm2=integrated
Nov 13 13:44:43 [15883] vm1       crmd: (   join_dc.c:438   )   debug: finalize_sync_callback: 	Notifying 3 clients of join-2 results
Nov 13 13:44:43 [15883] vm1       crmd: (   join_dc.c:562   )   debug: finalize_join_for: 	join-2: ACK'ing join request from vm3
Nov 13 13:44:43 [15883] vm1       crmd: (   join_dc.c:66    )    info: crm_update_peer_join: 	finalize_join_for: Node vm3[3232261519] - join-2 phase 2 -> 3
Nov 13 13:44:43 [15883] vm1       crmd: (   join_dc.c:562   )   debug: finalize_join_for: 	join-2: ACK'ing join request from vm1
Nov 13 13:44:43 [15883] vm1       crmd: (   join_dc.c:66    )    info: crm_update_peer_join: 	finalize_join_for: Node vm1[3232261517] - join-2 phase 2 -> 3
Nov 13 13:44:43 [15883] vm1       crmd: (   join_dc.c:562   )   debug: finalize_join_for: 	join-2: ACK'ing join request from vm2
Nov 13 13:44:43 [15883] vm1       crmd: (   join_dc.c:66    )    info: crm_update_peer_join: 	finalize_join_for: Node vm2[3232261518] - join-2 phase 2 -> 3
Nov 13 13:44:43 [15878] vm1        cib: (        io.c:596   )   debug: activateCibXml: 	Triggering CIB write for cib_modify op
Nov 13 13:44:43 [15879] vm1 stonith-ng: ( cib_utils.c:174   )   debug: log_cib_diff: 	Config update: Local-only Change: 0.3.1
Nov 13 13:44:43 [15879] vm1 stonith-ng: (       xml.c:1496  )   debug: Config update: 	-- <cib admin_epoch="0" epoch="2" num_updates="1"/>
Nov 13 13:44:43 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+  <cib epoch="3" num_updates="1" admin_epoch="0" validate-with="pacemaker-1.2" crm_feature_set="3.0.8" cib-last-written="Wed Nov 13 13:44:43 2013" update-origin="vm1" update-client="crmd">
Nov 13 13:44:43 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+    <configuration>
Nov 13 13:44:43 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+      <nodes>
Nov 13 13:44:43 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++       <node id="3232261519" uname="vm3"/>
Nov 13 13:44:43 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+      </nodes>
Nov 13 13:44:43 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+    </configuration>
Nov 13 13:44:43 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+  </cib>
Nov 13 13:44:43 [15878] vm1        cib: ( cib_utils.c:174   )  notice: log_cib_diff: 	cib:diff: Local-only Change: 0.3.1
Nov 13 13:44:43 [15878] vm1        cib: (       xml.c:1496  )  notice: cib:diff: 	-- <cib admin_epoch="0" epoch="2" num_updates="1"/>
Nov 13 13:44:43 [15878] vm1        cib: (       xml.c:1507  )  notice: cib:diff: 	++       <node id="3232261519" uname="vm3"/>
Nov 13 13:44:43 [15878] vm1        cib: ( callbacks.c:761   )    info: cib_process_request: 	Completed cib_modify operation for section nodes: OK (rc=0, origin=local/crmd/18, version=0.3.1)
Nov 13 13:44:43 [15878] vm1        cib: (        io.c:596   )   debug: activateCibXml: 	Triggering CIB write for cib_modify op
Nov 13 13:44:43 [15879] vm1 stonith-ng: ( cib_utils.c:174   )   debug: log_cib_diff: 	Config update: Local-only Change: 0.4.1
Nov 13 13:44:43 [15879] vm1 stonith-ng: (       xml.c:1496  )   debug: Config update: 	-- <cib admin_epoch="0" epoch="3" num_updates="1"/>
Nov 13 13:44:43 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+  <cib epoch="4" num_updates="1" admin_epoch="0" validate-with="pacemaker-1.2" crm_feature_set="3.0.8" cib-last-written="Wed Nov 13 13:44:43 2013" update-origin="vm1" update-client="crmd">
Nov 13 13:44:43 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+    <configuration>
Nov 13 13:44:43 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+      <nodes>
Nov 13 13:44:43 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++       <node id="3232261517" uname="vm1"/>
Nov 13 13:44:43 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+      </nodes>
Nov 13 13:44:43 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+    </configuration>
Nov 13 13:44:43 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+  </cib>
Nov 13 13:44:43 [15878] vm1        cib: ( cib_utils.c:174   )  notice: log_cib_diff: 	cib:diff: Local-only Change: 0.4.1
Nov 13 13:44:43 [15878] vm1        cib: (       xml.c:1496  )  notice: cib:diff: 	-- <cib admin_epoch="0" epoch="3" num_updates="1"/>
Nov 13 13:44:43 [15878] vm1        cib: (       xml.c:1507  )  notice: cib:diff: 	++       <node id="3232261517" uname="vm1"/>
Nov 13 13:44:43 [15878] vm1        cib: ( callbacks.c:761   )    info: cib_process_request: 	Completed cib_modify operation for section nodes: OK (rc=0, origin=local/crmd/19, version=0.4.1)
Nov 13 13:44:43 [15878] vm1        cib: (        io.c:596   )   debug: activateCibXml: 	Triggering CIB write for cib_modify op
Nov 13 13:44:43 [15879] vm1 stonith-ng: ( cib_utils.c:174   )   debug: log_cib_diff: 	Config update: Local-only Change: 0.5.1
Nov 13 13:44:43 [15879] vm1 stonith-ng: (       xml.c:1496  )   debug: Config update: 	-- <cib admin_epoch="0" epoch="4" num_updates="1"/>
Nov 13 13:44:43 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+  <cib epoch="5" num_updates="1" admin_epoch="0" validate-with="pacemaker-1.2" crm_feature_set="3.0.8" cib-last-written="Wed Nov 13 13:44:43 2013" update-origin="vm1" update-client="crmd">
Nov 13 13:44:43 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+    <configuration>
Nov 13 13:44:43 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+      <nodes>
Nov 13 13:44:43 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++       <node id="3232261518" uname="vm2"/>
Nov 13 13:44:43 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+      </nodes>
Nov 13 13:44:43 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+    </configuration>
Nov 13 13:44:43 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+  </cib>
Nov 13 13:44:43 [15883] vm1       crmd: (  messages.c:733   )   debug: handle_request: 	Raising I_JOIN_RESULT: join-2
Nov 13 13:44:43 [15883] vm1       crmd: (       fsa.c:193   )   debug: s_crmd_fsa: 	Processing I_JOIN_RESULT: [ state=S_FINALIZE_JOIN cause=C_HA_MESSAGE origin=route_message ]
Nov 13 13:44:43 [15883] vm1       crmd: (join_client.:231   )   debug: do_cl_join_finalize_respond: 	Confirming join join-2: join_ack_nack
Nov 13 13:44:43 [15883] vm1       crmd: (join_client.:240   )   debug: do_cl_join_finalize_respond: 	join-2: Join complete.  Sending local LRM status to vm1
Nov 13 13:44:43 [15883] vm1       crmd: (     utils.c:1011  )    info: erase_status_tag: 	Deleting xpath: //node_state[@uname='vm1']/transient_attributes
Nov 13 13:44:43 [15883] vm1       crmd: (     utils.c:1032  )    info: update_attrd_helper: 	Connecting to attrd... 5 retries remaining
Nov 13 13:44:43 [15881] vm1      attrd: (      main.c:186   )   trace: attrd_ipc_accept: 	Connection 0x1ff0000
Nov 13 13:44:43 [15881] vm1      attrd: (       ipc.c:327   )    info: crm_client_new: 	Connecting 0x1ff0000 for uid=496 gid=492 pid=15883 id=144c4790-c0bd-4d4b-b789-d7c2462cf699
Nov 13 13:44:43 [15881] vm1      attrd: ( ipc_setup.c:484   )   debug: handle_new_connection: 	IPC credentials authenticated (15881-15883-9)
Nov 13 13:44:43 [15881] vm1      attrd: (   ipc_shm.c:295   )   debug: qb_ipcs_shm_connect: 	connecting to client [15883]
Nov 13 13:44:43 [15881] vm1      attrd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:131085; real_size:135168; rb->word_size:33792
Nov 13 13:44:43 [15881] vm1      attrd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:131085; real_size:135168; rb->word_size:33792
Nov 13 13:44:43 [15878] vm1        cib: ( cib_utils.c:174   )  notice: log_cib_diff: 	cib:diff: Local-only Change: 0.5.1
Nov 13 13:44:43 [15878] vm1        cib: (       xml.c:1496  )  notice: cib:diff: 	-- <cib admin_epoch="0" epoch="4" num_updates="1"/>
Nov 13 13:44:43 [15878] vm1        cib: (       xml.c:1507  )  notice: cib:diff: 	++       <node id="3232261518" uname="vm2"/>
Nov 13 13:44:43 [15878] vm1        cib: ( callbacks.c:761   )    info: cib_process_request: 	Completed cib_modify operation for section nodes: OK (rc=0, origin=local/crmd/20, version=0.5.1)
Nov 13 13:44:43 [15878] vm1        cib: (   cib_ops.c:902   )   debug: cib_process_xpath: 	//node_state[@uname='vm1']/transient_attributes was already removed
Nov 13 13:44:43 [15878] vm1        cib: ( callbacks.c:761   )    info: cib_process_request: 	Completed cib_delete operation for section //node_state[@uname='vm1']/transient_attributes: OK (rc=0, origin=local/crmd/21, version=0.5.1)
Nov 13 13:44:43 [15881] vm1      attrd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:131085; real_size:135168; rb->word_size:33792
Nov 13 13:44:43 [15883] vm1       crmd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:131085; real_size:135168; rb->word_size:33792
Nov 13 13:44:43 [15883] vm1       crmd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:131085; real_size:135168; rb->word_size:33792
Nov 13 13:44:43 [15883] vm1       crmd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:131085; real_size:135168; rb->word_size:33792
Nov 13 13:44:43 [15883] vm1       crmd: (     utils.c:2001  )   debug: attrd_update_delegate: 	Sent update: terminate=(null) for vm1
Nov 13 13:44:43 [15883] vm1       crmd: (     utils.c:2001  )   debug: attrd_update_delegate: 	Sent update: shutdown=(null) for vm1
Nov 13 13:44:43 [15883] vm1       crmd: (   join_dc.c:482   )   debug: do_dc_join_ack: 	Ignoring op=join_ack_nack message from vm1
Nov 13 13:44:43 [15883] vm1       crmd: (     utils.c:998   )   debug: erase_xpath_callback: 	Deletion of "//node_state[@uname='vm1']/transient_attributes": OK (rc=0)
Nov 13 13:44:43 [15883] vm1       crmd: (       fsa.c:193   )   debug: s_crmd_fsa: 	Processing I_JOIN_RESULT: [ state=S_FINALIZE_JOIN cause=C_HA_MESSAGE origin=route_message ]
Nov 13 13:44:43 [15883] vm1       crmd: (   join_dc.c:66    )    info: crm_update_peer_join: 	do_dc_join_ack: Node vm3[3232261519] - join-2 phase 3 -> 4
Nov 13 13:44:43 [15883] vm1       crmd: (   join_dc.c:504   )    info: do_dc_join_ack: 	join-2: Updating node state to member for vm3
Nov 13 13:44:43 [15883] vm1       crmd: (     utils.c:1011  )    info: erase_status_tag: 	Deleting xpath: //node_state[@uname='vm3']/lrm
Nov 13 13:44:43 [15883] vm1       crmd: (   join_dc.c:514   )   debug: do_dc_join_ack: 	join-2: Registered callback for LRM update 23
Nov 13 13:44:43 [15881] vm1      attrd: (      main.c:201   )   trace: attrd_ipc_created: 	Connection 0x1ff0000
Nov 13 13:44:43 [15881] vm1      attrd: (      main.c:221   )   trace: attrd_ipc_dispatch: 	Processing msg from 15883 (0x1ff0000)
Nov 13 13:44:43 [15881] vm1      attrd: (      main.c:222   )   trace: attrd_ipc_dispatch: 	attrd_ipc_dispatch   <attrd_update_delegate t="attrd" src="crmd" task="update" attr_name="terminate" attr_section="status" attr_host="vm1" attr_is_remote="0"/>
Nov 13 13:44:43 [15881] vm1      attrd: (  commands.c:227   )    info: attrd_client_message: 	Starting an election to determine the writer
Nov 13 13:44:43 [15881] vm1      attrd: (  election.c:172   )   debug: crm_uptime: 	Current CPU usage is: 0s, 16997us
Nov 13 13:44:43 [15878] vm1        cib: (   cib_ops.c:902   )   debug: cib_process_xpath: 	//node_state[@uname='vm3']/lrm was already removed
Nov 13 13:44:43 [15878] vm1        cib: ( callbacks.c:761   )    info: cib_process_request: 	Completed cib_delete operation for section //node_state[@uname='vm3']/lrm: OK (rc=0, origin=local/crmd/22, version=0.5.1)
Nov 13 13:44:43 [15883] vm1       crmd: (     utils.c:998   )   debug: erase_xpath_callback: 	Deletion of "//node_state[@uname='vm3']/lrm": OK (rc=0)
Nov 13 13:44:43 [15879] vm1 stonith-ng: ( cib_utils.c:167   )   debug: Config update: 	Diff: --- 0.5.1
Nov 13 13:44:43 [15879] vm1 stonith-ng: ( cib_utils.c:169   )   debug: Config update: 	Diff: +++ 0.5.2 96afa5ea158751708b6aaa2afbd9266e
Nov 13 13:44:43 [15879] vm1 stonith-ng: (       xml.c:1496  )   debug: Config update: 	-- <cib num_updates="1"/>
Nov 13 13:44:43 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+  <cib epoch="5" num_updates="2" admin_epoch="0" validate-with="pacemaker-1.2" crm_feature_set="3.0.8" cib-last-written="Wed Nov 13 13:44:43 2013" update-origin="vm1" update-client="crmd">
Nov 13 13:44:43 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+    <status>
Nov 13 13:44:43 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++     <node_state id="3232261519" uname="vm3" in_ccm="true" crmd="online" crm-debug-origin="do_lrm_query_internal" join="member" expected="member">
Nov 13 13:44:43 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++       <lrm id="3232261519">
Nov 13 13:44:43 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++         <lrm_resources/>
Nov 13 13:44:43 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++       </lrm>
Nov 13 13:44:43 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++     </node_state>
Nov 13 13:44:43 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+    </status>
Nov 13 13:44:43 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+  </cib>
Nov 13 13:44:43 [15878] vm1        cib: ( callbacks.c:761   )    info: cib_process_request: 	Completed cib_modify operation for section status: OK (rc=0, origin=local/crmd/23, version=0.5.2)
Nov 13 13:44:43 [15863] vm1 corosync debug   [QB    ] ipc_setup.c:handle_new_connection:484 IPC credentials authenticated (15865-15881-33)
Nov 13 13:44:43 [15863] vm1 corosync debug   [QB    ] ipc_shm.c:qb_ipcs_shm_connect:295 connecting to client [15881]
Nov 13 13:44:43 [15863] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:43 [15863] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:43 [15878] vm1        cib: (        io.c:738   )    info: write_cib_contents: 	Archived previous version as /var/lib/pacemaker/cib/cib-1.raw
Nov 13 13:44:43 [15878] vm1        cib: (        io.c:748   )   debug: write_cib_contents: 	Writing CIB to disk
Nov 13 13:44:43 [15878] vm1        cib: (        io.c:773   )    info: write_cib_contents: 	Wrote version 0.5.0 of the CIB to disk (digest: 630d79f602055b52fd2ea79fdbd1baf8)
Nov 13 13:44:43 [15863] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:43 [15878] vm1        cib: (        io.c:781   )   debug: write_cib_contents: 	Wrote digest 630d79f602055b52fd2ea79fdbd1baf8 to disk
Nov 13 13:44:43 [15878] vm1        cib: (        io.c:259   )    info: retrieveCib: 	Reading cluster configuration from: /var/lib/pacemaker/cib/cib.cazqnI (digest: /var/lib/pacemaker/cib/cib.gBFD48)
Nov 13 13:44:43 [15878] vm1        cib: (        io.c:786   )   debug: write_cib_contents: 	Activating /var/lib/pacemaker/cib/cib.cazqnI
Nov 13 13:44:43 [15881] vm1      attrd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:43 [15881] vm1      attrd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:43 [15881] vm1      attrd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:43 [15863] vm1 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_created:272 connection created
Nov 13 13:44:43 [15863] vm1 corosync debug   [CMAP  ] cmap.c:cmap_lib_init_fn:306 lib_init_fn: conn=0x7fa7882622c0
Nov 13 13:44:43 [15881] vm1      attrd: (      ipcc.c:378   )   debug: qb_ipcc_disconnect: 	qb_ipcc_disconnect()
Nov 13 13:44:43 [15881] vm1      attrd: (ringbuffer.c:302   )   debug: qb_rb_close: 	Closing ringbuffer: /dev/shm/qb-cmap-request-15865-15881-33-header
Nov 13 13:44:43 [15881] vm1      attrd: (ringbuffer.c:302   )   debug: qb_rb_close: 	Closing ringbuffer: /dev/shm/qb-cmap-response-15865-15881-33-header
Nov 13 13:44:43 [15881] vm1      attrd: (ringbuffer.c:302   )   debug: qb_rb_close: 	Closing ringbuffer: /dev/shm/qb-cmap-event-15865-15881-33-header
Nov 13 13:44:43 [15881] vm1      attrd: (  corosync.c:134   )  notice: corosync_node_name: 	Unable to get node name for nodeid 3232261517
Nov 13 13:44:43 [15881] vm1      attrd: (   cluster.c:338   )  notice: get_node_name: 	Defaulting to uname -n for the local corosync node name
Nov 13 13:44:43 [15881] vm1      attrd: (  election.c:242   )   debug: election_vote: 	Started election 1
Nov 13 13:44:43 [15881] vm1      attrd: (  commands.c:231   )    info: attrd_client_message: 	Broadcasting terminate[vm1] = (null)
Nov 13 13:44:43 [15881] vm1      attrd: (      main.c:221   )   trace: attrd_ipc_dispatch: 	Processing msg from 15883 (0x1ff0000)
Nov 13 13:44:43 [15881] vm1      attrd: (      main.c:222   )   trace: attrd_ipc_dispatch: 	attrd_ipc_dispatch   <attrd_update_delegate t="attrd" src="crmd" task="update" attr_name="shutdown" attr_section="status" attr_host="vm1" attr_is_remote="0"/>
Nov 13 13:44:43 [15881] vm1      attrd: (  commands.c:231   )    info: attrd_client_message: 	Broadcasting shutdown[vm1] = (null)
Nov 13 13:44:43 [15878] vm1        cib: (membership.c:404   )    info: crm_get_peer: 	Node 3232261519 is now known as vm3
Nov 13 13:44:43 [15878] vm1        cib: (   cib_ops.c:902   )   debug: cib_process_xpath: 	//node_state[@uname='vm3']/transient_attributes was already removed
Nov 13 13:44:43 [15878] vm1        cib: ( callbacks.c:761   )    info: cib_process_request: 	Completed cib_delete operation for section //node_state[@uname='vm3']/transient_attributes: OK (rc=0, origin=vm3/crmd/10, version=0.5.2)
Nov 13 13:44:43 [15863] vm1 corosync debug   [QB    ] ipcs.c:qb_ipcs_dispatch_connection_request:757 HUP conn (15865-15881-33)
Nov 13 13:44:43 [15863] vm1 corosync debug   [QB    ] ipcs.c:qb_ipcs_disconnect:605 qb_ipcs_disconnect(15865-15881-33) state:2
Nov 13 13:44:43 [15863] vm1 corosync debug   [QB    ] loop_poll_epoll.c:_del:117 epoll_ctl(del): Bad file descriptor (9)
Nov 13 13:44:43 [15863] vm1 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_closed:417 cs_ipcs_connection_closed() 
Nov 13 13:44:43 [15863] vm1 corosync debug   [CMAP  ] cmap.c:cmap_lib_exit_fn:325 exit_fn for conn=0x7fa7882622c0
Nov 13 13:44:43 [15863] vm1 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_destroyed:390 cs_ipcs_connection_destroyed() 
Nov 13 13:44:43 [15863] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_close:299 Free'ing ringbuffer: /dev/shm/qb-cmap-response-15865-15881-33-header
Nov 13 13:44:43 [15863] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_close:299 Free'ing ringbuffer: /dev/shm/qb-cmap-event-15865-15881-33-header
Nov 13 13:44:43 [15863] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_close:299 Free'ing ringbuffer: /dev/shm/qb-cmap-request-15865-15881-33-header
Nov 13 13:44:43 [15883] vm1       crmd: (       fsa.c:193   )   debug: s_crmd_fsa: 	Processing I_JOIN_RESULT: [ state=S_FINALIZE_JOIN cause=C_HA_MESSAGE origin=route_message ]
Nov 13 13:44:43 [15883] vm1       crmd: (   join_dc.c:66    )    info: crm_update_peer_join: 	do_dc_join_ack: Node vm1[3232261517] - join-2 phase 3 -> 4
Nov 13 13:44:43 [15883] vm1       crmd: (   join_dc.c:504   )    info: do_dc_join_ack: 	join-2: Updating node state to member for vm1
Nov 13 13:44:43 [15883] vm1       crmd: (     utils.c:1011  )    info: erase_status_tag: 	Deleting xpath: //node_state[@uname='vm1']/lrm
Nov 13 13:44:43 [15878] vm1        cib: (   cib_ops.c:902   )   debug: cib_process_xpath: 	//node_state[@uname='vm1']/lrm was already removed
Nov 13 13:44:43 [15878] vm1        cib: ( callbacks.c:761   )    info: cib_process_request: 	Completed cib_delete operation for section //node_state[@uname='vm1']/lrm: OK (rc=0, origin=local/crmd/24, version=0.5.2)
Nov 13 13:44:43 [15879] vm1 stonith-ng: ( cib_utils.c:167   )   debug: Config update: 	Diff: --- 0.5.2
Nov 13 13:44:43 [15879] vm1 stonith-ng: ( cib_utils.c:169   )   debug: Config update: 	Diff: +++ 0.5.3 dd02c1675f04ba6ab7d94c1f96067ad9
Nov 13 13:44:43 [15879] vm1 stonith-ng: (       xml.c:1496  )   debug: Config update: 	-- <cib num_updates="2"/>
Nov 13 13:44:43 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+  <cib epoch="5" num_updates="3" admin_epoch="0" validate-with="pacemaker-1.2" crm_feature_set="3.0.8" cib-last-written="Wed Nov 13 13:44:43 2013" update-origin="vm1" update-client="crmd">
Nov 13 13:44:43 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+    <status>
Nov 13 13:44:43 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++     <node_state id="3232261517" uname="vm1" in_ccm="true" crmd="online" crm-debug-origin="do_lrm_query_internal" join="member" expected="member">
Nov 13 13:44:43 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++       <lrm id="3232261517">
Nov 13 13:44:43 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++         <lrm_resources/>
Nov 13 13:44:43 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++       </lrm>
Nov 13 13:44:43 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++     </node_state>
Nov 13 13:44:43 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+    </status>
Nov 13 13:44:43 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+  </cib>
Nov 13 13:44:43 [15878] vm1        cib: ( callbacks.c:761   )    info: cib_process_request: 	Completed cib_modify operation for section status: OK (rc=0, origin=local/crmd/25, version=0.5.3)
Nov 13 13:44:43 [15883] vm1       crmd: (   join_dc.c:514   )   debug: do_dc_join_ack: 	join-2: Registered callback for LRM update 25
Nov 13 13:44:43 [15883] vm1       crmd: (     utils.c:998   )   debug: erase_xpath_callback: 	Deletion of "//node_state[@uname='vm1']/lrm": OK (rc=0)
Nov 13 13:44:43 [15881] vm1      attrd: (  election.c:352   )   debug: election_count_vote: 	Created voted hash
Nov 13 13:44:43 [15881] vm1      attrd: (  election.c:485   )   debug: election_count_vote: 	Election 1 (current: 1, owner: 3232261517): Processed vote from vm1 (Recorded)
Nov 13 13:44:43 [15881] vm1      attrd: (  election.c:297   )   debug: election_check: 	Still waiting on 2 non-votes (3 total)
Nov 13 13:44:43 [15883] vm1       crmd: (   join_dc.c:456   )   debug: join_update_complete_callback: 	Join update 23 complete
Nov 13 13:44:43 [15883] vm1       crmd: (   join_dc.c:579   )   debug: check_join_state: 	Invoked by join_update_complete_callback in state: S_FINALIZE_JOIN
Nov 13 13:44:43 [15883] vm1       crmd: (   join_dc.c:615   )   debug: check_join_state: 	join-2: Still waiting on 1 finalized nodes
Nov 13 13:44:43 [15883] vm1       crmd: (   join_dc.c:682   )   debug: crmd_join_phase_log: 	join-2: vm3=confirmed
Nov 13 13:44:43 [15883] vm1       crmd: (   join_dc.c:682   )   debug: crmd_join_phase_log: 	join-2: vm1=confirmed
Nov 13 13:44:43 [15883] vm1       crmd: (   join_dc.c:682   )   debug: crmd_join_phase_log: 	join-2: vm2=finalized
Nov 13 13:44:43 [15881] vm1      attrd: (  commands.c:146   )   trace: create_attribute: 	Created attribute terminate with no delay
Nov 13 13:44:43 [15881] vm1      attrd: (  commands.c:382   )   trace: attrd_peer_update: 	Setting terminate[vm1] to (null) from vm1
Nov 13 13:44:43 [15881] vm1      attrd: (  commands.c:459   )   trace: write_or_elect_attribute: 	Election in progress to determine who will write out terminate
Nov 13 13:44:43 [15881] vm1      attrd: (  commands.c:146   )   trace: create_attribute: 	Created attribute shutdown with no delay
Nov 13 13:44:43 [15881] vm1      attrd: (  commands.c:382   )   trace: attrd_peer_update: 	Setting shutdown[vm1] to (null) from vm1
Nov 13 13:44:43 [15881] vm1      attrd: (  commands.c:459   )   trace: write_or_elect_attribute: 	Election in progress to determine who will write out shutdown
Nov 13 13:44:43 [15878] vm1        cib: (membership.c:404   )    info: crm_get_peer: 	Node 3232261518 is now known as vm2
Nov 13 13:44:43 [15878] vm1        cib: (   cib_ops.c:902   )   debug: cib_process_xpath: 	//node_state[@uname='vm2']/transient_attributes was already removed
Nov 13 13:44:43 [15878] vm1        cib: ( callbacks.c:761   )    info: cib_process_request: 	Completed cib_delete operation for section //node_state[@uname='vm2']/transient_attributes: OK (rc=0, origin=vm2/crmd/10, version=0.5.3)
Nov 13 13:44:43 [15883] vm1       crmd: (       fsa.c:193   )   debug: s_crmd_fsa: 	Processing I_JOIN_RESULT: [ state=S_FINALIZE_JOIN cause=C_HA_MESSAGE origin=route_message ]
Nov 13 13:44:43 [15883] vm1       crmd: (   join_dc.c:66    )    info: crm_update_peer_join: 	do_dc_join_ack: Node vm2[3232261518] - join-2 phase 3 -> 4
Nov 13 13:44:43 [15883] vm1       crmd: (   join_dc.c:504   )    info: do_dc_join_ack: 	join-2: Updating node state to member for vm2
Nov 13 13:44:43 [15883] vm1       crmd: (     utils.c:1011  )    info: erase_status_tag: 	Deleting xpath: //node_state[@uname='vm2']/lrm
Nov 13 13:44:43 [15883] vm1       crmd: (   join_dc.c:514   )   debug: do_dc_join_ack: 	join-2: Registered callback for LRM update 27
Nov 13 13:44:43 [15878] vm1        cib: (   cib_ops.c:902   )   debug: cib_process_xpath: 	//node_state[@uname='vm2']/lrm was already removed
Nov 13 13:44:43 [15878] vm1        cib: ( callbacks.c:761   )    info: cib_process_request: 	Completed cib_delete operation for section //node_state[@uname='vm2']/lrm: OK (rc=0, origin=local/crmd/26, version=0.5.3)
Nov 13 13:44:43 [15879] vm1 stonith-ng: ( cib_utils.c:167   )   debug: Config update: 	Diff: --- 0.5.3
Nov 13 13:44:43 [15879] vm1 stonith-ng: ( cib_utils.c:169   )   debug: Config update: 	Diff: +++ 0.5.4 f4a55aa279b990bb05b0a588767e25f0
Nov 13 13:44:43 [15879] vm1 stonith-ng: (       xml.c:1496  )   debug: Config update: 	-- <cib num_updates="3"/>
Nov 13 13:44:43 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+  <cib epoch="5" num_updates="4" admin_epoch="0" validate-with="pacemaker-1.2" crm_feature_set="3.0.8" cib-last-written="Wed Nov 13 13:44:43 2013" update-origin="vm1" update-client="crmd">
Nov 13 13:44:43 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+    <status>
Nov 13 13:44:43 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++     <node_state id="3232261518" uname="vm2" in_ccm="true" crmd="online" crm-debug-origin="do_lrm_query_internal" join="member" expected="member">
Nov 13 13:44:43 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++       <lrm id="3232261518">
Nov 13 13:44:43 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++         <lrm_resources/>
Nov 13 13:44:43 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++       </lrm>
Nov 13 13:44:43 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++     </node_state>
Nov 13 13:44:43 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+    </status>
Nov 13 13:44:43 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+  </cib>
Nov 13 13:44:43 [15883] vm1       crmd: (     utils.c:998   )   debug: erase_xpath_callback: 	Deletion of "//node_state[@uname='vm2']/lrm": OK (rc=0)
Nov 13 13:44:43 [15878] vm1        cib: ( callbacks.c:761   )    info: cib_process_request: 	Completed cib_modify operation for section status: OK (rc=0, origin=local/crmd/27, version=0.5.4)
Nov 13 13:44:43 [15881] vm1      attrd: (membership.c:404   )    info: crm_get_peer: 	Node 3232261519 is now known as vm3
Nov 13 13:44:43 [15881] vm1      attrd: (  election.c:200   )   debug: crm_compare_age: 	Win: 0.16997 vs 0.14997 (usec)
Nov 13 13:44:43 [15881] vm1      attrd: (  election.c:490   )    info: election_count_vote: 	Election 1 (owner: 3232261519) pass: vote from vm3 (Uptime)
Nov 13 13:44:43 [15881] vm1      attrd: (  election.c:242   )   debug: election_vote: 	Started election 2
Nov 13 13:44:43 [15881] vm1      attrd: (  commands.c:382   )   trace: attrd_peer_update: 	Setting terminate[vm3] to (null) from vm3
Nov 13 13:44:43 [15881] vm1      attrd: (  commands.c:459   )   trace: write_or_elect_attribute: 	Election in progress to determine who will write out terminate
Nov 13 13:44:43 [15881] vm1      attrd: (  commands.c:382   )   trace: attrd_peer_update: 	Setting shutdown[vm3] to (null) from vm3
Nov 13 13:44:43 [15881] vm1      attrd: (  commands.c:459   )   trace: write_or_elect_attribute: 	Election in progress to determine who will write out shutdown
Nov 13 13:44:43 [15883] vm1       crmd: (   join_dc.c:456   )   debug: join_update_complete_callback: 	Join update 25 complete
Nov 13 13:44:43 [15883] vm1       crmd: (   join_dc.c:579   )   debug: check_join_state: 	Invoked by join_update_complete_callback in state: S_FINALIZE_JOIN
Nov 13 13:44:43 [15883] vm1       crmd: (   join_dc.c:619   )   debug: check_join_state: 	join-2 complete: join_update_complete_callback
Nov 13 13:44:43 [15883] vm1       crmd: (       fsa.c:193   )   debug: s_crmd_fsa: 	Processing I_FINALIZED: [ state=S_FINALIZE_JOIN cause=C_FSA_INTERNAL origin=check_join_state ]
Nov 13 13:44:43 [15883] vm1       crmd: (       fsa.c:502   )    info: do_state_transition: 	State transition S_FINALIZE_JOIN -> S_POLICY_ENGINE [ input=I_FINALIZED cause=C_FSA_INTERNAL origin=check_join_state ]
Nov 13 13:44:43 [15883] vm1       crmd: (       fsa.c:610   )   debug: do_state_transition: 	All 3 cluster nodes are eligible to run resources.
Nov 13 13:44:43 [15883] vm1       crmd: (   join_dc.c:634   )   debug: do_dc_join_final: 	Ensuring DC, quorum and node attributes are up-to-date
Nov 13 13:44:43 [15883] vm1       crmd: (     utils.c:2001  )   debug: attrd_update_delegate: 	Sent update: (null)=(null) for localhost
Nov 13 13:44:43 [15883] vm1       crmd: (membership.c:315   )   debug: crm_update_quorum: 	Updating quorum status to true (call=30)
Nov 13 13:44:43 [15883] vm1       crmd: (   tengine.c:150   )   debug: do_te_invoke: 	Cancelling the transition: inactive
Nov 13 13:44:43 [15883] vm1       crmd: (  te_utils.c:425   )    info: abort_transition_graph: 	do_te_invoke:151 - Triggered transition abort (complete=1) : Peer Cancelled
Nov 13 13:44:43 [15883] vm1       crmd: (       fsa.c:193   )   debug: s_crmd_fsa: 	Processing I_PE_CALC: [ state=S_POLICY_ENGINE cause=C_FSA_INTERNAL origin=abort_transition_graph ]
Nov 13 13:44:43 [15883] vm1       crmd: (   pengine.c:231   )   debug: do_pe_invoke: 	Query 31: Requesting the current CIB: S_POLICY_ENGINE
Nov 13 13:44:43 [15881] vm1      attrd: (      main.c:221   )   trace: attrd_ipc_dispatch: 	Processing msg from 15883 (0x1ff0000)
Nov 13 13:44:43 [15881] vm1      attrd: (      main.c:222   )   trace: attrd_ipc_dispatch: 	attrd_ipc_dispatch   <attrd_update_delegate t="attrd" src="crmd" task="refresh" attr_section="status" attr_is_remote="0"/>
Nov 13 13:44:43 [15878] vm1        cib: ( callbacks.c:761   )    info: cib_process_request: 	Completed cib_modify operation for section nodes: OK (rc=0, origin=local/crmd/28, version=0.5.4)
Nov 13 13:44:43 [15878] vm1        cib: ( callbacks.c:761   )    info: cib_process_request: 	Completed cib_modify operation for section status: OK (rc=0, origin=local/crmd/29, version=0.5.4)
Nov 13 13:44:43 [15879] vm1 stonith-ng: ( cib_utils.c:167   )   debug: Config update: 	Diff: --- 0.5.4
Nov 13 13:44:43 [15879] vm1 stonith-ng: ( cib_utils.c:169   )   debug: Config update: 	Diff: +++ 0.5.5 3118925a5f456f332e09aade04800ea0
Nov 13 13:44:43 [15879] vm1 stonith-ng: (       xml.c:1496  )   debug: Config update: 	-- <cib num_updates="4"/>
Nov 13 13:44:43 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++ <cib epoch="5" num_updates="5" admin_epoch="0" validate-with="pacemaker-1.2" crm_feature_set="3.0.8" cib-last-written="Wed Nov 13 13:44:43 2013" update-origin="vm1" update-client="crmd" have-quorum="1" dc-uuid="3232261517"/>
Nov 13 13:44:43 [15883] vm1       crmd: (te_callbacks:122   )   debug: te_update_diff: 	Processing diff (cib_modify): 0.5.4 -> 0.5.5 (S_POLICY_ENGINE)
Nov 13 13:44:43 [15878] vm1        cib: ( callbacks.c:761   )    info: cib_process_request: 	Completed cib_modify operation for section cib: OK (rc=0, origin=local/crmd/30, version=0.5.5)
Nov 13 13:44:43 [15878] vm1        cib: ( callbacks.c:761   )    info: cib_process_request: 	Completed cib_query operation for section 'all': OK (rc=0, origin=local/crmd/31, version=0.5.5)
Nov 13 13:44:43 [15883] vm1       crmd: (   pengine.c:299   )   debug: do_pe_invoke_callback: 	Invoking the PE: query=31, ref=pe_calc-dc-1384317883-15, seq=12, quorate=1
Nov 13 13:44:43 [15882] vm1    pengine: (    unpack.c:98    )   debug: unpack_config: 	STONITH timeout: 60000
Nov 13 13:44:43 [15882] vm1    pengine: (    unpack.c:102   )   debug: unpack_config: 	STONITH of failed nodes is enabled
Nov 13 13:44:43 [15882] vm1    pengine: (    unpack.c:109   )   debug: unpack_config: 	Stop all active resources: false
Nov 13 13:44:43 [15882] vm1    pengine: (    unpack.c:113   )   debug: unpack_config: 	Cluster is symmetric - resources can run anywhere by default
Nov 13 13:44:43 [15882] vm1    pengine: (    unpack.c:118   )   debug: unpack_config: 	Default stickiness: 0
Nov 13 13:44:43 [15882] vm1    pengine: (    unpack.c:155   )   debug: unpack_config: 	On loss of CCM Quorum: Stop ALL resources
Nov 13 13:44:43 [15882] vm1    pengine: (    unpack.c:201   )   debug: unpack_config: 	Node scores: 'red' = -INFINITY, 'yellow' = 0, 'green' = 0
Nov 13 13:44:43 [15882] vm1    pengine: (    unpack.c:509   )   debug: unpack_domains: 	Unpacking domains
Nov 13 13:44:43 [15881] vm1      attrd: (  election.c:352   )   debug: election_count_vote: 	Created voted hash
Nov 13 13:44:43 [15881] vm1      attrd: (  election.c:485   )   debug: election_count_vote: 	Election 2 (current: 2, owner: 3232261517): Processed vote from vm1 (Recorded)
Nov 13 13:44:43 [15881] vm1      attrd: (  election.c:297   )   debug: election_check: 	Still waiting on 2 non-votes (3 total)
Nov 13 13:44:43 [15883] vm1       crmd: (   join_dc.c:456   )   debug: join_update_complete_callback: 	Join update 27 complete
Nov 13 13:44:43 [15883] vm1       crmd: (   join_dc.c:579   )   debug: check_join_state: 	Invoked by join_update_complete_callback in state: S_POLICY_ENGINE
Nov 13 13:44:43 [15882] vm1    pengine: (    unpack.c:727   )   error: unpack_resources: 	Resource start-up disabled since no STONITH resources have been defined
Nov 13 13:44:43 [15882] vm1    pengine: (    unpack.c:728   )   error: unpack_resources: 	Either configure some or disable STONITH with the stonith-enabled option
Nov 13 13:44:43 [15882] vm1    pengine: (    unpack.c:729   )   error: unpack_resources: 	NOTE: Clusters with shared data need STONITH to ensure data integrity
Nov 13 13:44:43 [15882] vm1    pengine: (    unpack.c:1220  )    info: determine_online_status_fencing: 	Node vm3 is active
Nov 13 13:44:43 [15882] vm1    pengine: (    unpack.c:1336  )    info: determine_online_status: 	Node vm3 is online
Nov 13 13:44:43 [15882] vm1    pengine: (    unpack.c:1220  )    info: determine_online_status_fencing: 	Node vm1 is active
Nov 13 13:44:43 [15882] vm1    pengine: (    unpack.c:1336  )    info: determine_online_status: 	Node vm1 is online
Nov 13 13:44:43 [15882] vm1    pengine: (    unpack.c:1220  )    info: determine_online_status_fencing: 	Node vm2 is active
Nov 13 13:44:43 [15882] vm1    pengine: (    unpack.c:1336  )    info: determine_online_status: 	Node vm2 is online
Nov 13 13:44:43 [15882] vm1    pengine: (  allocate.c:1332  )  notice: stage6: 	Delaying fencing operations until there are resources to manage
Nov 13 13:44:43 [15882] vm1    pengine: (     utils.c:1216  )   debug: get_last_sequence: 	Series file /var/lib/pacemaker/pengine/pe-input.last does not exist
Nov 13 13:44:43 [15883] vm1       crmd: (       fsa.c:193   )   debug: s_crmd_fsa: 	Processing I_PE_SUCCESS: [ state=S_POLICY_ENGINE cause=C_IPC_MESSAGE origin=handle_response ]
Nov 13 13:44:43 [15883] vm1       crmd: (       fsa.c:502   )    info: do_state_transition: 	State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
Nov 13 13:44:43 [15883] vm1       crmd: (    unpack.c:230   )   debug: unpack_graph: 	Unpacked transition 0: 3 actions in 3 synapses
Nov 13 13:44:43 [15883] vm1       crmd: (   tengine.c:208   )    info: do_te_invoke: 	Processing graph 0 (ref=pe_calc-dc-1384317883-15) derived from /var/lib/pacemaker/pengine/pe-input-0.bz2
Nov 13 13:44:43 [15881] vm1      attrd: (  election.c:297   )   debug: election_check: 	Still waiting on 2 non-votes (3 total)
Nov 13 13:44:43 [15881] vm1      attrd: (  election.c:485   )   debug: election_count_vote: 	Election 2 (current: 2, owner: 3232261517): Processed no-vote from vm3 (Recorded)
Nov 13 13:44:43 [15881] vm1      attrd: (  election.c:297   )   debug: election_check: 	Still waiting on 1 non-votes (3 total)
Nov 13 13:44:43 [15882] vm1    pengine: (   pengine.c:178   )  notice: process_pe_message: 	Calculated Transition 0: /var/lib/pacemaker/pengine/pe-input-0.bz2
Nov 13 13:44:43 [15882] vm1    pengine: (   pengine.c:183   )  notice: process_pe_message: 	Configuration ERRORs found during PE processing.  Please run "crm_verify -L" to identify issues.
Nov 13 13:44:43 [15881] vm1      attrd: (membership.c:404   )    info: crm_get_peer: 	Node 3232261518 is now known as vm2
Nov 13 13:44:43 [15881] vm1      attrd: (  election.c:204   )   debug: crm_compare_age: 	Loose: 0.16997 vs 0.20996 (usec)
Nov 13 13:44:43 [15881] vm1      attrd: (  election.c:511   )    info: election_count_vote: 	Election 1 (owner: 3232261518) lost: vote from vm2 (Uptime)
Nov 13 13:44:43 [15881] vm1      attrd: (  commands.c:382   )   trace: attrd_peer_update: 	Setting terminate[vm2] to (null) from vm2
Nov 13 13:44:43 [15881] vm1      attrd: (  commands.c:466   )   trace: write_or_elect_attribute: 	vm2 will write out terminate, we are in state 2
Nov 13 13:44:43 [15881] vm1      attrd: (  commands.c:382   )   trace: attrd_peer_update: 	Setting shutdown[vm2] to (null) from vm2
Nov 13 13:44:43 [15881] vm1      attrd: (  commands.c:466   )   trace: write_or_elect_attribute: 	vm2 will write out shutdown, we are in state 2
Nov 13 13:44:43 [15881] vm1      attrd: (  election.c:204   )   debug: crm_compare_age: 	Loose: 0.16997 vs 0.20996 (usec)
Nov 13 13:44:43 [15881] vm1      attrd: (  election.c:511   )    info: election_count_vote: 	Election 2 (owner: 3232261518) lost: vote from vm2 (Uptime)
Nov 13 13:44:43 [15881] vm1      attrd: (  election.c:204   )   debug: crm_compare_age: 	Loose: 0.16997 vs 0.20996 (usec)
Nov 13 13:44:43 [15881] vm1      attrd: (  election.c:511   )    info: election_count_vote: 	Election 3 (owner: 3232261518) lost: vote from vm2 (Uptime)
Nov 13 13:44:43 [15881] vm1      attrd: (  election.c:204   )   debug: crm_compare_age: 	Loose: 0.16997 vs 0.20996 (usec)
Nov 13 13:44:43 [15881] vm1      attrd: (  election.c:511   )    info: election_count_vote: 	Election 4 (owner: 3232261518) lost: vote from vm2 (Uptime)
Nov 13 13:44:43 [15883] vm1       crmd: (  throttle.c:259   )   debug: throttle_cib_load: 	Init 5 + 6 ticks at 1384317883 (100 tps)
Nov 13 13:44:43 [15883] vm1       crmd: (  throttle.c:298   )   debug: throttle_load_avg: 	Current load is 0.020000 (full: 0.02 0.02 0.00 1/115 15893)
Nov 13 13:44:43 [15883] vm1       crmd: (  throttle.c:378   )   debug: throttle_io_load: 	Current IO load is 0.000000
Nov 13 13:44:43 [15883] vm1       crmd: (te_actions.c:416   )  notice: te_rsc_command: 	Initiating action 4: probe_complete probe_complete on vm3 - no waiting
Nov 13 13:44:43 [15883] vm1       crmd: (te_actions.c:454   )    info: te_rsc_command: 	Action 4 confirmed - no wait
Nov 13 13:44:43 [15883] vm1       crmd: (  throttle.c:259   )   debug: throttle_cib_load: 	Init 5 + 6 ticks at 1384317883 (100 tps)
Nov 13 13:44:43 [15883] vm1       crmd: (  throttle.c:298   )   debug: throttle_load_avg: 	Current load is 0.020000 (full: 0.02 0.02 0.00 1/115 15893)
Nov 13 13:44:43 [15883] vm1       crmd: (  throttle.c:378   )   debug: throttle_io_load: 	Current IO load is 0.000000
Nov 13 13:44:43 [15883] vm1       crmd: (te_actions.c:416   )  notice: te_rsc_command: 	Initiating action 3: probe_complete probe_complete on vm2 - no waiting
Nov 13 13:44:43 [15883] vm1       crmd: (te_actions.c:454   )    info: te_rsc_command: 	Action 3 confirmed - no wait
Nov 13 13:44:43 [15883] vm1       crmd: (  throttle.c:259   )   debug: throttle_cib_load: 	Init 5 + 6 ticks at 1384317883 (100 tps)
Nov 13 13:44:43 [15883] vm1       crmd: (  throttle.c:298   )   debug: throttle_load_avg: 	Current load is 0.020000 (full: 0.02 0.02 0.00 2/115 15893)
Nov 13 13:44:43 [15883] vm1       crmd: (  throttle.c:378   )   debug: throttle_io_load: 	Current IO load is 0.000000
Nov 13 13:44:43 [15883] vm1       crmd: (te_actions.c:416   )  notice: te_rsc_command: 	Initiating action 2: probe_complete probe_complete on vm1 (local) - no waiting
Nov 13 13:44:43 [15881] vm1      attrd: (      main.c:221   )   trace: attrd_ipc_dispatch: 	Processing msg from 15883 (0x1ff0000)
Nov 13 13:44:43 [15881] vm1      attrd: (      main.c:222   )   trace: attrd_ipc_dispatch: 	attrd_ipc_dispatch   <attrd_update_delegate t="attrd" src="crmd" task="update" attr_name="probe_complete" attr_value="true" attr_section="status" attr_host="vm1" attr_is_remote="0"/>
Nov 13 13:44:43 [15881] vm1      attrd: (  commands.c:231   )    info: attrd_client_message: 	Broadcasting probe_complete[vm1] = true
Nov 13 13:44:43 [15883] vm1       crmd: (     utils.c:2001  )   debug: attrd_update_delegate: 	Sent update: probe_complete=true for vm1
Nov 13 13:44:43 [15883] vm1       crmd: (te_actions.c:454   )    info: te_rsc_command: 	Action 2 confirmed - no wait
Nov 13 13:44:43 [15883] vm1       crmd: (     graph.c:336   )   debug: run_graph: 	Transition 0 (Complete=0, Pending=0, Fired=3, Skipped=0, Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-input-0.bz2): In-progress
Nov 13 13:44:43 [15883] vm1       crmd: (     graph.c:336   )  notice: run_graph: 	Transition 0 (Complete=3, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-input-0.bz2): Complete
Nov 13 13:44:43 [15883] vm1       crmd: (  te_utils.c:349   )   debug: te_graph_trigger: 	Transition 0 is now complete
Nov 13 13:44:43 [15883] vm1       crmd: (te_actions.c:654   )   debug: notify_crmd: 	Processing transition completion in state S_TRANSITION_ENGINE
Nov 13 13:44:43 [15883] vm1       crmd: (te_actions.c:699   )   debug: notify_crmd: 	Transition 0 status: done - <null>
Nov 13 13:44:43 [15883] vm1       crmd: (       fsa.c:193   )   debug: s_crmd_fsa: 	Processing I_TE_SUCCESS: [ state=S_TRANSITION_ENGINE cause=C_FSA_INTERNAL origin=notify_crmd ]
Nov 13 13:44:43 [15883] vm1       crmd: (      misc.c:47    )    info: do_log: 	FSA: Input I_TE_SUCCESS from notify_crmd() received in state S_TRANSITION_ENGINE
Nov 13 13:44:43 [15883] vm1       crmd: (       fsa.c:502   )  notice: do_state_transition: 	State transition S_TRANSITION_ENGINE -> S_IDLE [ input=I_TE_SUCCESS cause=C_FSA_INTERNAL origin=notify_crmd ]
Nov 13 13:44:43 [15883] vm1       crmd: (       fsa.c:645   )   debug: do_state_transition: 	Starting PEngine Recheck Timer
Nov 13 13:44:43 [15883] vm1       crmd: (     utils.c:192   )   debug: crm_timer_start: 	Started PEngine Recheck Timer (I_PE_CALC:900000ms), src=42
Nov 13 13:44:43 [15881] vm1      attrd: (  commands.c:307   )  notice: attrd_peer_message: 	Processing sync-response from vm2
Nov 13 13:44:43 [15881] vm1      attrd: (  commands.c:423   )   trace: attrd_peer_update: 	Unchanged shutdown[vm1] from vm2 is (null)
Nov 13 13:44:43 [15881] vm1      attrd: (  commands.c:433   )   trace: attrd_peer_update: 	We know vm1's node id now: 3232261517
Nov 13 13:44:43 [15881] vm1      attrd: (  commands.c:423   )   trace: attrd_peer_update: 	Unchanged shutdown[vm2] from vm2 is (null)
Nov 13 13:44:43 [15881] vm1      attrd: (  commands.c:433   )   trace: attrd_peer_update: 	We know vm2's node id now: 3232261518
Nov 13 13:44:43 [15881] vm1      attrd: (  commands.c:423   )   trace: attrd_peer_update: 	Unchanged shutdown[vm3] from vm2 is (null)
Nov 13 13:44:43 [15881] vm1      attrd: (  commands.c:433   )   trace: attrd_peer_update: 	We know vm3's node id now: 3232261519
Nov 13 13:44:43 [15881] vm1      attrd: (  commands.c:423   )   trace: attrd_peer_update: 	Unchanged terminate[vm1] from vm2 is (null)
Nov 13 13:44:43 [15881] vm1      attrd: (  commands.c:433   )   trace: attrd_peer_update: 	We know vm1's node id now: 3232261517
Nov 13 13:44:43 [15881] vm1      attrd: (  commands.c:423   )   trace: attrd_peer_update: 	Unchanged terminate[vm2] from vm2 is (null)
Nov 13 13:44:43 [15881] vm1      attrd: (  commands.c:433   )   trace: attrd_peer_update: 	We know vm2's node id now: 3232261518
Nov 13 13:44:43 [15881] vm1      attrd: (  commands.c:423   )   trace: attrd_peer_update: 	Unchanged terminate[vm3] from vm2 is (null)
Nov 13 13:44:43 [15881] vm1      attrd: (  commands.c:433   )   trace: attrd_peer_update: 	We know vm3's node id now: 3232261519
Nov 13 13:44:43 [15878] vm1        cib: (   cib_ops.c:368   )   debug: cib_process_modify: 	Destroying /cib/status/node_state[1]/transient_attributes/instance_attributes/nvpair
Nov 13 13:44:43 [15878] vm1        cib: (   cib_ops.c:368   )   debug: cib_process_modify: 	Destroying /cib/status/node_state[2]/transient_attributes/instance_attributes/nvpair
Nov 13 13:44:43 [15878] vm1        cib: (   cib_ops.c:368   )   debug: cib_process_modify: 	Destroying /cib/status/node_state[3]/transient_attributes/instance_attributes/nvpair
Nov 13 13:44:43 [15879] vm1 stonith-ng: ( cib_utils.c:167   )   debug: Config update: 	Diff: --- 0.5.5
Nov 13 13:44:43 [15879] vm1 stonith-ng: ( cib_utils.c:169   )   debug: Config update: 	Diff: +++ 0.5.6 1eda82bbbd77cf8a880d3d455765d8d6
Nov 13 13:44:43 [15879] vm1 stonith-ng: (       xml.c:1496  )   debug: Config update: 	-- <cib num_updates="5"/>
Nov 13 13:44:43 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+  <cib epoch="5" num_updates="6" admin_epoch="0" validate-with="pacemaker-1.2" crm_feature_set="3.0.8" cib-last-written="Wed Nov 13 13:44:43 2013" update-origin="vm1" update-client="crmd" have-quorum="1" dc-uuid="3232261517">
Nov 13 13:44:43 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+    <status>
Nov 13 13:44:43 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+      <node_state id="3232261519" uname="vm3" in_ccm="true" crmd="online" crm-debug-origin="do_state_transition" join="member" expected="member">
Nov 13 13:44:43 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++       <transient_attributes id="3232261519">
Nov 13 13:44:43 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++         <instance_attributes id="status-3232261519"/>
Nov 13 13:44:43 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++       </transient_attributes>
Nov 13 13:44:43 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+      </node_state>
Nov 13 13:44:43 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+      <node_state id="3232261517" uname="vm1" in_ccm="true" crmd="online" crm-debug-origin="do_state_transition" join="member" expected="member">
Nov 13 13:44:43 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++       <transient_attributes id="3232261517">
Nov 13 13:44:43 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++         <instance_attributes id="status-3232261517"/>
Nov 13 13:44:43 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++       </transient_attributes>
Nov 13 13:44:43 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+      </node_state>
Nov 13 13:44:43 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+      <node_state id="3232261518" uname="vm2" in_ccm="true" crmd="online" crm-debug-origin="do_state_transition" join="member" expected="member">
Nov 13 13:44:43 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++       <transient_attributes id="3232261518">
Nov 13 13:44:43 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++         <instance_attributes id="status-3232261518"/>
Nov 13 13:44:43 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++       </transient_attributes>
Nov 13 13:44:43 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+      </node_state>
Nov 13 13:44:43 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+    </status>
Nov 13 13:44:43 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+  </cib>
Nov 13 13:44:43 [15878] vm1        cib: ( callbacks.c:761   )    info: cib_process_request: 	Completed cib_modify operation for section status: OK (rc=0, origin=vm2/attrd/2, version=0.5.6)
Nov 13 13:44:43 [15883] vm1       crmd: (te_callbacks:122   )   debug: te_update_diff: 	Processing diff (cib_modify): 0.5.5 -> 0.5.6 (S_IDLE)
Nov 13 13:44:43 [15881] vm1      attrd: (  commands.c:146   )   trace: create_attribute: 	Created attribute probe_complete with no delay
Nov 13 13:44:43 [15881] vm1      attrd: (  commands.c:382   )   trace: attrd_peer_update: 	Setting probe_complete[vm3] to true from vm3
Nov 13 13:44:43 [15881] vm1      attrd: (  commands.c:466   )   trace: write_or_elect_attribute: 	vm2 will write out probe_complete, we are in state 2
Nov 13 13:44:43 [15878] vm1        cib: (   cib_ops.c:368   )   debug: cib_process_modify: 	Destroying /cib/status/node_state[1]/transient_attributes/instance_attributes/nvpair
Nov 13 13:44:43 [15878] vm1        cib: (   cib_ops.c:368   )   debug: cib_process_modify: 	Destroying /cib/status/node_state[2]/transient_attributes/instance_attributes/nvpair
Nov 13 13:44:43 [15878] vm1        cib: (   cib_ops.c:368   )   debug: cib_process_modify: 	Destroying /cib/status/node_state[3]/transient_attributes/instance_attributes/nvpair
Nov 13 13:44:43 [15878] vm1        cib: ( callbacks.c:761   )    info: cib_process_request: 	Completed cib_modify operation for section status: OK (rc=0, origin=vm2/attrd/3, version=0.5.6)
Nov 13 13:44:43 [15881] vm1      attrd: (  commands.c:382   )   trace: attrd_peer_update: 	Setting probe_complete[vm1] to true from vm1
Nov 13 13:44:43 [15881] vm1      attrd: (  commands.c:466   )   trace: write_or_elect_attribute: 	vm2 will write out probe_complete, we are in state 2
Nov 13 13:44:43 [15881] vm1      attrd: (  commands.c:382   )   trace: attrd_peer_update: 	Setting probe_complete[vm2] to true from vm2
Nov 13 13:44:43 [15881] vm1      attrd: (  commands.c:466   )   trace: write_or_elect_attribute: 	vm2 will write out probe_complete, we are in state 2
Nov 13 13:44:43 [15878] vm1        cib: ( callbacks.c:761   )    info: cib_process_request: 	Completed cib_modify operation for section status: OK (rc=0, origin=vm2/attrd/4, version=0.5.7)
Nov 13 13:44:43 [15883] vm1       crmd: (te_callbacks:122   )   debug: te_update_diff: 	Processing diff (cib_modify): 0.5.6 -> 0.5.7 (S_IDLE)
Nov 13 13:44:43 [15879] vm1 stonith-ng: ( cib_utils.c:167   )   debug: Config update: 	Diff: --- 0.5.6
Nov 13 13:44:43 [15879] vm1 stonith-ng: ( cib_utils.c:169   )   debug: Config update: 	Diff: +++ 0.5.7 ff617ff8b610f67d2056a9b012bdfc03
Nov 13 13:44:43 [15879] vm1 stonith-ng: (       xml.c:1496  )   debug: Config update: 	-- <cib num_updates="6"/>
Nov 13 13:44:43 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+  <cib epoch="5" num_updates="7" admin_epoch="0" validate-with="pacemaker-1.2" crm_feature_set="3.0.8" cib-last-written="Wed Nov 13 13:44:43 2013" update-origin="vm1" update-client="crmd" have-quorum="1" dc-uuid="3232261517">
Nov 13 13:44:43 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+    <status>
Nov 13 13:44:43 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+      <node_state id="3232261519" uname="vm3" in_ccm="true" crmd="online" crm-debug-origin="do_state_transition" join="member" expected="member">
Nov 13 13:44:43 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+        <transient_attributes id="3232261519">
Nov 13 13:44:43 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+          <instance_attributes id="status-3232261519">
Nov 13 13:44:43 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++           <nvpair id="status-3232261519-probe_complete" name="probe_complete" value="true"/>
Nov 13 13:44:43 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+          </instance_attributes>
Nov 13 13:44:43 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+        </transient_attributes>
Nov 13 13:44:43 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+      </node_state>
Nov 13 13:44:43 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+    </status>
Nov 13 13:44:43 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+  </cib>
Nov 13 13:44:43 [15878] vm1        cib: ( callbacks.c:761   )    info: cib_process_request: 	Completed cib_modify operation for section status: OK (rc=0, origin=vm2/attrd/5, version=0.5.8)
Nov 13 13:44:43 [15883] vm1       crmd: (te_callbacks:122   )   debug: te_update_diff: 	Processing diff (cib_modify): 0.5.7 -> 0.5.8 (S_IDLE)
Nov 13 13:44:43 [15879] vm1 stonith-ng: ( cib_utils.c:167   )   debug: Config update: 	Diff: --- 0.5.7
Nov 13 13:44:43 [15879] vm1 stonith-ng: ( cib_utils.c:169   )   debug: Config update: 	Diff: +++ 0.5.8 f2fe37326dd3c20276f6447b1667415b
Nov 13 13:44:43 [15879] vm1 stonith-ng: (       xml.c:1496  )   debug: Config update: 	-- <cib num_updates="7"/>
Nov 13 13:44:43 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+  <cib epoch="5" num_updates="8" admin_epoch="0" validate-with="pacemaker-1.2" crm_feature_set="3.0.8" cib-last-written="Wed Nov 13 13:44:43 2013" update-origin="vm1" update-client="crmd" have-quorum="1" dc-uuid="3232261517">
Nov 13 13:44:43 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+    <status>
Nov 13 13:44:43 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+      <node_state id="3232261517" uname="vm1" in_ccm="true" crmd="online" crm-debug-origin="do_state_transition" join="member" expected="member">
Nov 13 13:44:43 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+        <transient_attributes id="3232261517">
Nov 13 13:44:43 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+          <instance_attributes id="status-3232261517">
Nov 13 13:44:43 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++           <nvpair id="status-3232261517-probe_complete" name="probe_complete" value="true"/>
Nov 13 13:44:43 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+          </instance_attributes>
Nov 13 13:44:43 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+        </transient_attributes>
Nov 13 13:44:43 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+      </node_state>
Nov 13 13:44:43 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+      <node_state id="3232261518" uname="vm2" in_ccm="true" crmd="online" crm-debug-origin="do_state_transition" join="member" expected="member">
Nov 13 13:44:43 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+        <transient_attributes id="3232261518">
Nov 13 13:44:43 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+          <instance_attributes id="status-3232261518">
Nov 13 13:44:43 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++           <nvpair id="status-3232261518-probe_complete" name="probe_complete" value="true"/>
Nov 13 13:44:43 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+          </instance_attributes>
Nov 13 13:44:43 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+        </transient_attributes>
Nov 13 13:44:43 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+      </node_state>
Nov 13 13:44:43 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+    </status>
Nov 13 13:44:43 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+  </cib>
Nov 13 13:44:51 [15883] vm1       crmd: (  throttle.c:256   )   debug: throttle_cib_load: 	Current CIB load from 1 ticks in 8s is 0.001250 (@100 tps)
Nov 13 13:44:51 [15883] vm1       crmd: (  throttle.c:298   )   debug: throttle_load_avg: 	Current load is 0.010000 (full: 0.01 0.02 0.00 1/115 15893)
Nov 13 13:44:51 [15883] vm1       crmd: (  throttle.c:378   )   debug: throttle_io_load: 	Current IO load is 0.000000
Nov 13 13:44:51 [15883] vm1       crmd: (  throttle.c:520   )   debug: throttle_timer_cb: 	New throttle mode: 0000 (was 0000)
Nov 13 13:44:51 [15883] vm1       crmd: (  throttle.c:499   )    info: throttle_send_command: 	Updated throttle state to 0000
Nov 13 13:44:51 [15883] vm1       crmd: (  throttle.c:651   )   debug: throttle_update: 	Host vm1 supports a maximum of 2 jobs and throttle mode 0000.  New job limit is 2
Nov 13 13:44:51 [15883] vm1       crmd: (  throttle.c:651   )   debug: throttle_update: 	Host vm2 supports a maximum of 2 jobs and throttle mode 0000.  New job limit is 2
Nov 13 13:44:52 [15883] vm1       crmd: (  throttle.c:651   )   debug: throttle_update: 	Host vm3 supports a maximum of 2 jobs and throttle mode 0000.  New job limit is 2
Nov 13 13:45:21 [15883] vm1       crmd: (  throttle.c:256   )   debug: throttle_cib_load: 	Current CIB load from 0 ticks in 30s is 0.000000 (@100 tps)
Nov 13 13:45:21 [15883] vm1       crmd: (  throttle.c:298   )   debug: throttle_load_avg: 	Current load is 0.010000 (full: 0.01 0.02 0.00 1/115 15893)
Nov 13 13:45:21 [15883] vm1       crmd: (  throttle.c:378   )   debug: throttle_io_load: 	Current IO load is 0.000000
Nov 13 13:45:33 [15878] vm1        cib: (       ipc.c:327   )    info: crm_client_new: 	Connecting 0x15625a0 for uid=0 gid=0 pid=15903 id=12e28600-5dcd-41e5-a2c8-2462c7696cf8
Nov 13 13:45:33 [15878] vm1        cib: ( ipc_setup.c:484   )   debug: handle_new_connection: 	IPC credentials authenticated (15878-15903-14)
Nov 13 13:45:33 [15878] vm1        cib: (   ipc_shm.c:295   )   debug: qb_ipcs_shm_connect: 	connecting to client [15903]
Nov 13 13:45:33 [15878] vm1        cib: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:524301; real_size:528384; rb->word_size:132096
Nov 13 13:45:33 [15878] vm1        cib: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:524301; real_size:528384; rb->word_size:132096
Nov 13 13:45:33 [15878] vm1        cib: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:524301; real_size:528384; rb->word_size:132096
Nov 13 13:45:33 [15878] vm1        cib: ( callbacks.c:761   )    info: cib_process_request: 	Completed cib_query operation for section 'all': OK (rc=0, origin=local/cibadmin/2, version=0.5.8)
Nov 13 13:45:33 [15878] vm1        cib: (      ipcs.c:757   )   debug: qb_ipcs_dispatch_connection_request: 	HUP conn (15878-15903-14)
Nov 13 13:45:33 [15878] vm1        cib: (      ipcs.c:605   )   debug: qb_ipcs_disconnect: 	qb_ipcs_disconnect(15878-15903-14) state:2
Nov 13 13:45:33 [15878] vm1        cib: (       ipc.c:361   )    info: crm_client_destroy: 	Destroying 0 events
Nov 13 13:45:33 [15878] vm1        cib: (ringbuffer.c:299   )   debug: qb_rb_close: 	Free'ing ringbuffer: /dev/shm/qb-cib_rw-response-15878-15903-14-header
Nov 13 13:45:33 [15878] vm1        cib: (ringbuffer.c:299   )   debug: qb_rb_close: 	Free'ing ringbuffer: /dev/shm/qb-cib_rw-event-15878-15903-14-header
Nov 13 13:45:33 [15878] vm1        cib: (ringbuffer.c:299   )   debug: qb_rb_close: 	Free'ing ringbuffer: /dev/shm/qb-cib_rw-request-15878-15903-14-header
Nov 13 13:45:33 [15878] vm1        cib: (       ipc.c:327   )    info: crm_client_new: 	Connecting 0x15625a0 for uid=0 gid=0 pid=15904 id=b8c3d466-bf55-4b35-a3e8-6129e139d346
Nov 13 13:45:33 [15878] vm1        cib: ( ipc_setup.c:484   )   debug: handle_new_connection: 	IPC credentials authenticated (15878-15904-14)
Nov 13 13:45:33 [15878] vm1        cib: (   ipc_shm.c:295   )   debug: qb_ipcs_shm_connect: 	connecting to client [15904]
Nov 13 13:45:33 [15878] vm1        cib: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:524301; real_size:528384; rb->word_size:132096
Nov 13 13:45:33 [15878] vm1        cib: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:524301; real_size:528384; rb->word_size:132096
Nov 13 13:45:33 [15878] vm1        cib: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:524301; real_size:528384; rb->word_size:132096
Nov 13 13:45:33 [15878] vm1        cib: ( callbacks.c:761   )    info: cib_process_request: 	Completed cib_query operation for section 'all': OK (rc=0, origin=local/cibadmin/2, version=0.5.8)
Nov 13 13:45:33 [15878] vm1        cib: (      ipcs.c:757   )   debug: qb_ipcs_dispatch_connection_request: 	HUP conn (15878-15904-14)
Nov 13 13:45:33 [15878] vm1        cib: (      ipcs.c:605   )   debug: qb_ipcs_disconnect: 	qb_ipcs_disconnect(15878-15904-14) state:2
Nov 13 13:45:33 [15878] vm1        cib: (       ipc.c:361   )    info: crm_client_destroy: 	Destroying 0 events
Nov 13 13:45:33 [15878] vm1        cib: (ringbuffer.c:299   )   debug: qb_rb_close: 	Free'ing ringbuffer: /dev/shm/qb-cib_rw-response-15878-15904-14-header
Nov 13 13:45:33 [15878] vm1        cib: (ringbuffer.c:299   )   debug: qb_rb_close: 	Free'ing ringbuffer: /dev/shm/qb-cib_rw-event-15878-15904-14-header
Nov 13 13:45:33 [15878] vm1        cib: (ringbuffer.c:299   )   debug: qb_rb_close: 	Free'ing ringbuffer: /dev/shm/qb-cib_rw-request-15878-15904-14-header
Nov 13 13:45:33 [15878] vm1        cib: (       ipc.c:327   )    info: crm_client_new: 	Connecting 0x15625a0 for uid=0 gid=0 pid=15930 id=7c4f8a2a-0e8f-4fee-a29b-ec5d3557f7b8
Nov 13 13:45:33 [15878] vm1        cib: ( ipc_setup.c:484   )   debug: handle_new_connection: 	IPC credentials authenticated (15878-15930-14)
Nov 13 13:45:33 [15878] vm1        cib: (   ipc_shm.c:295   )   debug: qb_ipcs_shm_connect: 	connecting to client [15930]
Nov 13 13:45:33 [15878] vm1        cib: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:524301; real_size:528384; rb->word_size:132096
Nov 13 13:45:33 [15878] vm1        cib: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:524301; real_size:528384; rb->word_size:132096
Nov 13 13:45:33 [15878] vm1        cib: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:524301; real_size:528384; rb->word_size:132096
Nov 13 13:45:33 [15878] vm1        cib: (        io.c:596   )   debug: activateCibXml: 	Triggering CIB write for cib_replace op
Nov 13 13:45:33 [15883] vm1       crmd: (te_callbacks:122   )   debug: te_update_diff: 	Processing diff (cib_replace): 0.5.8 -> 0.6.1 (S_IDLE)
Nov 13 13:45:33 [15883] vm1       crmd: (  te_utils.c:413   )    info: abort_transition_graph: 	te_update_diff:126 - Triggered transition abort (complete=1, node=, tag=diff, id=(null), magic=NA, cib=0.6.1) : Non-status change
Nov 13 13:45:33 [15883] vm1       crmd: (  te_utils.c:444   )   debug: abort_transition_graph: 	Cause   <diff crm_feature_set="3.0.8" digest="e65af88559035840dce69eaec2069fba">
Nov 13 13:45:33 [15883] vm1       crmd: (  te_utils.c:444   )   debug: abort_transition_graph: 	Cause     <diff-removed admin_epoch="0" epoch="5" num_updates="8">
Nov 13 13:45:33 [15883] vm1       crmd: (  te_utils.c:444   )   debug: abort_transition_graph: 	Cause       <cib admin_epoch="0" epoch="5" num_updates="8">
Nov 13 13:45:33 [15883] vm1       crmd: (  te_utils.c:444   )   debug: abort_transition_graph: 	Cause         <configuration>
Nov 13 13:45:33 [15883] vm1       crmd: (  te_utils.c:444   )   debug: abort_transition_graph: 	Cause           <crm_config>
Nov 13 13:45:33 [15883] vm1       crmd: (  te_utils.c:444   )   debug: abort_transition_graph: 	Cause             <cluster_property_set id="cib-bootstrap-options">
Nov 13 13:45:33 [15883] vm1       crmd: (  te_utils.c:444   )   debug: abort_transition_graph: 	Cause               <nvpair id="cib-bootstrap-options-dc-version" name="dc-version" value="1.1.10-2383f6c" __crm_diff_marker__="removed:top"/>
Nov 13 13:45:33 [15883] vm1       crmd: (  te_utils.c:444   )   debug: abort_transition_graph: 	Cause               <nvpair id="cib-bootstrap-options-cluster-infrastructure" name="cluster-infrastructure" value="corosync" __crm_diff_marker__="removed:top"/>
Nov 13 13:45:33 [15883] vm1       crmd: (  te_utils.c:444   )   debug: abort_transition_graph: 	Cause             </cluster_property_set>
Nov 13 13:45:33 [15883] vm1       crmd: (  te_utils.c:444   )   debug: abort_transition_graph: 	Cause           </crm_config>
Nov 13 13:45:33 [15883] vm1       crmd: (  te_utils.c:444   )   debug: abort_transition_graph: 	Cause         </configuration>
Nov 13 13:45:33 [15883] vm1       crmd: (  te_utils.c:444   )   debug: abort_transition_graph: 	Cause       </cib>
Nov 13 13:45:33 [15883] vm1       crmd: (  te_utils.c:444   )   debug: abort_transition_graph: 	Cause     </diff-removed>
Nov 13 13:45:33 [15883] vm1       crmd: (  te_utils.c:444   )   debug: abort_transition_graph: 	Cause     <diff-added>
Nov 13 13:45:33 [15883] vm1       crmd: (  te_utils.c:444   )   debug: abort_transition_graph: 	Cause       <cib epoch="6" num_updates="1" admin_epoch="0" validate-with="pacemaker-1.2" crm_feature_set="3.0.8" cib-last-written="Wed Nov 13 13:45:33 2013" update-origin="vm1" update-client="cibadmin" have-quorum="1" dc-uuid="3232261517">
Nov 13 13:45:33 [15883] vm1       crmd: (  te_utils.c:444   )   debug: abort_transition_graph: 	Cause         <configuration>
Nov 13 13:45:33 [15883] vm1       crmd: (  te_utils.c:444   )   debug: abort_transition_graph: 	Cause           <crm_config>
Nov 13 13:45:33 [15883] vm1       crmd: (  te_utils.c:444   )   debug: abort_transition_graph: 	Cause             <cluster_property_set id="cib-bootstrap-options">
Nov 13 13:45:33 [15883] vm1       crmd: (  te_utils.c:444   )   debug: abort_transition_graph: 	Cause               <nvpair name="no-quorum-policy" value="freeze" id="cib-bootstrap-options-no-quorum-policy" __crm_diff_marker__="added:top"/>
Nov 13 13:45:33 [15883] vm1       crmd: (  te_utils.c:444   )   debug: abort_transition_graph: 	Cause               <nvpair name="stonith-enabled" value="true" id="cib-bootstrap-options-stonith-enabled" __crm_diff_marker__="added:top"/>
Nov 13 13:45:33 [15883] vm1       crmd: (  te_utils.c:444   )   debug: abort_transition_graph: 	Cause               <nvpair name="startup-fencing" value="false" id="cib-bootstrap-options-startup-fencing" __crm_diff_marker__="added:top"/>
Nov 13 13:45:33 [15883] vm1       crmd: (  te_utils.c:444   )   debug: abort_transition_graph: 	Cause               <nvpair name="stonith-timeout" value="60s" id="cib-bootstrap-options-stonith-timeout" __crm_diff_marker__="added:top"/>
Nov 13 13:45:33 [15883] vm1       crmd: (  te_utils.c:444   )   debug: abort_transition_graph: 	Cause               <nvpair name="crmd-transition-delay" value="2s" id="cib-bootstrap-options-crmd-transition-delay" __crm_diff_marker__="added:top"/>
Nov 13 13:45:33 [15883] vm1       crmd: (  te_utils.c:444   )   debug: abort_transition_graph: 	Cause             </cluster_property_set>
Nov 13 13:45:33 [15883] vm1       crmd: (  te_utils.c:444   )   debug: abort_transition_graph: 	Cause           </crm_config>
Nov 13 13:45:33 [15883] vm1       crmd: (  te_utils.c:444   )   debug: abort_transition_graph: 	Cause           <resources>
Nov 13 13:45:33 [15883] vm1       crmd: (  te_utils.c:444   )   debug: abort_transition_graph: 	Cause             <primitive id="F1" class="stonith" type="external/libvirt" __crm_diff_marker__="added:top">
Nov 13 13:45:33 [15883] vm1       crmd: (  te_utils.c:444   )   debug: abort_transition_graph: 	Cause               <instance_attributes id="F1-instance_attributes">
Nov 13 13:45:33 [15883] vm1       crmd: (  te_utils.c:444   )   debug: abort_transition_graph: 	Cause                 <nvpair name="hostlist" value="vm3" id="F1-instance_attributes-hostlist"/>
Nov 13 13:45:33 [15883] vm1       crmd: (  te_utils.c:444   )   debug: abort_transition_graph: 	Cause                 <nvpair name="hypervisor_uri" value="qemu+ssh://bl460g1n6/system" id="F1-instance_attributes-hypervisor_uri"/>
Nov 13 13:45:33 [15883] vm1       crmd: (  te_utils.c:444   )   debug: abort_transition_graph: 	Cause               </instance_attributes>
Nov 13 13:45:33 [15883] vm1       crmd: (  te_utils.c:444   )   debug: abort_transition_graph: 	Cause               <operations>
Nov 13 13:45:33 [15883] vm1       crmd: (  te_utils.c:444   )   debug: abort_transition_graph: 	Cause                 <op name="start" interval="0s" timeout="60s" id="F1-start-0s"/>
Nov 13 13:45:33 [15883] vm1       crmd: (  te_utils.c:444   )   debug: abort_transition_graph: 	Cause                 <op name="monitor" interval="3600s" timeout="60s" id="F1-monitor-3600s"/>
Nov 13 13:45:33 [15883] vm1       crmd: (  te_utils.c:444   )   debug: abort_transition_graph: 	Cause                 <op name="stop" interval="0s" timeout="60s" id="F1-stop-0s"/>
Nov 13 13:45:33 [15883] vm1       crmd: (  te_utils.c:444   )   debug: abort_transition_graph: 	Cause               </operations>
Nov 13 13:45:33 [15883] vm1       crmd: (  te_utils.c:444   )   debug: abort_transition_graph: 	Cause             </primitive>
Nov 13 13:45:33 [15883] vm1       crmd: (  te_utils.c:444   )   debug: abort_transition_graph: 	Cause             <primitive id="pDummy" class="ocf" provider="pacemaker" type="Dummy" __crm_diff_marker__="added:top">
Nov 13 13:45:33 [15883] vm1       crmd: (  te_utils.c:444   )   debug: abort_transition_graph: 	Cause               <operations>
Nov 13 13:45:33 [15883] vm1       crmd: (  te_utils.c:444   )   debug: abort_transition_graph: 	Cause                 <op name="monitor" interval="10s" timeout="300s" on-fail="fence" id="pDummy-monitor-10s"/>
Nov 13 13:45:33 [15883] vm1       crmd: (  te_utils.c:444   )   debug: abort_transition_graph: 	Cause               </operations>
Nov 13 13:45:33 [15883] vm1       crmd: (  te_utils.c:444   )   debug: abort_transition_graph: 	Cause             </primitive>
Nov 13 13:45:33 [15883] vm1       crmd: (  te_utils.c:444   )   debug: abort_transition_graph: 	Cause           </resources>
Nov 13 13:45:33 [15883] vm1       crmd: (  te_utils.c:444   )   debug: abort_transition_graph: 	Cause           <constraints>
Nov 13 13:45:33 [15883] vm1       crmd: (  te_utils.c:444   )   debug: abort_transition_graph: 	Cause             <rsc_location id="l1" rsc="pDummy" __crm_diff_marker__="added:top">
Nov 13 13:45:33 [15883] vm1       crmd: (  te_utils.c:444   )   debug: abort_transition_graph: 	Cause               <rule score="100" id="l1-rule">
Nov 13 13:45:33 [15883] vm1       crmd: (  te_utils.c:444   )   debug: abort_transition_graph: 	Cause                 <expression attribute="#uname" operation="eq" value="vm3" id="l1-expression"/>
Nov 13 13:45:33 [15883] vm1       crmd: (  te_utils.c:444   )   debug: abort_transition_graph: 	Cause               </rule>
Nov 13 13:45:33 [15883] vm1       crmd: (  te_utils.c:444   )   debug: abort_transition_graph: 	Cause             </rsc_location>
Nov 13 13:45:33 [15883] vm1       crmd: (  te_utils.c:444   )   debug: abort_transition_graph: 	Cause             <rsc_location id="l2" rsc="F1" __crm_diff_marker__="added:top">
Nov 13 13:45:33 [15883] vm1       crmd: (  te_utils.c:444   )   debug: abort_transition_graph: 	Cause               <rule score="100" id="l2-rule">
Nov 13 13:45:33 [15883] vm1       crmd: (  te_utils.c:444   )   debug: abort_transition_graph: 	Cause                 <expression attribute="#uname" operation="eq" value="vm1" id="l2-expression"/>
Nov 13 13:45:33 [15883] vm1       crmd: (  te_utils.c:444   )   debug: abort_transition_graph: 	Cause               </rule>
Nov 13 13:45:33 [15883] vm1       crmd: (  te_utils.c:444   )   debug: abort_transition_graph: 	Cause               <rule score="100" id="l2-rule-0">
Nov 13 13:45:33 [15883] vm1       crmd: (  te_utils.c:444   )   debug: abort_transition_graph: 	Cause                 <expression attribute="#uname" operation="eq" value="vm2" id="l2-expression-0"/>
Nov 13 13:45:33 [15883] vm1       crmd: (  te_utils.c:444   )   debug: abort_transition_graph: 	Cause               </rule>
Nov 13 13:45:33 [15883] vm1       crmd: (  te_utils.c:444   )   debug: abort_transition_graph: 	Cause               <rule score="-INFINITY" id="l2-rule-1">
Nov 13 13:45:33 [15883] vm1       crmd: (  te_utils.c:444   )   debug: abort_transition_graph: 	Cause                 <expression attribute="#uname" operation="eq" value="vm3" id="l2-expression-1"/>
Nov 13 13:45:33 [15883] vm1       crmd: (  te_utils.c:444   )   debug: abort_transition_graph: 	Cause               </rule>
Nov 13 13:45:33 [15883] vm1       crmd: (  te_utils.c:444   )   debug: abort_transition_graph: 	Cause             </rsc_location>
Nov 13 13:45:33 [15883] vm1       crmd: (  te_utils.c:444   )   debug: abort_transition_graph: 	Cause           </constraints>
Nov 13 13:45:33 [15883] vm1       crmd: (  te_utils.c:444   )   debug: abort_transition_graph: 	Cause           <fencing-topology __crm_diff_marker__="added:top">
Nov 13 13:45:33 [15883] vm1       crmd: (  te_utils.c:444   )   debug: abort_transition_graph: 	Cause             <fencing-level target="vm3" devices="F1" index="1" id="fencing"/>
Nov 13 13:45:33 [15883] vm1       crmd: (  te_utils.c:444   )   debug: abort_transition_graph: 	Cause           </fencing-topology>
Nov 13 13:45:33 [15883] vm1       crmd: (  te_utils.c:444   )   debug: abort_transition_graph: 	Cause           <rsc_defaults __crm_diff_marker__="added:top">
Nov 13 13:45:33 [15883] vm1       crmd: (  te_utils.c:444   )   debug: abort_transition_graph: 	Cause             <meta_attributes id="rsc-options">
Nov 13 13:45:33 [15883] vm1       crmd: (  te_utils.c:444   )   debug: abort_transition_graph: 	Cause               <nvpair name="resource-stickiness" value="INFINITY" id="rsc-options-resource-stickiness"/>
Nov 13 13:45:33 [15883] vm1       crmd: (  te_utils.c:444   )   debug: abort_transition_graph: 	Cause               <nvpair name="migration-threshold" value="1" id="rsc-options-migration-threshold"/>
Nov 13 13:45:33 [15883] vm1       crmd: (  te_utils.c:444   )   debug: abort_transition_graph: 	Cause             </meta_attributes>
Nov 13 13:45:33 [15883] vm1       crmd: (  te_utils.c:444   )   debug: abort_transition_graph: 	Cause           </rsc_defaults>
Nov 13 13:45:33 [15883] vm1       crmd: (  te_utils.c:444   )   debug: abort_transition_graph: 	Cause         </configuration>
Nov 13 13:45:33 [15883] vm1       crmd: (  te_utils.c:444   )   debug: abort_transition_graph: 	Cause       </cib>
Nov 13 13:45:33 [15883] vm1       crmd: (  te_utils.c:444   )   debug: abort_transition_graph: 	Cause     </diff-added>
Nov 13 13:45:33 [15883] vm1       crmd: (  te_utils.c:444   )   debug: abort_transition_graph: 	Cause   </diff>
Nov 13 13:45:33 [15883] vm1       crmd: (       fsa.c:193   )   debug: s_crmd_fsa: 	Processing I_PE_CALC: [ state=S_IDLE cause=C_FSA_INTERNAL origin=abort_transition_graph ]
Nov 13 13:45:33 [15883] vm1       crmd: (       fsa.c:502   )  notice: do_state_transition: 	State transition S_IDLE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=abort_transition_graph ]
Nov 13 13:45:33 [15883] vm1       crmd: (       fsa.c:610   )   debug: do_state_transition: 	All 3 cluster nodes are eligible to run resources.
Nov 13 13:45:33 [15883] vm1       crmd: (   pengine.c:231   )   debug: do_pe_invoke: 	Query 32: Requesting the current CIB: S_POLICY_ENGINE
Nov 13 13:45:33 [15879] vm1 stonith-ng: ( cib_utils.c:167   )   debug: Config update: 	Diff: --- 0.5.8
Nov 13 13:45:33 [15879] vm1 stonith-ng: ( cib_utils.c:169   )   debug: Config update: 	Diff: +++ 0.6.1 e65af88559035840dce69eaec2069fba
Nov 13 13:45:33 [15879] vm1 stonith-ng: (       xml.c:1496  )   debug: Config update: 	-  <cib admin_epoch="0" epoch="5" num_updates="8">
Nov 13 13:45:33 [15879] vm1 stonith-ng: (       xml.c:1496  )   debug: Config update: 	-    <configuration>
Nov 13 13:45:33 [15879] vm1 stonith-ng: (       xml.c:1496  )   debug: Config update: 	-      <crm_config>
Nov 13 13:45:33 [15879] vm1 stonith-ng: (       xml.c:1496  )   debug: Config update: 	-        <cluster_property_set id="cib-bootstrap-options">
Nov 13 13:45:33 [15879] vm1 stonith-ng: (       xml.c:1496  )   debug: Config update: 	--         <nvpair id="cib-bootstrap-options-dc-version" name="dc-version" value="1.1.10-2383f6c"/>
Nov 13 13:45:33 [15879] vm1 stonith-ng: (       xml.c:1496  )   debug: Config update: 	--         <nvpair id="cib-bootstrap-options-cluster-infrastructure" name="cluster-infrastructure" value="corosync"/>
Nov 13 13:45:33 [15879] vm1 stonith-ng: (       xml.c:1496  )   debug: Config update: 	-        </cluster_property_set>
Nov 13 13:45:33 [15879] vm1 stonith-ng: (       xml.c:1496  )   debug: Config update: 	-      </crm_config>
Nov 13 13:45:33 [15879] vm1 stonith-ng: (       xml.c:1496  )   debug: Config update: 	-    </configuration>
Nov 13 13:45:33 [15879] vm1 stonith-ng: (       xml.c:1496  )   debug: Config update: 	-  </cib>
Nov 13 13:45:33 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+  <cib epoch="6" num_updates="1" admin_epoch="0" validate-with="pacemaker-1.2" crm_feature_set="3.0.8" cib-last-written="Wed Nov 13 13:45:33 2013" update-origin="vm1" update-client="cibadmin" have-quorum="1" dc-uuid="3232261517">
Nov 13 13:45:33 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+    <configuration>
Nov 13 13:45:33 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+      <crm_config>
Nov 13 13:45:33 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+        <cluster_property_set id="cib-bootstrap-options">
Nov 13 13:45:33 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++         <nvpair name="no-quorum-policy" value="freeze" id="cib-bootstrap-options-no-quorum-policy"/>
Nov 13 13:45:33 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++         <nvpair name="stonith-enabled" value="true" id="cib-bootstrap-options-stonith-enabled"/>
Nov 13 13:45:33 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++         <nvpair name="startup-fencing" value="false" id="cib-bootstrap-options-startup-fencing"/>
Nov 13 13:45:33 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++         <nvpair name="stonith-timeout" value="60s" id="cib-bootstrap-options-stonith-timeout"/>
Nov 13 13:45:33 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++         <nvpair name="crmd-transition-delay" value="2s" id="cib-bootstrap-options-crmd-transition-delay"/>
Nov 13 13:45:33 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+        </cluster_property_set>
Nov 13 13:45:33 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+      </crm_config>
Nov 13 13:45:33 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+      <resources>
Nov 13 13:45:33 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++       <primitive id="F1" class="stonith" type="external/libvirt">
Nov 13 13:45:33 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++         <instance_attributes id="F1-instance_attributes">
Nov 13 13:45:33 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++           <nvpair name="hostlist" value="vm3" id="F1-instance_attributes-hostlist"/>
Nov 13 13:45:33 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++           <nvpair name="hypervisor_uri" value="qemu+ssh://bl460g1n6/system" id="F1-instance_attributes-hypervisor_uri"/>
Nov 13 13:45:33 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++         </instance_attributes>
Nov 13 13:45:33 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++         <operations>
Nov 13 13:45:33 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++           <op name="start" interval="0s" timeout="60s" id="F1-start-0s"/>
Nov 13 13:45:33 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++           <op name="monitor" interval="3600s" timeout="60s" id="F1-monitor-3600s"/>
Nov 13 13:45:33 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++           <op name="stop" interval="0s" timeout="60s" id="F1-stop-0s"/>
Nov 13 13:45:33 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++         </operations>
Nov 13 13:45:33 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++       </primitive>
Nov 13 13:45:33 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++       <primitive id="pDummy" class="ocf" provider="pacemaker" type="Dummy">
Nov 13 13:45:33 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++         <operations>
Nov 13 13:45:33 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++           <op name="monitor" interval="10s" timeout="300s" on-fail="fence" id="pDummy-monitor-10s"/>
Nov 13 13:45:33 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++         </operations>
Nov 13 13:45:33 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++       </primitive>
Nov 13 13:45:33 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+      </resources>
Nov 13 13:45:33 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+      <constraints>
Nov 13 13:45:33 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++       <rsc_location id="l1" rsc="pDummy">
Nov 13 13:45:33 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++         <rule score="100" id="l1-rule">
Nov 13 13:45:33 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++           <expression attribute="#uname" operation="eq" value="vm3" id="l1-expression"/>
Nov 13 13:45:33 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++         </rule>
Nov 13 13:45:33 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++       </rsc_location>
Nov 13 13:45:33 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++       <rsc_location id="l2" rsc="F1">
Nov 13 13:45:33 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++         <rule score="100" id="l2-rule">
Nov 13 13:45:33 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++           <expression attribute="#uname" operation="eq" value="vm1" id="l2-expression"/>
Nov 13 13:45:33 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++         </rule>
Nov 13 13:45:33 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++         <rule score="100" id="l2-rule-0">
Nov 13 13:45:33 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++           <expression attribute="#uname" operation="eq" value="vm2" id="l2-expression-0"/>
Nov 13 13:45:33 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++         </rule>
Nov 13 13:45:33 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++         <rule score="-INFINITY" id="l2-rule-1">
Nov 13 13:45:33 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++           <expression attribute="#uname" operation="eq" value="vm3" id="l2-expression-1"/>
Nov 13 13:45:33 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++         </rule>
Nov 13 13:45:33 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++       </rsc_location>
Nov 13 13:45:33 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+      </constraints>
Nov 13 13:45:33 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++     <fencing-topology>
Nov 13 13:45:33 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++       <fencing-level target="vm3" devices="F1" index="1" id="fencing"/>
Nov 13 13:45:33 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++     </fencing-topology>
Nov 13 13:45:33 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++     <rsc_defaults>
Nov 13 13:45:33 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++       <meta_attributes id="rsc-options">
Nov 13 13:45:33 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++         <nvpair name="resource-stickiness" value="INFINITY" id="rsc-options-resource-stickiness"/>
Nov 13 13:45:33 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++         <nvpair name="migration-threshold" value="1" id="rsc-options-migration-threshold"/>
Nov 13 13:45:33 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++       </meta_attributes>
Nov 13 13:45:33 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++     </rsc_defaults>
Nov 13 13:45:33 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+    </configuration>
Nov 13 13:45:33 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+  </cib>
Nov 13 13:45:33 [15879] vm1 stonith-ng: (      main.c:521   )   trace: register_fencing_topology: 	Updating vm3[1] (fencing) to F1
Nov 13 13:45:33 [15879] vm1 stonith-ng: (  commands.c:970   )    info: stonith_level_remove: 	Node vm3 not found (0 active entries)
Nov 13 13:45:33 [15879] vm1 stonith-ng: (      main.c:369   )   trace: do_stonith_notify: 	Notifying clients
Nov 13 13:45:33 [15879] vm1 stonith-ng: (      main.c:372   )   trace: do_stonith_notify: 	Notify complete
Nov 13 13:45:33 [15879] vm1 stonith-ng: (  commands.c:937   )   trace: stonith_level_register: 	Added vm3 to the topology (1 active entries)
Nov 13 13:45:33 [15879] vm1 stonith-ng: (  commands.c:948   )   trace: stonith_level_register: 	Adding device 'F1' for vm3 (1)
Nov 13 13:45:33 [15879] vm1 stonith-ng: (  commands.c:952   )    info: stonith_level_register: 	Node vm3 has 1 active fencing levels
Nov 13 13:45:33 [15879] vm1 stonith-ng: (      main.c:369   )   trace: do_stonith_notify: 	Notifying clients
Nov 13 13:45:33 [15879] vm1 stonith-ng: (      main.c:372   )   trace: do_stonith_notify: 	Notify complete
Nov 13 13:45:33 [15879] vm1 stonith-ng: (      main.c:758   )   trace: update_cib_stonith_devices: 	new constraint   <rsc_location id="l1" rsc="pDummy" __crm_diff_marker__="added:top">
Nov 13 13:45:33 [15879] vm1 stonith-ng: (      main.c:758   )   trace: update_cib_stonith_devices: 	new constraint     <rule score="100" id="l1-rule">
Nov 13 13:45:33 [15879] vm1 stonith-ng: (      main.c:758   )   trace: update_cib_stonith_devices: 	new constraint       <expression attribute="#uname" operation="eq" value="vm3" id="l1-expression"/>
Nov 13 13:45:33 [15879] vm1 stonith-ng: (      main.c:758   )   trace: update_cib_stonith_devices: 	new constraint     </rule>
Nov 13 13:45:33 [15879] vm1 stonith-ng: (      main.c:758   )   trace: update_cib_stonith_devices: 	new constraint   </rsc_location>
Nov 13 13:45:33 [15879] vm1 stonith-ng: (      main.c:758   )   trace: update_cib_stonith_devices: 	new constraint   <rsc_location id="l2" rsc="F1" __crm_diff_marker__="added:top">
Nov 13 13:45:33 [15879] vm1 stonith-ng: (      main.c:758   )   trace: update_cib_stonith_devices: 	new constraint     <rule score="100" id="l2-rule">
Nov 13 13:45:33 [15879] vm1 stonith-ng: (      main.c:758   )   trace: update_cib_stonith_devices: 	new constraint       <expression attribute="#uname" operation="eq" value="vm1" id="l2-expression"/>
Nov 13 13:45:33 [15879] vm1 stonith-ng: (      main.c:758   )   trace: update_cib_stonith_devices: 	new constraint     </rule>
Nov 13 13:45:33 [15879] vm1 stonith-ng: (      main.c:758   )   trace: update_cib_stonith_devices: 	new constraint     <rule score="100" id="l2-rule-0">
Nov 13 13:45:33 [15879] vm1 stonith-ng: (      main.c:758   )   trace: update_cib_stonith_devices: 	new constraint       <expression attribute="#uname" operation="eq" value="vm2" id="l2-expression-0"/>
Nov 13 13:45:33 [15879] vm1 stonith-ng: (      main.c:758   )   trace: update_cib_stonith_devices: 	new constraint     </rule>
Nov 13 13:45:33 [15879] vm1 stonith-ng: (      main.c:758   )   trace: update_cib_stonith_devices: 	new constraint     <rule score="-INFINITY" id="l2-rule-1">
Nov 13 13:45:33 [15879] vm1 stonith-ng: (      main.c:758   )   trace: update_cib_stonith_devices: 	new constraint       <expression attribute="#uname" operation="eq" value="vm3" id="l2-expression-1"/>
Nov 13 13:45:33 [15879] vm1 stonith-ng: (      main.c:758   )   trace: update_cib_stonith_devices: 	new constraint     </rule>
Nov 13 13:45:33 [15879] vm1 stonith-ng: (      main.c:758   )   trace: update_cib_stonith_devices: 	new constraint   </rsc_location>
Nov 13 13:45:33 [15879] vm1 stonith-ng: (      main.c:787   )   trace: update_cib_stonith_devices: 	Fencing resource F1 was added or modified
Nov 13 13:45:33 [15879] vm1 stonith-ng: (      main.c:795   )    info: update_cib_stonith_devices: 	Updating device list from the cib: new resource
Nov 13 13:45:33 [15879] vm1 stonith-ng: (    unpack.c:98    )   debug: unpack_config: 	STONITH timeout: 60000
Nov 13 13:45:33 [15879] vm1 stonith-ng: (    unpack.c:102   )   debug: unpack_config: 	STONITH of failed nodes is enabled
Nov 13 13:45:33 [15879] vm1 stonith-ng: (    unpack.c:109   )   debug: unpack_config: 	Stop all active resources: false
Nov 13 13:45:33 [15879] vm1 stonith-ng: (    unpack.c:113   )   debug: unpack_config: 	Cluster is symmetric - resources can run anywhere by default
Nov 13 13:45:33 [15879] vm1 stonith-ng: (    unpack.c:118   )   debug: unpack_config: 	Default stickiness: 0
Nov 13 13:45:33 [15879] vm1 stonith-ng: (    unpack.c:152   )   debug: unpack_config: 	On loss of CCM Quorum: Freeze resources
Nov 13 13:45:33 [15879] vm1 stonith-ng: (    unpack.c:201   )   debug: unpack_config: 	Node scores: 'red' = -INFINITY, 'yellow' = 0, 'green' = 0
Nov 13 13:45:33 [15878] vm1        cib: (    notify.c:384   )    info: cib_replace_notify: 	Replaced: 0.5.8 -> 0.6.1 from vm1
Nov 13 13:45:33 [15883] vm1       crmd: (       cib.c:110   )   debug: do_cib_replaced: 	Updating the CIB after a replace: DC=true
Nov 13 13:45:33 [15883] vm1       crmd: (       fsa.c:193   )   debug: s_crmd_fsa: 	Processing I_ELECTION: [ state=S_POLICY_ENGINE cause=C_FSA_INTERNAL origin=do_cib_replaced ]
Nov 13 13:45:33 [15883] vm1       crmd: (       fsa.c:502   )    info: do_state_transition: 	State transition S_POLICY_ENGINE -> S_ELECTION [ input=I_ELECTION cause=C_FSA_INTERNAL origin=do_cib_replaced ]
Nov 13 13:45:33 [15883] vm1       crmd: (     utils.c:984   )    info: update_dc: 	Unset DC. Was vm1
Nov 13 13:45:33 [15883] vm1       crmd: (  election.c:172   )   debug: crm_uptime: 	Current CPU usage is: 0s, 53991us
Nov 13 13:45:33 [15883] vm1       crmd: (  election.c:242   )   debug: election_vote: 	Started election 3
Nov 13 13:45:33 [15881] vm1      attrd: (      main.c:110   )  notice: attrd_cib_replaced_cb: 	Updating all attributes after cib_refresh_notify event
Nov 13 13:45:33 [15883] vm1       crmd: (  election.c:352   )   debug: election_count_vote: 	Created voted hash
Nov 13 13:45:33 [15883] vm1       crmd: (  election.c:485   )   debug: election_count_vote: 	Election 3 (current: 3, owner: 3232261517): Processed vote from vm1 (Recorded)
Nov 13 13:45:33 [15883] vm1       crmd: (  election.c:297   )   debug: election_check: 	Still waiting on 2 non-votes (3 total)
Nov 13 13:45:33 [15879] vm1 stonith-ng: (    unpack.c:418   ) warning: handle_startup_fencing: 	Blind faith: not fencing unseen nodes
Nov 13 13:45:33 [15879] vm1 stonith-ng: (    unpack.c:509   )   debug: unpack_domains: 	Unpacking domains
Nov 13 13:45:33 [15878] vm1        cib: ( cib_utils.c:167   )  notice: cib:diff: 	Diff: --- 0.5.8
Nov 13 13:45:33 [15878] vm1        cib: ( cib_utils.c:169   )  notice: cib:diff: 	Diff: +++ 0.6.1 e65af88559035840dce69eaec2069fba
Nov 13 13:45:33 [15878] vm1        cib: (       xml.c:1496  )  notice: cib:diff: 	--         <nvpair id="cib-bootstrap-options-dc-version" name="dc-version" value="1.1.10-2383f6c"/>
Nov 13 13:45:33 [15878] vm1        cib: (       xml.c:1496  )  notice: cib:diff: 	--         <nvpair id="cib-bootstrap-options-cluster-infrastructure" name="cluster-infrastructure" value="corosync"/>
Nov 13 13:45:33 [15878] vm1        cib: (       xml.c:1507  )  notice: cib:diff: 	++         <nvpair name="no-quorum-policy" value="freeze" id="cib-bootstrap-options-no-quorum-policy"/>
Nov 13 13:45:33 [15878] vm1        cib: (       xml.c:1507  )  notice: cib:diff: 	++         <nvpair name="stonith-enabled" value="true" id="cib-bootstrap-options-stonith-enabled"/>
Nov 13 13:45:33 [15878] vm1        cib: (       xml.c:1507  )  notice: cib:diff: 	++         <nvpair name="startup-fencing" value="false" id="cib-bootstrap-options-startup-fencing"/>
Nov 13 13:45:33 [15878] vm1        cib: (       xml.c:1507  )  notice: cib:diff: 	++         <nvpair name="stonith-timeout" value="60s" id="cib-bootstrap-options-stonith-timeout"/>
Nov 13 13:45:33 [15878] vm1        cib: (       xml.c:1507  )  notice: cib:diff: 	++         <nvpair name="crmd-transition-delay" value="2s" id="cib-bootstrap-options-crmd-transition-delay"/>
Nov 13 13:45:33 [15878] vm1        cib: (       xml.c:1507  )  notice: cib:diff: 	++       <primitive id="F1" class="stonith" type="external/libvirt">
Nov 13 13:45:33 [15878] vm1        cib: (       xml.c:1507  )  notice: cib:diff: 	++         <instance_attributes id="F1-instance_attributes">
Nov 13 13:45:33 [15878] vm1        cib: (       xml.c:1507  )  notice: cib:diff: 	++           <nvpair name="hostlist" value="vm3" id="F1-instance_attributes-hostlist"/>
Nov 13 13:45:33 [15878] vm1        cib: (       xml.c:1507  )  notice: cib:diff: 	++           <nvpair name="hypervisor_uri" value="qemu+ssh://bl460g1n6/system" id="F1-instance_attributes-hypervisor_uri"/>
Nov 13 13:45:33 [15878] vm1        cib: (       xml.c:1507  )  notice: cib:diff: 	++         </instance_attributes>
Nov 13 13:45:33 [15878] vm1        cib: (       xml.c:1507  )  notice: cib:diff: 	++         <operations>
Nov 13 13:45:33 [15878] vm1        cib: (       xml.c:1507  )  notice: cib:diff: 	++           <op name="start" interval="0s" timeout="60s" id="F1-start-0s"/>
Nov 13 13:45:33 [15878] vm1        cib: (       xml.c:1507  )  notice: cib:diff: 	++           <op name="monitor" interval="3600s" timeout="60s" id="F1-monitor-3600s"/>
Nov 13 13:45:33 [15878] vm1        cib: (       xml.c:1507  )  notice: cib:diff: 	++           <op name="stop" interval="0s" timeout="60s" id="F1-stop-0s"/>
Nov 13 13:45:33 [15878] vm1        cib: (       xml.c:1507  )  notice: cib:diff: 	++         </operations>
Nov 13 13:45:33 [15878] vm1        cib: (       xml.c:1507  )  notice: cib:diff: 	++       </primitive>
Nov 13 13:45:33 [15878] vm1        cib: (       xml.c:1507  )  notice: cib:diff: 	++       <primitive id="pDummy" class="ocf" provider="pacemaker" type="Dummy">
Nov 13 13:45:33 [15878] vm1        cib: (       xml.c:1507  )  notice: cib:diff: 	++         <operations>
Nov 13 13:45:33 [15878] vm1        cib: (       xml.c:1507  )  notice: cib:diff: 	++           <op name="monitor" interval="10s" timeout="300s" on-fail="fence" id="pDummy-monitor-10s"/>
Nov 13 13:45:33 [15878] vm1        cib: (       xml.c:1507  )  notice: cib:diff: 	++         </operations>
Nov 13 13:45:33 [15878] vm1        cib: (       xml.c:1507  )  notice: cib:diff: 	++       </primitive>
Nov 13 13:45:33 [15883] vm1       crmd: (  election.c:485   )   debug: election_count_vote: 	Election 3 (current: 3, owner: 3232261517): Processed no-vote from vm2 (Recorded)
Nov 13 13:45:33 [15883] vm1       crmd: (  election.c:297   )   debug: election_check: 	Still waiting on 1 non-votes (3 total)
Nov 13 13:45:33 [15878] vm1        cib: (       xml.c:1507  )  notice: cib:diff: 	++       <rsc_location id="l1" rsc="pDummy">
Nov 13 13:45:33 [15878] vm1        cib: (       xml.c:1507  )  notice: cib:diff: 	++         <rule score="100" id="l1-rule">
Nov 13 13:45:33 [15883] vm1       crmd: (  election.c:485   )   debug: election_count_vote: 	Election 3 (current: 3, owner: 3232261517): Processed no-vote from vm3 (Recorded)
Nov 13 13:45:33 [15883] vm1       crmd: (  election.c:50    )    info: election_timer_cb: 	Election election-0 complete
Nov 13 13:45:33 [15883] vm1       crmd: (   control.c:60    )    info: election_timeout_popped: 	Election failed: Declaring ourselves the winner
Nov 13 13:45:33 [15883] vm1       crmd: (       fsa.c:193   )   debug: s_crmd_fsa: 	Processing I_ELECTION_DC: [ state=S_ELECTION cause=C_TIMER_POPPED origin=election_timeout_popped ]
Nov 13 13:45:33 [15883] vm1       crmd: (      misc.c:47    )    info: do_log: 	FSA: Input I_ELECTION_DC from election_timeout_popped() received in state S_ELECTION
Nov 13 13:45:33 [15883] vm1       crmd: (       fsa.c:502   )  notice: do_state_transition: 	State transition S_ELECTION -> S_INTEGRATION [ input=I_ELECTION_DC cause=C_TIMER_POPPED origin=election_timeout_popped ]
Nov 13 13:45:33 [15883] vm1       crmd: (   tengine.c:98    )   debug: do_te_control: 	The transitioner is already active
Nov 13 13:45:33 [15883] vm1       crmd: (     utils.c:192   )   debug: crm_timer_start: 	Started Integration Timer (I_INTEGRATED:180000ms), src=48
Nov 13 13:45:33 [15883] vm1       crmd: (  election.c:178   )    info: do_dc_takeover: 	Taking over DC status for this partition
Nov 13 13:45:33 [15878] vm1        cib: (       xml.c:1507  )  notice: cib:diff: 	++           <expression attribute="#uname" operation="eq" value="vm3" id="l1-expression"/>
Nov 13 13:45:33 [15878] vm1        cib: (       xml.c:1507  )  notice: cib:diff: 	++         </rule>
Nov 13 13:45:33 [15878] vm1        cib: (       xml.c:1507  )  notice: cib:diff: 	++       </rsc_location>
Nov 13 13:45:33 [15878] vm1        cib: (       xml.c:1507  )  notice: cib:diff: 	++       <rsc_location id="l2" rsc="F1">
Nov 13 13:45:33 [15878] vm1        cib: (       xml.c:1507  )  notice: cib:diff: 	++         <rule score="100" id="l2-rule">
Nov 13 13:45:33 [15878] vm1        cib: (       xml.c:1507  )  notice: cib:diff: 	++           <expression attribute="#uname" operation="eq" value="vm1" id="l2-expression"/>
Nov 13 13:45:33 [15878] vm1        cib: (       xml.c:1507  )  notice: cib:diff: 	++         </rule>
Nov 13 13:45:33 [15878] vm1        cib: (       xml.c:1507  )  notice: cib:diff: 	++         <rule score="100" id="l2-rule-0">
Nov 13 13:45:33 [15878] vm1        cib: (       xml.c:1507  )  notice: cib:diff: 	++           <expression attribute="#uname" operation="eq" value="vm2" id="l2-expression-0"/>
Nov 13 13:45:33 [15878] vm1        cib: (       xml.c:1507  )  notice: cib:diff: 	++         </rule>
Nov 13 13:45:33 [15878] vm1        cib: (       xml.c:1507  )  notice: cib:diff: 	++         <rule score="-INFINITY" id="l2-rule-1">
Nov 13 13:45:33 [15878] vm1        cib: (       xml.c:1507  )  notice: cib:diff: 	++           <expression attribute="#uname" operation="eq" value="vm3" id="l2-expression-1"/>
Nov 13 13:45:33 [15878] vm1        cib: (       xml.c:1507  )  notice: cib:diff: 	++         </rule>
Nov 13 13:45:33 [15878] vm1        cib: (       xml.c:1507  )  notice: cib:diff: 	++       </rsc_location>
Nov 13 13:45:33 [15878] vm1        cib: (       xml.c:1507  )  notice: cib:diff: 	++     <fencing-topology>
Nov 13 13:45:33 [15878] vm1        cib: (       xml.c:1507  )  notice: cib:diff: 	++       <fencing-level target="vm3" devices="F1" index="1" id="fencing"/>
Nov 13 13:45:33 [15878] vm1        cib: (       xml.c:1507  )  notice: cib:diff: 	++     </fencing-topology>
Nov 13 13:45:33 [15878] vm1        cib: (       xml.c:1507  )  notice: cib:diff: 	++     <rsc_defaults>
Nov 13 13:45:33 [15878] vm1        cib: (       xml.c:1507  )  notice: cib:diff: 	++       <meta_attributes id="rsc-options">
Nov 13 13:45:33 [15878] vm1        cib: (       xml.c:1507  )  notice: cib:diff: 	++         <nvpair name="resource-stickiness" value="INFINITY" id="rsc-options-resource-stickiness"/>
Nov 13 13:45:33 [15878] vm1        cib: (       xml.c:1507  )  notice: cib:diff: 	++         <nvpair name="migration-threshold" value="1" id="rsc-options-migration-threshold"/>
Nov 13 13:45:33 [15878] vm1        cib: (       xml.c:1507  )  notice: cib:diff: 	++       </meta_attributes>
Nov 13 13:45:33 [15878] vm1        cib: (       xml.c:1507  )  notice: cib:diff: 	++     </rsc_defaults>
Nov 13 13:45:33 [15878] vm1        cib: ( callbacks.c:761   )    info: cib_process_request: 	Completed cib_replace operation for section 'all': OK (rc=0, origin=local/cibadmin/2, version=0.6.1)
Nov 13 13:45:33 [15879] vm1 stonith-ng: (      main.c:666   )    info: cib_device_update: 	Device F1 is allowed on vm1: score=100
Nov 13 13:45:33 [15879] vm1 stonith-ng: (      main.c:675   )   trace: cib_device_update: 	 hostlist=vm3
Nov 13 13:45:33 [15879] vm1 stonith-ng: (      main.c:675   )   trace: cib_device_update: 	 hypervisor_uri=qemu+ssh://bl460g1n6/system
Nov 13 13:45:33 [15878] vm1        cib: ( callbacks.c:761   )    info: cib_process_request: 	Completed cib_query operation for section 'all': OK (rc=0, origin=local/crmd/32, version=0.6.1)
Nov 13 13:45:33 [15878] vm1        cib: ( callbacks.c:761   )    info: cib_process_request: 	Completed cib_query operation for section crm_config: OK (rc=0, origin=local/crmd/33, version=0.6.1)
Nov 13 13:45:33 [15878] vm1        cib: ( callbacks.c:761   )    info: cib_process_request: 	Completed cib_modify operation for section nodes: OK (rc=0, origin=local/crmd/34, version=0.6.1)
Nov 13 13:45:33 [15879] vm1 stonith-ng: ( st_client.c:576   )    info: stonith_action_create: 	Initiating action metadata for agent fence_legacy (target=(null))
Nov 13 13:45:33 [15879] vm1 stonith-ng: ( st_client.c:739   )   debug: internal_stonith_action_execute: 	forking
Nov 13 13:45:33 [15878] vm1        cib: ( callbacks.c:761   )    info: cib_process_request: 	Completed cib_modify operation for section status: OK (rc=0, origin=local/crmd/35, version=0.6.1)
Nov 13 13:45:33 [15878] vm1        cib: (  messages.c:167   )   debug: cib_process_readwrite: 	We are still in R/W mode
Nov 13 13:45:33 [15878] vm1        cib: ( callbacks.c:761   )    info: cib_process_request: 	Completed cib_master operation for section 'all': OK (rc=0, origin=local/crmd/36, version=0.6.1)
Nov 13 13:45:33 [15878] vm1        cib: (      ipcs.c:757   )   debug: qb_ipcs_dispatch_connection_request: 	HUP conn (15878-15930-14)
Nov 13 13:45:33 [15878] vm1        cib: (      ipcs.c:605   )   debug: qb_ipcs_disconnect: 	qb_ipcs_disconnect(15878-15930-14) state:2
Nov 13 13:45:33 [15878] vm1        cib: (       ipc.c:361   )    info: crm_client_destroy: 	Destroying 0 events
Nov 13 13:45:33 [15878] vm1        cib: (ringbuffer.c:299   )   debug: qb_rb_close: 	Free'ing ringbuffer: /dev/shm/qb-cib_rw-response-15878-15930-14-header
Nov 13 13:45:33 [15878] vm1        cib: (ringbuffer.c:299   )   debug: qb_rb_close: 	Free'ing ringbuffer: /dev/shm/qb-cib_rw-event-15878-15930-14-header
Nov 13 13:45:33 [15878] vm1        cib: (ringbuffer.c:299   )   debug: qb_rb_close: 	Free'ing ringbuffer: /dev/shm/qb-cib_rw-request-15878-15930-14-header
Nov 13 13:45:33 [15879] vm1 stonith-ng: ( st_client.c:785   )   debug: internal_stonith_action_execute: 	sending args
Nov 13 13:45:33 [15878] vm1        cib: ( callbacks.c:761   )    info: cib_process_request: 	Completed cib_modify operation for section cib: OK (rc=0, origin=local/crmd/37, version=0.6.1)
Nov 13 13:45:33 [15878] vm1        cib: (   cib_ops.c:905   )   debug: cib_process_xpath: 	cib_query: //cib/configuration/crm_config//cluster_property_set//nvpair[@name='dc-version'] does not exist
Nov 13 13:45:33 [15878] vm1        cib: ( callbacks.c:761   )    info: cib_process_request: 	Completed cib_query operation for section //cib/configuration/crm_config//cluster_property_set//nvpair[@name='dc-version']: No such device or address (rc=-6, origin=local/crmd/38, version=0.6.1)
Nov 13 13:45:33 [15878] vm1        cib: (        io.c:596   )   debug: activateCibXml: 	Triggering CIB write for cib_modify op
Nov 13 13:45:33 [15878] vm1        cib: ( cib_utils.c:174   )  notice: log_cib_diff: 	cib:diff: Local-only Change: 0.7.1
Nov 13 13:45:33 [15878] vm1        cib: (       xml.c:1496  )  notice: cib:diff: 	-- <cib admin_epoch="0" epoch="6" num_updates="1"/>
Nov 13 13:45:33 [15878] vm1        cib: (       xml.c:1507  )  notice: cib:diff: 	++         <nvpair id="cib-bootstrap-options-dc-version" name="dc-version" value="1.1.10-2383f6c"/>
Nov 13 13:45:33 [15878] vm1        cib: ( callbacks.c:761   )    info: cib_process_request: 	Completed cib_modify operation for section crm_config: OK (rc=0, origin=local/crmd/39, version=0.7.1)
Nov 13 13:45:33 [15878] vm1        cib: (   cib_ops.c:905   )   debug: cib_process_xpath: 	cib_query: //cib/configuration/crm_config//cluster_property_set//nvpair[@name='cluster-infrastructure'] does not exist
Nov 13 13:45:33 [15878] vm1        cib: ( callbacks.c:761   )    info: cib_process_request: 	Completed cib_query operation for section //cib/configuration/crm_config//cluster_property_set//nvpair[@name='cluster-infrastructure']: No such device or address (rc=-6, origin=local/crmd/40, version=0.7.1)
Nov 13 13:45:33 [15883] vm1       crmd: (   join_dc.c:82    )   debug: initialize_join: 	join-3: Initializing join data (flag=true)
Nov 13 13:45:33 [15883] vm1       crmd: (   join_dc.c:61    )    info: crm_update_peer_join: 	initialize_join: Node vm3[3232261519] - join-3 phase 4 -> 0
Nov 13 13:45:33 [15883] vm1       crmd: (   join_dc.c:61    )    info: crm_update_peer_join: 	initialize_join: Node vm1[3232261517] - join-3 phase 4 -> 0
Nov 13 13:45:33 [15883] vm1       crmd: (   join_dc.c:61    )    info: crm_update_peer_join: 	initialize_join: Node vm2[3232261518] - join-3 phase 4 -> 0
Nov 13 13:45:33 [15883] vm1       crmd: (   join_dc.c:140   )    info: join_make_offer: 	join-3: Sending offer to vm3
Nov 13 13:45:33 [15883] vm1       crmd: (   join_dc.c:66    )    info: crm_update_peer_join: 	join_make_offer: Node vm3[3232261519] - join-3 phase 0 -> 1
Nov 13 13:45:33 [15883] vm1       crmd: (   join_dc.c:140   )    info: join_make_offer: 	join-3: Sending offer to vm1
Nov 13 13:45:33 [15883] vm1       crmd: (   join_dc.c:66    )    info: crm_update_peer_join: 	join_make_offer: Node vm1[3232261517] - join-3 phase 0 -> 1
Nov 13 13:45:33 [15883] vm1       crmd: (   join_dc.c:140   )    info: join_make_offer: 	join-3: Sending offer to vm2
Nov 13 13:45:33 [15883] vm1       crmd: (   join_dc.c:66    )    info: crm_update_peer_join: 	join_make_offer: Node vm2[3232261518] - join-3 phase 0 -> 1
Nov 13 13:45:33 [15883] vm1       crmd: (   join_dc.c:173   )    info: do_dc_join_offer_all: 	join-3: Waiting on 3 outstanding join acks
Nov 13 13:45:33 [15883] vm1       crmd: (       fsa.c:193   )   debug: s_crmd_fsa: 	Processing I_ELECTION_DC: [ state=S_INTEGRATION cause=C_FSA_INTERNAL origin=do_election_check ]
Nov 13 13:45:33 [15883] vm1       crmd: (      misc.c:47    ) warning: do_log: 	FSA: Input I_ELECTION_DC from do_election_check() received in state S_INTEGRATION
Nov 13 13:45:33 [15883] vm1       crmd: (  election.c:242   )   debug: election_vote: 	Started election 4
Nov 13 13:45:33 [15883] vm1       crmd: (   join_dc.c:82    )   debug: initialize_join: 	join-4: Initializing join data (flag=true)
Nov 13 13:45:33 [15883] vm1       crmd: (   join_dc.c:61    )    info: crm_update_peer_join: 	initialize_join: Node vm3[3232261519] - join-4 phase 1 -> 0
Nov 13 13:45:33 [15883] vm1       crmd: (   join_dc.c:61    )    info: crm_update_peer_join: 	initialize_join: Node vm1[3232261517] - join-4 phase 1 -> 0
Nov 13 13:45:33 [15883] vm1       crmd: (   join_dc.c:61    )    info: crm_update_peer_join: 	initialize_join: Node vm2[3232261518] - join-4 phase 1 -> 0
Nov 13 13:45:33 [15883] vm1       crmd: (   join_dc.c:140   )    info: join_make_offer: 	join-4: Sending offer to vm3
Nov 13 13:45:33 [15883] vm1       crmd: (   join_dc.c:66    )    info: crm_update_peer_join: 	join_make_offer: Node vm3[3232261519] - join-4 phase 0 -> 1
Nov 13 13:45:33 [15883] vm1       crmd: (   join_dc.c:140   )    info: join_make_offer: 	join-4: Sending offer to vm1
Nov 13 13:45:33 [15883] vm1       crmd: (   join_dc.c:66    )    info: crm_update_peer_join: 	join_make_offer: Node vm1[3232261517] - join-4 phase 0 -> 1
Nov 13 13:45:33 [15883] vm1       crmd: (   join_dc.c:140   )    info: join_make_offer: 	join-4: Sending offer to vm2
Nov 13 13:45:33 [15883] vm1       crmd: (   join_dc.c:66    )    info: crm_update_peer_join: 	join_make_offer: Node vm2[3232261518] - join-4 phase 0 -> 1
Nov 13 13:45:33 [15883] vm1       crmd: (   join_dc.c:173   )    info: do_dc_join_offer_all: 	join-4: Waiting on 3 outstanding join acks
Nov 13 13:45:33 [15883] vm1       crmd: (   pengine.c:260   )   debug: do_pe_invoke_callback: 	Discarding PE request in state: S_INTEGRATION
Nov 13 13:45:33 [15883] vm1       crmd: (   control.c:915   )   debug: config_query_callback: 	Call 33 : Parsing CIB options
Nov 13 13:45:33 [15883] vm1       crmd: (   control.c:939   )   debug: config_query_callback: 	Shutdown escalation occurs after: 1200000ms
Nov 13 13:45:33 [15883] vm1       crmd: (   control.c:946   )   debug: config_query_callback: 	Checking for expired actions every 900000ms
Nov 13 13:45:33 [15883] vm1       crmd: (  messages.c:729   )   debug: handle_request: 	Raising I_JOIN_OFFER: join-3
Nov 13 13:45:33 [15883] vm1       crmd: (       fsa.c:193   )   debug: s_crmd_fsa: 	Processing I_JOIN_OFFER: [ state=S_INTEGRATION cause=C_HA_MESSAGE origin=route_message ]
Nov 13 13:45:33 [15883] vm1       crmd: (     utils.c:981   )    info: update_dc: 	Set DC to vm1 (3.0.8)
Nov 13 13:45:33 [15883] vm1       crmd: (join_client.:135   )   debug: do_cl_join_offer_respond: 	do_cl_join_offer_respond added action A_DC_TIMER_STOP to the FSA
Nov 13 13:45:33 [15883] vm1       crmd: (  election.c:352   )   debug: election_count_vote: 	Created voted hash
Nov 13 13:45:33 [15883] vm1       crmd: (  election.c:485   )   debug: election_count_vote: 	Election 4 (current: 4, owner: 3232261517): Processed vote from vm1 (Recorded)
Nov 13 13:45:33 [15883] vm1       crmd: (  election.c:91    )   debug: do_election_check: 	Ignore election check: we not in an election
Nov 13 13:45:33 [15883] vm1       crmd: (  election.c:485   )   debug: election_count_vote: 	Election 4 (current: 4, owner: 3232261517): Processed no-vote from vm3 (Recorded)
Nov 13 13:45:33 [15883] vm1       crmd: (  election.c:91    )   debug: do_election_check: 	Ignore election check: we not in an election
Nov 13 13:45:33 [15883] vm1       crmd: (  messages.c:729   )   debug: handle_request: 	Raising I_JOIN_OFFER: join-4
Nov 13 13:45:33 [15883] vm1       crmd: (       fsa.c:193   )   debug: s_crmd_fsa: 	Processing I_JOIN_OFFER: [ state=S_INTEGRATION cause=C_HA_MESSAGE origin=route_message ]
Nov 13 13:45:33 [15883] vm1       crmd: (join_client.:135   )   debug: do_cl_join_offer_respond: 	do_cl_join_offer_respond added action A_DC_TIMER_STOP to the FSA
Nov 13 13:45:33 [15878] vm1        cib: (        io.c:596   )   debug: activateCibXml: 	Triggering CIB write for cib_modify op
Nov 13 13:45:33 [15878] vm1        cib: ( cib_utils.c:174   )  notice: log_cib_diff: 	cib:diff: Local-only Change: 0.8.1
Nov 13 13:45:33 [15878] vm1        cib: (       xml.c:1496  )  notice: cib:diff: 	-- <cib admin_epoch="0" epoch="7" num_updates="1"/>
Nov 13 13:45:33 [15878] vm1        cib: (       xml.c:1507  )  notice: cib:diff: 	++         <nvpair id="cib-bootstrap-options-cluster-infrastructure" name="cluster-infrastructure" value="corosync"/>
Nov 13 13:45:33 [15878] vm1        cib: ( callbacks.c:761   )    info: cib_process_request: 	Completed cib_modify operation for section crm_config: OK (rc=0, origin=local/crmd/41, version=0.8.1)
Nov 13 13:45:33 [15878] vm1        cib: ( callbacks.c:761   )    info: cib_process_request: 	Completed cib_query operation for section crm_config: OK (rc=0, origin=local/crmd/42, version=0.8.1)
Nov 13 13:45:33 [15883] vm1       crmd: (   control.c:915   )   debug: config_query_callback: 	Call 42 : Parsing CIB options
Nov 13 13:45:33 [15883] vm1       crmd: (   control.c:939   )   debug: config_query_callback: 	Shutdown escalation occurs after: 1200000ms
Nov 13 13:45:33 [15883] vm1       crmd: (   control.c:946   )   debug: config_query_callback: 	Checking for expired actions every 900000ms
Nov 13 13:45:33 [15878] vm1        cib: ( callbacks.c:761   )    info: cib_process_request: 	Completed cib_query operation for section crm_config: OK (rc=0, origin=local/crmd/43, version=0.8.1)
Nov 13 13:45:33 [15883] vm1       crmd: (   control.c:915   )   debug: config_query_callback: 	Call 43 : Parsing CIB options
Nov 13 13:45:33 [15883] vm1       crmd: (   control.c:939   )   debug: config_query_callback: 	Shutdown escalation occurs after: 1200000ms
Nov 13 13:45:33 [15883] vm1       crmd: (   control.c:946   )   debug: config_query_callback: 	Checking for expired actions every 900000ms
Nov 13 13:45:33 [15878] vm1        cib: ( callbacks.c:761   )    info: cib_process_request: 	Completed cib_query operation for section 'all': OK (rc=0, origin=local/crmd/44, version=0.8.1)
Nov 13 13:45:33 [15878] vm1        cib: ( callbacks.c:761   )    info: cib_process_request: 	Completed cib_query operation for section 'all': OK (rc=0, origin=local/crmd/45, version=0.8.1)
Nov 13 13:45:33 [15883] vm1       crmd: (join_client.:157   )   debug: join_query_callback: 	Respond to join offer join-4
Nov 13 13:45:33 [15883] vm1       crmd: (join_client.:158   )   debug: join_query_callback: 	Acknowledging vm1 as our DC
Nov 13 13:45:33 [15878] vm1        cib: ( callbacks.c:761   )    info: cib_process_request: 	Completed cib_query operation for section crm_config: OK (rc=0, origin=local/crmd/46, version=0.8.1)
Nov 13 13:45:33 [15883] vm1       crmd: (   control.c:915   )   debug: config_query_callback: 	Call 46 : Parsing CIB options
Nov 13 13:45:33 [15883] vm1       crmd: (   control.c:939   )   debug: config_query_callback: 	Shutdown escalation occurs after: 1200000ms
Nov 13 13:45:33 [15883] vm1       crmd: (   control.c:946   )   debug: config_query_callback: 	Checking for expired actions every 900000ms
Nov 13 13:45:33 [15883] vm1       crmd: (  election.c:485   )   debug: election_count_vote: 	Election 4 (current: 4, owner: 3232261517): Processed no-vote from vm2 (Recorded)
Nov 13 13:45:33 [15883] vm1       crmd: (  election.c:91    )   debug: do_election_check: 	Ignore election check: we not in an election
Nov 13 13:45:33 [15883] vm1       crmd: (       fsa.c:193   )   debug: s_crmd_fsa: 	Processing I_JOIN_REQUEST: [ state=S_INTEGRATION cause=C_HA_MESSAGE origin=route_message ]
Nov 13 13:45:33 [15883] vm1       crmd: (   join_dc.c:282   )   debug: do_dc_join_filter_offer: 	Processing req from vm1
Nov 13 13:45:33 [15883] vm1       crmd: (   join_dc.c:341   )   debug: do_dc_join_filter_offer: 	join-4: Welcoming node vm1 (ref join_request-crmd-1384317933-28)
Nov 13 13:45:33 [15883] vm1       crmd: (   join_dc.c:66    )    info: crm_update_peer_join: 	do_dc_join_filter_offer: Node vm1[3232261517] - join-4 phase 1 -> 2
Nov 13 13:45:33 [15883] vm1       crmd: (   join_dc.c:348   )   debug: do_dc_join_filter_offer: 	1 nodes have been integrated into join-4
Nov 13 13:45:33 [15883] vm1       crmd: (   join_dc.c:579   )   debug: check_join_state: 	Invoked by do_dc_join_filter_offer in state: S_INTEGRATION
Nov 13 13:45:33 [15883] vm1       crmd: (   join_dc.c:354   )   debug: do_dc_join_filter_offer: 	join-4: Still waiting on 2 outstanding offers
Nov 13 13:45:33 [15883] vm1       crmd: (       fsa.c:193   )   debug: s_crmd_fsa: 	Processing I_JOIN_REQUEST: [ state=S_INTEGRATION cause=C_HA_MESSAGE origin=route_message ]
Nov 13 13:45:33 [15883] vm1       crmd: (   join_dc.c:282   )   debug: do_dc_join_filter_offer: 	Processing req from vm3
Nov 13 13:45:33 [15883] vm1       crmd: (   join_dc.c:341   )   debug: do_dc_join_filter_offer: 	join-4: Welcoming node vm3 (ref join_request-crmd-1384317933-10)
Nov 13 13:45:33 [15883] vm1       crmd: (   join_dc.c:66    )    info: crm_update_peer_join: 	do_dc_join_filter_offer: Node vm3[3232261519] - join-4 phase 1 -> 2
Nov 13 13:45:33 [15883] vm1       crmd: (   join_dc.c:348   )   debug: do_dc_join_filter_offer: 	2 nodes have been integrated into join-4
Nov 13 13:45:33 [15883] vm1       crmd: (   join_dc.c:579   )   debug: check_join_state: 	Invoked by do_dc_join_filter_offer in state: S_INTEGRATION
Nov 13 13:45:33 [15883] vm1       crmd: (   join_dc.c:354   )   debug: do_dc_join_filter_offer: 	join-4: Still waiting on 1 outstanding offers
Nov 13 13:45:33 [15883] vm1       crmd: (       fsa.c:193   )   debug: s_crmd_fsa: 	Processing I_JOIN_REQUEST: [ state=S_INTEGRATION cause=C_HA_MESSAGE origin=route_message ]
Nov 13 13:45:33 [15883] vm1       crmd: (   join_dc.c:282   )   debug: do_dc_join_filter_offer: 	Processing req from vm2
Nov 13 13:45:33 [15883] vm1       crmd: (   join_dc.c:341   )   debug: do_dc_join_filter_offer: 	join-4: Welcoming node vm2 (ref join_request-crmd-1384317933-9)
Nov 13 13:45:33 [15883] vm1       crmd: (   join_dc.c:66    )    info: crm_update_peer_join: 	do_dc_join_filter_offer: Node vm2[3232261518] - join-4 phase 1 -> 2
Nov 13 13:45:33 [15883] vm1       crmd: (   join_dc.c:348   )   debug: do_dc_join_filter_offer: 	3 nodes have been integrated into join-4
Nov 13 13:45:33 [15883] vm1       crmd: (   join_dc.c:579   )   debug: check_join_state: 	Invoked by do_dc_join_filter_offer in state: S_INTEGRATION
Nov 13 13:45:33 [15883] vm1       crmd: (   join_dc.c:593   )   debug: check_join_state: 	join-4: Integration of 3 peers complete: do_dc_join_filter_offer
Nov 13 13:45:33 [15883] vm1       crmd: (       fsa.c:193   )   debug: s_crmd_fsa: 	Processing I_INTEGRATED: [ state=S_INTEGRATION cause=C_FSA_INTERNAL origin=check_join_state ]
Nov 13 13:45:33 [15883] vm1       crmd: (       fsa.c:502   )    info: do_state_transition: 	State transition S_INTEGRATION -> S_FINALIZE_JOIN [ input=I_INTEGRATED cause=C_FSA_INTERNAL origin=check_join_state ]
Nov 13 13:45:33 [15883] vm1       crmd: (       fsa.c:590   )   debug: do_state_transition: 	All 3 cluster nodes responded to the join offer.
Nov 13 13:45:33 [15883] vm1       crmd: (     utils.c:192   )   debug: crm_timer_start: 	Started Finalization Timer (I_ELECTION:1800000ms), src=56
Nov 13 13:45:33 [15883] vm1       crmd: (   join_dc.c:372   )   debug: do_dc_join_finalize: 	Finializing join-4 for 3 clients
Nov 13 13:45:33 [15883] vm1       crmd: (   join_dc.c:682   )    info: crmd_join_phase_log: 	join-4: vm3=integrated
Nov 13 13:45:33 [15883] vm1       crmd: (   join_dc.c:682   )    info: crmd_join_phase_log: 	join-4: vm1=integrated
Nov 13 13:45:33 [15883] vm1       crmd: (   join_dc.c:682   )    info: crmd_join_phase_log: 	join-4: vm2=integrated
Nov 13 13:45:33 [15883] vm1       crmd: (   join_dc.c:410   )    info: do_dc_join_finalize: 	join-4: Syncing our CIB to the rest of the cluster
Nov 13 13:45:33 [15883] vm1       crmd: (   join_dc.c:411   )   debug: do_dc_join_finalize: 	Requested version   <generation_tuple epoch="8" num_updates="1" admin_epoch="0" validate-with="pacemaker-1.2" crm_feature_set="3.0.8" cib-last-written="Wed Nov 13 13:45:33 2013" update-origin="vm1" update-client="crmd" have-quorum="1" dc-uuid="3232261517"/>
Nov 13 13:45:33 [15878] vm1        cib: (  messages.c:435   )   debug: sync_our_cib: 	Syncing CIB to all peers
Nov 13 13:45:33 [15878] vm1        cib: ( callbacks.c:761   )    info: cib_process_request: 	Completed cib_sync operation for section 'all': OK (rc=0, origin=local/crmd/47, version=0.8.1)
Nov 13 13:45:33 [15883] vm1       crmd: (   join_dc.c:579   )   debug: check_join_state: 	Invoked by finalize_sync_callback in state: S_FINALIZE_JOIN
Nov 13 13:45:33 [15883] vm1       crmd: (   join_dc.c:610   )   debug: check_join_state: 	join-4: Still waiting on 3 integrated nodes
Nov 13 13:45:33 [15883] vm1       crmd: (   join_dc.c:682   )   debug: crmd_join_phase_log: 	join-4: vm3=integrated
Nov 13 13:45:33 [15883] vm1       crmd: (   join_dc.c:682   )   debug: crmd_join_phase_log: 	join-4: vm1=integrated
Nov 13 13:45:33 [15883] vm1       crmd: (   join_dc.c:682   )   debug: crmd_join_phase_log: 	join-4: vm2=integrated
Nov 13 13:45:33 [15883] vm1       crmd: (   join_dc.c:438   )   debug: finalize_sync_callback: 	Notifying 3 clients of join-4 results
Nov 13 13:45:33 [15883] vm1       crmd: (   join_dc.c:562   )   debug: finalize_join_for: 	join-4: ACK'ing join request from vm3
Nov 13 13:45:33 [15883] vm1       crmd: (   join_dc.c:66    )    info: crm_update_peer_join: 	finalize_join_for: Node vm3[3232261519] - join-4 phase 2 -> 3
Nov 13 13:45:33 [15883] vm1       crmd: (   join_dc.c:562   )   debug: finalize_join_for: 	join-4: ACK'ing join request from vm1
Nov 13 13:45:33 [15883] vm1       crmd: (   join_dc.c:66    )    info: crm_update_peer_join: 	finalize_join_for: Node vm1[3232261517] - join-4 phase 2 -> 3
Nov 13 13:45:33 [15883] vm1       crmd: (   join_dc.c:562   )   debug: finalize_join_for: 	join-4: ACK'ing join request from vm2
Nov 13 13:45:33 [15883] vm1       crmd: (   join_dc.c:66    )    info: crm_update_peer_join: 	finalize_join_for: Node vm2[3232261518] - join-4 phase 2 -> 3
Nov 13 13:45:33 [15883] vm1       crmd: (  messages.c:733   )   debug: handle_request: 	Raising I_JOIN_RESULT: join-4
Nov 13 13:45:33 [15883] vm1       crmd: (       fsa.c:193   )   debug: s_crmd_fsa: 	Processing I_JOIN_RESULT: [ state=S_FINALIZE_JOIN cause=C_HA_MESSAGE origin=route_message ]
Nov 13 13:45:33 [15883] vm1       crmd: (join_client.:231   )   debug: do_cl_join_finalize_respond: 	Confirming join join-4: join_ack_nack
Nov 13 13:45:33 [15883] vm1       crmd: (join_client.:240   )   debug: do_cl_join_finalize_respond: 	join-4: Join complete.  Sending local LRM status to vm1
Nov 13 13:45:33 [15883] vm1       crmd: (   join_dc.c:482   )   debug: do_dc_join_ack: 	Ignoring op=join_ack_nack message from vm1
Nov 13 13:45:33 [15883] vm1       crmd: (       fsa.c:193   )   debug: s_crmd_fsa: 	Processing I_JOIN_RESULT: [ state=S_FINALIZE_JOIN cause=C_HA_MESSAGE origin=route_message ]
Nov 13 13:45:33 [15883] vm1       crmd: (   join_dc.c:66    )    info: crm_update_peer_join: 	do_dc_join_ack: Node vm3[3232261519] - join-4 phase 3 -> 4
Nov 13 13:45:33 [15883] vm1       crmd: (   join_dc.c:504   )    info: do_dc_join_ack: 	join-4: Updating node state to member for vm3
Nov 13 13:45:33 [15883] vm1       crmd: (     utils.c:1011  )    info: erase_status_tag: 	Deleting xpath: //node_state[@uname='vm3']/lrm
Nov 13 13:45:33 [15883] vm1       crmd: (   join_dc.c:514   )   debug: do_dc_join_ack: 	join-4: Registered callback for LRM update 52
Nov 13 13:45:33 [15883] vm1       crmd: (       fsa.c:193   )   debug: s_crmd_fsa: 	Processing I_JOIN_RESULT: [ state=S_FINALIZE_JOIN cause=C_HA_MESSAGE origin=route_message ]
Nov 13 13:45:33 [15883] vm1       crmd: (   join_dc.c:66    )    info: crm_update_peer_join: 	do_dc_join_ack: Node vm1[3232261517] - join-4 phase 3 -> 4
Nov 13 13:45:33 [15883] vm1       crmd: (   join_dc.c:504   )    info: do_dc_join_ack: 	join-4: Updating node state to member for vm1
Nov 13 13:45:33 [15883] vm1       crmd: (     utils.c:1011  )    info: erase_status_tag: 	Deleting xpath: //node_state[@uname='vm1']/lrm
Nov 13 13:45:33 [15883] vm1       crmd: (   join_dc.c:514   )   debug: do_dc_join_ack: 	join-4: Registered callback for LRM update 54
Nov 13 13:45:33 [15878] vm1        cib: ( callbacks.c:761   )    info: cib_process_request: 	Completed cib_modify operation for section nodes: OK (rc=0, origin=local/crmd/48, version=0.8.1)
Nov 13 13:45:33 [15883] vm1       crmd: (       fsa.c:193   )   debug: s_crmd_fsa: 	Processing I_JOIN_RESULT: [ state=S_FINALIZE_JOIN cause=C_HA_MESSAGE origin=route_message ]
Nov 13 13:45:33 [15883] vm1       crmd: (   join_dc.c:66    )    info: crm_update_peer_join: 	do_dc_join_ack: Node vm2[3232261518] - join-4 phase 3 -> 4
Nov 13 13:45:33 [15883] vm1       crmd: (   join_dc.c:504   )    info: do_dc_join_ack: 	join-4: Updating node state to member for vm2
Nov 13 13:45:33 [15883] vm1       crmd: (     utils.c:1011  )    info: erase_status_tag: 	Deleting xpath: //node_state[@uname='vm2']/lrm
Nov 13 13:45:33 [15883] vm1       crmd: (   join_dc.c:514   )   debug: do_dc_join_ack: 	join-4: Registered callback for LRM update 56
Nov 13 13:45:33 [15878] vm1        cib: ( callbacks.c:761   )    info: cib_process_request: 	Completed cib_modify operation for section nodes: OK (rc=0, origin=local/crmd/49, version=0.8.1)
Nov 13 13:45:33 [15878] vm1        cib: ( callbacks.c:761   )    info: cib_process_request: 	Completed cib_modify operation for section nodes: OK (rc=0, origin=local/crmd/50, version=0.8.1)
Nov 13 13:45:33 [15878] vm1        cib: (   cib_ops.c:923   )   debug: cib_process_xpath: 	Processing cib_delete op for //node_state[@uname='vm3']/lrm (/cib/status/node_state[1]/lrm)
Nov 13 13:45:33 [15878] vm1        cib: ( callbacks.c:761   )    info: cib_process_request: 	Completed cib_delete operation for section //node_state[@uname='vm3']/lrm: OK (rc=0, origin=local/crmd/51, version=0.8.2)
Nov 13 13:45:33 [15878] vm1        cib: (        io.c:738   )    info: write_cib_contents: 	Archived previous version as /var/lib/pacemaker/cib/cib-2.raw
Nov 13 13:45:33 [15878] vm1        cib: (        io.c:748   )   debug: write_cib_contents: 	Writing CIB to disk
Nov 13 13:45:33 [15878] vm1        cib: ( callbacks.c:761   )    info: cib_process_request: 	Completed cib_modify operation for section status: OK (rc=0, origin=local/crmd/52, version=0.8.3)
Nov 13 13:45:33 [15878] vm1        cib: (   cib_ops.c:923   )   debug: cib_process_xpath: 	Processing cib_delete op for //node_state[@uname='vm1']/lrm (/cib/status/node_state[2]/lrm)
Nov 13 13:45:33 [15878] vm1        cib: ( callbacks.c:761   )    info: cib_process_request: 	Completed cib_delete operation for section //node_state[@uname='vm1']/lrm: OK (rc=0, origin=local/crmd/53, version=0.8.4)
Nov 13 13:45:33 [15878] vm1        cib: ( callbacks.c:761   )    info: cib_process_request: 	Completed cib_modify operation for section status: OK (rc=0, origin=local/crmd/54, version=0.8.5)
Nov 13 13:45:33 [15878] vm1        cib: (   cib_ops.c:923   )   debug: cib_process_xpath: 	Processing cib_delete op for //node_state[@uname='vm2']/lrm (/cib/status/node_state[3]/lrm)
Nov 13 13:45:33 [15878] vm1        cib: (        io.c:773   )    info: write_cib_contents: 	Wrote version 0.8.0 of the CIB to disk (digest: 9db35554f5ac4e48336f1bae33d89abc)
Nov 13 13:45:33 [15878] vm1        cib: ( callbacks.c:761   )    info: cib_process_request: 	Completed cib_delete operation for section //node_state[@uname='vm2']/lrm: OK (rc=0, origin=local/crmd/55, version=0.8.6)
Nov 13 13:45:33 [15883] vm1       crmd: (     utils.c:998   )   debug: erase_xpath_callback: 	Deletion of "//node_state[@uname='vm3']/lrm": OK (rc=0)
Nov 13 13:45:33 [15878] vm1        cib: (        io.c:781   )   debug: write_cib_contents: 	Wrote digest 9db35554f5ac4e48336f1bae33d89abc to disk
Nov 13 13:45:33 [15878] vm1        cib: (        io.c:259   )    info: retrieveCib: 	Reading cluster configuration from: /var/lib/pacemaker/cib/cib.aXeGdJ (digest: /var/lib/pacemaker/cib/cib.0mEUho)
Nov 13 13:45:33 [15883] vm1       crmd: (   join_dc.c:456   )   debug: join_update_complete_callback: 	Join update 52 complete
Nov 13 13:45:33 [15883] vm1       crmd: (   join_dc.c:579   )   debug: check_join_state: 	Invoked by join_update_complete_callback in state: S_FINALIZE_JOIN
Nov 13 13:45:33 [15883] vm1       crmd: (   join_dc.c:619   )   debug: check_join_state: 	join-4 complete: join_update_complete_callback
Nov 13 13:45:33 [15883] vm1       crmd: (       fsa.c:193   )   debug: s_crmd_fsa: 	Processing I_FINALIZED: [ state=S_FINALIZE_JOIN cause=C_FSA_INTERNAL origin=check_join_state ]
Nov 13 13:45:33 [15883] vm1       crmd: (       fsa.c:502   )    info: do_state_transition: 	State transition S_FINALIZE_JOIN -> S_POLICY_ENGINE [ input=I_FINALIZED cause=C_FSA_INTERNAL origin=check_join_state ]
Nov 13 13:45:33 [15883] vm1       crmd: (       fsa.c:610   )   debug: do_state_transition: 	All 3 cluster nodes are eligible to run resources.
Nov 13 13:45:33 [15883] vm1       crmd: (   join_dc.c:634   )   debug: do_dc_join_final: 	Ensuring DC, quorum and node attributes are up-to-date
Nov 13 13:45:33 [15883] vm1       crmd: (     utils.c:2001  )   debug: attrd_update_delegate: 	Sent update: (null)=(null) for localhost
Nov 13 13:45:33 [15883] vm1       crmd: (membership.c:315   )   debug: crm_update_quorum: 	Updating quorum status to true (call=59)
Nov 13 13:45:33 [15883] vm1       crmd: (   tengine.c:150   )   debug: do_te_invoke: 	Cancelling the transition: inactive
Nov 13 13:45:33 [15883] vm1       crmd: (  te_utils.c:425   )    info: abort_transition_graph: 	do_te_invoke:151 - Triggered transition abort (complete=1) : Peer Cancelled
Nov 13 13:45:33 [15883] vm1       crmd: (     utils.c:192   )   debug: crm_timer_start: 	Started New Transition Timer (I_PE_CALC:2000ms), src=67
Nov 13 13:45:33 [15881] vm1      attrd: (      main.c:221   )   trace: attrd_ipc_dispatch: 	Processing msg from 15883 (0x1ff0000)
Nov 13 13:45:33 [15881] vm1      attrd: (      main.c:222   )   trace: attrd_ipc_dispatch: 	attrd_ipc_dispatch   <attrd_update_delegate t="attrd" src="crmd" task="refresh" attr_section="status" attr_is_remote="0"/>
Nov 13 13:45:33 [15883] vm1       crmd: (     utils.c:998   )   debug: erase_xpath_callback: 	Deletion of "//node_state[@uname='vm1']/lrm": OK (rc=0)
Nov 13 13:45:33 [15883] vm1       crmd: (   join_dc.c:456   )   debug: join_update_complete_callback: 	Join update 54 complete
Nov 13 13:45:33 [15883] vm1       crmd: (   join_dc.c:579   )   debug: check_join_state: 	Invoked by join_update_complete_callback in state: S_POLICY_ENGINE
Nov 13 13:45:33 [15878] vm1        cib: (        io.c:786   )   debug: write_cib_contents: 	Activating /var/lib/pacemaker/cib/cib.aXeGdJ
Nov 13 13:45:33 [15878] vm1        cib: (       ipc.c:327   )    info: crm_client_new: 	Connecting 0x156e4b0 for uid=0 gid=0 pid=15932 id=e6fecbad-5798-4b7d-bb03-c2062e3f6462
Nov 13 13:45:33 [15878] vm1        cib: ( ipc_setup.c:484   )   debug: handle_new_connection: 	IPC credentials authenticated (15878-15932-14)
Nov 13 13:45:33 [15878] vm1        cib: (   ipc_shm.c:295   )   debug: qb_ipcs_shm_connect: 	connecting to client [15932]
Nov 13 13:45:33 [15878] vm1        cib: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:524301; real_size:528384; rb->word_size:132096
Nov 13 13:45:33 [15878] vm1        cib: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:524301; real_size:528384; rb->word_size:132096
Nov 13 13:45:33 [15878] vm1        cib: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:524301; real_size:528384; rb->word_size:132096
Nov 13 13:45:33 [15883] vm1       crmd: (te_callbacks:122   )   debug: te_update_diff: 	Processing diff (cib_modify): 0.8.6 -> 0.8.7 (S_POLICY_ENGINE)
Nov 13 13:45:33 [15878] vm1        cib: ( callbacks.c:761   )    info: cib_process_request: 	Completed cib_modify operation for section status: OK (rc=0, origin=local/crmd/56, version=0.8.7)
Nov 13 13:45:33 [15878] vm1        cib: ( callbacks.c:761   )    info: cib_process_request: 	Completed cib_modify operation for section nodes: OK (rc=0, origin=local/crmd/57, version=0.8.7)
Nov 13 13:45:33 [15878] vm1        cib: ( callbacks.c:761   )    info: cib_process_request: 	Completed cib_modify operation for section status: OK (rc=0, origin=local/crmd/58, version=0.8.7)
Nov 13 13:45:33 [15878] vm1        cib: ( callbacks.c:761   )    info: cib_process_request: 	Completed cib_modify operation for section cib: OK (rc=0, origin=local/crmd/59, version=0.8.7)
Nov 13 13:45:33 [15883] vm1       crmd: (     utils.c:998   )   debug: erase_xpath_callback: 	Deletion of "//node_state[@uname='vm2']/lrm": OK (rc=0)
Nov 13 13:45:33 [15883] vm1       crmd: (   join_dc.c:456   )   debug: join_update_complete_callback: 	Join update 56 complete
Nov 13 13:45:33 [15883] vm1       crmd: (   join_dc.c:579   )   debug: check_join_state: 	Invoked by join_update_complete_callback in state: S_POLICY_ENGINE
Nov 13 13:45:33 [15878] vm1        cib: ( callbacks.c:761   )    info: cib_process_request: 	Completed cib_query operation for section 'all': OK (rc=0, origin=local/cibadmin/2, version=0.8.7)
Nov 13 13:45:33 [15878] vm1        cib: (      ipcs.c:757   )   debug: qb_ipcs_dispatch_connection_request: 	HUP conn (15878-15932-14)
Nov 13 13:45:33 [15878] vm1        cib: (      ipcs.c:605   )   debug: qb_ipcs_disconnect: 	qb_ipcs_disconnect(15878-15932-14) state:2
Nov 13 13:45:33 [15878] vm1        cib: (       ipc.c:361   )    info: crm_client_destroy: 	Destroying 0 events
Nov 13 13:45:33 [15878] vm1        cib: (ringbuffer.c:299   )   debug: qb_rb_close: 	Free'ing ringbuffer: /dev/shm/qb-cib_rw-response-15878-15932-14-header
Nov 13 13:45:33 [15878] vm1        cib: (ringbuffer.c:299   )   debug: qb_rb_close: 	Free'ing ringbuffer: /dev/shm/qb-cib_rw-event-15878-15932-14-header
Nov 13 13:45:33 [15878] vm1        cib: (ringbuffer.c:299   )   debug: qb_rb_close: 	Free'ing ringbuffer: /dev/shm/qb-cib_rw-request-15878-15932-14-header
Nov 13 13:45:34 [15879] vm1 stonith-ng: ( st_client.c:867   )   debug: internal_stonith_action_execute: 	result = 0
Nov 13 13:45:34 [15879] vm1 stonith-ng: (  commands.c:781   )   trace: device_has_duplicate: 	No match for F1
Nov 13 13:45:34 [15879] vm1 stonith-ng: (  commands.c:843   )  notice: stonith_device_register: 	Added 'F1' to the device list (1 active devices)
Nov 13 13:45:34 [15879] vm1 stonith-ng: ( cib_utils.c:174   )   debug: log_cib_diff: 	Config update: Local-only Change: 0.7.1
Nov 13 13:45:34 [15879] vm1 stonith-ng: (       xml.c:1496  )   debug: Config update: 	-- <cib admin_epoch="0" epoch="6" num_updates="1"/>
Nov 13 13:45:34 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+  <cib epoch="7" num_updates="1" admin_epoch="0" validate-with="pacemaker-1.2" crm_feature_set="3.0.8" cib-last-written="Wed Nov 13 13:45:33 2013" update-origin="vm1" update-client="crmd" have-quorum="1" dc-uuid="3232261517">
Nov 13 13:45:34 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+    <configuration>
Nov 13 13:45:34 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+      <crm_config>
Nov 13 13:45:34 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+        <cluster_property_set id="cib-bootstrap-options">
Nov 13 13:45:34 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++         <nvpair id="cib-bootstrap-options-dc-version" name="dc-version" value="1.1.10-2383f6c"/>
Nov 13 13:45:34 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+        </cluster_property_set>
Nov 13 13:45:34 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+      </crm_config>
Nov 13 13:45:34 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+    </configuration>
Nov 13 13:45:34 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+  </cib>
Nov 13 13:45:34 [15879] vm1 stonith-ng: ( cib_utils.c:174   )   debug: log_cib_diff: 	Config update: Local-only Change: 0.8.1
Nov 13 13:45:34 [15879] vm1 stonith-ng: (       xml.c:1496  )   debug: Config update: 	-- <cib admin_epoch="0" epoch="7" num_updates="1"/>
Nov 13 13:45:34 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+  <cib epoch="8" num_updates="1" admin_epoch="0" validate-with="pacemaker-1.2" crm_feature_set="3.0.8" cib-last-written="Wed Nov 13 13:45:33 2013" update-origin="vm1" update-client="crmd" have-quorum="1" dc-uuid="3232261517">
Nov 13 13:45:34 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+    <configuration>
Nov 13 13:45:34 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+      <crm_config>
Nov 13 13:45:34 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+        <cluster_property_set id="cib-bootstrap-options">
Nov 13 13:45:34 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++         <nvpair id="cib-bootstrap-options-cluster-infrastructure" name="cluster-infrastructure" value="corosync"/>
Nov 13 13:45:34 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+        </cluster_property_set>
Nov 13 13:45:34 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+      </crm_config>
Nov 13 13:45:34 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+    </configuration>
Nov 13 13:45:34 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+  </cib>
Nov 13 13:45:34 [15879] vm1 stonith-ng: ( cib_utils.c:167   )   debug: Config update: 	Diff: --- 0.8.1
Nov 13 13:45:34 [15879] vm1 stonith-ng: ( cib_utils.c:169   )   debug: Config update: 	Diff: +++ 0.8.2 761ef3207c9a00ca3a190046e551df6b
Nov 13 13:45:34 [15879] vm1 stonith-ng: (       xml.c:1496  )   debug: Config update: 	-  <cib num_updates="1">
Nov 13 13:45:34 [15879] vm1 stonith-ng: (       xml.c:1496  )   debug: Config update: 	-    <status>
Nov 13 13:45:34 [15879] vm1 stonith-ng: (       xml.c:1496  )   debug: Config update: 	-      <node_state id="3232261519">
Nov 13 13:45:34 [15879] vm1 stonith-ng: (       xml.c:1496  )   debug: Config update: 	--       <lrm id="3232261519">
Nov 13 13:45:34 [15879] vm1 stonith-ng: (       xml.c:1496  )   debug: Config update: 	--         <lrm_resources/>
Nov 13 13:45:34 [15879] vm1 stonith-ng: (       xml.c:1496  )   debug: Config update: 	--       </lrm>
Nov 13 13:45:34 [15879] vm1 stonith-ng: (       xml.c:1496  )   debug: Config update: 	-      </node_state>
Nov 13 13:45:34 [15879] vm1 stonith-ng: (       xml.c:1496  )   debug: Config update: 	-    </status>
Nov 13 13:45:34 [15879] vm1 stonith-ng: (       xml.c:1496  )   debug: Config update: 	-  </cib>
Nov 13 13:45:34 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++ <cib epoch="8" num_updates="2" admin_epoch="0" validate-with="pacemaker-1.2" crm_feature_set="3.0.8" cib-last-written="Wed Nov 13 13:45:33 2013" update-origin="vm1" update-client="crmd" have-quorum="1" dc-uuid="3232261517"/>
Nov 13 13:45:34 [15879] vm1 stonith-ng: ( cib_utils.c:167   )   debug: Config update: 	Diff: --- 0.8.2
Nov 13 13:45:34 [15879] vm1 stonith-ng: ( cib_utils.c:169   )   debug: Config update: 	Diff: +++ 0.8.3 70b3476c40002d2b8afe79070f45ed65
Nov 13 13:45:34 [15879] vm1 stonith-ng: (       xml.c:1496  )   debug: Config update: 	-- <cib num_updates="2"/>
Nov 13 13:45:34 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+  <cib epoch="8" num_updates="3" admin_epoch="0" validate-with="pacemaker-1.2" crm_feature_set="3.0.8" cib-last-written="Wed Nov 13 13:45:33 2013" update-origin="vm1" update-client="crmd" have-quorum="1" dc-uuid="3232261517">
Nov 13 13:45:34 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+    <status>
Nov 13 13:45:34 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+      <node_state id="3232261519" uname="vm3" in_ccm="true" crmd="online" crm-debug-origin="do_lrm_query_internal" join="member" expected="member">
Nov 13 13:45:34 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++       <lrm id="3232261519">
Nov 13 13:45:34 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++         <lrm_resources/>
Nov 13 13:45:34 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++       </lrm>
Nov 13 13:45:34 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+      </node_state>
Nov 13 13:45:34 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+    </status>
Nov 13 13:45:34 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+  </cib>
Nov 13 13:45:34 [15879] vm1 stonith-ng: ( cib_utils.c:167   )   debug: Config update: 	Diff: --- 0.8.3
Nov 13 13:45:34 [15879] vm1 stonith-ng: ( cib_utils.c:169   )   debug: Config update: 	Diff: +++ 0.8.4 6223769ad880e2cfd731d4ae34ea4603
Nov 13 13:45:34 [15879] vm1 stonith-ng: (       xml.c:1496  )   debug: Config update: 	-  <cib num_updates="3">
Nov 13 13:45:34 [15879] vm1 stonith-ng: (       xml.c:1496  )   debug: Config update: 	-    <status>
Nov 13 13:45:34 [15879] vm1 stonith-ng: (       xml.c:1496  )   debug: Config update: 	-      <node_state id="3232261517">
Nov 13 13:45:34 [15879] vm1 stonith-ng: (       xml.c:1496  )   debug: Config update: 	--       <lrm id="3232261517">
Nov 13 13:45:34 [15879] vm1 stonith-ng: (       xml.c:1496  )   debug: Config update: 	--         <lrm_resources/>
Nov 13 13:45:34 [15879] vm1 stonith-ng: (       xml.c:1496  )   debug: Config update: 	--       </lrm>
Nov 13 13:45:34 [15879] vm1 stonith-ng: (       xml.c:1496  )   debug: Config update: 	-      </node_state>
Nov 13 13:45:34 [15879] vm1 stonith-ng: (       xml.c:1496  )   debug: Config update: 	-    </status>
Nov 13 13:45:34 [15879] vm1 stonith-ng: (       xml.c:1496  )   debug: Config update: 	-  </cib>
Nov 13 13:45:34 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++ <cib epoch="8" num_updates="4" admin_epoch="0" validate-with="pacemaker-1.2" crm_feature_set="3.0.8" cib-last-written="Wed Nov 13 13:45:33 2013" update-origin="vm1" update-client="crmd" have-quorum="1" dc-uuid="3232261517"/>
Nov 13 13:45:34 [15879] vm1 stonith-ng: ( cib_utils.c:167   )   debug: Config update: 	Diff: --- 0.8.4
Nov 13 13:45:34 [15879] vm1 stonith-ng: ( cib_utils.c:169   )   debug: Config update: 	Diff: +++ 0.8.5 ec848f93df58e6ea8292c890ceeba4d9
Nov 13 13:45:34 [15879] vm1 stonith-ng: (       xml.c:1496  )   debug: Config update: 	-- <cib num_updates="4"/>
Nov 13 13:45:34 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+  <cib epoch="8" num_updates="5" admin_epoch="0" validate-with="pacemaker-1.2" crm_feature_set="3.0.8" cib-last-written="Wed Nov 13 13:45:33 2013" update-origin="vm1" update-client="crmd" have-quorum="1" dc-uuid="3232261517">
Nov 13 13:45:34 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+    <status>
Nov 13 13:45:34 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+      <node_state id="3232261517" uname="vm1" in_ccm="true" crmd="online" crm-debug-origin="do_lrm_query_internal" join="member" expected="member">
Nov 13 13:45:34 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++       <lrm id="3232261517">
Nov 13 13:45:34 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++         <lrm_resources/>
Nov 13 13:45:34 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++       </lrm>
Nov 13 13:45:34 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+      </node_state>
Nov 13 13:45:34 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+    </status>
Nov 13 13:45:34 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+  </cib>
Nov 13 13:45:34 [15879] vm1 stonith-ng: ( cib_utils.c:167   )   debug: Config update: 	Diff: --- 0.8.5
Nov 13 13:45:34 [15879] vm1 stonith-ng: ( cib_utils.c:169   )   debug: Config update: 	Diff: +++ 0.8.6 b9a94f1abf0121139641067408b3dbe0
Nov 13 13:45:34 [15879] vm1 stonith-ng: (       xml.c:1496  )   debug: Config update: 	-  <cib num_updates="5">
Nov 13 13:45:34 [15879] vm1 stonith-ng: (       xml.c:1496  )   debug: Config update: 	-    <status>
Nov 13 13:45:34 [15879] vm1 stonith-ng: (       xml.c:1496  )   debug: Config update: 	-      <node_state id="3232261518">
Nov 13 13:45:34 [15879] vm1 stonith-ng: (       xml.c:1496  )   debug: Config update: 	--       <lrm id="3232261518">
Nov 13 13:45:34 [15879] vm1 stonith-ng: (       xml.c:1496  )   debug: Config update: 	--         <lrm_resources/>
Nov 13 13:45:34 [15879] vm1 stonith-ng: (       xml.c:1496  )   debug: Config update: 	--       </lrm>
Nov 13 13:45:34 [15879] vm1 stonith-ng: (       xml.c:1496  )   debug: Config update: 	-      </node_state>
Nov 13 13:45:34 [15879] vm1 stonith-ng: (       xml.c:1496  )   debug: Config update: 	-    </status>
Nov 13 13:45:34 [15879] vm1 stonith-ng: (       xml.c:1496  )   debug: Config update: 	-  </cib>
Nov 13 13:45:34 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++ <cib epoch="8" num_updates="6" admin_epoch="0" validate-with="pacemaker-1.2" crm_feature_set="3.0.8" cib-last-written="Wed Nov 13 13:45:33 2013" update-origin="vm1" update-client="crmd" have-quorum="1" dc-uuid="3232261517"/>
Nov 13 13:45:34 [15879] vm1 stonith-ng: ( cib_utils.c:167   )   debug: Config update: 	Diff: --- 0.8.6
Nov 13 13:45:34 [15879] vm1 stonith-ng: ( cib_utils.c:169   )   debug: Config update: 	Diff: +++ 0.8.7 8afde5af943ecd2a85a00f392002038c
Nov 13 13:45:34 [15879] vm1 stonith-ng: (       xml.c:1496  )   debug: Config update: 	-- <cib num_updates="6"/>
Nov 13 13:45:34 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+  <cib epoch="8" num_updates="7" admin_epoch="0" validate-with="pacemaker-1.2" crm_feature_set="3.0.8" cib-last-written="Wed Nov 13 13:45:33 2013" update-origin="vm1" update-client="crmd" have-quorum="1" dc-uuid="3232261517">
Nov 13 13:45:34 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+    <status>
Nov 13 13:45:34 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+      <node_state id="3232261518" uname="vm2" in_ccm="true" crmd="online" crm-debug-origin="do_lrm_query_internal" join="member" expected="member">
Nov 13 13:45:34 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++       <lrm id="3232261518">
Nov 13 13:45:34 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++         <lrm_resources/>
Nov 13 13:45:34 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++       </lrm>
Nov 13 13:45:34 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+      </node_state>
Nov 13 13:45:34 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+    </status>
Nov 13 13:45:34 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+  </cib>
Nov 13 13:45:35 [15883] vm1       crmd: (     utils.c:120   )    info: crm_timer_popped: 	New Transition Timer (I_PE_CALC) just popped (2000ms)
Nov 13 13:45:35 [15883] vm1       crmd: (       fsa.c:193   )   debug: s_crmd_fsa: 	Processing I_PE_CALC: [ state=S_POLICY_ENGINE cause=C_TIMER_POPPED origin=crm_timer_popped ]
Nov 13 13:45:35 [15883] vm1       crmd: (   pengine.c:231   )   debug: do_pe_invoke: 	Query 60: Requesting the current CIB: S_POLICY_ENGINE
Nov 13 13:45:35 [15878] vm1        cib: ( callbacks.c:761   )    info: cib_process_request: 	Completed cib_query operation for section 'all': OK (rc=0, origin=local/crmd/60, version=0.8.7)
Nov 13 13:45:35 [15882] vm1    pengine: (    unpack.c:98    )   debug: unpack_config: 	STONITH timeout: 60000
Nov 13 13:45:35 [15882] vm1    pengine: (    unpack.c:102   )   debug: unpack_config: 	STONITH of failed nodes is enabled
Nov 13 13:45:35 [15882] vm1    pengine: (    unpack.c:109   )   debug: unpack_config: 	Stop all active resources: false
Nov 13 13:45:35 [15882] vm1    pengine: (    unpack.c:113   )   debug: unpack_config: 	Cluster is symmetric - resources can run anywhere by default
Nov 13 13:45:35 [15882] vm1    pengine: (    unpack.c:118   )   debug: unpack_config: 	Default stickiness: 0
Nov 13 13:45:35 [15882] vm1    pengine: (    unpack.c:152   )   debug: unpack_config: 	On loss of CCM Quorum: Freeze resources
Nov 13 13:45:35 [15882] vm1    pengine: (    unpack.c:201   )   debug: unpack_config: 	Node scores: 'red' = -INFINITY, 'yellow' = 0, 'green' = 0
Nov 13 13:45:35 [15882] vm1    pengine: (    unpack.c:509   )   debug: unpack_domains: 	Unpacking domains
Nov 13 13:45:35 [15883] vm1       crmd: (   pengine.c:299   )   debug: do_pe_invoke_callback: 	Invoking the PE: query=60, ref=pe_calc-dc-1384317935-33, seq=12, quorate=1
Nov 13 13:45:35 [15882] vm1    pengine: (    unpack.c:1220  )    info: determine_online_status_fencing: 	Node vm3 is active
Nov 13 13:45:35 [15882] vm1    pengine: (    unpack.c:1336  )    info: determine_online_status: 	Node vm3 is online
Nov 13 13:45:35 [15882] vm1    pengine: (    unpack.c:1220  )    info: determine_online_status_fencing: 	Node vm1 is active
Nov 13 13:45:35 [15882] vm1    pengine: (    unpack.c:1336  )    info: determine_online_status: 	Node vm1 is online
Nov 13 13:45:35 [15882] vm1    pengine: (    unpack.c:1220  )    info: determine_online_status_fencing: 	Node vm2 is active
Nov 13 13:45:35 [15882] vm1    pengine: (    unpack.c:1336  )    info: determine_online_status: 	Node vm2 is online
Nov 13 13:45:35 [15882] vm1    pengine: (    native.c:446   )    info: native_print: 	F1	(stonith:external/libvirt):	Stopped 
Nov 13 13:45:35 [15882] vm1    pengine: (    native.c:446   )    info: native_print: 	pDummy	(ocf::pacemaker:Dummy):	Stopped 
Nov 13 13:45:35 [15882] vm1    pengine: (     utils.c:386   )   debug: native_assign_node: 	Assigning vm1 to F1
Nov 13 13:45:35 [15882] vm1    pengine: (     utils.c:386   )   debug: native_assign_node: 	Assigning vm3 to pDummy
Nov 13 13:45:35 [15882] vm1    pengine: (    native.c:2511  )   debug: native_create_probe: 	Probing F1 on vm1 (Stopped)
Nov 13 13:45:35 [15882] vm1    pengine: (    native.c:2511  )   debug: native_create_probe: 	Probing pDummy on vm1 (Stopped)
Nov 13 13:45:35 [15882] vm1    pengine: (    native.c:2511  )   debug: native_create_probe: 	Probing F1 on vm2 (Stopped)
Nov 13 13:45:35 [15882] vm1    pengine: (    native.c:2511  )   debug: native_create_probe: 	Probing pDummy on vm2 (Stopped)
Nov 13 13:45:35 [15882] vm1    pengine: (    native.c:2511  )   debug: native_create_probe: 	Probing F1 on vm3 (Stopped)
Nov 13 13:45:35 [15882] vm1    pengine: (    native.c:2511  )   debug: native_create_probe: 	Probing pDummy on vm3 (Stopped)
Nov 13 13:45:35 [15882] vm1    pengine: (    native.c:788   )    info: RecurringOp: 	 Start recurring monitor (3600s) for F1 on vm1
Nov 13 13:45:35 [15882] vm1    pengine: (    native.c:788   )    info: RecurringOp: 	 Start recurring monitor (10s) for pDummy on vm3
Nov 13 13:45:35 [15882] vm1    pengine: (    native.c:2084  )  notice: LogActions: 	Start   F1	(vm1)
Nov 13 13:45:35 [15882] vm1    pengine: (    native.c:2084  )  notice: LogActions: 	Start   pDummy	(vm3)
Nov 13 13:45:35 [15883] vm1       crmd: (       fsa.c:193   )   debug: s_crmd_fsa: 	Processing I_PE_SUCCESS: [ state=S_POLICY_ENGINE cause=C_IPC_MESSAGE origin=handle_response ]
Nov 13 13:45:35 [15883] vm1       crmd: (       fsa.c:502   )    info: do_state_transition: 	State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
Nov 13 13:45:35 [15883] vm1       crmd: (    unpack.c:230   )   debug: unpack_graph: 	Unpacked transition 1: 14 actions in 14 synapses
Nov 13 13:45:35 [15883] vm1       crmd: (   tengine.c:208   )    info: do_te_invoke: 	Processing graph 1 (ref=pe_calc-dc-1384317935-33) derived from /var/lib/pacemaker/pengine/pe-input-1.bz2
Nov 13 13:45:35 [15883] vm1       crmd: (te_actions.c:416   )  notice: te_rsc_command: 	Initiating action 10: monitor F1_monitor_0 on vm3
Nov 13 13:45:35 [15883] vm1       crmd: (te_actions.c:416   )  notice: te_rsc_command: 	Initiating action 7: monitor F1_monitor_0 on vm2
Nov 13 13:45:35 [15883] vm1       crmd: (te_actions.c:416   )  notice: te_rsc_command: 	Initiating action 4: monitor F1_monitor_0 on vm1 (local)
Nov 13 13:45:35 [15880] vm1       lrmd: (      lrmd.c:1072  )    info: process_lrmd_get_rsc_info: 	Resource 'F1' not found (0 active resources)
Nov 13 13:45:35 [15880] vm1       lrmd: (      main.c:179   )   trace: lrmd_server_send_reply: 	sending reply to client (b88cb348-8886-42a5-bb3d-9ea70cadc946) with msg id 75
Nov 13 13:45:35 [15880] vm1       lrmd: (      lrmd.c:1313  )   debug: process_lrmd_message: 	Processed lrmd_rsc_info operation from b88cb348-8886-42a5-bb3d-9ea70cadc946: rc=0, reply=0, notify=0, exit=4201920
Nov 13 13:45:35 [15880] vm1       lrmd: (      lrmd.c:1047  )    info: process_lrmd_rsc_register: 	Added 'F1' to the rsc list (1 active resources)
Nov 13 13:45:35 [15880] vm1       lrmd: (      lrmd.c:1313  )   debug: process_lrmd_message: 	Processed lrmd_rsc_register operation from b88cb348-8886-42a5-bb3d-9ea70cadc946: rc=0, reply=1, notify=1, exit=4201920
Nov 13 13:45:35 [15880] vm1       lrmd: (      main.c:179   )   trace: lrmd_server_send_reply: 	sending reply to client (b88cb348-8886-42a5-bb3d-9ea70cadc946) with msg id 76
Nov 13 13:45:35 [15880] vm1       lrmd: (      main.c:196   )   trace: lrmd_server_send_notify: 	sending notify to client (b88cb348-8886-42a5-bb3d-9ea70cadc946)
Nov 13 13:45:35 [15880] vm1       lrmd: (      main.c:179   )   trace: lrmd_server_send_reply: 	sending reply to client (b88cb348-8886-42a5-bb3d-9ea70cadc946) with msg id 77
Nov 13 13:45:35 [15880] vm1       lrmd: (      lrmd.c:1313  )   debug: process_lrmd_message: 	Processed lrmd_rsc_info operation from b88cb348-8886-42a5-bb3d-9ea70cadc946: rc=0, reply=0, notify=0, exit=4201920
Nov 13 13:45:35 [15883] vm1       crmd: (       lrm.c:1784  )    info: do_lrm_rsc_op: 	Performing key=4:1:7:154fb289-24e8-407e-9a03-69a510480b60 op=F1_monitor_0
Nov 13 13:45:35 [15880] vm1       lrmd: (      lrmd.c:1313  )   debug: process_lrmd_message: 	Processed lrmd_rsc_exec operation from b88cb348-8886-42a5-bb3d-9ea70cadc946: rc=5, reply=1, notify=0, exit=4201920
Nov 13 13:45:35 [15880] vm1       lrmd: (      main.c:179   )   trace: lrmd_server_send_reply: 	sending reply to client (b88cb348-8886-42a5-bb3d-9ea70cadc946) with msg id 78
Nov 13 13:45:35 [15880] vm1       lrmd: (      lrmd.c:122   )   debug: log_execute: 	executing - rsc:F1 action:monitor call_id:5
Nov 13 13:45:35 [15883] vm1       crmd: (te_actions.c:416   )  notice: te_rsc_command: 	Initiating action 11: monitor pDummy_monitor_0 on vm3
Nov 13 13:45:35 [15883] vm1       crmd: (te_actions.c:416   )  notice: te_rsc_command: 	Initiating action 8: monitor pDummy_monitor_0 on vm2
Nov 13 13:45:35 [15883] vm1       crmd: (te_actions.c:416   )  notice: te_rsc_command: 	Initiating action 5: monitor pDummy_monitor_0 on vm1 (local)
Nov 13 13:45:35 [15882] vm1    pengine: (   pengine.c:178   )  notice: process_pe_message: 	Calculated Transition 1: /var/lib/pacemaker/pengine/pe-input-1.bz2
Nov 13 13:45:35 [15879] vm1 stonith-ng: (       ipc.c:327   )    info: crm_client_new: 	Connecting 0x1994cd0 for uid=0 gid=0 pid=15880 id=e2d2e964-db5b-4c6f-925f-8d0c9a6e6299
Nov 13 13:45:35 [15879] vm1 stonith-ng: ( ipc_setup.c:484   )   debug: handle_new_connection: 	IPC credentials authenticated (15879-15880-10)
Nov 13 13:45:35 [15879] vm1 stonith-ng: (   ipc_shm.c:295   )   debug: qb_ipcs_shm_connect: 	connecting to client [15880]
Nov 13 13:45:35 [15879] vm1 stonith-ng: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:131085; real_size:135168; rb->word_size:33792
Nov 13 13:45:35 [15879] vm1 stonith-ng: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:131085; real_size:135168; rb->word_size:33792
Nov 13 13:45:35 [15879] vm1 stonith-ng: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:131085; real_size:135168; rb->word_size:33792
Nov 13 13:45:35 [15880] vm1       lrmd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:131085; real_size:135168; rb->word_size:33792
Nov 13 13:45:35 [15880] vm1       lrmd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:131085; real_size:135168; rb->word_size:33792
Nov 13 13:45:35 [15880] vm1       lrmd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:131085; real_size:135168; rb->word_size:33792
Nov 13 13:45:35 [15879] vm1 stonith-ng: (      main.c:87    )   trace: st_ipc_created: 	Connection created for 0x1994cd0
Nov 13 13:45:35 [15879] vm1 stonith-ng: (      main.c:121   )   trace: st_ipc_dispatch: 	Flags 512/0 for command 1 from lrmd.15880
Nov 13 13:45:35 [15879] vm1 stonith-ng: (      main.c:133   )   trace: st_ipc_dispatch: 	Client[inbound]   <stonith_command t="stonith-ng" st_op="register" st_clientname="lrmd.15880" st_clientid="e2d2e964-db5b-4c6f-925f-8d0c9a6e6299" st_clientnode="vm1"/>
Nov 13 13:45:35 [15879] vm1 stonith-ng: (  commands.c:2049  )   debug: stonith_command: 	Processing register 1 from lrmd.15880 (               0)
Nov 13 13:45:35 [15880] vm1       lrmd: ( st_client.c:1639  )   debug: stonith_api_signon: 	Connection to STONITH successful
Nov 13 13:45:35 [15879] vm1 stonith-ng: (  commands.c:2063  )    info: stonith_command: 	Processed register from lrmd.15880: OK (0)
Nov 13 13:45:35 [15879] vm1 stonith-ng: (      main.c:121   )   trace: st_ipc_dispatch: 	Flags 512/0 for command 2 from lrmd.15880
Nov 13 13:45:35 [15879] vm1 stonith-ng: (      main.c:133   )   trace: st_ipc_dispatch: 	Client[inbound]   <stonith_set_notification st_op="st_notify" st_notify_activate="st_notify_disconnect" st_clientid="e2d2e964-db5b-4c6f-925f-8d0c9a6e6299" st_clientname="lrmd.15880" st_clientnode="vm1"/>
Nov 13 13:45:35 [15879] vm1 stonith-ng: (  commands.c:2049  )   debug: stonith_command: 	Processing st_notify 2 from lrmd.15880 (               0)
Nov 13 13:45:35 [15879] vm1 stonith-ng: (  commands.c:1822  )   debug: handle_request: 	Setting st_notify_disconnect callbacks for lrmd.15880 (e2d2e964-db5b-4c6f-925f-8d0c9a6e6299): ON
Nov 13 13:45:35 [15880] vm1       lrmd: (      lrmd.c:104   )   debug: log_finished: 	finished - rsc:F1 action:monitor call_id:5  exit-code:7 exec-time:23ms queue-time:0ms
Nov 13 13:45:35 [15880] vm1       lrmd: (      main.c:196   )   trace: lrmd_server_send_notify: 	sending notify to client (b88cb348-8886-42a5-bb3d-9ea70cadc946)
Nov 13 13:45:35 [15880] vm1       lrmd: (      lrmd.c:1072  )    info: process_lrmd_get_rsc_info: 	Resource 'pDummy' not found (1 active resources)
Nov 13 13:45:35 [15880] vm1       lrmd: (      main.c:179   )   trace: lrmd_server_send_reply: 	sending reply to client (b88cb348-8886-42a5-bb3d-9ea70cadc946) with msg id 79
Nov 13 13:45:35 [15880] vm1       lrmd: (      lrmd.c:1313  )   debug: process_lrmd_message: 	Processed lrmd_rsc_info operation from b88cb348-8886-42a5-bb3d-9ea70cadc946: rc=0, reply=0, notify=0, exit=4201920
Nov 13 13:45:35 [15880] vm1       lrmd: (      lrmd.c:1047  )    info: process_lrmd_rsc_register: 	Added 'pDummy' to the rsc list (2 active resources)
Nov 13 13:45:35 [15880] vm1       lrmd: (      lrmd.c:1313  )   debug: process_lrmd_message: 	Processed lrmd_rsc_register operation from b88cb348-8886-42a5-bb3d-9ea70cadc946: rc=0, reply=1, notify=1, exit=4201920
Nov 13 13:45:35 [15880] vm1       lrmd: (      main.c:179   )   trace: lrmd_server_send_reply: 	sending reply to client (b88cb348-8886-42a5-bb3d-9ea70cadc946) with msg id 80
Nov 13 13:45:35 [15880] vm1       lrmd: (      main.c:196   )   trace: lrmd_server_send_notify: 	sending notify to client (b88cb348-8886-42a5-bb3d-9ea70cadc946)
Nov 13 13:45:35 [15880] vm1       lrmd: (      main.c:179   )   trace: lrmd_server_send_reply: 	sending reply to client (b88cb348-8886-42a5-bb3d-9ea70cadc946) with msg id 81
Nov 13 13:45:35 [15883] vm1       crmd: (       lrm.c:1784  )    info: do_lrm_rsc_op: 	Performing key=5:1:7:154fb289-24e8-407e-9a03-69a510480b60 op=pDummy_monitor_0
Nov 13 13:45:35 [15880] vm1       lrmd: (      lrmd.c:1313  )   debug: process_lrmd_message: 	Processed lrmd_rsc_info operation from b88cb348-8886-42a5-bb3d-9ea70cadc946: rc=0, reply=0, notify=0, exit=4201920
Nov 13 13:45:35 [15880] vm1       lrmd: (      lrmd.c:1313  )   debug: process_lrmd_message: 	Processed lrmd_rsc_exec operation from b88cb348-8886-42a5-bb3d-9ea70cadc946: rc=9, reply=1, notify=0, exit=4201920
Nov 13 13:45:35 [15880] vm1       lrmd: (      main.c:179   )   trace: lrmd_server_send_reply: 	sending reply to client (b88cb348-8886-42a5-bb3d-9ea70cadc946) with msg id 82
Nov 13 13:45:35 [15883] vm1       crmd: (     graph.c:336   )   debug: run_graph: 	Transition 1 (Complete=0, Pending=6, Fired=6, Skipped=0, Incomplete=8, Source=/var/lib/pacemaker/pengine/pe-input-1.bz2): In-progress
Nov 13 13:45:35 [15883] vm1       crmd: (     utils.c:2104  )   debug: create_operation_update: 	do_update_resource: Updating resource F1 after monitor op complete (interval=0)
Nov 13 13:45:35 [15880] vm1       lrmd: (      lrmd.c:122   )   debug: log_execute: 	executing - rsc:pDummy action:monitor call_id:9
Nov 13 13:45:35 [15879] vm1 stonith-ng: (  commands.c:2063  )    info: stonith_command: 	Processed st_notify from lrmd.15880: OK (0)
Dummy(pDummy)[15934]:	2013/11/13_13:45:35 DEBUG: pDummy monitor : 7
Nov 13 13:45:35 [15880] vm1       lrmd: (services_lin:301   )   debug: operation_finished: 	pDummy_monitor_0:15934 - exited with rc=7
Nov 13 13:45:35 [15880] vm1       lrmd: (services_lin:306   )   debug: operation_finished: 	pDummy_monitor_0:15934:stderr [ -- empty -- ]
Nov 13 13:45:35 [15880] vm1       lrmd: (services_lin:310   )   debug: operation_finished: 	pDummy_monitor_0:15934:stdout [ -- empty -- ]
Nov 13 13:45:35 [15880] vm1       lrmd: (      lrmd.c:104   )   debug: log_finished: 	finished - rsc:pDummy action:monitor call_id:9 pid:15934 exit-code:7 exec-time:121ms queue-time:2ms
Nov 13 13:45:35 [15880] vm1       lrmd: (      main.c:196   )   trace: lrmd_server_send_notify: 	sending notify to client (b88cb348-8886-42a5-bb3d-9ea70cadc946)
Nov 13 13:45:36 [15879] vm1 stonith-ng: ( cib_utils.c:167   )   debug: Config update: 	Diff: --- 0.8.7
Nov 13 13:45:36 [15879] vm1 stonith-ng: ( cib_utils.c:169   )   debug: Config update: 	Diff: +++ 0.8.8 26ed7a0b48fc4ae623a5aee9a3d14dcf
Nov 13 13:45:36 [15879] vm1 stonith-ng: (       xml.c:1496  )   debug: Config update: 	-- <cib num_updates="7"/>
Nov 13 13:45:36 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+  <cib epoch="8" num_updates="8" admin_epoch="0" validate-with="pacemaker-1.2" crm_feature_set="3.0.8" cib-last-written="Wed Nov 13 13:45:33 2013" update-origin="vm1" update-client="crmd" have-quorum="1" dc-uuid="3232261517">
Nov 13 13:45:36 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+    <status>
Nov 13 13:45:36 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+      <node_state id="3232261518" uname="vm2" in_ccm="true" crmd="online" crm-debug-origin="do_update_resource" join="member" expected="member">
Nov 13 13:45:36 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+        <lrm id="3232261518">
Nov 13 13:45:36 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+          <lrm_resources>
Nov 13 13:45:36 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++           <lrm_resource id="F1" type="external/libvirt" class="stonith">
Nov 13 13:45:36 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++             <lrm_rsc_op id="F1_last_0" operation_key="F1_monitor_0" operation="monitor" crm-debug-origin="do_update_resource" crm_feature_set="3.0.8" transition-key="7:1:7:154fb289-24e8-407e-9a03-69a510480b60" transition-magic="0:7;7:1:7:154fb289-24e8-407e-9a03-69a510480b60" call-id="5" rc-code="7" op-status="0" interval="0" last-run="1384317935" last-rc-change="1384317935" exec-time="13" queue-time="0" op-digest="288
Nov 13 13:45:36 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++           </lrm_resource>
Nov 13 13:45:36 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+          </lrm_resources>
Nov 13 13:45:36 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+        </lrm>
Nov 13 13:45:36 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+      </node_state>
Nov 13 13:45:36 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+    </status>
Nov 13 13:45:36 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+  </cib>
Nov 13 13:45:36 [15878] vm1        cib: ( callbacks.c:761   )    info: cib_process_request: 	Completed cib_modify operation for section status: OK (rc=0, origin=vm2/crmd/17, version=0.8.8)
Nov 13 13:45:36 [15879] vm1 stonith-ng: ( cib_utils.c:167   )   debug: Config update: 	Diff: --- 0.8.8
Nov 13 13:45:36 [15879] vm1 stonith-ng: ( cib_utils.c:169   )   debug: Config update: 	Diff: +++ 0.8.9 a620ba287b7786990c988e5680eea772
Nov 13 13:45:36 [15879] vm1 stonith-ng: (       xml.c:1496  )   debug: Config update: 	-- <cib num_updates="8"/>
Nov 13 13:45:36 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+  <cib epoch="8" num_updates="9" admin_epoch="0" validate-with="pacemaker-1.2" crm_feature_set="3.0.8" cib-last-written="Wed Nov 13 13:45:33 2013" update-origin="vm1" update-client="crmd" have-quorum="1" dc-uuid="3232261517">
Nov 13 13:45:36 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+    <status>
Nov 13 13:45:36 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+      <node_state id="3232261517" uname="vm1" in_ccm="true" crmd="online" crm-debug-origin="do_update_resource" join="member" expected="member">
Nov 13 13:45:36 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+        <lrm id="3232261517">
Nov 13 13:45:36 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+          <lrm_resources>
Nov 13 13:45:36 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++           <lrm_resource id="F1" type="external/libvirt" class="stonith">
Nov 13 13:45:36 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++             <lrm_rsc_op id="F1_last_0" operation_key="F1_monitor_0" operation="monitor" crm-debug-origin="do_update_resource" crm_feature_set="3.0.8" transition-key="4:1:7:154fb289-24e8-407e-9a03-69a510480b60" transition-magic="0:7;4:1:7:154fb289-24e8-407e-9a03-69a510480b60" call-id="5" rc-code="7" op-status="0" interval="0" last-run="1384317935" last-rc-change="1384317935" exec-time="23" queue-time="0" op-digest="288
Nov 13 13:45:36 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++           </lrm_resource>
Nov 13 13:45:36 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+          </lrm_resources>
Nov 13 13:45:36 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+        </lrm>
Nov 13 13:45:36 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+      </node_state>
Nov 13 13:45:36 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+    </status>
Nov 13 13:45:36 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+  </cib>
Nov 13 13:45:36 [15883] vm1       crmd: (       lrm.c:2101  )    info: process_lrm_event: 	LRM operation F1_monitor_0 (call=5, rc=7, cib-update=61, confirmed=true) not running
Nov 13 13:45:36 [15883] vm1       crmd: (       lrm.c:122   )   debug: update_history_cache: 	Updating history for 'F1' with monitor op
Nov 13 13:45:36 [15883] vm1       crmd: (     utils.c:2104  )   debug: create_operation_update: 	do_update_resource: Updating resource pDummy after monitor op complete (interval=0)
Nov 13 13:45:36 [15878] vm1        cib: ( callbacks.c:761   )    info: cib_process_request: 	Completed cib_modify operation for section status: OK (rc=0, origin=local/crmd/61, version=0.8.9)
Nov 13 13:45:36 [15879] vm1 stonith-ng: ( cib_utils.c:167   )   debug: Config update: 	Diff: --- 0.8.9
Nov 13 13:45:36 [15879] vm1 stonith-ng: ( cib_utils.c:169   )   debug: Config update: 	Diff: +++ 0.8.10 1bf63487cfb2465e5f9305b2b310410c
Nov 13 13:45:36 [15879] vm1 stonith-ng: (       xml.c:1496  )   debug: Config update: 	-- <cib num_updates="9"/>
Nov 13 13:45:36 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+  <cib epoch="8" num_updates="10" admin_epoch="0" validate-with="pacemaker-1.2" crm_feature_set="3.0.8" cib-last-written="Wed Nov 13 13:45:33 2013" update-origin="vm1" update-client="crmd" have-quorum="1" dc-uuid="3232261517">
Nov 13 13:45:36 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+    <status>
Nov 13 13:45:36 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+      <node_state id="3232261519" uname="vm3" in_ccm="true" crmd="online" crm-debug-origin="do_update_resource" join="member" expected="member">
Nov 13 13:45:36 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+        <lrm id="3232261519">
Nov 13 13:45:36 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+          <lrm_resources>
Nov 13 13:45:36 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++           <lrm_resource id="F1" type="external/libvirt" class="stonith">
Nov 13 13:45:36 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++             <lrm_rsc_op id="F1_last_0" operation_key="F1_monitor_0" operation="monitor" crm-debug-origin="do_update_resource" crm_feature_set="3.0.8" transition-key="10:1:7:154fb289-24e8-407e-9a03-69a510480b60" transition-magic="0:7;10:1:7:154fb289-24e8-407e-9a03-69a510480b60" call-id="5" rc-code="7" op-status="0" interval="0" last-run="1384317935" last-rc-change="1384317935" exec-time="15" queue-time="0" op-digest="2
Nov 13 13:45:36 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++           </lrm_resource>
Nov 13 13:45:36 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+          </lrm_resources>
Nov 13 13:45:36 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+        </lrm>
Nov 13 13:45:36 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+      </node_state>
Nov 13 13:45:36 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+    </status>
Nov 13 13:45:36 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+  </cib>
Nov 13 13:45:36 [15878] vm1        cib: ( callbacks.c:761   )    info: cib_process_request: 	Completed cib_modify operation for section status: OK (rc=0, origin=vm3/crmd/17, version=0.8.10)
Nov 13 13:45:36 [15879] vm1 stonith-ng: ( cib_utils.c:167   )   debug: Config update: 	Diff: --- 0.8.10
Nov 13 13:45:36 [15879] vm1 stonith-ng: ( cib_utils.c:169   )   debug: Config update: 	Diff: +++ 0.8.11 66f83af99c165cdf0a74a520b9474f1b
Nov 13 13:45:36 [15879] vm1 stonith-ng: (       xml.c:1496  )   debug: Config update: 	-- <cib num_updates="10"/>
Nov 13 13:45:36 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+  <cib epoch="8" num_updates="11" admin_epoch="0" validate-with="pacemaker-1.2" crm_feature_set="3.0.8" cib-last-written="Wed Nov 13 13:45:33 2013" update-origin="vm1" update-client="crmd" have-quorum="1" dc-uuid="3232261517">
Nov 13 13:45:36 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+    <status>
Nov 13 13:45:36 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+      <node_state id="3232261518" uname="vm2" in_ccm="true" crmd="online" crm-debug-origin="do_update_resource" join="member" expected="member">
Nov 13 13:45:36 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+        <lrm id="3232261518">
Nov 13 13:45:36 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+          <lrm_resources>
Nov 13 13:45:36 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++           <lrm_resource id="pDummy" type="Dummy" class="ocf" provider="pacemaker">
Nov 13 13:45:36 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++             <lrm_rsc_op id="pDummy_last_0" operation_key="pDummy_monitor_0" operation="monitor" crm-debug-origin="do_update_resource" crm_feature_set="3.0.8" transition-key="8:1:7:154fb289-24e8-407e-9a03-69a510480b60" transition-magic="0:7;8:1:7:154fb289-24e8-407e-9a03-69a510480b60" call-id="9" rc-code="7" op-status="0" interval="0" last-run="1384317935" last-rc-change="1384317935" exec-time="120" queue-time="2" op-di
Nov 13 13:45:36 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++           </lrm_resource>
Nov 13 13:45:36 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+          </lrm_resources>
Nov 13 13:45:36 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+        </lrm>
Nov 13 13:45:36 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+      </node_state>
Nov 13 13:45:36 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+    </status>
Nov 13 13:45:36 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+  </cib>
Nov 13 13:45:36 [15878] vm1        cib: ( callbacks.c:761   )    info: cib_process_request: 	Completed cib_modify operation for section status: OK (rc=0, origin=vm2/crmd/18, version=0.8.11)
Nov 13 13:45:36 [15883] vm1       crmd: (services_lin:604   )    info: services_os_action_execute: 	Managed Dummy_meta-data_0 process 15955 exited with rc=0
Nov 13 13:45:36 [15883] vm1       crmd: (       lrm.c:565   )   debug: get_rsc_restart_list: 	Attr state is not reloadable
Nov 13 13:45:36 [15883] vm1       crmd: (       lrm.c:565   )   debug: get_rsc_restart_list: 	Attr op_sleep is not reloadable
Nov 13 13:45:36 [15879] vm1 stonith-ng: ( cib_utils.c:167   )   debug: Config update: 	Diff: --- 0.8.11
Nov 13 13:45:36 [15879] vm1 stonith-ng: ( cib_utils.c:169   )   debug: Config update: 	Diff: +++ 0.8.12 69b87249349fec166963d00574c1e8d9
Nov 13 13:45:36 [15879] vm1 stonith-ng: (       xml.c:1496  )   debug: Config update: 	-- <cib num_updates="11"/>
Nov 13 13:45:36 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+  <cib epoch="8" num_updates="12" admin_epoch="0" validate-with="pacemaker-1.2" crm_feature_set="3.0.8" cib-last-written="Wed Nov 13 13:45:33 2013" update-origin="vm1" update-client="crmd" have-quorum="1" dc-uuid="3232261517">
Nov 13 13:45:36 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+    <status>
Nov 13 13:45:36 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+      <node_state id="3232261517" uname="vm1" in_ccm="true" crmd="online" crm-debug-origin="do_update_resource" join="member" expected="member">
Nov 13 13:45:36 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+        <lrm id="3232261517">
Nov 13 13:45:36 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+          <lrm_resources>
Nov 13 13:45:36 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++           <lrm_resource id="pDummy" type="Dummy" class="ocf" provider="pacemaker">
Nov 13 13:45:36 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++             <lrm_rsc_op id="pDummy_last_0" operation_key="pDummy_monitor_0" operation="monitor" crm-debug-origin="do_update_resource" crm_feature_set="3.0.8" transition-key="5:1:7:154fb289-24e8-407e-9a03-69a510480b60" transition-magic="0:7;5:1:7:154fb289-24e8-407e-9a03-69a510480b60" call-id="9" rc-code="7" op-status="0" interval="0" last-run="1384317935" last-rc-change="1384317935" exec-time="121" queue-time="2" op-di
Nov 13 13:45:36 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++           </lrm_resource>
Nov 13 13:45:36 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+          </lrm_resources>
Nov 13 13:45:36 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+        </lrm>
Nov 13 13:45:36 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+      </node_state>
Nov 13 13:45:36 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+    </status>
Nov 13 13:45:36 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+  </cib>
Nov 13 13:45:36 [15878] vm1        cib: ( callbacks.c:761   )    info: cib_process_request: 	Completed cib_modify operation for section status: OK (rc=0, origin=local/crmd/62, version=0.8.12)
Nov 13 13:45:36 [15883] vm1       crmd: (       lrm.c:2101  )  notice: process_lrm_event: 	LRM operation pDummy_monitor_0 (call=9, rc=7, cib-update=62, confirmed=true) not running
Nov 13 13:45:36 [15883] vm1       crmd: (       lrm.c:122   )   debug: update_history_cache: 	Updating history for 'pDummy' with monitor op
Nov 13 13:45:36 [15883] vm1       crmd: (te_callbacks:122   )   debug: te_update_diff: 	Processing diff (cib_modify): 0.8.7 -> 0.8.8 (S_TRANSITION_ENGINE)
Nov 13 13:45:36 [15883] vm1       crmd: ( te_events.c:375   )    info: match_graph_event: 	Action F1_monitor_0 (7) confirmed on vm2 (rc=0)
Nov 13 13:45:36 [15883] vm1       crmd: (te_callbacks:122   )   debug: te_update_diff: 	Processing diff (cib_modify): 0.8.8 -> 0.8.9 (S_TRANSITION_ENGINE)
Nov 13 13:45:36 [15883] vm1       crmd: ( te_events.c:375   )    info: match_graph_event: 	Action F1_monitor_0 (4) confirmed on vm1 (rc=0)
Nov 13 13:45:36 [15883] vm1       crmd: (te_callbacks:122   )   debug: te_update_diff: 	Processing diff (cib_modify): 0.8.9 -> 0.8.10 (S_TRANSITION_ENGINE)
Nov 13 13:45:36 [15883] vm1       crmd: ( te_events.c:375   )    info: match_graph_event: 	Action F1_monitor_0 (10) confirmed on vm3 (rc=0)
Nov 13 13:45:36 [15883] vm1       crmd: (te_callbacks:122   )   debug: te_update_diff: 	Processing diff (cib_modify): 0.8.10 -> 0.8.11 (S_TRANSITION_ENGINE)
Nov 13 13:45:36 [15883] vm1       crmd: ( te_events.c:375   )    info: match_graph_event: 	Action pDummy_monitor_0 (8) confirmed on vm2 (rc=0)
Nov 13 13:45:36 [15883] vm1       crmd: (te_callbacks:122   )   debug: te_update_diff: 	Processing diff (cib_modify): 0.8.11 -> 0.8.12 (S_TRANSITION_ENGINE)
Nov 13 13:45:36 [15883] vm1       crmd: ( te_events.c:375   )    info: match_graph_event: 	Action pDummy_monitor_0 (5) confirmed on vm1 (rc=0)
Nov 13 13:45:36 [15883] vm1       crmd: (te_actions.c:416   )  notice: te_rsc_command: 	Initiating action 6: probe_complete probe_complete on vm2 - no waiting
Nov 13 13:45:36 [15883] vm1       crmd: (te_actions.c:454   )    info: te_rsc_command: 	Action 6 confirmed - no wait
Nov 13 13:45:36 [15883] vm1       crmd: (te_actions.c:416   )  notice: te_rsc_command: 	Initiating action 3: probe_complete probe_complete on vm1 (local) - no waiting
Nov 13 13:45:36 [15881] vm1      attrd: (      main.c:221   )   trace: attrd_ipc_dispatch: 	Processing msg from 15883 (0x1ff0000)
Nov 13 13:45:36 [15881] vm1      attrd: (      main.c:222   )   trace: attrd_ipc_dispatch: 	attrd_ipc_dispatch   <attrd_update_delegate t="attrd" src="crmd" task="update" attr_name="probe_complete" attr_value="true" attr_section="status" attr_host="vm1" attr_is_remote="0"/>
Nov 13 13:45:36 [15881] vm1      attrd: (  commands.c:231   )    info: attrd_client_message: 	Broadcasting probe_complete[vm1] = true
Nov 13 13:45:36 [15883] vm1       crmd: (     utils.c:2001  )   debug: attrd_update_delegate: 	Sent update: probe_complete=true for vm1
Nov 13 13:45:36 [15883] vm1       crmd: (te_actions.c:454   )    info: te_rsc_command: 	Action 3 confirmed - no wait
Nov 13 13:45:36 [15883] vm1       crmd: (     graph.c:336   )   debug: run_graph: 	Transition 1 (Complete=5, Pending=1, Fired=2, Skipped=0, Incomplete=6, Source=/var/lib/pacemaker/pengine/pe-input-1.bz2): In-progress
Nov 13 13:45:36 [15883] vm1       crmd: (     graph.c:336   )   debug: run_graph: 	Transition 1 (Complete=7, Pending=1, Fired=0, Skipped=0, Incomplete=6, Source=/var/lib/pacemaker/pengine/pe-input-1.bz2): In-progress
Nov 13 13:45:36 [15881] vm1      attrd: (  commands.c:423   )   trace: attrd_peer_update: 	Unchanged probe_complete[vm1] from vm1 is true
Nov 13 13:45:36 [15881] vm1      attrd: (  commands.c:423   )   trace: attrd_peer_update: 	Unchanged probe_complete[vm2] from vm2 is true
Nov 13 13:45:36 [15883] vm1       crmd: (te_callbacks:122   )   debug: te_update_diff: 	Processing diff (cib_modify): 0.8.12 -> 0.8.13 (S_TRANSITION_ENGINE)
Nov 13 13:45:36 [15883] vm1       crmd: ( te_events.c:375   )    info: match_graph_event: 	Action pDummy_monitor_0 (11) confirmed on vm3 (rc=0)
Nov 13 13:45:36 [15883] vm1       crmd: (te_actions.c:416   )  notice: te_rsc_command: 	Initiating action 9: probe_complete probe_complete on vm3 - no waiting
Nov 13 13:45:36 [15883] vm1       crmd: (te_actions.c:454   )    info: te_rsc_command: 	Action 9 confirmed - no wait
Nov 13 13:45:36 [15883] vm1       crmd: (te_actions.c:55    )   debug: te_pseudo_action: 	Pseudo action 2 fired and confirmed
Nov 13 13:45:36 [15883] vm1       crmd: (     graph.c:336   )   debug: run_graph: 	Transition 1 (Complete=8, Pending=0, Fired=2, Skipped=0, Incomplete=4, Source=/var/lib/pacemaker/pengine/pe-input-1.bz2): In-progress
Nov 13 13:45:36 [15883] vm1       crmd: (te_actions.c:416   )  notice: te_rsc_command: 	Initiating action 12: start F1_start_0 on vm1 (local)
Nov 13 13:45:36 [15883] vm1       crmd: (       lrm.c:1780  )   debug: do_lrm_rsc_op: 	Stopped 0 recurring operations in preparation for F1_start_0
Nov 13 13:45:36 [15883] vm1       crmd: (       lrm.c:1784  )    info: do_lrm_rsc_op: 	Performing key=12:1:0:154fb289-24e8-407e-9a03-69a510480b60 op=F1_start_0
Nov 13 13:45:36 [15880] vm1       lrmd: (      lrmd.c:1313  )   debug: process_lrmd_message: 	Processed lrmd_rsc_exec operation from b88cb348-8886-42a5-bb3d-9ea70cadc946: rc=10, reply=1, notify=0, exit=4201920
Nov 13 13:45:36 [15880] vm1       lrmd: (      main.c:179   )   trace: lrmd_server_send_reply: 	sending reply to client (b88cb348-8886-42a5-bb3d-9ea70cadc946) with msg id 86
Nov 13 13:45:36 [15880] vm1       lrmd: (      lrmd.c:122   )    info: log_execute: 	executing - rsc:F1 action:start call_id:10
Nov 13 13:45:36 [15883] vm1       crmd: (te_actions.c:416   )  notice: te_rsc_command: 	Initiating action 14: start pDummy_start_0 on vm3
Nov 13 13:45:36 [15883] vm1       crmd: (     graph.c:336   )   debug: run_graph: 	Transition 1 (Complete=10, Pending=2, Fired=2, Skipped=0, Incomplete=2, Source=/var/lib/pacemaker/pengine/pe-input-1.bz2): In-progress
Nov 13 13:45:36 [15879] vm1 stonith-ng: (      main.c:121   )   trace: st_ipc_dispatch: 	Flags 512/4096 for command 3 from lrmd.15880
Nov 13 13:45:36 [15879] vm1 stonith-ng: (      main.c:133   )   trace: st_ipc_dispatch: 	Client[inbound]   <stonith_command __name__="stonith_command" t="stonith-ng" st_async_id="e2d2e964-db5b-4c6f-925f-8d0c9a6e6299" st_op="st_device_register" st_callid="2" st_callopt="4096" st_timeout="0" st_clientid="e2d2e964-db5b-4c6f-925f-8d0c9a6e6299" st_clientname="lrmd.15880" st_clientnode="vm1">
Nov 13 13:45:36 [15879] vm1 stonith-ng: (      main.c:133   )   trace: st_ipc_dispatch: 	Client[inbound]     <st_calldata>
Nov 13 13:45:36 [15879] vm1 stonith-ng: (      main.c:133   )   trace: st_ipc_dispatch: 	Client[inbound]       <st_device_id id="F1" origin="create_device_registration_xml" agent="fence_legacy" namespace="heartbeat">
Nov 13 13:45:36 [15879] vm1 stonith-ng: (      main.c:133   )   trace: st_ipc_dispatch: 	Client[inbound]         <attributes plugin="external/libvirt" CRM_meta_name="start" crm_feature_set="3.0.8" CRM_meta_timeout="60000" hostlist="vm3" hypervisor_uri="qemu+ssh://bl460g1n6/system"/>
Nov 13 13:45:36 [15879] vm1 stonith-ng: (      main.c:133   )   trace: st_ipc_dispatch: 	Client[inbound]       </st_device_id>
Nov 13 13:45:36 [15879] vm1 stonith-ng: (      main.c:133   )   trace: st_ipc_dispatch: 	Client[inbound]     </st_calldata>
Nov 13 13:45:36 [15879] vm1 stonith-ng: (      main.c:133   )   trace: st_ipc_dispatch: 	Client[inbound]   </stonith_command>
Nov 13 13:45:36 [15879] vm1 stonith-ng: (  commands.c:2049  )   debug: stonith_command: 	Processing st_device_register 3 from lrmd.15880 (            1000)
Nov 13 13:45:36 [15879] vm1 stonith-ng: ( st_client.c:576   )    info: stonith_action_create: 	Initiating action metadata for agent fence_legacy (target=(null))
Nov 13 13:45:36 [15879] vm1 stonith-ng: ( st_client.c:739   )   debug: internal_stonith_action_execute: 	forking
Nov 13 13:45:36 [15878] vm1        cib: ( callbacks.c:761   )    info: cib_process_request: 	Completed cib_modify operation for section status: OK (rc=0, origin=vm3/crmd/18, version=0.8.13)
Nov 13 13:45:36 [15879] vm1 stonith-ng: ( st_client.c:785   )   debug: internal_stonith_action_execute: 	sending args
Nov 13 13:45:36 [15881] vm1      attrd: (  commands.c:423   )   trace: attrd_peer_update: 	Unchanged probe_complete[vm3] from vm3 is true
Nov 13 13:45:36 [15883] vm1       crmd: (te_callbacks:122   )   debug: te_update_diff: 	Processing diff (cib_modify): 0.8.13 -> 0.8.14 (S_TRANSITION_ENGINE)
Nov 13 13:45:36 [15883] vm1       crmd: ( te_events.c:375   )    info: match_graph_event: 	Action pDummy_start_0 (14) confirmed on vm3 (rc=0)
Nov 13 13:45:36 [15883] vm1       crmd: (te_actions.c:416   )  notice: te_rsc_command: 	Initiating action 15: monitor pDummy_monitor_10000 on vm3
Nov 13 13:45:36 [15883] vm1       crmd: (     graph.c:336   )   debug: run_graph: 	Transition 1 (Complete=11, Pending=2, Fired=1, Skipped=0, Incomplete=1, Source=/var/lib/pacemaker/pengine/pe-input-1.bz2): In-progress
Nov 13 13:45:36 [15878] vm1        cib: ( callbacks.c:761   )    info: cib_process_request: 	Completed cib_modify operation for section status: OK (rc=0, origin=vm3/crmd/19, version=0.8.14)
Nov 13 13:45:36 [15883] vm1       crmd: (te_callbacks:122   )   debug: te_update_diff: 	Processing diff (cib_modify): 0.8.14 -> 0.8.15 (S_TRANSITION_ENGINE)
Nov 13 13:45:36 [15883] vm1       crmd: ( te_events.c:375   )    info: match_graph_event: 	Action pDummy_monitor_10000 (15) confirmed on vm3 (rc=0)
Nov 13 13:45:36 [15883] vm1       crmd: (     graph.c:336   )   debug: run_graph: 	Transition 1 (Complete=12, Pending=1, Fired=0, Skipped=0, Incomplete=1, Source=/var/lib/pacemaker/pengine/pe-input-1.bz2): In-progress
Nov 13 13:45:36 [15878] vm1        cib: ( callbacks.c:761   )    info: cib_process_request: 	Completed cib_modify operation for section status: OK (rc=0, origin=vm3/crmd/20, version=0.8.15)
Nov 13 13:45:37 [15879] vm1 stonith-ng: ( st_client.c:867   )   debug: internal_stonith_action_execute: 	result = 0
Nov 13 13:45:37 [15879] vm1 stonith-ng: (  commands.c:807   )   trace: device_has_duplicate: 	Match
Nov 13 13:45:37 [15879] vm1 stonith-ng: (  commands.c:820   )  notice: stonith_device_register: 	Device 'F1' already existed in device list (1 active devices)
Nov 13 13:45:37 [15879] vm1 stonith-ng: (      main.c:369   )   trace: do_stonith_notify: 	Notifying clients
Nov 13 13:45:37 [15879] vm1 stonith-ng: (      main.c:372   )   trace: do_stonith_notify: 	Notify complete
Nov 13 13:45:37 [15879] vm1 stonith-ng: (  commands.c:1991  )   trace: handle_request: 	Reply handling: 0x1997590 3 3 1 4096 4096
Nov 13 13:45:37 [15879] vm1 stonith-ng: (  commands.c:1650  )   trace: stonith_construct_reply: 	Creating a basic reply
Nov 13 13:45:37 [15879] vm1 stonith-ng: (      main.c:241   )   trace: do_local_reply: 	Sending response
Nov 13 13:45:37 [15879] vm1 stonith-ng: (      main.c:244   )   trace: do_local_reply: 	Sending callback to request originator
Nov 13 13:45:37 [15879] vm1 stonith-ng: (      main.c:259   )   trace: do_local_reply: 	Sending response 3 to lrmd.15880 
Nov 13 13:45:37 [15879] vm1 stonith-ng: (  commands.c:2063  )    info: stonith_command: 	Processed st_device_register from lrmd.15880: OK (0)
Nov 13 13:45:37 [15879] vm1 stonith-ng: ( cib_utils.c:167   )   debug: Config update: 	Diff: --- 0.8.12
Nov 13 13:45:37 [15879] vm1 stonith-ng: ( cib_utils.c:169   )   debug: Config update: 	Diff: +++ 0.8.13 92a008f0ef62d500c66d49ae54262e4e
Nov 13 13:45:37 [15879] vm1 stonith-ng: (       xml.c:1496  )   debug: Config update: 	-- <cib num_updates="12"/>
Nov 13 13:45:37 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+  <cib epoch="8" num_updates="13" admin_epoch="0" validate-with="pacemaker-1.2" crm_feature_set="3.0.8" cib-last-written="Wed Nov 13 13:45:33 2013" update-origin="vm1" update-client="crmd" have-quorum="1" dc-uuid="3232261517">
Nov 13 13:45:37 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+    <status>
Nov 13 13:45:37 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+      <node_state id="3232261519" uname="vm3" in_ccm="true" crmd="online" crm-debug-origin="do_update_resource" join="member" expected="member">
Nov 13 13:45:37 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+        <lrm id="3232261519">
Nov 13 13:45:37 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+          <lrm_resources>
Nov 13 13:45:37 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++           <lrm_resource id="pDummy" type="Dummy" class="ocf" provider="pacemaker">
Nov 13 13:45:37 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++             <lrm_rsc_op id="pDummy_last_0" operation_key="pDummy_monitor_0" operation="monitor" crm-debug-origin="do_update_resource" crm_feature_set="3.0.8" transition-key="11:1:7:154fb289-24e8-407e-9a03-69a510480b60" transition-magic="0:7;11:1:7:154fb289-24e8-407e-9a03-69a510480b60" call-id="9" rc-code="7" op-status="0" interval="0" last-run="1384317935" last-rc-change="1384317935" exec-time="130" queue-time="2" op-
Nov 13 13:45:37 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++           </lrm_resource>
Nov 13 13:45:37 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+          </lrm_resources>
Nov 13 13:45:37 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+        </lrm>
Nov 13 13:45:37 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+      </node_state>
Nov 13 13:45:37 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+    </status>
Nov 13 13:45:37 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+  </cib>
Nov 13 13:45:37 [15879] vm1 stonith-ng: ( cib_utils.c:167   )   debug: Config update: 	Diff: --- 0.8.13
Nov 13 13:45:37 [15879] vm1 stonith-ng: ( cib_utils.c:169   )   debug: Config update: 	Diff: +++ 0.8.14 5fa7d2824d05d801e557350c4ebe869b
Nov 13 13:45:37 [15879] vm1 stonith-ng: (       xml.c:1496  )   debug: Config update: 	-  <cib num_updates="13">
Nov 13 13:45:37 [15879] vm1 stonith-ng: (       xml.c:1496  )   debug: Config update: 	-    <status>
Nov 13 13:45:37 [15879] vm1 stonith-ng: (       xml.c:1496  )   debug: Config update: 	-      <node_state id="3232261519">
Nov 13 13:45:37 [15879] vm1 stonith-ng: (       xml.c:1496  )   debug: Config update: 	-        <lrm id="3232261519">
Nov 13 13:45:37 [15879] vm1 stonith-ng: (       xml.c:1496  )   debug: Config update: 	-          <lrm_resources>
Nov 13 13:45:37 [15879] vm1 stonith-ng: (       xml.c:1496  )   debug: Config update: 	-            <lrm_resource id="pDummy">
Nov 13 13:45:37 [15879] vm1 stonith-ng: (       xml.c:1496  )   debug: Config update: 	--             <lrm_rsc_op operation_key="pDummy_monitor_0" operation="monitor" transition-key="11:1:7:154fb289-24e8-407e-9a03-69a510480b60" transition-magic="0:7;11:1:7:154fb289-24e8-407e-9a03-69a510480b60" call-id="9" rc-code="7" last-run="1384317935" last-rc-change="1384317935" exec-time="130" queue-time="2" id="pDummy_last_0"/>
Nov 13 13:45:37 [15879] vm1 stonith-ng: (       xml.c:1496  )   debug: Config update: 	-            </lrm_resource>
Nov 13 13:45:37 [15879] vm1 stonith-ng: (       xml.c:1496  )   debug: Config update: 	-          </lrm_resources>
Nov 13 13:45:37 [15879] vm1 stonith-ng: (       xml.c:1496  )   debug: Config update: 	-        </lrm>
Nov 13 13:45:37 [15879] vm1 stonith-ng: (       xml.c:1496  )   debug: Config update: 	-      </node_state>
Nov 13 13:45:37 [15879] vm1 stonith-ng: (       xml.c:1496  )   debug: Config update: 	-    </status>
Nov 13 13:45:37 [15879] vm1 stonith-ng: (       xml.c:1496  )   debug: Config update: 	-  </cib>
Nov 13 13:45:37 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+  <cib epoch="8" num_updates="14" admin_epoch="0" validate-with="pacemaker-1.2" crm_feature_set="3.0.8" cib-last-written="Wed Nov 13 13:45:33 2013" update-origin="vm1" update-client="crmd" have-quorum="1" dc-uuid="3232261517">
Nov 13 13:45:37 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+    <status>
Nov 13 13:45:37 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+      <node_state id="3232261519" uname="vm3" in_ccm="true" crmd="online" crm-debug-origin="do_update_resource" join="member" expected="member">
Nov 13 13:45:37 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+        <lrm id="3232261519">
Nov 13 13:45:37 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+          <lrm_resources>
Nov 13 13:45:37 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+            <lrm_resource id="pDummy" type="Dummy" class="ocf" provider="pacemaker">
Nov 13 13:45:37 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++             <lrm_rsc_op id="pDummy_last_0" operation_key="pDummy_start_0" operation="start" crm-debug-origin="do_update_resource" crm_feature_set="3.0.8" transition-key="14:1:0:154fb289-24e8-407e-9a03-69a510480b60" transition-magic="0:0;14:1:0:154fb289-24e8-407e-9a03-69a510480b60" call-id="10" rc-code="0" op-status="0" interval="0" last-run="1384317936" last-rc-change="1384317936" exec-time="51" queue-time="0" op-dige
Nov 13 13:45:37 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+            </lrm_resource>
Nov 13 13:45:37 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+          </lrm_resources>
Nov 13 13:45:37 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+        </lrm>
Nov 13 13:45:37 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+      </node_state>
Nov 13 13:45:37 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+    </status>
Nov 13 13:45:37 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+  </cib>
Nov 13 13:45:37 [15879] vm1 stonith-ng: ( cib_utils.c:167   )   debug: Config update: 	Diff: --- 0.8.14
Nov 13 13:45:37 [15879] vm1 stonith-ng: ( cib_utils.c:169   )   debug: Config update: 	Diff: +++ 0.8.15 c644ff6f35c372b7784ca430760c5a21
Nov 13 13:45:37 [15879] vm1 stonith-ng: (       xml.c:1496  )   debug: Config update: 	-- <cib num_updates="14"/>
Nov 13 13:45:37 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+  <cib epoch="8" num_updates="15" admin_epoch="0" validate-with="pacemaker-1.2" crm_feature_set="3.0.8" cib-last-written="Wed Nov 13 13:45:33 2013" update-origin="vm1" update-client="crmd" have-quorum="1" dc-uuid="3232261517">
Nov 13 13:45:37 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+    <status>
Nov 13 13:45:37 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+      <node_state id="3232261519" uname="vm3" in_ccm="true" crmd="online" crm-debug-origin="do_update_resource" join="member" expected="member">
Nov 13 13:45:37 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+        <lrm id="3232261519">
Nov 13 13:45:37 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+          <lrm_resources>
Nov 13 13:45:37 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+            <lrm_resource id="pDummy" type="Dummy" class="ocf" provider="pacemaker">
Nov 13 13:45:37 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++             <lrm_rsc_op id="pDummy_monitor_10000" operation_key="pDummy_monitor_10000" operation="monitor" crm-debug-origin="do_update_resource" crm_feature_set="3.0.8" transition-key="15:1:0:154fb289-24e8-407e-9a03-69a510480b60" transition-magic="0:0;15:1:0:154fb289-24e8-407e-9a03-69a510480b60" call-id="11" rc-code="0" op-status="0" interval="10000" last-rc-change="1384317936" exec-time="48" queue-time="1" op-digest=
Nov 13 13:45:37 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+            </lrm_resource>
Nov 13 13:45:37 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+          </lrm_resources>
Nov 13 13:45:37 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+        </lrm>
Nov 13 13:45:37 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+      </node_state>
Nov 13 13:45:37 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+    </status>
Nov 13 13:45:37 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+  </cib>
Nov 13 13:45:37 [15879] vm1 stonith-ng: (      main.c:121   )   trace: st_ipc_dispatch: 	Flags 0/0 for command 4 from lrmd.15880
Nov 13 13:45:37 [15879] vm1 stonith-ng: (      main.c:133   )   trace: st_ipc_dispatch: 	Client[inbound]   <stonith_command __name__="stonith_command" t="stonith-ng" st_async_id="e2d2e964-db5b-4c6f-925f-8d0c9a6e6299" st_op="st_execute" st_callid="3" st_callopt="0" st_timeout="60" st_clientid="e2d2e964-db5b-4c6f-925f-8d0c9a6e6299" st_clientname="lrmd.15880" st_clientnode="vm1">
Nov 13 13:45:37 [15879] vm1 stonith-ng: (      main.c:133   )   trace: st_ipc_dispatch: 	Client[inbound]     <st_calldata>
Nov 13 13:45:37 [15879] vm1 stonith-ng: (      main.c:133   )   trace: st_ipc_dispatch: 	Client[inbound]       <st_device_id origin="stonith_api_call" st_device_id="F1" st_device_action="monitor"/>
Nov 13 13:45:37 [15879] vm1 stonith-ng: (      main.c:133   )   trace: st_ipc_dispatch: 	Client[inbound]     </st_calldata>
Nov 13 13:45:37 [15879] vm1 stonith-ng: (      main.c:133   )   trace: st_ipc_dispatch: 	Client[inbound]   </stonith_command>
Nov 13 13:45:37 [15879] vm1 stonith-ng: (  commands.c:2049  )   debug: stonith_command: 	Processing st_execute 4 from lrmd.15880 (               0)
Nov 13 13:45:37 [15879] vm1 stonith-ng: (  commands.c:1002  )   trace: stonith_device_action: 	Looking for 'F1'
Nov 13 13:45:37 [15879] vm1 stonith-ng: (  commands.c:165   )   trace: create_async_command: 	Command   <stonith_command __name__="stonith_command" t="stonith-ng" st_async_id="e2d2e964-db5b-4c6f-925f-8d0c9a6e6299" st_op="st_execute" st_callid="3" st_callopt="0" st_timeout="60" st_clientid="e2d2e964-db5b-4c6f-925f-8d0c9a6e6299" st_clientname="lrmd.15880" st_clientnode="vm1">
Nov 13 13:45:37 [15879] vm1 stonith-ng: (  commands.c:165   )   trace: create_async_command: 	Command     <st_calldata>
Nov 13 13:45:37 [15879] vm1 stonith-ng: (  commands.c:165   )   trace: create_async_command: 	Command       <st_device_id origin="stonith_api_call" st_device_id="F1" st_device_action="monitor"/>
Nov 13 13:45:37 [15879] vm1 stonith-ng: (  commands.c:165   )   trace: create_async_command: 	Command     </st_calldata>
Nov 13 13:45:37 [15879] vm1 stonith-ng: (  commands.c:165   )   trace: create_async_command: 	Command   </stonith_command>
Nov 13 13:45:37 [15879] vm1 stonith-ng: (  commands.c:288   )   debug: schedule_stonith_command: 	Scheduling monitor on F1 for e2d2e964-db5b-4c6f-925f-8d0c9a6e6299 (timeout=60s)
Nov 13 13:45:37 [15879] vm1 stonith-ng: (  commands.c:2063  )    info: stonith_command: 	Processed st_execute from lrmd.15880: Operation now in progress (-115)
Nov 13 13:45:37 [15879] vm1 stonith-ng: ( st_client.c:576   )    info: stonith_action_create: 	Initiating action monitor for agent fence_legacy (target=(null))
Nov 13 13:45:37 [15879] vm1 stonith-ng: ( st_client.c:739   )   debug: internal_stonith_action_execute: 	forking
Nov 13 13:45:37 [15879] vm1 stonith-ng: ( st_client.c:785   )   debug: internal_stonith_action_execute: 	sending args
Nov 13 13:45:37 [15879] vm1 stonith-ng: (  commands.c:246   )   debug: stonith_device_execute: 	Operation monitor on F1 now running with pid=15985, timeout=60s
Nov 13 13:45:38 [15879] vm1 stonith-ng: ( st_client.c:678   )   debug: stonith_action_async_done: 	Child process 15985 performing action 'monitor' exited with rc 0
Nov 13 13:45:38 [15879] vm1 stonith-ng: (  commands.c:1463  )   trace: st_child_done: 	Operation monitor on F1 completed with rc=0 (0 remaining)
Nov 13 13:45:38 [15879] vm1 stonith-ng: (  commands.c:1677  )   trace: stonith_construct_async_reply: 	Creating a basic reply
Nov 13 13:45:38 [15879] vm1 stonith-ng: (  commands.c:1353  )   trace: stonith_send_async_reply: 	Never broadcast monitor replies
Nov 13 13:45:38 [15879] vm1 stonith-ng: (  commands.c:1326  )   debug: log_operation: 	Operation 'monitor' [15985] for device 'F1' returned: 0 (OK)
Nov 13 13:45:38 [15879] vm1 stonith-ng: (  commands.c:1333  )    info: log_operation: 	F1:15985 [ Performing: stonith -t external/libvirt -S ]
Nov 13 13:45:38 [15879] vm1 stonith-ng: (  commands.c:1333  )    info: log_operation: 	F1:15985 [ success:  0 ]
Nov 13 13:45:38 [15879] vm1 stonith-ng: (  commands.c:1362  )   trace: stonith_send_async_reply: 	Reply   <st-reply st_origin="stonith_construct_async_reply" t="stonith-ng" st_op="st_execute" st_device_id="F1" st_clientid="e2d2e964-db5b-4c6f-925f-8d0c9a6e6299" st_clientname="lrmd.15880" st_device_action="st_execute" st_callid="3" st_callopt="0" st_rc="0" st_output="Performing: stonith -t external/libvirt -S\nsuccess:  0\n"/>
Nov 13 13:45:38 [15879] vm1 stonith-ng: (  commands.c:1374  )   trace: stonith_send_async_reply: 	Directed local a-sync reply to lrmd.15880
Nov 13 13:45:38 [15879] vm1 stonith-ng: (      main.c:241   )   trace: do_local_reply: 	Sending response
Nov 13 13:45:38 [15879] vm1 stonith-ng: (      main.c:244   )   trace: do_local_reply: 	Sending callback to request originator
Nov 13 13:45:38 [15879] vm1 stonith-ng: (      main.c:263   )   trace: do_local_reply: 	Sending an event to lrmd.15880 
Nov 13 13:45:38 [15880] vm1       lrmd: (      lrmd.c:104   )    info: log_finished: 	finished - rsc:F1 action:start call_id:10  exit-code:0 exec-time:2305ms queue-time:0ms
Nov 13 13:45:38 [15880] vm1       lrmd: (      main.c:196   )   trace: lrmd_server_send_notify: 	sending notify to client (b88cb348-8886-42a5-bb3d-9ea70cadc946)
Nov 13 13:45:38 [15883] vm1       crmd: (     utils.c:2104  )   debug: create_operation_update: 	do_update_resource: Updating resource F1 after start op complete (interval=0)
Nov 13 13:45:38 [15883] vm1       crmd: (       lrm.c:2101  )  notice: process_lrm_event: 	LRM operation F1_start_0 (call=10, rc=0, cib-update=63, confirmed=true) ok
Nov 13 13:45:38 [15883] vm1       crmd: (       lrm.c:122   )   debug: update_history_cache: 	Updating history for 'F1' with start op
Nov 13 13:45:38 [15883] vm1       crmd: (te_callbacks:122   )   debug: te_update_diff: 	Processing diff (cib_modify): 0.8.15 -> 0.8.16 (S_TRANSITION_ENGINE)
Nov 13 13:45:38 [15883] vm1       crmd: ( te_events.c:375   )    info: match_graph_event: 	Action F1_start_0 (12) confirmed on vm1 (rc=0)
Nov 13 13:45:38 [15883] vm1       crmd: (te_actions.c:416   )  notice: te_rsc_command: 	Initiating action 13: monitor F1_monitor_3600000 on vm1 (local)
Nov 13 13:45:38 [15883] vm1       crmd: (       lrm.c:1784  )    info: do_lrm_rsc_op: 	Performing key=13:1:0:154fb289-24e8-407e-9a03-69a510480b60 op=F1_monitor_3600000
Nov 13 13:45:38 [15880] vm1       lrmd: (      lrmd.c:1313  )   debug: process_lrmd_message: 	Processed lrmd_rsc_exec operation from b88cb348-8886-42a5-bb3d-9ea70cadc946: rc=11, reply=1, notify=0, exit=4201920
Nov 13 13:45:38 [15880] vm1       lrmd: (      main.c:179   )   trace: lrmd_server_send_reply: 	sending reply to client (b88cb348-8886-42a5-bb3d-9ea70cadc946) with msg id 88
Nov 13 13:45:38 [15880] vm1       lrmd: (      lrmd.c:122   )   debug: log_execute: 	executing - rsc:F1 action:monitor call_id:11
Nov 13 13:45:38 [15878] vm1        cib: ( callbacks.c:761   )    info: cib_process_request: 	Completed cib_modify operation for section status: OK (rc=0, origin=local/crmd/63, version=0.8.16)
Nov 13 13:45:38 [15879] vm1 stonith-ng: ( cib_utils.c:167   )   debug: Config update: 	Diff: --- 0.8.15
Nov 13 13:45:38 [15879] vm1 stonith-ng: ( cib_utils.c:169   )   debug: Config update: 	Diff: +++ 0.8.16 d2057dbbd6d1a45d7f5bc3432ef649f3
Nov 13 13:45:38 [15879] vm1 stonith-ng: (       xml.c:1496  )   debug: Config update: 	-  <cib num_updates="15">
Nov 13 13:45:38 [15879] vm1 stonith-ng: (       xml.c:1496  )   debug: Config update: 	-    <status>
Nov 13 13:45:38 [15879] vm1 stonith-ng: (       xml.c:1496  )   debug: Config update: 	-      <node_state id="3232261517">
Nov 13 13:45:38 [15879] vm1 stonith-ng: (       xml.c:1496  )   debug: Config update: 	-        <lrm id="3232261517">
Nov 13 13:45:38 [15879] vm1 stonith-ng: (       xml.c:1496  )   debug: Config update: 	-          <lrm_resources>
Nov 13 13:45:38 [15879] vm1 stonith-ng: (       xml.c:1496  )   debug: Config update: 	-            <lrm_resource id="F1">
Nov 13 13:45:38 [15879] vm1 stonith-ng: (       xml.c:1496  )   debug: Config update: 	--             <lrm_rsc_op operation_key="F1_monitor_0" operation="monitor" transition-key="4:1:7:154fb289-24e8-407e-9a03-69a510480b60" transition-magic="0:7;4:1:7:154fb289-24e8-407e-9a03-69a510480b60" call-id="5" rc-code="7" last-run="1384317935" last-rc-change="1384317935" exec-time="23" id="F1_last_0"/>
Nov 13 13:45:38 [15879] vm1 stonith-ng: (       xml.c:1496  )   debug: Config update: 	-            </lrm_resource>
Nov 13 13:45:38 [15879] vm1 stonith-ng: (       xml.c:1496  )   debug: Config update: 	-          </lrm_resources>
Nov 13 13:45:38 [15879] vm1 stonith-ng: (       xml.c:1496  )   debug: Config update: 	-        </lrm>
Nov 13 13:45:38 [15879] vm1 stonith-ng: (       xml.c:1496  )   debug: Config update: 	-      </node_state>
Nov 13 13:45:38 [15879] vm1 stonith-ng: (       xml.c:1496  )   debug: Config update: 	-    </status>
Nov 13 13:45:38 [15879] vm1 stonith-ng: (       xml.c:1496  )   debug: Config update: 	-  </cib>
Nov 13 13:45:38 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+  <cib epoch="8" num_updates="16" admin_epoch="0" validate-with="pacemaker-1.2" crm_feature_set="3.0.8" cib-last-written="Wed Nov 13 13:45:33 2013" update-origin="vm1" update-client="crmd" have-quorum="1" dc-uuid="3232261517">
Nov 13 13:45:38 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+    <status>
Nov 13 13:45:38 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+      <node_state id="3232261517" uname="vm1" in_ccm="true" crmd="online" crm-debug-origin="do_update_resource" join="member" expected="member">
Nov 13 13:45:38 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+        <lrm id="3232261517">
Nov 13 13:45:38 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+          <lrm_resources>
Nov 13 13:45:38 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+            <lrm_resource id="F1" type="external/libvirt" class="stonith">
Nov 13 13:45:38 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++             <lrm_rsc_op id="F1_last_0" operation_key="F1_start_0" operation="start" crm-debug-origin="do_update_resource" crm_feature_set="3.0.8" transition-key="12:1:0:154fb289-24e8-407e-9a03-69a510480b60" transition-magic="0:0;12:1:0:154fb289-24e8-407e-9a03-69a510480b60" call-id="10" rc-code="0" op-status="0" interval="0" last-run="1384317936" last-rc-change="1384317936" exec-time="2305" queue-time="0" op-digest="28
Nov 13 13:45:38 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+            </lrm_resource>
Nov 13 13:45:38 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+          </lrm_resources>
Nov 13 13:45:38 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+        </lrm>
Nov 13 13:45:38 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+      </node_state>
Nov 13 13:45:38 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+    </status>
Nov 13 13:45:38 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+  </cib>
Nov 13 13:45:38 [15883] vm1       crmd: (     graph.c:336   )   debug: run_graph: 	Transition 1 (Complete=13, Pending=1, Fired=1, Skipped=0, Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-input-1.bz2): In-progress
Nov 13 13:45:38 [15879] vm1 stonith-ng: (  commands.c:213   )   trace: stonith_device_execute: 	Nothing further to do for F1
Nov 13 13:45:38 [15879] vm1 stonith-ng: (      main.c:121   )   trace: st_ipc_dispatch: 	Flags 0/0 for command 5 from lrmd.15880
Nov 13 13:45:38 [15879] vm1 stonith-ng: (      main.c:133   )   trace: st_ipc_dispatch: 	Client[inbound]   <stonith_command __name__="stonith_command" t="stonith-ng" st_async_id="e2d2e964-db5b-4c6f-925f-8d0c9a6e6299" st_op="st_execute" st_callid="4" st_callopt="0" st_timeout="60" st_clientid="e2d2e964-db5b-4c6f-925f-8d0c9a6e6299" st_clientname="lrmd.15880" st_clientnode="vm1">
Nov 13 13:45:38 [15879] vm1 stonith-ng: (      main.c:133   )   trace: st_ipc_dispatch: 	Client[inbound]     <st_calldata>
Nov 13 13:45:38 [15879] vm1 stonith-ng: (      main.c:133   )   trace: st_ipc_dispatch: 	Client[inbound]       <st_device_id origin="stonith_api_call" st_device_id="F1" st_device_action="monitor"/>
Nov 13 13:45:38 [15879] vm1 stonith-ng: (      main.c:133   )   trace: st_ipc_dispatch: 	Client[inbound]     </st_calldata>
Nov 13 13:45:38 [15879] vm1 stonith-ng: (      main.c:133   )   trace: st_ipc_dispatch: 	Client[inbound]   </stonith_command>
Nov 13 13:45:38 [15879] vm1 stonith-ng: (  commands.c:2049  )   debug: stonith_command: 	Processing st_execute 5 from lrmd.15880 (               0)
Nov 13 13:45:38 [15879] vm1 stonith-ng: (  commands.c:1002  )   trace: stonith_device_action: 	Looking for 'F1'
Nov 13 13:45:38 [15879] vm1 stonith-ng: (  commands.c:165   )   trace: create_async_command: 	Command   <stonith_command __name__="stonith_command" t="stonith-ng" st_async_id="e2d2e964-db5b-4c6f-925f-8d0c9a6e6299" st_op="st_execute" st_callid="4" st_callopt="0" st_timeout="60" st_clientid="e2d2e964-db5b-4c6f-925f-8d0c9a6e6299" st_clientname="lrmd.15880" st_clientnode="vm1">
Nov 13 13:45:38 [15879] vm1 stonith-ng: (  commands.c:165   )   trace: create_async_command: 	Command     <st_calldata>
Nov 13 13:45:38 [15879] vm1 stonith-ng: (  commands.c:165   )   trace: create_async_command: 	Command       <st_device_id origin="stonith_api_call" st_device_id="F1" st_device_action="monitor"/>
Nov 13 13:45:38 [15879] vm1 stonith-ng: (  commands.c:165   )   trace: create_async_command: 	Command     </st_calldata>
Nov 13 13:45:38 [15879] vm1 stonith-ng: (  commands.c:165   )   trace: create_async_command: 	Command   </stonith_command>
Nov 13 13:45:38 [15879] vm1 stonith-ng: (  commands.c:288   )   debug: schedule_stonith_command: 	Scheduling monitor on F1 for e2d2e964-db5b-4c6f-925f-8d0c9a6e6299 (timeout=60s)
Nov 13 13:45:38 [15879] vm1 stonith-ng: (  commands.c:2063  )    info: stonith_command: 	Processed st_execute from lrmd.15880: Operation now in progress (-115)
Nov 13 13:45:38 [15879] vm1 stonith-ng: ( st_client.c:576   )    info: stonith_action_create: 	Initiating action monitor for agent fence_legacy (target=(null))
Nov 13 13:45:38 [15879] vm1 stonith-ng: ( st_client.c:739   )   debug: internal_stonith_action_execute: 	forking
Nov 13 13:45:38 [15879] vm1 stonith-ng: ( st_client.c:785   )   debug: internal_stonith_action_execute: 	sending args
Nov 13 13:45:38 [15879] vm1 stonith-ng: (  commands.c:246   )   debug: stonith_device_execute: 	Operation monitor on F1 now running with pid=16001, timeout=60s
Nov 13 13:45:39 [15879] vm1 stonith-ng: ( st_client.c:678   )   debug: stonith_action_async_done: 	Child process 16001 performing action 'monitor' exited with rc 0
Nov 13 13:45:39 [15879] vm1 stonith-ng: (  commands.c:1463  )   trace: st_child_done: 	Operation monitor on F1 completed with rc=0 (0 remaining)
Nov 13 13:45:39 [15879] vm1 stonith-ng: (  commands.c:1677  )   trace: stonith_construct_async_reply: 	Creating a basic reply
Nov 13 13:45:39 [15879] vm1 stonith-ng: (  commands.c:1353  )   trace: stonith_send_async_reply: 	Never broadcast monitor replies
Nov 13 13:45:39 [15879] vm1 stonith-ng: (  commands.c:1326  )   debug: log_operation: 	Operation 'monitor' [16001] for device 'F1' returned: 0 (OK)
Nov 13 13:45:39 [15879] vm1 stonith-ng: (  commands.c:1333  )    info: log_operation: 	F1:16001 [ Performing: stonith -t external/libvirt -S ]
Nov 13 13:45:39 [15879] vm1 stonith-ng: (  commands.c:1333  )    info: log_operation: 	F1:16001 [ success:  0 ]
Nov 13 13:45:39 [15879] vm1 stonith-ng: (  commands.c:1362  )   trace: stonith_send_async_reply: 	Reply   <st-reply st_origin="stonith_construct_async_reply" t="stonith-ng" st_op="st_execute" st_device_id="F1" st_clientid="e2d2e964-db5b-4c6f-925f-8d0c9a6e6299" st_clientname="lrmd.15880" st_device_action="st_execute" st_callid="4" st_callopt="0" st_rc="0" st_output="Performing: stonith -t external/libvirt -S\nsuccess:  0\n"/>
Nov 13 13:45:39 [15879] vm1 stonith-ng: (  commands.c:1374  )   trace: stonith_send_async_reply: 	Directed local a-sync reply to lrmd.15880
Nov 13 13:45:39 [15879] vm1 stonith-ng: (      main.c:241   )   trace: do_local_reply: 	Sending response
Nov 13 13:45:39 [15879] vm1 stonith-ng: (      main.c:244   )   trace: do_local_reply: 	Sending callback to request originator
Nov 13 13:45:39 [15879] vm1 stonith-ng: (      main.c:263   )   trace: do_local_reply: 	Sending an event to lrmd.15880 
Nov 13 13:45:39 [15879] vm1 stonith-ng: (  commands.c:213   )   trace: stonith_device_execute: 	Nothing further to do for F1
Nov 13 13:45:39 [15880] vm1       lrmd: (      lrmd.c:104   )   debug: log_finished: 	finished - rsc:F1 action:monitor call_id:11  exit-code:0 exec-time:1312ms queue-time:0ms
Nov 13 13:45:39 [15880] vm1       lrmd: (      main.c:196   )   trace: lrmd_server_send_notify: 	sending notify to client (b88cb348-8886-42a5-bb3d-9ea70cadc946)
Nov 13 13:45:39 [15883] vm1       crmd: (     utils.c:2104  )   debug: create_operation_update: 	do_update_resource: Updating resource F1 after monitor op complete (interval=3600000)
Nov 13 13:45:39 [15883] vm1       crmd: (       lrm.c:2101  )  notice: process_lrm_event: 	LRM operation F1_monitor_3600000 (call=11, rc=0, cib-update=64, confirmed=false) ok
Nov 13 13:45:39 [15883] vm1       crmd: (       lrm.c:122   )   debug: update_history_cache: 	Updating history for 'F1' with monitor op
Nov 13 13:45:39 [15883] vm1       crmd: (te_callbacks:122   )   debug: te_update_diff: 	Processing diff (cib_modify): 0.8.16 -> 0.8.17 (S_TRANSITION_ENGINE)
Nov 13 13:45:39 [15883] vm1       crmd: ( te_events.c:375   )    info: match_graph_event: 	Action F1_monitor_3600000 (13) confirmed on vm1 (rc=0)
Nov 13 13:45:39 [15883] vm1       crmd: (     graph.c:336   )  notice: run_graph: 	Transition 1 (Complete=14, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-input-1.bz2): Complete
Nov 13 13:45:39 [15883] vm1       crmd: (  te_utils.c:349   )   debug: te_graph_trigger: 	Transition 1 is now complete
Nov 13 13:45:39 [15883] vm1       crmd: (te_actions.c:654   )   debug: notify_crmd: 	Processing transition completion in state S_TRANSITION_ENGINE
Nov 13 13:45:39 [15883] vm1       crmd: (te_actions.c:699   )   debug: notify_crmd: 	Transition 1 status: done - <null>
Nov 13 13:45:39 [15883] vm1       crmd: (       fsa.c:193   )   debug: s_crmd_fsa: 	Processing I_TE_SUCCESS: [ state=S_TRANSITION_ENGINE cause=C_FSA_INTERNAL origin=notify_crmd ]
Nov 13 13:45:39 [15883] vm1       crmd: (      misc.c:47    )    info: do_log: 	FSA: Input I_TE_SUCCESS from notify_crmd() received in state S_TRANSITION_ENGINE
Nov 13 13:45:39 [15883] vm1       crmd: (       fsa.c:502   )  notice: do_state_transition: 	State transition S_TRANSITION_ENGINE -> S_IDLE [ input=I_TE_SUCCESS cause=C_FSA_INTERNAL origin=notify_crmd ]
Nov 13 13:45:39 [15883] vm1       crmd: (       fsa.c:645   )   debug: do_state_transition: 	Starting PEngine Recheck Timer
Nov 13 13:45:39 [15883] vm1       crmd: (     utils.c:192   )   debug: crm_timer_start: 	Started PEngine Recheck Timer (I_PE_CALC:900000ms), src=83
Nov 13 13:45:39 [15879] vm1 stonith-ng: ( cib_utils.c:167   )   debug: Config update: 	Diff: --- 0.8.16
Nov 13 13:45:39 [15879] vm1 stonith-ng: ( cib_utils.c:169   )   debug: Config update: 	Diff: +++ 0.8.17 a97e80b9595cae69da19fce0899b09d9
Nov 13 13:45:39 [15879] vm1 stonith-ng: (       xml.c:1496  )   debug: Config update: 	-- <cib num_updates="16"/>
Nov 13 13:45:39 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+  <cib epoch="8" num_updates="17" admin_epoch="0" validate-with="pacemaker-1.2" crm_feature_set="3.0.8" cib-last-written="Wed Nov 13 13:45:33 2013" update-origin="vm1" update-client="crmd" have-quorum="1" dc-uuid="3232261517">
Nov 13 13:45:39 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+    <status>
Nov 13 13:45:39 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+      <node_state id="3232261517" uname="vm1" in_ccm="true" crmd="online" crm-debug-origin="do_update_resource" join="member" expected="member">
Nov 13 13:45:39 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+        <lrm id="3232261517">
Nov 13 13:45:39 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+          <lrm_resources>
Nov 13 13:45:39 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+            <lrm_resource id="F1" type="external/libvirt" class="stonith">
Nov 13 13:45:39 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++             <lrm_rsc_op id="F1_monitor_3600000" operation_key="F1_monitor_3600000" operation="monitor" crm-debug-origin="do_update_resource" crm_feature_set="3.0.8" transition-key="13:1:0:154fb289-24e8-407e-9a03-69a510480b60" transition-magic="0:0;13:1:0:154fb289-24e8-407e-9a03-69a510480b60" call-id="11" rc-code="0" op-status="0" interval="3600000" last-rc-change="1384317938" exec-time="1312" queue-time="0" op-digest=
Nov 13 13:45:39 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+            </lrm_resource>
Nov 13 13:45:39 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+          </lrm_resources>
Nov 13 13:45:39 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+        </lrm>
Nov 13 13:45:39 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+      </node_state>
Nov 13 13:45:39 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+    </status>
Nov 13 13:45:39 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+  </cib>
Nov 13 13:45:39 [15878] vm1        cib: ( callbacks.c:761   )    info: cib_process_request: 	Completed cib_modify operation for section status: OK (rc=0, origin=local/crmd/64, version=0.8.17)
Nov 13 13:45:51 [15883] vm1       crmd: (  throttle.c:256   )   debug: throttle_cib_load: 	Current CIB load from 9 ticks in 30s is 0.003000 (@100 tps)
Nov 13 13:45:51 [15883] vm1       crmd: (  throttle.c:298   )   debug: throttle_load_avg: 	Current load is 0.130000 (full: 0.13 0.05 0.01 1/115 16016)
Nov 13 13:45:51 [15883] vm1       crmd: (  throttle.c:378   )   debug: throttle_io_load: 	Current IO load is 0.000000
Nov 13 13:46:21 [15883] vm1       crmd: (  throttle.c:256   )   debug: throttle_cib_load: 	Current CIB load from 1 ticks in 30s is 0.000333 (@100 tps)
Nov 13 13:46:21 [15883] vm1       crmd: (  throttle.c:298   )   debug: throttle_load_avg: 	Current load is 0.080000 (full: 0.08 0.04 0.01 1/115 16017)
Nov 13 13:46:21 [15883] vm1       crmd: (  throttle.c:378   )   debug: throttle_io_load: 	Current IO load is 0.000000
Nov 13 13:46:51 [15883] vm1       crmd: (  throttle.c:256   )   debug: throttle_cib_load: 	Current CIB load from 0 ticks in 30s is 0.000000 (@100 tps)
Nov 13 13:46:51 [15883] vm1       crmd: (  throttle.c:298   )   debug: throttle_load_avg: 	Current load is 0.050000 (full: 0.05 0.04 0.00 1/115 16024)
Nov 13 13:46:51 [15883] vm1       crmd: (  throttle.c:378   )   debug: throttle_io_load: 	Current IO load is 0.000000
Nov 13 13:47:06 [15883] vm1       crmd: (te_callbacks:122   )   debug: te_update_diff: 	Processing diff (cib_modify): 0.8.17 -> 0.8.18 (S_IDLE)
Nov 13 13:47:06 [15883] vm1       crmd: (  te_utils.c:413   )    info: abort_transition_graph: 	process_graph_event:583 - Triggered transition abort (complete=1, node=vm3, tag=lrm_rsc_op, id=pDummy_last_failure_0, magic=0:7;15:1:0:154fb289-24e8-407e-9a03-69a510480b60, cib=0.8.18) : Inactive graph
Nov 13 13:47:06 [15883] vm1       crmd: (     utils.c:192   )   debug: crm_timer_start: 	Started New Transition Timer (I_PE_CALC:2000ms), src=84
Nov 13 13:47:06 [15883] vm1       crmd: ( te_events.c:203   ) warning: update_failcount: 	Updating failcount for pDummy on vm3 after failed monitor: rc=7 (update=value++, time=1384318026)
Nov 13 13:47:06 [15881] vm1      attrd: (      main.c:221   )   trace: attrd_ipc_dispatch: 	Processing msg from 15883 (0x1ff0000)
Nov 13 13:47:06 [15881] vm1      attrd: (      main.c:222   )   trace: attrd_ipc_dispatch: 	attrd_ipc_dispatch   <attrd_update_delegate t="attrd" src="crmd" task="update" attr_name="fail-count-pDummy" attr_value="value++" attr_section="status" attr_host="vm3" attr_is_remote="0"/>
Nov 13 13:47:06 [15881] vm1      attrd: (  commands.c:220   )    info: attrd_client_message: 	Expanded fail-count-pDummy=value++ to 1
Nov 13 13:47:06 [15881] vm1      attrd: (  commands.c:231   )    info: attrd_client_message: 	Broadcasting fail-count-pDummy[vm3] = 1
Nov 13 13:47:06 [15883] vm1       crmd: (     utils.c:2001  )   debug: attrd_update_delegate: 	Sent update: fail-count-pDummy=value++ for vm3
Nov 13 13:47:06 [15883] vm1       crmd: (     utils.c:2001  )   debug: attrd_update_delegate: 	Sent update: last-failure-pDummy=1384318026 for vm3
Nov 13 13:47:06 [15883] vm1       crmd: ( te_events.c:601   )    info: process_graph_event: 	Detected action (1.15) pDummy_monitor_10000.11=not running: failed
Nov 13 13:47:06 [15881] vm1      attrd: (      main.c:221   )   trace: attrd_ipc_dispatch: 	Processing msg from 15883 (0x1ff0000)
Nov 13 13:47:06 [15881] vm1      attrd: (      main.c:222   )   trace: attrd_ipc_dispatch: 	attrd_ipc_dispatch   <attrd_update_delegate t="attrd" src="crmd" task="update" attr_name="last-failure-pDummy" attr_value="1384318026" attr_section="status" attr_host="vm3" attr_is_remote="0"/>
Nov 13 13:47:06 [15881] vm1      attrd: (  commands.c:231   )    info: attrd_client_message: 	Broadcasting last-failure-pDummy[vm3] = 1384318026
Nov 13 13:47:06 [15879] vm1 stonith-ng: ( cib_utils.c:167   )   debug: Config update: 	Diff: --- 0.8.17
Nov 13 13:47:06 [15879] vm1 stonith-ng: ( cib_utils.c:169   )   debug: Config update: 	Diff: +++ 0.8.18 a51e1a3b91717c93641fe986a68f690b
Nov 13 13:47:06 [15879] vm1 stonith-ng: (       xml.c:1496  )   debug: Config update: 	-- <cib num_updates="17"/>
Nov 13 13:47:06 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+  <cib epoch="8" num_updates="18" admin_epoch="0" validate-with="pacemaker-1.2" crm_feature_set="3.0.8" cib-last-written="Wed Nov 13 13:45:33 2013" update-origin="vm1" update-client="crmd" have-quorum="1" dc-uuid="3232261517">
Nov 13 13:47:06 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+    <status>
Nov 13 13:47:06 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+      <node_state id="3232261519" uname="vm3" in_ccm="true" crmd="online" crm-debug-origin="do_update_resource" join="member" expected="member">
Nov 13 13:47:06 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+        <lrm id="3232261519">
Nov 13 13:47:06 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+          <lrm_resources>
Nov 13 13:47:06 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+            <lrm_resource id="pDummy" type="Dummy" class="ocf" provider="pacemaker">
Nov 13 13:47:06 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++             <lrm_rsc_op id="pDummy_last_failure_0" operation_key="pDummy_monitor_10000" operation="monitor" crm-debug-origin="do_update_resource" crm_feature_set="3.0.8" transition-key="15:1:0:154fb289-24e8-407e-9a03-69a510480b60" transition-magic="0:7;15:1:0:154fb289-24e8-407e-9a03-69a510480b60" call-id="11" rc-code="7" op-status="0" interval="10000" last-rc-change="1384318026" exec-time="0" queue-time="0" op-digest=
Nov 13 13:47:06 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+            </lrm_resource>
Nov 13 13:47:06 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+          </lrm_resources>
Nov 13 13:47:06 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+        </lrm>
Nov 13 13:47:06 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+      </node_state>
Nov 13 13:47:06 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+    </status>
Nov 13 13:47:06 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+  </cib>
Nov 13 13:47:06 [15878] vm1        cib: ( callbacks.c:761   )    info: cib_process_request: 	Completed cib_modify operation for section status: OK (rc=0, origin=vm3/crmd/21, version=0.8.18)
Nov 13 13:47:06 [15881] vm1      attrd: (  commands.c:146   )   trace: create_attribute: 	Created attribute fail-count-pDummy with no delay
Nov 13 13:47:06 [15881] vm1      attrd: (  commands.c:382   )   trace: attrd_peer_update: 	Setting fail-count-pDummy[vm3] to 1 from vm1
Nov 13 13:47:06 [15881] vm1      attrd: (  commands.c:466   )   trace: write_or_elect_attribute: 	vm2 will write out fail-count-pDummy, we are in state 2
Nov 13 13:47:06 [15881] vm1      attrd: (  commands.c:146   )   trace: create_attribute: 	Created attribute last-failure-pDummy with no delay
Nov 13 13:47:06 [15881] vm1      attrd: (  commands.c:382   )   trace: attrd_peer_update: 	Setting last-failure-pDummy[vm3] to 1384318026 from vm1
Nov 13 13:47:06 [15881] vm1      attrd: (  commands.c:466   )   trace: write_or_elect_attribute: 	vm2 will write out last-failure-pDummy, we are in state 2
Nov 13 13:47:06 [15883] vm1       crmd: (te_callbacks:122   )   debug: te_update_diff: 	Processing diff (cib_modify): 0.8.18 -> 0.8.19 (S_IDLE)
Nov 13 13:47:06 [15883] vm1       crmd: (  te_utils.c:407   )    info: abort_transition_graph: 	te_update_diff:172 - Triggered transition abort (complete=1, node=vm3, tag=nvpair, id=status-3232261519-fail-count-pDummy, name=fail-count-pDummy, value=1, magic=NA, cib=0.8.19) : Transient attribute: update
Nov 13 13:47:06 [15883] vm1       crmd: (  te_utils.c:444   )   debug: abort_transition_graph: 	Cause   <nvpair id="status-3232261519-fail-count-pDummy" name="fail-count-pDummy" value="1" __crm_diff_marker__="added:top"/>
Nov 13 13:47:06 [15883] vm1       crmd: (     utils.c:192   )   debug: crm_timer_start: 	Started New Transition Timer (I_PE_CALC:2000ms), src=85
Nov 13 13:47:06 [15879] vm1 stonith-ng: ( cib_utils.c:167   )   debug: Config update: 	Diff: --- 0.8.18
Nov 13 13:47:06 [15879] vm1 stonith-ng: ( cib_utils.c:169   )   debug: Config update: 	Diff: +++ 0.8.19 5197717035f45f7bbfddb1efd89c2360
Nov 13 13:47:06 [15879] vm1 stonith-ng: (       xml.c:1496  )   debug: Config update: 	-- <cib num_updates="18"/>
Nov 13 13:47:06 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+  <cib epoch="8" num_updates="19" admin_epoch="0" validate-with="pacemaker-1.2" crm_feature_set="3.0.8" cib-last-written="Wed Nov 13 13:45:33 2013" update-origin="vm1" update-client="crmd" have-quorum="1" dc-uuid="3232261517">
Nov 13 13:47:06 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+    <status>
Nov 13 13:47:06 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+      <node_state id="3232261519" uname="vm3" in_ccm="true" crmd="online" crm-debug-origin="do_update_resource" join="member" expected="member">
Nov 13 13:47:06 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+        <transient_attributes id="3232261519">
Nov 13 13:47:06 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+          <instance_attributes id="status-3232261519">
Nov 13 13:47:06 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++           <nvpair id="status-3232261519-fail-count-pDummy" name="fail-count-pDummy" value="1"/>
Nov 13 13:47:06 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+          </instance_attributes>
Nov 13 13:47:06 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+        </transient_attributes>
Nov 13 13:47:06 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+      </node_state>
Nov 13 13:47:06 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+    </status>
Nov 13 13:47:06 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+  </cib>
Nov 13 13:47:06 [15878] vm1        cib: ( callbacks.c:761   )    info: cib_process_request: 	Completed cib_modify operation for section status: OK (rc=0, origin=vm2/attrd/6, version=0.8.19)
Nov 13 13:47:06 [15883] vm1       crmd: (te_callbacks:122   )   debug: te_update_diff: 	Processing diff (cib_modify): 0.8.19 -> 0.8.20 (S_IDLE)
Nov 13 13:47:06 [15883] vm1       crmd: (  te_utils.c:407   )    info: abort_transition_graph: 	te_update_diff:172 - Triggered transition abort (complete=1, node=vm3, tag=nvpair, id=status-3232261519-last-failure-pDummy, name=last-failure-pDummy, value=1384318026, magic=NA, cib=0.8.20) : Transient attribute: update
Nov 13 13:47:06 [15883] vm1       crmd: (  te_utils.c:444   )   debug: abort_transition_graph: 	Cause   <nvpair id="status-3232261519-last-failure-pDummy" name="last-failure-pDummy" value="1384318026" __crm_diff_marker__="added:top"/>
Nov 13 13:47:06 [15883] vm1       crmd: (     utils.c:192   )   debug: crm_timer_start: 	Started New Transition Timer (I_PE_CALC:2000ms), src=86
Nov 13 13:47:06 [15879] vm1 stonith-ng: ( cib_utils.c:167   )   debug: Config update: 	Diff: --- 0.8.19
Nov 13 13:47:06 [15879] vm1 stonith-ng: ( cib_utils.c:169   )   debug: Config update: 	Diff: +++ 0.8.20 20d147ac83adce3d53784ce1a7e6304d
Nov 13 13:47:06 [15879] vm1 stonith-ng: (       xml.c:1496  )   debug: Config update: 	-- <cib num_updates="19"/>
Nov 13 13:47:06 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+  <cib epoch="8" num_updates="20" admin_epoch="0" validate-with="pacemaker-1.2" crm_feature_set="3.0.8" cib-last-written="Wed Nov 13 13:45:33 2013" update-origin="vm1" update-client="crmd" have-quorum="1" dc-uuid="3232261517">
Nov 13 13:47:06 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+    <status>
Nov 13 13:47:06 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+      <node_state id="3232261519" uname="vm3" in_ccm="true" crmd="online" crm-debug-origin="do_update_resource" join="member" expected="member">
Nov 13 13:47:06 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+        <transient_attributes id="3232261519">
Nov 13 13:47:06 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+          <instance_attributes id="status-3232261519">
Nov 13 13:47:06 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++           <nvpair id="status-3232261519-last-failure-pDummy" name="last-failure-pDummy" value="1384318026"/>
Nov 13 13:47:06 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+          </instance_attributes>
Nov 13 13:47:06 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+        </transient_attributes>
Nov 13 13:47:06 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+      </node_state>
Nov 13 13:47:06 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+    </status>
Nov 13 13:47:06 [15879] vm1 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+  </cib>
Nov 13 13:47:06 [15878] vm1        cib: ( callbacks.c:761   )    info: cib_process_request: 	Completed cib_modify operation for section status: OK (rc=0, origin=vm2/attrd/7, version=0.8.20)
Nov 13 13:47:08 [15883] vm1       crmd: (     utils.c:120   )    info: crm_timer_popped: 	New Transition Timer (I_PE_CALC) just popped (2000ms)
Nov 13 13:47:08 [15883] vm1       crmd: (       fsa.c:193   )   debug: s_crmd_fsa: 	Processing I_PE_CALC: [ state=S_IDLE cause=C_TIMER_POPPED origin=crm_timer_popped ]
Nov 13 13:47:08 [15883] vm1       crmd: (       fsa.c:502   )  notice: do_state_transition: 	State transition S_IDLE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_TIMER_POPPED origin=crm_timer_popped ]
Nov 13 13:47:08 [15883] vm1       crmd: (       fsa.c:599   )    info: do_state_transition: 	Progressed to state S_POLICY_ENGINE after C_TIMER_POPPED
Nov 13 13:47:08 [15883] vm1       crmd: (       fsa.c:610   )   debug: do_state_transition: 	All 3 cluster nodes are eligible to run resources.
Nov 13 13:47:08 [15883] vm1       crmd: (   pengine.c:231   )   debug: do_pe_invoke: 	Query 65: Requesting the current CIB: S_POLICY_ENGINE
Nov 13 13:47:08 [15878] vm1        cib: ( callbacks.c:761   )    info: cib_process_request: 	Completed cib_query operation for section 'all': OK (rc=0, origin=local/crmd/65, version=0.8.20)
Nov 13 13:47:08 [15882] vm1    pengine: (    unpack.c:98    )   debug: unpack_config: 	STONITH timeout: 60000
Nov 13 13:47:08 [15882] vm1    pengine: (    unpack.c:102   )   debug: unpack_config: 	STONITH of failed nodes is enabled
Nov 13 13:47:08 [15882] vm1    pengine: (    unpack.c:109   )   debug: unpack_config: 	Stop all active resources: false
Nov 13 13:47:08 [15882] vm1    pengine: (    unpack.c:113   )   debug: unpack_config: 	Cluster is symmetric - resources can run anywhere by default
Nov 13 13:47:08 [15882] vm1    pengine: (    unpack.c:118   )   debug: unpack_config: 	Default stickiness: 0
Nov 13 13:47:08 [15882] vm1    pengine: (    unpack.c:152   )   debug: unpack_config: 	On loss of CCM Quorum: Freeze resources
Nov 13 13:47:08 [15882] vm1    pengine: (    unpack.c:201   )   debug: unpack_config: 	Node scores: 'red' = -INFINITY, 'yellow' = 0, 'green' = 0
Nov 13 13:47:08 [15882] vm1    pengine: (    unpack.c:509   )   debug: unpack_domains: 	Unpacking domains
Nov 13 13:47:08 [15882] vm1    pengine: (    unpack.c:1220  )    info: determine_online_status_fencing: 	Node vm3 is active
Nov 13 13:47:08 [15882] vm1    pengine: (    unpack.c:1336  )    info: determine_online_status: 	Node vm3 is online
Nov 13 13:47:08 [15882] vm1    pengine: (    unpack.c:1220  )    info: determine_online_status_fencing: 	Node vm1 is active
Nov 13 13:47:08 [15882] vm1    pengine: (    unpack.c:1336  )    info: determine_online_status: 	Node vm1 is online
Nov 13 13:47:08 [15882] vm1    pengine: (    unpack.c:1220  )    info: determine_online_status_fencing: 	Node vm2 is active
Nov 13 13:47:08 [15882] vm1    pengine: (    unpack.c:1336  )    info: determine_online_status: 	Node vm2 is online
Nov 13 13:47:08 [15883] vm1       crmd: (   pengine.c:299   )   debug: do_pe_invoke_callback: 	Invoking the PE: query=65, ref=pe_calc-dc-1384318028-47, seq=12, quorate=1
Nov 13 13:47:08 [15882] vm1    pengine: (    unpack.c:2456  )   debug: determine_op_status: 	pDummy_monitor_10000 on vm3 returned 'not running' (7) instead of the expected value: 'ok' (0)
Nov 13 13:47:08 [15882] vm1    pengine: (    unpack.c:2334  ) warning: unpack_rsc_op_failure: 	Processing failed op monitor for pDummy on vm3: not running (7)
Nov 13 13:47:08 [15882] vm1    pengine: (    unpack.c:64    ) warning: pe_fence_node: 	Node vm3 will be fenced because of resource failure(s)
Nov 13 13:47:08 [15882] vm1    pengine: (    native.c:446   )    info: native_print: 	F1	(stonith:external/libvirt):	Started vm1 
Nov 13 13:47:08 [15882] vm1    pengine: (    native.c:446   )    info: native_print: 	pDummy	(ocf::pacemaker:Dummy):	FAILED vm3 
Nov 13 13:47:08 [15882] vm1    pengine: (  allocate.c:593   )   debug: common_apply_stickiness: 	Resource F1: preferring current location (node=vm1, weight=1000000)
Nov 13 13:47:08 [15882] vm1    pengine: (  allocate.c:593   )   debug: common_apply_stickiness: 	Resource pDummy: preferring current location (node=vm3, weight=1000000)
Nov 13 13:47:08 [15882] vm1    pengine: (     utils.c:1497  )    info: get_failcount_full: 	pDummy has failed 1 times on vm3
Nov 13 13:47:08 [15882] vm1    pengine: (  allocate.c:622   ) warning: common_apply_stickiness: 	Forcing pDummy away from vm3 after 1 failures (max=1)
Nov 13 13:47:08 [15882] vm1    pengine: (     utils.c:386   )   debug: native_assign_node: 	Assigning vm1 to F1
Nov 13 13:47:08 [15882] vm1    pengine: (     utils.c:386   )   debug: native_assign_node: 	Assigning vm2 to pDummy
Nov 13 13:47:08 [15882] vm1    pengine: (    native.c:788   )    info: RecurringOp: 	 Start recurring monitor (10s) for pDummy on vm2
Nov 13 13:47:08 [15882] vm1    pengine: (  allocate.c:1346  ) warning: stage6: 	Scheduling Node vm3 for STONITH
Nov 13 13:47:08 [15882] vm1    pengine: (    native.c:2585  )  notice: native_stop_constraints: 	Stop of failed resource pDummy is implicit after vm3 is fenced
Nov 13 13:47:08 [15882] vm1    pengine: (    native.c:1987  )    info: LogActions: 	Leave   F1	(Started vm1)
Nov 13 13:47:08 [15882] vm1    pengine: (    native.c:1996  )  notice: LogActions: 	Recover pDummy	(Started vm3 -> vm2)
Nov 13 13:47:08 [15882] vm1    pengine: (     utils.c:1216  )   debug: get_last_sequence: 	Series file /var/lib/pacemaker/pengine/pe-warn.last does not exist
Nov 13 13:47:08 [15883] vm1       crmd: (       fsa.c:193   )   debug: s_crmd_fsa: 	Processing I_PE_SUCCESS: [ state=S_POLICY_ENGINE cause=C_IPC_MESSAGE origin=handle_response ]
Nov 13 13:47:08 [15883] vm1       crmd: (       fsa.c:502   )    info: do_state_transition: 	State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
Nov 13 13:47:08 [15883] vm1       crmd: (    unpack.c:230   )   debug: unpack_graph: 	Unpacked transition 2: 6 actions in 6 synapses
Nov 13 13:47:08 [15883] vm1       crmd: (   tengine.c:208   )    info: do_te_invoke: 	Processing graph 2 (ref=pe_calc-dc-1384318028-47) derived from /var/lib/pacemaker/pengine/pe-warn-0.bz2
Nov 13 13:47:08 [15883] vm1       crmd: (te_actions.c:140   )  notice: te_fence_node: 	Executing reboot fencing operation (13) on vm3 (timeout=60000)
Nov 13 13:47:08 [15879] vm1 stonith-ng: (      main.c:121   )   trace: st_ipc_dispatch: 	Flags 0/0 for command 94 from crmd.15883
Nov 13 13:47:08 [15879] vm1 stonith-ng: (      main.c:133   )   trace: st_ipc_dispatch: 	Client[inbound]   <stonith_command __name__="stonith_command" t="stonith-ng" st_async_id="e6cedc3f-e233-4a66-9291-ce359bd76aad" st_op="st_fence" st_callid="2" st_callopt="0" st_timeout="60" st_clientid="e6cedc3f-e233-4a66-9291-ce359bd76aad" st_clientname="crmd.15883" st_clientnode="vm1">
Nov 13 13:47:08 [15879] vm1 stonith-ng: (      main.c:133   )   trace: st_ipc_dispatch: 	Client[inbound]     <st_calldata>
Nov 13 13:47:08 [15879] vm1 stonith-ng: (      main.c:133   )   trace: st_ipc_dispatch: 	Client[inbound]       <stonith_api_fence st_target="vm3" st_device_action="reboot" st_timeout="60" st_tolerance="0"/>
Nov 13 13:47:08 [15879] vm1 stonith-ng: (      main.c:133   )   trace: st_ipc_dispatch: 	Client[inbound]     </st_calldata>
Nov 13 13:47:08 [15879] vm1 stonith-ng: (      main.c:133   )   trace: st_ipc_dispatch: 	Client[inbound]   </stonith_command>
Nov 13 13:47:08 [15879] vm1 stonith-ng: (  commands.c:2049  )   debug: stonith_command: 	Processing st_fence 94 from crmd.15883 (               0)
Nov 13 13:47:08 [15879] vm1 stonith-ng: (  commands.c:1875  )  notice: handle_request: 	Client crmd.15883.e6cedc3f wants to fence (reboot) 'vm3' with device '(any)'
Nov 13 13:47:08 [15879] vm1 stonith-ng: (    remote.c:1412  )   trace: stonith_check_fence_tolerance: 	tolerance=0, remote_op_list=(nil)
Nov 13 13:47:08 [15879] vm1 stonith-ng: (    remote.c:593   )   trace: create_remote_stonith_op: 	Created 696fb2c3-e11a-4124-ba9b-bafc9ab28426
Nov 13 13:47:08 [15879] vm1 stonith-ng: (    remote.c:623   )   trace: create_remote_stonith_op: 	Generated new stonith op: 696fb2c3-e11a-4124-ba9b-bafc9ab28426 - reboot of vm3 for crmd.15883
Nov 13 13:47:08 [15879] vm1 stonith-ng: (    remote.c:474   )   trace: merge_duplicates: 	Must be for different clients: crmd.15883
Nov 13 13:47:08 [15879] vm1 stonith-ng: (    remote.c:437   )   trace: stonith_topology_next: 	Attempting fencing level 1 for vm3 (1 devices) - crmd.15883@vm1.696fb2c3
Nov 13 13:47:08 [15879] vm1 stonith-ng: (    remote.c:689   )  notice: initiate_remote_stonith_op: 	Initiating remote operation reboot for vm3: 696fb2c3-e11a-4124-ba9b-bafc9ab28426 (0)
Nov 13 13:47:08 [15879] vm1 stonith-ng: (  commands.c:2063  )    info: stonith_command: 	Processed st_fence from crmd.15883: Operation now in progress (-115)
Nov 13 13:47:08 [15883] vm1       crmd: (     graph.c:336   )   debug: run_graph: 	Transition 2 (Complete=0, Pending=1, Fired=1, Skipped=0, Incomplete=5, Source=/var/lib/pacemaker/pengine/pe-warn-0.bz2): In-progress
Nov 13 13:47:08 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]   <stonith_command __name__="stonith_command" t="stonith-ng" st_async_id="696fb2c3-e11a-4124-ba9b-bafc9ab28426" st_op="st_query" st_callid="2" st_callopt="0" st_remote_op="696fb2c3-e11a-4124-ba9b-bafc9ab28426" st_target="vm3" st_device_action="reboot" st_origin="vm1" st_clientid="e6cedc3f-e233-4a66-9291-ce359bd76aad" st_clientname="crmd.15883" st_timeout="60" src="vm1"/>
Nov 13 13:47:08 [15879] vm1 stonith-ng: (  commands.c:2049  )   debug: stonith_command: 	Processing st_query 0 from vm1 (               0)
Nov 13 13:47:08 [15879] vm1 stonith-ng: (    remote.c:576   )   debug: create_remote_stonith_op: 	696fb2c3-e11a-4124-ba9b-bafc9ab28426 already exists
Nov 13 13:47:08 [15879] vm1 stonith-ng: (  commands.c:1296  )   debug: stonith_query: 	Query   <stonith_command __name__="stonith_command" t="stonith-ng" st_async_id="696fb2c3-e11a-4124-ba9b-bafc9ab28426" st_op="st_query" st_callid="2" st_callopt="0" st_remote_op="696fb2c3-e11a-4124-ba9b-bafc9ab28426" st_target="vm3" st_device_action="reboot" st_origin="vm1" st_clientid="e6cedc3f-e233-4a66-9291-ce359bd76aad" st_clientname="crmd.15883" st_timeout="60" src="vm1"/>
Nov 13 13:47:08 [15879] vm1 stonith-ng: (  commands.c:1650  )   trace: stonith_construct_reply: 	Creating a basic reply
Nov 13 13:47:08 [15879] vm1 stonith-ng: (  commands.c:1199  )   debug: get_capable_devices: 	Searching through 1 devices to see what is capable of action (reboot) for target vm3
Nov 13 13:47:08 [15879] vm1 stonith-ng: (  commands.c:288   )   debug: schedule_stonith_command: 	Scheduling list on F1 for stonith-ng (timeout=60s)
Nov 13 13:47:08 [15879] vm1 stonith-ng: (  commands.c:2063  )    info: stonith_command: 	Processed st_query from vm1: OK (0)
Nov 13 13:47:08 [15879] vm1 stonith-ng: ( st_client.c:576   )    info: stonith_action_create: 	Initiating action list for agent fence_legacy (target=(null))
Nov 13 13:47:08 [15879] vm1 stonith-ng: ( st_client.c:739   )   debug: internal_stonith_action_execute: 	forking
Nov 13 13:47:08 [15882] vm1    pengine: (   pengine.c:175   ) warning: process_pe_message: 	Calculated Transition 2: /var/lib/pacemaker/pengine/pe-warn-0.bz2
Nov 13 13:47:08 [15879] vm1 stonith-ng: ( st_client.c:785   )   debug: internal_stonith_action_execute: 	sending args
Nov 13 13:47:08 [15879] vm1 stonith-ng: (  commands.c:246   )   debug: stonith_device_execute: 	Operation list on F1 now running with pid=16051, timeout=60s
Nov 13 13:47:08 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]   <st-reply st_origin="stonith_construct_reply" t="stonith-ng" st_rc="0" st_op="st_query" st_callid="2" st_clientid="e6cedc3f-e233-4a66-9291-ce359bd76aad" st_clientname="crmd.15883" st_remote_op="696fb2c3-e11a-4124-ba9b-bafc9ab28426" st_callopt="0" src="vm3">
Nov 13 13:47:08 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]     <st_calldata>
Nov 13 13:47:08 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]       <stonith_query_capable_device_cb st_target="vm3" st-available-devices="0"/>
Nov 13 13:47:08 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]     </st_calldata>
Nov 13 13:47:08 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]   </st-reply>
Nov 13 13:47:08 [15879] vm1 stonith-ng: (  commands.c:2049  )   debug: stonith_command: 	Processing st_query reply 0 from vm3 (               0)
Nov 13 13:47:08 [15879] vm1 stonith-ng: (    remote.c:1137  )   trace: process_remote_stonith_query: 	Query result from vm3 (0 devices)
Nov 13 13:47:08 [15879] vm1 stonith-ng: (  commands.c:2063  )    info: stonith_command: 	Processed st_query reply from vm3: OK (0)
Nov 13 13:47:08 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]   <st-reply st_origin="stonith_construct_reply" t="stonith-ng" st_rc="0" st_op="st_query" st_callid="2" st_clientid="e6cedc3f-e233-4a66-9291-ce359bd76aad" st_clientname="crmd.15883" st_remote_op="696fb2c3-e11a-4124-ba9b-bafc9ab28426" st_callopt="0" src="vm2">
Nov 13 13:47:08 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]     <st_calldata>
Nov 13 13:47:08 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]       <stonith_query_capable_device_cb st_target="vm3" st-available-devices="1">
Nov 13 13:47:08 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]         <st_device_id id="F1" namespace="heartbeat" agent="fence_legacy" st_monitor_verified="0"/>
Nov 13 13:47:08 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]       </stonith_query_capable_device_cb>
Nov 13 13:47:08 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]     </st_calldata>
Nov 13 13:47:08 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]   </st-reply>
Nov 13 13:47:08 [15879] vm1 stonith-ng: (  commands.c:2049  )   debug: stonith_command: 	Processing st_query reply 0 from vm2 (               0)
Nov 13 13:47:08 [15879] vm1 stonith-ng: (    remote.c:1154  )    info: process_remote_stonith_query: 	Query result 2 of 3 from vm2 (1 devices)
Nov 13 13:47:08 [15879] vm1 stonith-ng: (    remote.c:1196  )   trace: process_remote_stonith_query: 	All topology devices found
Nov 13 13:47:08 [15879] vm1 stonith-ng: (    remote.c:943   )   trace: call_remote_stonith: 	State for vm3.crmd.158: 696fb2c3-e11a-4124-ba9b-bafc9ab28426 0
Nov 13 13:47:08 [15879] vm1 stonith-ng: (    remote.c:904   )   trace: report_timeout_period: 	Reporting timeout for crmd.15883.696fb2c3
Nov 13 13:47:08 [15879] vm1 stonith-ng: (      main.c:343   )   trace: do_stonith_async_timeout_update: 	timeout update is 72 for client e6cedc3f-e233-4a66-9291-ce359bd76aad and call id 2
Nov 13 13:47:08 [15879] vm1 stonith-ng: (    remote.c:955   )    info: call_remote_stonith: 	Total remote op timeout set to 60 for fencing of node vm3 for crmd.15883.696fb2c3
Nov 13 13:47:08 [15879] vm1 stonith-ng: (    remote.c:786   )   trace: stonith_choose_peer: 	Checking for someone to fence vm3 with F1
Nov 13 13:47:08 [15879] vm1 stonith-ng: (    remote.c:757   )   trace: find_best_peer: 	Removing F1 from vm2 (1 remaining)
Nov 13 13:47:08 [15879] vm1 stonith-ng: (    remote.c:984   )    info: call_remote_stonith: 	Requesting that vm2 perform op reboot vm3 with F1 for crmd.15883 (72s)
Nov 13 13:47:08 [15879] vm1 stonith-ng: (  commands.c:2063  )    info: stonith_command: 	Processed st_query reply from vm2: OK (0)
Nov 13 13:47:08 [15879] vm1 stonith-ng: ( st_client.c:678   )   debug: stonith_action_async_done: 	Child process 16051 performing action 'list' exited with rc 0
Nov 13 13:47:08 [15879] vm1 stonith-ng: (  commands.c:749   )    info: dynamic_list_search_cb: 	Refreshing port list for F1
Nov 13 13:47:08 [15879] vm1 stonith-ng: (  commands.c:409   )   trace: parse_host_line: 	Processing 3 bytes: [vm3]
Nov 13 13:47:08 [15879] vm1 stonith-ng: (  commands.c:436   )   trace: parse_host_line: 	Adding 'vm3'
Nov 13 13:47:08 [15879] vm1 stonith-ng: (  commands.c:409   )   trace: parse_host_line: 	Processing 11 bytes: [success:  0]
Nov 13 13:47:08 [15879] vm1 stonith-ng: (  commands.c:436   )   trace: parse_host_line: 	Adding 'success'
Nov 13 13:47:08 [15879] vm1 stonith-ng: (  commands.c:436   )   trace: parse_host_line: 	Adding '0'
Nov 13 13:47:08 [15879] vm1 stonith-ng: (  commands.c:480   )   trace: parse_host_list: 	Parsed 3 entries from 'vm3
success:  0
'
Nov 13 13:47:08 [15879] vm1 stonith-ng: (  commands.c:1037  )   debug: search_devices_record_result: 	Finished Search. 1 devices can perform action (reboot) on node vm3
Nov 13 13:47:08 [15879] vm1 stonith-ng: (  commands.c:1254  )   debug: stonith_query_capable_device_cb: 	Found 1 matching devices for 'vm3'
Nov 13 13:47:08 [15879] vm1 stonith-ng: (  commands.c:1260  )   trace: stonith_query_capable_device_cb: 	Attaching query list output
Nov 13 13:47:08 [15879] vm1 stonith-ng: (  commands.c:213   )   trace: stonith_device_execute: 	Nothing further to do for F1
Nov 13 13:47:08 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]   <st-reply st_origin="stonith_construct_reply" t="stonith-ng" st_rc="0" st_op="st_query" st_callid="2" st_clientid="e6cedc3f-e233-4a66-9291-ce359bd76aad" st_clientname="crmd.15883" st_remote_op="696fb2c3-e11a-4124-ba9b-bafc9ab28426" st_callopt="0" src="vm1">
Nov 13 13:47:08 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]     <st_calldata>
Nov 13 13:47:08 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]       <stonith_query_capable_device_cb st_target="vm3" st-available-devices="1">
Nov 13 13:47:08 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]         <st_device_id id="F1" namespace="heartbeat" agent="fence_legacy" st_monitor_verified="1"/>
Nov 13 13:47:08 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]       </stonith_query_capable_device_cb>
Nov 13 13:47:08 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]     </st_calldata>
Nov 13 13:47:08 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]   </st-reply>
Nov 13 13:47:08 [15879] vm1 stonith-ng: (  commands.c:2049  )   debug: stonith_command: 	Processing st_query reply 0 from vm1 (               0)
Nov 13 13:47:08 [15879] vm1 stonith-ng: (    remote.c:1154  )    info: process_remote_stonith_query: 	Query result 3 of 3 from vm1 (1 devices)
Nov 13 13:47:08 [15879] vm1 stonith-ng: (    remote.c:1177  )   trace: process_remote_stonith_query: 	Peer vm1 has confirmed a verified device F1
Nov 13 13:47:08 [15879] vm1 stonith-ng: (  commands.c:2063  )    info: stonith_command: 	Processed st_query reply from vm1: OK (0)
Nov 13 13:47:12 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]   <st-reply st_origin="vm1" t="stonith-ng" st_op="st_fence" st_device_id="F1" st_remote_op="696fb2c3-e11a-4124-ba9b-bafc9ab28426" st_clientid="e6cedc3f-e233-4a66-9291-ce359bd76aad" st_clientname="crmd.15883" st_target="vm3" st_device_action="st_fence" st_callid="2" st_callopt="0" st_rc="-201" st_output="Performing: stonith -t external/libvirt -T reset vm3\nfailed: vm3 5\n" src="vm2"/>
Nov 13 13:47:12 [15879] vm1 stonith-ng: (  commands.c:2049  )   debug: stonith_command: 	Processing st_fence reply 0 from vm2 (               0)
Nov 13 13:47:12 [15879] vm1 stonith-ng: (    remote.c:1303  )  notice: process_remote_stonith_exec: 	Call to F1 for vm3 on behalf of crmd.15883@vm1: Generic Pacemaker error (-201)
Nov 13 13:47:12 [15879] vm1 stonith-ng: (    remote.c:443   )  notice: stonith_topology_next: 	All fencing options to fence vm3 for crmd.15883@vm1.696fb2c3 failed
Nov 13 13:47:12 [15879] vm1 stonith-ng: (    remote.c:160   )   trace: bcast_result_to_peers: 	Broadcasting result to peers
Nov 13 13:47:12 [15879] vm1 stonith-ng: (  commands.c:2063  )    info: stonith_command: 	Processed st_fence reply from vm2: OK (0)
Nov 13 13:47:12 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]   <st-reply t="st_notify" subt="broadcast" st_op="st_notify" count="1" src="vm1">
Nov 13 13:47:12 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]     <st_calldata>
Nov 13 13:47:12 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]       <st_notify_fence state="4" st_rc="-201" st_target="vm3" st_device_action="reboot" st_delegate="vm2" st_remote_op="696fb2c3-e11a-4124-ba9b-bafc9ab28426" st_origin="vm1" st_clientid="e6cedc3f-e233-4a66-9291-ce359bd76aad" st_clientname="crmd.15883"/>
Nov 13 13:47:12 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]     </st_calldata>
Nov 13 13:47:12 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]   </st-reply>
Nov 13 13:47:12 [15879] vm1 stonith-ng: (  commands.c:2049  )   debug: stonith_command: 	Processing st_notify reply 0 from vm1 (               0)
Nov 13 13:47:12 [15879] vm1 stonith-ng: (    remote.c:1281  )   debug: process_remote_stonith_exec: 	Marking call to reboot for vm3 on behalf of crmd.15883@696fb2c3-e11a-4124-ba9b-bafc9ab28426.vm1: Generic Pacemaker error (-201)
Nov 13 13:47:12 [15879] vm1 stonith-ng: (    remote.c:297   )   error: remote_op_done: 	Operation reboot of vm3 by vm2 for crmd.15883@vm1.696fb2c3: Generic Pacemaker error
Nov 13 13:47:12 [15879] vm1 stonith-ng: (  commands.c:1650  )   trace: stonith_construct_reply: 	Creating a basic reply
Nov 13 13:47:12 [15879] vm1 stonith-ng: (  commands.c:1666  )   trace: stonith_construct_reply: 	Attaching reply output
Nov 13 13:47:12 [15879] vm1 stonith-ng: (      main.c:241   )   trace: do_local_reply: 	Sending response
Nov 13 13:47:12 [15879] vm1 stonith-ng: (      main.c:244   )   trace: do_local_reply: 	Sending callback to request originator
Nov 13 13:47:12 [15879] vm1 stonith-ng: (      main.c:263   )   trace: do_local_reply: 	Sending an event to crmd.15883 
Nov 13 13:47:12 [15883] vm1       crmd: (te_callbacks:411   )  notice: tengine_stonith_callback: 	Stonith operation 2/13:2:0:154fb289-24e8-407e-9a03-69a510480b60: Generic Pacemaker error (-201)
Nov 13 13:47:12 [15883] vm1       crmd: (te_callbacks:462   )  notice: tengine_stonith_callback: 	Stonith operation 2 for vm3 failed (Generic Pacemaker error): aborting transition.
Nov 13 13:47:12 [15883] vm1       crmd: (  te_utils.c:425   )    info: abort_transition_graph: 	tengine_stonith_callback:463 - Triggered transition abort (complete=0) : Stonith failed
Nov 13 13:47:12 [15883] vm1       crmd: (     utils.c:271   )   debug: update_abort_priority: 	Abort priority upgraded from 0 to 1000000
Nov 13 13:47:12 [15883] vm1       crmd: (     utils.c:281   )   debug: update_abort_priority: 	Abort action done superceeded by restart
Nov 13 13:47:12 [15883] vm1       crmd: (     graph.c:336   )  notice: run_graph: 	Transition 2 (Complete=1, Pending=0, Fired=0, Skipped=5, Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-warn-0.bz2): Stopped
Nov 13 13:47:12 [15883] vm1       crmd: (  te_utils.c:349   )   debug: te_graph_trigger: 	Transition 2 is now complete
Nov 13 13:47:12 [15883] vm1       crmd: (te_actions.c:654   )   debug: notify_crmd: 	Processing transition completion in state S_TRANSITION_ENGINE
Nov 13 13:47:12 [15883] vm1       crmd: (     utils.c:192   )   debug: crm_timer_start: 	Started New Transition Timer (I_PE_CALC:2000ms), src=90
Nov 13 13:47:12 [15883] vm1       crmd: (te_actions.c:699   )   debug: notify_crmd: 	Transition 2 status: restart - Stonith failed
Nov 13 13:47:12 [15879] vm1 stonith-ng: (      main.c:369   )   trace: do_stonith_notify: 	Notifying clients
Nov 13 13:47:12 [15879] vm1 stonith-ng: (      main.c:318   )   trace: stonith_notify_client: 	Sent st_notify_fence notification to client crmd.15883.e6cedc
Nov 13 13:47:12 [15879] vm1 stonith-ng: (      main.c:372   )   trace: do_stonith_notify: 	Notify complete
Nov 13 13:47:12 [15879] vm1 stonith-ng: (    remote.c:77    )   trace: free_remote_query: 	Free'ing query result from vm1
Nov 13 13:47:12 [15879] vm1 stonith-ng: (    remote.c:77    )   trace: free_remote_query: 	Free'ing query result from vm2
Nov 13 13:47:12 [15883] vm1       crmd: (  te_utils.c:167   )  notice: tengine_stonith_notify: 	Peer vm3 was not terminated (reboot) by vm2 for vm1: Generic Pacemaker error (ref=696fb2c3-e11a-4124-ba9b-bafc9ab28426) by client crmd.15883
Nov 13 13:47:12 [15879] vm1 stonith-ng: (  commands.c:2063  )    info: stonith_command: 	Processed st_notify reply from vm1: OK (0)
Nov 13 13:47:14 [15883] vm1       crmd: (     utils.c:120   )    info: crm_timer_popped: 	New Transition Timer (I_PE_CALC) just popped (2000ms)
Nov 13 13:47:14 [15883] vm1       crmd: (       fsa.c:193   )   debug: s_crmd_fsa: 	Processing I_PE_CALC: [ state=S_TRANSITION_ENGINE cause=C_TIMER_POPPED origin=crm_timer_popped ]
Nov 13 13:47:14 [15883] vm1       crmd: (       fsa.c:502   )    info: do_state_transition: 	State transition S_TRANSITION_ENGINE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_TIMER_POPPED origin=crm_timer_popped ]
Nov 13 13:47:14 [15883] vm1       crmd: (       fsa.c:599   )    info: do_state_transition: 	Progressed to state S_POLICY_ENGINE after C_TIMER_POPPED
Nov 13 13:47:14 [15883] vm1       crmd: (       fsa.c:610   )   debug: do_state_transition: 	All 3 cluster nodes are eligible to run resources.
Nov 13 13:47:14 [15883] vm1       crmd: (   pengine.c:231   )   debug: do_pe_invoke: 	Query 66: Requesting the current CIB: S_POLICY_ENGINE
Nov 13 13:47:14 [15878] vm1        cib: ( callbacks.c:761   )    info: cib_process_request: 	Completed cib_query operation for section 'all': OK (rc=0, origin=local/crmd/66, version=0.8.20)
Nov 13 13:47:14 [15883] vm1       crmd: (   pengine.c:299   )   debug: do_pe_invoke_callback: 	Invoking the PE: query=66, ref=pe_calc-dc-1384318034-48, seq=12, quorate=1
Nov 13 13:47:14 [15882] vm1    pengine: (   pengine.c:116   )    info: process_pe_message: 	Input has not changed since last time, not saving to disk
Nov 13 13:47:14 [15882] vm1    pengine: (    unpack.c:98    )   debug: unpack_config: 	STONITH timeout: 60000
Nov 13 13:47:14 [15882] vm1    pengine: (    unpack.c:102   )   debug: unpack_config: 	STONITH of failed nodes is enabled
Nov 13 13:47:14 [15882] vm1    pengine: (    unpack.c:109   )   debug: unpack_config: 	Stop all active resources: false
Nov 13 13:47:14 [15882] vm1    pengine: (    unpack.c:113   )   debug: unpack_config: 	Cluster is symmetric - resources can run anywhere by default
Nov 13 13:47:14 [15882] vm1    pengine: (    unpack.c:118   )   debug: unpack_config: 	Default stickiness: 0
Nov 13 13:47:14 [15882] vm1    pengine: (    unpack.c:152   )   debug: unpack_config: 	On loss of CCM Quorum: Freeze resources
Nov 13 13:47:14 [15882] vm1    pengine: (    unpack.c:201   )   debug: unpack_config: 	Node scores: 'red' = -INFINITY, 'yellow' = 0, 'green' = 0
Nov 13 13:47:14 [15882] vm1    pengine: (    unpack.c:509   )   debug: unpack_domains: 	Unpacking domains
Nov 13 13:47:14 [15882] vm1    pengine: (    unpack.c:1220  )    info: determine_online_status_fencing: 	Node vm3 is active
Nov 13 13:47:14 [15882] vm1    pengine: (    unpack.c:1336  )    info: determine_online_status: 	Node vm3 is online
Nov 13 13:47:14 [15882] vm1    pengine: (    unpack.c:1220  )    info: determine_online_status_fencing: 	Node vm1 is active
Nov 13 13:47:14 [15882] vm1    pengine: (    unpack.c:1336  )    info: determine_online_status: 	Node vm1 is online
Nov 13 13:47:14 [15882] vm1    pengine: (    unpack.c:1220  )    info: determine_online_status_fencing: 	Node vm2 is active
Nov 13 13:47:14 [15882] vm1    pengine: (    unpack.c:1336  )    info: determine_online_status: 	Node vm2 is online
Nov 13 13:47:14 [15882] vm1    pengine: (    unpack.c:2456  )   debug: determine_op_status: 	pDummy_monitor_10000 on vm3 returned 'not running' (7) instead of the expected value: 'ok' (0)
Nov 13 13:47:14 [15882] vm1    pengine: (    unpack.c:2334  ) warning: unpack_rsc_op_failure: 	Processing failed op monitor for pDummy on vm3: not running (7)
Nov 13 13:47:14 [15882] vm1    pengine: (    unpack.c:64    ) warning: pe_fence_node: 	Node vm3 will be fenced because of resource failure(s)
Nov 13 13:47:14 [15882] vm1    pengine: (    native.c:446   )    info: native_print: 	F1	(stonith:external/libvirt):	Started vm1 
Nov 13 13:47:14 [15882] vm1    pengine: (    native.c:446   )    info: native_print: 	pDummy	(ocf::pacemaker:Dummy):	FAILED vm3 
Nov 13 13:47:14 [15882] vm1    pengine: (  allocate.c:593   )   debug: common_apply_stickiness: 	Resource F1: preferring current location (node=vm1, weight=1000000)
Nov 13 13:47:14 [15882] vm1    pengine: (  allocate.c:593   )   debug: common_apply_stickiness: 	Resource pDummy: preferring current location (node=vm3, weight=1000000)
Nov 13 13:47:14 [15882] vm1    pengine: (     utils.c:1497  )    info: get_failcount_full: 	pDummy has failed 1 times on vm3
Nov 13 13:47:14 [15882] vm1    pengine: (  allocate.c:622   ) warning: common_apply_stickiness: 	Forcing pDummy away from vm3 after 1 failures (max=1)
Nov 13 13:47:14 [15882] vm1    pengine: (     utils.c:386   )   debug: native_assign_node: 	Assigning vm1 to F1
Nov 13 13:47:14 [15882] vm1    pengine: (     utils.c:386   )   debug: native_assign_node: 	Assigning vm2 to pDummy
Nov 13 13:47:14 [15882] vm1    pengine: (    native.c:788   )    info: RecurringOp: 	 Start recurring monitor (10s) for pDummy on vm2
Nov 13 13:47:14 [15882] vm1    pengine: (  allocate.c:1346  ) warning: stage6: 	Scheduling Node vm3 for STONITH
Nov 13 13:47:14 [15882] vm1    pengine: (    native.c:2585  )  notice: native_stop_constraints: 	Stop of failed resource pDummy is implicit after vm3 is fenced
Nov 13 13:47:14 [15882] vm1    pengine: (    native.c:1987  )    info: LogActions: 	Leave   F1	(Started vm1)
Nov 13 13:47:14 [15882] vm1    pengine: (    native.c:1996  )  notice: LogActions: 	Recover pDummy	(Started vm3 -> vm2)
Nov 13 13:47:14 [15883] vm1       crmd: (       fsa.c:193   )   debug: s_crmd_fsa: 	Processing I_PE_SUCCESS: [ state=S_POLICY_ENGINE cause=C_IPC_MESSAGE origin=handle_response ]
Nov 13 13:47:14 [15883] vm1       crmd: (       fsa.c:502   )    info: do_state_transition: 	State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
Nov 13 13:47:14 [15883] vm1       crmd: (    unpack.c:230   )   debug: unpack_graph: 	Unpacked transition 3: 6 actions in 6 synapses
Nov 13 13:47:14 [15883] vm1       crmd: (   tengine.c:208   )    info: do_te_invoke: 	Processing graph 3 (ref=pe_calc-dc-1384318034-48) derived from /var/lib/pacemaker/pengine/pe-warn-0.bz2
Nov 13 13:47:14 [15883] vm1       crmd: (te_actions.c:140   )  notice: te_fence_node: 	Executing reboot fencing operation (13) on vm3 (timeout=60000)
Nov 13 13:47:14 [15883] vm1       crmd: (     graph.c:336   )   debug: run_graph: 	Transition 3 (Complete=0, Pending=1, Fired=1, Skipped=0, Incomplete=5, Source=/var/lib/pacemaker/pengine/pe-warn-0.bz2): In-progress
Nov 13 13:47:14 [15879] vm1 stonith-ng: (      main.c:121   )   trace: st_ipc_dispatch: 	Flags 0/0 for command 97 from crmd.15883
Nov 13 13:47:14 [15879] vm1 stonith-ng: (      main.c:133   )   trace: st_ipc_dispatch: 	Client[inbound]   <stonith_command __name__="stonith_command" t="stonith-ng" st_async_id="e6cedc3f-e233-4a66-9291-ce359bd76aad" st_op="st_fence" st_callid="3" st_callopt="0" st_timeout="60" st_clientid="e6cedc3f-e233-4a66-9291-ce359bd76aad" st_clientname="crmd.15883" st_clientnode="vm1">
Nov 13 13:47:14 [15879] vm1 stonith-ng: (      main.c:133   )   trace: st_ipc_dispatch: 	Client[inbound]     <st_calldata>
Nov 13 13:47:14 [15879] vm1 stonith-ng: (      main.c:133   )   trace: st_ipc_dispatch: 	Client[inbound]       <stonith_api_fence st_target="vm3" st_device_action="reboot" st_timeout="60" st_tolerance="0"/>
Nov 13 13:47:14 [15879] vm1 stonith-ng: (      main.c:133   )   trace: st_ipc_dispatch: 	Client[inbound]     </st_calldata>
Nov 13 13:47:14 [15879] vm1 stonith-ng: (      main.c:133   )   trace: st_ipc_dispatch: 	Client[inbound]   </stonith_command>
Nov 13 13:47:14 [15879] vm1 stonith-ng: (  commands.c:2049  )   debug: stonith_command: 	Processing st_fence 97 from crmd.15883 (               0)
Nov 13 13:47:14 [15879] vm1 stonith-ng: (  commands.c:1875  )  notice: handle_request: 	Client crmd.15883.e6cedc3f wants to fence (reboot) 'vm3' with device '(any)'
Nov 13 13:47:14 [15879] vm1 stonith-ng: (    remote.c:1412  )   trace: stonith_check_fence_tolerance: 	tolerance=0, remote_op_list=0x193c5e0
Nov 13 13:47:14 [15879] vm1 stonith-ng: (    remote.c:593   )   trace: create_remote_stonith_op: 	Created 431c7488-013e-4900-bde7-a3ce154b35a3
Nov 13 13:47:14 [15879] vm1 stonith-ng: (    remote.c:623   )   trace: create_remote_stonith_op: 	Generated new stonith op: 431c7488-013e-4900-bde7-a3ce154b35a3 - reboot of vm3 for crmd.15883
Nov 13 13:47:14 [15879] vm1 stonith-ng: (    remote.c:474   )   trace: merge_duplicates: 	Must be for different clients: crmd.15883
Nov 13 13:47:14 [15879] vm1 stonith-ng: (    remote.c:437   )   trace: stonith_topology_next: 	Attempting fencing level 1 for vm3 (1 devices) - crmd.15883@vm1.431c7488
Nov 13 13:47:14 [15879] vm1 stonith-ng: (    remote.c:689   )  notice: initiate_remote_stonith_op: 	Initiating remote operation reboot for vm3: 431c7488-013e-4900-bde7-a3ce154b35a3 (0)
Nov 13 13:47:14 [15879] vm1 stonith-ng: (  commands.c:2063  )    info: stonith_command: 	Processed st_fence from crmd.15883: Operation now in progress (-115)
Nov 13 13:47:14 [15882] vm1    pengine: (   pengine.c:175   ) warning: process_pe_message: 	Calculated Transition 3: /var/lib/pacemaker/pengine/pe-warn-0.bz2
Nov 13 13:47:14 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]   <stonith_command __name__="stonith_command" t="stonith-ng" st_async_id="431c7488-013e-4900-bde7-a3ce154b35a3" st_op="st_query" st_callid="3" st_callopt="0" st_remote_op="431c7488-013e-4900-bde7-a3ce154b35a3" st_target="vm3" st_device_action="reboot" st_origin="vm1" st_clientid="e6cedc3f-e233-4a66-9291-ce359bd76aad" st_clientname="crmd.15883" st_timeout="60" src="vm1"/>
Nov 13 13:47:14 [15879] vm1 stonith-ng: (  commands.c:2049  )   debug: stonith_command: 	Processing st_query 0 from vm1 (               0)
Nov 13 13:47:14 [15879] vm1 stonith-ng: (    remote.c:576   )   debug: create_remote_stonith_op: 	431c7488-013e-4900-bde7-a3ce154b35a3 already exists
Nov 13 13:47:14 [15879] vm1 stonith-ng: (  commands.c:1296  )   debug: stonith_query: 	Query   <stonith_command __name__="stonith_command" t="stonith-ng" st_async_id="431c7488-013e-4900-bde7-a3ce154b35a3" st_op="st_query" st_callid="3" st_callopt="0" st_remote_op="431c7488-013e-4900-bde7-a3ce154b35a3" st_target="vm3" st_device_action="reboot" st_origin="vm1" st_clientid="e6cedc3f-e233-4a66-9291-ce359bd76aad" st_clientname="crmd.15883" st_timeout="60" src="vm1"/>
Nov 13 13:47:14 [15879] vm1 stonith-ng: (  commands.c:1650  )   trace: stonith_construct_reply: 	Creating a basic reply
Nov 13 13:47:14 [15879] vm1 stonith-ng: (  commands.c:1199  )   debug: get_capable_devices: 	Searching through 1 devices to see what is capable of action (reboot) for target vm3
Nov 13 13:47:14 [15879] vm1 stonith-ng: (  commands.c:1118  )  notice: can_fence_host_with_device: 	F1 can fence vm3: dynamic-list
Nov 13 13:47:14 [15879] vm1 stonith-ng: (  commands.c:1037  )   debug: search_devices_record_result: 	Finished Search. 1 devices can perform action (reboot) on node vm3
Nov 13 13:47:14 [15879] vm1 stonith-ng: (  commands.c:1254  )   debug: stonith_query_capable_device_cb: 	Found 1 matching devices for 'vm3'
Nov 13 13:47:14 [15879] vm1 stonith-ng: (  commands.c:1260  )   trace: stonith_query_capable_device_cb: 	Attaching query list output
Nov 13 13:47:14 [15879] vm1 stonith-ng: (  commands.c:2063  )    info: stonith_command: 	Processed st_query from vm1: OK (0)
Nov 13 13:47:14 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]   <st-reply st_origin="stonith_construct_reply" t="stonith-ng" st_rc="0" st_op="st_query" st_callid="3" st_clientid="e6cedc3f-e233-4a66-9291-ce359bd76aad" st_clientname="crmd.15883" st_remote_op="431c7488-013e-4900-bde7-a3ce154b35a3" st_callopt="0" src="vm2">
Nov 13 13:47:14 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]     <st_calldata>
Nov 13 13:47:14 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]       <stonith_query_capable_device_cb st_target="vm3" st-available-devices="1">
Nov 13 13:47:14 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]         <st_device_id id="F1" namespace="heartbeat" agent="fence_legacy" st_monitor_verified="0"/>
Nov 13 13:47:14 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]       </stonith_query_capable_device_cb>
Nov 13 13:47:14 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]     </st_calldata>
Nov 13 13:47:14 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]   </st-reply>
Nov 13 13:47:14 [15879] vm1 stonith-ng: (  commands.c:2049  )   debug: stonith_command: 	Processing st_query reply 0 from vm2 (               0)
Nov 13 13:47:14 [15879] vm1 stonith-ng: (    remote.c:1154  )    info: process_remote_stonith_query: 	Query result 1 of 3 from vm2 (1 devices)
Nov 13 13:47:14 [15879] vm1 stonith-ng: (    remote.c:1196  )   trace: process_remote_stonith_query: 	All topology devices found
Nov 13 13:47:14 [15879] vm1 stonith-ng: (    remote.c:943   )   trace: call_remote_stonith: 	State for vm3.crmd.158: 431c7488-013e-4900-bde7-a3ce154b35a3 0
Nov 13 13:47:14 [15879] vm1 stonith-ng: (    remote.c:904   )   trace: report_timeout_period: 	Reporting timeout for crmd.15883.431c7488
Nov 13 13:47:14 [15879] vm1 stonith-ng: (      main.c:343   )   trace: do_stonith_async_timeout_update: 	timeout update is 72 for client e6cedc3f-e233-4a66-9291-ce359bd76aad and call id 3
Nov 13 13:47:14 [15879] vm1 stonith-ng: (    remote.c:955   )    info: call_remote_stonith: 	Total remote op timeout set to 60 for fencing of node vm3 for crmd.15883.431c7488
Nov 13 13:47:14 [15879] vm1 stonith-ng: (    remote.c:786   )   trace: stonith_choose_peer: 	Checking for someone to fence vm3 with F1
Nov 13 13:47:14 [15879] vm1 stonith-ng: (    remote.c:757   )   trace: find_best_peer: 	Removing F1 from vm2 (1 remaining)
Nov 13 13:47:14 [15879] vm1 stonith-ng: (    remote.c:984   )    info: call_remote_stonith: 	Requesting that vm2 perform op reboot vm3 with F1 for crmd.15883 (72s)
Nov 13 13:47:14 [15879] vm1 stonith-ng: (  commands.c:2063  )    info: stonith_command: 	Processed st_query reply from vm2: OK (0)
Nov 13 13:47:14 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]   <st-reply st_origin="stonith_construct_reply" t="stonith-ng" st_rc="0" st_op="st_query" st_callid="3" st_clientid="e6cedc3f-e233-4a66-9291-ce359bd76aad" st_clientname="crmd.15883" st_remote_op="431c7488-013e-4900-bde7-a3ce154b35a3" st_callopt="0" src="vm3">
Nov 13 13:47:14 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]     <st_calldata>
Nov 13 13:47:14 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]       <stonith_query_capable_device_cb st_target="vm3" st-available-devices="0"/>
Nov 13 13:47:14 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]     </st_calldata>
Nov 13 13:47:14 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]   </st-reply>
Nov 13 13:47:14 [15879] vm1 stonith-ng: (  commands.c:2049  )   debug: stonith_command: 	Processing st_query reply 0 from vm3 (               0)
Nov 13 13:47:14 [15879] vm1 stonith-ng: (    remote.c:1137  )   trace: process_remote_stonith_query: 	Query result from vm3 (0 devices)
Nov 13 13:47:14 [15879] vm1 stonith-ng: (  commands.c:2063  )    info: stonith_command: 	Processed st_query reply from vm3: OK (0)
Nov 13 13:47:14 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]   <st-reply st_origin="stonith_construct_reply" t="stonith-ng" st_rc="0" st_op="st_query" st_callid="3" st_clientid="e6cedc3f-e233-4a66-9291-ce359bd76aad" st_clientname="crmd.15883" st_remote_op="431c7488-013e-4900-bde7-a3ce154b35a3" st_callopt="0" src="vm1">
Nov 13 13:47:14 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]     <st_calldata>
Nov 13 13:47:14 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]       <stonith_query_capable_device_cb st_target="vm3" st-available-devices="1">
Nov 13 13:47:14 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]         <st_device_id id="F1" namespace="heartbeat" agent="fence_legacy" st_monitor_verified="1"/>
Nov 13 13:47:14 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]       </stonith_query_capable_device_cb>
Nov 13 13:47:14 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]     </st_calldata>
Nov 13 13:47:14 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]   </st-reply>
Nov 13 13:47:14 [15879] vm1 stonith-ng: (  commands.c:2049  )   debug: stonith_command: 	Processing st_query reply 0 from vm1 (               0)
Nov 13 13:47:14 [15879] vm1 stonith-ng: (    remote.c:1154  )    info: process_remote_stonith_query: 	Query result 3 of 3 from vm1 (1 devices)
Nov 13 13:47:14 [15879] vm1 stonith-ng: (    remote.c:1177  )   trace: process_remote_stonith_query: 	Peer vm1 has confirmed a verified device F1
Nov 13 13:47:14 [15879] vm1 stonith-ng: (  commands.c:2063  )    info: stonith_command: 	Processed st_query reply from vm1: OK (0)
Nov 13 13:47:17 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]   <st-reply st_origin="vm1" t="stonith-ng" st_op="st_fence" st_device_id="F1" st_remote_op="431c7488-013e-4900-bde7-a3ce154b35a3" st_clientid="e6cedc3f-e233-4a66-9291-ce359bd76aad" st_clientname="crmd.15883" st_target="vm3" st_device_action="st_fence" st_callid="3" st_callopt="0" st_rc="-201" st_output="Performing: stonith -t external/libvirt -T reset vm3\nfailed: vm3 5\n" src="vm2"/>
Nov 13 13:47:17 [15879] vm1 stonith-ng: (  commands.c:2049  )   debug: stonith_command: 	Processing st_fence reply 0 from vm2 (               0)
Nov 13 13:47:17 [15879] vm1 stonith-ng: (    remote.c:1303  )  notice: process_remote_stonith_exec: 	Call to F1 for vm3 on behalf of crmd.15883@vm1: Generic Pacemaker error (-201)
Nov 13 13:47:17 [15879] vm1 stonith-ng: (    remote.c:443   )  notice: stonith_topology_next: 	All fencing options to fence vm3 for crmd.15883@vm1.431c7488 failed
Nov 13 13:47:17 [15879] vm1 stonith-ng: (    remote.c:160   )   trace: bcast_result_to_peers: 	Broadcasting result to peers
Nov 13 13:47:17 [15879] vm1 stonith-ng: (  commands.c:2063  )    info: stonith_command: 	Processed st_fence reply from vm2: OK (0)
Nov 13 13:47:17 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]   <st-reply t="st_notify" subt="broadcast" st_op="st_notify" count="2" src="vm1">
Nov 13 13:47:17 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]     <st_calldata>
Nov 13 13:47:17 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]       <st_notify_fence state="4" st_rc="-201" st_target="vm3" st_device_action="reboot" st_delegate="vm2" st_remote_op="431c7488-013e-4900-bde7-a3ce154b35a3" st_origin="vm1" st_clientid="e6cedc3f-e233-4a66-9291-ce359bd76aad" st_clientname="crmd.15883"/>
Nov 13 13:47:17 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]     </st_calldata>
Nov 13 13:47:17 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]   </st-reply>
Nov 13 13:47:17 [15879] vm1 stonith-ng: (  commands.c:2049  )   debug: stonith_command: 	Processing st_notify reply 0 from vm1 (               0)
Nov 13 13:47:17 [15879] vm1 stonith-ng: (    remote.c:1281  )   debug: process_remote_stonith_exec: 	Marking call to reboot for vm3 on behalf of crmd.15883@431c7488-013e-4900-bde7-a3ce154b35a3.vm1: Generic Pacemaker error (-201)
Nov 13 13:47:17 [15879] vm1 stonith-ng: (    remote.c:297   )   error: remote_op_done: 	Operation reboot of vm3 by vm2 for crmd.15883@vm1.431c7488: Generic Pacemaker error
Nov 13 13:47:17 [15879] vm1 stonith-ng: (  commands.c:1650  )   trace: stonith_construct_reply: 	Creating a basic reply
Nov 13 13:47:17 [15879] vm1 stonith-ng: (  commands.c:1666  )   trace: stonith_construct_reply: 	Attaching reply output
Nov 13 13:47:17 [15879] vm1 stonith-ng: (      main.c:241   )   trace: do_local_reply: 	Sending response
Nov 13 13:47:17 [15879] vm1 stonith-ng: (      main.c:244   )   trace: do_local_reply: 	Sending callback to request originator
Nov 13 13:47:17 [15879] vm1 stonith-ng: (      main.c:263   )   trace: do_local_reply: 	Sending an event to crmd.15883 
Nov 13 13:47:17 [15879] vm1 stonith-ng: (      main.c:369   )   trace: do_stonith_notify: 	Notifying clients
Nov 13 13:47:17 [15879] vm1 stonith-ng: (      main.c:318   )   trace: stonith_notify_client: 	Sent st_notify_fence notification to client crmd.15883.e6cedc
Nov 13 13:47:17 [15879] vm1 stonith-ng: (      main.c:372   )   trace: do_stonith_notify: 	Notify complete
Nov 13 13:47:17 [15879] vm1 stonith-ng: (    remote.c:77    )   trace: free_remote_query: 	Free'ing query result from vm1
Nov 13 13:47:17 [15879] vm1 stonith-ng: (    remote.c:77    )   trace: free_remote_query: 	Free'ing query result from vm2
Nov 13 13:47:17 [15879] vm1 stonith-ng: (  commands.c:2063  )    info: stonith_command: 	Processed st_notify reply from vm1: OK (0)
Nov 13 13:47:17 [15883] vm1       crmd: (te_callbacks:411   )  notice: tengine_stonith_callback: 	Stonith operation 3/13:3:0:154fb289-24e8-407e-9a03-69a510480b60: Generic Pacemaker error (-201)
Nov 13 13:47:17 [15883] vm1       crmd: (te_callbacks:462   )  notice: tengine_stonith_callback: 	Stonith operation 3 for vm3 failed (Generic Pacemaker error): aborting transition.
Nov 13 13:47:17 [15883] vm1       crmd: (  te_utils.c:425   )    info: abort_transition_graph: 	tengine_stonith_callback:463 - Triggered transition abort (complete=0) : Stonith failed
Nov 13 13:47:17 [15883] vm1       crmd: (     utils.c:271   )   debug: update_abort_priority: 	Abort priority upgraded from 0 to 1000000
Nov 13 13:47:17 [15883] vm1       crmd: (     utils.c:281   )   debug: update_abort_priority: 	Abort action done superceeded by restart
Nov 13 13:47:17 [15883] vm1       crmd: (  te_utils.c:167   )  notice: tengine_stonith_notify: 	Peer vm3 was not terminated (reboot) by vm2 for vm1: Generic Pacemaker error (ref=431c7488-013e-4900-bde7-a3ce154b35a3) by client crmd.15883
Nov 13 13:47:17 [15883] vm1       crmd: (     graph.c:336   )  notice: run_graph: 	Transition 3 (Complete=1, Pending=0, Fired=0, Skipped=5, Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-warn-0.bz2): Stopped
Nov 13 13:47:17 [15883] vm1       crmd: (  te_utils.c:349   )   debug: te_graph_trigger: 	Transition 3 is now complete
Nov 13 13:47:17 [15883] vm1       crmd: (te_actions.c:654   )   debug: notify_crmd: 	Processing transition completion in state S_TRANSITION_ENGINE
Nov 13 13:47:17 [15883] vm1       crmd: (     utils.c:192   )   debug: crm_timer_start: 	Started New Transition Timer (I_PE_CALC:2000ms), src=94
Nov 13 13:47:17 [15883] vm1       crmd: (te_actions.c:699   )   debug: notify_crmd: 	Transition 3 status: restart - Stonith failed
Nov 13 13:47:19 [15883] vm1       crmd: (     utils.c:120   )    info: crm_timer_popped: 	New Transition Timer (I_PE_CALC) just popped (2000ms)
Nov 13 13:47:19 [15883] vm1       crmd: (       fsa.c:193   )   debug: s_crmd_fsa: 	Processing I_PE_CALC: [ state=S_TRANSITION_ENGINE cause=C_TIMER_POPPED origin=crm_timer_popped ]
Nov 13 13:47:19 [15883] vm1       crmd: (       fsa.c:502   )    info: do_state_transition: 	State transition S_TRANSITION_ENGINE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_TIMER_POPPED origin=crm_timer_popped ]
Nov 13 13:47:19 [15883] vm1       crmd: (       fsa.c:599   )    info: do_state_transition: 	Progressed to state S_POLICY_ENGINE after C_TIMER_POPPED
Nov 13 13:47:19 [15883] vm1       crmd: (       fsa.c:610   )   debug: do_state_transition: 	All 3 cluster nodes are eligible to run resources.
Nov 13 13:47:19 [15883] vm1       crmd: (   pengine.c:231   )   debug: do_pe_invoke: 	Query 67: Requesting the current CIB: S_POLICY_ENGINE
Nov 13 13:47:19 [15878] vm1        cib: ( callbacks.c:761   )    info: cib_process_request: 	Completed cib_query operation for section 'all': OK (rc=0, origin=local/crmd/67, version=0.8.20)
Nov 13 13:47:19 [15882] vm1    pengine: (   pengine.c:116   )    info: process_pe_message: 	Input has not changed since last time, not saving to disk
Nov 13 13:47:19 [15882] vm1    pengine: (    unpack.c:98    )   debug: unpack_config: 	STONITH timeout: 60000
Nov 13 13:47:19 [15882] vm1    pengine: (    unpack.c:102   )   debug: unpack_config: 	STONITH of failed nodes is enabled
Nov 13 13:47:19 [15882] vm1    pengine: (    unpack.c:109   )   debug: unpack_config: 	Stop all active resources: false
Nov 13 13:47:19 [15882] vm1    pengine: (    unpack.c:113   )   debug: unpack_config: 	Cluster is symmetric - resources can run anywhere by default
Nov 13 13:47:19 [15882] vm1    pengine: (    unpack.c:118   )   debug: unpack_config: 	Default stickiness: 0
Nov 13 13:47:19 [15882] vm1    pengine: (    unpack.c:152   )   debug: unpack_config: 	On loss of CCM Quorum: Freeze resources
Nov 13 13:47:19 [15882] vm1    pengine: (    unpack.c:201   )   debug: unpack_config: 	Node scores: 'red' = -INFINITY, 'yellow' = 0, 'green' = 0
Nov 13 13:47:19 [15882] vm1    pengine: (    unpack.c:509   )   debug: unpack_domains: 	Unpacking domains
Nov 13 13:47:19 [15883] vm1       crmd: (   pengine.c:299   )   debug: do_pe_invoke_callback: 	Invoking the PE: query=67, ref=pe_calc-dc-1384318039-49, seq=12, quorate=1
Nov 13 13:47:19 [15882] vm1    pengine: (    unpack.c:1220  )    info: determine_online_status_fencing: 	Node vm3 is active
Nov 13 13:47:19 [15882] vm1    pengine: (    unpack.c:1336  )    info: determine_online_status: 	Node vm3 is online
Nov 13 13:47:19 [15882] vm1    pengine: (    unpack.c:1220  )    info: determine_online_status_fencing: 	Node vm1 is active
Nov 13 13:47:19 [15882] vm1    pengine: (    unpack.c:1336  )    info: determine_online_status: 	Node vm1 is online
Nov 13 13:47:19 [15882] vm1    pengine: (    unpack.c:1220  )    info: determine_online_status_fencing: 	Node vm2 is active
Nov 13 13:47:19 [15882] vm1    pengine: (    unpack.c:1336  )    info: determine_online_status: 	Node vm2 is online
Nov 13 13:47:19 [15882] vm1    pengine: (    unpack.c:2456  )   debug: determine_op_status: 	pDummy_monitor_10000 on vm3 returned 'not running' (7) instead of the expected value: 'ok' (0)
Nov 13 13:47:19 [15882] vm1    pengine: (    unpack.c:2334  ) warning: unpack_rsc_op_failure: 	Processing failed op monitor for pDummy on vm3: not running (7)
Nov 13 13:47:19 [15882] vm1    pengine: (    unpack.c:64    ) warning: pe_fence_node: 	Node vm3 will be fenced because of resource failure(s)
Nov 13 13:47:19 [15882] vm1    pengine: (    native.c:446   )    info: native_print: 	F1	(stonith:external/libvirt):	Started vm1 
Nov 13 13:47:19 [15882] vm1    pengine: (    native.c:446   )    info: native_print: 	pDummy	(ocf::pacemaker:Dummy):	FAILED vm3 
Nov 13 13:47:19 [15882] vm1    pengine: (  allocate.c:593   )   debug: common_apply_stickiness: 	Resource F1: preferring current location (node=vm1, weight=1000000)
Nov 13 13:47:19 [15882] vm1    pengine: (  allocate.c:593   )   debug: common_apply_stickiness: 	Resource pDummy: preferring current location (node=vm3, weight=1000000)
Nov 13 13:47:19 [15882] vm1    pengine: (     utils.c:1497  )    info: get_failcount_full: 	pDummy has failed 1 times on vm3
Nov 13 13:47:19 [15882] vm1    pengine: (  allocate.c:622   ) warning: common_apply_stickiness: 	Forcing pDummy away from vm3 after 1 failures (max=1)
Nov 13 13:47:19 [15882] vm1    pengine: (     utils.c:386   )   debug: native_assign_node: 	Assigning vm1 to F1
Nov 13 13:47:19 [15882] vm1    pengine: (     utils.c:386   )   debug: native_assign_node: 	Assigning vm2 to pDummy
Nov 13 13:47:19 [15882] vm1    pengine: (    native.c:788   )    info: RecurringOp: 	 Start recurring monitor (10s) for pDummy on vm2
Nov 13 13:47:19 [15882] vm1    pengine: (  allocate.c:1346  ) warning: stage6: 	Scheduling Node vm3 for STONITH
Nov 13 13:47:19 [15882] vm1    pengine: (    native.c:2585  )  notice: native_stop_constraints: 	Stop of failed resource pDummy is implicit after vm3 is fenced
Nov 13 13:47:19 [15882] vm1    pengine: (    native.c:1987  )    info: LogActions: 	Leave   F1	(Started vm1)
Nov 13 13:47:19 [15882] vm1    pengine: (    native.c:1996  )  notice: LogActions: 	Recover pDummy	(Started vm3 -> vm2)
Nov 13 13:47:19 [15883] vm1       crmd: (       fsa.c:193   )   debug: s_crmd_fsa: 	Processing I_PE_SUCCESS: [ state=S_POLICY_ENGINE cause=C_IPC_MESSAGE origin=handle_response ]
Nov 13 13:47:19 [15883] vm1       crmd: (       fsa.c:502   )    info: do_state_transition: 	State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
Nov 13 13:47:19 [15883] vm1       crmd: (    unpack.c:230   )   debug: unpack_graph: 	Unpacked transition 4: 6 actions in 6 synapses
Nov 13 13:47:19 [15883] vm1       crmd: (   tengine.c:208   )    info: do_te_invoke: 	Processing graph 4 (ref=pe_calc-dc-1384318039-49) derived from /var/lib/pacemaker/pengine/pe-warn-0.bz2
Nov 13 13:47:19 [15883] vm1       crmd: (te_actions.c:140   )  notice: te_fence_node: 	Executing reboot fencing operation (13) on vm3 (timeout=60000)
Nov 13 13:47:19 [15883] vm1       crmd: (     graph.c:336   )   debug: run_graph: 	Transition 4 (Complete=0, Pending=1, Fired=1, Skipped=0, Incomplete=5, Source=/var/lib/pacemaker/pengine/pe-warn-0.bz2): In-progress
Nov 13 13:47:19 [15879] vm1 stonith-ng: (      main.c:121   )   trace: st_ipc_dispatch: 	Flags 0/0 for command 100 from crmd.15883
Nov 13 13:47:19 [15879] vm1 stonith-ng: (      main.c:133   )   trace: st_ipc_dispatch: 	Client[inbound]   <stonith_command __name__="stonith_command" t="stonith-ng" st_async_id="e6cedc3f-e233-4a66-9291-ce359bd76aad" st_op="st_fence" st_callid="4" st_callopt="0" st_timeout="60" st_clientid="e6cedc3f-e233-4a66-9291-ce359bd76aad" st_clientname="crmd.15883" st_clientnode="vm1">
Nov 13 13:47:19 [15879] vm1 stonith-ng: (      main.c:133   )   trace: st_ipc_dispatch: 	Client[inbound]     <st_calldata>
Nov 13 13:47:19 [15879] vm1 stonith-ng: (      main.c:133   )   trace: st_ipc_dispatch: 	Client[inbound]       <stonith_api_fence st_target="vm3" st_device_action="reboot" st_timeout="60" st_tolerance="0"/>
Nov 13 13:47:19 [15879] vm1 stonith-ng: (      main.c:133   )   trace: st_ipc_dispatch: 	Client[inbound]     </st_calldata>
Nov 13 13:47:19 [15879] vm1 stonith-ng: (      main.c:133   )   trace: st_ipc_dispatch: 	Client[inbound]   </stonith_command>
Nov 13 13:47:19 [15879] vm1 stonith-ng: (  commands.c:2049  )   debug: stonith_command: 	Processing st_fence 100 from crmd.15883 (               0)
Nov 13 13:47:19 [15879] vm1 stonith-ng: (  commands.c:1875  )  notice: handle_request: 	Client crmd.15883.e6cedc3f wants to fence (reboot) 'vm3' with device '(any)'
Nov 13 13:47:19 [15879] vm1 stonith-ng: (    remote.c:1412  )   trace: stonith_check_fence_tolerance: 	tolerance=0, remote_op_list=0x193c5e0
Nov 13 13:47:19 [15879] vm1 stonith-ng: (    remote.c:593   )   trace: create_remote_stonith_op: 	Created 682bdc12-35a4-431a-8773-4862cc8c39ef
Nov 13 13:47:19 [15879] vm1 stonith-ng: (    remote.c:623   )   trace: create_remote_stonith_op: 	Generated new stonith op: 682bdc12-35a4-431a-8773-4862cc8c39ef - reboot of vm3 for crmd.15883
Nov 13 13:47:19 [15879] vm1 stonith-ng: (    remote.c:474   )   trace: merge_duplicates: 	Must be for different clients: crmd.15883
Nov 13 13:47:19 [15879] vm1 stonith-ng: (    remote.c:437   )   trace: stonith_topology_next: 	Attempting fencing level 1 for vm3 (1 devices) - crmd.15883@vm1.682bdc12
Nov 13 13:47:19 [15879] vm1 stonith-ng: (    remote.c:689   )  notice: initiate_remote_stonith_op: 	Initiating remote operation reboot for vm3: 682bdc12-35a4-431a-8773-4862cc8c39ef (0)
Nov 13 13:47:19 [15879] vm1 stonith-ng: (  commands.c:2063  )    info: stonith_command: 	Processed st_fence from crmd.15883: Operation now in progress (-115)
Nov 13 13:47:19 [15882] vm1    pengine: (   pengine.c:175   ) warning: process_pe_message: 	Calculated Transition 4: /var/lib/pacemaker/pengine/pe-warn-0.bz2
Nov 13 13:47:19 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]   <stonith_command __name__="stonith_command" t="stonith-ng" st_async_id="682bdc12-35a4-431a-8773-4862cc8c39ef" st_op="st_query" st_callid="4" st_callopt="0" st_remote_op="682bdc12-35a4-431a-8773-4862cc8c39ef" st_target="vm3" st_device_action="reboot" st_origin="vm1" st_clientid="e6cedc3f-e233-4a66-9291-ce359bd76aad" st_clientname="crmd.15883" st_timeout="60" src="vm1"/>
Nov 13 13:47:19 [15879] vm1 stonith-ng: (  commands.c:2049  )   debug: stonith_command: 	Processing st_query 0 from vm1 (               0)
Nov 13 13:47:19 [15879] vm1 stonith-ng: (    remote.c:576   )   debug: create_remote_stonith_op: 	682bdc12-35a4-431a-8773-4862cc8c39ef already exists
Nov 13 13:47:19 [15879] vm1 stonith-ng: (  commands.c:1296  )   debug: stonith_query: 	Query   <stonith_command __name__="stonith_command" t="stonith-ng" st_async_id="682bdc12-35a4-431a-8773-4862cc8c39ef" st_op="st_query" st_callid="4" st_callopt="0" st_remote_op="682bdc12-35a4-431a-8773-4862cc8c39ef" st_target="vm3" st_device_action="reboot" st_origin="vm1" st_clientid="e6cedc3f-e233-4a66-9291-ce359bd76aad" st_clientname="crmd.15883" st_timeout="60" src="vm1"/>
Nov 13 13:47:19 [15879] vm1 stonith-ng: (  commands.c:1650  )   trace: stonith_construct_reply: 	Creating a basic reply
Nov 13 13:47:19 [15879] vm1 stonith-ng: (  commands.c:1199  )   debug: get_capable_devices: 	Searching through 1 devices to see what is capable of action (reboot) for target vm3
Nov 13 13:47:19 [15879] vm1 stonith-ng: (  commands.c:1118  )  notice: can_fence_host_with_device: 	F1 can fence vm3: dynamic-list
Nov 13 13:47:19 [15879] vm1 stonith-ng: (  commands.c:1037  )   debug: search_devices_record_result: 	Finished Search. 1 devices can perform action (reboot) on node vm3
Nov 13 13:47:19 [15879] vm1 stonith-ng: (  commands.c:1254  )   debug: stonith_query_capable_device_cb: 	Found 1 matching devices for 'vm3'
Nov 13 13:47:19 [15879] vm1 stonith-ng: (  commands.c:1260  )   trace: stonith_query_capable_device_cb: 	Attaching query list output
Nov 13 13:47:19 [15879] vm1 stonith-ng: (  commands.c:2063  )    info: stonith_command: 	Processed st_query from vm1: OK (0)
Nov 13 13:47:19 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]   <st-reply st_origin="stonith_construct_reply" t="stonith-ng" st_rc="0" st_op="st_query" st_callid="4" st_clientid="e6cedc3f-e233-4a66-9291-ce359bd76aad" st_clientname="crmd.15883" st_remote_op="682bdc12-35a4-431a-8773-4862cc8c39ef" st_callopt="0" src="vm2">
Nov 13 13:47:19 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]     <st_calldata>
Nov 13 13:47:19 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]       <stonith_query_capable_device_cb st_target="vm3" st-available-devices="1">
Nov 13 13:47:19 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]         <st_device_id id="F1" namespace="heartbeat" agent="fence_legacy" st_monitor_verified="0"/>
Nov 13 13:47:19 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]       </stonith_query_capable_device_cb>
Nov 13 13:47:19 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]     </st_calldata>
Nov 13 13:47:19 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]   </st-reply>
Nov 13 13:47:19 [15879] vm1 stonith-ng: (  commands.c:2049  )   debug: stonith_command: 	Processing st_query reply 0 from vm2 (               0)
Nov 13 13:47:19 [15879] vm1 stonith-ng: (    remote.c:1154  )    info: process_remote_stonith_query: 	Query result 1 of 3 from vm2 (1 devices)
Nov 13 13:47:19 [15879] vm1 stonith-ng: (    remote.c:1196  )   trace: process_remote_stonith_query: 	All topology devices found
Nov 13 13:47:19 [15879] vm1 stonith-ng: (    remote.c:943   )   trace: call_remote_stonith: 	State for vm3.crmd.158: 682bdc12-35a4-431a-8773-4862cc8c39ef 0
Nov 13 13:47:19 [15879] vm1 stonith-ng: (    remote.c:904   )   trace: report_timeout_period: 	Reporting timeout for crmd.15883.682bdc12
Nov 13 13:47:19 [15879] vm1 stonith-ng: (      main.c:343   )   trace: do_stonith_async_timeout_update: 	timeout update is 72 for client e6cedc3f-e233-4a66-9291-ce359bd76aad and call id 4
Nov 13 13:47:19 [15879] vm1 stonith-ng: (    remote.c:955   )    info: call_remote_stonith: 	Total remote op timeout set to 60 for fencing of node vm3 for crmd.15883.682bdc12
Nov 13 13:47:19 [15879] vm1 stonith-ng: (    remote.c:786   )   trace: stonith_choose_peer: 	Checking for someone to fence vm3 with F1
Nov 13 13:47:19 [15879] vm1 stonith-ng: (    remote.c:757   )   trace: find_best_peer: 	Removing F1 from vm2 (1 remaining)
Nov 13 13:47:19 [15879] vm1 stonith-ng: (    remote.c:984   )    info: call_remote_stonith: 	Requesting that vm2 perform op reboot vm3 with F1 for crmd.15883 (72s)
Nov 13 13:47:19 [15879] vm1 stonith-ng: (  commands.c:2063  )    info: stonith_command: 	Processed st_query reply from vm2: OK (0)
Nov 13 13:47:19 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]   <st-reply st_origin="stonith_construct_reply" t="stonith-ng" st_rc="0" st_op="st_query" st_callid="4" st_clientid="e6cedc3f-e233-4a66-9291-ce359bd76aad" st_clientname="crmd.15883" st_remote_op="682bdc12-35a4-431a-8773-4862cc8c39ef" st_callopt="0" src="vm3">
Nov 13 13:47:19 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]     <st_calldata>
Nov 13 13:47:19 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]       <stonith_query_capable_device_cb st_target="vm3" st-available-devices="0"/>
Nov 13 13:47:19 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]     </st_calldata>
Nov 13 13:47:19 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]   </st-reply>
Nov 13 13:47:19 [15879] vm1 stonith-ng: (  commands.c:2049  )   debug: stonith_command: 	Processing st_query reply 0 from vm3 (               0)
Nov 13 13:47:19 [15879] vm1 stonith-ng: (    remote.c:1137  )   trace: process_remote_stonith_query: 	Query result from vm3 (0 devices)
Nov 13 13:47:19 [15879] vm1 stonith-ng: (  commands.c:2063  )    info: stonith_command: 	Processed st_query reply from vm3: OK (0)
Nov 13 13:47:19 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]   <st-reply st_origin="stonith_construct_reply" t="stonith-ng" st_rc="0" st_op="st_query" st_callid="4" st_clientid="e6cedc3f-e233-4a66-9291-ce359bd76aad" st_clientname="crmd.15883" st_remote_op="682bdc12-35a4-431a-8773-4862cc8c39ef" st_callopt="0" src="vm1">
Nov 13 13:47:19 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]     <st_calldata>
Nov 13 13:47:19 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]       <stonith_query_capable_device_cb st_target="vm3" st-available-devices="1">
Nov 13 13:47:19 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]         <st_device_id id="F1" namespace="heartbeat" agent="fence_legacy" st_monitor_verified="1"/>
Nov 13 13:47:19 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]       </stonith_query_capable_device_cb>
Nov 13 13:47:19 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]     </st_calldata>
Nov 13 13:47:19 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]   </st-reply>
Nov 13 13:47:19 [15879] vm1 stonith-ng: (  commands.c:2049  )   debug: stonith_command: 	Processing st_query reply 0 from vm1 (               0)
Nov 13 13:47:19 [15879] vm1 stonith-ng: (    remote.c:1154  )    info: process_remote_stonith_query: 	Query result 3 of 3 from vm1 (1 devices)
Nov 13 13:47:19 [15879] vm1 stonith-ng: (    remote.c:1177  )   trace: process_remote_stonith_query: 	Peer vm1 has confirmed a verified device F1
Nov 13 13:47:19 [15879] vm1 stonith-ng: (  commands.c:2063  )    info: stonith_command: 	Processed st_query reply from vm1: OK (0)
Nov 13 13:47:21 [15883] vm1       crmd: (  throttle.c:256   )   debug: throttle_cib_load: 	Current CIB load from 1 ticks in 30s is 0.000333 (@100 tps)
Nov 13 13:47:21 [15883] vm1       crmd: (  throttle.c:298   )   debug: throttle_load_avg: 	Current load is 0.030000 (full: 0.03 0.03 0.00 1/115 16064)
Nov 13 13:47:21 [15883] vm1       crmd: (  throttle.c:378   )   debug: throttle_io_load: 	Current IO load is 0.000000
Nov 13 13:47:22 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]   <st-reply st_origin="vm1" t="stonith-ng" st_op="st_fence" st_device_id="F1" st_remote_op="682bdc12-35a4-431a-8773-4862cc8c39ef" st_clientid="e6cedc3f-e233-4a66-9291-ce359bd76aad" st_clientname="crmd.15883" st_target="vm3" st_device_action="st_fence" st_callid="4" st_callopt="0" st_rc="-201" st_output="Performing: stonith -t external/libvirt -T reset vm3\nfailed: vm3 5\n" src="vm2"/>
Nov 13 13:47:22 [15879] vm1 stonith-ng: (  commands.c:2049  )   debug: stonith_command: 	Processing st_fence reply 0 from vm2 (               0)
Nov 13 13:47:22 [15879] vm1 stonith-ng: (    remote.c:1303  )  notice: process_remote_stonith_exec: 	Call to F1 for vm3 on behalf of crmd.15883@vm1: Generic Pacemaker error (-201)
Nov 13 13:47:22 [15879] vm1 stonith-ng: (    remote.c:443   )  notice: stonith_topology_next: 	All fencing options to fence vm3 for crmd.15883@vm1.682bdc12 failed
Nov 13 13:47:22 [15879] vm1 stonith-ng: (    remote.c:160   )   trace: bcast_result_to_peers: 	Broadcasting result to peers
Nov 13 13:47:22 [15879] vm1 stonith-ng: (  commands.c:2063  )    info: stonith_command: 	Processed st_fence reply from vm2: OK (0)
Nov 13 13:47:22 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]   <st-reply t="st_notify" subt="broadcast" st_op="st_notify" count="3" src="vm1">
Nov 13 13:47:22 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]     <st_calldata>
Nov 13 13:47:22 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]       <st_notify_fence state="4" st_rc="-201" st_target="vm3" st_device_action="reboot" st_delegate="vm2" st_remote_op="682bdc12-35a4-431a-8773-4862cc8c39ef" st_origin="vm1" st_clientid="e6cedc3f-e233-4a66-9291-ce359bd76aad" st_clientname="crmd.15883"/>
Nov 13 13:47:22 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]     </st_calldata>
Nov 13 13:47:22 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]   </st-reply>
Nov 13 13:47:22 [15879] vm1 stonith-ng: (  commands.c:2049  )   debug: stonith_command: 	Processing st_notify reply 0 from vm1 (               0)
Nov 13 13:47:22 [15879] vm1 stonith-ng: (    remote.c:1281  )   debug: process_remote_stonith_exec: 	Marking call to reboot for vm3 on behalf of crmd.15883@682bdc12-35a4-431a-8773-4862cc8c39ef.vm1: Generic Pacemaker error (-201)
Nov 13 13:47:22 [15879] vm1 stonith-ng: (    remote.c:297   )   error: remote_op_done: 	Operation reboot of vm3 by vm2 for crmd.15883@vm1.682bdc12: Generic Pacemaker error
Nov 13 13:47:22 [15879] vm1 stonith-ng: (  commands.c:1650  )   trace: stonith_construct_reply: 	Creating a basic reply
Nov 13 13:47:22 [15879] vm1 stonith-ng: (  commands.c:1666  )   trace: stonith_construct_reply: 	Attaching reply output
Nov 13 13:47:22 [15879] vm1 stonith-ng: (      main.c:241   )   trace: do_local_reply: 	Sending response
Nov 13 13:47:22 [15879] vm1 stonith-ng: (      main.c:244   )   trace: do_local_reply: 	Sending callback to request originator
Nov 13 13:47:22 [15879] vm1 stonith-ng: (      main.c:263   )   trace: do_local_reply: 	Sending an event to crmd.15883 
Nov 13 13:47:22 [15879] vm1 stonith-ng: (      main.c:369   )   trace: do_stonith_notify: 	Notifying clients
Nov 13 13:47:22 [15879] vm1 stonith-ng: (      main.c:318   )   trace: stonith_notify_client: 	Sent st_notify_fence notification to client crmd.15883.e6cedc
Nov 13 13:47:22 [15879] vm1 stonith-ng: (      main.c:372   )   trace: do_stonith_notify: 	Notify complete
Nov 13 13:47:22 [15879] vm1 stonith-ng: (    remote.c:77    )   trace: free_remote_query: 	Free'ing query result from vm1
Nov 13 13:47:22 [15879] vm1 stonith-ng: (    remote.c:77    )   trace: free_remote_query: 	Free'ing query result from vm2
Nov 13 13:47:22 [15879] vm1 stonith-ng: (  commands.c:2063  )    info: stonith_command: 	Processed st_notify reply from vm1: OK (0)
Nov 13 13:47:22 [15883] vm1       crmd: (te_callbacks:411   )  notice: tengine_stonith_callback: 	Stonith operation 4/13:4:0:154fb289-24e8-407e-9a03-69a510480b60: Generic Pacemaker error (-201)
Nov 13 13:47:22 [15883] vm1       crmd: (te_callbacks:462   )  notice: tengine_stonith_callback: 	Stonith operation 4 for vm3 failed (Generic Pacemaker error): aborting transition.
Nov 13 13:47:22 [15883] vm1       crmd: (  te_utils.c:425   )    info: abort_transition_graph: 	tengine_stonith_callback:463 - Triggered transition abort (complete=0) : Stonith failed
Nov 13 13:47:22 [15883] vm1       crmd: (     utils.c:271   )   debug: update_abort_priority: 	Abort priority upgraded from 0 to 1000000
Nov 13 13:47:22 [15883] vm1       crmd: (     utils.c:281   )   debug: update_abort_priority: 	Abort action done superceeded by restart
Nov 13 13:47:22 [15883] vm1       crmd: (  te_utils.c:167   )  notice: tengine_stonith_notify: 	Peer vm3 was not terminated (reboot) by vm2 for vm1: Generic Pacemaker error (ref=682bdc12-35a4-431a-8773-4862cc8c39ef) by client crmd.15883
Nov 13 13:47:22 [15883] vm1       crmd: (     graph.c:336   )  notice: run_graph: 	Transition 4 (Complete=1, Pending=0, Fired=0, Skipped=5, Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-warn-0.bz2): Stopped
Nov 13 13:47:22 [15883] vm1       crmd: (  te_utils.c:349   )   debug: te_graph_trigger: 	Transition 4 is now complete
Nov 13 13:47:22 [15883] vm1       crmd: (te_actions.c:654   )   debug: notify_crmd: 	Processing transition completion in state S_TRANSITION_ENGINE
Nov 13 13:47:22 [15883] vm1       crmd: (     utils.c:192   )   debug: crm_timer_start: 	Started New Transition Timer (I_PE_CALC:2000ms), src=98
Nov 13 13:47:22 [15883] vm1       crmd: (te_actions.c:699   )   debug: notify_crmd: 	Transition 4 status: restart - Stonith failed
Nov 13 13:47:24 [15883] vm1       crmd: (     utils.c:120   )    info: crm_timer_popped: 	New Transition Timer (I_PE_CALC) just popped (2000ms)
Nov 13 13:47:24 [15883] vm1       crmd: (       fsa.c:193   )   debug: s_crmd_fsa: 	Processing I_PE_CALC: [ state=S_TRANSITION_ENGINE cause=C_TIMER_POPPED origin=crm_timer_popped ]
Nov 13 13:47:24 [15883] vm1       crmd: (       fsa.c:502   )    info: do_state_transition: 	State transition S_TRANSITION_ENGINE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_TIMER_POPPED origin=crm_timer_popped ]
Nov 13 13:47:24 [15883] vm1       crmd: (       fsa.c:599   )    info: do_state_transition: 	Progressed to state S_POLICY_ENGINE after C_TIMER_POPPED
Nov 13 13:47:24 [15883] vm1       crmd: (       fsa.c:610   )   debug: do_state_transition: 	All 3 cluster nodes are eligible to run resources.
Nov 13 13:47:24 [15883] vm1       crmd: (   pengine.c:231   )   debug: do_pe_invoke: 	Query 68: Requesting the current CIB: S_POLICY_ENGINE
Nov 13 13:47:24 [15878] vm1        cib: ( callbacks.c:761   )    info: cib_process_request: 	Completed cib_query operation for section 'all': OK (rc=0, origin=local/crmd/68, version=0.8.20)
Nov 13 13:47:24 [15882] vm1    pengine: (   pengine.c:116   )    info: process_pe_message: 	Input has not changed since last time, not saving to disk
Nov 13 13:47:24 [15882] vm1    pengine: (    unpack.c:98    )   debug: unpack_config: 	STONITH timeout: 60000
Nov 13 13:47:24 [15882] vm1    pengine: (    unpack.c:102   )   debug: unpack_config: 	STONITH of failed nodes is enabled
Nov 13 13:47:24 [15882] vm1    pengine: (    unpack.c:109   )   debug: unpack_config: 	Stop all active resources: false
Nov 13 13:47:24 [15882] vm1    pengine: (    unpack.c:113   )   debug: unpack_config: 	Cluster is symmetric - resources can run anywhere by default
Nov 13 13:47:24 [15882] vm1    pengine: (    unpack.c:118   )   debug: unpack_config: 	Default stickiness: 0
Nov 13 13:47:24 [15882] vm1    pengine: (    unpack.c:152   )   debug: unpack_config: 	On loss of CCM Quorum: Freeze resources
Nov 13 13:47:24 [15882] vm1    pengine: (    unpack.c:201   )   debug: unpack_config: 	Node scores: 'red' = -INFINITY, 'yellow' = 0, 'green' = 0
Nov 13 13:47:24 [15882] vm1    pengine: (    unpack.c:509   )   debug: unpack_domains: 	Unpacking domains
Nov 13 13:47:24 [15882] vm1    pengine: (    unpack.c:1220  )    info: determine_online_status_fencing: 	Node vm3 is active
Nov 13 13:47:24 [15882] vm1    pengine: (    unpack.c:1336  )    info: determine_online_status: 	Node vm3 is online
Nov 13 13:47:24 [15882] vm1    pengine: (    unpack.c:1220  )    info: determine_online_status_fencing: 	Node vm1 is active
Nov 13 13:47:24 [15882] vm1    pengine: (    unpack.c:1336  )    info: determine_online_status: 	Node vm1 is online
Nov 13 13:47:24 [15882] vm1    pengine: (    unpack.c:1220  )    info: determine_online_status_fencing: 	Node vm2 is active
Nov 13 13:47:24 [15882] vm1    pengine: (    unpack.c:1336  )    info: determine_online_status: 	Node vm2 is online
Nov 13 13:47:24 [15882] vm1    pengine: (    unpack.c:2456  )   debug: determine_op_status: 	pDummy_monitor_10000 on vm3 returned 'not running' (7) instead of the expected value: 'ok' (0)
Nov 13 13:47:24 [15882] vm1    pengine: (    unpack.c:2334  ) warning: unpack_rsc_op_failure: 	Processing failed op monitor for pDummy on vm3: not running (7)
Nov 13 13:47:24 [15883] vm1       crmd: (   pengine.c:299   )   debug: do_pe_invoke_callback: 	Invoking the PE: query=68, ref=pe_calc-dc-1384318044-50, seq=12, quorate=1
Nov 13 13:47:24 [15882] vm1    pengine: (    unpack.c:64    ) warning: pe_fence_node: 	Node vm3 will be fenced because of resource failure(s)
Nov 13 13:47:24 [15882] vm1    pengine: (    native.c:446   )    info: native_print: 	F1	(stonith:external/libvirt):	Started vm1 
Nov 13 13:47:24 [15882] vm1    pengine: (    native.c:446   )    info: native_print: 	pDummy	(ocf::pacemaker:Dummy):	FAILED vm3 
Nov 13 13:47:24 [15882] vm1    pengine: (  allocate.c:593   )   debug: common_apply_stickiness: 	Resource F1: preferring current location (node=vm1, weight=1000000)
Nov 13 13:47:24 [15882] vm1    pengine: (  allocate.c:593   )   debug: common_apply_stickiness: 	Resource pDummy: preferring current location (node=vm3, weight=1000000)
Nov 13 13:47:24 [15882] vm1    pengine: (     utils.c:1497  )    info: get_failcount_full: 	pDummy has failed 1 times on vm3
Nov 13 13:47:24 [15882] vm1    pengine: (  allocate.c:622   ) warning: common_apply_stickiness: 	Forcing pDummy away from vm3 after 1 failures (max=1)
Nov 13 13:47:24 [15882] vm1    pengine: (     utils.c:386   )   debug: native_assign_node: 	Assigning vm1 to F1
Nov 13 13:47:24 [15882] vm1    pengine: (     utils.c:386   )   debug: native_assign_node: 	Assigning vm2 to pDummy
Nov 13 13:47:24 [15882] vm1    pengine: (    native.c:788   )    info: RecurringOp: 	 Start recurring monitor (10s) for pDummy on vm2
Nov 13 13:47:24 [15882] vm1    pengine: (  allocate.c:1346  ) warning: stage6: 	Scheduling Node vm3 for STONITH
Nov 13 13:47:24 [15882] vm1    pengine: (    native.c:2585  )  notice: native_stop_constraints: 	Stop of failed resource pDummy is implicit after vm3 is fenced
Nov 13 13:47:24 [15882] vm1    pengine: (    native.c:1987  )    info: LogActions: 	Leave   F1	(Started vm1)
Nov 13 13:47:24 [15882] vm1    pengine: (    native.c:1996  )  notice: LogActions: 	Recover pDummy	(Started vm3 -> vm2)
Nov 13 13:47:24 [15883] vm1       crmd: (       fsa.c:193   )   debug: s_crmd_fsa: 	Processing I_PE_SUCCESS: [ state=S_POLICY_ENGINE cause=C_IPC_MESSAGE origin=handle_response ]
Nov 13 13:47:24 [15883] vm1       crmd: (       fsa.c:502   )    info: do_state_transition: 	State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
Nov 13 13:47:24 [15883] vm1       crmd: (    unpack.c:230   )   debug: unpack_graph: 	Unpacked transition 5: 6 actions in 6 synapses
Nov 13 13:47:24 [15883] vm1       crmd: (   tengine.c:208   )    info: do_te_invoke: 	Processing graph 5 (ref=pe_calc-dc-1384318044-50) derived from /var/lib/pacemaker/pengine/pe-warn-0.bz2
Nov 13 13:47:24 [15883] vm1       crmd: (te_actions.c:140   )  notice: te_fence_node: 	Executing reboot fencing operation (13) on vm3 (timeout=60000)
Nov 13 13:47:24 [15883] vm1       crmd: (     graph.c:336   )   debug: run_graph: 	Transition 5 (Complete=0, Pending=1, Fired=1, Skipped=0, Incomplete=5, Source=/var/lib/pacemaker/pengine/pe-warn-0.bz2): In-progress
Nov 13 13:47:24 [15879] vm1 stonith-ng: (      main.c:121   )   trace: st_ipc_dispatch: 	Flags 0/0 for command 103 from crmd.15883
Nov 13 13:47:24 [15879] vm1 stonith-ng: (      main.c:133   )   trace: st_ipc_dispatch: 	Client[inbound]   <stonith_command __name__="stonith_command" t="stonith-ng" st_async_id="e6cedc3f-e233-4a66-9291-ce359bd76aad" st_op="st_fence" st_callid="5" st_callopt="0" st_timeout="60" st_clientid="e6cedc3f-e233-4a66-9291-ce359bd76aad" st_clientname="crmd.15883" st_clientnode="vm1">
Nov 13 13:47:24 [15879] vm1 stonith-ng: (      main.c:133   )   trace: st_ipc_dispatch: 	Client[inbound]     <st_calldata>
Nov 13 13:47:24 [15879] vm1 stonith-ng: (      main.c:133   )   trace: st_ipc_dispatch: 	Client[inbound]       <stonith_api_fence st_target="vm3" st_device_action="reboot" st_timeout="60" st_tolerance="0"/>
Nov 13 13:47:24 [15879] vm1 stonith-ng: (      main.c:133   )   trace: st_ipc_dispatch: 	Client[inbound]     </st_calldata>
Nov 13 13:47:24 [15879] vm1 stonith-ng: (      main.c:133   )   trace: st_ipc_dispatch: 	Client[inbound]   </stonith_command>
Nov 13 13:47:24 [15879] vm1 stonith-ng: (  commands.c:2049  )   debug: stonith_command: 	Processing st_fence 103 from crmd.15883 (               0)
Nov 13 13:47:24 [15879] vm1 stonith-ng: (  commands.c:1875  )  notice: handle_request: 	Client crmd.15883.e6cedc3f wants to fence (reboot) 'vm3' with device '(any)'
Nov 13 13:47:24 [15879] vm1 stonith-ng: (    remote.c:1412  )   trace: stonith_check_fence_tolerance: 	tolerance=0, remote_op_list=0x193c5e0
Nov 13 13:47:24 [15879] vm1 stonith-ng: (    remote.c:593   )   trace: create_remote_stonith_op: 	Created d761e73f-f337-48cc-b2a1-5b2d722d2738
Nov 13 13:47:24 [15879] vm1 stonith-ng: (    remote.c:623   )   trace: create_remote_stonith_op: 	Generated new stonith op: d761e73f-f337-48cc-b2a1-5b2d722d2738 - reboot of vm3 for crmd.15883
Nov 13 13:47:24 [15879] vm1 stonith-ng: (    remote.c:474   )   trace: merge_duplicates: 	Must be for different clients: crmd.15883
Nov 13 13:47:24 [15879] vm1 stonith-ng: (    remote.c:437   )   trace: stonith_topology_next: 	Attempting fencing level 1 for vm3 (1 devices) - crmd.15883@vm1.d761e73f
Nov 13 13:47:24 [15879] vm1 stonith-ng: (    remote.c:689   )  notice: initiate_remote_stonith_op: 	Initiating remote operation reboot for vm3: d761e73f-f337-48cc-b2a1-5b2d722d2738 (0)
Nov 13 13:47:24 [15879] vm1 stonith-ng: (  commands.c:2063  )    info: stonith_command: 	Processed st_fence from crmd.15883: Operation now in progress (-115)
Nov 13 13:47:24 [15882] vm1    pengine: (   pengine.c:175   ) warning: process_pe_message: 	Calculated Transition 5: /var/lib/pacemaker/pengine/pe-warn-0.bz2
Nov 13 13:47:24 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]   <stonith_command __name__="stonith_command" t="stonith-ng" st_async_id="d761e73f-f337-48cc-b2a1-5b2d722d2738" st_op="st_query" st_callid="5" st_callopt="0" st_remote_op="d761e73f-f337-48cc-b2a1-5b2d722d2738" st_target="vm3" st_device_action="reboot" st_origin="vm1" st_clientid="e6cedc3f-e233-4a66-9291-ce359bd76aad" st_clientname="crmd.15883" st_timeout="60" src="vm1"/>
Nov 13 13:47:24 [15879] vm1 stonith-ng: (  commands.c:2049  )   debug: stonith_command: 	Processing st_query 0 from vm1 (               0)
Nov 13 13:47:24 [15879] vm1 stonith-ng: (    remote.c:576   )   debug: create_remote_stonith_op: 	d761e73f-f337-48cc-b2a1-5b2d722d2738 already exists
Nov 13 13:47:24 [15879] vm1 stonith-ng: (  commands.c:1296  )   debug: stonith_query: 	Query   <stonith_command __name__="stonith_command" t="stonith-ng" st_async_id="d761e73f-f337-48cc-b2a1-5b2d722d2738" st_op="st_query" st_callid="5" st_callopt="0" st_remote_op="d761e73f-f337-48cc-b2a1-5b2d722d2738" st_target="vm3" st_device_action="reboot" st_origin="vm1" st_clientid="e6cedc3f-e233-4a66-9291-ce359bd76aad" st_clientname="crmd.15883" st_timeout="60" src="vm1"/>
Nov 13 13:47:24 [15879] vm1 stonith-ng: (  commands.c:1650  )   trace: stonith_construct_reply: 	Creating a basic reply
Nov 13 13:47:24 [15879] vm1 stonith-ng: (  commands.c:1199  )   debug: get_capable_devices: 	Searching through 1 devices to see what is capable of action (reboot) for target vm3
Nov 13 13:47:24 [15879] vm1 stonith-ng: (  commands.c:1118  )  notice: can_fence_host_with_device: 	F1 can fence vm3: dynamic-list
Nov 13 13:47:24 [15879] vm1 stonith-ng: (  commands.c:1037  )   debug: search_devices_record_result: 	Finished Search. 1 devices can perform action (reboot) on node vm3
Nov 13 13:47:24 [15879] vm1 stonith-ng: (  commands.c:1254  )   debug: stonith_query_capable_device_cb: 	Found 1 matching devices for 'vm3'
Nov 13 13:47:24 [15879] vm1 stonith-ng: (  commands.c:1260  )   trace: stonith_query_capable_device_cb: 	Attaching query list output
Nov 13 13:47:24 [15879] vm1 stonith-ng: (  commands.c:2063  )    info: stonith_command: 	Processed st_query from vm1: OK (0)
Nov 13 13:47:24 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]   <st-reply st_origin="stonith_construct_reply" t="stonith-ng" st_rc="0" st_op="st_query" st_callid="5" st_clientid="e6cedc3f-e233-4a66-9291-ce359bd76aad" st_clientname="crmd.15883" st_remote_op="d761e73f-f337-48cc-b2a1-5b2d722d2738" st_callopt="0" src="vm2">
Nov 13 13:47:24 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]     <st_calldata>
Nov 13 13:47:24 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]       <stonith_query_capable_device_cb st_target="vm3" st-available-devices="1">
Nov 13 13:47:24 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]         <st_device_id id="F1" namespace="heartbeat" agent="fence_legacy" st_monitor_verified="0"/>
Nov 13 13:47:24 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]       </stonith_query_capable_device_cb>
Nov 13 13:47:24 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]     </st_calldata>
Nov 13 13:47:24 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]   </st-reply>
Nov 13 13:47:24 [15879] vm1 stonith-ng: (  commands.c:2049  )   debug: stonith_command: 	Processing st_query reply 0 from vm2 (               0)
Nov 13 13:47:24 [15879] vm1 stonith-ng: (    remote.c:1154  )    info: process_remote_stonith_query: 	Query result 1 of 3 from vm2 (1 devices)
Nov 13 13:47:24 [15879] vm1 stonith-ng: (    remote.c:1196  )   trace: process_remote_stonith_query: 	All topology devices found
Nov 13 13:47:24 [15879] vm1 stonith-ng: (    remote.c:943   )   trace: call_remote_stonith: 	State for vm3.crmd.158: d761e73f-f337-48cc-b2a1-5b2d722d2738 0
Nov 13 13:47:24 [15879] vm1 stonith-ng: (    remote.c:904   )   trace: report_timeout_period: 	Reporting timeout for crmd.15883.d761e73f
Nov 13 13:47:24 [15879] vm1 stonith-ng: (      main.c:343   )   trace: do_stonith_async_timeout_update: 	timeout update is 72 for client e6cedc3f-e233-4a66-9291-ce359bd76aad and call id 5
Nov 13 13:47:24 [15879] vm1 stonith-ng: (    remote.c:955   )    info: call_remote_stonith: 	Total remote op timeout set to 60 for fencing of node vm3 for crmd.15883.d761e73f
Nov 13 13:47:24 [15879] vm1 stonith-ng: (    remote.c:786   )   trace: stonith_choose_peer: 	Checking for someone to fence vm3 with F1
Nov 13 13:47:24 [15879] vm1 stonith-ng: (    remote.c:757   )   trace: find_best_peer: 	Removing F1 from vm2 (1 remaining)
Nov 13 13:47:24 [15879] vm1 stonith-ng: (    remote.c:984   )    info: call_remote_stonith: 	Requesting that vm2 perform op reboot vm3 with F1 for crmd.15883 (72s)
Nov 13 13:47:24 [15879] vm1 stonith-ng: (  commands.c:2063  )    info: stonith_command: 	Processed st_query reply from vm2: OK (0)
Nov 13 13:47:24 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]   <st-reply st_origin="stonith_construct_reply" t="stonith-ng" st_rc="0" st_op="st_query" st_callid="5" st_clientid="e6cedc3f-e233-4a66-9291-ce359bd76aad" st_clientname="crmd.15883" st_remote_op="d761e73f-f337-48cc-b2a1-5b2d722d2738" st_callopt="0" src="vm3">
Nov 13 13:47:24 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]     <st_calldata>
Nov 13 13:47:24 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]       <stonith_query_capable_device_cb st_target="vm3" st-available-devices="0"/>
Nov 13 13:47:24 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]     </st_calldata>
Nov 13 13:47:24 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]   </st-reply>
Nov 13 13:47:24 [15879] vm1 stonith-ng: (  commands.c:2049  )   debug: stonith_command: 	Processing st_query reply 0 from vm3 (               0)
Nov 13 13:47:24 [15879] vm1 stonith-ng: (    remote.c:1137  )   trace: process_remote_stonith_query: 	Query result from vm3 (0 devices)
Nov 13 13:47:24 [15879] vm1 stonith-ng: (  commands.c:2063  )    info: stonith_command: 	Processed st_query reply from vm3: OK (0)
Nov 13 13:47:24 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]   <st-reply st_origin="stonith_construct_reply" t="stonith-ng" st_rc="0" st_op="st_query" st_callid="5" st_clientid="e6cedc3f-e233-4a66-9291-ce359bd76aad" st_clientname="crmd.15883" st_remote_op="d761e73f-f337-48cc-b2a1-5b2d722d2738" st_callopt="0" src="vm1">
Nov 13 13:47:24 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]     <st_calldata>
Nov 13 13:47:24 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]       <stonith_query_capable_device_cb st_target="vm3" st-available-devices="1">
Nov 13 13:47:24 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]         <st_device_id id="F1" namespace="heartbeat" agent="fence_legacy" st_monitor_verified="1"/>
Nov 13 13:47:24 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]       </stonith_query_capable_device_cb>
Nov 13 13:47:24 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]     </st_calldata>
Nov 13 13:47:24 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]   </st-reply>
Nov 13 13:47:24 [15879] vm1 stonith-ng: (  commands.c:2049  )   debug: stonith_command: 	Processing st_query reply 0 from vm1 (               0)
Nov 13 13:47:24 [15879] vm1 stonith-ng: (    remote.c:1154  )    info: process_remote_stonith_query: 	Query result 3 of 3 from vm1 (1 devices)
Nov 13 13:47:24 [15879] vm1 stonith-ng: (    remote.c:1177  )   trace: process_remote_stonith_query: 	Peer vm1 has confirmed a verified device F1
Nov 13 13:47:24 [15879] vm1 stonith-ng: (  commands.c:2063  )    info: stonith_command: 	Processed st_query reply from vm1: OK (0)
Nov 13 13:47:27 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]   <st-reply st_origin="vm1" t="stonith-ng" st_op="st_fence" st_device_id="F1" st_remote_op="d761e73f-f337-48cc-b2a1-5b2d722d2738" st_clientid="e6cedc3f-e233-4a66-9291-ce359bd76aad" st_clientname="crmd.15883" st_target="vm3" st_device_action="st_fence" st_callid="5" st_callopt="0" st_rc="-201" st_output="Performing: stonith -t external/libvirt -T reset vm3\nfailed: vm3 5\n" src="vm2"/>
Nov 13 13:47:27 [15879] vm1 stonith-ng: (  commands.c:2049  )   debug: stonith_command: 	Processing st_fence reply 0 from vm2 (               0)
Nov 13 13:47:27 [15879] vm1 stonith-ng: (    remote.c:1303  )  notice: process_remote_stonith_exec: 	Call to F1 for vm3 on behalf of crmd.15883@vm1: Generic Pacemaker error (-201)
Nov 13 13:47:27 [15879] vm1 stonith-ng: (    remote.c:443   )  notice: stonith_topology_next: 	All fencing options to fence vm3 for crmd.15883@vm1.d761e73f failed
Nov 13 13:47:27 [15879] vm1 stonith-ng: (    remote.c:160   )   trace: bcast_result_to_peers: 	Broadcasting result to peers
Nov 13 13:47:27 [15879] vm1 stonith-ng: (  commands.c:2063  )    info: stonith_command: 	Processed st_fence reply from vm2: OK (0)
Nov 13 13:47:27 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]   <st-reply t="st_notify" subt="broadcast" st_op="st_notify" count="4" src="vm1">
Nov 13 13:47:27 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]     <st_calldata>
Nov 13 13:47:27 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]       <st_notify_fence state="4" st_rc="-201" st_target="vm3" st_device_action="reboot" st_delegate="vm2" st_remote_op="d761e73f-f337-48cc-b2a1-5b2d722d2738" st_origin="vm1" st_clientid="e6cedc3f-e233-4a66-9291-ce359bd76aad" st_clientname="crmd.15883"/>
Nov 13 13:47:27 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]     </st_calldata>
Nov 13 13:47:27 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]   </st-reply>
Nov 13 13:47:27 [15879] vm1 stonith-ng: (  commands.c:2049  )   debug: stonith_command: 	Processing st_notify reply 0 from vm1 (               0)
Nov 13 13:47:27 [15879] vm1 stonith-ng: (    remote.c:1281  )   debug: process_remote_stonith_exec: 	Marking call to reboot for vm3 on behalf of crmd.15883@d761e73f-f337-48cc-b2a1-5b2d722d2738.vm1: Generic Pacemaker error (-201)
Nov 13 13:47:27 [15879] vm1 stonith-ng: (    remote.c:297   )   error: remote_op_done: 	Operation reboot of vm3 by vm2 for crmd.15883@vm1.d761e73f: Generic Pacemaker error
Nov 13 13:47:27 [15879] vm1 stonith-ng: (  commands.c:1650  )   trace: stonith_construct_reply: 	Creating a basic reply
Nov 13 13:47:27 [15879] vm1 stonith-ng: (  commands.c:1666  )   trace: stonith_construct_reply: 	Attaching reply output
Nov 13 13:47:27 [15879] vm1 stonith-ng: (      main.c:241   )   trace: do_local_reply: 	Sending response
Nov 13 13:47:27 [15879] vm1 stonith-ng: (      main.c:244   )   trace: do_local_reply: 	Sending callback to request originator
Nov 13 13:47:27 [15879] vm1 stonith-ng: (      main.c:263   )   trace: do_local_reply: 	Sending an event to crmd.15883 
Nov 13 13:47:27 [15879] vm1 stonith-ng: (      main.c:369   )   trace: do_stonith_notify: 	Notifying clients
Nov 13 13:47:27 [15879] vm1 stonith-ng: (      main.c:318   )   trace: stonith_notify_client: 	Sent st_notify_fence notification to client crmd.15883.e6cedc
Nov 13 13:47:27 [15879] vm1 stonith-ng: (      main.c:372   )   trace: do_stonith_notify: 	Notify complete
Nov 13 13:47:27 [15879] vm1 stonith-ng: (    remote.c:77    )   trace: free_remote_query: 	Free'ing query result from vm1
Nov 13 13:47:27 [15879] vm1 stonith-ng: (    remote.c:77    )   trace: free_remote_query: 	Free'ing query result from vm2
Nov 13 13:47:27 [15879] vm1 stonith-ng: (  commands.c:2063  )    info: stonith_command: 	Processed st_notify reply from vm1: OK (0)
Nov 13 13:47:27 [15883] vm1       crmd: (te_callbacks:411   )  notice: tengine_stonith_callback: 	Stonith operation 5/13:5:0:154fb289-24e8-407e-9a03-69a510480b60: Generic Pacemaker error (-201)
Nov 13 13:47:27 [15883] vm1       crmd: (te_callbacks:462   )  notice: tengine_stonith_callback: 	Stonith operation 5 for vm3 failed (Generic Pacemaker error): aborting transition.
Nov 13 13:47:27 [15883] vm1       crmd: (  te_utils.c:425   )    info: abort_transition_graph: 	tengine_stonith_callback:463 - Triggered transition abort (complete=0) : Stonith failed
Nov 13 13:47:27 [15883] vm1       crmd: (     utils.c:271   )   debug: update_abort_priority: 	Abort priority upgraded from 0 to 1000000
Nov 13 13:47:27 [15883] vm1       crmd: (     utils.c:281   )   debug: update_abort_priority: 	Abort action done superceeded by restart
Nov 13 13:47:27 [15883] vm1       crmd: (  te_utils.c:167   )  notice: tengine_stonith_notify: 	Peer vm3 was not terminated (reboot) by vm2 for vm1: Generic Pacemaker error (ref=d761e73f-f337-48cc-b2a1-5b2d722d2738) by client crmd.15883
Nov 13 13:47:27 [15883] vm1       crmd: (     graph.c:336   )  notice: run_graph: 	Transition 5 (Complete=1, Pending=0, Fired=0, Skipped=5, Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-warn-0.bz2): Stopped
Nov 13 13:47:27 [15883] vm1       crmd: (  te_utils.c:349   )   debug: te_graph_trigger: 	Transition 5 is now complete
Nov 13 13:47:27 [15883] vm1       crmd: (te_actions.c:654   )   debug: notify_crmd: 	Processing transition completion in state S_TRANSITION_ENGINE
Nov 13 13:47:27 [15883] vm1       crmd: (     utils.c:192   )   debug: crm_timer_start: 	Started New Transition Timer (I_PE_CALC:2000ms), src=102
Nov 13 13:47:27 [15883] vm1       crmd: (te_actions.c:699   )   debug: notify_crmd: 	Transition 5 status: restart - Stonith failed
Nov 13 13:47:29 [15883] vm1       crmd: (     utils.c:120   )    info: crm_timer_popped: 	New Transition Timer (I_PE_CALC) just popped (2000ms)
Nov 13 13:47:29 [15883] vm1       crmd: (       fsa.c:193   )   debug: s_crmd_fsa: 	Processing I_PE_CALC: [ state=S_TRANSITION_ENGINE cause=C_TIMER_POPPED origin=crm_timer_popped ]
Nov 13 13:47:29 [15883] vm1       crmd: (       fsa.c:502   )    info: do_state_transition: 	State transition S_TRANSITION_ENGINE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_TIMER_POPPED origin=crm_timer_popped ]
Nov 13 13:47:29 [15883] vm1       crmd: (       fsa.c:599   )    info: do_state_transition: 	Progressed to state S_POLICY_ENGINE after C_TIMER_POPPED
Nov 13 13:47:29 [15883] vm1       crmd: (       fsa.c:610   )   debug: do_state_transition: 	All 3 cluster nodes are eligible to run resources.
Nov 13 13:47:29 [15883] vm1       crmd: (   pengine.c:231   )   debug: do_pe_invoke: 	Query 69: Requesting the current CIB: S_POLICY_ENGINE
Nov 13 13:47:29 [15878] vm1        cib: ( callbacks.c:761   )    info: cib_process_request: 	Completed cib_query operation for section 'all': OK (rc=0, origin=local/crmd/69, version=0.8.20)
Nov 13 13:47:29 [15882] vm1    pengine: (   pengine.c:116   )    info: process_pe_message: 	Input has not changed since last time, not saving to disk
Nov 13 13:47:29 [15882] vm1    pengine: (    unpack.c:98    )   debug: unpack_config: 	STONITH timeout: 60000
Nov 13 13:47:29 [15882] vm1    pengine: (    unpack.c:102   )   debug: unpack_config: 	STONITH of failed nodes is enabled
Nov 13 13:47:29 [15882] vm1    pengine: (    unpack.c:109   )   debug: unpack_config: 	Stop all active resources: false
Nov 13 13:47:29 [15882] vm1    pengine: (    unpack.c:113   )   debug: unpack_config: 	Cluster is symmetric - resources can run anywhere by default
Nov 13 13:47:29 [15882] vm1    pengine: (    unpack.c:118   )   debug: unpack_config: 	Default stickiness: 0
Nov 13 13:47:29 [15882] vm1    pengine: (    unpack.c:152   )   debug: unpack_config: 	On loss of CCM Quorum: Freeze resources
Nov 13 13:47:29 [15882] vm1    pengine: (    unpack.c:201   )   debug: unpack_config: 	Node scores: 'red' = -INFINITY, 'yellow' = 0, 'green' = 0
Nov 13 13:47:29 [15882] vm1    pengine: (    unpack.c:509   )   debug: unpack_domains: 	Unpacking domains
Nov 13 13:47:29 [15882] vm1    pengine: (    unpack.c:1220  )    info: determine_online_status_fencing: 	Node vm3 is active
Nov 13 13:47:29 [15882] vm1    pengine: (    unpack.c:1336  )    info: determine_online_status: 	Node vm3 is online
Nov 13 13:47:29 [15882] vm1    pengine: (    unpack.c:1220  )    info: determine_online_status_fencing: 	Node vm1 is active
Nov 13 13:47:29 [15882] vm1    pengine: (    unpack.c:1336  )    info: determine_online_status: 	Node vm1 is online
Nov 13 13:47:29 [15882] vm1    pengine: (    unpack.c:1220  )    info: determine_online_status_fencing: 	Node vm2 is active
Nov 13 13:47:29 [15882] vm1    pengine: (    unpack.c:1336  )    info: determine_online_status: 	Node vm2 is online
Nov 13 13:47:29 [15882] vm1    pengine: (    unpack.c:2456  )   debug: determine_op_status: 	pDummy_monitor_10000 on vm3 returned 'not running' (7) instead of the expected value: 'ok' (0)
Nov 13 13:47:29 [15882] vm1    pengine: (    unpack.c:2334  ) warning: unpack_rsc_op_failure: 	Processing failed op monitor for pDummy on vm3: not running (7)
Nov 13 13:47:29 [15882] vm1    pengine: (    unpack.c:64    ) warning: pe_fence_node: 	Node vm3 will be fenced because of resource failure(s)
Nov 13 13:47:29 [15882] vm1    pengine: (    native.c:446   )    info: native_print: 	F1	(stonith:external/libvirt):	Started vm1 
Nov 13 13:47:29 [15883] vm1       crmd: (   pengine.c:299   )   debug: do_pe_invoke_callback: 	Invoking the PE: query=69, ref=pe_calc-dc-1384318049-51, seq=12, quorate=1
Nov 13 13:47:29 [15882] vm1    pengine: (    native.c:446   )    info: native_print: 	pDummy	(ocf::pacemaker:Dummy):	FAILED vm3 
Nov 13 13:47:29 [15882] vm1    pengine: (  allocate.c:593   )   debug: common_apply_stickiness: 	Resource F1: preferring current location (node=vm1, weight=1000000)
Nov 13 13:47:29 [15882] vm1    pengine: (  allocate.c:593   )   debug: common_apply_stickiness: 	Resource pDummy: preferring current location (node=vm3, weight=1000000)
Nov 13 13:47:29 [15882] vm1    pengine: (     utils.c:1497  )    info: get_failcount_full: 	pDummy has failed 1 times on vm3
Nov 13 13:47:29 [15882] vm1    pengine: (  allocate.c:622   ) warning: common_apply_stickiness: 	Forcing pDummy away from vm3 after 1 failures (max=1)
Nov 13 13:47:29 [15882] vm1    pengine: (     utils.c:386   )   debug: native_assign_node: 	Assigning vm1 to F1
Nov 13 13:47:29 [15882] vm1    pengine: (     utils.c:386   )   debug: native_assign_node: 	Assigning vm2 to pDummy
Nov 13 13:47:29 [15882] vm1    pengine: (    native.c:788   )    info: RecurringOp: 	 Start recurring monitor (10s) for pDummy on vm2
Nov 13 13:47:29 [15882] vm1    pengine: (  allocate.c:1346  ) warning: stage6: 	Scheduling Node vm3 for STONITH
Nov 13 13:47:29 [15882] vm1    pengine: (    native.c:2585  )  notice: native_stop_constraints: 	Stop of failed resource pDummy is implicit after vm3 is fenced
Nov 13 13:47:29 [15882] vm1    pengine: (    native.c:1987  )    info: LogActions: 	Leave   F1	(Started vm1)
Nov 13 13:47:29 [15882] vm1    pengine: (    native.c:1996  )  notice: LogActions: 	Recover pDummy	(Started vm3 -> vm2)
Nov 13 13:47:29 [15883] vm1       crmd: (       fsa.c:193   )   debug: s_crmd_fsa: 	Processing I_PE_SUCCESS: [ state=S_POLICY_ENGINE cause=C_IPC_MESSAGE origin=handle_response ]
Nov 13 13:47:29 [15883] vm1       crmd: (       fsa.c:502   )    info: do_state_transition: 	State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
Nov 13 13:47:29 [15883] vm1       crmd: (    unpack.c:230   )   debug: unpack_graph: 	Unpacked transition 6: 6 actions in 6 synapses
Nov 13 13:47:29 [15883] vm1       crmd: (   tengine.c:208   )    info: do_te_invoke: 	Processing graph 6 (ref=pe_calc-dc-1384318049-51) derived from /var/lib/pacemaker/pengine/pe-warn-0.bz2
Nov 13 13:47:29 [15883] vm1       crmd: (te_actions.c:140   )  notice: te_fence_node: 	Executing reboot fencing operation (13) on vm3 (timeout=60000)
Nov 13 13:47:29 [15879] vm1 stonith-ng: (      main.c:121   )   trace: st_ipc_dispatch: 	Flags 0/0 for command 106 from crmd.15883
Nov 13 13:47:29 [15879] vm1 stonith-ng: (      main.c:133   )   trace: st_ipc_dispatch: 	Client[inbound]   <stonith_command __name__="stonith_command" t="stonith-ng" st_async_id="e6cedc3f-e233-4a66-9291-ce359bd76aad" st_op="st_fence" st_callid="6" st_callopt="0" st_timeout="60" st_clientid="e6cedc3f-e233-4a66-9291-ce359bd76aad" st_clientname="crmd.15883" st_clientnode="vm1">
Nov 13 13:47:29 [15879] vm1 stonith-ng: (      main.c:133   )   trace: st_ipc_dispatch: 	Client[inbound]     <st_calldata>
Nov 13 13:47:29 [15879] vm1 stonith-ng: (      main.c:133   )   trace: st_ipc_dispatch: 	Client[inbound]       <stonith_api_fence st_target="vm3" st_device_action="reboot" st_timeout="60" st_tolerance="0"/>
Nov 13 13:47:29 [15879] vm1 stonith-ng: (      main.c:133   )   trace: st_ipc_dispatch: 	Client[inbound]     </st_calldata>
Nov 13 13:47:29 [15879] vm1 stonith-ng: (      main.c:133   )   trace: st_ipc_dispatch: 	Client[inbound]   </stonith_command>
Nov 13 13:47:29 [15879] vm1 stonith-ng: (  commands.c:2049  )   debug: stonith_command: 	Processing st_fence 106 from crmd.15883 (               0)
Nov 13 13:47:29 [15879] vm1 stonith-ng: (  commands.c:1875  )  notice: handle_request: 	Client crmd.15883.e6cedc3f wants to fence (reboot) 'vm3' with device '(any)'
Nov 13 13:47:29 [15879] vm1 stonith-ng: (    remote.c:1412  )   trace: stonith_check_fence_tolerance: 	tolerance=0, remote_op_list=0x193c5e0
Nov 13 13:47:29 [15879] vm1 stonith-ng: (    remote.c:593   )   trace: create_remote_stonith_op: 	Created 11df91ab-fc81-43aa-941d-ffa1204df1c9
Nov 13 13:47:29 [15879] vm1 stonith-ng: (    remote.c:623   )   trace: create_remote_stonith_op: 	Generated new stonith op: 11df91ab-fc81-43aa-941d-ffa1204df1c9 - reboot of vm3 for crmd.15883
Nov 13 13:47:29 [15879] vm1 stonith-ng: (    remote.c:474   )   trace: merge_duplicates: 	Must be for different clients: crmd.15883
Nov 13 13:47:29 [15879] vm1 stonith-ng: (    remote.c:437   )   trace: stonith_topology_next: 	Attempting fencing level 1 for vm3 (1 devices) - crmd.15883@vm1.11df91ab
Nov 13 13:47:29 [15879] vm1 stonith-ng: (    remote.c:689   )  notice: initiate_remote_stonith_op: 	Initiating remote operation reboot for vm3: 11df91ab-fc81-43aa-941d-ffa1204df1c9 (0)
Nov 13 13:47:29 [15879] vm1 stonith-ng: (  commands.c:2063  )    info: stonith_command: 	Processed st_fence from crmd.15883: Operation now in progress (-115)
Nov 13 13:47:29 [15883] vm1       crmd: (     graph.c:336   )   debug: run_graph: 	Transition 6 (Complete=0, Pending=1, Fired=1, Skipped=0, Incomplete=5, Source=/var/lib/pacemaker/pengine/pe-warn-0.bz2): In-progress
Nov 13 13:47:29 [15882] vm1    pengine: (   pengine.c:175   ) warning: process_pe_message: 	Calculated Transition 6: /var/lib/pacemaker/pengine/pe-warn-0.bz2
Nov 13 13:47:29 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]   <stonith_command __name__="stonith_command" t="stonith-ng" st_async_id="11df91ab-fc81-43aa-941d-ffa1204df1c9" st_op="st_query" st_callid="6" st_callopt="0" st_remote_op="11df91ab-fc81-43aa-941d-ffa1204df1c9" st_target="vm3" st_device_action="reboot" st_origin="vm1" st_clientid="e6cedc3f-e233-4a66-9291-ce359bd76aad" st_clientname="crmd.15883" st_timeout="60" src="vm1"/>
Nov 13 13:47:29 [15879] vm1 stonith-ng: (  commands.c:2049  )   debug: stonith_command: 	Processing st_query 0 from vm1 (               0)
Nov 13 13:47:29 [15879] vm1 stonith-ng: (    remote.c:576   )   debug: create_remote_stonith_op: 	11df91ab-fc81-43aa-941d-ffa1204df1c9 already exists
Nov 13 13:47:29 [15879] vm1 stonith-ng: (  commands.c:1296  )   debug: stonith_query: 	Query   <stonith_command __name__="stonith_command" t="stonith-ng" st_async_id="11df91ab-fc81-43aa-941d-ffa1204df1c9" st_op="st_query" st_callid="6" st_callopt="0" st_remote_op="11df91ab-fc81-43aa-941d-ffa1204df1c9" st_target="vm3" st_device_action="reboot" st_origin="vm1" st_clientid="e6cedc3f-e233-4a66-9291-ce359bd76aad" st_clientname="crmd.15883" st_timeout="60" src="vm1"/>
Nov 13 13:47:29 [15879] vm1 stonith-ng: (  commands.c:1650  )   trace: stonith_construct_reply: 	Creating a basic reply
Nov 13 13:47:29 [15879] vm1 stonith-ng: (  commands.c:1199  )   debug: get_capable_devices: 	Searching through 1 devices to see what is capable of action (reboot) for target vm3
Nov 13 13:47:29 [15879] vm1 stonith-ng: (  commands.c:1118  )  notice: can_fence_host_with_device: 	F1 can fence vm3: dynamic-list
Nov 13 13:47:29 [15879] vm1 stonith-ng: (  commands.c:1037  )   debug: search_devices_record_result: 	Finished Search. 1 devices can perform action (reboot) on node vm3
Nov 13 13:47:29 [15879] vm1 stonith-ng: (  commands.c:1254  )   debug: stonith_query_capable_device_cb: 	Found 1 matching devices for 'vm3'
Nov 13 13:47:29 [15879] vm1 stonith-ng: (  commands.c:1260  )   trace: stonith_query_capable_device_cb: 	Attaching query list output
Nov 13 13:47:29 [15879] vm1 stonith-ng: (  commands.c:2063  )    info: stonith_command: 	Processed st_query from vm1: OK (0)
Nov 13 13:47:29 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]   <st-reply st_origin="stonith_construct_reply" t="stonith-ng" st_rc="0" st_op="st_query" st_callid="6" st_clientid="e6cedc3f-e233-4a66-9291-ce359bd76aad" st_clientname="crmd.15883" st_remote_op="11df91ab-fc81-43aa-941d-ffa1204df1c9" st_callopt="0" src="vm3">
Nov 13 13:47:29 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]     <st_calldata>
Nov 13 13:47:29 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]       <stonith_query_capable_device_cb st_target="vm3" st-available-devices="0"/>
Nov 13 13:47:29 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]     </st_calldata>
Nov 13 13:47:29 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]   </st-reply>
Nov 13 13:47:29 [15879] vm1 stonith-ng: (  commands.c:2049  )   debug: stonith_command: 	Processing st_query reply 0 from vm3 (               0)
Nov 13 13:47:29 [15879] vm1 stonith-ng: (    remote.c:1137  )   trace: process_remote_stonith_query: 	Query result from vm3 (0 devices)
Nov 13 13:47:29 [15879] vm1 stonith-ng: (  commands.c:2063  )    info: stonith_command: 	Processed st_query reply from vm3: OK (0)
Nov 13 13:47:29 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]   <st-reply st_origin="stonith_construct_reply" t="stonith-ng" st_rc="0" st_op="st_query" st_callid="6" st_clientid="e6cedc3f-e233-4a66-9291-ce359bd76aad" st_clientname="crmd.15883" st_remote_op="11df91ab-fc81-43aa-941d-ffa1204df1c9" st_callopt="0" src="vm1">
Nov 13 13:47:29 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]     <st_calldata>
Nov 13 13:47:29 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]       <stonith_query_capable_device_cb st_target="vm3" st-available-devices="1">
Nov 13 13:47:29 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]         <st_device_id id="F1" namespace="heartbeat" agent="fence_legacy" st_monitor_verified="1"/>
Nov 13 13:47:29 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]       </stonith_query_capable_device_cb>
Nov 13 13:47:29 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]     </st_calldata>
Nov 13 13:47:29 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]   </st-reply>
Nov 13 13:47:29 [15879] vm1 stonith-ng: (  commands.c:2049  )   debug: stonith_command: 	Processing st_query reply 0 from vm1 (               0)
Nov 13 13:47:29 [15879] vm1 stonith-ng: (    remote.c:1154  )    info: process_remote_stonith_query: 	Query result 2 of 3 from vm1 (1 devices)
Nov 13 13:47:29 [15879] vm1 stonith-ng: (    remote.c:1177  )   trace: process_remote_stonith_query: 	Peer vm1 has confirmed a verified device F1
Nov 13 13:47:29 [15879] vm1 stonith-ng: (    remote.c:1196  )   trace: process_remote_stonith_query: 	All topology devices found
Nov 13 13:47:29 [15879] vm1 stonith-ng: (    remote.c:943   )   trace: call_remote_stonith: 	State for vm3.crmd.158: 11df91ab-fc81-43aa-941d-ffa1204df1c9 0
Nov 13 13:47:29 [15879] vm1 stonith-ng: (    remote.c:904   )   trace: report_timeout_period: 	Reporting timeout for crmd.15883.11df91ab
Nov 13 13:47:29 [15879] vm1 stonith-ng: (      main.c:343   )   trace: do_stonith_async_timeout_update: 	timeout update is 72 for client e6cedc3f-e233-4a66-9291-ce359bd76aad and call id 6
Nov 13 13:47:29 [15879] vm1 stonith-ng: (    remote.c:955   )    info: call_remote_stonith: 	Total remote op timeout set to 60 for fencing of node vm3 for crmd.15883.11df91ab
Nov 13 13:47:29 [15879] vm1 stonith-ng: (    remote.c:786   )   trace: stonith_choose_peer: 	Checking for someone to fence vm3 with F1
Nov 13 13:47:29 [15879] vm1 stonith-ng: (    remote.c:757   )   trace: find_best_peer: 	Removing F1 from vm1 (1 remaining)
Nov 13 13:47:29 [15879] vm1 stonith-ng: (    remote.c:984   )    info: call_remote_stonith: 	Requesting that vm1 perform op reboot vm3 with F1 for crmd.15883 (72s)
Nov 13 13:47:29 [15879] vm1 stonith-ng: (  commands.c:2063  )    info: stonith_command: 	Processed st_query reply from vm1: OK (0)
Nov 13 13:47:29 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]   <st-reply st_origin="stonith_construct_reply" t="stonith-ng" st_rc="0" st_op="st_query" st_callid="6" st_clientid="e6cedc3f-e233-4a66-9291-ce359bd76aad" st_clientname="crmd.15883" st_remote_op="11df91ab-fc81-43aa-941d-ffa1204df1c9" st_callopt="0" src="vm2">
Nov 13 13:47:29 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]     <st_calldata>
Nov 13 13:47:29 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]       <stonith_query_capable_device_cb st_target="vm3" st-available-devices="1">
Nov 13 13:47:29 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]         <st_device_id id="F1" namespace="heartbeat" agent="fence_legacy" st_monitor_verified="0"/>
Nov 13 13:47:29 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]       </stonith_query_capable_device_cb>
Nov 13 13:47:29 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]     </st_calldata>
Nov 13 13:47:29 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]   </st-reply>
Nov 13 13:47:29 [15879] vm1 stonith-ng: (  commands.c:2049  )   debug: stonith_command: 	Processing st_query reply 0 from vm2 (               0)
Nov 13 13:47:29 [15879] vm1 stonith-ng: (    remote.c:1154  )    info: process_remote_stonith_query: 	Query result 3 of 3 from vm2 (1 devices)
Nov 13 13:47:29 [15879] vm1 stonith-ng: (  commands.c:2063  )    info: stonith_command: 	Processed st_query reply from vm2: OK (0)
Nov 13 13:47:29 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]   <stonith_command __name__="stonith_command" t="stonith-ng" st_async_id="11df91ab-fc81-43aa-941d-ffa1204df1c9" st_op="st_fence" st_callid="6" st_callopt="0" st_remote_op="11df91ab-fc81-43aa-941d-ffa1204df1c9" st_target="vm3" st_device_action="reboot" st_origin="vm1" st_clientid="e6cedc3f-e233-4a66-9291-ce359bd76aad" st_clientname="crmd.15883" st_timeout="60" st_device_id="F1" st_mode="slave" src="v
Nov 13 13:47:29 [15879] vm1 stonith-ng: (  commands.c:2049  )   debug: stonith_command: 	Processing st_fence 0 from vm1 (               0)
Nov 13 13:47:29 [15879] vm1 stonith-ng: (  commands.c:165   )   trace: create_async_command: 	Command   <stonith_command __name__="stonith_command" t="stonith-ng" st_async_id="11df91ab-fc81-43aa-941d-ffa1204df1c9" st_op="st_fence" st_callid="6" st_callopt="0" st_remote_op="11df91ab-fc81-43aa-941d-ffa1204df1c9" st_target="vm3" st_device_action="reboot" st_origin="vm1" st_clientid="e6cedc3f-e233-4a66-9291-ce359bd76aad" st_clientname="crmd.15883" st_timeout="60" st_device_id="F1" st_mode="slave" src="vm1"/>
Nov 13 13:47:29 [15879] vm1 stonith-ng: (  commands.c:285   )   debug: schedule_stonith_command: 	Scheduling reboot on F1 for remote peer vm1 with op id (11df91ab-fc81-43aa-941d-ffa1204df1c9) (timeout=60s)
Nov 13 13:47:29 [15879] vm1 stonith-ng: (  commands.c:2063  )    info: stonith_command: 	Processed st_fence from vm1: Operation now in progress (-115)
Nov 13 13:47:29 [15879] vm1 stonith-ng: ( st_client.c:576   )    info: stonith_action_create: 	Initiating action reboot for agent fence_legacy (target=vm3)
Nov 13 13:47:29 [15879] vm1 stonith-ng: ( st_client.c:476   )   debug: make_args: 	Performing reboot action for node 'vm3' as 'port=vm3'
Nov 13 13:47:29 [15879] vm1 stonith-ng: ( st_client.c:739   )   debug: internal_stonith_action_execute: 	forking
Nov 13 13:47:29 [15879] vm1 stonith-ng: ( st_client.c:785   )   debug: internal_stonith_action_execute: 	sending args
Nov 13 13:47:29 [15879] vm1 stonith-ng: (  commands.c:246   )   debug: stonith_device_execute: 	Operation reboot for node vm3 on F1 now running with pid=16071, timeout=60s
Nov 13 13:47:30 vm1 stonith: [16072]: CRIT: external_reset_req: 'libvirt reset' for host vm3 failed with rc 1
Nov 13 13:47:30 [15879] vm1 stonith-ng: ( st_client.c:678   )   debug: stonith_action_async_done: 	Child process 16071 performing action 'reboot' exited with rc 1
Nov 13 13:47:30 [15879] vm1 stonith-ng: ( st_client.c:719   )    info: internal_stonith_action_execute: 	Attempt 2 to execute fence_legacy (reboot). remaining timeout is 59
Nov 13 13:47:30 [15879] vm1 stonith-ng: ( st_client.c:739   )   debug: internal_stonith_action_execute: 	forking
Nov 13 13:47:30 [15879] vm1 stonith-ng: ( st_client.c:785   )   debug: internal_stonith_action_execute: 	sending args
Nov 13 13:47:33 vm1 stonith: [16084]: CRIT: external_reset_req: 'libvirt reset' for host vm3 failed with rc 1
Nov 13 13:47:33 [15879] vm1 stonith-ng: ( st_client.c:678   )   debug: stonith_action_async_done: 	Child process 16083 performing action 'reboot' exited with rc 1
Nov 13 13:47:33 [15879] vm1 stonith-ng: ( st_client.c:641   )    info: update_remaining_timeout: 	Attempted to execute agent fence_legacy (reboot) the maximum number of times (2) allowed
Nov 13 13:47:33 [15879] vm1 stonith-ng: (  commands.c:1463  )   trace: st_child_done: 	Operation reboot on F1 completed with rc=1 (0 remaining)
Nov 13 13:47:33 [15879] vm1 stonith-ng: (  commands.c:1677  )   trace: stonith_construct_async_reply: 	Creating a basic reply
Nov 13 13:47:33 [15879] vm1 stonith-ng: (  commands.c:1321  )   error: log_operation: 	Operation 'reboot' [16083] (call 6 from crmd.15883) for host 'vm3' with device 'F1' returned: -201 (Generic Pacemaker error)
Nov 13 13:47:33 [15879] vm1 stonith-ng: (  commands.c:1333  ) warning: log_operation: 	F1:16083 [ Performing: stonith -t external/libvirt -T reset vm3 ]
Nov 13 13:47:33 [15879] vm1 stonith-ng: (  commands.c:1333  ) warning: log_operation: 	F1:16083 [ failed: vm3 5 ]
Nov 13 13:47:33 [15879] vm1 stonith-ng: (  commands.c:1362  )   trace: stonith_send_async_reply: 	Reply   <st-reply st_origin="vm1" t="stonith-ng" st_op="st_fence" st_device_id="F1" st_remote_op="11df91ab-fc81-43aa-941d-ffa1204df1c9" st_clientid="e6cedc3f-e233-4a66-9291-ce359bd76aad" st_clientname="crmd.15883" st_target="vm3" st_device_action="st_fence" st_callid="6" st_callopt="0" st_rc="-201" st_output="Performing: stonith -t external/libvirt -T reset vm3\nfailed: vm3 5\n"/>
Nov 13 13:47:33 [15879] vm1 stonith-ng: (  commands.c:1369  )   trace: stonith_send_async_reply: 	Directed reply to vm1
Nov 13 13:47:33 [15879] vm1 stonith-ng: (  commands.c:213   )   trace: stonith_device_execute: 	Nothing further to do for F1
Nov 13 13:47:33 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]   <st-reply st_origin="vm1" t="stonith-ng" st_op="st_fence" st_device_id="F1" st_remote_op="11df91ab-fc81-43aa-941d-ffa1204df1c9" st_clientid="e6cedc3f-e233-4a66-9291-ce359bd76aad" st_clientname="crmd.15883" st_target="vm3" st_device_action="st_fence" st_callid="6" st_callopt="0" st_rc="-201" st_output="Performing: stonith -t external/libvirt -T reset vm3\nfailed: vm3 5\n" src="vm1"/>
Nov 13 13:47:33 [15879] vm1 stonith-ng: (  commands.c:2049  )   debug: stonith_command: 	Processing st_fence reply 0 from vm1 (               0)
Nov 13 13:47:33 [15879] vm1 stonith-ng: (    remote.c:1303  )  notice: process_remote_stonith_exec: 	Call to F1 for vm3 on behalf of crmd.15883@vm1: Generic Pacemaker error (-201)
Nov 13 13:47:33 [15879] vm1 stonith-ng: (    remote.c:443   )  notice: stonith_topology_next: 	All fencing options to fence vm3 for crmd.15883@vm1.11df91ab failed
Nov 13 13:47:33 [15879] vm1 stonith-ng: (    remote.c:160   )   trace: bcast_result_to_peers: 	Broadcasting result to peers
Nov 13 13:47:33 [15879] vm1 stonith-ng: (  commands.c:2063  )    info: stonith_command: 	Processed st_fence reply from vm1: OK (0)
Nov 13 13:47:33 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]   <st-reply t="st_notify" subt="broadcast" st_op="st_notify" count="5" src="vm1">
Nov 13 13:47:33 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]     <st_calldata>
Nov 13 13:47:33 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]       <st_notify_fence state="4" st_rc="-201" st_target="vm3" st_device_action="reboot" st_delegate="vm1" st_remote_op="11df91ab-fc81-43aa-941d-ffa1204df1c9" st_origin="vm1" st_clientid="e6cedc3f-e233-4a66-9291-ce359bd76aad" st_clientname="crmd.15883"/>
Nov 13 13:47:33 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]     </st_calldata>
Nov 13 13:47:33 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]   </st-reply>
Nov 13 13:47:33 [15879] vm1 stonith-ng: (  commands.c:2049  )   debug: stonith_command: 	Processing st_notify reply 0 from vm1 (               0)
Nov 13 13:47:33 [15879] vm1 stonith-ng: (    remote.c:1281  )   debug: process_remote_stonith_exec: 	Marking call to reboot for vm3 on behalf of crmd.15883@11df91ab-fc81-43aa-941d-ffa1204df1c9.vm1: Generic Pacemaker error (-201)
Nov 13 13:47:33 [15879] vm1 stonith-ng: (    remote.c:297   )   error: remote_op_done: 	Operation reboot of vm3 by vm1 for crmd.15883@vm1.11df91ab: Generic Pacemaker error
Nov 13 13:47:33 [15879] vm1 stonith-ng: (  commands.c:1650  )   trace: stonith_construct_reply: 	Creating a basic reply
Nov 13 13:47:33 [15879] vm1 stonith-ng: (  commands.c:1666  )   trace: stonith_construct_reply: 	Attaching reply output
Nov 13 13:47:33 [15879] vm1 stonith-ng: (      main.c:241   )   trace: do_local_reply: 	Sending response
Nov 13 13:47:33 [15879] vm1 stonith-ng: (      main.c:244   )   trace: do_local_reply: 	Sending callback to request originator
Nov 13 13:47:33 [15879] vm1 stonith-ng: (      main.c:263   )   trace: do_local_reply: 	Sending an event to crmd.15883 
Nov 13 13:47:33 [15883] vm1       crmd: (te_callbacks:411   )  notice: tengine_stonith_callback: 	Stonith operation 6/13:6:0:154fb289-24e8-407e-9a03-69a510480b60: Generic Pacemaker error (-201)
Nov 13 13:47:33 [15883] vm1       crmd: (te_callbacks:462   )  notice: tengine_stonith_callback: 	Stonith operation 6 for vm3 failed (Generic Pacemaker error): aborting transition.
Nov 13 13:47:33 [15883] vm1       crmd: (  te_utils.c:425   )    info: abort_transition_graph: 	tengine_stonith_callback:463 - Triggered transition abort (complete=0) : Stonith failed
Nov 13 13:47:33 [15883] vm1       crmd: (     utils.c:271   )   debug: update_abort_priority: 	Abort priority upgraded from 0 to 1000000
Nov 13 13:47:33 [15883] vm1       crmd: (     utils.c:281   )   debug: update_abort_priority: 	Abort action done superceeded by restart
Nov 13 13:47:33 [15883] vm1       crmd: (     graph.c:336   )  notice: run_graph: 	Transition 6 (Complete=1, Pending=0, Fired=0, Skipped=5, Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-warn-0.bz2): Stopped
Nov 13 13:47:33 [15883] vm1       crmd: (  te_utils.c:349   )   debug: te_graph_trigger: 	Transition 6 is now complete
Nov 13 13:47:33 [15883] vm1       crmd: (te_actions.c:654   )   debug: notify_crmd: 	Processing transition completion in state S_TRANSITION_ENGINE
Nov 13 13:47:33 [15883] vm1       crmd: (     utils.c:192   )   debug: crm_timer_start: 	Started New Transition Timer (I_PE_CALC:2000ms), src=106
Nov 13 13:47:33 [15883] vm1       crmd: (te_actions.c:699   )   debug: notify_crmd: 	Transition 6 status: restart - Stonith failed
Nov 13 13:47:33 [15879] vm1 stonith-ng: (      main.c:369   )   trace: do_stonith_notify: 	Notifying clients
Nov 13 13:47:33 [15883] vm1       crmd: (  te_utils.c:167   )  notice: tengine_stonith_notify: 	Peer vm3 was not terminated (reboot) by vm1 for vm1: Generic Pacemaker error (ref=11df91ab-fc81-43aa-941d-ffa1204df1c9) by client crmd.15883
Nov 13 13:47:33 [15879] vm1 stonith-ng: (      main.c:318   )   trace: stonith_notify_client: 	Sent st_notify_fence notification to client crmd.15883.e6cedc
Nov 13 13:47:33 [15879] vm1 stonith-ng: (      main.c:372   )   trace: do_stonith_notify: 	Notify complete
Nov 13 13:47:33 [15879] vm1 stonith-ng: (    remote.c:77    )   trace: free_remote_query: 	Free'ing query result from vm2
Nov 13 13:47:33 [15879] vm1 stonith-ng: (    remote.c:77    )   trace: free_remote_query: 	Free'ing query result from vm1
Nov 13 13:47:33 [15879] vm1 stonith-ng: (  commands.c:2063  )    info: stonith_command: 	Processed st_notify reply from vm1: OK (0)
Nov 13 13:47:35 [15883] vm1       crmd: (     utils.c:120   )    info: crm_timer_popped: 	New Transition Timer (I_PE_CALC) just popped (2000ms)
Nov 13 13:47:35 [15883] vm1       crmd: (       fsa.c:193   )   debug: s_crmd_fsa: 	Processing I_PE_CALC: [ state=S_TRANSITION_ENGINE cause=C_TIMER_POPPED origin=crm_timer_popped ]
Nov 13 13:47:35 [15883] vm1       crmd: (       fsa.c:502   )    info: do_state_transition: 	State transition S_TRANSITION_ENGINE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_TIMER_POPPED origin=crm_timer_popped ]
Nov 13 13:47:35 [15883] vm1       crmd: (       fsa.c:599   )    info: do_state_transition: 	Progressed to state S_POLICY_ENGINE after C_TIMER_POPPED
Nov 13 13:47:35 [15883] vm1       crmd: (       fsa.c:610   )   debug: do_state_transition: 	All 3 cluster nodes are eligible to run resources.
Nov 13 13:47:35 [15883] vm1       crmd: (   pengine.c:231   )   debug: do_pe_invoke: 	Query 70: Requesting the current CIB: S_POLICY_ENGINE
Nov 13 13:47:35 [15878] vm1        cib: ( callbacks.c:761   )    info: cib_process_request: 	Completed cib_query operation for section 'all': OK (rc=0, origin=local/crmd/70, version=0.8.20)
Nov 13 13:47:35 [15882] vm1    pengine: (   pengine.c:116   )    info: process_pe_message: 	Input has not changed since last time, not saving to disk
Nov 13 13:47:35 [15882] vm1    pengine: (    unpack.c:98    )   debug: unpack_config: 	STONITH timeout: 60000
Nov 13 13:47:35 [15882] vm1    pengine: (    unpack.c:102   )   debug: unpack_config: 	STONITH of failed nodes is enabled
Nov 13 13:47:35 [15882] vm1    pengine: (    unpack.c:109   )   debug: unpack_config: 	Stop all active resources: false
Nov 13 13:47:35 [15882] vm1    pengine: (    unpack.c:113   )   debug: unpack_config: 	Cluster is symmetric - resources can run anywhere by default
Nov 13 13:47:35 [15882] vm1    pengine: (    unpack.c:118   )   debug: unpack_config: 	Default stickiness: 0
Nov 13 13:47:35 [15882] vm1    pengine: (    unpack.c:152   )   debug: unpack_config: 	On loss of CCM Quorum: Freeze resources
Nov 13 13:47:35 [15882] vm1    pengine: (    unpack.c:201   )   debug: unpack_config: 	Node scores: 'red' = -INFINITY, 'yellow' = 0, 'green' = 0
Nov 13 13:47:35 [15882] vm1    pengine: (    unpack.c:509   )   debug: unpack_domains: 	Unpacking domains
Nov 13 13:47:35 [15882] vm1    pengine: (    unpack.c:1220  )    info: determine_online_status_fencing: 	Node vm3 is active
Nov 13 13:47:35 [15882] vm1    pengine: (    unpack.c:1336  )    info: determine_online_status: 	Node vm3 is online
Nov 13 13:47:35 [15882] vm1    pengine: (    unpack.c:1220  )    info: determine_online_status_fencing: 	Node vm1 is active
Nov 13 13:47:35 [15882] vm1    pengine: (    unpack.c:1336  )    info: determine_online_status: 	Node vm1 is online
Nov 13 13:47:35 [15882] vm1    pengine: (    unpack.c:1220  )    info: determine_online_status_fencing: 	Node vm2 is active
Nov 13 13:47:35 [15882] vm1    pengine: (    unpack.c:1336  )    info: determine_online_status: 	Node vm2 is online
Nov 13 13:47:35 [15882] vm1    pengine: (    unpack.c:2456  )   debug: determine_op_status: 	pDummy_monitor_10000 on vm3 returned 'not running' (7) instead of the expected value: 'ok' (0)
Nov 13 13:47:35 [15882] vm1    pengine: (    unpack.c:2334  ) warning: unpack_rsc_op_failure: 	Processing failed op monitor for pDummy on vm3: not running (7)
Nov 13 13:47:35 [15882] vm1    pengine: (    unpack.c:64    ) warning: pe_fence_node: 	Node vm3 will be fenced because of resource failure(s)
Nov 13 13:47:35 [15882] vm1    pengine: (    native.c:446   )    info: native_print: 	F1	(stonith:external/libvirt):	Started vm1 
Nov 13 13:47:35 [15883] vm1       crmd: (   pengine.c:299   )   debug: do_pe_invoke_callback: 	Invoking the PE: query=70, ref=pe_calc-dc-1384318055-52, seq=12, quorate=1
Nov 13 13:47:35 [15882] vm1    pengine: (    native.c:446   )    info: native_print: 	pDummy	(ocf::pacemaker:Dummy):	FAILED vm3 
Nov 13 13:47:35 [15882] vm1    pengine: (  allocate.c:593   )   debug: common_apply_stickiness: 	Resource F1: preferring current location (node=vm1, weight=1000000)
Nov 13 13:47:35 [15882] vm1    pengine: (  allocate.c:593   )   debug: common_apply_stickiness: 	Resource pDummy: preferring current location (node=vm3, weight=1000000)
Nov 13 13:47:35 [15882] vm1    pengine: (     utils.c:1497  )    info: get_failcount_full: 	pDummy has failed 1 times on vm3
Nov 13 13:47:35 [15882] vm1    pengine: (  allocate.c:622   ) warning: common_apply_stickiness: 	Forcing pDummy away from vm3 after 1 failures (max=1)
Nov 13 13:47:35 [15882] vm1    pengine: (     utils.c:386   )   debug: native_assign_node: 	Assigning vm1 to F1
Nov 13 13:47:35 [15882] vm1    pengine: (     utils.c:386   )   debug: native_assign_node: 	Assigning vm2 to pDummy
Nov 13 13:47:35 [15882] vm1    pengine: (    native.c:788   )    info: RecurringOp: 	 Start recurring monitor (10s) for pDummy on vm2
Nov 13 13:47:35 [15882] vm1    pengine: (  allocate.c:1346  ) warning: stage6: 	Scheduling Node vm3 for STONITH
Nov 13 13:47:35 [15882] vm1    pengine: (    native.c:2585  )  notice: native_stop_constraints: 	Stop of failed resource pDummy is implicit after vm3 is fenced
Nov 13 13:47:35 [15882] vm1    pengine: (    native.c:1987  )    info: LogActions: 	Leave   F1	(Started vm1)
Nov 13 13:47:35 [15882] vm1    pengine: (    native.c:1996  )  notice: LogActions: 	Recover pDummy	(Started vm3 -> vm2)
Nov 13 13:47:35 [15883] vm1       crmd: (       fsa.c:193   )   debug: s_crmd_fsa: 	Processing I_PE_SUCCESS: [ state=S_POLICY_ENGINE cause=C_IPC_MESSAGE origin=handle_response ]
Nov 13 13:47:35 [15883] vm1       crmd: (       fsa.c:502   )    info: do_state_transition: 	State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
Nov 13 13:47:35 [15883] vm1       crmd: (    unpack.c:230   )   debug: unpack_graph: 	Unpacked transition 7: 6 actions in 6 synapses
Nov 13 13:47:35 [15883] vm1       crmd: (   tengine.c:208   )    info: do_te_invoke: 	Processing graph 7 (ref=pe_calc-dc-1384318055-52) derived from /var/lib/pacemaker/pengine/pe-warn-0.bz2
Nov 13 13:47:35 [15883] vm1       crmd: (te_actions.c:140   )  notice: te_fence_node: 	Executing reboot fencing operation (13) on vm3 (timeout=60000)
Nov 13 13:47:35 [15883] vm1       crmd: (     graph.c:336   )   debug: run_graph: 	Transition 7 (Complete=0, Pending=1, Fired=1, Skipped=0, Incomplete=5, Source=/var/lib/pacemaker/pengine/pe-warn-0.bz2): In-progress
Nov 13 13:47:35 [15879] vm1 stonith-ng: (      main.c:121   )   trace: st_ipc_dispatch: 	Flags 0/0 for command 109 from crmd.15883
Nov 13 13:47:35 [15879] vm1 stonith-ng: (      main.c:133   )   trace: st_ipc_dispatch: 	Client[inbound]   <stonith_command __name__="stonith_command" t="stonith-ng" st_async_id="e6cedc3f-e233-4a66-9291-ce359bd76aad" st_op="st_fence" st_callid="7" st_callopt="0" st_timeout="60" st_clientid="e6cedc3f-e233-4a66-9291-ce359bd76aad" st_clientname="crmd.15883" st_clientnode="vm1">
Nov 13 13:47:35 [15879] vm1 stonith-ng: (      main.c:133   )   trace: st_ipc_dispatch: 	Client[inbound]     <st_calldata>
Nov 13 13:47:35 [15879] vm1 stonith-ng: (      main.c:133   )   trace: st_ipc_dispatch: 	Client[inbound]       <stonith_api_fence st_target="vm3" st_device_action="reboot" st_timeout="60" st_tolerance="0"/>
Nov 13 13:47:35 [15879] vm1 stonith-ng: (      main.c:133   )   trace: st_ipc_dispatch: 	Client[inbound]     </st_calldata>
Nov 13 13:47:35 [15879] vm1 stonith-ng: (      main.c:133   )   trace: st_ipc_dispatch: 	Client[inbound]   </stonith_command>
Nov 13 13:47:35 [15879] vm1 stonith-ng: (  commands.c:2049  )   debug: stonith_command: 	Processing st_fence 109 from crmd.15883 (               0)
Nov 13 13:47:35 [15879] vm1 stonith-ng: (  commands.c:1875  )  notice: handle_request: 	Client crmd.15883.e6cedc3f wants to fence (reboot) 'vm3' with device '(any)'
Nov 13 13:47:35 [15879] vm1 stonith-ng: (    remote.c:1412  )   trace: stonith_check_fence_tolerance: 	tolerance=0, remote_op_list=0x193c5e0
Nov 13 13:47:35 [15879] vm1 stonith-ng: (    remote.c:593   )   trace: create_remote_stonith_op: 	Created 84777767-aa8b-4e04-8dec-b26dae36aaff
Nov 13 13:47:35 [15879] vm1 stonith-ng: (    remote.c:623   )   trace: create_remote_stonith_op: 	Generated new stonith op: 84777767-aa8b-4e04-8dec-b26dae36aaff - reboot of vm3 for crmd.15883
Nov 13 13:47:35 [15879] vm1 stonith-ng: (    remote.c:474   )   trace: merge_duplicates: 	Must be for different clients: crmd.15883
Nov 13 13:47:35 [15879] vm1 stonith-ng: (    remote.c:437   )   trace: stonith_topology_next: 	Attempting fencing level 1 for vm3 (1 devices) - crmd.15883@vm1.84777767
Nov 13 13:47:35 [15879] vm1 stonith-ng: (    remote.c:689   )  notice: initiate_remote_stonith_op: 	Initiating remote operation reboot for vm3: 84777767-aa8b-4e04-8dec-b26dae36aaff (0)
Nov 13 13:47:35 [15879] vm1 stonith-ng: (  commands.c:2063  )    info: stonith_command: 	Processed st_fence from crmd.15883: Operation now in progress (-115)
Nov 13 13:47:35 [15882] vm1    pengine: (   pengine.c:175   ) warning: process_pe_message: 	Calculated Transition 7: /var/lib/pacemaker/pengine/pe-warn-0.bz2
Nov 13 13:47:35 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]   <stonith_command __name__="stonith_command" t="stonith-ng" st_async_id="84777767-aa8b-4e04-8dec-b26dae36aaff" st_op="st_query" st_callid="7" st_callopt="0" st_remote_op="84777767-aa8b-4e04-8dec-b26dae36aaff" st_target="vm3" st_device_action="reboot" st_origin="vm1" st_clientid="e6cedc3f-e233-4a66-9291-ce359bd76aad" st_clientname="crmd.15883" st_timeout="60" src="vm1"/>
Nov 13 13:47:35 [15879] vm1 stonith-ng: (  commands.c:2049  )   debug: stonith_command: 	Processing st_query 0 from vm1 (               0)
Nov 13 13:47:35 [15879] vm1 stonith-ng: (    remote.c:576   )   debug: create_remote_stonith_op: 	84777767-aa8b-4e04-8dec-b26dae36aaff already exists
Nov 13 13:47:35 [15879] vm1 stonith-ng: (  commands.c:1296  )   debug: stonith_query: 	Query   <stonith_command __name__="stonith_command" t="stonith-ng" st_async_id="84777767-aa8b-4e04-8dec-b26dae36aaff" st_op="st_query" st_callid="7" st_callopt="0" st_remote_op="84777767-aa8b-4e04-8dec-b26dae36aaff" st_target="vm3" st_device_action="reboot" st_origin="vm1" st_clientid="e6cedc3f-e233-4a66-9291-ce359bd76aad" st_clientname="crmd.15883" st_timeout="60" src="vm1"/>
Nov 13 13:47:35 [15879] vm1 stonith-ng: (  commands.c:1650  )   trace: stonith_construct_reply: 	Creating a basic reply
Nov 13 13:47:35 [15879] vm1 stonith-ng: (  commands.c:1199  )   debug: get_capable_devices: 	Searching through 1 devices to see what is capable of action (reboot) for target vm3
Nov 13 13:47:35 [15879] vm1 stonith-ng: (  commands.c:1118  )  notice: can_fence_host_with_device: 	F1 can fence vm3: dynamic-list
Nov 13 13:47:35 [15879] vm1 stonith-ng: (  commands.c:1037  )   debug: search_devices_record_result: 	Finished Search. 1 devices can perform action (reboot) on node vm3
Nov 13 13:47:35 [15879] vm1 stonith-ng: (  commands.c:1254  )   debug: stonith_query_capable_device_cb: 	Found 1 matching devices for 'vm3'
Nov 13 13:47:35 [15879] vm1 stonith-ng: (  commands.c:1260  )   trace: stonith_query_capable_device_cb: 	Attaching query list output
Nov 13 13:47:35 [15879] vm1 stonith-ng: (  commands.c:2063  )    info: stonith_command: 	Processed st_query from vm1: OK (0)
Nov 13 13:47:35 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]   <st-reply st_origin="stonith_construct_reply" t="stonith-ng" st_rc="0" st_op="st_query" st_callid="7" st_clientid="e6cedc3f-e233-4a66-9291-ce359bd76aad" st_clientname="crmd.15883" st_remote_op="84777767-aa8b-4e04-8dec-b26dae36aaff" st_callopt="0" src="vm2">
Nov 13 13:47:35 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]     <st_calldata>
Nov 13 13:47:35 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]       <stonith_query_capable_device_cb st_target="vm3" st-available-devices="1">
Nov 13 13:47:35 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]         <st_device_id id="F1" namespace="heartbeat" agent="fence_legacy" st_monitor_verified="0"/>
Nov 13 13:47:35 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]       </stonith_query_capable_device_cb>
Nov 13 13:47:35 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]     </st_calldata>
Nov 13 13:47:35 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]   </st-reply>
Nov 13 13:47:35 [15879] vm1 stonith-ng: (  commands.c:2049  )   debug: stonith_command: 	Processing st_query reply 0 from vm2 (               0)
Nov 13 13:47:35 [15879] vm1 stonith-ng: (    remote.c:1154  )    info: process_remote_stonith_query: 	Query result 1 of 3 from vm2 (1 devices)
Nov 13 13:47:35 [15879] vm1 stonith-ng: (    remote.c:1196  )   trace: process_remote_stonith_query: 	All topology devices found
Nov 13 13:47:35 [15879] vm1 stonith-ng: (    remote.c:943   )   trace: call_remote_stonith: 	State for vm3.crmd.158: 84777767-aa8b-4e04-8dec-b26dae36aaff 0
Nov 13 13:47:35 [15879] vm1 stonith-ng: (    remote.c:904   )   trace: report_timeout_period: 	Reporting timeout for crmd.15883.84777767
Nov 13 13:47:35 [15879] vm1 stonith-ng: (      main.c:343   )   trace: do_stonith_async_timeout_update: 	timeout update is 72 for client e6cedc3f-e233-4a66-9291-ce359bd76aad and call id 7
Nov 13 13:47:35 [15879] vm1 stonith-ng: (    remote.c:955   )    info: call_remote_stonith: 	Total remote op timeout set to 60 for fencing of node vm3 for crmd.15883.84777767
Nov 13 13:47:35 [15879] vm1 stonith-ng: (    remote.c:786   )   trace: stonith_choose_peer: 	Checking for someone to fence vm3 with F1
Nov 13 13:47:35 [15879] vm1 stonith-ng: (    remote.c:757   )   trace: find_best_peer: 	Removing F1 from vm2 (1 remaining)
Nov 13 13:47:35 [15879] vm1 stonith-ng: (    remote.c:984   )    info: call_remote_stonith: 	Requesting that vm2 perform op reboot vm3 with F1 for crmd.15883 (72s)
Nov 13 13:47:35 [15879] vm1 stonith-ng: (  commands.c:2063  )    info: stonith_command: 	Processed st_query reply from vm2: OK (0)
Nov 13 13:47:35 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]   <st-reply st_origin="stonith_construct_reply" t="stonith-ng" st_rc="0" st_op="st_query" st_callid="7" st_clientid="e6cedc3f-e233-4a66-9291-ce359bd76aad" st_clientname="crmd.15883" st_remote_op="84777767-aa8b-4e04-8dec-b26dae36aaff" st_callopt="0" src="vm3">
Nov 13 13:47:35 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]     <st_calldata>
Nov 13 13:47:35 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]       <stonith_query_capable_device_cb st_target="vm3" st-available-devices="0"/>
Nov 13 13:47:35 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]     </st_calldata>
Nov 13 13:47:35 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]   </st-reply>
Nov 13 13:47:35 [15879] vm1 stonith-ng: (  commands.c:2049  )   debug: stonith_command: 	Processing st_query reply 0 from vm3 (               0)
Nov 13 13:47:35 [15879] vm1 stonith-ng: (    remote.c:1137  )   trace: process_remote_stonith_query: 	Query result from vm3 (0 devices)
Nov 13 13:47:35 [15879] vm1 stonith-ng: (  commands.c:2063  )    info: stonith_command: 	Processed st_query reply from vm3: OK (0)
Nov 13 13:47:35 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]   <st-reply st_origin="stonith_construct_reply" t="stonith-ng" st_rc="0" st_op="st_query" st_callid="7" st_clientid="e6cedc3f-e233-4a66-9291-ce359bd76aad" st_clientname="crmd.15883" st_remote_op="84777767-aa8b-4e04-8dec-b26dae36aaff" st_callopt="0" src="vm1">
Nov 13 13:47:35 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]     <st_calldata>
Nov 13 13:47:35 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]       <stonith_query_capable_device_cb st_target="vm3" st-available-devices="1">
Nov 13 13:47:35 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]         <st_device_id id="F1" namespace="heartbeat" agent="fence_legacy" st_monitor_verified="1"/>
Nov 13 13:47:35 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]       </stonith_query_capable_device_cb>
Nov 13 13:47:35 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]     </st_calldata>
Nov 13 13:47:35 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]   </st-reply>
Nov 13 13:47:35 [15879] vm1 stonith-ng: (  commands.c:2049  )   debug: stonith_command: 	Processing st_query reply 0 from vm1 (               0)
Nov 13 13:47:35 [15879] vm1 stonith-ng: (    remote.c:1154  )    info: process_remote_stonith_query: 	Query result 3 of 3 from vm1 (1 devices)
Nov 13 13:47:35 [15879] vm1 stonith-ng: (    remote.c:1177  )   trace: process_remote_stonith_query: 	Peer vm1 has confirmed a verified device F1
Nov 13 13:47:35 [15879] vm1 stonith-ng: (  commands.c:2063  )    info: stonith_command: 	Processed st_query reply from vm1: OK (0)
Nov 13 13:47:38 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]   <st-reply st_origin="vm1" t="stonith-ng" st_op="st_fence" st_device_id="F1" st_remote_op="84777767-aa8b-4e04-8dec-b26dae36aaff" st_clientid="e6cedc3f-e233-4a66-9291-ce359bd76aad" st_clientname="crmd.15883" st_target="vm3" st_device_action="st_fence" st_callid="7" st_callopt="0" st_rc="-201" st_output="Performing: stonith -t external/libvirt -T reset vm3\nfailed: vm3 5\n" src="vm2"/>
Nov 13 13:47:38 [15879] vm1 stonith-ng: (  commands.c:2049  )   debug: stonith_command: 	Processing st_fence reply 0 from vm2 (               0)
Nov 13 13:47:38 [15879] vm1 stonith-ng: (    remote.c:1303  )  notice: process_remote_stonith_exec: 	Call to F1 for vm3 on behalf of crmd.15883@vm1: Generic Pacemaker error (-201)
Nov 13 13:47:38 [15879] vm1 stonith-ng: (    remote.c:443   )  notice: stonith_topology_next: 	All fencing options to fence vm3 for crmd.15883@vm1.84777767 failed
Nov 13 13:47:38 [15879] vm1 stonith-ng: (    remote.c:160   )   trace: bcast_result_to_peers: 	Broadcasting result to peers
Nov 13 13:47:38 [15879] vm1 stonith-ng: (  commands.c:2063  )    info: stonith_command: 	Processed st_fence reply from vm2: OK (0)
Nov 13 13:47:38 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]   <st-reply t="st_notify" subt="broadcast" st_op="st_notify" count="6" src="vm1">
Nov 13 13:47:38 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]     <st_calldata>
Nov 13 13:47:38 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]       <st_notify_fence state="4" st_rc="-201" st_target="vm3" st_device_action="reboot" st_delegate="vm2" st_remote_op="84777767-aa8b-4e04-8dec-b26dae36aaff" st_origin="vm1" st_clientid="e6cedc3f-e233-4a66-9291-ce359bd76aad" st_clientname="crmd.15883"/>
Nov 13 13:47:38 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]     </st_calldata>
Nov 13 13:47:38 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]   </st-reply>
Nov 13 13:47:38 [15879] vm1 stonith-ng: (  commands.c:2049  )   debug: stonith_command: 	Processing st_notify reply 0 from vm1 (               0)
Nov 13 13:47:38 [15879] vm1 stonith-ng: (    remote.c:1281  )   debug: process_remote_stonith_exec: 	Marking call to reboot for vm3 on behalf of crmd.15883@84777767-aa8b-4e04-8dec-b26dae36aaff.vm1: Generic Pacemaker error (-201)
Nov 13 13:47:38 [15879] vm1 stonith-ng: (    remote.c:297   )   error: remote_op_done: 	Operation reboot of vm3 by vm2 for crmd.15883@vm1.84777767: Generic Pacemaker error
Nov 13 13:47:38 [15879] vm1 stonith-ng: (  commands.c:1650  )   trace: stonith_construct_reply: 	Creating a basic reply
Nov 13 13:47:38 [15879] vm1 stonith-ng: (  commands.c:1666  )   trace: stonith_construct_reply: 	Attaching reply output
Nov 13 13:47:38 [15879] vm1 stonith-ng: (      main.c:241   )   trace: do_local_reply: 	Sending response
Nov 13 13:47:38 [15879] vm1 stonith-ng: (      main.c:244   )   trace: do_local_reply: 	Sending callback to request originator
Nov 13 13:47:38 [15879] vm1 stonith-ng: (      main.c:263   )   trace: do_local_reply: 	Sending an event to crmd.15883 
Nov 13 13:47:38 [15879] vm1 stonith-ng: (      main.c:369   )   trace: do_stonith_notify: 	Notifying clients
Nov 13 13:47:38 [15879] vm1 stonith-ng: (      main.c:318   )   trace: stonith_notify_client: 	Sent st_notify_fence notification to client crmd.15883.e6cedc
Nov 13 13:47:38 [15879] vm1 stonith-ng: (      main.c:372   )   trace: do_stonith_notify: 	Notify complete
Nov 13 13:47:38 [15879] vm1 stonith-ng: (    remote.c:77    )   trace: free_remote_query: 	Free'ing query result from vm1
Nov 13 13:47:38 [15879] vm1 stonith-ng: (    remote.c:77    )   trace: free_remote_query: 	Free'ing query result from vm2
Nov 13 13:47:38 [15879] vm1 stonith-ng: (  commands.c:2063  )    info: stonith_command: 	Processed st_notify reply from vm1: OK (0)
Nov 13 13:47:38 [15883] vm1       crmd: (te_callbacks:411   )  notice: tengine_stonith_callback: 	Stonith operation 7/13:7:0:154fb289-24e8-407e-9a03-69a510480b60: Generic Pacemaker error (-201)
Nov 13 13:47:38 [15883] vm1       crmd: (te_callbacks:462   )  notice: tengine_stonith_callback: 	Stonith operation 7 for vm3 failed (Generic Pacemaker error): aborting transition.
Nov 13 13:47:38 [15883] vm1       crmd: (  te_utils.c:425   )    info: abort_transition_graph: 	tengine_stonith_callback:463 - Triggered transition abort (complete=0) : Stonith failed
Nov 13 13:47:38 [15883] vm1       crmd: (     utils.c:271   )   debug: update_abort_priority: 	Abort priority upgraded from 0 to 1000000
Nov 13 13:47:38 [15883] vm1       crmd: (     utils.c:281   )   debug: update_abort_priority: 	Abort action done superceeded by restart
Nov 13 13:47:38 [15883] vm1       crmd: (  te_utils.c:167   )  notice: tengine_stonith_notify: 	Peer vm3 was not terminated (reboot) by vm2 for vm1: Generic Pacemaker error (ref=84777767-aa8b-4e04-8dec-b26dae36aaff) by client crmd.15883
Nov 13 13:47:38 [15883] vm1       crmd: (     graph.c:336   )  notice: run_graph: 	Transition 7 (Complete=1, Pending=0, Fired=0, Skipped=5, Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-warn-0.bz2): Stopped
Nov 13 13:47:38 [15883] vm1       crmd: (  te_utils.c:349   )   debug: te_graph_trigger: 	Transition 7 is now complete
Nov 13 13:47:38 [15883] vm1       crmd: (te_actions.c:654   )   debug: notify_crmd: 	Processing transition completion in state S_TRANSITION_ENGINE
Nov 13 13:47:38 [15883] vm1       crmd: (     utils.c:192   )   debug: crm_timer_start: 	Started New Transition Timer (I_PE_CALC:2000ms), src=110
Nov 13 13:47:38 [15883] vm1       crmd: (te_actions.c:699   )   debug: notify_crmd: 	Transition 7 status: restart - Stonith failed
Nov 13 13:47:40 [15883] vm1       crmd: (     utils.c:120   )    info: crm_timer_popped: 	New Transition Timer (I_PE_CALC) just popped (2000ms)
Nov 13 13:47:40 [15883] vm1       crmd: (       fsa.c:193   )   debug: s_crmd_fsa: 	Processing I_PE_CALC: [ state=S_TRANSITION_ENGINE cause=C_TIMER_POPPED origin=crm_timer_popped ]
Nov 13 13:47:40 [15883] vm1       crmd: (       fsa.c:502   )    info: do_state_transition: 	State transition S_TRANSITION_ENGINE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_TIMER_POPPED origin=crm_timer_popped ]
Nov 13 13:47:40 [15883] vm1       crmd: (       fsa.c:599   )    info: do_state_transition: 	Progressed to state S_POLICY_ENGINE after C_TIMER_POPPED
Nov 13 13:47:40 [15883] vm1       crmd: (       fsa.c:610   )   debug: do_state_transition: 	All 3 cluster nodes are eligible to run resources.
Nov 13 13:47:40 [15883] vm1       crmd: (   pengine.c:231   )   debug: do_pe_invoke: 	Query 71: Requesting the current CIB: S_POLICY_ENGINE
Nov 13 13:47:40 [15878] vm1        cib: ( callbacks.c:761   )    info: cib_process_request: 	Completed cib_query operation for section 'all': OK (rc=0, origin=local/crmd/71, version=0.8.20)
Nov 13 13:47:40 [15882] vm1    pengine: (   pengine.c:116   )    info: process_pe_message: 	Input has not changed since last time, not saving to disk
Nov 13 13:47:40 [15882] vm1    pengine: (    unpack.c:98    )   debug: unpack_config: 	STONITH timeout: 60000
Nov 13 13:47:40 [15882] vm1    pengine: (    unpack.c:102   )   debug: unpack_config: 	STONITH of failed nodes is enabled
Nov 13 13:47:40 [15882] vm1    pengine: (    unpack.c:109   )   debug: unpack_config: 	Stop all active resources: false
Nov 13 13:47:40 [15882] vm1    pengine: (    unpack.c:113   )   debug: unpack_config: 	Cluster is symmetric - resources can run anywhere by default
Nov 13 13:47:40 [15882] vm1    pengine: (    unpack.c:118   )   debug: unpack_config: 	Default stickiness: 0
Nov 13 13:47:40 [15882] vm1    pengine: (    unpack.c:152   )   debug: unpack_config: 	On loss of CCM Quorum: Freeze resources
Nov 13 13:47:40 [15882] vm1    pengine: (    unpack.c:201   )   debug: unpack_config: 	Node scores: 'red' = -INFINITY, 'yellow' = 0, 'green' = 0
Nov 13 13:47:40 [15882] vm1    pengine: (    unpack.c:509   )   debug: unpack_domains: 	Unpacking domains
Nov 13 13:47:40 [15882] vm1    pengine: (    unpack.c:1220  )    info: determine_online_status_fencing: 	Node vm3 is active
Nov 13 13:47:40 [15882] vm1    pengine: (    unpack.c:1336  )    info: determine_online_status: 	Node vm3 is online
Nov 13 13:47:40 [15882] vm1    pengine: (    unpack.c:1220  )    info: determine_online_status_fencing: 	Node vm1 is active
Nov 13 13:47:40 [15882] vm1    pengine: (    unpack.c:1336  )    info: determine_online_status: 	Node vm1 is online
Nov 13 13:47:40 [15882] vm1    pengine: (    unpack.c:1220  )    info: determine_online_status_fencing: 	Node vm2 is active
Nov 13 13:47:40 [15882] vm1    pengine: (    unpack.c:1336  )    info: determine_online_status: 	Node vm2 is online
Nov 13 13:47:40 [15882] vm1    pengine: (    unpack.c:2456  )   debug: determine_op_status: 	pDummy_monitor_10000 on vm3 returned 'not running' (7) instead of the expected value: 'ok' (0)
Nov 13 13:47:40 [15882] vm1    pengine: (    unpack.c:2334  ) warning: unpack_rsc_op_failure: 	Processing failed op monitor for pDummy on vm3: not running (7)
Nov 13 13:47:40 [15882] vm1    pengine: (    unpack.c:64    ) warning: pe_fence_node: 	Node vm3 will be fenced because of resource failure(s)
Nov 13 13:47:40 [15882] vm1    pengine: (    native.c:446   )    info: native_print: 	F1	(stonith:external/libvirt):	Started vm1 
Nov 13 13:47:40 [15882] vm1    pengine: (    native.c:446   )    info: native_print: 	pDummy	(ocf::pacemaker:Dummy):	FAILED vm3 
Nov 13 13:47:40 [15882] vm1    pengine: (  allocate.c:593   )   debug: common_apply_stickiness: 	Resource F1: preferring current location (node=vm1, weight=1000000)
Nov 13 13:47:40 [15882] vm1    pengine: (  allocate.c:593   )   debug: common_apply_stickiness: 	Resource pDummy: preferring current location (node=vm3, weight=1000000)
Nov 13 13:47:40 [15882] vm1    pengine: (     utils.c:1497  )    info: get_failcount_full: 	pDummy has failed 1 times on vm3
Nov 13 13:47:40 [15883] vm1       crmd: (   pengine.c:299   )   debug: do_pe_invoke_callback: 	Invoking the PE: query=71, ref=pe_calc-dc-1384318060-53, seq=12, quorate=1
Nov 13 13:47:40 [15882] vm1    pengine: (  allocate.c:622   ) warning: common_apply_stickiness: 	Forcing pDummy away from vm3 after 1 failures (max=1)
Nov 13 13:47:40 [15882] vm1    pengine: (     utils.c:386   )   debug: native_assign_node: 	Assigning vm1 to F1
Nov 13 13:47:40 [15882] vm1    pengine: (     utils.c:386   )   debug: native_assign_node: 	Assigning vm2 to pDummy
Nov 13 13:47:40 [15882] vm1    pengine: (    native.c:788   )    info: RecurringOp: 	 Start recurring monitor (10s) for pDummy on vm2
Nov 13 13:47:40 [15882] vm1    pengine: (  allocate.c:1346  ) warning: stage6: 	Scheduling Node vm3 for STONITH
Nov 13 13:47:40 [15882] vm1    pengine: (    native.c:2585  )  notice: native_stop_constraints: 	Stop of failed resource pDummy is implicit after vm3 is fenced
Nov 13 13:47:40 [15882] vm1    pengine: (    native.c:1987  )    info: LogActions: 	Leave   F1	(Started vm1)
Nov 13 13:47:40 [15882] vm1    pengine: (    native.c:1996  )  notice: LogActions: 	Recover pDummy	(Started vm3 -> vm2)
Nov 13 13:47:40 [15883] vm1       crmd: (       fsa.c:193   )   debug: s_crmd_fsa: 	Processing I_PE_SUCCESS: [ state=S_POLICY_ENGINE cause=C_IPC_MESSAGE origin=handle_response ]
Nov 13 13:47:40 [15883] vm1       crmd: (       fsa.c:502   )    info: do_state_transition: 	State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
Nov 13 13:47:40 [15883] vm1       crmd: (    unpack.c:230   )   debug: unpack_graph: 	Unpacked transition 8: 6 actions in 6 synapses
Nov 13 13:47:40 [15883] vm1       crmd: (   tengine.c:208   )    info: do_te_invoke: 	Processing graph 8 (ref=pe_calc-dc-1384318060-53) derived from /var/lib/pacemaker/pengine/pe-warn-0.bz2
Nov 13 13:47:40 [15883] vm1       crmd: (te_actions.c:140   )  notice: te_fence_node: 	Executing reboot fencing operation (13) on vm3 (timeout=60000)
Nov 13 13:47:40 [15879] vm1 stonith-ng: (      main.c:121   )   trace: st_ipc_dispatch: 	Flags 0/0 for command 112 from crmd.15883
Nov 13 13:47:40 [15879] vm1 stonith-ng: (      main.c:133   )   trace: st_ipc_dispatch: 	Client[inbound]   <stonith_command __name__="stonith_command" t="stonith-ng" st_async_id="e6cedc3f-e233-4a66-9291-ce359bd76aad" st_op="st_fence" st_callid="8" st_callopt="0" st_timeout="60" st_clientid="e6cedc3f-e233-4a66-9291-ce359bd76aad" st_clientname="crmd.15883" st_clientnode="vm1">
Nov 13 13:47:40 [15879] vm1 stonith-ng: (      main.c:133   )   trace: st_ipc_dispatch: 	Client[inbound]     <st_calldata>
Nov 13 13:47:40 [15879] vm1 stonith-ng: (      main.c:133   )   trace: st_ipc_dispatch: 	Client[inbound]       <stonith_api_fence st_target="vm3" st_device_action="reboot" st_timeout="60" st_tolerance="0"/>
Nov 13 13:47:40 [15879] vm1 stonith-ng: (      main.c:133   )   trace: st_ipc_dispatch: 	Client[inbound]     </st_calldata>
Nov 13 13:47:40 [15879] vm1 stonith-ng: (      main.c:133   )   trace: st_ipc_dispatch: 	Client[inbound]   </stonith_command>
Nov 13 13:47:40 [15879] vm1 stonith-ng: (  commands.c:2049  )   debug: stonith_command: 	Processing st_fence 112 from crmd.15883 (               0)
Nov 13 13:47:40 [15879] vm1 stonith-ng: (  commands.c:1875  )  notice: handle_request: 	Client crmd.15883.e6cedc3f wants to fence (reboot) 'vm3' with device '(any)'
Nov 13 13:47:40 [15879] vm1 stonith-ng: (    remote.c:1412  )   trace: stonith_check_fence_tolerance: 	tolerance=0, remote_op_list=0x193c5e0
Nov 13 13:47:40 [15879] vm1 stonith-ng: (    remote.c:593   )   trace: create_remote_stonith_op: 	Created 588ca7d3-cb01-4e1b-9c7d-5fcdd5b66a27
Nov 13 13:47:40 [15879] vm1 stonith-ng: (    remote.c:623   )   trace: create_remote_stonith_op: 	Generated new stonith op: 588ca7d3-cb01-4e1b-9c7d-5fcdd5b66a27 - reboot of vm3 for crmd.15883
Nov 13 13:47:40 [15879] vm1 stonith-ng: (    remote.c:474   )   trace: merge_duplicates: 	Must be for different clients: crmd.15883
Nov 13 13:47:40 [15879] vm1 stonith-ng: (    remote.c:437   )   trace: stonith_topology_next: 	Attempting fencing level 1 for vm3 (1 devices) - crmd.15883@vm1.588ca7d3
Nov 13 13:47:40 [15879] vm1 stonith-ng: (    remote.c:689   )  notice: initiate_remote_stonith_op: 	Initiating remote operation reboot for vm3: 588ca7d3-cb01-4e1b-9c7d-5fcdd5b66a27 (0)
Nov 13 13:47:40 [15879] vm1 stonith-ng: (  commands.c:2063  )    info: stonith_command: 	Processed st_fence from crmd.15883: Operation now in progress (-115)
Nov 13 13:47:40 [15883] vm1       crmd: (     graph.c:336   )   debug: run_graph: 	Transition 8 (Complete=0, Pending=1, Fired=1, Skipped=0, Incomplete=5, Source=/var/lib/pacemaker/pengine/pe-warn-0.bz2): In-progress
Nov 13 13:47:40 [15882] vm1    pengine: (   pengine.c:175   ) warning: process_pe_message: 	Calculated Transition 8: /var/lib/pacemaker/pengine/pe-warn-0.bz2
Nov 13 13:47:40 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]   <stonith_command __name__="stonith_command" t="stonith-ng" st_async_id="588ca7d3-cb01-4e1b-9c7d-5fcdd5b66a27" st_op="st_query" st_callid="8" st_callopt="0" st_remote_op="588ca7d3-cb01-4e1b-9c7d-5fcdd5b66a27" st_target="vm3" st_device_action="reboot" st_origin="vm1" st_clientid="e6cedc3f-e233-4a66-9291-ce359bd76aad" st_clientname="crmd.15883" st_timeout="60" src="vm1"/>
Nov 13 13:47:40 [15879] vm1 stonith-ng: (  commands.c:2049  )   debug: stonith_command: 	Processing st_query 0 from vm1 (               0)
Nov 13 13:47:40 [15879] vm1 stonith-ng: (    remote.c:576   )   debug: create_remote_stonith_op: 	588ca7d3-cb01-4e1b-9c7d-5fcdd5b66a27 already exists
Nov 13 13:47:40 [15879] vm1 stonith-ng: (  commands.c:1296  )   debug: stonith_query: 	Query   <stonith_command __name__="stonith_command" t="stonith-ng" st_async_id="588ca7d3-cb01-4e1b-9c7d-5fcdd5b66a27" st_op="st_query" st_callid="8" st_callopt="0" st_remote_op="588ca7d3-cb01-4e1b-9c7d-5fcdd5b66a27" st_target="vm3" st_device_action="reboot" st_origin="vm1" st_clientid="e6cedc3f-e233-4a66-9291-ce359bd76aad" st_clientname="crmd.15883" st_timeout="60" src="vm1"/>
Nov 13 13:47:40 [15879] vm1 stonith-ng: (  commands.c:1650  )   trace: stonith_construct_reply: 	Creating a basic reply
Nov 13 13:47:40 [15879] vm1 stonith-ng: (  commands.c:1199  )   debug: get_capable_devices: 	Searching through 1 devices to see what is capable of action (reboot) for target vm3
Nov 13 13:47:40 [15879] vm1 stonith-ng: (  commands.c:1118  )  notice: can_fence_host_with_device: 	F1 can fence vm3: dynamic-list
Nov 13 13:47:40 [15879] vm1 stonith-ng: (  commands.c:1037  )   debug: search_devices_record_result: 	Finished Search. 1 devices can perform action (reboot) on node vm3
Nov 13 13:47:40 [15879] vm1 stonith-ng: (  commands.c:1254  )   debug: stonith_query_capable_device_cb: 	Found 1 matching devices for 'vm3'
Nov 13 13:47:40 [15879] vm1 stonith-ng: (  commands.c:1260  )   trace: stonith_query_capable_device_cb: 	Attaching query list output
Nov 13 13:47:40 [15879] vm1 stonith-ng: (  commands.c:2063  )    info: stonith_command: 	Processed st_query from vm1: OK (0)
Nov 13 13:47:40 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]   <st-reply st_origin="stonith_construct_reply" t="stonith-ng" st_rc="0" st_op="st_query" st_callid="8" st_clientid="e6cedc3f-e233-4a66-9291-ce359bd76aad" st_clientname="crmd.15883" st_remote_op="588ca7d3-cb01-4e1b-9c7d-5fcdd5b66a27" st_callopt="0" src="vm2">
Nov 13 13:47:40 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]     <st_calldata>
Nov 13 13:47:40 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]       <stonith_query_capable_device_cb st_target="vm3" st-available-devices="1">
Nov 13 13:47:40 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]         <st_device_id id="F1" namespace="heartbeat" agent="fence_legacy" st_monitor_verified="0"/>
Nov 13 13:47:40 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]       </stonith_query_capable_device_cb>
Nov 13 13:47:40 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]     </st_calldata>
Nov 13 13:47:40 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]   </st-reply>
Nov 13 13:47:40 [15879] vm1 stonith-ng: (  commands.c:2049  )   debug: stonith_command: 	Processing st_query reply 0 from vm2 (               0)
Nov 13 13:47:40 [15879] vm1 stonith-ng: (    remote.c:1154  )    info: process_remote_stonith_query: 	Query result 1 of 3 from vm2 (1 devices)
Nov 13 13:47:40 [15879] vm1 stonith-ng: (    remote.c:1196  )   trace: process_remote_stonith_query: 	All topology devices found
Nov 13 13:47:40 [15879] vm1 stonith-ng: (    remote.c:943   )   trace: call_remote_stonith: 	State for vm3.crmd.158: 588ca7d3-cb01-4e1b-9c7d-5fcdd5b66a27 0
Nov 13 13:47:40 [15879] vm1 stonith-ng: (    remote.c:904   )   trace: report_timeout_period: 	Reporting timeout for crmd.15883.588ca7d3
Nov 13 13:47:40 [15879] vm1 stonith-ng: (      main.c:343   )   trace: do_stonith_async_timeout_update: 	timeout update is 72 for client e6cedc3f-e233-4a66-9291-ce359bd76aad and call id 8
Nov 13 13:47:40 [15879] vm1 stonith-ng: (    remote.c:955   )    info: call_remote_stonith: 	Total remote op timeout set to 60 for fencing of node vm3 for crmd.15883.588ca7d3
Nov 13 13:47:40 [15879] vm1 stonith-ng: (    remote.c:786   )   trace: stonith_choose_peer: 	Checking for someone to fence vm3 with F1
Nov 13 13:47:40 [15879] vm1 stonith-ng: (    remote.c:757   )   trace: find_best_peer: 	Removing F1 from vm2 (1 remaining)
Nov 13 13:47:40 [15879] vm1 stonith-ng: (    remote.c:984   )    info: call_remote_stonith: 	Requesting that vm2 perform op reboot vm3 with F1 for crmd.15883 (72s)
Nov 13 13:47:40 [15879] vm1 stonith-ng: (  commands.c:2063  )    info: stonith_command: 	Processed st_query reply from vm2: OK (0)
Nov 13 13:47:40 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]   <st-reply st_origin="stonith_construct_reply" t="stonith-ng" st_rc="0" st_op="st_query" st_callid="8" st_clientid="e6cedc3f-e233-4a66-9291-ce359bd76aad" st_clientname="crmd.15883" st_remote_op="588ca7d3-cb01-4e1b-9c7d-5fcdd5b66a27" st_callopt="0" src="vm3">
Nov 13 13:47:40 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]     <st_calldata>
Nov 13 13:47:40 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]       <stonith_query_capable_device_cb st_target="vm3" st-available-devices="0"/>
Nov 13 13:47:40 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]     </st_calldata>
Nov 13 13:47:40 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]   </st-reply>
Nov 13 13:47:40 [15879] vm1 stonith-ng: (  commands.c:2049  )   debug: stonith_command: 	Processing st_query reply 0 from vm3 (               0)
Nov 13 13:47:40 [15879] vm1 stonith-ng: (    remote.c:1137  )   trace: process_remote_stonith_query: 	Query result from vm3 (0 devices)
Nov 13 13:47:40 [15879] vm1 stonith-ng: (  commands.c:2063  )    info: stonith_command: 	Processed st_query reply from vm3: OK (0)
Nov 13 13:47:40 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]   <st-reply st_origin="stonith_construct_reply" t="stonith-ng" st_rc="0" st_op="st_query" st_callid="8" st_clientid="e6cedc3f-e233-4a66-9291-ce359bd76aad" st_clientname="crmd.15883" st_remote_op="588ca7d3-cb01-4e1b-9c7d-5fcdd5b66a27" st_callopt="0" src="vm1">
Nov 13 13:47:40 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]     <st_calldata>
Nov 13 13:47:40 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]       <stonith_query_capable_device_cb st_target="vm3" st-available-devices="1">
Nov 13 13:47:40 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]         <st_device_id id="F1" namespace="heartbeat" agent="fence_legacy" st_monitor_verified="1"/>
Nov 13 13:47:40 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]       </stonith_query_capable_device_cb>
Nov 13 13:47:40 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]     </st_calldata>
Nov 13 13:47:40 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]   </st-reply>
Nov 13 13:47:40 [15879] vm1 stonith-ng: (  commands.c:2049  )   debug: stonith_command: 	Processing st_query reply 0 from vm1 (               0)
Nov 13 13:47:40 [15879] vm1 stonith-ng: (    remote.c:1154  )    info: process_remote_stonith_query: 	Query result 3 of 3 from vm1 (1 devices)
Nov 13 13:47:40 [15879] vm1 stonith-ng: (    remote.c:1177  )   trace: process_remote_stonith_query: 	Peer vm1 has confirmed a verified device F1
Nov 13 13:47:40 [15879] vm1 stonith-ng: (  commands.c:2063  )    info: stonith_command: 	Processed st_query reply from vm1: OK (0)
Nov 13 13:47:43 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]   <st-reply st_origin="vm1" t="stonith-ng" st_op="st_fence" st_device_id="F1" st_remote_op="588ca7d3-cb01-4e1b-9c7d-5fcdd5b66a27" st_clientid="e6cedc3f-e233-4a66-9291-ce359bd76aad" st_clientname="crmd.15883" st_target="vm3" st_device_action="st_fence" st_callid="8" st_callopt="0" st_rc="-201" st_output="Performing: stonith -t external/libvirt -T reset vm3\nfailed: vm3 5\n" src="vm2"/>
Nov 13 13:47:43 [15879] vm1 stonith-ng: (  commands.c:2049  )   debug: stonith_command: 	Processing st_fence reply 0 from vm2 (               0)
Nov 13 13:47:43 [15879] vm1 stonith-ng: (    remote.c:1303  )  notice: process_remote_stonith_exec: 	Call to F1 for vm3 on behalf of crmd.15883@vm1: Generic Pacemaker error (-201)
Nov 13 13:47:43 [15879] vm1 stonith-ng: (    remote.c:443   )  notice: stonith_topology_next: 	All fencing options to fence vm3 for crmd.15883@vm1.588ca7d3 failed
Nov 13 13:47:43 [15879] vm1 stonith-ng: (    remote.c:160   )   trace: bcast_result_to_peers: 	Broadcasting result to peers
Nov 13 13:47:43 [15879] vm1 stonith-ng: (  commands.c:2063  )    info: stonith_command: 	Processed st_fence reply from vm2: OK (0)
Nov 13 13:47:43 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]   <st-reply t="st_notify" subt="broadcast" st_op="st_notify" count="7" src="vm1">
Nov 13 13:47:43 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]     <st_calldata>
Nov 13 13:47:43 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]       <st_notify_fence state="4" st_rc="-201" st_target="vm3" st_device_action="reboot" st_delegate="vm2" st_remote_op="588ca7d3-cb01-4e1b-9c7d-5fcdd5b66a27" st_origin="vm1" st_clientid="e6cedc3f-e233-4a66-9291-ce359bd76aad" st_clientname="crmd.15883"/>
Nov 13 13:47:43 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]     </st_calldata>
Nov 13 13:47:43 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]   </st-reply>
Nov 13 13:47:43 [15879] vm1 stonith-ng: (  commands.c:2049  )   debug: stonith_command: 	Processing st_notify reply 0 from vm1 (               0)
Nov 13 13:47:43 [15879] vm1 stonith-ng: (    remote.c:1281  )   debug: process_remote_stonith_exec: 	Marking call to reboot for vm3 on behalf of crmd.15883@588ca7d3-cb01-4e1b-9c7d-5fcdd5b66a27.vm1: Generic Pacemaker error (-201)
Nov 13 13:47:43 [15879] vm1 stonith-ng: (    remote.c:297   )   error: remote_op_done: 	Operation reboot of vm3 by vm2 for crmd.15883@vm1.588ca7d3: Generic Pacemaker error
Nov 13 13:47:43 [15879] vm1 stonith-ng: (  commands.c:1650  )   trace: stonith_construct_reply: 	Creating a basic reply
Nov 13 13:47:43 [15879] vm1 stonith-ng: (  commands.c:1666  )   trace: stonith_construct_reply: 	Attaching reply output
Nov 13 13:47:43 [15879] vm1 stonith-ng: (      main.c:241   )   trace: do_local_reply: 	Sending response
Nov 13 13:47:43 [15879] vm1 stonith-ng: (      main.c:244   )   trace: do_local_reply: 	Sending callback to request originator
Nov 13 13:47:43 [15879] vm1 stonith-ng: (      main.c:263   )   trace: do_local_reply: 	Sending an event to crmd.15883 
Nov 13 13:47:43 [15879] vm1 stonith-ng: (      main.c:369   )   trace: do_stonith_notify: 	Notifying clients
Nov 13 13:47:43 [15879] vm1 stonith-ng: (      main.c:318   )   trace: stonith_notify_client: 	Sent st_notify_fence notification to client crmd.15883.e6cedc
Nov 13 13:47:43 [15879] vm1 stonith-ng: (      main.c:372   )   trace: do_stonith_notify: 	Notify complete
Nov 13 13:47:43 [15879] vm1 stonith-ng: (    remote.c:77    )   trace: free_remote_query: 	Free'ing query result from vm1
Nov 13 13:47:43 [15879] vm1 stonith-ng: (    remote.c:77    )   trace: free_remote_query: 	Free'ing query result from vm2
Nov 13 13:47:43 [15879] vm1 stonith-ng: (  commands.c:2063  )    info: stonith_command: 	Processed st_notify reply from vm1: OK (0)
Nov 13 13:47:43 [15883] vm1       crmd: (te_callbacks:411   )  notice: tengine_stonith_callback: 	Stonith operation 8/13:8:0:154fb289-24e8-407e-9a03-69a510480b60: Generic Pacemaker error (-201)
Nov 13 13:47:43 [15883] vm1       crmd: (te_callbacks:462   )  notice: tengine_stonith_callback: 	Stonith operation 8 for vm3 failed (Generic Pacemaker error): aborting transition.
Nov 13 13:47:43 [15883] vm1       crmd: (  te_utils.c:425   )    info: abort_transition_graph: 	tengine_stonith_callback:463 - Triggered transition abort (complete=0) : Stonith failed
Nov 13 13:47:43 [15883] vm1       crmd: (     utils.c:271   )   debug: update_abort_priority: 	Abort priority upgraded from 0 to 1000000
Nov 13 13:47:43 [15883] vm1       crmd: (     utils.c:281   )   debug: update_abort_priority: 	Abort action done superceeded by restart
Nov 13 13:47:43 [15883] vm1       crmd: (  te_utils.c:167   )  notice: tengine_stonith_notify: 	Peer vm3 was not terminated (reboot) by vm2 for vm1: Generic Pacemaker error (ref=588ca7d3-cb01-4e1b-9c7d-5fcdd5b66a27) by client crmd.15883
Nov 13 13:47:43 [15883] vm1       crmd: (     graph.c:336   )  notice: run_graph: 	Transition 8 (Complete=1, Pending=0, Fired=0, Skipped=5, Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-warn-0.bz2): Stopped
Nov 13 13:47:43 [15883] vm1       crmd: (  te_utils.c:349   )   debug: te_graph_trigger: 	Transition 8 is now complete
Nov 13 13:47:43 [15883] vm1       crmd: (te_actions.c:654   )   debug: notify_crmd: 	Processing transition completion in state S_TRANSITION_ENGINE
Nov 13 13:47:43 [15883] vm1       crmd: (     utils.c:192   )   debug: crm_timer_start: 	Started New Transition Timer (I_PE_CALC:2000ms), src=114
Nov 13 13:47:43 [15883] vm1       crmd: (te_actions.c:699   )   debug: notify_crmd: 	Transition 8 status: restart - Stonith failed
Nov 13 13:47:45 [15883] vm1       crmd: (     utils.c:120   )    info: crm_timer_popped: 	New Transition Timer (I_PE_CALC) just popped (2000ms)
Nov 13 13:47:45 [15883] vm1       crmd: (       fsa.c:193   )   debug: s_crmd_fsa: 	Processing I_PE_CALC: [ state=S_TRANSITION_ENGINE cause=C_TIMER_POPPED origin=crm_timer_popped ]
Nov 13 13:47:45 [15883] vm1       crmd: (       fsa.c:502   )    info: do_state_transition: 	State transition S_TRANSITION_ENGINE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_TIMER_POPPED origin=crm_timer_popped ]
Nov 13 13:47:45 [15883] vm1       crmd: (       fsa.c:599   )    info: do_state_transition: 	Progressed to state S_POLICY_ENGINE after C_TIMER_POPPED
Nov 13 13:47:45 [15883] vm1       crmd: (       fsa.c:610   )   debug: do_state_transition: 	All 3 cluster nodes are eligible to run resources.
Nov 13 13:47:45 [15883] vm1       crmd: (   pengine.c:231   )   debug: do_pe_invoke: 	Query 72: Requesting the current CIB: S_POLICY_ENGINE
Nov 13 13:47:45 [15878] vm1        cib: ( callbacks.c:761   )    info: cib_process_request: 	Completed cib_query operation for section 'all': OK (rc=0, origin=local/crmd/72, version=0.8.20)
Nov 13 13:47:45 [15882] vm1    pengine: (   pengine.c:116   )    info: process_pe_message: 	Input has not changed since last time, not saving to disk
Nov 13 13:47:45 [15882] vm1    pengine: (    unpack.c:98    )   debug: unpack_config: 	STONITH timeout: 60000
Nov 13 13:47:45 [15882] vm1    pengine: (    unpack.c:102   )   debug: unpack_config: 	STONITH of failed nodes is enabled
Nov 13 13:47:45 [15882] vm1    pengine: (    unpack.c:109   )   debug: unpack_config: 	Stop all active resources: false
Nov 13 13:47:45 [15882] vm1    pengine: (    unpack.c:113   )   debug: unpack_config: 	Cluster is symmetric - resources can run anywhere by default
Nov 13 13:47:45 [15882] vm1    pengine: (    unpack.c:118   )   debug: unpack_config: 	Default stickiness: 0
Nov 13 13:47:45 [15882] vm1    pengine: (    unpack.c:152   )   debug: unpack_config: 	On loss of CCM Quorum: Freeze resources
Nov 13 13:47:45 [15882] vm1    pengine: (    unpack.c:201   )   debug: unpack_config: 	Node scores: 'red' = -INFINITY, 'yellow' = 0, 'green' = 0
Nov 13 13:47:45 [15882] vm1    pengine: (    unpack.c:509   )   debug: unpack_domains: 	Unpacking domains
Nov 13 13:47:45 [15883] vm1       crmd: (   pengine.c:299   )   debug: do_pe_invoke_callback: 	Invoking the PE: query=72, ref=pe_calc-dc-1384318065-54, seq=12, quorate=1
Nov 13 13:47:45 [15882] vm1    pengine: (    unpack.c:1220  )    info: determine_online_status_fencing: 	Node vm3 is active
Nov 13 13:47:45 [15882] vm1    pengine: (    unpack.c:1336  )    info: determine_online_status: 	Node vm3 is online
Nov 13 13:47:45 [15882] vm1    pengine: (    unpack.c:1220  )    info: determine_online_status_fencing: 	Node vm1 is active
Nov 13 13:47:45 [15882] vm1    pengine: (    unpack.c:1336  )    info: determine_online_status: 	Node vm1 is online
Nov 13 13:47:45 [15882] vm1    pengine: (    unpack.c:1220  )    info: determine_online_status_fencing: 	Node vm2 is active
Nov 13 13:47:45 [15882] vm1    pengine: (    unpack.c:1336  )    info: determine_online_status: 	Node vm2 is online
Nov 13 13:47:45 [15882] vm1    pengine: (    unpack.c:2456  )   debug: determine_op_status: 	pDummy_monitor_10000 on vm3 returned 'not running' (7) instead of the expected value: 'ok' (0)
Nov 13 13:47:45 [15882] vm1    pengine: (    unpack.c:2334  ) warning: unpack_rsc_op_failure: 	Processing failed op monitor for pDummy on vm3: not running (7)
Nov 13 13:47:45 [15882] vm1    pengine: (    unpack.c:64    ) warning: pe_fence_node: 	Node vm3 will be fenced because of resource failure(s)
Nov 13 13:47:45 [15882] vm1    pengine: (    native.c:446   )    info: native_print: 	F1	(stonith:external/libvirt):	Started vm1 
Nov 13 13:47:45 [15882] vm1    pengine: (    native.c:446   )    info: native_print: 	pDummy	(ocf::pacemaker:Dummy):	FAILED vm3 
Nov 13 13:47:45 [15882] vm1    pengine: (  allocate.c:593   )   debug: common_apply_stickiness: 	Resource F1: preferring current location (node=vm1, weight=1000000)
Nov 13 13:47:45 [15882] vm1    pengine: (  allocate.c:593   )   debug: common_apply_stickiness: 	Resource pDummy: preferring current location (node=vm3, weight=1000000)
Nov 13 13:47:45 [15882] vm1    pengine: (     utils.c:1497  )    info: get_failcount_full: 	pDummy has failed 1 times on vm3
Nov 13 13:47:45 [15882] vm1    pengine: (  allocate.c:622   ) warning: common_apply_stickiness: 	Forcing pDummy away from vm3 after 1 failures (max=1)
Nov 13 13:47:45 [15882] vm1    pengine: (     utils.c:386   )   debug: native_assign_node: 	Assigning vm1 to F1
Nov 13 13:47:45 [15882] vm1    pengine: (     utils.c:386   )   debug: native_assign_node: 	Assigning vm2 to pDummy
Nov 13 13:47:45 [15882] vm1    pengine: (    native.c:788   )    info: RecurringOp: 	 Start recurring monitor (10s) for pDummy on vm2
Nov 13 13:47:45 [15882] vm1    pengine: (  allocate.c:1346  ) warning: stage6: 	Scheduling Node vm3 for STONITH
Nov 13 13:47:45 [15882] vm1    pengine: (    native.c:2585  )  notice: native_stop_constraints: 	Stop of failed resource pDummy is implicit after vm3 is fenced
Nov 13 13:47:45 [15882] vm1    pengine: (    native.c:1987  )    info: LogActions: 	Leave   F1	(Started vm1)
Nov 13 13:47:45 [15882] vm1    pengine: (    native.c:1996  )  notice: LogActions: 	Recover pDummy	(Started vm3 -> vm2)
Nov 13 13:47:45 [15883] vm1       crmd: (       fsa.c:193   )   debug: s_crmd_fsa: 	Processing I_PE_SUCCESS: [ state=S_POLICY_ENGINE cause=C_IPC_MESSAGE origin=handle_response ]
Nov 13 13:47:45 [15883] vm1       crmd: (       fsa.c:502   )    info: do_state_transition: 	State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
Nov 13 13:47:45 [15883] vm1       crmd: (    unpack.c:230   )   debug: unpack_graph: 	Unpacked transition 9: 6 actions in 6 synapses
Nov 13 13:47:45 [15883] vm1       crmd: (   tengine.c:208   )    info: do_te_invoke: 	Processing graph 9 (ref=pe_calc-dc-1384318065-54) derived from /var/lib/pacemaker/pengine/pe-warn-0.bz2
Nov 13 13:47:45 [15883] vm1       crmd: (te_actions.c:140   )  notice: te_fence_node: 	Executing reboot fencing operation (13) on vm3 (timeout=60000)
Nov 13 13:47:45 [15883] vm1       crmd: (     graph.c:336   )   debug: run_graph: 	Transition 9 (Complete=0, Pending=1, Fired=1, Skipped=0, Incomplete=5, Source=/var/lib/pacemaker/pengine/pe-warn-0.bz2): In-progress
Nov 13 13:47:45 [15879] vm1 stonith-ng: (      main.c:121   )   trace: st_ipc_dispatch: 	Flags 0/0 for command 115 from crmd.15883
Nov 13 13:47:45 [15879] vm1 stonith-ng: (      main.c:133   )   trace: st_ipc_dispatch: 	Client[inbound]   <stonith_command __name__="stonith_command" t="stonith-ng" st_async_id="e6cedc3f-e233-4a66-9291-ce359bd76aad" st_op="st_fence" st_callid="9" st_callopt="0" st_timeout="60" st_clientid="e6cedc3f-e233-4a66-9291-ce359bd76aad" st_clientname="crmd.15883" st_clientnode="vm1">
Nov 13 13:47:45 [15879] vm1 stonith-ng: (      main.c:133   )   trace: st_ipc_dispatch: 	Client[inbound]     <st_calldata>
Nov 13 13:47:45 [15879] vm1 stonith-ng: (      main.c:133   )   trace: st_ipc_dispatch: 	Client[inbound]       <stonith_api_fence st_target="vm3" st_device_action="reboot" st_timeout="60" st_tolerance="0"/>
Nov 13 13:47:45 [15879] vm1 stonith-ng: (      main.c:133   )   trace: st_ipc_dispatch: 	Client[inbound]     </st_calldata>
Nov 13 13:47:45 [15879] vm1 stonith-ng: (      main.c:133   )   trace: st_ipc_dispatch: 	Client[inbound]   </stonith_command>
Nov 13 13:47:45 [15879] vm1 stonith-ng: (  commands.c:2049  )   debug: stonith_command: 	Processing st_fence 115 from crmd.15883 (               0)
Nov 13 13:47:45 [15879] vm1 stonith-ng: (  commands.c:1875  )  notice: handle_request: 	Client crmd.15883.e6cedc3f wants to fence (reboot) 'vm3' with device '(any)'
Nov 13 13:47:45 [15879] vm1 stonith-ng: (    remote.c:1412  )   trace: stonith_check_fence_tolerance: 	tolerance=0, remote_op_list=0x193c5e0
Nov 13 13:47:45 [15879] vm1 stonith-ng: (    remote.c:593   )   trace: create_remote_stonith_op: 	Created a3379e0c-d206-4ced-9e7e-1c915f08a0ae
Nov 13 13:47:45 [15879] vm1 stonith-ng: (    remote.c:623   )   trace: create_remote_stonith_op: 	Generated new stonith op: a3379e0c-d206-4ced-9e7e-1c915f08a0ae - reboot of vm3 for crmd.15883
Nov 13 13:47:45 [15879] vm1 stonith-ng: (    remote.c:474   )   trace: merge_duplicates: 	Must be for different clients: crmd.15883
Nov 13 13:47:45 [15879] vm1 stonith-ng: (    remote.c:437   )   trace: stonith_topology_next: 	Attempting fencing level 1 for vm3 (1 devices) - crmd.15883@vm1.a3379e0c
Nov 13 13:47:45 [15879] vm1 stonith-ng: (    remote.c:689   )  notice: initiate_remote_stonith_op: 	Initiating remote operation reboot for vm3: a3379e0c-d206-4ced-9e7e-1c915f08a0ae (0)
Nov 13 13:47:45 [15879] vm1 stonith-ng: (  commands.c:2063  )    info: stonith_command: 	Processed st_fence from crmd.15883: Operation now in progress (-115)
Nov 13 13:47:45 [15882] vm1    pengine: (   pengine.c:175   ) warning: process_pe_message: 	Calculated Transition 9: /var/lib/pacemaker/pengine/pe-warn-0.bz2
Nov 13 13:47:45 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]   <stonith_command __name__="stonith_command" t="stonith-ng" st_async_id="a3379e0c-d206-4ced-9e7e-1c915f08a0ae" st_op="st_query" st_callid="9" st_callopt="0" st_remote_op="a3379e0c-d206-4ced-9e7e-1c915f08a0ae" st_target="vm3" st_device_action="reboot" st_origin="vm1" st_clientid="e6cedc3f-e233-4a66-9291-ce359bd76aad" st_clientname="crmd.15883" st_timeout="60" src="vm1"/>
Nov 13 13:47:45 [15879] vm1 stonith-ng: (  commands.c:2049  )   debug: stonith_command: 	Processing st_query 0 from vm1 (               0)
Nov 13 13:47:45 [15879] vm1 stonith-ng: (    remote.c:576   )   debug: create_remote_stonith_op: 	a3379e0c-d206-4ced-9e7e-1c915f08a0ae already exists
Nov 13 13:47:45 [15879] vm1 stonith-ng: (  commands.c:1296  )   debug: stonith_query: 	Query   <stonith_command __name__="stonith_command" t="stonith-ng" st_async_id="a3379e0c-d206-4ced-9e7e-1c915f08a0ae" st_op="st_query" st_callid="9" st_callopt="0" st_remote_op="a3379e0c-d206-4ced-9e7e-1c915f08a0ae" st_target="vm3" st_device_action="reboot" st_origin="vm1" st_clientid="e6cedc3f-e233-4a66-9291-ce359bd76aad" st_clientname="crmd.15883" st_timeout="60" src="vm1"/>
Nov 13 13:47:45 [15879] vm1 stonith-ng: (  commands.c:1650  )   trace: stonith_construct_reply: 	Creating a basic reply
Nov 13 13:47:45 [15879] vm1 stonith-ng: (  commands.c:1199  )   debug: get_capable_devices: 	Searching through 1 devices to see what is capable of action (reboot) for target vm3
Nov 13 13:47:45 [15879] vm1 stonith-ng: (  commands.c:1118  )  notice: can_fence_host_with_device: 	F1 can fence vm3: dynamic-list
Nov 13 13:47:45 [15879] vm1 stonith-ng: (  commands.c:1037  )   debug: search_devices_record_result: 	Finished Search. 1 devices can perform action (reboot) on node vm3
Nov 13 13:47:45 [15879] vm1 stonith-ng: (  commands.c:1254  )   debug: stonith_query_capable_device_cb: 	Found 1 matching devices for 'vm3'
Nov 13 13:47:45 [15879] vm1 stonith-ng: (  commands.c:1260  )   trace: stonith_query_capable_device_cb: 	Attaching query list output
Nov 13 13:47:45 [15879] vm1 stonith-ng: (  commands.c:2063  )    info: stonith_command: 	Processed st_query from vm1: OK (0)
Nov 13 13:47:45 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]   <st-reply st_origin="stonith_construct_reply" t="stonith-ng" st_rc="0" st_op="st_query" st_callid="9" st_clientid="e6cedc3f-e233-4a66-9291-ce359bd76aad" st_clientname="crmd.15883" st_remote_op="a3379e0c-d206-4ced-9e7e-1c915f08a0ae" st_callopt="0" src="vm2">
Nov 13 13:47:45 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]     <st_calldata>
Nov 13 13:47:45 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]       <stonith_query_capable_device_cb st_target="vm3" st-available-devices="1">
Nov 13 13:47:45 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]         <st_device_id id="F1" namespace="heartbeat" agent="fence_legacy" st_monitor_verified="0"/>
Nov 13 13:47:45 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]       </stonith_query_capable_device_cb>
Nov 13 13:47:45 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]     </st_calldata>
Nov 13 13:47:45 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]   </st-reply>
Nov 13 13:47:45 [15879] vm1 stonith-ng: (  commands.c:2049  )   debug: stonith_command: 	Processing st_query reply 0 from vm2 (               0)
Nov 13 13:47:45 [15879] vm1 stonith-ng: (    remote.c:1154  )    info: process_remote_stonith_query: 	Query result 1 of 3 from vm2 (1 devices)
Nov 13 13:47:45 [15879] vm1 stonith-ng: (    remote.c:1196  )   trace: process_remote_stonith_query: 	All topology devices found
Nov 13 13:47:45 [15879] vm1 stonith-ng: (    remote.c:943   )   trace: call_remote_stonith: 	State for vm3.crmd.158: a3379e0c-d206-4ced-9e7e-1c915f08a0ae 0
Nov 13 13:47:45 [15879] vm1 stonith-ng: (    remote.c:904   )   trace: report_timeout_period: 	Reporting timeout for crmd.15883.a3379e0c
Nov 13 13:47:45 [15879] vm1 stonith-ng: (      main.c:343   )   trace: do_stonith_async_timeout_update: 	timeout update is 72 for client e6cedc3f-e233-4a66-9291-ce359bd76aad and call id 9
Nov 13 13:47:45 [15879] vm1 stonith-ng: (    remote.c:955   )    info: call_remote_stonith: 	Total remote op timeout set to 60 for fencing of node vm3 for crmd.15883.a3379e0c
Nov 13 13:47:45 [15879] vm1 stonith-ng: (    remote.c:786   )   trace: stonith_choose_peer: 	Checking for someone to fence vm3 with F1
Nov 13 13:47:45 [15879] vm1 stonith-ng: (    remote.c:757   )   trace: find_best_peer: 	Removing F1 from vm2 (1 remaining)
Nov 13 13:47:45 [15879] vm1 stonith-ng: (    remote.c:984   )    info: call_remote_stonith: 	Requesting that vm2 perform op reboot vm3 with F1 for crmd.15883 (72s)
Nov 13 13:47:45 [15879] vm1 stonith-ng: (  commands.c:2063  )    info: stonith_command: 	Processed st_query reply from vm2: OK (0)
Nov 13 13:47:45 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]   <st-reply st_origin="stonith_construct_reply" t="stonith-ng" st_rc="0" st_op="st_query" st_callid="9" st_clientid="e6cedc3f-e233-4a66-9291-ce359bd76aad" st_clientname="crmd.15883" st_remote_op="a3379e0c-d206-4ced-9e7e-1c915f08a0ae" st_callopt="0" src="vm3">
Nov 13 13:47:45 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]     <st_calldata>
Nov 13 13:47:45 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]       <stonith_query_capable_device_cb st_target="vm3" st-available-devices="0"/>
Nov 13 13:47:45 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]     </st_calldata>
Nov 13 13:47:45 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]   </st-reply>
Nov 13 13:47:45 [15879] vm1 stonith-ng: (  commands.c:2049  )   debug: stonith_command: 	Processing st_query reply 0 from vm3 (               0)
Nov 13 13:47:45 [15879] vm1 stonith-ng: (    remote.c:1137  )   trace: process_remote_stonith_query: 	Query result from vm3 (0 devices)
Nov 13 13:47:45 [15879] vm1 stonith-ng: (  commands.c:2063  )    info: stonith_command: 	Processed st_query reply from vm3: OK (0)
Nov 13 13:47:45 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]   <st-reply st_origin="stonith_construct_reply" t="stonith-ng" st_rc="0" st_op="st_query" st_callid="9" st_clientid="e6cedc3f-e233-4a66-9291-ce359bd76aad" st_clientname="crmd.15883" st_remote_op="a3379e0c-d206-4ced-9e7e-1c915f08a0ae" st_callopt="0" src="vm1">
Nov 13 13:47:45 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]     <st_calldata>
Nov 13 13:47:45 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]       <stonith_query_capable_device_cb st_target="vm3" st-available-devices="1">
Nov 13 13:47:45 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]         <st_device_id id="F1" namespace="heartbeat" agent="fence_legacy" st_monitor_verified="1"/>
Nov 13 13:47:45 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]       </stonith_query_capable_device_cb>
Nov 13 13:47:45 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]     </st_calldata>
Nov 13 13:47:45 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]   </st-reply>
Nov 13 13:47:45 [15879] vm1 stonith-ng: (  commands.c:2049  )   debug: stonith_command: 	Processing st_query reply 0 from vm1 (               0)
Nov 13 13:47:45 [15879] vm1 stonith-ng: (    remote.c:1154  )    info: process_remote_stonith_query: 	Query result 3 of 3 from vm1 (1 devices)
Nov 13 13:47:45 [15879] vm1 stonith-ng: (    remote.c:1177  )   trace: process_remote_stonith_query: 	Peer vm1 has confirmed a verified device F1
Nov 13 13:47:45 [15879] vm1 stonith-ng: (  commands.c:2063  )    info: stonith_command: 	Processed st_query reply from vm1: OK (0)
Nov 13 13:47:48 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]   <st-reply st_origin="vm1" t="stonith-ng" st_op="st_fence" st_device_id="F1" st_remote_op="a3379e0c-d206-4ced-9e7e-1c915f08a0ae" st_clientid="e6cedc3f-e233-4a66-9291-ce359bd76aad" st_clientname="crmd.15883" st_target="vm3" st_device_action="st_fence" st_callid="9" st_callopt="0" st_rc="-201" st_output="Performing: stonith -t external/libvirt -T reset vm3\nfailed: vm3 5\n" src="vm2"/>
Nov 13 13:47:48 [15879] vm1 stonith-ng: (  commands.c:2049  )   debug: stonith_command: 	Processing st_fence reply 0 from vm2 (               0)
Nov 13 13:47:48 [15879] vm1 stonith-ng: (    remote.c:1303  )  notice: process_remote_stonith_exec: 	Call to F1 for vm3 on behalf of crmd.15883@vm1: Generic Pacemaker error (-201)
Nov 13 13:47:48 [15879] vm1 stonith-ng: (    remote.c:443   )  notice: stonith_topology_next: 	All fencing options to fence vm3 for crmd.15883@vm1.a3379e0c failed
Nov 13 13:47:48 [15879] vm1 stonith-ng: (    remote.c:160   )   trace: bcast_result_to_peers: 	Broadcasting result to peers
Nov 13 13:47:48 [15879] vm1 stonith-ng: (  commands.c:2063  )    info: stonith_command: 	Processed st_fence reply from vm2: OK (0)
Nov 13 13:47:48 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]   <st-reply t="st_notify" subt="broadcast" st_op="st_notify" count="8" src="vm1">
Nov 13 13:47:48 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]     <st_calldata>
Nov 13 13:47:48 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]       <st_notify_fence state="4" st_rc="-201" st_target="vm3" st_device_action="reboot" st_delegate="vm2" st_remote_op="a3379e0c-d206-4ced-9e7e-1c915f08a0ae" st_origin="vm1" st_clientid="e6cedc3f-e233-4a66-9291-ce359bd76aad" st_clientname="crmd.15883"/>
Nov 13 13:47:48 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]     </st_calldata>
Nov 13 13:47:48 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]   </st-reply>
Nov 13 13:47:48 [15879] vm1 stonith-ng: (  commands.c:2049  )   debug: stonith_command: 	Processing st_notify reply 0 from vm1 (               0)
Nov 13 13:47:48 [15879] vm1 stonith-ng: (    remote.c:1281  )   debug: process_remote_stonith_exec: 	Marking call to reboot for vm3 on behalf of crmd.15883@a3379e0c-d206-4ced-9e7e-1c915f08a0ae.vm1: Generic Pacemaker error (-201)
Nov 13 13:47:48 [15879] vm1 stonith-ng: (    remote.c:297   )   error: remote_op_done: 	Operation reboot of vm3 by vm2 for crmd.15883@vm1.a3379e0c: Generic Pacemaker error
Nov 13 13:47:48 [15879] vm1 stonith-ng: (  commands.c:1650  )   trace: stonith_construct_reply: 	Creating a basic reply
Nov 13 13:47:48 [15879] vm1 stonith-ng: (  commands.c:1666  )   trace: stonith_construct_reply: 	Attaching reply output
Nov 13 13:47:48 [15879] vm1 stonith-ng: (      main.c:241   )   trace: do_local_reply: 	Sending response
Nov 13 13:47:48 [15879] vm1 stonith-ng: (      main.c:244   )   trace: do_local_reply: 	Sending callback to request originator
Nov 13 13:47:48 [15879] vm1 stonith-ng: (      main.c:263   )   trace: do_local_reply: 	Sending an event to crmd.15883 
Nov 13 13:47:48 [15883] vm1       crmd: (te_callbacks:411   )  notice: tengine_stonith_callback: 	Stonith operation 9/13:9:0:154fb289-24e8-407e-9a03-69a510480b60: Generic Pacemaker error (-201)
Nov 13 13:47:48 [15883] vm1       crmd: (te_callbacks:462   )  notice: tengine_stonith_callback: 	Stonith operation 9 for vm3 failed (Generic Pacemaker error): aborting transition.
Nov 13 13:47:48 [15883] vm1       crmd: (  te_utils.c:425   )    info: abort_transition_graph: 	tengine_stonith_callback:463 - Triggered transition abort (complete=0) : Stonith failed
Nov 13 13:47:48 [15883] vm1       crmd: (     utils.c:271   )   debug: update_abort_priority: 	Abort priority upgraded from 0 to 1000000
Nov 13 13:47:48 [15883] vm1       crmd: (     utils.c:281   )   debug: update_abort_priority: 	Abort action done superceeded by restart
Nov 13 13:47:48 [15883] vm1       crmd: (     graph.c:336   )  notice: run_graph: 	Transition 9 (Complete=1, Pending=0, Fired=0, Skipped=5, Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-warn-0.bz2): Stopped
Nov 13 13:47:48 [15883] vm1       crmd: (  te_utils.c:349   )   debug: te_graph_trigger: 	Transition 9 is now complete
Nov 13 13:47:48 [15883] vm1       crmd: (te_actions.c:654   )   debug: notify_crmd: 	Processing transition completion in state S_TRANSITION_ENGINE
Nov 13 13:47:48 [15883] vm1       crmd: (     utils.c:192   )   debug: crm_timer_start: 	Started New Transition Timer (I_PE_CALC:2000ms), src=118
Nov 13 13:47:48 [15883] vm1       crmd: (te_actions.c:699   )   debug: notify_crmd: 	Transition 9 status: restart - Stonith failed
Nov 13 13:47:48 [15879] vm1 stonith-ng: (      main.c:369   )   trace: do_stonith_notify: 	Notifying clients
Nov 13 13:47:48 [15879] vm1 stonith-ng: (      main.c:318   )   trace: stonith_notify_client: 	Sent st_notify_fence notification to client crmd.15883.e6cedc
Nov 13 13:47:48 [15879] vm1 stonith-ng: (      main.c:372   )   trace: do_stonith_notify: 	Notify complete
Nov 13 13:47:48 [15879] vm1 stonith-ng: (    remote.c:77    )   trace: free_remote_query: 	Free'ing query result from vm1
Nov 13 13:47:48 [15879] vm1 stonith-ng: (    remote.c:77    )   trace: free_remote_query: 	Free'ing query result from vm2
Nov 13 13:47:48 [15883] vm1       crmd: (  te_utils.c:167   )  notice: tengine_stonith_notify: 	Peer vm3 was not terminated (reboot) by vm2 for vm1: Generic Pacemaker error (ref=a3379e0c-d206-4ced-9e7e-1c915f08a0ae) by client crmd.15883
Nov 13 13:47:48 [15879] vm1 stonith-ng: (  commands.c:2063  )    info: stonith_command: 	Processed st_notify reply from vm1: OK (0)
Nov 13 13:47:50 [15883] vm1       crmd: (     utils.c:120   )    info: crm_timer_popped: 	New Transition Timer (I_PE_CALC) just popped (2000ms)
Nov 13 13:47:50 [15883] vm1       crmd: (       fsa.c:193   )   debug: s_crmd_fsa: 	Processing I_PE_CALC: [ state=S_TRANSITION_ENGINE cause=C_TIMER_POPPED origin=crm_timer_popped ]
Nov 13 13:47:50 [15883] vm1       crmd: (       fsa.c:502   )    info: do_state_transition: 	State transition S_TRANSITION_ENGINE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_TIMER_POPPED origin=crm_timer_popped ]
Nov 13 13:47:50 [15883] vm1       crmd: (       fsa.c:599   )    info: do_state_transition: 	Progressed to state S_POLICY_ENGINE after C_TIMER_POPPED
Nov 13 13:47:50 [15883] vm1       crmd: (       fsa.c:610   )   debug: do_state_transition: 	All 3 cluster nodes are eligible to run resources.
Nov 13 13:47:50 [15883] vm1       crmd: (   pengine.c:231   )   debug: do_pe_invoke: 	Query 73: Requesting the current CIB: S_POLICY_ENGINE
Nov 13 13:47:50 [15878] vm1        cib: ( callbacks.c:761   )    info: cib_process_request: 	Completed cib_query operation for section 'all': OK (rc=0, origin=local/crmd/73, version=0.8.20)
Nov 13 13:47:50 [15882] vm1    pengine: (   pengine.c:116   )    info: process_pe_message: 	Input has not changed since last time, not saving to disk
Nov 13 13:47:50 [15882] vm1    pengine: (    unpack.c:98    )   debug: unpack_config: 	STONITH timeout: 60000
Nov 13 13:47:50 [15882] vm1    pengine: (    unpack.c:102   )   debug: unpack_config: 	STONITH of failed nodes is enabled
Nov 13 13:47:50 [15882] vm1    pengine: (    unpack.c:109   )   debug: unpack_config: 	Stop all active resources: false
Nov 13 13:47:50 [15882] vm1    pengine: (    unpack.c:113   )   debug: unpack_config: 	Cluster is symmetric - resources can run anywhere by default
Nov 13 13:47:50 [15882] vm1    pengine: (    unpack.c:118   )   debug: unpack_config: 	Default stickiness: 0
Nov 13 13:47:50 [15882] vm1    pengine: (    unpack.c:152   )   debug: unpack_config: 	On loss of CCM Quorum: Freeze resources
Nov 13 13:47:50 [15882] vm1    pengine: (    unpack.c:201   )   debug: unpack_config: 	Node scores: 'red' = -INFINITY, 'yellow' = 0, 'green' = 0
Nov 13 13:47:50 [15882] vm1    pengine: (    unpack.c:509   )   debug: unpack_domains: 	Unpacking domains
Nov 13 13:47:50 [15882] vm1    pengine: (    unpack.c:1220  )    info: determine_online_status_fencing: 	Node vm3 is active
Nov 13 13:47:50 [15882] vm1    pengine: (    unpack.c:1336  )    info: determine_online_status: 	Node vm3 is online
Nov 13 13:47:50 [15882] vm1    pengine: (    unpack.c:1220  )    info: determine_online_status_fencing: 	Node vm1 is active
Nov 13 13:47:50 [15882] vm1    pengine: (    unpack.c:1336  )    info: determine_online_status: 	Node vm1 is online
Nov 13 13:47:50 [15882] vm1    pengine: (    unpack.c:1220  )    info: determine_online_status_fencing: 	Node vm2 is active
Nov 13 13:47:50 [15882] vm1    pengine: (    unpack.c:1336  )    info: determine_online_status: 	Node vm2 is online
Nov 13 13:47:50 [15882] vm1    pengine: (    unpack.c:2456  )   debug: determine_op_status: 	pDummy_monitor_10000 on vm3 returned 'not running' (7) instead of the expected value: 'ok' (0)
Nov 13 13:47:50 [15882] vm1    pengine: (    unpack.c:2334  ) warning: unpack_rsc_op_failure: 	Processing failed op monitor for pDummy on vm3: not running (7)
Nov 13 13:47:50 [15882] vm1    pengine: (    unpack.c:64    ) warning: pe_fence_node: 	Node vm3 will be fenced because of resource failure(s)
Nov 13 13:47:50 [15882] vm1    pengine: (    native.c:446   )    info: native_print: 	F1	(stonith:external/libvirt):	Started vm1 
Nov 13 13:47:50 [15883] vm1       crmd: (   pengine.c:299   )   debug: do_pe_invoke_callback: 	Invoking the PE: query=73, ref=pe_calc-dc-1384318070-55, seq=12, quorate=1
Nov 13 13:47:50 [15882] vm1    pengine: (    native.c:446   )    info: native_print: 	pDummy	(ocf::pacemaker:Dummy):	FAILED vm3 
Nov 13 13:47:50 [15882] vm1    pengine: (  allocate.c:593   )   debug: common_apply_stickiness: 	Resource F1: preferring current location (node=vm1, weight=1000000)
Nov 13 13:47:50 [15882] vm1    pengine: (  allocate.c:593   )   debug: common_apply_stickiness: 	Resource pDummy: preferring current location (node=vm3, weight=1000000)
Nov 13 13:47:50 [15882] vm1    pengine: (     utils.c:1497  )    info: get_failcount_full: 	pDummy has failed 1 times on vm3
Nov 13 13:47:50 [15882] vm1    pengine: (  allocate.c:622   ) warning: common_apply_stickiness: 	Forcing pDummy away from vm3 after 1 failures (max=1)
Nov 13 13:47:50 [15882] vm1    pengine: (     utils.c:386   )   debug: native_assign_node: 	Assigning vm1 to F1
Nov 13 13:47:50 [15882] vm1    pengine: (     utils.c:386   )   debug: native_assign_node: 	Assigning vm2 to pDummy
Nov 13 13:47:50 [15882] vm1    pengine: (    native.c:788   )    info: RecurringOp: 	 Start recurring monitor (10s) for pDummy on vm2
Nov 13 13:47:50 [15882] vm1    pengine: (  allocate.c:1346  ) warning: stage6: 	Scheduling Node vm3 for STONITH
Nov 13 13:47:50 [15882] vm1    pengine: (    native.c:2585  )  notice: native_stop_constraints: 	Stop of failed resource pDummy is implicit after vm3 is fenced
Nov 13 13:47:50 [15882] vm1    pengine: (    native.c:1987  )    info: LogActions: 	Leave   F1	(Started vm1)
Nov 13 13:47:50 [15882] vm1    pengine: (    native.c:1996  )  notice: LogActions: 	Recover pDummy	(Started vm3 -> vm2)
Nov 13 13:47:50 [15883] vm1       crmd: (       fsa.c:193   )   debug: s_crmd_fsa: 	Processing I_PE_SUCCESS: [ state=S_POLICY_ENGINE cause=C_IPC_MESSAGE origin=handle_response ]
Nov 13 13:47:50 [15883] vm1       crmd: (       fsa.c:502   )    info: do_state_transition: 	State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
Nov 13 13:47:50 [15883] vm1       crmd: (    unpack.c:230   )   debug: unpack_graph: 	Unpacked transition 10: 6 actions in 6 synapses
Nov 13 13:47:50 [15883] vm1       crmd: (   tengine.c:208   )    info: do_te_invoke: 	Processing graph 10 (ref=pe_calc-dc-1384318070-55) derived from /var/lib/pacemaker/pengine/pe-warn-0.bz2
Nov 13 13:47:50 [15883] vm1       crmd: (te_actions.c:140   )  notice: te_fence_node: 	Executing reboot fencing operation (13) on vm3 (timeout=60000)
Nov 13 13:47:50 [15883] vm1       crmd: (     graph.c:336   )   debug: run_graph: 	Transition 10 (Complete=0, Pending=1, Fired=1, Skipped=0, Incomplete=5, Source=/var/lib/pacemaker/pengine/pe-warn-0.bz2): In-progress
Nov 13 13:47:50 [15879] vm1 stonith-ng: (      main.c:121   )   trace: st_ipc_dispatch: 	Flags 0/0 for command 118 from crmd.15883
Nov 13 13:47:50 [15879] vm1 stonith-ng: (      main.c:133   )   trace: st_ipc_dispatch: 	Client[inbound]   <stonith_command __name__="stonith_command" t="stonith-ng" st_async_id="e6cedc3f-e233-4a66-9291-ce359bd76aad" st_op="st_fence" st_callid="10" st_callopt="0" st_timeout="60" st_clientid="e6cedc3f-e233-4a66-9291-ce359bd76aad" st_clientname="crmd.15883" st_clientnode="vm1">
Nov 13 13:47:50 [15879] vm1 stonith-ng: (      main.c:133   )   trace: st_ipc_dispatch: 	Client[inbound]     <st_calldata>
Nov 13 13:47:50 [15879] vm1 stonith-ng: (      main.c:133   )   trace: st_ipc_dispatch: 	Client[inbound]       <stonith_api_fence st_target="vm3" st_device_action="reboot" st_timeout="60" st_tolerance="0"/>
Nov 13 13:47:50 [15879] vm1 stonith-ng: (      main.c:133   )   trace: st_ipc_dispatch: 	Client[inbound]     </st_calldata>
Nov 13 13:47:50 [15879] vm1 stonith-ng: (      main.c:133   )   trace: st_ipc_dispatch: 	Client[inbound]   </stonith_command>
Nov 13 13:47:50 [15879] vm1 stonith-ng: (  commands.c:2049  )   debug: stonith_command: 	Processing st_fence 118 from crmd.15883 (               0)
Nov 13 13:47:50 [15879] vm1 stonith-ng: (  commands.c:1875  )  notice: handle_request: 	Client crmd.15883.e6cedc3f wants to fence (reboot) 'vm3' with device '(any)'
Nov 13 13:47:50 [15879] vm1 stonith-ng: (    remote.c:1412  )   trace: stonith_check_fence_tolerance: 	tolerance=0, remote_op_list=0x193c5e0
Nov 13 13:47:50 [15879] vm1 stonith-ng: (    remote.c:593   )   trace: create_remote_stonith_op: 	Created 9ab4c26b-da3e-40cd-ba98-c89017db4953
Nov 13 13:47:50 [15879] vm1 stonith-ng: (    remote.c:623   )   trace: create_remote_stonith_op: 	Generated new stonith op: 9ab4c26b-da3e-40cd-ba98-c89017db4953 - reboot of vm3 for crmd.15883
Nov 13 13:47:50 [15879] vm1 stonith-ng: (    remote.c:474   )   trace: merge_duplicates: 	Must be for different clients: crmd.15883
Nov 13 13:47:50 [15879] vm1 stonith-ng: (    remote.c:437   )   trace: stonith_topology_next: 	Attempting fencing level 1 for vm3 (1 devices) - crmd.15883@vm1.9ab4c26b
Nov 13 13:47:50 [15879] vm1 stonith-ng: (    remote.c:689   )  notice: initiate_remote_stonith_op: 	Initiating remote operation reboot for vm3: 9ab4c26b-da3e-40cd-ba98-c89017db4953 (0)
Nov 13 13:47:50 [15879] vm1 stonith-ng: (  commands.c:2063  )    info: stonith_command: 	Processed st_fence from crmd.15883: Operation now in progress (-115)
Nov 13 13:47:50 [15882] vm1    pengine: (   pengine.c:175   ) warning: process_pe_message: 	Calculated Transition 10: /var/lib/pacemaker/pengine/pe-warn-0.bz2
Nov 13 13:47:50 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]   <stonith_command __name__="stonith_command" t="stonith-ng" st_async_id="9ab4c26b-da3e-40cd-ba98-c89017db4953" st_op="st_query" st_callid="10" st_callopt="0" st_remote_op="9ab4c26b-da3e-40cd-ba98-c89017db4953" st_target="vm3" st_device_action="reboot" st_origin="vm1" st_clientid="e6cedc3f-e233-4a66-9291-ce359bd76aad" st_clientname="crmd.15883" st_timeout="60" src="vm1"/>
Nov 13 13:47:50 [15879] vm1 stonith-ng: (  commands.c:2049  )   debug: stonith_command: 	Processing st_query 0 from vm1 (               0)
Nov 13 13:47:50 [15879] vm1 stonith-ng: (    remote.c:576   )   debug: create_remote_stonith_op: 	9ab4c26b-da3e-40cd-ba98-c89017db4953 already exists
Nov 13 13:47:50 [15879] vm1 stonith-ng: (  commands.c:1296  )   debug: stonith_query: 	Query   <stonith_command __name__="stonith_command" t="stonith-ng" st_async_id="9ab4c26b-da3e-40cd-ba98-c89017db4953" st_op="st_query" st_callid="10" st_callopt="0" st_remote_op="9ab4c26b-da3e-40cd-ba98-c89017db4953" st_target="vm3" st_device_action="reboot" st_origin="vm1" st_clientid="e6cedc3f-e233-4a66-9291-ce359bd76aad" st_clientname="crmd.15883" st_timeout="60" src="vm1"/>
Nov 13 13:47:50 [15879] vm1 stonith-ng: (  commands.c:1650  )   trace: stonith_construct_reply: 	Creating a basic reply
Nov 13 13:47:50 [15879] vm1 stonith-ng: (  commands.c:1199  )   debug: get_capable_devices: 	Searching through 1 devices to see what is capable of action (reboot) for target vm3
Nov 13 13:47:50 [15879] vm1 stonith-ng: (  commands.c:1118  )  notice: can_fence_host_with_device: 	F1 can fence vm3: dynamic-list
Nov 13 13:47:50 [15879] vm1 stonith-ng: (  commands.c:1037  )   debug: search_devices_record_result: 	Finished Search. 1 devices can perform action (reboot) on node vm3
Nov 13 13:47:50 [15879] vm1 stonith-ng: (  commands.c:1254  )   debug: stonith_query_capable_device_cb: 	Found 1 matching devices for 'vm3'
Nov 13 13:47:50 [15879] vm1 stonith-ng: (  commands.c:1260  )   trace: stonith_query_capable_device_cb: 	Attaching query list output
Nov 13 13:47:50 [15879] vm1 stonith-ng: (  commands.c:2063  )    info: stonith_command: 	Processed st_query from vm1: OK (0)
Nov 13 13:47:50 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]   <st-reply st_origin="stonith_construct_reply" t="stonith-ng" st_rc="0" st_op="st_query" st_callid="10" st_clientid="e6cedc3f-e233-4a66-9291-ce359bd76aad" st_clientname="crmd.15883" st_remote_op="9ab4c26b-da3e-40cd-ba98-c89017db4953" st_callopt="0" src="vm2">
Nov 13 13:47:50 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]     <st_calldata>
Nov 13 13:47:50 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]       <stonith_query_capable_device_cb st_target="vm3" st-available-devices="1">
Nov 13 13:47:50 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]         <st_device_id id="F1" namespace="heartbeat" agent="fence_legacy" st_monitor_verified="0"/>
Nov 13 13:47:50 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]       </stonith_query_capable_device_cb>
Nov 13 13:47:50 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]     </st_calldata>
Nov 13 13:47:50 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]   </st-reply>
Nov 13 13:47:50 [15879] vm1 stonith-ng: (  commands.c:2049  )   debug: stonith_command: 	Processing st_query reply 0 from vm2 (               0)
Nov 13 13:47:50 [15879] vm1 stonith-ng: (    remote.c:1154  )    info: process_remote_stonith_query: 	Query result 1 of 3 from vm2 (1 devices)
Nov 13 13:47:50 [15879] vm1 stonith-ng: (    remote.c:1196  )   trace: process_remote_stonith_query: 	All topology devices found
Nov 13 13:47:50 [15879] vm1 stonith-ng: (    remote.c:943   )   trace: call_remote_stonith: 	State for vm3.crmd.158: 9ab4c26b-da3e-40cd-ba98-c89017db4953 0
Nov 13 13:47:50 [15879] vm1 stonith-ng: (    remote.c:904   )   trace: report_timeout_period: 	Reporting timeout for crmd.15883.9ab4c26b
Nov 13 13:47:50 [15879] vm1 stonith-ng: (      main.c:343   )   trace: do_stonith_async_timeout_update: 	timeout update is 72 for client e6cedc3f-e233-4a66-9291-ce359bd76aad and call id 10
Nov 13 13:47:50 [15879] vm1 stonith-ng: (    remote.c:955   )    info: call_remote_stonith: 	Total remote op timeout set to 60 for fencing of node vm3 for crmd.15883.9ab4c26b
Nov 13 13:47:50 [15879] vm1 stonith-ng: (    remote.c:786   )   trace: stonith_choose_peer: 	Checking for someone to fence vm3 with F1
Nov 13 13:47:50 [15879] vm1 stonith-ng: (    remote.c:757   )   trace: find_best_peer: 	Removing F1 from vm2 (1 remaining)
Nov 13 13:47:50 [15879] vm1 stonith-ng: (    remote.c:984   )    info: call_remote_stonith: 	Requesting that vm2 perform op reboot vm3 with F1 for crmd.15883 (72s)
Nov 13 13:47:50 [15879] vm1 stonith-ng: (  commands.c:2063  )    info: stonith_command: 	Processed st_query reply from vm2: OK (0)
Nov 13 13:47:50 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]   <st-reply st_origin="stonith_construct_reply" t="stonith-ng" st_rc="0" st_op="st_query" st_callid="10" st_clientid="e6cedc3f-e233-4a66-9291-ce359bd76aad" st_clientname="crmd.15883" st_remote_op="9ab4c26b-da3e-40cd-ba98-c89017db4953" st_callopt="0" src="vm3">
Nov 13 13:47:50 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]     <st_calldata>
Nov 13 13:47:50 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]       <stonith_query_capable_device_cb st_target="vm3" st-available-devices="0"/>
Nov 13 13:47:50 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]     </st_calldata>
Nov 13 13:47:50 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]   </st-reply>
Nov 13 13:47:50 [15879] vm1 stonith-ng: (  commands.c:2049  )   debug: stonith_command: 	Processing st_query reply 0 from vm3 (               0)
Nov 13 13:47:50 [15879] vm1 stonith-ng: (    remote.c:1137  )   trace: process_remote_stonith_query: 	Query result from vm3 (0 devices)
Nov 13 13:47:50 [15879] vm1 stonith-ng: (  commands.c:2063  )    info: stonith_command: 	Processed st_query reply from vm3: OK (0)
Nov 13 13:47:50 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]   <st-reply st_origin="stonith_construct_reply" t="stonith-ng" st_rc="0" st_op="st_query" st_callid="10" st_clientid="e6cedc3f-e233-4a66-9291-ce359bd76aad" st_clientname="crmd.15883" st_remote_op="9ab4c26b-da3e-40cd-ba98-c89017db4953" st_callopt="0" src="vm1">
Nov 13 13:47:50 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]     <st_calldata>
Nov 13 13:47:50 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]       <stonith_query_capable_device_cb st_target="vm3" st-available-devices="1">
Nov 13 13:47:50 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]         <st_device_id id="F1" namespace="heartbeat" agent="fence_legacy" st_monitor_verified="1"/>
Nov 13 13:47:50 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]       </stonith_query_capable_device_cb>
Nov 13 13:47:50 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]     </st_calldata>
Nov 13 13:47:50 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]   </st-reply>
Nov 13 13:47:50 [15879] vm1 stonith-ng: (  commands.c:2049  )   debug: stonith_command: 	Processing st_query reply 0 from vm1 (               0)
Nov 13 13:47:50 [15879] vm1 stonith-ng: (    remote.c:1154  )    info: process_remote_stonith_query: 	Query result 3 of 3 from vm1 (1 devices)
Nov 13 13:47:50 [15879] vm1 stonith-ng: (    remote.c:1177  )   trace: process_remote_stonith_query: 	Peer vm1 has confirmed a verified device F1
Nov 13 13:47:50 [15879] vm1 stonith-ng: (  commands.c:2063  )    info: stonith_command: 	Processed st_query reply from vm1: OK (0)
Nov 13 13:47:51 [15883] vm1       crmd: (  throttle.c:256   )   debug: throttle_cib_load: 	Current CIB load from 1 ticks in 30s is 0.000333 (@100 tps)
Nov 13 13:47:51 [15883] vm1       crmd: (  throttle.c:298   )   debug: throttle_load_avg: 	Current load is 0.020000 (full: 0.02 0.03 0.00 1/115 16094)
Nov 13 13:47:51 [15883] vm1       crmd: (  throttle.c:378   )   debug: throttle_io_load: 	Current IO load is 0.000000
Nov 13 13:47:54 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]   <st-reply st_origin="vm1" t="stonith-ng" st_op="st_fence" st_device_id="F1" st_remote_op="9ab4c26b-da3e-40cd-ba98-c89017db4953" st_clientid="e6cedc3f-e233-4a66-9291-ce359bd76aad" st_clientname="crmd.15883" st_target="vm3" st_device_action="st_fence" st_callid="10" st_callopt="0" st_rc="-201" st_output="Performing: stonith -t external/libvirt -T reset vm3\nfailed: vm3 5\n" src="vm2"/>
Nov 13 13:47:54 [15879] vm1 stonith-ng: (  commands.c:2049  )   debug: stonith_command: 	Processing st_fence reply 0 from vm2 (               0)
Nov 13 13:47:54 [15879] vm1 stonith-ng: (    remote.c:1303  )  notice: process_remote_stonith_exec: 	Call to F1 for vm3 on behalf of crmd.15883@vm1: Generic Pacemaker error (-201)
Nov 13 13:47:54 [15879] vm1 stonith-ng: (    remote.c:443   )  notice: stonith_topology_next: 	All fencing options to fence vm3 for crmd.15883@vm1.9ab4c26b failed
Nov 13 13:47:54 [15879] vm1 stonith-ng: (    remote.c:160   )   trace: bcast_result_to_peers: 	Broadcasting result to peers
Nov 13 13:47:54 [15879] vm1 stonith-ng: (  commands.c:2063  )    info: stonith_command: 	Processed st_fence reply from vm2: OK (0)
Nov 13 13:47:54 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]   <st-reply t="st_notify" subt="broadcast" st_op="st_notify" count="9" src="vm1">
Nov 13 13:47:54 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]     <st_calldata>
Nov 13 13:47:54 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]       <st_notify_fence state="4" st_rc="-201" st_target="vm3" st_device_action="reboot" st_delegate="vm2" st_remote_op="9ab4c26b-da3e-40cd-ba98-c89017db4953" st_origin="vm1" st_clientid="e6cedc3f-e233-4a66-9291-ce359bd76aad" st_clientname="crmd.15883"/>
Nov 13 13:47:54 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]     </st_calldata>
Nov 13 13:47:54 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]   </st-reply>
Nov 13 13:47:54 [15879] vm1 stonith-ng: (  commands.c:2049  )   debug: stonith_command: 	Processing st_notify reply 0 from vm1 (               0)
Nov 13 13:47:54 [15879] vm1 stonith-ng: (    remote.c:1281  )   debug: process_remote_stonith_exec: 	Marking call to reboot for vm3 on behalf of crmd.15883@9ab4c26b-da3e-40cd-ba98-c89017db4953.vm1: Generic Pacemaker error (-201)
Nov 13 13:47:54 [15879] vm1 stonith-ng: (    remote.c:297   )   error: remote_op_done: 	Operation reboot of vm3 by vm2 for crmd.15883@vm1.9ab4c26b: Generic Pacemaker error
Nov 13 13:47:54 [15879] vm1 stonith-ng: (  commands.c:1650  )   trace: stonith_construct_reply: 	Creating a basic reply
Nov 13 13:47:54 [15879] vm1 stonith-ng: (  commands.c:1666  )   trace: stonith_construct_reply: 	Attaching reply output
Nov 13 13:47:54 [15879] vm1 stonith-ng: (      main.c:241   )   trace: do_local_reply: 	Sending response
Nov 13 13:47:54 [15879] vm1 stonith-ng: (      main.c:244   )   trace: do_local_reply: 	Sending callback to request originator
Nov 13 13:47:54 [15879] vm1 stonith-ng: (      main.c:263   )   trace: do_local_reply: 	Sending an event to crmd.15883 
Nov 13 13:47:54 [15879] vm1 stonith-ng: (      main.c:369   )   trace: do_stonith_notify: 	Notifying clients
Nov 13 13:47:54 [15879] vm1 stonith-ng: (      main.c:318   )   trace: stonith_notify_client: 	Sent st_notify_fence notification to client crmd.15883.e6cedc
Nov 13 13:47:54 [15879] vm1 stonith-ng: (      main.c:372   )   trace: do_stonith_notify: 	Notify complete
Nov 13 13:47:54 [15879] vm1 stonith-ng: (    remote.c:77    )   trace: free_remote_query: 	Free'ing query result from vm1
Nov 13 13:47:54 [15879] vm1 stonith-ng: (    remote.c:77    )   trace: free_remote_query: 	Free'ing query result from vm2
Nov 13 13:47:54 [15879] vm1 stonith-ng: (  commands.c:2063  )    info: stonith_command: 	Processed st_notify reply from vm1: OK (0)
Nov 13 13:47:54 [15883] vm1       crmd: (te_callbacks:411   )  notice: tengine_stonith_callback: 	Stonith operation 10/13:10:0:154fb289-24e8-407e-9a03-69a510480b60: Generic Pacemaker error (-201)
Nov 13 13:47:54 [15883] vm1       crmd: (te_callbacks:462   )  notice: tengine_stonith_callback: 	Stonith operation 10 for vm3 failed (Generic Pacemaker error): aborting transition.
Nov 13 13:47:54 [15883] vm1       crmd: (  te_utils.c:425   )    info: abort_transition_graph: 	tengine_stonith_callback:463 - Triggered transition abort (complete=0) : Stonith failed
Nov 13 13:47:54 [15883] vm1       crmd: (     utils.c:271   )   debug: update_abort_priority: 	Abort priority upgraded from 0 to 1000000
Nov 13 13:47:54 [15883] vm1       crmd: (     utils.c:281   )   debug: update_abort_priority: 	Abort action done superceeded by restart
Nov 13 13:47:54 [15883] vm1       crmd: (  te_utils.c:167   )  notice: tengine_stonith_notify: 	Peer vm3 was not terminated (reboot) by vm2 for vm1: Generic Pacemaker error (ref=9ab4c26b-da3e-40cd-ba98-c89017db4953) by client crmd.15883
Nov 13 13:47:54 [15883] vm1       crmd: (     graph.c:336   )  notice: run_graph: 	Transition 10 (Complete=1, Pending=0, Fired=0, Skipped=5, Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-warn-0.bz2): Stopped
Nov 13 13:47:54 [15883] vm1       crmd: (  te_utils.c:349   )   debug: te_graph_trigger: 	Transition 10 is now complete
Nov 13 13:47:54 [15883] vm1       crmd: (te_actions.c:654   )   debug: notify_crmd: 	Processing transition completion in state S_TRANSITION_ENGINE
Nov 13 13:47:54 [15883] vm1       crmd: (     utils.c:192   )   debug: crm_timer_start: 	Started New Transition Timer (I_PE_CALC:2000ms), src=122
Nov 13 13:47:54 [15883] vm1       crmd: (te_actions.c:699   )   debug: notify_crmd: 	Transition 10 status: restart - Stonith failed
Nov 13 13:47:56 [15883] vm1       crmd: (     utils.c:120   )    info: crm_timer_popped: 	New Transition Timer (I_PE_CALC) just popped (2000ms)
Nov 13 13:47:56 [15883] vm1       crmd: (       fsa.c:193   )   debug: s_crmd_fsa: 	Processing I_PE_CALC: [ state=S_TRANSITION_ENGINE cause=C_TIMER_POPPED origin=crm_timer_popped ]
Nov 13 13:47:56 [15883] vm1       crmd: (       fsa.c:502   )    info: do_state_transition: 	State transition S_TRANSITION_ENGINE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_TIMER_POPPED origin=crm_timer_popped ]
Nov 13 13:47:56 [15883] vm1       crmd: (       fsa.c:599   )    info: do_state_transition: 	Progressed to state S_POLICY_ENGINE after C_TIMER_POPPED
Nov 13 13:47:56 [15883] vm1       crmd: (       fsa.c:610   )   debug: do_state_transition: 	All 3 cluster nodes are eligible to run resources.
Nov 13 13:47:56 [15883] vm1       crmd: (   pengine.c:231   )   debug: do_pe_invoke: 	Query 74: Requesting the current CIB: S_POLICY_ENGINE
Nov 13 13:47:56 [15878] vm1        cib: ( callbacks.c:761   )    info: cib_process_request: 	Completed cib_query operation for section 'all': OK (rc=0, origin=local/crmd/74, version=0.8.20)
Nov 13 13:47:56 [15882] vm1    pengine: (   pengine.c:116   )    info: process_pe_message: 	Input has not changed since last time, not saving to disk
Nov 13 13:47:56 [15882] vm1    pengine: (    unpack.c:98    )   debug: unpack_config: 	STONITH timeout: 60000
Nov 13 13:47:56 [15882] vm1    pengine: (    unpack.c:102   )   debug: unpack_config: 	STONITH of failed nodes is enabled
Nov 13 13:47:56 [15882] vm1    pengine: (    unpack.c:109   )   debug: unpack_config: 	Stop all active resources: false
Nov 13 13:47:56 [15882] vm1    pengine: (    unpack.c:113   )   debug: unpack_config: 	Cluster is symmetric - resources can run anywhere by default
Nov 13 13:47:56 [15882] vm1    pengine: (    unpack.c:118   )   debug: unpack_config: 	Default stickiness: 0
Nov 13 13:47:56 [15882] vm1    pengine: (    unpack.c:152   )   debug: unpack_config: 	On loss of CCM Quorum: Freeze resources
Nov 13 13:47:56 [15882] vm1    pengine: (    unpack.c:201   )   debug: unpack_config: 	Node scores: 'red' = -INFINITY, 'yellow' = 0, 'green' = 0
Nov 13 13:47:56 [15882] vm1    pengine: (    unpack.c:509   )   debug: unpack_domains: 	Unpacking domains
Nov 13 13:47:56 [15882] vm1    pengine: (    unpack.c:1220  )    info: determine_online_status_fencing: 	Node vm3 is active
Nov 13 13:47:56 [15882] vm1    pengine: (    unpack.c:1336  )    info: determine_online_status: 	Node vm3 is online
Nov 13 13:47:56 [15882] vm1    pengine: (    unpack.c:1220  )    info: determine_online_status_fencing: 	Node vm1 is active
Nov 13 13:47:56 [15882] vm1    pengine: (    unpack.c:1336  )    info: determine_online_status: 	Node vm1 is online
Nov 13 13:47:56 [15882] vm1    pengine: (    unpack.c:1220  )    info: determine_online_status_fencing: 	Node vm2 is active
Nov 13 13:47:56 [15882] vm1    pengine: (    unpack.c:1336  )    info: determine_online_status: 	Node vm2 is online
Nov 13 13:47:56 [15882] vm1    pengine: (    unpack.c:2456  )   debug: determine_op_status: 	pDummy_monitor_10000 on vm3 returned 'not running' (7) instead of the expected value: 'ok' (0)
Nov 13 13:47:56 [15882] vm1    pengine: (    unpack.c:2334  ) warning: unpack_rsc_op_failure: 	Processing failed op monitor for pDummy on vm3: not running (7)
Nov 13 13:47:56 [15882] vm1    pengine: (    unpack.c:64    ) warning: pe_fence_node: 	Node vm3 will be fenced because of resource failure(s)
Nov 13 13:47:56 [15882] vm1    pengine: (    native.c:446   )    info: native_print: 	F1	(stonith:external/libvirt):	Started vm1 
Nov 13 13:47:56 [15882] vm1    pengine: (    native.c:446   )    info: native_print: 	pDummy	(ocf::pacemaker:Dummy):	FAILED vm3 
Nov 13 13:47:56 [15882] vm1    pengine: (  allocate.c:593   )   debug: common_apply_stickiness: 	Resource F1: preferring current location (node=vm1, weight=1000000)
Nov 13 13:47:56 [15882] vm1    pengine: (  allocate.c:593   )   debug: common_apply_stickiness: 	Resource pDummy: preferring current location (node=vm3, weight=1000000)
Nov 13 13:47:56 [15882] vm1    pengine: (     utils.c:1497  )    info: get_failcount_full: 	pDummy has failed 1 times on vm3
Nov 13 13:47:56 [15883] vm1       crmd: (   pengine.c:299   )   debug: do_pe_invoke_callback: 	Invoking the PE: query=74, ref=pe_calc-dc-1384318076-56, seq=12, quorate=1
Nov 13 13:47:56 [15882] vm1    pengine: (  allocate.c:622   ) warning: common_apply_stickiness: 	Forcing pDummy away from vm3 after 1 failures (max=1)
Nov 13 13:47:56 [15882] vm1    pengine: (     utils.c:386   )   debug: native_assign_node: 	Assigning vm1 to F1
Nov 13 13:47:56 [15882] vm1    pengine: (     utils.c:386   )   debug: native_assign_node: 	Assigning vm2 to pDummy
Nov 13 13:47:56 [15882] vm1    pengine: (    native.c:788   )    info: RecurringOp: 	 Start recurring monitor (10s) for pDummy on vm2
Nov 13 13:47:56 [15882] vm1    pengine: (  allocate.c:1346  ) warning: stage6: 	Scheduling Node vm3 for STONITH
Nov 13 13:47:56 [15882] vm1    pengine: (    native.c:2585  )  notice: native_stop_constraints: 	Stop of failed resource pDummy is implicit after vm3 is fenced
Nov 13 13:47:56 [15882] vm1    pengine: (    native.c:1987  )    info: LogActions: 	Leave   F1	(Started vm1)
Nov 13 13:47:56 [15882] vm1    pengine: (    native.c:1996  )  notice: LogActions: 	Recover pDummy	(Started vm3 -> vm2)
Nov 13 13:47:56 [15883] vm1       crmd: (       fsa.c:193   )   debug: s_crmd_fsa: 	Processing I_PE_SUCCESS: [ state=S_POLICY_ENGINE cause=C_IPC_MESSAGE origin=handle_response ]
Nov 13 13:47:56 [15883] vm1       crmd: (       fsa.c:502   )    info: do_state_transition: 	State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
Nov 13 13:47:56 [15883] vm1       crmd: (    unpack.c:230   )   debug: unpack_graph: 	Unpacked transition 11: 6 actions in 6 synapses
Nov 13 13:47:56 [15883] vm1       crmd: (   tengine.c:208   )    info: do_te_invoke: 	Processing graph 11 (ref=pe_calc-dc-1384318076-56) derived from /var/lib/pacemaker/pengine/pe-warn-0.bz2
Nov 13 13:47:56 [15883] vm1       crmd: (te_actions.c:140   )  notice: te_fence_node: 	Executing reboot fencing operation (13) on vm3 (timeout=60000)
Nov 13 13:47:56 [15879] vm1 stonith-ng: (      main.c:121   )   trace: st_ipc_dispatch: 	Flags 0/0 for command 121 from crmd.15883
Nov 13 13:47:56 [15879] vm1 stonith-ng: (      main.c:133   )   trace: st_ipc_dispatch: 	Client[inbound]   <stonith_command __name__="stonith_command" t="stonith-ng" st_async_id="e6cedc3f-e233-4a66-9291-ce359bd76aad" st_op="st_fence" st_callid="11" st_callopt="0" st_timeout="60" st_clientid="e6cedc3f-e233-4a66-9291-ce359bd76aad" st_clientname="crmd.15883" st_clientnode="vm1">
Nov 13 13:47:56 [15879] vm1 stonith-ng: (      main.c:133   )   trace: st_ipc_dispatch: 	Client[inbound]     <st_calldata>
Nov 13 13:47:56 [15879] vm1 stonith-ng: (      main.c:133   )   trace: st_ipc_dispatch: 	Client[inbound]       <stonith_api_fence st_target="vm3" st_device_action="reboot" st_timeout="60" st_tolerance="0"/>
Nov 13 13:47:56 [15879] vm1 stonith-ng: (      main.c:133   )   trace: st_ipc_dispatch: 	Client[inbound]     </st_calldata>
Nov 13 13:47:56 [15879] vm1 stonith-ng: (      main.c:133   )   trace: st_ipc_dispatch: 	Client[inbound]   </stonith_command>
Nov 13 13:47:56 [15879] vm1 stonith-ng: (  commands.c:2049  )   debug: stonith_command: 	Processing st_fence 121 from crmd.15883 (               0)
Nov 13 13:47:56 [15879] vm1 stonith-ng: (  commands.c:1875  )  notice: handle_request: 	Client crmd.15883.e6cedc3f wants to fence (reboot) 'vm3' with device '(any)'
Nov 13 13:47:56 [15879] vm1 stonith-ng: (    remote.c:1412  )   trace: stonith_check_fence_tolerance: 	tolerance=0, remote_op_list=0x193c5e0
Nov 13 13:47:56 [15879] vm1 stonith-ng: (    remote.c:593   )   trace: create_remote_stonith_op: 	Created 1ba836f2-328d-45c7-adbb-1db9b0a1ca4c
Nov 13 13:47:56 [15879] vm1 stonith-ng: (    remote.c:623   )   trace: create_remote_stonith_op: 	Generated new stonith op: 1ba836f2-328d-45c7-adbb-1db9b0a1ca4c - reboot of vm3 for crmd.15883
Nov 13 13:47:56 [15879] vm1 stonith-ng: (    remote.c:474   )   trace: merge_duplicates: 	Must be for different clients: crmd.15883
Nov 13 13:47:56 [15879] vm1 stonith-ng: (    remote.c:437   )   trace: stonith_topology_next: 	Attempting fencing level 1 for vm3 (1 devices) - crmd.15883@vm1.1ba836f2
Nov 13 13:47:56 [15879] vm1 stonith-ng: (    remote.c:689   )  notice: initiate_remote_stonith_op: 	Initiating remote operation reboot for vm3: 1ba836f2-328d-45c7-adbb-1db9b0a1ca4c (0)
Nov 13 13:47:56 [15879] vm1 stonith-ng: (  commands.c:2063  )    info: stonith_command: 	Processed st_fence from crmd.15883: Operation now in progress (-115)
Nov 13 13:47:56 [15883] vm1       crmd: (     graph.c:336   )   debug: run_graph: 	Transition 11 (Complete=0, Pending=1, Fired=1, Skipped=0, Incomplete=5, Source=/var/lib/pacemaker/pengine/pe-warn-0.bz2): In-progress
Nov 13 13:47:56 [15882] vm1    pengine: (   pengine.c:175   ) warning: process_pe_message: 	Calculated Transition 11: /var/lib/pacemaker/pengine/pe-warn-0.bz2
Nov 13 13:47:56 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]   <stonith_command __name__="stonith_command" t="stonith-ng" st_async_id="1ba836f2-328d-45c7-adbb-1db9b0a1ca4c" st_op="st_query" st_callid="11" st_callopt="0" st_remote_op="1ba836f2-328d-45c7-adbb-1db9b0a1ca4c" st_target="vm3" st_device_action="reboot" st_origin="vm1" st_clientid="e6cedc3f-e233-4a66-9291-ce359bd76aad" st_clientname="crmd.15883" st_timeout="60" src="vm1"/>
Nov 13 13:47:56 [15879] vm1 stonith-ng: (  commands.c:2049  )   debug: stonith_command: 	Processing st_query 0 from vm1 (               0)
Nov 13 13:47:56 [15879] vm1 stonith-ng: (    remote.c:576   )   debug: create_remote_stonith_op: 	1ba836f2-328d-45c7-adbb-1db9b0a1ca4c already exists
Nov 13 13:47:56 [15879] vm1 stonith-ng: (  commands.c:1296  )   debug: stonith_query: 	Query   <stonith_command __name__="stonith_command" t="stonith-ng" st_async_id="1ba836f2-328d-45c7-adbb-1db9b0a1ca4c" st_op="st_query" st_callid="11" st_callopt="0" st_remote_op="1ba836f2-328d-45c7-adbb-1db9b0a1ca4c" st_target="vm3" st_device_action="reboot" st_origin="vm1" st_clientid="e6cedc3f-e233-4a66-9291-ce359bd76aad" st_clientname="crmd.15883" st_timeout="60" src="vm1"/>
Nov 13 13:47:56 [15879] vm1 stonith-ng: (  commands.c:1650  )   trace: stonith_construct_reply: 	Creating a basic reply
Nov 13 13:47:56 [15879] vm1 stonith-ng: (  commands.c:1199  )   debug: get_capable_devices: 	Searching through 1 devices to see what is capable of action (reboot) for target vm3
Nov 13 13:47:56 [15879] vm1 stonith-ng: (  commands.c:1118  )  notice: can_fence_host_with_device: 	F1 can fence vm3: dynamic-list
Nov 13 13:47:56 [15879] vm1 stonith-ng: (  commands.c:1037  )   debug: search_devices_record_result: 	Finished Search. 1 devices can perform action (reboot) on node vm3
Nov 13 13:47:56 [15879] vm1 stonith-ng: (  commands.c:1254  )   debug: stonith_query_capable_device_cb: 	Found 1 matching devices for 'vm3'
Nov 13 13:47:56 [15879] vm1 stonith-ng: (  commands.c:1260  )   trace: stonith_query_capable_device_cb: 	Attaching query list output
Nov 13 13:47:56 [15879] vm1 stonith-ng: (  commands.c:2063  )    info: stonith_command: 	Processed st_query from vm1: OK (0)
Nov 13 13:47:56 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]   <st-reply st_origin="stonith_construct_reply" t="stonith-ng" st_rc="0" st_op="st_query" st_callid="11" st_clientid="e6cedc3f-e233-4a66-9291-ce359bd76aad" st_clientname="crmd.15883" st_remote_op="1ba836f2-328d-45c7-adbb-1db9b0a1ca4c" st_callopt="0" src="vm2">
Nov 13 13:47:56 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]     <st_calldata>
Nov 13 13:47:56 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]       <stonith_query_capable_device_cb st_target="vm3" st-available-devices="1">
Nov 13 13:47:56 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]         <st_device_id id="F1" namespace="heartbeat" agent="fence_legacy" st_monitor_verified="0"/>
Nov 13 13:47:56 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]       </stonith_query_capable_device_cb>
Nov 13 13:47:56 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]     </st_calldata>
Nov 13 13:47:56 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]   </st-reply>
Nov 13 13:47:56 [15879] vm1 stonith-ng: (  commands.c:2049  )   debug: stonith_command: 	Processing st_query reply 0 from vm2 (               0)
Nov 13 13:47:56 [15879] vm1 stonith-ng: (    remote.c:1154  )    info: process_remote_stonith_query: 	Query result 1 of 3 from vm2 (1 devices)
Nov 13 13:47:56 [15879] vm1 stonith-ng: (    remote.c:1196  )   trace: process_remote_stonith_query: 	All topology devices found
Nov 13 13:47:56 [15879] vm1 stonith-ng: (    remote.c:943   )   trace: call_remote_stonith: 	State for vm3.crmd.158: 1ba836f2-328d-45c7-adbb-1db9b0a1ca4c 0
Nov 13 13:47:56 [15879] vm1 stonith-ng: (    remote.c:904   )   trace: report_timeout_period: 	Reporting timeout for crmd.15883.1ba836f2
Nov 13 13:47:56 [15879] vm1 stonith-ng: (      main.c:343   )   trace: do_stonith_async_timeout_update: 	timeout update is 72 for client e6cedc3f-e233-4a66-9291-ce359bd76aad and call id 11
Nov 13 13:47:56 [15879] vm1 stonith-ng: (    remote.c:955   )    info: call_remote_stonith: 	Total remote op timeout set to 60 for fencing of node vm3 for crmd.15883.1ba836f2
Nov 13 13:47:56 [15879] vm1 stonith-ng: (    remote.c:786   )   trace: stonith_choose_peer: 	Checking for someone to fence vm3 with F1
Nov 13 13:47:56 [15879] vm1 stonith-ng: (    remote.c:757   )   trace: find_best_peer: 	Removing F1 from vm2 (1 remaining)
Nov 13 13:47:56 [15879] vm1 stonith-ng: (    remote.c:984   )    info: call_remote_stonith: 	Requesting that vm2 perform op reboot vm3 with F1 for crmd.15883 (72s)
Nov 13 13:47:56 [15879] vm1 stonith-ng: (  commands.c:2063  )    info: stonith_command: 	Processed st_query reply from vm2: OK (0)
Nov 13 13:47:56 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]   <st-reply st_origin="stonith_construct_reply" t="stonith-ng" st_rc="0" st_op="st_query" st_callid="11" st_clientid="e6cedc3f-e233-4a66-9291-ce359bd76aad" st_clientname="crmd.15883" st_remote_op="1ba836f2-328d-45c7-adbb-1db9b0a1ca4c" st_callopt="0" src="vm3">
Nov 13 13:47:56 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]     <st_calldata>
Nov 13 13:47:56 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]       <stonith_query_capable_device_cb st_target="vm3" st-available-devices="0"/>
Nov 13 13:47:56 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]     </st_calldata>
Nov 13 13:47:56 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]   </st-reply>
Nov 13 13:47:56 [15879] vm1 stonith-ng: (  commands.c:2049  )   debug: stonith_command: 	Processing st_query reply 0 from vm3 (               0)
Nov 13 13:47:56 [15879] vm1 stonith-ng: (    remote.c:1137  )   trace: process_remote_stonith_query: 	Query result from vm3 (0 devices)
Nov 13 13:47:56 [15879] vm1 stonith-ng: (  commands.c:2063  )    info: stonith_command: 	Processed st_query reply from vm3: OK (0)
Nov 13 13:47:56 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]   <st-reply st_origin="stonith_construct_reply" t="stonith-ng" st_rc="0" st_op="st_query" st_callid="11" st_clientid="e6cedc3f-e233-4a66-9291-ce359bd76aad" st_clientname="crmd.15883" st_remote_op="1ba836f2-328d-45c7-adbb-1db9b0a1ca4c" st_callopt="0" src="vm1">
Nov 13 13:47:56 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]     <st_calldata>
Nov 13 13:47:56 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]       <stonith_query_capable_device_cb st_target="vm3" st-available-devices="1">
Nov 13 13:47:56 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]         <st_device_id id="F1" namespace="heartbeat" agent="fence_legacy" st_monitor_verified="1"/>
Nov 13 13:47:56 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]       </stonith_query_capable_device_cb>
Nov 13 13:47:56 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]     </st_calldata>
Nov 13 13:47:56 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]   </st-reply>
Nov 13 13:47:56 [15879] vm1 stonith-ng: (  commands.c:2049  )   debug: stonith_command: 	Processing st_query reply 0 from vm1 (               0)
Nov 13 13:47:56 [15879] vm1 stonith-ng: (    remote.c:1154  )    info: process_remote_stonith_query: 	Query result 3 of 3 from vm1 (1 devices)
Nov 13 13:47:56 [15879] vm1 stonith-ng: (    remote.c:1177  )   trace: process_remote_stonith_query: 	Peer vm1 has confirmed a verified device F1
Nov 13 13:47:56 [15879] vm1 stonith-ng: (  commands.c:2063  )    info: stonith_command: 	Processed st_query reply from vm1: OK (0)
Nov 13 13:47:59 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]   <st-reply st_origin="vm1" t="stonith-ng" st_op="st_fence" st_device_id="F1" st_remote_op="1ba836f2-328d-45c7-adbb-1db9b0a1ca4c" st_clientid="e6cedc3f-e233-4a66-9291-ce359bd76aad" st_clientname="crmd.15883" st_target="vm3" st_device_action="st_fence" st_callid="11" st_callopt="0" st_rc="-201" st_output="Performing: stonith -t external/libvirt -T reset vm3\nfailed: vm3 5\n" src="vm2"/>
Nov 13 13:47:59 [15879] vm1 stonith-ng: (  commands.c:2049  )   debug: stonith_command: 	Processing st_fence reply 0 from vm2 (               0)
Nov 13 13:47:59 [15879] vm1 stonith-ng: (    remote.c:1303  )  notice: process_remote_stonith_exec: 	Call to F1 for vm3 on behalf of crmd.15883@vm1: Generic Pacemaker error (-201)
Nov 13 13:47:59 [15879] vm1 stonith-ng: (    remote.c:443   )  notice: stonith_topology_next: 	All fencing options to fence vm3 for crmd.15883@vm1.1ba836f2 failed
Nov 13 13:47:59 [15879] vm1 stonith-ng: (    remote.c:160   )   trace: bcast_result_to_peers: 	Broadcasting result to peers
Nov 13 13:47:59 [15879] vm1 stonith-ng: (  commands.c:2063  )    info: stonith_command: 	Processed st_fence reply from vm2: OK (0)
Nov 13 13:47:59 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]   <st-reply t="st_notify" subt="broadcast" st_op="st_notify" count="10" src="vm1">
Nov 13 13:47:59 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]     <st_calldata>
Nov 13 13:47:59 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]       <st_notify_fence state="4" st_rc="-201" st_target="vm3" st_device_action="reboot" st_delegate="vm2" st_remote_op="1ba836f2-328d-45c7-adbb-1db9b0a1ca4c" st_origin="vm1" st_clientid="e6cedc3f-e233-4a66-9291-ce359bd76aad" st_clientname="crmd.15883"/>
Nov 13 13:47:59 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]     </st_calldata>
Nov 13 13:47:59 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]   </st-reply>
Nov 13 13:47:59 [15879] vm1 stonith-ng: (  commands.c:2049  )   debug: stonith_command: 	Processing st_notify reply 0 from vm1 (               0)
Nov 13 13:47:59 [15879] vm1 stonith-ng: (    remote.c:1281  )   debug: process_remote_stonith_exec: 	Marking call to reboot for vm3 on behalf of crmd.15883@1ba836f2-328d-45c7-adbb-1db9b0a1ca4c.vm1: Generic Pacemaker error (-201)
Nov 13 13:47:59 [15879] vm1 stonith-ng: (    remote.c:297   )   error: remote_op_done: 	Operation reboot of vm3 by vm2 for crmd.15883@vm1.1ba836f2: Generic Pacemaker error
Nov 13 13:47:59 [15879] vm1 stonith-ng: (  commands.c:1650  )   trace: stonith_construct_reply: 	Creating a basic reply
Nov 13 13:47:59 [15879] vm1 stonith-ng: (  commands.c:1666  )   trace: stonith_construct_reply: 	Attaching reply output
Nov 13 13:47:59 [15879] vm1 stonith-ng: (      main.c:241   )   trace: do_local_reply: 	Sending response
Nov 13 13:47:59 [15879] vm1 stonith-ng: (      main.c:244   )   trace: do_local_reply: 	Sending callback to request originator
Nov 13 13:47:59 [15879] vm1 stonith-ng: (      main.c:263   )   trace: do_local_reply: 	Sending an event to crmd.15883 
Nov 13 13:47:59 [15879] vm1 stonith-ng: (      main.c:369   )   trace: do_stonith_notify: 	Notifying clients
Nov 13 13:47:59 [15879] vm1 stonith-ng: (      main.c:318   )   trace: stonith_notify_client: 	Sent st_notify_fence notification to client crmd.15883.e6cedc
Nov 13 13:47:59 [15879] vm1 stonith-ng: (      main.c:372   )   trace: do_stonith_notify: 	Notify complete
Nov 13 13:47:59 [15879] vm1 stonith-ng: (    remote.c:77    )   trace: free_remote_query: 	Free'ing query result from vm1
Nov 13 13:47:59 [15879] vm1 stonith-ng: (    remote.c:77    )   trace: free_remote_query: 	Free'ing query result from vm2
Nov 13 13:47:59 [15879] vm1 stonith-ng: (  commands.c:2063  )    info: stonith_command: 	Processed st_notify reply from vm1: OK (0)
Nov 13 13:47:59 [15883] vm1       crmd: (te_callbacks:411   )  notice: tengine_stonith_callback: 	Stonith operation 11/13:11:0:154fb289-24e8-407e-9a03-69a510480b60: Generic Pacemaker error (-201)
Nov 13 13:47:59 [15883] vm1       crmd: (te_callbacks:462   )  notice: tengine_stonith_callback: 	Stonith operation 11 for vm3 failed (Generic Pacemaker error): aborting transition.
Nov 13 13:47:59 [15883] vm1       crmd: (  te_utils.c:425   )    info: abort_transition_graph: 	tengine_stonith_callback:463 - Triggered transition abort (complete=0) : Stonith failed
Nov 13 13:47:59 [15883] vm1       crmd: (     utils.c:271   )   debug: update_abort_priority: 	Abort priority upgraded from 0 to 1000000
Nov 13 13:47:59 [15883] vm1       crmd: (     utils.c:281   )   debug: update_abort_priority: 	Abort action done superceeded by restart
Nov 13 13:47:59 [15883] vm1       crmd: (  te_utils.c:167   )  notice: tengine_stonith_notify: 	Peer vm3 was not terminated (reboot) by vm2 for vm1: Generic Pacemaker error (ref=1ba836f2-328d-45c7-adbb-1db9b0a1ca4c) by client crmd.15883
Nov 13 13:47:59 [15883] vm1       crmd: (     graph.c:336   )  notice: run_graph: 	Transition 11 (Complete=1, Pending=0, Fired=0, Skipped=5, Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-warn-0.bz2): Stopped
Nov 13 13:47:59 [15883] vm1       crmd: (  te_utils.c:349   )   debug: te_graph_trigger: 	Transition 11 is now complete
Nov 13 13:47:59 [15883] vm1       crmd: (te_actions.c:654   )   debug: notify_crmd: 	Processing transition completion in state S_TRANSITION_ENGINE
Nov 13 13:47:59 [15883] vm1       crmd: (     utils.c:192   )   debug: crm_timer_start: 	Started New Transition Timer (I_PE_CALC:2000ms), src=126
Nov 13 13:47:59 [15883] vm1       crmd: (te_actions.c:699   )   debug: notify_crmd: 	Transition 11 status: restart - Stonith failed
Nov 13 13:48:01 [15883] vm1       crmd: (     utils.c:120   )    info: crm_timer_popped: 	New Transition Timer (I_PE_CALC) just popped (2000ms)
Nov 13 13:48:01 [15883] vm1       crmd: (       fsa.c:193   )   debug: s_crmd_fsa: 	Processing I_PE_CALC: [ state=S_TRANSITION_ENGINE cause=C_TIMER_POPPED origin=crm_timer_popped ]
Nov 13 13:48:01 [15883] vm1       crmd: (       fsa.c:502   )    info: do_state_transition: 	State transition S_TRANSITION_ENGINE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_TIMER_POPPED origin=crm_timer_popped ]
Nov 13 13:48:01 [15883] vm1       crmd: (       fsa.c:599   )    info: do_state_transition: 	Progressed to state S_POLICY_ENGINE after C_TIMER_POPPED
Nov 13 13:48:01 [15883] vm1       crmd: (       fsa.c:610   )   debug: do_state_transition: 	All 3 cluster nodes are eligible to run resources.
Nov 13 13:48:01 [15883] vm1       crmd: (   pengine.c:231   )   debug: do_pe_invoke: 	Query 75: Requesting the current CIB: S_POLICY_ENGINE
Nov 13 13:48:01 [15878] vm1        cib: ( callbacks.c:761   )    info: cib_process_request: 	Completed cib_query operation for section 'all': OK (rc=0, origin=local/crmd/75, version=0.8.20)
Nov 13 13:48:01 [15882] vm1    pengine: (   pengine.c:116   )    info: process_pe_message: 	Input has not changed since last time, not saving to disk
Nov 13 13:48:01 [15882] vm1    pengine: (    unpack.c:98    )   debug: unpack_config: 	STONITH timeout: 60000
Nov 13 13:48:01 [15882] vm1    pengine: (    unpack.c:102   )   debug: unpack_config: 	STONITH of failed nodes is enabled
Nov 13 13:48:01 [15882] vm1    pengine: (    unpack.c:109   )   debug: unpack_config: 	Stop all active resources: false
Nov 13 13:48:01 [15882] vm1    pengine: (    unpack.c:113   )   debug: unpack_config: 	Cluster is symmetric - resources can run anywhere by default
Nov 13 13:48:01 [15882] vm1    pengine: (    unpack.c:118   )   debug: unpack_config: 	Default stickiness: 0
Nov 13 13:48:01 [15882] vm1    pengine: (    unpack.c:152   )   debug: unpack_config: 	On loss of CCM Quorum: Freeze resources
Nov 13 13:48:01 [15882] vm1    pengine: (    unpack.c:201   )   debug: unpack_config: 	Node scores: 'red' = -INFINITY, 'yellow' = 0, 'green' = 0
Nov 13 13:48:01 [15882] vm1    pengine: (    unpack.c:509   )   debug: unpack_domains: 	Unpacking domains
Nov 13 13:48:01 [15882] vm1    pengine: (    unpack.c:1220  )    info: determine_online_status_fencing: 	Node vm3 is active
Nov 13 13:48:01 [15882] vm1    pengine: (    unpack.c:1336  )    info: determine_online_status: 	Node vm3 is online
Nov 13 13:48:01 [15882] vm1    pengine: (    unpack.c:1220  )    info: determine_online_status_fencing: 	Node vm1 is active
Nov 13 13:48:01 [15882] vm1    pengine: (    unpack.c:1336  )    info: determine_online_status: 	Node vm1 is online
Nov 13 13:48:01 [15882] vm1    pengine: (    unpack.c:1220  )    info: determine_online_status_fencing: 	Node vm2 is active
Nov 13 13:48:01 [15882] vm1    pengine: (    unpack.c:1336  )    info: determine_online_status: 	Node vm2 is online
Nov 13 13:48:01 [15882] vm1    pengine: (    unpack.c:2456  )   debug: determine_op_status: 	pDummy_monitor_10000 on vm3 returned 'not running' (7) instead of the expected value: 'ok' (0)
Nov 13 13:48:01 [15882] vm1    pengine: (    unpack.c:2334  ) warning: unpack_rsc_op_failure: 	Processing failed op monitor for pDummy on vm3: not running (7)
Nov 13 13:48:01 [15882] vm1    pengine: (    unpack.c:64    ) warning: pe_fence_node: 	Node vm3 will be fenced because of resource failure(s)
Nov 13 13:48:01 [15883] vm1       crmd: (   pengine.c:299   )   debug: do_pe_invoke_callback: 	Invoking the PE: query=75, ref=pe_calc-dc-1384318081-57, seq=12, quorate=1
Nov 13 13:48:01 [15882] vm1    pengine: (    native.c:446   )    info: native_print: 	F1	(stonith:external/libvirt):	Started vm1 
Nov 13 13:48:01 [15882] vm1    pengine: (    native.c:446   )    info: native_print: 	pDummy	(ocf::pacemaker:Dummy):	FAILED vm3 
Nov 13 13:48:01 [15882] vm1    pengine: (  allocate.c:593   )   debug: common_apply_stickiness: 	Resource F1: preferring current location (node=vm1, weight=1000000)
Nov 13 13:48:01 [15882] vm1    pengine: (  allocate.c:593   )   debug: common_apply_stickiness: 	Resource pDummy: preferring current location (node=vm3, weight=1000000)
Nov 13 13:48:01 [15882] vm1    pengine: (     utils.c:1497  )    info: get_failcount_full: 	pDummy has failed 1 times on vm3
Nov 13 13:48:01 [15882] vm1    pengine: (  allocate.c:622   ) warning: common_apply_stickiness: 	Forcing pDummy away from vm3 after 1 failures (max=1)
Nov 13 13:48:01 [15882] vm1    pengine: (     utils.c:386   )   debug: native_assign_node: 	Assigning vm1 to F1
Nov 13 13:48:01 [15882] vm1    pengine: (     utils.c:386   )   debug: native_assign_node: 	Assigning vm2 to pDummy
Nov 13 13:48:01 [15882] vm1    pengine: (    native.c:788   )    info: RecurringOp: 	 Start recurring monitor (10s) for pDummy on vm2
Nov 13 13:48:01 [15882] vm1    pengine: (  allocate.c:1346  ) warning: stage6: 	Scheduling Node vm3 for STONITH
Nov 13 13:48:01 [15882] vm1    pengine: (    native.c:2585  )  notice: native_stop_constraints: 	Stop of failed resource pDummy is implicit after vm3 is fenced
Nov 13 13:48:01 [15882] vm1    pengine: (    native.c:1987  )    info: LogActions: 	Leave   F1	(Started vm1)
Nov 13 13:48:01 [15882] vm1    pengine: (    native.c:1996  )  notice: LogActions: 	Recover pDummy	(Started vm3 -> vm2)
Nov 13 13:48:01 [15883] vm1       crmd: (       fsa.c:193   )   debug: s_crmd_fsa: 	Processing I_PE_SUCCESS: [ state=S_POLICY_ENGINE cause=C_IPC_MESSAGE origin=handle_response ]
Nov 13 13:48:01 [15883] vm1       crmd: (       fsa.c:502   )    info: do_state_transition: 	State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
Nov 13 13:48:01 [15883] vm1       crmd: (    unpack.c:230   )   debug: unpack_graph: 	Unpacked transition 12: 6 actions in 6 synapses
Nov 13 13:48:01 [15883] vm1       crmd: (   tengine.c:208   )    info: do_te_invoke: 	Processing graph 12 (ref=pe_calc-dc-1384318081-57) derived from /var/lib/pacemaker/pengine/pe-warn-0.bz2
Nov 13 13:48:01 [15883] vm1       crmd: (te_actions.c:140   )  notice: te_fence_node: 	Executing reboot fencing operation (13) on vm3 (timeout=60000)
Nov 13 13:48:01 [15879] vm1 stonith-ng: (      main.c:121   )   trace: st_ipc_dispatch: 	Flags 0/0 for command 124 from crmd.15883
Nov 13 13:48:01 [15879] vm1 stonith-ng: (      main.c:133   )   trace: st_ipc_dispatch: 	Client[inbound]   <stonith_command __name__="stonith_command" t="stonith-ng" st_async_id="e6cedc3f-e233-4a66-9291-ce359bd76aad" st_op="st_fence" st_callid="12" st_callopt="0" st_timeout="60" st_clientid="e6cedc3f-e233-4a66-9291-ce359bd76aad" st_clientname="crmd.15883" st_clientnode="vm1">
Nov 13 13:48:01 [15879] vm1 stonith-ng: (      main.c:133   )   trace: st_ipc_dispatch: 	Client[inbound]     <st_calldata>
Nov 13 13:48:01 [15879] vm1 stonith-ng: (      main.c:133   )   trace: st_ipc_dispatch: 	Client[inbound]       <stonith_api_fence st_target="vm3" st_device_action="reboot" st_timeout="60" st_tolerance="0"/>
Nov 13 13:48:01 [15879] vm1 stonith-ng: (      main.c:133   )   trace: st_ipc_dispatch: 	Client[inbound]     </st_calldata>
Nov 13 13:48:01 [15879] vm1 stonith-ng: (      main.c:133   )   trace: st_ipc_dispatch: 	Client[inbound]   </stonith_command>
Nov 13 13:48:01 [15879] vm1 stonith-ng: (  commands.c:2049  )   debug: stonith_command: 	Processing st_fence 124 from crmd.15883 (               0)
Nov 13 13:48:01 [15879] vm1 stonith-ng: (  commands.c:1875  )  notice: handle_request: 	Client crmd.15883.e6cedc3f wants to fence (reboot) 'vm3' with device '(any)'
Nov 13 13:48:01 [15879] vm1 stonith-ng: (    remote.c:1412  )   trace: stonith_check_fence_tolerance: 	tolerance=0, remote_op_list=0x193c5e0
Nov 13 13:48:01 [15879] vm1 stonith-ng: (    remote.c:593   )   trace: create_remote_stonith_op: 	Created 00825b71-24e3-4f14-a0b8-6945f050dfd1
Nov 13 13:48:01 [15879] vm1 stonith-ng: (    remote.c:623   )   trace: create_remote_stonith_op: 	Generated new stonith op: 00825b71-24e3-4f14-a0b8-6945f050dfd1 - reboot of vm3 for crmd.15883
Nov 13 13:48:01 [15879] vm1 stonith-ng: (    remote.c:474   )   trace: merge_duplicates: 	Must be for different clients: crmd.15883
Nov 13 13:48:01 [15879] vm1 stonith-ng: (    remote.c:437   )   trace: stonith_topology_next: 	Attempting fencing level 1 for vm3 (1 devices) - crmd.15883@vm1.00825b71
Nov 13 13:48:01 [15879] vm1 stonith-ng: (    remote.c:689   )  notice: initiate_remote_stonith_op: 	Initiating remote operation reboot for vm3: 00825b71-24e3-4f14-a0b8-6945f050dfd1 (0)
Nov 13 13:48:01 [15879] vm1 stonith-ng: (  commands.c:2063  )    info: stonith_command: 	Processed st_fence from crmd.15883: Operation now in progress (-115)
Nov 13 13:48:01 [15883] vm1       crmd: (     graph.c:336   )   debug: run_graph: 	Transition 12 (Complete=0, Pending=1, Fired=1, Skipped=0, Incomplete=5, Source=/var/lib/pacemaker/pengine/pe-warn-0.bz2): In-progress
Nov 13 13:48:01 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]   <stonith_command __name__="stonith_command" t="stonith-ng" st_async_id="00825b71-24e3-4f14-a0b8-6945f050dfd1" st_op="st_query" st_callid="12" st_callopt="0" st_remote_op="00825b71-24e3-4f14-a0b8-6945f050dfd1" st_target="vm3" st_device_action="reboot" st_origin="vm1" st_clientid="e6cedc3f-e233-4a66-9291-ce359bd76aad" st_clientname="crmd.15883" st_timeout="60" src="vm1"/>
Nov 13 13:48:01 [15879] vm1 stonith-ng: (  commands.c:2049  )   debug: stonith_command: 	Processing st_query 0 from vm1 (               0)
Nov 13 13:48:01 [15879] vm1 stonith-ng: (    remote.c:576   )   debug: create_remote_stonith_op: 	00825b71-24e3-4f14-a0b8-6945f050dfd1 already exists
Nov 13 13:48:01 [15879] vm1 stonith-ng: (  commands.c:1296  )   debug: stonith_query: 	Query   <stonith_command __name__="stonith_command" t="stonith-ng" st_async_id="00825b71-24e3-4f14-a0b8-6945f050dfd1" st_op="st_query" st_callid="12" st_callopt="0" st_remote_op="00825b71-24e3-4f14-a0b8-6945f050dfd1" st_target="vm3" st_device_action="reboot" st_origin="vm1" st_clientid="e6cedc3f-e233-4a66-9291-ce359bd76aad" st_clientname="crmd.15883" st_timeout="60" src="vm1"/>
Nov 13 13:48:01 [15879] vm1 stonith-ng: (  commands.c:1650  )   trace: stonith_construct_reply: 	Creating a basic reply
Nov 13 13:48:01 [15879] vm1 stonith-ng: (  commands.c:1199  )   debug: get_capable_devices: 	Searching through 1 devices to see what is capable of action (reboot) for target vm3
Nov 13 13:48:01 [15879] vm1 stonith-ng: (  commands.c:1118  )  notice: can_fence_host_with_device: 	F1 can fence vm3: dynamic-list
Nov 13 13:48:01 [15879] vm1 stonith-ng: (  commands.c:1037  )   debug: search_devices_record_result: 	Finished Search. 1 devices can perform action (reboot) on node vm3
Nov 13 13:48:01 [15879] vm1 stonith-ng: (  commands.c:1254  )   debug: stonith_query_capable_device_cb: 	Found 1 matching devices for 'vm3'
Nov 13 13:48:01 [15879] vm1 stonith-ng: (  commands.c:1260  )   trace: stonith_query_capable_device_cb: 	Attaching query list output
Nov 13 13:48:01 [15879] vm1 stonith-ng: (  commands.c:2063  )    info: stonith_command: 	Processed st_query from vm1: OK (0)
Nov 13 13:48:01 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]   <st-reply st_origin="stonith_construct_reply" t="stonith-ng" st_rc="0" st_op="st_query" st_callid="12" st_clientid="e6cedc3f-e233-4a66-9291-ce359bd76aad" st_clientname="crmd.15883" st_remote_op="00825b71-24e3-4f14-a0b8-6945f050dfd1" st_callopt="0" src="vm2">
Nov 13 13:48:01 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]     <st_calldata>
Nov 13 13:48:01 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]       <stonith_query_capable_device_cb st_target="vm3" st-available-devices="1">
Nov 13 13:48:01 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]         <st_device_id id="F1" namespace="heartbeat" agent="fence_legacy" st_monitor_verified="0"/>
Nov 13 13:48:01 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]       </stonith_query_capable_device_cb>
Nov 13 13:48:01 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]     </st_calldata>
Nov 13 13:48:01 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]   </st-reply>
Nov 13 13:48:01 [15879] vm1 stonith-ng: (  commands.c:2049  )   debug: stonith_command: 	Processing st_query reply 0 from vm2 (               0)
Nov 13 13:48:01 [15879] vm1 stonith-ng: (    remote.c:1154  )    info: process_remote_stonith_query: 	Query result 1 of 3 from vm2 (1 devices)
Nov 13 13:48:01 [15879] vm1 stonith-ng: (    remote.c:1196  )   trace: process_remote_stonith_query: 	All topology devices found
Nov 13 13:48:01 [15879] vm1 stonith-ng: (    remote.c:943   )   trace: call_remote_stonith: 	State for vm3.crmd.158: 00825b71-24e3-4f14-a0b8-6945f050dfd1 0
Nov 13 13:48:01 [15879] vm1 stonith-ng: (    remote.c:904   )   trace: report_timeout_period: 	Reporting timeout for crmd.15883.00825b71
Nov 13 13:48:01 [15879] vm1 stonith-ng: (      main.c:343   )   trace: do_stonith_async_timeout_update: 	timeout update is 72 for client e6cedc3f-e233-4a66-9291-ce359bd76aad and call id 12
Nov 13 13:48:01 [15879] vm1 stonith-ng: (    remote.c:955   )    info: call_remote_stonith: 	Total remote op timeout set to 60 for fencing of node vm3 for crmd.15883.00825b71
Nov 13 13:48:01 [15879] vm1 stonith-ng: (    remote.c:786   )   trace: stonith_choose_peer: 	Checking for someone to fence vm3 with F1
Nov 13 13:48:01 [15879] vm1 stonith-ng: (    remote.c:757   )   trace: find_best_peer: 	Removing F1 from vm2 (1 remaining)
Nov 13 13:48:01 [15879] vm1 stonith-ng: (    remote.c:984   )    info: call_remote_stonith: 	Requesting that vm2 perform op reboot vm3 with F1 for crmd.15883 (72s)
Nov 13 13:48:01 [15879] vm1 stonith-ng: (  commands.c:2063  )    info: stonith_command: 	Processed st_query reply from vm2: OK (0)
Nov 13 13:48:01 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]   <st-reply st_origin="stonith_construct_reply" t="stonith-ng" st_rc="0" st_op="st_query" st_callid="12" st_clientid="e6cedc3f-e233-4a66-9291-ce359bd76aad" st_clientname="crmd.15883" st_remote_op="00825b71-24e3-4f14-a0b8-6945f050dfd1" st_callopt="0" src="vm3">
Nov 13 13:48:01 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]     <st_calldata>
Nov 13 13:48:01 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]       <stonith_query_capable_device_cb st_target="vm3" st-available-devices="0"/>
Nov 13 13:48:01 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]     </st_calldata>
Nov 13 13:48:01 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]   </st-reply>
Nov 13 13:48:01 [15879] vm1 stonith-ng: (  commands.c:2049  )   debug: stonith_command: 	Processing st_query reply 0 from vm3 (               0)
Nov 13 13:48:01 [15879] vm1 stonith-ng: (    remote.c:1137  )   trace: process_remote_stonith_query: 	Query result from vm3 (0 devices)
Nov 13 13:48:01 [15879] vm1 stonith-ng: (  commands.c:2063  )    info: stonith_command: 	Processed st_query reply from vm3: OK (0)
Nov 13 13:48:01 [15882] vm1    pengine: (   pengine.c:175   ) warning: process_pe_message: 	Calculated Transition 12: /var/lib/pacemaker/pengine/pe-warn-0.bz2
Nov 13 13:48:01 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]   <st-reply st_origin="stonith_construct_reply" t="stonith-ng" st_rc="0" st_op="st_query" st_callid="12" st_clientid="e6cedc3f-e233-4a66-9291-ce359bd76aad" st_clientname="crmd.15883" st_remote_op="00825b71-24e3-4f14-a0b8-6945f050dfd1" st_callopt="0" src="vm1">
Nov 13 13:48:01 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]     <st_calldata>
Nov 13 13:48:01 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]       <stonith_query_capable_device_cb st_target="vm3" st-available-devices="1">
Nov 13 13:48:01 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]         <st_device_id id="F1" namespace="heartbeat" agent="fence_legacy" st_monitor_verified="1"/>
Nov 13 13:48:01 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]       </stonith_query_capable_device_cb>
Nov 13 13:48:01 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]     </st_calldata>
Nov 13 13:48:01 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]   </st-reply>
Nov 13 13:48:01 [15879] vm1 stonith-ng: (  commands.c:2049  )   debug: stonith_command: 	Processing st_query reply 0 from vm1 (               0)
Nov 13 13:48:01 [15879] vm1 stonith-ng: (    remote.c:1154  )    info: process_remote_stonith_query: 	Query result 3 of 3 from vm1 (1 devices)
Nov 13 13:48:01 [15879] vm1 stonith-ng: (    remote.c:1177  )   trace: process_remote_stonith_query: 	Peer vm1 has confirmed a verified device F1
Nov 13 13:48:01 [15879] vm1 stonith-ng: (  commands.c:2063  )    info: stonith_command: 	Processed st_query reply from vm1: OK (0)
Nov 13 13:48:04 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]   <st-reply st_origin="vm1" t="stonith-ng" st_op="st_fence" st_device_id="F1" st_remote_op="00825b71-24e3-4f14-a0b8-6945f050dfd1" st_clientid="e6cedc3f-e233-4a66-9291-ce359bd76aad" st_clientname="crmd.15883" st_target="vm3" st_device_action="st_fence" st_callid="12" st_callopt="0" st_rc="-201" st_output="Performing: stonith -t external/libvirt -T reset vm3\nfailed: vm3 5\n" src="vm2"/>
Nov 13 13:48:04 [15879] vm1 stonith-ng: (  commands.c:2049  )   debug: stonith_command: 	Processing st_fence reply 0 from vm2 (               0)
Nov 13 13:48:04 [15879] vm1 stonith-ng: (    remote.c:1303  )  notice: process_remote_stonith_exec: 	Call to F1 for vm3 on behalf of crmd.15883@vm1: Generic Pacemaker error (-201)
Nov 13 13:48:04 [15879] vm1 stonith-ng: (    remote.c:443   )  notice: stonith_topology_next: 	All fencing options to fence vm3 for crmd.15883@vm1.00825b71 failed
Nov 13 13:48:04 [15879] vm1 stonith-ng: (    remote.c:160   )   trace: bcast_result_to_peers: 	Broadcasting result to peers
Nov 13 13:48:04 [15879] vm1 stonith-ng: (  commands.c:2063  )    info: stonith_command: 	Processed st_fence reply from vm2: OK (0)
Nov 13 13:48:04 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]   <st-reply t="st_notify" subt="broadcast" st_op="st_notify" count="11" src="vm1">
Nov 13 13:48:04 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]     <st_calldata>
Nov 13 13:48:04 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]       <st_notify_fence state="4" st_rc="-201" st_target="vm3" st_device_action="reboot" st_delegate="vm2" st_remote_op="00825b71-24e3-4f14-a0b8-6945f050dfd1" st_origin="vm1" st_clientid="e6cedc3f-e233-4a66-9291-ce359bd76aad" st_clientname="crmd.15883"/>
Nov 13 13:48:04 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]     </st_calldata>
Nov 13 13:48:04 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]   </st-reply>
Nov 13 13:48:04 [15879] vm1 stonith-ng: (  commands.c:2049  )   debug: stonith_command: 	Processing st_notify reply 0 from vm1 (               0)
Nov 13 13:48:04 [15879] vm1 stonith-ng: (    remote.c:1281  )   debug: process_remote_stonith_exec: 	Marking call to reboot for vm3 on behalf of crmd.15883@00825b71-24e3-4f14-a0b8-6945f050dfd1.vm1: Generic Pacemaker error (-201)
Nov 13 13:48:04 [15879] vm1 stonith-ng: (    remote.c:297   )   error: remote_op_done: 	Operation reboot of vm3 by vm2 for crmd.15883@vm1.00825b71: Generic Pacemaker error
Nov 13 13:48:04 [15879] vm1 stonith-ng: (  commands.c:1650  )   trace: stonith_construct_reply: 	Creating a basic reply
Nov 13 13:48:04 [15879] vm1 stonith-ng: (  commands.c:1666  )   trace: stonith_construct_reply: 	Attaching reply output
Nov 13 13:48:04 [15879] vm1 stonith-ng: (      main.c:241   )   trace: do_local_reply: 	Sending response
Nov 13 13:48:04 [15879] vm1 stonith-ng: (      main.c:244   )   trace: do_local_reply: 	Sending callback to request originator
Nov 13 13:48:04 [15879] vm1 stonith-ng: (      main.c:263   )   trace: do_local_reply: 	Sending an event to crmd.15883 
Nov 13 13:48:04 [15883] vm1       crmd: (te_callbacks:411   )  notice: tengine_stonith_callback: 	Stonith operation 12/13:12:0:154fb289-24e8-407e-9a03-69a510480b60: Generic Pacemaker error (-201)
Nov 13 13:48:04 [15883] vm1       crmd: (te_callbacks:462   )  notice: tengine_stonith_callback: 	Stonith operation 12 for vm3 failed (Generic Pacemaker error): aborting transition.
Nov 13 13:48:04 [15883] vm1       crmd: (  te_utils.c:425   )    info: abort_transition_graph: 	tengine_stonith_callback:463 - Triggered transition abort (complete=0) : Stonith failed
Nov 13 13:48:04 [15883] vm1       crmd: (     utils.c:271   )   debug: update_abort_priority: 	Abort priority upgraded from 0 to 1000000
Nov 13 13:48:04 [15883] vm1       crmd: (     utils.c:281   )   debug: update_abort_priority: 	Abort action done superceeded by restart
Nov 13 13:48:04 [15883] vm1       crmd: (     graph.c:336   )  notice: run_graph: 	Transition 12 (Complete=1, Pending=0, Fired=0, Skipped=5, Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-warn-0.bz2): Stopped
Nov 13 13:48:04 [15883] vm1       crmd: (  te_utils.c:349   )   debug: te_graph_trigger: 	Transition 12 is now complete
Nov 13 13:48:04 [15883] vm1       crmd: (te_actions.c:654   )   debug: notify_crmd: 	Processing transition completion in state S_TRANSITION_ENGINE
Nov 13 13:48:04 [15883] vm1       crmd: (te_callbacks:346   )  notice: too_many_st_failures: 	Too many failures to fence vm3 (11), giving up
Nov 13 13:48:04 [15883] vm1       crmd: (te_actions.c:699   )   debug: notify_crmd: 	Transition 12 status: restart - Stonith failed
Nov 13 13:48:04 [15883] vm1       crmd: (       fsa.c:193   )   debug: s_crmd_fsa: 	Processing I_TE_SUCCESS: [ state=S_TRANSITION_ENGINE cause=C_FSA_INTERNAL origin=notify_crmd ]
Nov 13 13:48:04 [15883] vm1       crmd: (      misc.c:47    )    info: do_log: 	FSA: Input I_TE_SUCCESS from notify_crmd() received in state S_TRANSITION_ENGINE
Nov 13 13:48:04 [15883] vm1       crmd: (       fsa.c:502   )  notice: do_state_transition: 	State transition S_TRANSITION_ENGINE -> S_IDLE [ input=I_TE_SUCCESS cause=C_FSA_INTERNAL origin=notify_crmd ]
Nov 13 13:48:04 [15883] vm1       crmd: (       fsa.c:645   )   debug: do_state_transition: 	Starting PEngine Recheck Timer
Nov 13 13:48:04 [15883] vm1       crmd: (     utils.c:192   )   debug: crm_timer_start: 	Started PEngine Recheck Timer (I_PE_CALC:900000ms), src=130
Nov 13 13:48:04 [15879] vm1 stonith-ng: (      main.c:369   )   trace: do_stonith_notify: 	Notifying clients
Nov 13 13:48:04 [15879] vm1 stonith-ng: (      main.c:318   )   trace: stonith_notify_client: 	Sent st_notify_fence notification to client crmd.15883.e6cedc
Nov 13 13:48:04 [15879] vm1 stonith-ng: (      main.c:372   )   trace: do_stonith_notify: 	Notify complete
Nov 13 13:48:04 [15879] vm1 stonith-ng: (    remote.c:77    )   trace: free_remote_query: 	Free'ing query result from vm1
Nov 13 13:48:04 [15879] vm1 stonith-ng: (    remote.c:77    )   trace: free_remote_query: 	Free'ing query result from vm2
Nov 13 13:48:04 [15879] vm1 stonith-ng: (  commands.c:2063  )    info: stonith_command: 	Processed st_notify reply from vm1: OK (0)
Nov 13 13:48:04 [15883] vm1       crmd: (  te_utils.c:167   )  notice: tengine_stonith_notify: 	Peer vm3 was not terminated (reboot) by vm2 for vm1: Generic Pacemaker error (ref=00825b71-24e3-4f14-a0b8-6945f050dfd1) by client crmd.15883
Nov 13 13:48:21 [15883] vm1       crmd: (  throttle.c:256   )   debug: throttle_cib_load: 	Current CIB load from 1 ticks in 30s is 0.000333 (@100 tps)
Nov 13 13:48:21 [15883] vm1       crmd: (  throttle.c:298   )   debug: throttle_load_avg: 	Current load is 0.010000 (full: 0.01 0.02 0.00 1/115 16145)
Nov 13 13:48:21 [15883] vm1       crmd: (  throttle.c:378   )   debug: throttle_io_load: 	Current IO load is 0.000000
Nov 13 13:48:51 [15883] vm1       crmd: (  throttle.c:256   )   debug: throttle_cib_load: 	Current CIB load from 0 ticks in 30s is 0.000000 (@100 tps)
Nov 13 13:48:51 [15883] vm1       crmd: (  throttle.c:298   )   debug: throttle_load_avg: 	Current load is 0.160000 (full: 0.16 0.05 0.01 1/115 16176)
Nov 13 13:48:51 [15883] vm1       crmd: (  throttle.c:378   )   debug: throttle_io_load: 	Current IO load is 0.000000
Nov 13 13:49:21 [15883] vm1       crmd: (  throttle.c:256   )   debug: throttle_cib_load: 	Current CIB load from 0 ticks in 30s is 0.000000 (@100 tps)
Nov 13 13:49:21 [15883] vm1       crmd: (  throttle.c:298   )   debug: throttle_load_avg: 	Current load is 0.100000 (full: 0.10 0.05 0.01 1/115 16199)
Nov 13 13:49:21 [15883] vm1       crmd: (  throttle.c:378   )   debug: throttle_io_load: 	Current IO load is 0.000000
Nov 13 13:49:51 [15883] vm1       crmd: (  throttle.c:256   )   debug: throttle_cib_load: 	Current CIB load from 0 ticks in 30s is 0.000000 (@100 tps)
Nov 13 13:49:51 [15883] vm1       crmd: (  throttle.c:298   )   debug: throttle_load_avg: 	Current load is 0.060000 (full: 0.06 0.04 0.00 1/115 16206)
Nov 13 13:49:51 [15883] vm1       crmd: (  throttle.c:378   )   debug: throttle_io_load: 	Current IO load is 0.000000
Nov 13 13:50:21 [15883] vm1       crmd: (  throttle.c:256   )   debug: throttle_cib_load: 	Current CIB load from 0 ticks in 30s is 0.000000 (@100 tps)
Nov 13 13:50:21 [15883] vm1       crmd: (  throttle.c:298   )   debug: throttle_load_avg: 	Current load is 0.030000 (full: 0.03 0.04 0.00 1/115 16208)
Nov 13 13:50:21 [15883] vm1       crmd: (  throttle.c:378   )   debug: throttle_io_load: 	Current IO load is 0.000000
Nov 13 13:50:51 [15883] vm1       crmd: (  throttle.c:256   )   debug: throttle_cib_load: 	Current CIB load from 0 ticks in 30s is 0.000000 (@100 tps)
Nov 13 13:50:51 [15883] vm1       crmd: (  throttle.c:298   )   debug: throttle_load_avg: 	Current load is 0.020000 (full: 0.02 0.03 0.00 1/115 16215)
Nov 13 13:50:51 [15883] vm1       crmd: (  throttle.c:378   )   debug: throttle_io_load: 	Current IO load is 0.000000
Nov 13 13:51:21 [15883] vm1       crmd: (  throttle.c:256   )   debug: throttle_cib_load: 	Current CIB load from 0 ticks in 30s is 0.000000 (@100 tps)
Nov 13 13:51:21 [15883] vm1       crmd: (  throttle.c:298   )   debug: throttle_load_avg: 	Current load is 0.010000 (full: 0.01 0.03 0.00 1/115 16216)
Nov 13 13:51:21 [15883] vm1       crmd: (  throttle.c:378   )   debug: throttle_io_load: 	Current IO load is 0.000000
Nov 13 13:51:51 [15883] vm1       crmd: (  throttle.c:256   )   debug: throttle_cib_load: 	Current CIB load from 1 ticks in 30s is 0.000333 (@100 tps)
Nov 13 13:51:51 [15883] vm1       crmd: (  throttle.c:298   )   debug: throttle_load_avg: 	Current load is 0.010000 (full: 0.01 0.02 0.00 1/115 16223)
Nov 13 13:51:51 [15883] vm1       crmd: (  throttle.c:378   )   debug: throttle_io_load: 	Current IO load is 0.000000
Nov 13 13:52:21 [15883] vm1       crmd: (  throttle.c:256   )   debug: throttle_cib_load: 	Current CIB load from 0 ticks in 30s is 0.000000 (@100 tps)
Nov 13 13:52:21 [15883] vm1       crmd: (  throttle.c:298   )   debug: throttle_load_avg: 	Current load is 0.000000 (full: 0.00 0.02 0.00 1/116 16225)
Nov 13 13:52:21 [15883] vm1       crmd: (  throttle.c:378   )   debug: throttle_io_load: 	Current IO load is 0.000000
Nov 13 13:52:51 [15883] vm1       crmd: (  throttle.c:256   )   debug: throttle_cib_load: 	Current CIB load from 0 ticks in 30s is 0.000000 (@100 tps)
Nov 13 13:52:51 [15883] vm1       crmd: (  throttle.c:298   )   debug: throttle_load_avg: 	Current load is 0.000000 (full: 0.00 0.02 0.00 2/116 16231)
Nov 13 13:52:51 [15883] vm1       crmd: (  throttle.c:378   )   debug: throttle_io_load: 	Current IO load is 0.000000
Nov 13 13:53:21 [15883] vm1       crmd: (  throttle.c:256   )   debug: throttle_cib_load: 	Current CIB load from 0 ticks in 30s is 0.000000 (@100 tps)
Nov 13 13:53:21 [15883] vm1       crmd: (  throttle.c:298   )   debug: throttle_load_avg: 	Current load is 0.000000 (full: 0.00 0.02 0.00 1/116 16232)
Nov 13 13:53:21 [15883] vm1       crmd: (  throttle.c:378   )   debug: throttle_io_load: 	Current IO load is 0.000000
Nov 13 13:53:51 [15883] vm1       crmd: (  throttle.c:256   )   debug: throttle_cib_load: 	Current CIB load from 0 ticks in 30s is 0.000000 (@100 tps)
Nov 13 13:53:51 [15883] vm1       crmd: (  throttle.c:298   )   debug: throttle_load_avg: 	Current load is 0.000000 (full: 0.00 0.01 0.00 1/116 16239)
Nov 13 13:53:51 [15883] vm1       crmd: (  throttle.c:378   )   debug: throttle_io_load: 	Current IO load is 0.000000
Nov 13 13:54:21 [15883] vm1       crmd: (  throttle.c:256   )   debug: throttle_cib_load: 	Current CIB load from 1 ticks in 30s is 0.000333 (@100 tps)
Nov 13 13:54:21 [15883] vm1       crmd: (  throttle.c:298   )   debug: throttle_load_avg: 	Current load is 0.000000 (full: 0.00 0.01 0.00 1/116 16240)
Nov 13 13:54:21 [15883] vm1       crmd: (  throttle.c:378   )   debug: throttle_io_load: 	Current IO load is 0.000000
Nov 13 13:54:51 [15883] vm1       crmd: (  throttle.c:256   )   debug: throttle_cib_load: 	Current CIB load from 0 ticks in 30s is 0.000000 (@100 tps)
Nov 13 13:54:51 [15883] vm1       crmd: (  throttle.c:298   )   debug: throttle_load_avg: 	Current load is 0.000000 (full: 0.00 0.01 0.00 1/116 16247)
Nov 13 13:54:51 [15883] vm1       crmd: (  throttle.c:378   )   debug: throttle_io_load: 	Current IO load is 0.000000
Nov 13 13:55:21 [15883] vm1       crmd: (  throttle.c:256   )   debug: throttle_cib_load: 	Current CIB load from 0 ticks in 30s is 0.000000 (@100 tps)
Nov 13 13:55:21 [15883] vm1       crmd: (  throttle.c:298   )   debug: throttle_load_avg: 	Current load is 0.000000 (full: 0.00 0.00 0.00 1/116 16247)
Nov 13 13:55:21 [15883] vm1       crmd: (  throttle.c:378   )   debug: throttle_io_load: 	Current IO load is 0.000000
Nov 13 13:55:51 [15883] vm1       crmd: (  throttle.c:256   )   debug: throttle_cib_load: 	Current CIB load from 0 ticks in 30s is 0.000000 (@100 tps)
Nov 13 13:55:51 [15883] vm1       crmd: (  throttle.c:298   )   debug: throttle_load_avg: 	Current load is 0.000000 (full: 0.00 0.00 0.00 1/116 16254)
Nov 13 13:55:51 [15883] vm1       crmd: (  throttle.c:378   )   debug: throttle_io_load: 	Current IO load is 0.000000
Nov 13 13:56:21 [15883] vm1       crmd: (  throttle.c:256   )   debug: throttle_cib_load: 	Current CIB load from 0 ticks in 30s is 0.000000 (@100 tps)
Nov 13 13:56:21 [15883] vm1       crmd: (  throttle.c:298   )   debug: throttle_load_avg: 	Current load is 0.000000 (full: 0.00 0.00 0.00 1/116 16255)
Nov 13 13:56:21 [15883] vm1       crmd: (  throttle.c:378   )   debug: throttle_io_load: 	Current IO load is 0.000000
Nov 13 13:56:51 [15883] vm1       crmd: (  throttle.c:256   )   debug: throttle_cib_load: 	Current CIB load from 0 ticks in 30s is 0.000000 (@100 tps)
Nov 13 13:56:51 [15883] vm1       crmd: (  throttle.c:298   )   debug: throttle_load_avg: 	Current load is 0.000000 (full: 0.00 0.00 0.00 1/116 16262)
Nov 13 13:56:51 [15883] vm1       crmd: (  throttle.c:378   )   debug: throttle_io_load: 	Current IO load is 0.000000
Nov 13 13:57:21 [15883] vm1       crmd: (  throttle.c:256   )   debug: throttle_cib_load: 	Current CIB load from 0 ticks in 30s is 0.000000 (@100 tps)
Nov 13 13:57:21 [15883] vm1       crmd: (  throttle.c:298   )   debug: throttle_load_avg: 	Current load is 0.000000 (full: 0.00 0.00 0.00 1/116 16263)
Nov 13 13:57:21 [15883] vm1       crmd: (  throttle.c:378   )   debug: throttle_io_load: 	Current IO load is 0.000000
Nov 13 13:57:51 [15883] vm1       crmd: (  throttle.c:256   )   debug: throttle_cib_load: 	Current CIB load from 1 ticks in 30s is 0.000333 (@100 tps)
Nov 13 13:57:51 [15883] vm1       crmd: (  throttle.c:298   )   debug: throttle_load_avg: 	Current load is 0.000000 (full: 0.00 0.00 0.00 1/116 16269)
Nov 13 13:57:51 [15883] vm1       crmd: (  throttle.c:378   )   debug: throttle_io_load: 	Current IO load is 0.000000
Nov 13 13:58:21 [15883] vm1       crmd: (  throttle.c:256   )   debug: throttle_cib_load: 	Current CIB load from 0 ticks in 30s is 0.000000 (@100 tps)
Nov 13 13:58:21 [15883] vm1       crmd: (  throttle.c:298   )   debug: throttle_load_avg: 	Current load is 0.000000 (full: 0.00 0.00 0.00 1/116 16270)
Nov 13 13:58:21 [15883] vm1       crmd: (  throttle.c:378   )   debug: throttle_io_load: 	Current IO load is 0.000000
Nov 13 13:58:51 [15883] vm1       crmd: (  throttle.c:256   )   debug: throttle_cib_load: 	Current CIB load from 0 ticks in 30s is 0.000000 (@100 tps)
Nov 13 13:58:51 [15883] vm1       crmd: (  throttle.c:298   )   debug: throttle_load_avg: 	Current load is 0.000000 (full: 0.00 0.00 0.00 1/116 16277)
Nov 13 13:58:51 [15883] vm1       crmd: (  throttle.c:378   )   debug: throttle_io_load: 	Current IO load is 0.000000
Nov 13 13:59:21 [15883] vm1       crmd: (  throttle.c:256   )   debug: throttle_cib_load: 	Current CIB load from 0 ticks in 30s is 0.000000 (@100 tps)
Nov 13 13:59:21 [15883] vm1       crmd: (  throttle.c:298   )   debug: throttle_load_avg: 	Current load is 0.000000 (full: 0.00 0.00 0.00 1/116 16278)
Nov 13 13:59:21 [15883] vm1       crmd: (  throttle.c:378   )   debug: throttle_io_load: 	Current IO load is 0.000000
Nov 13 13:59:51 [15883] vm1       crmd: (  throttle.c:256   )   debug: throttle_cib_load: 	Current CIB load from 0 ticks in 30s is 0.000000 (@100 tps)
Nov 13 13:59:51 [15883] vm1       crmd: (  throttle.c:298   )   debug: throttle_load_avg: 	Current load is 0.000000 (full: 0.00 0.00 0.00 1/116 16285)
Nov 13 13:59:51 [15883] vm1       crmd: (  throttle.c:378   )   debug: throttle_io_load: 	Current IO load is 0.000000
Nov 13 14:00:21 [15883] vm1       crmd: (  throttle.c:256   )   debug: throttle_cib_load: 	Current CIB load from 0 ticks in 30s is 0.000000 (@100 tps)
Nov 13 14:00:21 [15883] vm1       crmd: (  throttle.c:298   )   debug: throttle_load_avg: 	Current load is 0.000000 (full: 0.00 0.00 0.00 1/116 16287)
Nov 13 14:00:21 [15883] vm1       crmd: (  throttle.c:378   )   debug: throttle_io_load: 	Current IO load is 0.000000
Nov 13 14:00:51 [15883] vm1       crmd: (  throttle.c:256   )   debug: throttle_cib_load: 	Current CIB load from 0 ticks in 30s is 0.000000 (@100 tps)
Nov 13 14:00:51 [15883] vm1       crmd: (  throttle.c:298   )   debug: throttle_load_avg: 	Current load is 0.000000 (full: 0.00 0.00 0.00 1/116 16294)
Nov 13 14:00:51 [15883] vm1       crmd: (  throttle.c:378   )   debug: throttle_io_load: 	Current IO load is 0.000000
Nov 13 14:01:21 [15883] vm1       crmd: (  throttle.c:256   )   debug: throttle_cib_load: 	Current CIB load from 0 ticks in 30s is 0.000000 (@100 tps)
Nov 13 14:01:21 [15883] vm1       crmd: (  throttle.c:298   )   debug: throttle_load_avg: 	Current load is 0.000000 (full: 0.00 0.00 0.00 1/116 16307)
Nov 13 14:01:21 [15883] vm1       crmd: (  throttle.c:378   )   debug: throttle_io_load: 	Current IO load is 0.000000
Nov 13 14:01:51 [15883] vm1       crmd: (  throttle.c:256   )   debug: throttle_cib_load: 	Current CIB load from 0 ticks in 30s is 0.000000 (@100 tps)
Nov 13 14:01:51 [15883] vm1       crmd: (  throttle.c:298   )   debug: throttle_load_avg: 	Current load is 0.000000 (full: 0.00 0.00 0.00 1/116 16314)
Nov 13 14:01:51 [15883] vm1       crmd: (  throttle.c:378   )   debug: throttle_io_load: 	Current IO load is 0.000000
Nov 13 14:02:21 [15883] vm1       crmd: (  throttle.c:256   )   debug: throttle_cib_load: 	Current CIB load from 0 ticks in 30s is 0.000000 (@100 tps)
Nov 13 14:02:21 [15883] vm1       crmd: (  throttle.c:298   )   debug: throttle_load_avg: 	Current load is 0.000000 (full: 0.00 0.00 0.00 1/116 16315)
Nov 13 14:02:21 [15883] vm1       crmd: (  throttle.c:378   )   debug: throttle_io_load: 	Current IO load is 0.000000
Nov 13 14:02:51 [15883] vm1       crmd: (  throttle.c:256   )   debug: throttle_cib_load: 	Current CIB load from 0 ticks in 30s is 0.000000 (@100 tps)
Nov 13 14:02:51 [15883] vm1       crmd: (  throttle.c:298   )   debug: throttle_load_avg: 	Current load is 0.000000 (full: 0.00 0.00 0.00 1/116 16321)
Nov 13 14:02:51 [15883] vm1       crmd: (  throttle.c:378   )   debug: throttle_io_load: 	Current IO load is 0.000000
Nov 13 14:03:04 [15883] vm1       crmd: (     utils.c:120   )    info: crm_timer_popped: 	PEngine Recheck Timer (I_PE_CALC) just popped (900000ms)
Nov 13 14:03:04 [15883] vm1       crmd: (       fsa.c:193   )   debug: s_crmd_fsa: 	Processing I_PE_CALC: [ state=S_IDLE cause=C_TIMER_POPPED origin=crm_timer_popped ]
Nov 13 14:03:04 [15883] vm1       crmd: (       fsa.c:502   )  notice: do_state_transition: 	State transition S_IDLE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_TIMER_POPPED origin=crm_timer_popped ]
Nov 13 14:03:04 [15883] vm1       crmd: (       fsa.c:599   )    info: do_state_transition: 	Progressed to state S_POLICY_ENGINE after C_TIMER_POPPED
Nov 13 14:03:04 [15883] vm1       crmd: (       fsa.c:610   )   debug: do_state_transition: 	All 3 cluster nodes are eligible to run resources.
Nov 13 14:03:04 [15883] vm1       crmd: (   pengine.c:231   )   debug: do_pe_invoke: 	Query 76: Requesting the current CIB: S_POLICY_ENGINE
Nov 13 14:03:04 [15878] vm1        cib: ( callbacks.c:761   )    info: cib_process_request: 	Completed cib_query operation for section 'all': OK (rc=0, origin=local/crmd/76, version=0.8.20)
Nov 13 14:03:04 [15882] vm1    pengine: (   pengine.c:116   )    info: process_pe_message: 	Input has not changed since last time, not saving to disk
Nov 13 14:03:04 [15882] vm1    pengine: (    unpack.c:98    )   debug: unpack_config: 	STONITH timeout: 60000
Nov 13 14:03:04 [15882] vm1    pengine: (    unpack.c:102   )   debug: unpack_config: 	STONITH of failed nodes is enabled
Nov 13 14:03:04 [15882] vm1    pengine: (    unpack.c:109   )   debug: unpack_config: 	Stop all active resources: false
Nov 13 14:03:04 [15882] vm1    pengine: (    unpack.c:113   )   debug: unpack_config: 	Cluster is symmetric - resources can run anywhere by default
Nov 13 14:03:04 [15882] vm1    pengine: (    unpack.c:118   )   debug: unpack_config: 	Default stickiness: 0
Nov 13 14:03:04 [15882] vm1    pengine: (    unpack.c:152   )   debug: unpack_config: 	On loss of CCM Quorum: Freeze resources
Nov 13 14:03:04 [15882] vm1    pengine: (    unpack.c:201   )   debug: unpack_config: 	Node scores: 'red' = -INFINITY, 'yellow' = 0, 'green' = 0
Nov 13 14:03:04 [15882] vm1    pengine: (    unpack.c:509   )   debug: unpack_domains: 	Unpacking domains
Nov 13 14:03:04 [15883] vm1       crmd: (   pengine.c:299   )   debug: do_pe_invoke_callback: 	Invoking the PE: query=76, ref=pe_calc-dc-1384318984-58, seq=12, quorate=1
Nov 13 14:03:04 [15882] vm1    pengine: (    unpack.c:1220  )    info: determine_online_status_fencing: 	Node vm3 is active
Nov 13 14:03:04 [15882] vm1    pengine: (    unpack.c:1336  )    info: determine_online_status: 	Node vm3 is online
Nov 13 14:03:04 [15882] vm1    pengine: (    unpack.c:1220  )    info: determine_online_status_fencing: 	Node vm1 is active
Nov 13 14:03:04 [15882] vm1    pengine: (    unpack.c:1336  )    info: determine_online_status: 	Node vm1 is online
Nov 13 14:03:04 [15882] vm1    pengine: (    unpack.c:1220  )    info: determine_online_status_fencing: 	Node vm2 is active
Nov 13 14:03:04 [15882] vm1    pengine: (    unpack.c:1336  )    info: determine_online_status: 	Node vm2 is online
Nov 13 14:03:04 [15882] vm1    pengine: (    unpack.c:2456  )   debug: determine_op_status: 	pDummy_monitor_10000 on vm3 returned 'not running' (7) instead of the expected value: 'ok' (0)
Nov 13 14:03:04 [15882] vm1    pengine: (    unpack.c:2334  ) warning: unpack_rsc_op_failure: 	Processing failed op monitor for pDummy on vm3: not running (7)
Nov 13 14:03:04 [15882] vm1    pengine: (    unpack.c:64    ) warning: pe_fence_node: 	Node vm3 will be fenced because of resource failure(s)
Nov 13 14:03:04 [15882] vm1    pengine: (    native.c:446   )    info: native_print: 	F1	(stonith:external/libvirt):	Started vm1 
Nov 13 14:03:04 [15882] vm1    pengine: (    native.c:446   )    info: native_print: 	pDummy	(ocf::pacemaker:Dummy):	FAILED vm3 
Nov 13 14:03:04 [15882] vm1    pengine: (  allocate.c:593   )   debug: common_apply_stickiness: 	Resource F1: preferring current location (node=vm1, weight=1000000)
Nov 13 14:03:04 [15882] vm1    pengine: (  allocate.c:593   )   debug: common_apply_stickiness: 	Resource pDummy: preferring current location (node=vm3, weight=1000000)
Nov 13 14:03:04 [15882] vm1    pengine: (     utils.c:1497  )    info: get_failcount_full: 	pDummy has failed 1 times on vm3
Nov 13 14:03:04 [15882] vm1    pengine: (  allocate.c:622   ) warning: common_apply_stickiness: 	Forcing pDummy away from vm3 after 1 failures (max=1)
Nov 13 14:03:04 [15882] vm1    pengine: (     utils.c:386   )   debug: native_assign_node: 	Assigning vm1 to F1
Nov 13 14:03:04 [15882] vm1    pengine: (     utils.c:386   )   debug: native_assign_node: 	Assigning vm2 to pDummy
Nov 13 14:03:04 [15882] vm1    pengine: (    native.c:788   )    info: RecurringOp: 	 Start recurring monitor (10s) for pDummy on vm2
Nov 13 14:03:04 [15882] vm1    pengine: (  allocate.c:1346  ) warning: stage6: 	Scheduling Node vm3 for STONITH
Nov 13 14:03:04 [15882] vm1    pengine: (    native.c:2585  )  notice: native_stop_constraints: 	Stop of failed resource pDummy is implicit after vm3 is fenced
Nov 13 14:03:04 [15882] vm1    pengine: (    native.c:1987  )    info: LogActions: 	Leave   F1	(Started vm1)
Nov 13 14:03:04 [15882] vm1    pengine: (    native.c:1996  )  notice: LogActions: 	Recover pDummy	(Started vm3 -> vm2)
Nov 13 14:03:04 [15883] vm1       crmd: (       fsa.c:193   )   debug: s_crmd_fsa: 	Processing I_PE_SUCCESS: [ state=S_POLICY_ENGINE cause=C_IPC_MESSAGE origin=handle_response ]
Nov 13 14:03:04 [15883] vm1       crmd: (       fsa.c:502   )    info: do_state_transition: 	State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
Nov 13 14:03:04 [15883] vm1       crmd: (    unpack.c:230   )   debug: unpack_graph: 	Unpacked transition 13: 6 actions in 6 synapses
Nov 13 14:03:04 [15883] vm1       crmd: (   tengine.c:208   )    info: do_te_invoke: 	Processing graph 13 (ref=pe_calc-dc-1384318984-58) derived from /var/lib/pacemaker/pengine/pe-warn-0.bz2
Nov 13 14:03:04 [15883] vm1       crmd: (te_actions.c:140   )  notice: te_fence_node: 	Executing reboot fencing operation (13) on vm3 (timeout=60000)
Nov 13 14:03:04 [15879] vm1 stonith-ng: (      main.c:121   )   trace: st_ipc_dispatch: 	Flags 0/0 for command 127 from crmd.15883
Nov 13 14:03:04 [15879] vm1 stonith-ng: (      main.c:133   )   trace: st_ipc_dispatch: 	Client[inbound]   <stonith_command __name__="stonith_command" t="stonith-ng" st_async_id="e6cedc3f-e233-4a66-9291-ce359bd76aad" st_op="st_fence" st_callid="13" st_callopt="0" st_timeout="60" st_clientid="e6cedc3f-e233-4a66-9291-ce359bd76aad" st_clientname="crmd.15883" st_clientnode="vm1">
Nov 13 14:03:04 [15879] vm1 stonith-ng: (      main.c:133   )   trace: st_ipc_dispatch: 	Client[inbound]     <st_calldata>
Nov 13 14:03:04 [15879] vm1 stonith-ng: (      main.c:133   )   trace: st_ipc_dispatch: 	Client[inbound]       <stonith_api_fence st_target="vm3" st_device_action="reboot" st_timeout="60" st_tolerance="0"/>
Nov 13 14:03:04 [15879] vm1 stonith-ng: (      main.c:133   )   trace: st_ipc_dispatch: 	Client[inbound]     </st_calldata>
Nov 13 14:03:04 [15879] vm1 stonith-ng: (      main.c:133   )   trace: st_ipc_dispatch: 	Client[inbound]   </stonith_command>
Nov 13 14:03:04 [15879] vm1 stonith-ng: (  commands.c:2049  )   debug: stonith_command: 	Processing st_fence 127 from crmd.15883 (               0)
Nov 13 14:03:04 [15879] vm1 stonith-ng: (  commands.c:1875  )  notice: handle_request: 	Client crmd.15883.e6cedc3f wants to fence (reboot) 'vm3' with device '(any)'
Nov 13 14:03:04 [15879] vm1 stonith-ng: (    remote.c:1412  )   trace: stonith_check_fence_tolerance: 	tolerance=0, remote_op_list=0x193c5e0
Nov 13 14:03:04 [15879] vm1 stonith-ng: (    remote.c:593   )   trace: create_remote_stonith_op: 	Created 893bcd8c-11ea-4f2c-b5d5-e2c9d3883c1b
Nov 13 14:03:04 [15879] vm1 stonith-ng: (    remote.c:623   )   trace: create_remote_stonith_op: 	Generated new stonith op: 893bcd8c-11ea-4f2c-b5d5-e2c9d3883c1b - reboot of vm3 for crmd.15883
Nov 13 14:03:04 [15879] vm1 stonith-ng: (    remote.c:474   )   trace: merge_duplicates: 	Must be for different clients: crmd.15883
Nov 13 14:03:04 [15879] vm1 stonith-ng: (    remote.c:437   )   trace: stonith_topology_next: 	Attempting fencing level 1 for vm3 (1 devices) - crmd.15883@vm1.893bcd8c
Nov 13 14:03:04 [15879] vm1 stonith-ng: (    remote.c:689   )  notice: initiate_remote_stonith_op: 	Initiating remote operation reboot for vm3: 893bcd8c-11ea-4f2c-b5d5-e2c9d3883c1b (0)
Nov 13 14:03:04 [15879] vm1 stonith-ng: (  commands.c:2063  )    info: stonith_command: 	Processed st_fence from crmd.15883: Operation now in progress (-115)
Nov 13 14:03:04 [15883] vm1       crmd: (     graph.c:336   )   debug: run_graph: 	Transition 13 (Complete=0, Pending=1, Fired=1, Skipped=0, Incomplete=5, Source=/var/lib/pacemaker/pengine/pe-warn-0.bz2): In-progress
Nov 13 14:03:04 [15882] vm1    pengine: (   pengine.c:175   ) warning: process_pe_message: 	Calculated Transition 13: /var/lib/pacemaker/pengine/pe-warn-0.bz2
Nov 13 14:03:04 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]   <stonith_command __name__="stonith_command" t="stonith-ng" st_async_id="893bcd8c-11ea-4f2c-b5d5-e2c9d3883c1b" st_op="st_query" st_callid="13" st_callopt="0" st_remote_op="893bcd8c-11ea-4f2c-b5d5-e2c9d3883c1b" st_target="vm3" st_device_action="reboot" st_origin="vm1" st_clientid="e6cedc3f-e233-4a66-9291-ce359bd76aad" st_clientname="crmd.15883" st_timeout="60" src="vm1"/>
Nov 13 14:03:04 [15879] vm1 stonith-ng: (  commands.c:2049  )   debug: stonith_command: 	Processing st_query 0 from vm1 (               0)
Nov 13 14:03:04 [15879] vm1 stonith-ng: (    remote.c:576   )   debug: create_remote_stonith_op: 	893bcd8c-11ea-4f2c-b5d5-e2c9d3883c1b already exists
Nov 13 14:03:04 [15879] vm1 stonith-ng: (  commands.c:1296  )   debug: stonith_query: 	Query   <stonith_command __name__="stonith_command" t="stonith-ng" st_async_id="893bcd8c-11ea-4f2c-b5d5-e2c9d3883c1b" st_op="st_query" st_callid="13" st_callopt="0" st_remote_op="893bcd8c-11ea-4f2c-b5d5-e2c9d3883c1b" st_target="vm3" st_device_action="reboot" st_origin="vm1" st_clientid="e6cedc3f-e233-4a66-9291-ce359bd76aad" st_clientname="crmd.15883" st_timeout="60" src="vm1"/>
Nov 13 14:03:04 [15879] vm1 stonith-ng: (  commands.c:1650  )   trace: stonith_construct_reply: 	Creating a basic reply
Nov 13 14:03:04 [15879] vm1 stonith-ng: (  commands.c:1199  )   debug: get_capable_devices: 	Searching through 1 devices to see what is capable of action (reboot) for target vm3
Nov 13 14:03:04 [15879] vm1 stonith-ng: (  commands.c:288   )   debug: schedule_stonith_command: 	Scheduling list on F1 for stonith-ng (timeout=60s)
Nov 13 14:03:04 [15879] vm1 stonith-ng: (  commands.c:2063  )    info: stonith_command: 	Processed st_query from vm1: OK (0)
Nov 13 14:03:04 [15879] vm1 stonith-ng: ( st_client.c:576   )    info: stonith_action_create: 	Initiating action list for agent fence_legacy (target=(null))
Nov 13 14:03:04 [15879] vm1 stonith-ng: ( st_client.c:739   )   debug: internal_stonith_action_execute: 	forking
Nov 13 14:03:04 [15879] vm1 stonith-ng: ( st_client.c:785   )   debug: internal_stonith_action_execute: 	sending args
Nov 13 14:03:04 [15879] vm1 stonith-ng: (  commands.c:246   )   debug: stonith_device_execute: 	Operation list on F1 now running with pid=16323, timeout=60s
Nov 13 14:03:04 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]   <st-reply st_origin="stonith_construct_reply" t="stonith-ng" st_rc="0" st_op="st_query" st_callid="13" st_clientid="e6cedc3f-e233-4a66-9291-ce359bd76aad" st_clientname="crmd.15883" st_remote_op="893bcd8c-11ea-4f2c-b5d5-e2c9d3883c1b" st_callopt="0" src="vm3">
Nov 13 14:03:04 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]     <st_calldata>
Nov 13 14:03:04 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]       <stonith_query_capable_device_cb st_target="vm3" st-available-devices="0"/>
Nov 13 14:03:04 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]     </st_calldata>
Nov 13 14:03:04 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]   </st-reply>
Nov 13 14:03:04 [15879] vm1 stonith-ng: (  commands.c:2049  )   debug: stonith_command: 	Processing st_query reply 0 from vm3 (               0)
Nov 13 14:03:04 [15879] vm1 stonith-ng: (    remote.c:1137  )   trace: process_remote_stonith_query: 	Query result from vm3 (0 devices)
Nov 13 14:03:04 [15879] vm1 stonith-ng: (  commands.c:2063  )    info: stonith_command: 	Processed st_query reply from vm3: OK (0)
Nov 13 14:03:04 [15879] vm1 stonith-ng: ( st_client.c:678   )   debug: stonith_action_async_done: 	Child process 16323 performing action 'list' exited with rc 0
Nov 13 14:03:04 [15879] vm1 stonith-ng: (  commands.c:749   )    info: dynamic_list_search_cb: 	Refreshing port list for F1
Nov 13 14:03:04 [15879] vm1 stonith-ng: (  commands.c:409   )   trace: parse_host_line: 	Processing 3 bytes: [vm3]
Nov 13 14:03:04 [15879] vm1 stonith-ng: (  commands.c:436   )   trace: parse_host_line: 	Adding 'vm3'
Nov 13 14:03:04 [15879] vm1 stonith-ng: (  commands.c:409   )   trace: parse_host_line: 	Processing 11 bytes: [success:  0]
Nov 13 14:03:04 [15879] vm1 stonith-ng: (  commands.c:436   )   trace: parse_host_line: 	Adding 'success'
Nov 13 14:03:04 [15879] vm1 stonith-ng: (  commands.c:436   )   trace: parse_host_line: 	Adding '0'
Nov 13 14:03:04 [15879] vm1 stonith-ng: (  commands.c:480   )   trace: parse_host_list: 	Parsed 3 entries from 'vm3
success:  0
'
Nov 13 14:03:04 [15879] vm1 stonith-ng: (  commands.c:1037  )   debug: search_devices_record_result: 	Finished Search. 1 devices can perform action (reboot) on node vm3
Nov 13 14:03:04 [15879] vm1 stonith-ng: (  commands.c:1254  )   debug: stonith_query_capable_device_cb: 	Found 1 matching devices for 'vm3'
Nov 13 14:03:04 [15879] vm1 stonith-ng: (  commands.c:1260  )   trace: stonith_query_capable_device_cb: 	Attaching query list output
Nov 13 14:03:04 [15879] vm1 stonith-ng: (  commands.c:213   )   trace: stonith_device_execute: 	Nothing further to do for F1
Nov 13 14:03:04 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]   <st-reply st_origin="stonith_construct_reply" t="stonith-ng" st_rc="0" st_op="st_query" st_callid="13" st_clientid="e6cedc3f-e233-4a66-9291-ce359bd76aad" st_clientname="crmd.15883" st_remote_op="893bcd8c-11ea-4f2c-b5d5-e2c9d3883c1b" st_callopt="0" src="vm1">
Nov 13 14:03:04 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]     <st_calldata>
Nov 13 14:03:04 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]       <stonith_query_capable_device_cb st_target="vm3" st-available-devices="1">
Nov 13 14:03:04 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]         <st_device_id id="F1" namespace="heartbeat" agent="fence_legacy" st_monitor_verified="1"/>
Nov 13 14:03:04 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]       </stonith_query_capable_device_cb>
Nov 13 14:03:04 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]     </st_calldata>
Nov 13 14:03:04 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]   </st-reply>
Nov 13 14:03:04 [15879] vm1 stonith-ng: (  commands.c:2049  )   debug: stonith_command: 	Processing st_query reply 0 from vm1 (               0)
Nov 13 14:03:04 [15879] vm1 stonith-ng: (    remote.c:1154  )    info: process_remote_stonith_query: 	Query result 2 of 3 from vm1 (1 devices)
Nov 13 14:03:04 [15879] vm1 stonith-ng: (    remote.c:1177  )   trace: process_remote_stonith_query: 	Peer vm1 has confirmed a verified device F1
Nov 13 14:03:04 [15879] vm1 stonith-ng: (    remote.c:1196  )   trace: process_remote_stonith_query: 	All topology devices found
Nov 13 14:03:04 [15879] vm1 stonith-ng: (    remote.c:943   )   trace: call_remote_stonith: 	State for vm3.crmd.158: 893bcd8c-11ea-4f2c-b5d5-e2c9d3883c1b 0
Nov 13 14:03:04 [15879] vm1 stonith-ng: (    remote.c:904   )   trace: report_timeout_period: 	Reporting timeout for crmd.15883.893bcd8c
Nov 13 14:03:04 [15879] vm1 stonith-ng: (      main.c:343   )   trace: do_stonith_async_timeout_update: 	timeout update is 72 for client e6cedc3f-e233-4a66-9291-ce359bd76aad and call id 13
Nov 13 14:03:04 [15879] vm1 stonith-ng: (    remote.c:955   )    info: call_remote_stonith: 	Total remote op timeout set to 60 for fencing of node vm3 for crmd.15883.893bcd8c
Nov 13 14:03:04 [15879] vm1 stonith-ng: (    remote.c:786   )   trace: stonith_choose_peer: 	Checking for someone to fence vm3 with F1
Nov 13 14:03:04 [15879] vm1 stonith-ng: (    remote.c:757   )   trace: find_best_peer: 	Removing F1 from vm1 (1 remaining)
Nov 13 14:03:04 [15879] vm1 stonith-ng: (    remote.c:984   )    info: call_remote_stonith: 	Requesting that vm1 perform op reboot vm3 with F1 for crmd.15883 (72s)
Nov 13 14:03:04 [15879] vm1 stonith-ng: (  commands.c:2063  )    info: stonith_command: 	Processed st_query reply from vm1: OK (0)
Nov 13 14:03:04 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]   <stonith_command __name__="stonith_command" t="stonith-ng" st_async_id="893bcd8c-11ea-4f2c-b5d5-e2c9d3883c1b" st_op="st_fence" st_callid="13" st_callopt="0" st_remote_op="893bcd8c-11ea-4f2c-b5d5-e2c9d3883c1b" st_target="vm3" st_device_action="reboot" st_origin="vm1" st_clientid="e6cedc3f-e233-4a66-9291-ce359bd76aad" st_clientname="crmd.15883" st_timeout="60" st_device_id="F1" st_mode="slave" src="
Nov 13 14:03:04 [15879] vm1 stonith-ng: (  commands.c:2049  )   debug: stonith_command: 	Processing st_fence 0 from vm1 (               0)
Nov 13 14:03:04 [15879] vm1 stonith-ng: (  commands.c:165   )   trace: create_async_command: 	Command   <stonith_command __name__="stonith_command" t="stonith-ng" st_async_id="893bcd8c-11ea-4f2c-b5d5-e2c9d3883c1b" st_op="st_fence" st_callid="13" st_callopt="0" st_remote_op="893bcd8c-11ea-4f2c-b5d5-e2c9d3883c1b" st_target="vm3" st_device_action="reboot" st_origin="vm1" st_clientid="e6cedc3f-e233-4a66-9291-ce359bd76aad" st_clientname="crmd.15883" st_timeout="60" st_device_id="F1" st_mode="slave" src="vm1"/>
Nov 13 14:03:04 [15879] vm1 stonith-ng: (  commands.c:285   )   debug: schedule_stonith_command: 	Scheduling reboot on F1 for remote peer vm1 with op id (893bcd8c-11ea-4f2c-b5d5-e2c9d3883c1b) (timeout=60s)
Nov 13 14:03:04 [15879] vm1 stonith-ng: (  commands.c:2063  )    info: stonith_command: 	Processed st_fence from vm1: Operation now in progress (-115)
Nov 13 14:03:04 [15879] vm1 stonith-ng: ( st_client.c:576   )    info: stonith_action_create: 	Initiating action reboot for agent fence_legacy (target=vm3)
Nov 13 14:03:04 [15879] vm1 stonith-ng: ( st_client.c:476   )   debug: make_args: 	Performing reboot action for node 'vm3' as 'port=vm3'
Nov 13 14:03:04 [15879] vm1 stonith-ng: ( st_client.c:739   )   debug: internal_stonith_action_execute: 	forking
Nov 13 14:03:04 [15879] vm1 stonith-ng: ( st_client.c:785   )   debug: internal_stonith_action_execute: 	sending args
Nov 13 14:03:04 [15879] vm1 stonith-ng: (  commands.c:246   )   debug: stonith_device_execute: 	Operation reboot for node vm3 on F1 now running with pid=16336, timeout=60s
Nov 13 14:03:04 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]   <st-reply st_origin="stonith_construct_reply" t="stonith-ng" st_rc="0" st_op="st_query" st_callid="13" st_clientid="e6cedc3f-e233-4a66-9291-ce359bd76aad" st_clientname="crmd.15883" st_remote_op="893bcd8c-11ea-4f2c-b5d5-e2c9d3883c1b" st_callopt="0" src="vm2">
Nov 13 14:03:04 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]     <st_calldata>
Nov 13 14:03:04 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]       <stonith_query_capable_device_cb st_target="vm3" st-available-devices="1">
Nov 13 14:03:04 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]         <st_device_id id="F1" namespace="heartbeat" agent="fence_legacy" st_monitor_verified="0"/>
Nov 13 14:03:04 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]       </stonith_query_capable_device_cb>
Nov 13 14:03:04 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]     </st_calldata>
Nov 13 14:03:04 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]   </st-reply>
Nov 13 14:03:04 [15879] vm1 stonith-ng: (  commands.c:2049  )   debug: stonith_command: 	Processing st_query reply 0 from vm2 (               0)
Nov 13 14:03:04 [15879] vm1 stonith-ng: (    remote.c:1154  )    info: process_remote_stonith_query: 	Query result 3 of 3 from vm2 (1 devices)
Nov 13 14:03:04 [15879] vm1 stonith-ng: (  commands.c:2063  )    info: stonith_command: 	Processed st_query reply from vm2: OK (0)
Nov 13 14:03:05 vm1 stonith: [16337]: CRIT: external_reset_req: 'libvirt reset' for host vm3 failed with rc 1
Nov 13 14:03:05 [15879] vm1 stonith-ng: ( st_client.c:678   )   debug: stonith_action_async_done: 	Child process 16336 performing action 'reboot' exited with rc 1
Nov 13 14:03:05 [15879] vm1 stonith-ng: ( st_client.c:719   )    info: internal_stonith_action_execute: 	Attempt 2 to execute fence_legacy (reboot). remaining timeout is 59
Nov 13 14:03:05 [15879] vm1 stonith-ng: ( st_client.c:739   )   debug: internal_stonith_action_execute: 	forking
Nov 13 14:03:05 [15879] vm1 stonith-ng: ( st_client.c:785   )   debug: internal_stonith_action_execute: 	sending args
Nov 13 14:03:06 [15863] vm1 corosync debug   [QB    ] ipc_setup.c:handle_new_connection:484 IPC credentials authenticated (15865-16409-33)
Nov 13 14:03:06 [15863] vm1 corosync debug   [QB    ] ipc_shm.c:qb_ipcs_shm_connect:295 connecting to client [16409]
Nov 13 14:03:06 [15863] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 14:03:06 [15863] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 14:03:06 [15863] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 14:03:06 [15863] vm1 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_created:272 connection created
Nov 13 14:03:06 [15863] vm1 corosync debug   [CMAP  ] cmap.c:cmap_lib_init_fn:306 lib_init_fn: conn=0x7fa788262300
Nov 13 14:03:06 [15863] vm1 corosync debug   [QB    ] ipcs.c:qb_ipcs_dispatch_connection_request:757 HUP conn (15865-16409-33)
Nov 13 14:03:06 [15863] vm1 corosync debug   [QB    ] ipcs.c:qb_ipcs_disconnect:605 qb_ipcs_disconnect(15865-16409-33) state:2
Nov 13 14:03:06 [15863] vm1 corosync debug   [QB    ] loop_poll_epoll.c:_del:117 epoll_ctl(del): Bad file descriptor (9)
Nov 13 14:03:06 [15863] vm1 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_closed:417 cs_ipcs_connection_closed() 
Nov 13 14:03:06 [15863] vm1 corosync debug   [CMAP  ] cmap.c:cmap_lib_exit_fn:325 exit_fn for conn=0x7fa788262300
Nov 13 14:03:06 [15863] vm1 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_destroyed:390 cs_ipcs_connection_destroyed() 
Nov 13 14:03:06 [15863] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_close:299 Free'ing ringbuffer: /dev/shm/qb-cmap-response-15865-16409-33-header
Nov 13 14:03:06 [15863] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_close:299 Free'ing ringbuffer: /dev/shm/qb-cmap-event-15865-16409-33-header
Nov 13 14:03:06 [15863] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_close:299 Free'ing ringbuffer: /dev/shm/qb-cmap-request-15865-16409-33-header
Nov 13 14:03:07 vm1 stonith: [16391]: CRIT: external_reset_req: 'libvirt reset' for host vm3 failed with rc 1
Nov 13 14:03:07 [15879] vm1 stonith-ng: ( st_client.c:678   )   debug: stonith_action_async_done: 	Child process 16348 performing action 'reboot' exited with rc 1
Nov 13 14:03:07 [15879] vm1 stonith-ng: ( st_client.c:641   )    info: update_remaining_timeout: 	Attempted to execute agent fence_legacy (reboot) the maximum number of times (2) allowed
Nov 13 14:03:07 [15879] vm1 stonith-ng: (  commands.c:1463  )   trace: st_child_done: 	Operation reboot on F1 completed with rc=1 (0 remaining)
Nov 13 14:03:07 [15879] vm1 stonith-ng: (  commands.c:1677  )   trace: stonith_construct_async_reply: 	Creating a basic reply
Nov 13 14:03:07 [15879] vm1 stonith-ng: (  commands.c:1321  )   error: log_operation: 	Operation 'reboot' [16348] (call 13 from crmd.15883) for host 'vm3' with device 'F1' returned: -201 (Generic Pacemaker error)
Nov 13 14:03:07 [15879] vm1 stonith-ng: (  commands.c:1333  ) warning: log_operation: 	F1:16348 [ Performing: stonith -t external/libvirt -T reset vm3 ]
Nov 13 14:03:07 [15879] vm1 stonith-ng: (  commands.c:1333  ) warning: log_operation: 	F1:16348 [ failed: vm3 5 ]
Nov 13 14:03:07 [15879] vm1 stonith-ng: (  commands.c:1362  )   trace: stonith_send_async_reply: 	Reply   <st-reply st_origin="vm1" t="stonith-ng" st_op="st_fence" st_device_id="F1" st_remote_op="893bcd8c-11ea-4f2c-b5d5-e2c9d3883c1b" st_clientid="e6cedc3f-e233-4a66-9291-ce359bd76aad" st_clientname="crmd.15883" st_target="vm3" st_device_action="st_fence" st_callid="13" st_callopt="0" st_rc="-201" st_output="Performing: stonith -t external/libvirt -T reset vm3\nfailed: vm3 5\n"/>
Nov 13 14:03:07 [15879] vm1 stonith-ng: (  commands.c:1369  )   trace: stonith_send_async_reply: 	Directed reply to vm1
Nov 13 14:03:07 [15879] vm1 stonith-ng: (  commands.c:213   )   trace: stonith_device_execute: 	Nothing further to do for F1
Nov 13 14:03:07 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]   <st-reply st_origin="vm1" t="stonith-ng" st_op="st_fence" st_device_id="F1" st_remote_op="893bcd8c-11ea-4f2c-b5d5-e2c9d3883c1b" st_clientid="e6cedc3f-e233-4a66-9291-ce359bd76aad" st_clientname="crmd.15883" st_target="vm3" st_device_action="st_fence" st_callid="13" st_callopt="0" st_rc="-201" st_output="Performing: stonith -t external/libvirt -T reset vm3\nfailed: vm3 5\n" src="vm1"/>
Nov 13 14:03:07 [15879] vm1 stonith-ng: (  commands.c:2049  )   debug: stonith_command: 	Processing st_fence reply 0 from vm1 (               0)
Nov 13 14:03:07 [15879] vm1 stonith-ng: (    remote.c:1303  )  notice: process_remote_stonith_exec: 	Call to F1 for vm3 on behalf of crmd.15883@vm1: Generic Pacemaker error (-201)
Nov 13 14:03:07 [15879] vm1 stonith-ng: (    remote.c:443   )  notice: stonith_topology_next: 	All fencing options to fence vm3 for crmd.15883@vm1.893bcd8c failed
Nov 13 14:03:07 [15879] vm1 stonith-ng: (    remote.c:160   )   trace: bcast_result_to_peers: 	Broadcasting result to peers
Nov 13 14:03:07 [15879] vm1 stonith-ng: (  commands.c:2063  )    info: stonith_command: 	Processed st_fence reply from vm1: OK (0)
Nov 13 14:03:07 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]   <st-reply t="st_notify" subt="broadcast" st_op="st_notify" count="12" src="vm1">
Nov 13 14:03:07 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]     <st_calldata>
Nov 13 14:03:07 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]       <st_notify_fence state="4" st_rc="-201" st_target="vm3" st_device_action="reboot" st_delegate="vm1" st_remote_op="893bcd8c-11ea-4f2c-b5d5-e2c9d3883c1b" st_origin="vm1" st_clientid="e6cedc3f-e233-4a66-9291-ce359bd76aad" st_clientname="crmd.15883"/>
Nov 13 14:03:07 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]     </st_calldata>
Nov 13 14:03:07 [15879] vm1 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]   </st-reply>
Nov 13 14:03:07 [15879] vm1 stonith-ng: (  commands.c:2049  )   debug: stonith_command: 	Processing st_notify reply 0 from vm1 (               0)
Nov 13 14:03:07 [15879] vm1 stonith-ng: (    remote.c:1281  )   debug: process_remote_stonith_exec: 	Marking call to reboot for vm3 on behalf of crmd.15883@893bcd8c-11ea-4f2c-b5d5-e2c9d3883c1b.vm1: Generic Pacemaker error (-201)
Nov 13 14:03:07 [15879] vm1 stonith-ng: (    remote.c:297   )   error: remote_op_done: 	Operation reboot of vm3 by vm1 for crmd.15883@vm1.893bcd8c: Generic Pacemaker error
Nov 13 14:03:07 [15879] vm1 stonith-ng: (  commands.c:1650  )   trace: stonith_construct_reply: 	Creating a basic reply
Nov 13 14:03:07 [15879] vm1 stonith-ng: (  commands.c:1666  )   trace: stonith_construct_reply: 	Attaching reply output
Nov 13 14:03:07 [15879] vm1 stonith-ng: (      main.c:241   )   trace: do_local_reply: 	Sending response
Nov 13 14:03:07 [15879] vm1 stonith-ng: (      main.c:244   )   trace: do_local_reply: 	Sending callback to request originator
Nov 13 14:03:07 [15879] vm1 stonith-ng: (      main.c:263   )   trace: do_local_reply: 	Sending an event to crmd.15883 
Nov 13 14:03:07 [15883] vm1       crmd: (te_callbacks:411   )  notice: tengine_stonith_callback: 	Stonith operation 13/13:13:0:154fb289-24e8-407e-9a03-69a510480b60: Generic Pacemaker error (-201)
Nov 13 14:03:07 [15883] vm1       crmd: (te_callbacks:462   )  notice: tengine_stonith_callback: 	Stonith operation 13 for vm3 failed (Generic Pacemaker error): aborting transition.
Nov 13 14:03:07 [15883] vm1       crmd: (  te_utils.c:425   )    info: abort_transition_graph: 	tengine_stonith_callback:463 - Triggered transition abort (complete=0) : Stonith failed
Nov 13 14:03:07 [15883] vm1       crmd: (     utils.c:271   )   debug: update_abort_priority: 	Abort priority upgraded from 0 to 1000000
Nov 13 14:03:07 [15883] vm1       crmd: (     utils.c:281   )   debug: update_abort_priority: 	Abort action done superceeded by restart
Nov 13 14:03:07 [15883] vm1       crmd: (     graph.c:336   )  notice: run_graph: 	Transition 13 (Complete=1, Pending=0, Fired=0, Skipped=5, Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-warn-0.bz2): Stopped
Nov 13 14:03:07 [15883] vm1       crmd: (  te_utils.c:349   )   debug: te_graph_trigger: 	Transition 13 is now complete
Nov 13 14:03:07 [15883] vm1       crmd: (te_actions.c:654   )   debug: notify_crmd: 	Processing transition completion in state S_TRANSITION_ENGINE
Nov 13 14:03:07 [15883] vm1       crmd: (te_callbacks:346   )  notice: too_many_st_failures: 	Too many failures to fence vm3 (12), giving up
Nov 13 14:03:07 [15883] vm1       crmd: (te_actions.c:699   )   debug: notify_crmd: 	Transition 13 status: restart - Stonith failed
Nov 13 14:03:07 [15883] vm1       crmd: (       fsa.c:193   )   debug: s_crmd_fsa: 	Processing I_TE_SUCCESS: [ state=S_TRANSITION_ENGINE cause=C_FSA_INTERNAL origin=notify_crmd ]
Nov 13 14:03:07 [15883] vm1       crmd: (      misc.c:47    )    info: do_log: 	FSA: Input I_TE_SUCCESS from notify_crmd() received in state S_TRANSITION_ENGINE
Nov 13 14:03:07 [15883] vm1       crmd: (       fsa.c:502   )  notice: do_state_transition: 	State transition S_TRANSITION_ENGINE -> S_IDLE [ input=I_TE_SUCCESS cause=C_FSA_INTERNAL origin=notify_crmd ]
Nov 13 14:03:07 [15883] vm1       crmd: (       fsa.c:645   )   debug: do_state_transition: 	Starting PEngine Recheck Timer
Nov 13 14:03:07 [15883] vm1       crmd: (     utils.c:192   )   debug: crm_timer_start: 	Started PEngine Recheck Timer (I_PE_CALC:900000ms), src=134
Nov 13 14:03:07 [15879] vm1 stonith-ng: (      main.c:369   )   trace: do_stonith_notify: 	Notifying clients
Nov 13 14:03:07 [15883] vm1       crmd: (  te_utils.c:167   )  notice: tengine_stonith_notify: 	Peer vm3 was not terminated (reboot) by vm1 for vm1: Generic Pacemaker error (ref=893bcd8c-11ea-4f2c-b5d5-e2c9d3883c1b) by client crmd.15883
Nov 13 14:03:07 [15879] vm1 stonith-ng: (      main.c:318   )   trace: stonith_notify_client: 	Sent st_notify_fence notification to client crmd.15883.e6cedc
Nov 13 14:03:07 [15879] vm1 stonith-ng: (      main.c:372   )   trace: do_stonith_notify: 	Notify complete
Nov 13 14:03:07 [15879] vm1 stonith-ng: (    remote.c:77    )   trace: free_remote_query: 	Free'ing query result from vm2
Nov 13 14:03:07 [15879] vm1 stonith-ng: (    remote.c:77    )   trace: free_remote_query: 	Free'ing query result from vm1
Nov 13 14:03:07 [15879] vm1 stonith-ng: (  commands.c:2063  )    info: stonith_command: 	Processed st_notify reply from vm1: OK (0)
Nov 13 14:03:16 [15878] vm1        cib: (       ipc.c:327   )    info: crm_client_new: 	Connecting 0x16104a0 for uid=0 gid=0 pid=17477 id=41ad2846-b251-4248-ba10-3fc2b7a69936
Nov 13 14:03:16 [15878] vm1        cib: ( ipc_setup.c:484   )   debug: handle_new_connection: 	IPC credentials authenticated (15878-17477-14)
