Nov 13 13:44:17 [450] vm3 corosync notice  [MAIN  ] main.c:main:1171 Corosync Cluster Engine ('2.3.2.7-a911'): started and ready to provide service.
Nov 13 13:44:17 [450] vm3 corosync info    [MAIN  ] main.c:main:1172 Corosync built-in features: watchdog upstart snmp pie relro bindnow
Nov 13 13:44:17 [450] vm3 corosync debug   [TOTEM ] totempg.c:totempg_waiting_trans_ack_cb:285 waiting_trans_ack changed to 1
Nov 13 13:44:17 [450] vm3 corosync debug   [TOTEM ] totemsrp.c:totemsrp_initialize:901 Token Timeout (1000 ms) retransmit timeout (238 ms)
Nov 13 13:44:17 [450] vm3 corosync debug   [TOTEM ] totemsrp.c:totemsrp_initialize:904 token hold (180 ms) retransmits before loss (4 retrans)
Nov 13 13:44:17 [450] vm3 corosync debug   [TOTEM ] totemsrp.c:totemsrp_initialize:911 join (50 ms) send_join (0 ms) consensus (1200 ms) merge (200 ms)
Nov 13 13:44:17 [450] vm3 corosync debug   [TOTEM ] totemsrp.c:totemsrp_initialize:914 downcheck (1000 ms) fail to recv const (2500 msgs)
Nov 13 13:44:17 [450] vm3 corosync debug   [TOTEM ] totemsrp.c:totemsrp_initialize:916 seqno unchanged const (30 rotations) Maximum network MTU 1401
Nov 13 13:44:17 [450] vm3 corosync debug   [TOTEM ] totemsrp.c:totemsrp_initialize:920 window size per rotation (50 messages) maximum messages per rotation (17 messages)
Nov 13 13:44:17 [450] vm3 corosync debug   [TOTEM ] totemsrp.c:totemsrp_initialize:924 missed count const (5 messages)
Nov 13 13:44:17 [450] vm3 corosync debug   [TOTEM ] totemsrp.c:totemsrp_initialize:927 send threads (0 threads)
Nov 13 13:44:17 [450] vm3 corosync debug   [TOTEM ] totemsrp.c:totemsrp_initialize:930 RRP token expired timeout (238 ms)
Nov 13 13:44:17 [450] vm3 corosync debug   [TOTEM ] totemsrp.c:totemsrp_initialize:933 RRP token problem counter (10000 ms)
Nov 13 13:44:17 [450] vm3 corosync debug   [TOTEM ] totemsrp.c:totemsrp_initialize:936 RRP threshold (10 problem count)
Nov 13 13:44:17 [450] vm3 corosync debug   [TOTEM ] totemsrp.c:totemsrp_initialize:939 RRP multicast threshold (100 problem count)
Nov 13 13:44:17 [450] vm3 corosync debug   [TOTEM ] totemsrp.c:totemsrp_initialize:942 RRP automatic recovery check timeout (1000 ms)
Nov 13 13:44:17 [450] vm3 corosync debug   [TOTEM ] totemsrp.c:totemsrp_initialize:944 RRP mode set to active.
Nov 13 13:44:17 [450] vm3 corosync debug   [TOTEM ] totemsrp.c:totemsrp_initialize:947 heartbeat_failures_allowed (0)
Nov 13 13:44:17 [450] vm3 corosync debug   [TOTEM ] totemsrp.c:totemsrp_initialize:949 max_network_delay (50 ms)
Nov 13 13:44:17 [450] vm3 corosync debug   [TOTEM ] totemsrp.c:totemsrp_initialize:972 HeartBeat is Disabled. To enable set heartbeat_failures_allowed > 0
Nov 13 13:44:17 [450] vm3 corosync notice  [TOTEM ] totemnet.c:totemnet_instance_initialize:242 Initializing transport (UDP/IP Multicast).
Nov 13 13:44:17 [450] vm3 corosync notice  [TOTEM ] totemcrypto.c:init_nss:579 Initializing transmit/receive security (NSS) crypto: aes256 hash: sha1
Nov 13 13:44:17 [450] vm3 corosync notice  [TOTEM ] totemnet.c:totemnet_instance_initialize:242 Initializing transport (UDP/IP Multicast).
Nov 13 13:44:17 [450] vm3 corosync notice  [TOTEM ] totemcrypto.c:init_nss:579 Initializing transmit/receive security (NSS) crypto: aes256 hash: sha1
Nov 13 13:44:18 [450] vm3 corosync debug   [TOTEM ] totemudp.c:totemudp_build_sockets_ip:905 Receive multicast socket recv buffer size (320000 bytes).
Nov 13 13:44:18 [450] vm3 corosync debug   [TOTEM ] totemudp.c:totemudp_build_sockets_ip:911 Transmit multicast socket send buffer size (320000 bytes).
Nov 13 13:44:18 [450] vm3 corosync debug   [TOTEM ] totemudp.c:totemudp_build_sockets_ip:917 Local receive multicast loop socket recv buffer size (320000 bytes).
Nov 13 13:44:18 [450] vm3 corosync debug   [TOTEM ] totemudp.c:totemudp_build_sockets_ip:923 Local transmit multicast loop socket send buffer size (320000 bytes).
Nov 13 13:44:18 [450] vm3 corosync notice  [TOTEM ] totemudp.c:timer_function_netif_check_timeout:670 The network interface [192.168.101.143] is now up.
Nov 13 13:44:18 [450] vm3 corosync debug   [TOTEM ] totemsrp.c:main_iface_change_fn:4637 Created or loaded sequence id 0.192.168.101.143 for this ring.
Nov 13 13:44:18 [450] vm3 corosync notice  [SERV  ] service.c:corosync_service_link_and_init:174 Service engine loaded: corosync configuration map access [0]
Nov 13 13:44:18 [450] vm3 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_service_init:865 Initializing IPC on cmap [0]
Nov 13 13:44:18 [450] vm3 corosync debug   [MAIN  ] ipc_glue.c:cs_get_ipc_type:811 No configured qb.ipc_type. Using native ipc
Nov 13 13:44:18 [450] vm3 corosync info    [QB    ] ipc_setup.c:qb_ipcs_us_publish:377 server name: cmap
Nov 13 13:44:18 [450] vm3 corosync notice  [SERV  ] service.c:corosync_service_link_and_init:174 Service engine loaded: corosync configuration service [1]
Nov 13 13:44:18 [450] vm3 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_service_init:865 Initializing IPC on cfg [1]
Nov 13 13:44:18 [450] vm3 corosync debug   [MAIN  ] ipc_glue.c:cs_get_ipc_type:811 No configured qb.ipc_type. Using native ipc
Nov 13 13:44:18 [450] vm3 corosync info    [QB    ] ipc_setup.c:qb_ipcs_us_publish:377 server name: cfg
Nov 13 13:44:18 [450] vm3 corosync notice  [SERV  ] service.c:corosync_service_link_and_init:174 Service engine loaded: corosync cluster closed process group service v1.01 [2]
Nov 13 13:44:18 [450] vm3 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_service_init:865 Initializing IPC on cpg [2]
Nov 13 13:44:18 [450] vm3 corosync debug   [MAIN  ] ipc_glue.c:cs_get_ipc_type:811 No configured qb.ipc_type. Using native ipc
Nov 13 13:44:18 [450] vm3 corosync info    [QB    ] ipc_setup.c:qb_ipcs_us_publish:377 server name: cpg
Nov 13 13:44:18 [450] vm3 corosync notice  [SERV  ] service.c:corosync_service_link_and_init:174 Service engine loaded: corosync profile loading service [4]
Nov 13 13:44:18 [450] vm3 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_service_init:851 NOT Initializing IPC on pload [4]
Nov 13 13:44:18 [450] vm3 corosync warning [WD    ] wd.c:setup_watchdog:631 No Watchdog, try modprobe <a watchdog>
Nov 13 13:44:18 [450] vm3 corosync info    [WD    ] wd.c:wd_scan_resources:580 no resources configured.
Nov 13 13:44:18 [450] vm3 corosync notice  [SERV  ] service.c:corosync_service_link_and_init:174 Service engine loaded: corosync watchdog service [7]
Nov 13 13:44:18 [450] vm3 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_service_init:851 NOT Initializing IPC on wd [7]
Nov 13 13:44:18 [450] vm3 corosync notice  [QUORUM] vsf_quorum.c:quorum_exec_init_fn:274 Using quorum provider corosync_votequorum
Nov 13 13:44:18 [450] vm3 corosync debug   [VOTEQ ] votequorum.c:votequorum_readconfig:967 Reading configuration (runtime: 0)
Nov 13 13:44:18 [450] vm3 corosync debug   [VOTEQ ] votequorum.c:votequorum_read_nodelist_configuration:886 No nodelist defined or our node is not in the nodelist
Nov 13 13:44:18 [450] vm3 corosync debug   [VOTEQ ] votequorum.c:recalculate_quorum:851 total_votes=1, expected_votes=3
Nov 13 13:44:18 [450] vm3 corosync debug   [VOTEQ ] votequorum.c:calculate_quorum:670 node 3232261519 state=1, votes=1, expected=3
Nov 13 13:44:18 [450] vm3 corosync debug   [VOTEQ ] votequorum.c:decode_flags:587 flags: quorate: No Leaving: No WFA Status: No First: Yes Qdevice: No QdeviceAlive: No QdeviceCastVote: No QdeviceMasterWins: No
Nov 13 13:44:18 [450] vm3 corosync notice  [SERV  ] service.c:corosync_service_link_and_init:174 Service engine loaded: corosync vote quorum service v1.0 [5]
Nov 13 13:44:18 [450] vm3 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_service_init:865 Initializing IPC on votequorum [5]
Nov 13 13:44:18 [450] vm3 corosync debug   [MAIN  ] ipc_glue.c:cs_get_ipc_type:811 No configured qb.ipc_type. Using native ipc
Nov 13 13:44:18 [450] vm3 corosync info    [QB    ] ipc_setup.c:qb_ipcs_us_publish:377 server name: votequorum
Nov 13 13:44:18 [450] vm3 corosync notice  [SERV  ] service.c:corosync_service_link_and_init:174 Service engine loaded: corosync cluster quorum service v0.1 [3]
Nov 13 13:44:18 [450] vm3 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_service_init:865 Initializing IPC on quorum [3]
Nov 13 13:44:18 [450] vm3 corosync debug   [MAIN  ] ipc_glue.c:cs_get_ipc_type:811 No configured qb.ipc_type. Using native ipc
Nov 13 13:44:18 [450] vm3 corosync info    [QB    ] ipc_setup.c:qb_ipcs_us_publish:377 server name: quorum
Nov 13 13:44:18 [450] vm3 corosync debug   [TOTEM ] totemudp.c:totemudp_build_sockets_ip:905 Receive multicast socket recv buffer size (320000 bytes).
Nov 13 13:44:18 [450] vm3 corosync debug   [TOTEM ] totemudp.c:totemudp_build_sockets_ip:911 Transmit multicast socket send buffer size (320000 bytes).
Nov 13 13:44:18 [450] vm3 corosync debug   [TOTEM ] totemudp.c:totemudp_build_sockets_ip:917 Local receive multicast loop socket recv buffer size (320000 bytes).
Nov 13 13:44:18 [450] vm3 corosync debug   [TOTEM ] totemudp.c:totemudp_build_sockets_ip:923 Local transmit multicast loop socket send buffer size (320000 bytes).
Nov 13 13:44:18 [450] vm3 corosync notice  [TOTEM ] totemudp.c:timer_function_netif_check_timeout:670 The network interface [192.168.102.143] is now up.
Nov 13 13:44:18 [450] vm3 corosync debug   [TOTEM ] totemsrp.c:memb_state_gather_enter:2087 entering GATHER state from 15(interface change).
Nov 13 13:44:18 [450] vm3 corosync debug   [TOTEM ] totemsrp.c:memb_state_commit_token_create:3138 Creating commit token because I am the rep.
Nov 13 13:44:18 [450] vm3 corosync debug   [TOTEM ] totemsrp.c:old_ring_state_save:1550 Saving state aru 0 high seq received 0
Nov 13 13:44:18 [450] vm3 corosync debug   [TOTEM ] totemsrp.c:memb_ring_id_set_and_store:3383 Storing new sequence id for ring 4
Nov 13 13:44:18 [450] vm3 corosync debug   [TOTEM ] totemsrp.c:memb_state_commit_enter:2135 entering COMMIT state.
Nov 13 13:44:18 [450] vm3 corosync debug   [TOTEM ] totemsrp.c:message_handler_memb_commit_token:4484 got commit token
Nov 13 13:44:18 [450] vm3 corosync debug   [TOTEM ] totemsrp.c:memb_state_recovery_enter:2172 entering RECOVERY state.
Nov 13 13:44:18 [450] vm3 corosync debug   [TOTEM ] totemsrp.c:memb_state_recovery_enter:2218 position [0] member 192.168.101.143:
Nov 13 13:44:18 [450] vm3 corosync debug   [TOTEM ] totemsrp.c:memb_state_recovery_enter:2222 previous ring seq 0 rep 192.168.101.143
Nov 13 13:44:18 [450] vm3 corosync debug   [TOTEM ] totemsrp.c:memb_state_recovery_enter:2228 aru 0 high delivered 0 received flag 1
Nov 13 13:44:18 [450] vm3 corosync debug   [TOTEM ] totemsrp.c:memb_state_recovery_enter:2326 Did not need to originate any messages in recovery.
Nov 13 13:44:18 [450] vm3 corosync debug   [TOTEM ] totemsrp.c:message_handler_memb_commit_token:4484 got commit token
Nov 13 13:44:18 [450] vm3 corosync debug   [TOTEM ] totemsrp.c:message_handler_memb_commit_token:4537 Sending initial ORF token
Nov 13 13:44:18 [450] vm3 corosync debug   [TOTEM ] totemsrp.c:message_handler_memb_commit_token:4484 got commit token
Nov 13 13:44:18 [450] vm3 corosync debug   [TOTEM ] totemsrp.c:message_handler_memb_commit_token:4537 Sending initial ORF token
Nov 13 13:44:18 [450] vm3 corosync debug   [TOTEM ] totemsrp.c:message_handler_memb_commit_token:4484 got commit token
Nov 13 13:44:18 [450] vm3 corosync debug   [TOTEM ] totemsrp.c:message_handler_memb_commit_token:4537 Sending initial ORF token
Nov 13 13:44:18 [450] vm3 corosync debug   [TOTEM ] totemsrp.c:message_handler_memb_commit_token:4484 got commit token
Nov 13 13:44:18 [450] vm3 corosync debug   [TOTEM ] totemsrp.c:message_handler_memb_commit_token:4537 Sending initial ORF token
Nov 13 13:44:18 [450] vm3 corosync debug   [TOTEM ] totemsrp.c:message_handler_memb_commit_token:4484 got commit token
Nov 13 13:44:18 [450] vm3 corosync debug   [TOTEM ] totemsrp.c:message_handler_memb_commit_token:4537 Sending initial ORF token
Nov 13 13:44:18 [450] vm3 corosync debug   [TOTEM ] totemsrp.c:message_handler_orf_token:3799 token retrans flag is 0 my set retrans flag0 retrans queue empty 1 count 0, aru 0
Nov 13 13:44:18 [450] vm3 corosync debug   [TOTEM ] totemsrp.c:message_handler_orf_token:3810 install seq 0 aru 0 high seq received 0
Nov 13 13:44:18 [450] vm3 corosync debug   [TOTEM ] totemsrp.c:message_handler_orf_token:3799 token retrans flag is 0 my set retrans flag0 retrans queue empty 1 count 1, aru 0
Nov 13 13:44:18 [450] vm3 corosync debug   [TOTEM ] totemsrp.c:message_handler_orf_token:3810 install seq 0 aru 0 high seq received 0
Nov 13 13:44:18 [450] vm3 corosync debug   [TOTEM ] totemsrp.c:message_handler_orf_token:3799 token retrans flag is 0 my set retrans flag0 retrans queue empty 1 count 2, aru 0
Nov 13 13:44:18 [450] vm3 corosync debug   [TOTEM ] totemsrp.c:message_handler_orf_token:3810 install seq 0 aru 0 high seq received 0
Nov 13 13:44:18 [450] vm3 corosync debug   [TOTEM ] totemsrp.c:message_handler_orf_token:3799 token retrans flag is 0 my set retrans flag0 retrans queue empty 1 count 3, aru 0
Nov 13 13:44:18 [450] vm3 corosync debug   [TOTEM ] totemsrp.c:message_handler_orf_token:3810 install seq 0 aru 0 high seq received 0
Nov 13 13:44:18 [450] vm3 corosync debug   [TOTEM ] totemsrp.c:message_handler_orf_token:3829 retrans flag count 4 token aru 0 install seq 0 aru 0 0
Nov 13 13:44:18 [450] vm3 corosync debug   [TOTEM ] totemsrp.c:old_ring_state_reset:1566 Resetting old ring state
Nov 13 13:44:18 [450] vm3 corosync debug   [TOTEM ] totemsrp.c:deliver_messages_from_recovery_to_regular:1772 recovery to regular 1-0
Nov 13 13:44:18 [450] vm3 corosync debug   [TOTEM ] totempg.c:totempg_waiting_trans_ack_cb:285 waiting_trans_ack changed to 1
Nov 13 13:44:18 [450] vm3 corosync debug   [MAIN  ] main.c:member_object_joined:333 Member joined: r(0) ip(192.168.101.143) r(1) ip(192.168.102.143) 
Nov 13 13:44:18 [450] vm3 corosync debug   [TOTEM ] totemsrp.c:memb_state_operational_enter:2010 entering OPERATIONAL state.
Nov 13 13:44:18 [450] vm3 corosync notice  [TOTEM ] totemsrp.c:memb_state_operational_enter:2016 A new membership (192.168.101.143:4) was formed. Members joined: -1062705777
Nov 13 13:44:18 [450] vm3 corosync debug   [VOTEQ ] votequorum.c:message_handler_req_exec_votequorum_nodeinfo:1604 got nodeinfo message from cluster node 3232261519
Nov 13 13:44:18 [450] vm3 corosync debug   [VOTEQ ] votequorum.c:message_handler_req_exec_votequorum_nodeinfo:1609 nodeinfo message[3232261519]: votes: 1, expected: 3 flags: 8
Nov 13 13:44:18 [450] vm3 corosync debug   [VOTEQ ] votequorum.c:decode_flags:587 flags: quorate: No Leaving: No WFA Status: No First: Yes Qdevice: No QdeviceAlive: No QdeviceCastVote: No QdeviceMasterWins: No
Nov 13 13:44:18 [450] vm3 corosync debug   [VOTEQ ] votequorum.c:recalculate_quorum:851 total_votes=1, expected_votes=3
Nov 13 13:44:18 [450] vm3 corosync debug   [VOTEQ ] votequorum.c:calculate_quorum:670 node 3232261519 state=1, votes=1, expected=3
Nov 13 13:44:18 [450] vm3 corosync debug   [SYNC  ] sync.c:sync_barrier_handler:232 Committing synchronization for corosync configuration map access
Nov 13 13:44:18 [450] vm3 corosync debug   [CMAP  ] cmap.c:cmap_sync_activate:386 Single node sync -> no action
Nov 13 13:44:18 [450] vm3 corosync debug   [CPG   ] cpg.c:downlist_log:776 comparing: sender r(0) ip(192.168.101.143) r(1) ip(192.168.102.143) ; members(old:0 left:0)
Nov 13 13:44:18 [450] vm3 corosync debug   [CPG   ] cpg.c:downlist_log:776 chosen downlist: sender r(0) ip(192.168.101.143) r(1) ip(192.168.102.143) ; members(old:0 left:0)
Nov 13 13:44:18 [450] vm3 corosync debug   [SYNC  ] sync.c:sync_barrier_handler:232 Committing synchronization for corosync cluster closed process group service v1.01
Nov 13 13:44:18 [450] vm3 corosync debug   [VOTEQ ] votequorum.c:decode_flags:587 flags: quorate: No Leaving: No WFA Status: No First: Yes Qdevice: No QdeviceAlive: No QdeviceCastVote: No QdeviceMasterWins: No
Nov 13 13:44:18 [450] vm3 corosync debug   [VOTEQ ] votequorum.c:message_handler_req_exec_votequorum_nodeinfo:1604 got nodeinfo message from cluster node 3232261519
Nov 13 13:44:18 [450] vm3 corosync debug   [VOTEQ ] votequorum.c:message_handler_req_exec_votequorum_nodeinfo:1609 nodeinfo message[3232261519]: votes: 1, expected: 3 flags: 8
Nov 13 13:44:18 [450] vm3 corosync debug   [VOTEQ ] votequorum.c:decode_flags:587 flags: quorate: No Leaving: No WFA Status: No First: Yes Qdevice: No QdeviceAlive: No QdeviceCastVote: No QdeviceMasterWins: No
Nov 13 13:44:18 [450] vm3 corosync debug   [VOTEQ ] votequorum.c:recalculate_quorum:851 total_votes=1, expected_votes=3
Nov 13 13:44:18 [450] vm3 corosync debug   [VOTEQ ] votequorum.c:calculate_quorum:670 node 3232261519 state=1, votes=1, expected=3
Nov 13 13:44:18 [450] vm3 corosync debug   [VOTEQ ] votequorum.c:message_handler_req_exec_votequorum_nodeinfo:1604 got nodeinfo message from cluster node 3232261519
Nov 13 13:44:18 [450] vm3 corosync debug   [VOTEQ ] votequorum.c:message_handler_req_exec_votequorum_nodeinfo:1609 nodeinfo message[0]: votes: 0, expected: 0 flags: 0
Nov 13 13:44:18 [450] vm3 corosync debug   [SYNC  ] sync.c:sync_barrier_handler:232 Committing synchronization for corosync vote quorum service v1.0
Nov 13 13:44:18 [450] vm3 corosync debug   [VOTEQ ] votequorum.c:recalculate_quorum:851 total_votes=1, expected_votes=3
Nov 13 13:44:18 [450] vm3 corosync debug   [VOTEQ ] votequorum.c:calculate_quorum:670 node 3232261519 state=1, votes=1, expected=3
Nov 13 13:44:18 [450] vm3 corosync notice  [QUORUM] vsf_quorum.c:log_view_list:132 Members[1]: -1062705777
Nov 13 13:44:18 [450] vm3 corosync debug   [QUORUM] vsf_quorum.c:send_library_notification:359 sending quorum notification to (nil), length = 52
Nov 13 13:44:18 [450] vm3 corosync notice  [MAIN  ] main.c:corosync_sync_completed:276 Completed service synchronization, ready to provide service.
Nov 13 13:44:18 [450] vm3 corosync debug   [TOTEM ] totempg.c:totempg_waiting_trans_ack_cb:285 waiting_trans_ack changed to 0
Nov 13 13:44:18 [450] vm3 corosync debug   [QB    ] ipc_setup.c:handle_new_connection:484 IPC credentials authenticated (451-456-25)
Nov 13 13:44:18 [450] vm3 corosync debug   [QB    ] ipc_shm.c:qb_ipcs_shm_connect:295 connecting to client [456]
Nov 13 13:44:18 [450] vm3 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:18 [450] vm3 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:18 [450] vm3 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:18 [450] vm3 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_created:272 connection created
Nov 13 13:44:18 [450] vm3 corosync debug   [QB    ] ipcs.c:qb_ipcs_dispatch_connection_request:757 HUP conn (451-456-25)
Nov 13 13:44:18 [450] vm3 corosync debug   [QB    ] ipcs.c:qb_ipcs_disconnect:605 qb_ipcs_disconnect(451-456-25) state:2
Nov 13 13:44:18 [450] vm3 corosync debug   [QB    ] loop_poll_epoll.c:_del:117 epoll_ctl(del): Bad file descriptor (9)
Nov 13 13:44:18 [450] vm3 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_closed:417 cs_ipcs_connection_closed() 
Nov 13 13:44:18 [450] vm3 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_destroyed:390 cs_ipcs_connection_destroyed() 
Nov 13 13:44:18 [450] vm3 corosync debug   [QB    ] ringbuffer.c:qb_rb_close:299 Free'ing ringbuffer: /dev/shm/qb-cfg-response-451-456-25-header
Nov 13 13:44:18 [450] vm3 corosync debug   [QB    ] ringbuffer.c:qb_rb_close:299 Free'ing ringbuffer: /dev/shm/qb-cfg-event-451-456-25-header
Nov 13 13:44:18 [450] vm3 corosync debug   [QB    ] ringbuffer.c:qb_rb_close:299 Free'ing ringbuffer: /dev/shm/qb-cfg-request-451-456-25-header
Nov 13 13:44:18 [450] vm3 corosync debug   [TOTEM ] totemsrp.c:memb_state_gather_enter:2087 entering GATHER state from 9(merge during operational state).
Nov 13 13:44:18 [450] vm3 corosync debug   [TOTEM ] totemsrp.c:memb_state_gather_enter:2087 entering GATHER state from 11(merge during join).
Nov 13 13:44:18 [450] vm3 corosync debug   [TOTEM ] totemsrp.c:message_handler_memb_commit_token:4484 got commit token
Nov 13 13:44:18 [450] vm3 corosync debug   [TOTEM ] totemsrp.c:old_ring_state_save:1550 Saving state aru 6 high seq received 6
Nov 13 13:44:18 [450] vm3 corosync debug   [TOTEM ] totemsrp.c:memb_ring_id_set_and_store:3383 Storing new sequence id for ring c
Nov 13 13:44:18 [450] vm3 corosync debug   [TOTEM ] totemsrp.c:memb_state_commit_enter:2135 entering COMMIT state.
Nov 13 13:44:18 [450] vm3 corosync debug   [TOTEM ] totemsrp.c:message_handler_memb_commit_token:4484 got commit token
Nov 13 13:44:18 [450] vm3 corosync debug   [TOTEM ] totemsrp.c:message_handler_memb_commit_token:4484 got commit token
Nov 13 13:44:18 [450] vm3 corosync debug   [TOTEM ] totemsrp.c:message_handler_memb_commit_token:4484 got commit token
Nov 13 13:44:18 [450] vm3 corosync debug   [TOTEM ] totemsrp.c:memb_state_recovery_enter:2172 entering RECOVERY state.
Nov 13 13:44:18 [450] vm3 corosync debug   [TOTEM ] totemsrp.c:memb_state_recovery_enter:2214 TRANS [0] member 192.168.101.143:
Nov 13 13:44:18 [450] vm3 corosync debug   [TOTEM ] totemsrp.c:memb_state_recovery_enter:2218 position [0] member 192.168.101.141:
Nov 13 13:44:18 [450] vm3 corosync debug   [TOTEM ] totemsrp.c:memb_state_recovery_enter:2222 previous ring seq 8 rep 192.168.101.141
Nov 13 13:44:18 [450] vm3 corosync debug   [TOTEM ] totemsrp.c:memb_state_recovery_enter:2228 aru a high delivered a received flag 1
Nov 13 13:44:18 [450] vm3 corosync debug   [TOTEM ] totemsrp.c:memb_state_recovery_enter:2218 position [1] member 192.168.101.142:
Nov 13 13:44:18 [450] vm3 corosync debug   [TOTEM ] totemsrp.c:memb_state_recovery_enter:2222 previous ring seq 8 rep 192.168.101.141
Nov 13 13:44:18 [450] vm3 corosync debug   [TOTEM ] totemsrp.c:memb_state_recovery_enter:2228 aru a high delivered a received flag 1
Nov 13 13:44:18 [450] vm3 corosync debug   [TOTEM ] totemsrp.c:memb_state_recovery_enter:2218 position [2] member 192.168.101.143:
Nov 13 13:44:18 [450] vm3 corosync debug   [TOTEM ] totemsrp.c:memb_state_recovery_enter:2222 previous ring seq 4 rep 192.168.101.143
Nov 13 13:44:18 [450] vm3 corosync debug   [TOTEM ] totemsrp.c:memb_state_recovery_enter:2228 aru 6 high delivered 6 received flag 1
Nov 13 13:44:18 [450] vm3 corosync debug   [TOTEM ] totemsrp.c:memb_state_recovery_enter:2326 Did not need to originate any messages in recovery.
Nov 13 13:44:18 [450] vm3 corosync debug   [TOTEM ] totemsrp.c:message_handler_memb_commit_token:4484 got commit token
Nov 13 13:44:18 [450] vm3 corosync debug   [TOTEM ] totemsrp.c:message_handler_memb_commit_token:4484 got commit token
Nov 13 13:44:18 [450] vm3 corosync debug   [TOTEM ] totemsrp.c:message_handler_orf_token:3799 token retrans flag is 0 my set retrans flag0 retrans queue empty 1 count 0, aru ffffffff
Nov 13 13:44:18 [450] vm3 corosync debug   [TOTEM ] totemsrp.c:message_handler_orf_token:3810 install seq 0 aru 0 high seq received 0
Nov 13 13:44:18 [450] vm3 corosync debug   [TOTEM ] totemsrp.c:message_handler_orf_token:3799 token retrans flag is 0 my set retrans flag0 retrans queue empty 1 count 1, aru 0
Nov 13 13:44:18 [450] vm3 corosync debug   [TOTEM ] totemsrp.c:message_handler_orf_token:3810 install seq 0 aru 0 high seq received 0
Nov 13 13:44:18 [450] vm3 corosync debug   [TOTEM ] totemsrp.c:message_handler_orf_token:3799 token retrans flag is 0 my set retrans flag0 retrans queue empty 1 count 2, aru 0
Nov 13 13:44:18 [450] vm3 corosync debug   [TOTEM ] totemsrp.c:message_handler_orf_token:3810 install seq 0 aru 0 high seq received 0
Nov 13 13:44:18 [450] vm3 corosync debug   [TOTEM ] totemsrp.c:message_handler_orf_token:3799 token retrans flag is 0 my set retrans flag0 retrans queue empty 1 count 3, aru 0
Nov 13 13:44:18 [450] vm3 corosync debug   [TOTEM ] totemsrp.c:message_handler_orf_token:3810 install seq 0 aru 0 high seq received 0
Nov 13 13:44:18 [450] vm3 corosync debug   [TOTEM ] totemsrp.c:message_handler_orf_token:3829 retrans flag count 4 token aru 0 install seq 0 aru 0 0
Nov 13 13:44:18 [450] vm3 corosync debug   [TOTEM ] totemsrp.c:old_ring_state_reset:1566 Resetting old ring state
Nov 13 13:44:18 [450] vm3 corosync debug   [TOTEM ] totemsrp.c:deliver_messages_from_recovery_to_regular:1772 recovery to regular 1-0
Nov 13 13:44:18 [450] vm3 corosync debug   [TOTEM ] totempg.c:totempg_waiting_trans_ack_cb:285 waiting_trans_ack changed to 1
Nov 13 13:44:18 [450] vm3 corosync debug   [MAIN  ] main.c:member_object_joined:333 Member joined: r(0) ip(192.168.101.141) r(1) ip(192.168.102.141) 
Nov 13 13:44:18 [450] vm3 corosync debug   [MAIN  ] main.c:member_object_joined:333 Member joined: r(0) ip(192.168.101.142) r(1) ip(192.168.102.142) 
Nov 13 13:44:18 [450] vm3 corosync debug   [TOTEM ] totemsrp.c:memb_state_operational_enter:2010 entering OPERATIONAL state.
Nov 13 13:44:18 [450] vm3 corosync notice  [TOTEM ] totemsrp.c:memb_state_operational_enter:2016 A new membership (192.168.101.141:12) was formed. Members joined: -1062705779 -1062705778
Nov 13 13:44:18 [450] vm3 corosync debug   [SYNC  ] sync.c:sync_barrier_handler:232 Committing synchronization for corosync configuration map access
Nov 13 13:44:18 [450] vm3 corosync debug   [CPG   ] cpg.c:downlist_log:776 comparing: sender r(0) ip(192.168.101.142) r(1) ip(192.168.102.142) ; members(old:2 left:0)
Nov 13 13:44:18 [450] vm3 corosync debug   [CPG   ] cpg.c:downlist_log:776 comparing: sender r(0) ip(192.168.101.141) r(1) ip(192.168.102.141) ; members(old:2 left:0)
Nov 13 13:44:18 [450] vm3 corosync debug   [CPG   ] cpg.c:downlist_log:776 comparing: sender r(0) ip(192.168.101.143) r(1) ip(192.168.102.143) ; members(old:1 left:0)
Nov 13 13:44:18 [450] vm3 corosync debug   [CPG   ] cpg.c:downlist_log:776 chosen downlist: sender r(0) ip(192.168.101.141) r(1) ip(192.168.102.141) ; members(old:2 left:0)
Nov 13 13:44:18 [450] vm3 corosync debug   [SYNC  ] sync.c:sync_barrier_handler:232 Committing synchronization for corosync cluster closed process group service v1.01
Nov 13 13:44:18 [450] vm3 corosync debug   [VOTEQ ] votequorum.c:decode_flags:587 flags: quorate: No Leaving: No WFA Status: No First: No Qdevice: No QdeviceAlive: No QdeviceCastVote: No QdeviceMasterWins: No
Nov 13 13:44:18 [450] vm3 corosync debug   [VOTEQ ] votequorum.c:message_handler_req_exec_votequorum_nodeinfo:1604 got nodeinfo message from cluster node 3232261518
Nov 13 13:44:18 [450] vm3 corosync debug   [VOTEQ ] votequorum.c:message_handler_req_exec_votequorum_nodeinfo:1609 nodeinfo message[3232261518]: votes: 1, expected: 3 flags: 1
Nov 13 13:44:18 [450] vm3 corosync debug   [VOTEQ ] votequorum.c:decode_flags:587 flags: quorate: Yes Leaving: No WFA Status: No First: No Qdevice: No QdeviceAlive: No QdeviceCastVote: No QdeviceMasterWins: No
Nov 13 13:44:18 [450] vm3 corosync debug   [VOTEQ ] votequorum.c:recalculate_quorum:851 total_votes=2, expected_votes=3
Nov 13 13:44:18 [450] vm3 corosync debug   [VOTEQ ] votequorum.c:calculate_quorum:670 node 3232261518 state=1, votes=1, expected=3
Nov 13 13:44:18 [450] vm3 corosync debug   [VOTEQ ] votequorum.c:calculate_quorum:670 node 3232261519 state=1, votes=1, expected=3
Nov 13 13:44:18 [450] vm3 corosync debug   [VOTEQ ] votequorum.c:get_lowest_node_id:527 lowest node id: -1062705778 us: -1062705777
Nov 13 13:44:18 [450] vm3 corosync debug   [VOTEQ ] votequorum.c:are_we_quorate:777 quorum regained, resuming activity
Nov 13 13:44:18 [450] vm3 corosync debug   [VOTEQ ] votequorum.c:message_handler_req_exec_votequorum_nodeinfo:1604 got nodeinfo message from cluster node 3232261518
Nov 13 13:44:18 [450] vm3 corosync debug   [VOTEQ ] votequorum.c:message_handler_req_exec_votequorum_nodeinfo:1609 nodeinfo message[0]: votes: 0, expected: 0 flags: 0
Nov 13 13:44:18 [450] vm3 corosync debug   [VOTEQ ] votequorum.c:message_handler_req_exec_votequorum_nodeinfo:1604 got nodeinfo message from cluster node 3232261519
Nov 13 13:44:18 [450] vm3 corosync debug   [VOTEQ ] votequorum.c:message_handler_req_exec_votequorum_nodeinfo:1609 nodeinfo message[3232261519]: votes: 1, expected: 3 flags: 0
Nov 13 13:44:18 [450] vm3 corosync debug   [VOTEQ ] votequorum.c:decode_flags:587 flags: quorate: No Leaving: No WFA Status: No First: No Qdevice: No QdeviceAlive: No QdeviceCastVote: No QdeviceMasterWins: No
Nov 13 13:44:18 [450] vm3 corosync debug   [VOTEQ ] votequorum.c:recalculate_quorum:851 total_votes=2, expected_votes=3
Nov 13 13:44:18 [450] vm3 corosync debug   [VOTEQ ] votequorum.c:calculate_quorum:670 node 3232261518 state=1, votes=1, expected=3
Nov 13 13:44:18 [450] vm3 corosync debug   [VOTEQ ] votequorum.c:calculate_quorum:670 node 3232261519 state=1, votes=1, expected=3
Nov 13 13:44:18 [450] vm3 corosync debug   [VOTEQ ] votequorum.c:get_lowest_node_id:527 lowest node id: -1062705778 us: -1062705777
Nov 13 13:44:18 [450] vm3 corosync debug   [VOTEQ ] votequorum.c:message_handler_req_exec_votequorum_nodeinfo:1604 got nodeinfo message from cluster node 3232261519
Nov 13 13:44:18 [450] vm3 corosync debug   [VOTEQ ] votequorum.c:message_handler_req_exec_votequorum_nodeinfo:1609 nodeinfo message[0]: votes: 0, expected: 0 flags: 0
Nov 13 13:44:18 [450] vm3 corosync debug   [VOTEQ ] votequorum.c:message_handler_req_exec_votequorum_nodeinfo:1604 got nodeinfo message from cluster node 3232261517
Nov 13 13:44:18 [450] vm3 corosync debug   [VOTEQ ] votequorum.c:message_handler_req_exec_votequorum_nodeinfo:1609 nodeinfo message[3232261517]: votes: 1, expected: 3 flags: 1
Nov 13 13:44:18 [450] vm3 corosync debug   [VOTEQ ] votequorum.c:decode_flags:587 flags: quorate: Yes Leaving: No WFA Status: No First: No Qdevice: No QdeviceAlive: No QdeviceCastVote: No QdeviceMasterWins: No
Nov 13 13:44:18 [450] vm3 corosync debug   [VOTEQ ] votequorum.c:recalculate_quorum:851 total_votes=3, expected_votes=3
Nov 13 13:44:18 [450] vm3 corosync debug   [VOTEQ ] votequorum.c:calculate_quorum:670 node 3232261517 state=1, votes=1, expected=3
Nov 13 13:44:18 [450] vm3 corosync debug   [VOTEQ ] votequorum.c:calculate_quorum:670 node 3232261518 state=1, votes=1, expected=3
Nov 13 13:44:18 [450] vm3 corosync debug   [VOTEQ ] votequorum.c:calculate_quorum:670 node 3232261519 state=1, votes=1, expected=3
Nov 13 13:44:18 [450] vm3 corosync debug   [VOTEQ ] votequorum.c:get_lowest_node_id:527 lowest node id: -1062705779 us: -1062705777
Nov 13 13:44:18 [450] vm3 corosync debug   [VOTEQ ] votequorum.c:message_handler_req_exec_votequorum_nodeinfo:1604 got nodeinfo message from cluster node 3232261517
Nov 13 13:44:18 [450] vm3 corosync debug   [VOTEQ ] votequorum.c:message_handler_req_exec_votequorum_nodeinfo:1609 nodeinfo message[0]: votes: 0, expected: 0 flags: 0
Nov 13 13:44:18 [450] vm3 corosync debug   [SYNC  ] sync.c:sync_barrier_handler:232 Committing synchronization for corosync vote quorum service v1.0
Nov 13 13:44:18 [450] vm3 corosync debug   [VOTEQ ] votequorum.c:recalculate_quorum:851 total_votes=3, expected_votes=3
Nov 13 13:44:18 [450] vm3 corosync debug   [VOTEQ ] votequorum.c:calculate_quorum:670 node 3232261517 state=1, votes=1, expected=3
Nov 13 13:44:18 [450] vm3 corosync debug   [VOTEQ ] votequorum.c:calculate_quorum:670 node 3232261518 state=1, votes=1, expected=3
Nov 13 13:44:18 [450] vm3 corosync debug   [VOTEQ ] votequorum.c:calculate_quorum:670 node 3232261519 state=1, votes=1, expected=3
Nov 13 13:44:18 [450] vm3 corosync debug   [VOTEQ ] votequorum.c:get_lowest_node_id:527 lowest node id: -1062705779 us: -1062705777
Nov 13 13:44:18 [450] vm3 corosync notice  [QUORUM] vsf_quorum.c:quorum_api_set_quorum:148 This node is within the primary component and will provide service.
Nov 13 13:44:18 [450] vm3 corosync notice  [QUORUM] vsf_quorum.c:log_view_list:132 Members[3]: -1062705779 -1062705778 -1062705777
Nov 13 13:44:18 [450] vm3 corosync debug   [QUORUM] vsf_quorum.c:send_library_notification:359 sending quorum notification to (nil), length = 60
Nov 13 13:44:18 [450] vm3 corosync notice  [MAIN  ] main.c:corosync_sync_completed:276 Completed service synchronization, ready to provide service.
Nov 13 13:44:18 [450] vm3 corosync debug   [TOTEM ] totempg.c:totempg_waiting_trans_ack_cb:285 waiting_trans_ack changed to 0
Nov 13 13:44:20 [450] vm3 corosync debug   [CPG   ] cpg.c:message_handler_req_exec_cpg_procjoin:1260 got procjoin message from cluster node -1062705779 (r(0) ip(192.168.101.141) r(1) ip(192.168.102.141) ) for pid 15874
Nov 13 13:44:20 [450] vm3 corosync debug   [CPG   ] cpg.c:message_handler_req_exec_cpg_procjoin:1260 got procjoin message from cluster node -1062705778 (r(0) ip(192.168.101.142) r(1) ip(192.168.102.142) ) for pid 31455
Nov 13 13:44:20 [460] vm3 pacemakerd: (   logging.c:761   )    info: crm_log_init: 	Changed active directory to /var/lib/heartbeat/cores/root
Nov 13 13:44:20 [460] vm3 pacemakerd: ( pacemaker.c:896   )   debug: main: 	Checking for old instances of pacemakerd
Nov 13 13:44:20 [460] vm3 pacemakerd: (       ipc.c:781   )    info: crm_ipc_connect: 	Could not establish pacemakerd connection: Connection refused (111)
Nov 13 13:44:20 [450] vm3 corosync debug   [QB    ] ipc_setup.c:handle_new_connection:484 IPC credentials authenticated (451-460-25)
Nov 13 13:44:20 [450] vm3 corosync debug   [QB    ] ipc_shm.c:qb_ipcs_shm_connect:295 connecting to client [460]
Nov 13 13:44:20 [450] vm3 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:20 [450] vm3 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:20 [450] vm3 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:20 [460] vm3 pacemakerd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:20 [460] vm3 pacemakerd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:20 [460] vm3 pacemakerd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:20 [460] vm3 pacemakerd: (   cluster.c:526   )   debug: get_cluster_type: 	Testing with Corosync
Nov 13 13:44:20 [450] vm3 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_created:272 connection created
Nov 13 13:44:20 [450] vm3 corosync debug   [CMAP  ] cmap.c:cmap_lib_init_fn:306 lib_init_fn: conn=0x7ffdfcc75950
Nov 13 13:44:20 [450] vm3 corosync debug   [QB    ] ipc_setup.c:handle_new_connection:484 IPC credentials authenticated (451-460-26)
Nov 13 13:44:20 [450] vm3 corosync debug   [QB    ] ipc_shm.c:qb_ipcs_shm_connect:295 connecting to client [460]
Nov 13 13:44:20 [450] vm3 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:20 [450] vm3 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:20 [450] vm3 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:20 [460] vm3 pacemakerd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:20 [460] vm3 pacemakerd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:20 [460] vm3 pacemakerd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:20 [460] vm3 pacemakerd: (      ipcc.c:378   )   debug: qb_ipcc_disconnect: 	qb_ipcc_disconnect()
Nov 13 13:44:20 [460] vm3 pacemakerd: (ringbuffer.c:302   )   debug: qb_rb_close: 	Closing ringbuffer: /dev/shm/qb-cmap-request-451-460-26-header
Nov 13 13:44:20 [460] vm3 pacemakerd: (ringbuffer.c:302   )   debug: qb_rb_close: 	Closing ringbuffer: /dev/shm/qb-cmap-response-451-460-26-header
Nov 13 13:44:20 [460] vm3 pacemakerd: (ringbuffer.c:302   )   debug: qb_rb_close: 	Closing ringbuffer: /dev/shm/qb-cmap-event-451-460-26-header
Nov 13 13:44:20 [460] vm3 pacemakerd: (   cluster.c:573   )    info: get_cluster_type: 	Detected an active 'corosync' cluster
Nov 13 13:44:20 [460] vm3 pacemakerd: (  corosync.c:326   )    info: mcp_read_config: 	Reading configure for stack: corosync
Nov 13 13:44:20 [450] vm3 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_created:272 connection created
Nov 13 13:44:20 [450] vm3 corosync debug   [CMAP  ] cmap.c:cmap_lib_init_fn:306 lib_init_fn: conn=0x7ffdfcc772d0
Nov 13 13:44:20 [450] vm3 corosync debug   [QB    ] ipcs.c:qb_ipcs_dispatch_connection_request:757 HUP conn (451-460-26)
Nov 13 13:44:20 [450] vm3 corosync debug   [QB    ] ipcs.c:qb_ipcs_disconnect:605 qb_ipcs_disconnect(451-460-26) state:2
Nov 13 13:44:20 [450] vm3 corosync debug   [QB    ] loop_poll_epoll.c:_del:117 epoll_ctl(del): Bad file descriptor (9)
Nov 13 13:44:20 [450] vm3 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_closed:417 cs_ipcs_connection_closed() 
Nov 13 13:44:20 [450] vm3 corosync debug   [CMAP  ] cmap.c:cmap_lib_exit_fn:325 exit_fn for conn=0x7ffdfcc772d0
Nov 13 13:44:20 [450] vm3 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_destroyed:390 cs_ipcs_connection_destroyed() 
Nov 13 13:44:20 [450] vm3 corosync debug   [QB    ] ringbuffer.c:qb_rb_close:299 Free'ing ringbuffer: /dev/shm/qb-cmap-response-451-460-26-header
Nov 13 13:44:20 [450] vm3 corosync debug   [QB    ] ringbuffer.c:qb_rb_close:299 Free'ing ringbuffer: /dev/shm/qb-cmap-event-451-460-26-header
Nov 13 13:44:20 [450] vm3 corosync debug   [QB    ] ringbuffer.c:qb_rb_close:299 Free'ing ringbuffer: /dev/shm/qb-cmap-request-451-460-26-header
Nov 13 13:44:20 [460] vm3 pacemakerd: (  corosync.c:426   )  notice: mcp_read_config: 	Configured corosync to accept connections from group 492: OK (1)
Nov 13 13:44:20 [460] vm3 pacemakerd: (      ipcc.c:378   )   debug: qb_ipcc_disconnect: 	qb_ipcc_disconnect()
Nov 13 13:44:20 [460] vm3 pacemakerd: (ringbuffer.c:302   )   debug: qb_rb_close: 	Closing ringbuffer: /dev/shm/qb-cmap-request-451-460-25-header
Nov 13 13:44:20 [460] vm3 pacemakerd: (ringbuffer.c:302   )   debug: qb_rb_close: 	Closing ringbuffer: /dev/shm/qb-cmap-response-451-460-25-header
Nov 13 13:44:20 [460] vm3 pacemakerd: (ringbuffer.c:302   )   debug: qb_rb_close: 	Closing ringbuffer: /dev/shm/qb-cmap-event-451-460-25-header
Nov 13 13:44:20 [460] vm3 pacemakerd: (   logging.c:314   )  notice: crm_add_logfile: 	Additional logging available in /var/log/ha-debug
Nov 13 13:44:20 [460] vm3 pacemakerd: ( pacemaker.c:931   )  notice: main: 	Starting Pacemaker 1.1.10 (Build: 2383f6c):  ncurses libqb-logging libqb-ipc lha-fencing nagios  corosync-native snmp
Nov 13 13:44:20 [460] vm3 pacemakerd: ( pacemaker.c:941   )    info: main: 	Maximum core file size is: 18446744073709551615
Nov 13 13:44:20 [460] vm3 pacemakerd: ( ipc_setup.c:377   )    info: qb_ipcs_us_publish: 	server name: pacemakerd
Nov 13 13:44:20 [450] vm3 corosync debug   [QB    ] ipcs.c:qb_ipcs_dispatch_connection_request:757 HUP conn (451-460-25)
Nov 13 13:44:20 [450] vm3 corosync debug   [QB    ] ipcs.c:qb_ipcs_disconnect:605 qb_ipcs_disconnect(451-460-25) state:2
Nov 13 13:44:20 [450] vm3 corosync debug   [QB    ] loop_poll_epoll.c:_del:117 epoll_ctl(del): Bad file descriptor (9)
Nov 13 13:44:20 [450] vm3 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_closed:417 cs_ipcs_connection_closed() 
Nov 13 13:44:20 [450] vm3 corosync debug   [CMAP  ] cmap.c:cmap_lib_exit_fn:325 exit_fn for conn=0x7ffdfcc75950
Nov 13 13:44:20 [450] vm3 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_destroyed:390 cs_ipcs_connection_destroyed() 
Nov 13 13:44:20 [450] vm3 corosync debug   [QB    ] ringbuffer.c:qb_rb_close:299 Free'ing ringbuffer: /dev/shm/qb-cmap-response-451-460-25-header
Nov 13 13:44:20 [450] vm3 corosync debug   [QB    ] ringbuffer.c:qb_rb_close:299 Free'ing ringbuffer: /dev/shm/qb-cmap-event-451-460-25-header
Nov 13 13:44:20 [450] vm3 corosync debug   [QB    ] ringbuffer.c:qb_rb_close:299 Free'ing ringbuffer: /dev/shm/qb-cmap-request-451-460-25-header
Nov 13 13:44:20 [450] vm3 corosync debug   [QB    ] ipc_setup.c:handle_new_connection:484 IPC credentials authenticated (451-460-25)
Nov 13 13:44:20 [450] vm3 corosync debug   [QB    ] ipc_shm.c:qb_ipcs_shm_connect:295 connecting to client [460]
Nov 13 13:44:20 [450] vm3 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:20 [450] vm3 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:20 [450] vm3 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:20 [460] vm3 pacemakerd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:20 [460] vm3 pacemakerd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:20 [460] vm3 pacemakerd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:20 [450] vm3 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_created:272 connection created
Nov 13 13:44:20 [460] vm3 pacemakerd: (  corosync.c:142   )   debug: cluster_connect_cfg: 	Our nodeid: -1062705777
Nov 13 13:44:20 [450] vm3 corosync debug   [QB    ] ipc_setup.c:handle_new_connection:484 IPC credentials authenticated (451-460-26)
Nov 13 13:44:20 [450] vm3 corosync debug   [QB    ] ipc_shm.c:qb_ipcs_shm_connect:295 connecting to client [460]
Nov 13 13:44:20 [450] vm3 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:20 [450] vm3 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:20 [450] vm3 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:20 [460] vm3 pacemakerd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:20 [460] vm3 pacemakerd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:20 [460] vm3 pacemakerd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:20 [450] vm3 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_created:272 connection created
Nov 13 13:44:20 [450] vm3 corosync debug   [CPG   ] cpg.c:cpg_lib_init_fn:1459 lib_init_fn: conn=0x7ffdfcd7a430, cpd=0x7ffdfcd7c9a4
Nov 13 13:44:20 [460] vm3 pacemakerd: (       cpg.c:110   )   debug: get_local_nodeid: 	Local nodeid is 3232261519
Nov 13 13:44:20 [460] vm3 pacemakerd: (membership.c:399   )    info: crm_get_peer: 	Created entry 786452f5-9344-412f-9259-eaccbe3445f1/0x8e9030 for node (null)/3232261519 (1 total)
Nov 13 13:44:20 [460] vm3 pacemakerd: (membership.c:432   )    info: crm_get_peer: 	Node 3232261519 has uuid 3232261519
Nov 13 13:44:20 [460] vm3 pacemakerd: (membership.c:550   )    info: crm_update_peer_proc: 	cluster_connect_cpg: Node (null)[3232261519] - corosync-cpg is now online
Nov 13 13:44:20 [460] vm3 pacemakerd: (  corosync.c:255   )   debug: cluster_connect_quorum: 	Configuring Pacemaker to obtain quorum from Corosync
Nov 13 13:44:20 [450] vm3 corosync debug   [QB    ] ipc_setup.c:handle_new_connection:484 IPC credentials authenticated (451-460-27)
Nov 13 13:44:20 [450] vm3 corosync debug   [QB    ] ipc_shm.c:qb_ipcs_shm_connect:295 connecting to client [460]
Nov 13 13:44:20 [450] vm3 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:20 [450] vm3 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:20 [450] vm3 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:20 [460] vm3 pacemakerd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:20 [460] vm3 pacemakerd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:20 [460] vm3 pacemakerd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:20 [450] vm3 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_created:272 connection created
Nov 13 13:44:20 [450] vm3 corosync debug   [QUORUM] vsf_quorum.c:quorum_lib_init_fn:316 lib_init_fn: conn=0x7ffdfcc75de0
Nov 13 13:44:20 [450] vm3 corosync debug   [QUORUM] vsf_quorum.c:message_handler_req_lib_quorum_gettype:471 got quorum_type request on 0x7ffdfcc75de0
Nov 13 13:44:20 [450] vm3 corosync debug   [QUORUM] vsf_quorum.c:message_handler_req_lib_quorum_getquorate:395 got quorate request on 0x7ffdfcc75de0
Nov 13 13:44:20 [460] vm3 pacemakerd: (  corosync.c:273   )  notice: cluster_connect_quorum: 	Quorum acquired
Nov 13 13:44:20 [450] vm3 corosync debug   [QUORUM] vsf_quorum.c:message_handler_req_lib_quorum_trackstart:412 got trackstart request on 0x7ffdfcc75de0
Nov 13 13:44:20 [450] vm3 corosync debug   [QUORUM] vsf_quorum.c:message_handler_req_lib_quorum_trackstart:420 sending initial status to 0x7ffdfcc75de0
Nov 13 13:44:20 [450] vm3 corosync debug   [QUORUM] vsf_quorum.c:send_library_notification:359 sending quorum notification to 0x7ffdfcc75de0, length = 60
Nov 13 13:44:20 [450] vm3 corosync debug   [QB    ] ipc_setup.c:handle_new_connection:484 IPC credentials authenticated (451-460-28)
Nov 13 13:44:20 [450] vm3 corosync debug   [QB    ] ipc_shm.c:qb_ipcs_shm_connect:295 connecting to client [460]
Nov 13 13:44:20 [450] vm3 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:20 [450] vm3 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:20 [450] vm3 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:20 [460] vm3 pacemakerd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:20 [460] vm3 pacemakerd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:20 [460] vm3 pacemakerd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:20 [450] vm3 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_created:272 connection created
Nov 13 13:44:20 [450] vm3 corosync debug   [CMAP  ] cmap.c:cmap_lib_init_fn:306 lib_init_fn: conn=0x7ffdfd07f990
Nov 13 13:44:20 [460] vm3 pacemakerd: (      ipcc.c:378   )   debug: qb_ipcc_disconnect: 	qb_ipcc_disconnect()
Nov 13 13:44:20 [460] vm3 pacemakerd: (ringbuffer.c:302   )   debug: qb_rb_close: 	Closing ringbuffer: /dev/shm/qb-cmap-request-451-460-28-header
Nov 13 13:44:20 [460] vm3 pacemakerd: (ringbuffer.c:302   )   debug: qb_rb_close: 	Closing ringbuffer: /dev/shm/qb-cmap-response-451-460-28-header
Nov 13 13:44:20 [460] vm3 pacemakerd: (ringbuffer.c:302   )   debug: qb_rb_close: 	Closing ringbuffer: /dev/shm/qb-cmap-event-451-460-28-header
Nov 13 13:44:20 [450] vm3 corosync debug   [QB    ] ipcs.c:qb_ipcs_dispatch_connection_request:757 HUP conn (451-460-28)
Nov 13 13:44:20 [450] vm3 corosync debug   [QB    ] ipcs.c:qb_ipcs_disconnect:605 qb_ipcs_disconnect(451-460-28) state:2
Nov 13 13:44:20 [450] vm3 corosync debug   [QB    ] loop_poll_epoll.c:_del:117 epoll_ctl(del): Bad file descriptor (9)
Nov 13 13:44:20 [450] vm3 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_closed:417 cs_ipcs_connection_closed() 
Nov 13 13:44:20 [450] vm3 corosync debug   [CMAP  ] cmap.c:cmap_lib_exit_fn:325 exit_fn for conn=0x7ffdfd07f990
Nov 13 13:44:20 [450] vm3 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_destroyed:390 cs_ipcs_connection_destroyed() 
Nov 13 13:44:20 [450] vm3 corosync debug   [QB    ] ringbuffer.c:qb_rb_close:299 Free'ing ringbuffer: /dev/shm/qb-cmap-response-451-460-28-header
Nov 13 13:44:20 [450] vm3 corosync debug   [QB    ] ringbuffer.c:qb_rb_close:299 Free'ing ringbuffer: /dev/shm/qb-cmap-event-451-460-28-header
Nov 13 13:44:20 [450] vm3 corosync debug   [QB    ] ringbuffer.c:qb_rb_close:299 Free'ing ringbuffer: /dev/shm/qb-cmap-request-451-460-28-header
Nov 13 13:44:20 [450] vm3 corosync debug   [QB    ] ipc_setup.c:handle_new_connection:484 IPC credentials authenticated (451-460-28)
Nov 13 13:44:20 [450] vm3 corosync debug   [QB    ] ipc_shm.c:qb_ipcs_shm_connect:295 connecting to client [460]
Nov 13 13:44:20 [450] vm3 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:20 [450] vm3 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:20 [450] vm3 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:20 [460] vm3 pacemakerd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:20 [460] vm3 pacemakerd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:20 [460] vm3 pacemakerd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:20 [450] vm3 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_created:272 connection created
Nov 13 13:44:20 [450] vm3 corosync debug   [CMAP  ] cmap.c:cmap_lib_init_fn:306 lib_init_fn: conn=0x7ffdfd180440
Nov 13 13:44:20 [460] vm3 pacemakerd: (      ipcc.c:378   )   debug: qb_ipcc_disconnect: 	qb_ipcc_disconnect()
Nov 13 13:44:20 [460] vm3 pacemakerd: (ringbuffer.c:302   )   debug: qb_rb_close: 	Closing ringbuffer: /dev/shm/qb-cmap-request-451-460-28-header
Nov 13 13:44:20 [460] vm3 pacemakerd: (ringbuffer.c:302   )   debug: qb_rb_close: 	Closing ringbuffer: /dev/shm/qb-cmap-response-451-460-28-header
Nov 13 13:44:20 [460] vm3 pacemakerd: (ringbuffer.c:302   )   debug: qb_rb_close: 	Closing ringbuffer: /dev/shm/qb-cmap-event-451-460-28-header
Nov 13 13:44:20 [460] vm3 pacemakerd: (  corosync.c:134   )  notice: corosync_node_name: 	Unable to get node name for nodeid 3232261519
Nov 13 13:44:20 [460] vm3 pacemakerd: (   cluster.c:338   )  notice: get_node_name: 	Defaulting to uname -n for the local corosync node name
Nov 13 13:44:20 [460] vm3 pacemakerd: (membership.c:404   )    info: crm_get_peer: 	Node 3232261519 is now known as vm3
Nov 13 13:44:20 [450] vm3 corosync debug   [QB    ] ipcs.c:qb_ipcs_dispatch_connection_request:757 HUP conn (451-460-28)
Nov 13 13:44:20 [450] vm3 corosync debug   [QB    ] ipcs.c:qb_ipcs_disconnect:605 qb_ipcs_disconnect(451-460-28) state:2
Nov 13 13:44:20 [450] vm3 corosync debug   [QB    ] loop_poll_epoll.c:_del:117 epoll_ctl(del): Bad file descriptor (9)
Nov 13 13:44:20 [450] vm3 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_closed:417 cs_ipcs_connection_closed() 
Nov 13 13:44:20 [450] vm3 corosync debug   [CMAP  ] cmap.c:cmap_lib_exit_fn:325 exit_fn for conn=0x7ffdfd180440
Nov 13 13:44:20 [450] vm3 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_destroyed:390 cs_ipcs_connection_destroyed() 
Nov 13 13:44:20 [450] vm3 corosync debug   [QB    ] ringbuffer.c:qb_rb_close:299 Free'ing ringbuffer: /dev/shm/qb-cmap-response-451-460-28-header
Nov 13 13:44:20 [450] vm3 corosync debug   [QB    ] ringbuffer.c:qb_rb_close:299 Free'ing ringbuffer: /dev/shm/qb-cmap-event-451-460-28-header
Nov 13 13:44:20 [450] vm3 corosync debug   [QB    ] ringbuffer.c:qb_rb_close:299 Free'ing ringbuffer: /dev/shm/qb-cmap-request-451-460-28-header
Nov 13 13:44:20 [460] vm3 pacemakerd: ( pacemaker.c:270   )    info: start_child: 	Using uid=496 and group=492 for process cib
Nov 13 13:44:20 [460] vm3 pacemakerd: ( pacemaker.c:281   )    info: start_child: 	Forked child 464 for process cib
Nov 13 13:44:20 [460] vm3 pacemakerd: ( pacemaker.c:586   )   debug: update_node_processes: 	Node vm3 now has process list: 00000000000000000000000000000100 (was 00000000000000000000000004000000)
Nov 13 13:44:20 [460] vm3 pacemakerd: ( pacemaker.c:281   )    info: start_child: 	Forked child 465 for process stonith-ng
Nov 13 13:44:20 [460] vm3 pacemakerd: ( pacemaker.c:586   )   debug: update_node_processes: 	Node vm3 now has process list: 00000000000000000000000000100100 (was 00000000000000000000000000000100)
Nov 13 13:44:20 [460] vm3 pacemakerd: ( pacemaker.c:281   )    info: start_child: 	Forked child 466 for process lrmd
Nov 13 13:44:20 [460] vm3 pacemakerd: ( pacemaker.c:586   )   debug: update_node_processes: 	Node vm3 now has process list: 00000000000000000000000000100110 (was 00000000000000000000000000100100)
Nov 13 13:44:20 [460] vm3 pacemakerd: ( pacemaker.c:270   )    info: start_child: 	Using uid=496 and group=492 for process attrd
Nov 13 13:44:20 [460] vm3 pacemakerd: ( pacemaker.c:281   )    info: start_child: 	Forked child 467 for process attrd
Nov 13 13:44:20 [460] vm3 pacemakerd: ( pacemaker.c:586   )   debug: update_node_processes: 	Node vm3 now has process list: 00000000000000000000000000101110 (was 00000000000000000000000000100110)
Nov 13 13:44:20 [450] vm3 corosync debug   [CPG   ] cpg.c:message_handler_req_exec_cpg_procjoin:1260 got procjoin message from cluster node -1062705777 (r(0) ip(192.168.101.143) r(1) ip(192.168.102.143) ) for pid 460
Nov 13 13:44:20 [460] vm3 pacemakerd: ( pacemaker.c:270   )    info: start_child: 	Using uid=496 and group=492 for process pengine
Nov 13 13:44:20 [460] vm3 pacemakerd: ( pacemaker.c:281   )    info: start_child: 	Forked child 468 for process pengine
Nov 13 13:44:20 [460] vm3 pacemakerd: ( pacemaker.c:586   )   debug: update_node_processes: 	Node vm3 now has process list: 00000000000000000000000000111110 (was 00000000000000000000000000101110)
Nov 13 13:44:20 [460] vm3 pacemakerd: ( pacemaker.c:270   )    info: start_child: 	Using uid=496 and group=492 for process crmd
Nov 13 13:44:20 [460] vm3 pacemakerd: ( pacemaker.c:281   )    info: start_child: 	Forked child 469 for process crmd
Nov 13 13:44:20 [460] vm3 pacemakerd: ( pacemaker.c:586   )   debug: update_node_processes: 	Node vm3 now has process list: 00000000000000000000000000111310 (was 00000000000000000000000000111110)
Nov 13 13:44:20 [460] vm3 pacemakerd: ( pacemaker.c:1023  )    info: main: 	Starting mainloop
Nov 13 13:44:20 [450] vm3 corosync debug   [CPG   ] cpg.c:message_handler_req_exec_cpg_procjoin:1260 got procjoin message from cluster node -1062705779 (r(0) ip(192.168.101.141) r(1) ip(192.168.102.141) ) for pid 15881
Nov 13 13:44:20 [450] vm3 corosync debug   [CPG   ] cpg.c:message_handler_req_exec_cpg_procjoin:1260 got procjoin message from cluster node -1062705779 (r(0) ip(192.168.101.141) r(1) ip(192.168.102.141) ) for pid 15879
Nov 13 13:44:20 [450] vm3 corosync debug   [CPG   ] cpg.c:message_handler_req_exec_cpg_procjoin:1260 got procjoin message from cluster node -1062705779 (r(0) ip(192.168.101.141) r(1) ip(192.168.102.141) ) for pid 15878
Nov 13 13:44:20 [460] vm3 pacemakerd: (  corosync.c:191   )    info: pcmk_quorum_notification: 	Membership 12: quorum retained (3)
Nov 13 13:44:20 [460] vm3 pacemakerd: (  corosync.c:210   )   debug: pcmk_quorum_notification: 	Member[0] 3232261517 
Nov 13 13:44:20 [460] vm3 pacemakerd: (membership.c:399   )    info: crm_get_peer: 	Created entry 3f9e83b4-157d-444a-b6c9-aaad497b7f23/0x9eb3d0 for node (null)/3232261517 (2 total)
Nov 13 13:44:20 [460] vm3 pacemakerd: (membership.c:432   )    info: crm_get_peer: 	Node 3232261517 has uuid 3232261517
Nov 13 13:44:20 [460] vm3 pacemakerd: (  corosync.c:214   )    info: pcmk_quorum_notification: 	Obtaining name for new node 3232261517
Nov 13 13:44:20 [450] vm3 corosync debug   [QB    ] ipc_setup.c:handle_new_connection:484 IPC credentials authenticated (451-460-28)
Nov 13 13:44:20 [450] vm3 corosync debug   [QB    ] ipc_shm.c:qb_ipcs_shm_connect:295 connecting to client [460]
Nov 13 13:44:20 [450] vm3 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:20 [450] vm3 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:20 [464] vm3        cib: (   logging.c:761   )    info: crm_log_init: 	Changed active directory to /var/lib/heartbeat/cores/hacluster
Nov 13 13:44:20 [466] vm3       lrmd: (   logging.c:761   )    info: crm_log_init: 	Changed active directory to /var/lib/heartbeat/cores/root
Nov 13 13:44:20 [450] vm3 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:20 [464] vm3        cib: (      main.c:230   )  notice: main: 	Using new config location: /var/lib/pacemaker/cib
Nov 13 13:44:20 [464] vm3        cib: (   cluster.c:536   )    info: get_cluster_type: 	Verifying cluster type: 'corosync'
Nov 13 13:44:20 [464] vm3        cib: (   cluster.c:573   )    info: get_cluster_type: 	Assuming an active 'corosync' cluster
Nov 13 13:44:20 [464] vm3        cib: (        io.c:259   )    info: retrieveCib: 	Reading cluster configuration from: /var/lib/pacemaker/cib/cib.xml (digest: /var/lib/pacemaker/cib/cib.xml.sig)
Nov 13 13:44:20 [464] vm3        cib: (        io.c:262   ) warning: retrieveCib: 	Cluster configuration not found: /var/lib/pacemaker/cib/cib.xml
Nov 13 13:44:20 [464] vm3        cib: (        io.c:380   ) warning: readCibXmlFile: 	Primary configuration corrupt or unusable, trying backups in /var/lib/pacemaker/cib
Nov 13 13:44:20 [464] vm3        cib: (        io.c:412   ) warning: readCibXmlFile: 	Continuing with an empty configuration.
Nov 13 13:44:20 [464] vm3        cib: (       xml.c:2627  )    info: validate_with_relaxng: 	Creating RNG parser context
Nov 13 13:44:20 [466] vm3       lrmd: ( ipc_setup.c:377   )    info: qb_ipcs_us_publish: 	server name: lrmd
Nov 13 13:44:20 [466] vm3       lrmd: (      main.c:313   )    info: main: 	Starting
Nov 13 13:44:20 [465] vm3 stonith-ng: (   logging.c:761   )    info: crm_log_init: 	Changed active directory to /var/lib/heartbeat/cores/root
Nov 13 13:44:20 [467] vm3      attrd: (   logging.c:761   )    info: crm_log_init: 	Changed active directory to /var/lib/heartbeat/cores/hacluster
Nov 13 13:44:20 [467] vm3      attrd: (      main.c:307   )    info: main: 	Starting up
Nov 13 13:44:20 [467] vm3      attrd: (   cluster.c:536   )    info: get_cluster_type: 	Verifying cluster type: 'corosync'
Nov 13 13:44:20 [467] vm3      attrd: (   cluster.c:573   )    info: get_cluster_type: 	Assuming an active 'corosync' cluster
Nov 13 13:44:20 [467] vm3      attrd: (   cluster.c:179   )  notice: crm_cluster_connect: 	Connecting to cluster infrastructure: corosync
Nov 13 13:44:20 [460] vm3 pacemakerd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:20 [450] vm3 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_created:272 connection created
Nov 13 13:44:20 [450] vm3 corosync debug   [CMAP  ] cmap.c:cmap_lib_init_fn:306 lib_init_fn: conn=0x7ffdfd180440
Nov 13 13:44:20 [450] vm3 corosync debug   [CPG   ] cpg.c:message_handler_req_exec_cpg_procjoin:1260 got procjoin message from cluster node -1062705778 (r(0) ip(192.168.101.142) r(1) ip(192.168.102.142) ) for pid 31462
Nov 13 13:44:20 [450] vm3 corosync debug   [CPG   ] cpg.c:message_handler_req_exec_cpg_procjoin:1260 got procjoin message from cluster node -1062705778 (r(0) ip(192.168.101.142) r(1) ip(192.168.102.142) ) for pid 31460
Nov 13 13:44:20 [465] vm3 stonith-ng: (   cluster.c:536   )    info: get_cluster_type: 	Verifying cluster type: 'corosync'
Nov 13 13:44:20 [465] vm3 stonith-ng: (   cluster.c:573   )    info: get_cluster_type: 	Assuming an active 'corosync' cluster
Nov 13 13:44:20 [465] vm3 stonith-ng: (   cluster.c:179   )  notice: crm_cluster_connect: 	Connecting to cluster infrastructure: corosync
Nov 13 13:44:20 [460] vm3 pacemakerd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:20 [460] vm3 pacemakerd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:20 [450] vm3 corosync debug   [QB    ] ipc_setup.c:handle_new_connection:484 IPC credentials authenticated (451-467-29)
Nov 13 13:44:20 [450] vm3 corosync debug   [QB    ] ipc_shm.c:qb_ipcs_shm_connect:295 connecting to client [467]
Nov 13 13:44:21 [450] vm3 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:21 [468] vm3    pengine: (   logging.c:761   )    info: crm_log_init: 	Changed active directory to /var/lib/heartbeat/cores/hacluster
Nov 13 13:44:21 [468] vm3    pengine: (      main.c:168   )   debug: main: 	Init server comms
Nov 13 13:44:21 [468] vm3    pengine: ( ipc_setup.c:377   )    info: qb_ipcs_us_publish: 	server name: pengine
Nov 13 13:44:21 [468] vm3    pengine: (      main.c:176   )    info: main: 	Starting pengine
Nov 13 13:44:21 [450] vm3 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:21 [469] vm3       crmd: (   logging.c:761   )    info: crm_log_init: 	Changed active directory to /var/lib/heartbeat/cores/hacluster
Nov 13 13:44:21 [469] vm3       crmd: (      main.c:97    )  notice: main: 	CRM Git Version: 2383f6c
Nov 13 13:44:21 [469] vm3       crmd: (      main.c:134   )   debug: crmd_init: 	Starting crmd
Nov 13 13:44:21 [469] vm3       crmd: (       fsa.c:193   )   debug: s_crmd_fsa: 	Processing I_STARTUP: [ state=S_STARTING cause=C_STARTUP origin=crmd_init ]
Nov 13 13:44:21 [469] vm3       crmd: (      misc.c:47    )    info: do_log: 	FSA: Input I_STARTUP from crmd_init() received in state S_STARTING
Nov 13 13:44:21 [469] vm3       crmd: (   control.c:488   )   debug: do_startup: 	Registering Signal Handlers
Nov 13 13:44:21 [469] vm3       crmd: (   control.c:495   )   debug: do_startup: 	Creating CIB and LRM objects
Nov 13 13:44:21 [469] vm3       crmd: (   cluster.c:536   )    info: get_cluster_type: 	Verifying cluster type: 'corosync'
Nov 13 13:44:21 [469] vm3       crmd: (   cluster.c:573   )    info: get_cluster_type: 	Assuming an active 'corosync' cluster
Nov 13 13:44:21 [469] vm3       crmd: (       ipc.c:781   )    info: crm_ipc_connect: 	Could not establish cib_shm connection: Connection refused (111)
Nov 13 13:44:21 [469] vm3       crmd: (cib_native.c:229   )   debug: cib_native_signon_raw: 	Connection unsuccessful (0 (nil))
Nov 13 13:44:21 [469] vm3       crmd: (cib_native.c:272   )   debug: cib_native_signon_raw: 	Connection to CIB failed: Transport endpoint is not connected
Nov 13 13:44:21 [469] vm3       crmd: (cib_native.c:282   )   debug: cib_native_signoff: 	Signing out of the CIB Service
Nov 13 13:44:21 [450] vm3 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:21 [467] vm3      attrd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:21 [467] vm3      attrd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:21 [467] vm3      attrd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:21 [450] vm3 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_created:272 connection created
Nov 13 13:44:21 [450] vm3 corosync debug   [CPG   ] cpg.c:cpg_lib_init_fn:1459 lib_init_fn: conn=0x7ffdfcd77d10, cpd=0x7ffdfcd78564
Nov 13 13:44:21 [460] vm3 pacemakerd: (      ipcc.c:378   )   debug: qb_ipcc_disconnect: 	qb_ipcc_disconnect()
Nov 13 13:44:21 [460] vm3 pacemakerd: (ringbuffer.c:302   )   debug: qb_rb_close: 	Closing ringbuffer: /dev/shm/qb-cmap-request-451-460-28-header
Nov 13 13:44:21 [460] vm3 pacemakerd: (ringbuffer.c:302   )   debug: qb_rb_close: 	Closing ringbuffer: /dev/shm/qb-cmap-response-451-460-28-header
Nov 13 13:44:21 [460] vm3 pacemakerd: (ringbuffer.c:302   )   debug: qb_rb_close: 	Closing ringbuffer: /dev/shm/qb-cmap-event-451-460-28-header
Nov 13 13:44:21 [460] vm3 pacemakerd: (  corosync.c:134   )  notice: corosync_node_name: 	Unable to get node name for nodeid 3232261517
Nov 13 13:44:21 [460] vm3 pacemakerd: (membership.c:607   )  notice: crm_update_peer_state: 	pcmk_quorum_notification: Node (null)[3232261517] - state is now member (was (null))
Nov 13 13:44:21 [460] vm3 pacemakerd: (  corosync.c:210   )   debug: pcmk_quorum_notification: 	Member[1] 3232261518 
Nov 13 13:44:21 [460] vm3 pacemakerd: (membership.c:399   )    info: crm_get_peer: 	Created entry 18628256-f88a-4204-8634-d1c2f5f976b2/0x9ea7d0 for node (null)/3232261518 (3 total)
Nov 13 13:44:21 [460] vm3 pacemakerd: (membership.c:432   )    info: crm_get_peer: 	Node 3232261518 has uuid 3232261518
Nov 13 13:44:21 [460] vm3 pacemakerd: (  corosync.c:214   )    info: pcmk_quorum_notification: 	Obtaining name for new node 3232261518
Nov 13 13:44:21 [467] vm3      attrd: (       cpg.c:110   )   debug: get_local_nodeid: 	Local nodeid is 3232261519
Nov 13 13:44:21 [450] vm3 corosync debug   [QB    ] ipcs.c:qb_ipcs_dispatch_connection_request:757 HUP conn (451-460-28)
Nov 13 13:44:21 [450] vm3 corosync debug   [QB    ] ipcs.c:qb_ipcs_disconnect:605 qb_ipcs_disconnect(451-460-28) state:2
Nov 13 13:44:21 [450] vm3 corosync debug   [QB    ] loop_poll_epoll.c:_del:117 epoll_ctl(del): Bad file descriptor (9)
Nov 13 13:44:21 [450] vm3 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_closed:417 cs_ipcs_connection_closed() 
Nov 13 13:44:21 [450] vm3 corosync debug   [CMAP  ] cmap.c:cmap_lib_exit_fn:325 exit_fn for conn=0x7ffdfd180440
Nov 13 13:44:21 [450] vm3 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_destroyed:390 cs_ipcs_connection_destroyed() 
Nov 13 13:44:21 [450] vm3 corosync debug   [QB    ] ringbuffer.c:qb_rb_close:299 Free'ing ringbuffer: /dev/shm/qb-cmap-response-451-460-28-header
Nov 13 13:44:21 [450] vm3 corosync debug   [QB    ] ringbuffer.c:qb_rb_close:299 Free'ing ringbuffer: /dev/shm/qb-cmap-event-451-460-28-header
Nov 13 13:44:21 [450] vm3 corosync debug   [QB    ] ringbuffer.c:qb_rb_close:299 Free'ing ringbuffer: /dev/shm/qb-cmap-request-451-460-28-header
Nov 13 13:44:21 [467] vm3      attrd: (membership.c:399   )    info: crm_get_peer: 	Created entry 00b9766b-a28b-4016-9e84-6ff891d8164a/0x1cce140 for node (null)/3232261519 (1 total)
Nov 13 13:44:21 [467] vm3      attrd: (membership.c:432   )    info: crm_get_peer: 	Node 3232261519 has uuid 3232261519
Nov 13 13:44:21 [467] vm3      attrd: (membership.c:550   )    info: crm_update_peer_proc: 	cluster_connect_cpg: Node (null)[3232261519] - corosync-cpg is now online
Nov 13 13:44:21 [467] vm3      attrd: (membership.c:607   )  notice: crm_update_peer_state: 	attrd_peer_change_cb: Node (null)[3232261519] - state is now member (was (null))
Nov 13 13:44:21 [467] vm3      attrd: (  corosync.c:345   )    info: init_cs_connection_once: 	Connection to 'corosync': established
Nov 13 13:44:21 [450] vm3 corosync debug   [CPG   ] cpg.c:message_handler_req_exec_cpg_procjoin:1260 got procjoin message from cluster node -1062705778 (r(0) ip(192.168.101.142) r(1) ip(192.168.102.142) ) for pid 31459
Nov 13 13:44:21 [450] vm3 corosync debug   [CPG   ] cpg.c:message_handler_req_exec_cpg_procjoin:1260 got procjoin message from cluster node -1062705777 (r(0) ip(192.168.101.143) r(1) ip(192.168.102.143) ) for pid 467
Nov 13 13:44:21 [450] vm3 corosync debug   [QB    ] ipc_setup.c:handle_new_connection:484 IPC credentials authenticated (451-465-28)
Nov 13 13:44:21 [464] vm3        cib: (        io.c:596   )   debug: activateCibXml: 	Triggering CIB write for start op
Nov 13 13:44:21 [464] vm3        cib: (      main.c:586   )    info: startCib: 	CIB Initialization completed successfully
Nov 13 13:44:21 [464] vm3        cib: (   cluster.c:179   )  notice: crm_cluster_connect: 	Connecting to cluster infrastructure: corosync
Nov 13 13:44:21 [450] vm3 corosync debug   [QB    ] ipc_shm.c:qb_ipcs_shm_connect:295 connecting to client [465]
Nov 13 13:44:21 [450] vm3 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:21 [450] vm3 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:21 [450] vm3 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:21 [465] vm3 stonith-ng: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:21 [465] vm3 stonith-ng: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:21 [465] vm3 stonith-ng: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:21 [450] vm3 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_created:272 connection created
Nov 13 13:44:21 [450] vm3 corosync debug   [CPG   ] cpg.c:cpg_lib_init_fn:1459 lib_init_fn: conn=0x7ffdfcd79890, cpd=0x7ffdfd181964
Nov 13 13:44:21 [450] vm3 corosync debug   [QB    ] ipc_setup.c:handle_new_connection:484 IPC credentials authenticated (451-460-30)
Nov 13 13:44:21 [450] vm3 corosync debug   [QB    ] ipc_shm.c:qb_ipcs_shm_connect:295 connecting to client [460]
Nov 13 13:44:21 [450] vm3 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:21 [450] vm3 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:21 [450] vm3 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:21 [460] vm3 pacemakerd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:21 [460] vm3 pacemakerd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:21 [460] vm3 pacemakerd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:21 [450] vm3 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_created:272 connection created
Nov 13 13:44:21 [450] vm3 corosync debug   [CMAP  ] cmap.c:cmap_lib_init_fn:306 lib_init_fn: conn=0x7ffdfd180d80
Nov 13 13:44:21 [465] vm3 stonith-ng: (       cpg.c:110   )   debug: get_local_nodeid: 	Local nodeid is 3232261519
Nov 13 13:44:21 [460] vm3 pacemakerd: (      ipcc.c:378   )   debug: qb_ipcc_disconnect: 	qb_ipcc_disconnect()
Nov 13 13:44:21 [460] vm3 pacemakerd: (ringbuffer.c:302   )   debug: qb_rb_close: 	Closing ringbuffer: /dev/shm/qb-cmap-request-451-460-30-header
Nov 13 13:44:21 [460] vm3 pacemakerd: (ringbuffer.c:302   )   debug: qb_rb_close: 	Closing ringbuffer: /dev/shm/qb-cmap-response-451-460-30-header
Nov 13 13:44:21 [460] vm3 pacemakerd: (ringbuffer.c:302   )   debug: qb_rb_close: 	Closing ringbuffer: /dev/shm/qb-cmap-event-451-460-30-header
Nov 13 13:44:21 [460] vm3 pacemakerd: (  corosync.c:134   )  notice: corosync_node_name: 	Unable to get node name for nodeid 3232261518
Nov 13 13:44:21 [460] vm3 pacemakerd: (membership.c:607   )  notice: crm_update_peer_state: 	pcmk_quorum_notification: Node (null)[3232261518] - state is now member (was (null))
Nov 13 13:44:21 [460] vm3 pacemakerd: (  corosync.c:210   )   debug: pcmk_quorum_notification: 	Member[2] 3232261519 
Nov 13 13:44:21 [460] vm3 pacemakerd: (membership.c:607   )  notice: crm_update_peer_state: 	pcmk_quorum_notification: Node vm3[3232261519] - state is now member (was (null))
Nov 13 13:44:21 [460] vm3 pacemakerd: (membership.c:404   )    info: crm_get_peer: 	Node 3232261517 is now known as vm1
Nov 13 13:44:21 [460] vm3 pacemakerd: ( pacemaker.c:586   )   debug: update_node_processes: 	Node vm1 now has process list: 00000000000000000000000000111310 (was 00000000000000000000000000000000)
Nov 13 13:44:21 [460] vm3 pacemakerd: (membership.c:404   )    info: crm_get_peer: 	Node 3232261518 is now known as vm2
Nov 13 13:44:21 [460] vm3 pacemakerd: ( pacemaker.c:586   )   debug: update_node_processes: 	Node vm2 now has process list: 00000000000000000000000000111310 (was 00000000000000000000000000000000)
Nov 13 13:44:21 [465] vm3 stonith-ng: (membership.c:399   )    info: crm_get_peer: 	Created entry a67a7722-f78f-46cf-b8b4-b989353591b2/0x1b006d0 for node (null)/3232261519 (1 total)
Nov 13 13:44:21 [465] vm3 stonith-ng: (membership.c:432   )    info: crm_get_peer: 	Node 3232261519 has uuid 3232261519
Nov 13 13:44:21 [465] vm3 stonith-ng: (membership.c:550   )    info: crm_update_peer_proc: 	cluster_connect_cpg: Node (null)[3232261519] - corosync-cpg is now online
Nov 13 13:44:21 [465] vm3 stonith-ng: (  corosync.c:345   )    info: init_cs_connection_once: 	Connection to 'corosync': established
Nov 13 13:44:21 [450] vm3 corosync debug   [QB    ] ipcs.c:qb_ipcs_dispatch_connection_request:757 HUP conn (451-460-30)
Nov 13 13:44:21 [450] vm3 corosync debug   [QB    ] ipcs.c:qb_ipcs_disconnect:605 qb_ipcs_disconnect(451-460-30) state:2
Nov 13 13:44:21 [450] vm3 corosync debug   [QB    ] loop_poll_epoll.c:_del:117 epoll_ctl(del): Bad file descriptor (9)
Nov 13 13:44:21 [450] vm3 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_closed:417 cs_ipcs_connection_closed() 
Nov 13 13:44:21 [450] vm3 corosync debug   [CMAP  ] cmap.c:cmap_lib_exit_fn:325 exit_fn for conn=0x7ffdfd180d80
Nov 13 13:44:21 [450] vm3 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_destroyed:390 cs_ipcs_connection_destroyed() 
Nov 13 13:44:21 [450] vm3 corosync debug   [QB    ] ringbuffer.c:qb_rb_close:299 Free'ing ringbuffer: /dev/shm/qb-cmap-response-451-460-30-header
Nov 13 13:44:21 [450] vm3 corosync debug   [QB    ] ringbuffer.c:qb_rb_close:299 Free'ing ringbuffer: /dev/shm/qb-cmap-event-451-460-30-header
Nov 13 13:44:21 [450] vm3 corosync debug   [QB    ] ringbuffer.c:qb_rb_close:299 Free'ing ringbuffer: /dev/shm/qb-cmap-request-451-460-30-header
Nov 13 13:44:21 [450] vm3 corosync debug   [QB    ] ipc_setup.c:handle_new_connection:484 IPC credentials authenticated (451-464-30)
Nov 13 13:44:21 [450] vm3 corosync debug   [QB    ] ipc_shm.c:qb_ipcs_shm_connect:295 connecting to client [464]
Nov 13 13:44:21 [450] vm3 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:21 [450] vm3 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:21 [450] vm3 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:21 [464] vm3        cib: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:21 [464] vm3        cib: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:21 [464] vm3        cib: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:21 [450] vm3 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_created:272 connection created
Nov 13 13:44:21 [450] vm3 corosync debug   [CPG   ] cpg.c:cpg_lib_init_fn:1459 lib_init_fn: conn=0x7ffdfd180d80, cpd=0x7ffdfd1814b4
Nov 13 13:44:21 [450] vm3 corosync debug   [QB    ] ipc_setup.c:handle_new_connection:484 IPC credentials authenticated (451-467-31)
Nov 13 13:44:21 [450] vm3 corosync debug   [QB    ] ipc_shm.c:qb_ipcs_shm_connect:295 connecting to client [467]
Nov 13 13:44:21 [450] vm3 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:21 [450] vm3 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:21 [450] vm3 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:21 [467] vm3      attrd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:21 [467] vm3      attrd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:21 [467] vm3      attrd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:21 [450] vm3 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_created:272 connection created
Nov 13 13:44:21 [450] vm3 corosync debug   [CMAP  ] cmap.c:cmap_lib_init_fn:306 lib_init_fn: conn=0x7ffdfd080b90
Nov 13 13:44:21 [464] vm3        cib: (       cpg.c:110   )   debug: get_local_nodeid: 	Local nodeid is 3232261519
Nov 13 13:44:21 [467] vm3      attrd: (      ipcc.c:378   )   debug: qb_ipcc_disconnect: 	qb_ipcc_disconnect()
Nov 13 13:44:21 [467] vm3      attrd: (ringbuffer.c:302   )   debug: qb_rb_close: 	Closing ringbuffer: /dev/shm/qb-cmap-request-451-467-31-header
Nov 13 13:44:21 [467] vm3      attrd: (ringbuffer.c:302   )   debug: qb_rb_close: 	Closing ringbuffer: /dev/shm/qb-cmap-response-451-467-31-header
Nov 13 13:44:21 [467] vm3      attrd: (ringbuffer.c:302   )   debug: qb_rb_close: 	Closing ringbuffer: /dev/shm/qb-cmap-event-451-467-31-header
Nov 13 13:44:21 [467] vm3      attrd: (  corosync.c:134   )  notice: corosync_node_name: 	Unable to get node name for nodeid 3232261519
Nov 13 13:44:21 [467] vm3      attrd: (   cluster.c:338   )  notice: get_node_name: 	Defaulting to uname -n for the local corosync node name
Nov 13 13:44:21 [467] vm3      attrd: (membership.c:404   )    info: crm_get_peer: 	Node 3232261519 is now known as vm3
Nov 13 13:44:21 [467] vm3      attrd: (      main.c:323   )    info: main: 	Cluster connection active
Nov 13 13:44:21 [467] vm3      attrd: ( ipc_setup.c:377   )    info: qb_ipcs_us_publish: 	server name: attrd
Nov 13 13:44:21 [467] vm3      attrd: (      main.c:327   )    info: main: 	Accepting attribute updates
Nov 13 13:44:21 [467] vm3      attrd: (      main.c:149   )   debug: attrd_cib_connect: 	CIB signon attempt 1
Nov 13 13:44:21 [467] vm3      attrd: (       ipc.c:781   )    info: crm_ipc_connect: 	Could not establish cib_rw connection: Connection refused (111)
Nov 13 13:44:21 [467] vm3      attrd: (cib_native.c:229   )   debug: cib_native_signon_raw: 	Connection unsuccessful (0 (nil))
Nov 13 13:44:21 [467] vm3      attrd: (cib_native.c:272   )   debug: cib_native_signon_raw: 	Connection to CIB failed: Transport endpoint is not connected
Nov 13 13:44:21 [467] vm3      attrd: (cib_native.c:282   )   debug: cib_native_signoff: 	Signing out of the CIB Service
Nov 13 13:44:21 [450] vm3 corosync debug   [CPG   ] cpg.c:message_handler_req_exec_cpg_procjoin:1260 got procjoin message from cluster node -1062705777 (r(0) ip(192.168.101.143) r(1) ip(192.168.102.143) ) for pid 465
Nov 13 13:44:21 [450] vm3 corosync debug   [QB    ] ipc_setup.c:handle_new_connection:484 IPC credentials authenticated (451-465-32)
Nov 13 13:44:21 [450] vm3 corosync debug   [QB    ] ipc_shm.c:qb_ipcs_shm_connect:295 connecting to client [465]
Nov 13 13:44:21 [450] vm3 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:21 [450] vm3 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:21 [450] vm3 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:21 [465] vm3 stonith-ng: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:21 [465] vm3 stonith-ng: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:21 [465] vm3 stonith-ng: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:21 [450] vm3 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_created:272 connection created
Nov 13 13:44:21 [450] vm3 corosync debug   [CMAP  ] cmap.c:cmap_lib_init_fn:306 lib_init_fn: conn=0x7ffdfcd7ce80
Nov 13 13:44:21 [464] vm3        cib: (membership.c:399   )    info: crm_get_peer: 	Created entry 3d5cecc8-ba89-4ad7-8053-78b2ef795126/0x1a6c360 for node (null)/3232261519 (1 total)
Nov 13 13:44:21 [464] vm3        cib: (membership.c:432   )    info: crm_get_peer: 	Node 3232261519 has uuid 3232261519
Nov 13 13:44:21 [464] vm3        cib: (membership.c:550   )    info: crm_update_peer_proc: 	cluster_connect_cpg: Node (null)[3232261519] - corosync-cpg is now online
Nov 13 13:44:21 [464] vm3        cib: (  corosync.c:345   )    info: init_cs_connection_once: 	Connection to 'corosync': established
Nov 13 13:44:21 [450] vm3 corosync debug   [QB    ] ipcs.c:qb_ipcs_dispatch_connection_request:757 HUP conn (451-467-31)
Nov 13 13:44:21 [450] vm3 corosync debug   [QB    ] ipcs.c:qb_ipcs_disconnect:605 qb_ipcs_disconnect(451-467-31) state:2
Nov 13 13:44:21 [450] vm3 corosync debug   [QB    ] loop_poll_epoll.c:_del:117 epoll_ctl(del): Bad file descriptor (9)
Nov 13 13:44:21 [450] vm3 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_closed:417 cs_ipcs_connection_closed() 
Nov 13 13:44:21 [450] vm3 corosync debug   [CMAP  ] cmap.c:cmap_lib_exit_fn:325 exit_fn for conn=0x7ffdfd080b90
Nov 13 13:44:21 [450] vm3 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_destroyed:390 cs_ipcs_connection_destroyed() 
Nov 13 13:44:21 [450] vm3 corosync debug   [QB    ] ringbuffer.c:qb_rb_close:299 Free'ing ringbuffer: /dev/shm/qb-cmap-response-451-467-31-header
Nov 13 13:44:21 [450] vm3 corosync debug   [QB    ] ringbuffer.c:qb_rb_close:299 Free'ing ringbuffer: /dev/shm/qb-cmap-event-451-467-31-header
Nov 13 13:44:21 [450] vm3 corosync debug   [QB    ] ringbuffer.c:qb_rb_close:299 Free'ing ringbuffer: /dev/shm/qb-cmap-request-451-467-31-header
Nov 13 13:44:21 [465] vm3 stonith-ng: (      ipcc.c:378   )   debug: qb_ipcc_disconnect: 	qb_ipcc_disconnect()
Nov 13 13:44:21 [465] vm3 stonith-ng: (ringbuffer.c:302   )   debug: qb_rb_close: 	Closing ringbuffer: /dev/shm/qb-cmap-request-451-465-32-header
Nov 13 13:44:21 [465] vm3 stonith-ng: (ringbuffer.c:302   )   debug: qb_rb_close: 	Closing ringbuffer: /dev/shm/qb-cmap-response-451-465-32-header
Nov 13 13:44:21 [465] vm3 stonith-ng: (ringbuffer.c:302   )   debug: qb_rb_close: 	Closing ringbuffer: /dev/shm/qb-cmap-event-451-465-32-header
Nov 13 13:44:21 [465] vm3 stonith-ng: (  corosync.c:134   )  notice: corosync_node_name: 	Unable to get node name for nodeid 3232261519
Nov 13 13:44:21 [465] vm3 stonith-ng: (   cluster.c:338   )  notice: get_node_name: 	Defaulting to uname -n for the local corosync node name
Nov 13 13:44:21 [465] vm3 stonith-ng: (membership.c:404   )    info: crm_get_peer: 	Node 3232261519 is now known as vm3
Nov 13 13:44:21 [465] vm3 stonith-ng: (       ipc.c:781   )    info: crm_ipc_connect: 	Could not establish cib_rw connection: Connection refused (111)
Nov 13 13:44:21 [465] vm3 stonith-ng: (cib_native.c:229   )   debug: cib_native_signon_raw: 	Connection unsuccessful (0 (nil))
Nov 13 13:44:21 [465] vm3 stonith-ng: (cib_native.c:272   )   debug: cib_native_signon_raw: 	Connection to CIB failed: Transport endpoint is not connected
Nov 13 13:44:21 [465] vm3 stonith-ng: (cib_native.c:282   )   debug: cib_native_signoff: 	Signing out of the CIB Service
Nov 13 13:44:21 [450] vm3 corosync debug   [QB    ] ipcs.c:qb_ipcs_dispatch_connection_request:757 HUP conn (451-465-32)
Nov 13 13:44:21 [450] vm3 corosync debug   [QB    ] ipcs.c:qb_ipcs_disconnect:605 qb_ipcs_disconnect(451-465-32) state:2
Nov 13 13:44:21 [450] vm3 corosync debug   [QB    ] loop_poll_epoll.c:_del:117 epoll_ctl(del): Bad file descriptor (9)
Nov 13 13:44:21 [450] vm3 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_closed:417 cs_ipcs_connection_closed() 
Nov 13 13:44:21 [450] vm3 corosync debug   [CMAP  ] cmap.c:cmap_lib_exit_fn:325 exit_fn for conn=0x7ffdfcd7ce80
Nov 13 13:44:21 [450] vm3 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_destroyed:390 cs_ipcs_connection_destroyed() 
Nov 13 13:44:21 [450] vm3 corosync debug   [QB    ] ringbuffer.c:qb_rb_close:299 Free'ing ringbuffer: /dev/shm/qb-cmap-response-451-465-32-header
Nov 13 13:44:21 [450] vm3 corosync debug   [QB    ] ringbuffer.c:qb_rb_close:299 Free'ing ringbuffer: /dev/shm/qb-cmap-event-451-465-32-header
Nov 13 13:44:21 [450] vm3 corosync debug   [QB    ] ringbuffer.c:qb_rb_close:299 Free'ing ringbuffer: /dev/shm/qb-cmap-request-451-465-32-header
Nov 13 13:44:21 [450] vm3 corosync debug   [CPG   ] cpg.c:message_handler_req_exec_cpg_procjoin:1260 got procjoin message from cluster node -1062705777 (r(0) ip(192.168.101.143) r(1) ip(192.168.102.143) ) for pid 464
Nov 13 13:44:21 [450] vm3 corosync debug   [QB    ] ipc_setup.c:handle_new_connection:484 IPC credentials authenticated (451-464-31)
Nov 13 13:44:21 [450] vm3 corosync debug   [QB    ] ipc_shm.c:qb_ipcs_shm_connect:295 connecting to client [464]
Nov 13 13:44:21 [450] vm3 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:21 [450] vm3 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:21 [450] vm3 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:21 [464] vm3        cib: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:21 [464] vm3        cib: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:21 [464] vm3        cib: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:21 [450] vm3 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_created:272 connection created
Nov 13 13:44:21 [450] vm3 corosync debug   [CMAP  ] cmap.c:cmap_lib_init_fn:306 lib_init_fn: conn=0x7ffdfcd7ce80
Nov 13 13:44:21 [464] vm3        cib: (      ipcc.c:378   )   debug: qb_ipcc_disconnect: 	qb_ipcc_disconnect()
Nov 13 13:44:21 [464] vm3        cib: (ringbuffer.c:302   )   debug: qb_rb_close: 	Closing ringbuffer: /dev/shm/qb-cmap-request-451-464-31-header
Nov 13 13:44:21 [464] vm3        cib: (ringbuffer.c:302   )   debug: qb_rb_close: 	Closing ringbuffer: /dev/shm/qb-cmap-response-451-464-31-header
Nov 13 13:44:21 [464] vm3        cib: (ringbuffer.c:302   )   debug: qb_rb_close: 	Closing ringbuffer: /dev/shm/qb-cmap-event-451-464-31-header
Nov 13 13:44:21 [464] vm3        cib: (  corosync.c:134   )  notice: corosync_node_name: 	Unable to get node name for nodeid 3232261519
Nov 13 13:44:21 [464] vm3        cib: (   cluster.c:338   )  notice: get_node_name: 	Defaulting to uname -n for the local corosync node name
Nov 13 13:44:21 [464] vm3        cib: (membership.c:404   )    info: crm_get_peer: 	Node 3232261519 is now known as vm3
Nov 13 13:44:21 [464] vm3        cib: ( ipc_setup.c:377   )    info: qb_ipcs_us_publish: 	server name: cib_ro
Nov 13 13:44:21 [464] vm3        cib: ( ipc_setup.c:377   )    info: qb_ipcs_us_publish: 	server name: cib_rw
Nov 13 13:44:21 [464] vm3        cib: ( ipc_setup.c:377   )    info: qb_ipcs_us_publish: 	server name: cib_shm
Nov 13 13:44:21 [464] vm3        cib: (      main.c:550   )    info: cib_init: 	Starting cib mainloop
Nov 13 13:44:21 [464] vm3        cib: (       cpg.c:378   )    info: pcmk_cpg_membership: 	Joined[0.0] cib.3232261519 
Nov 13 13:44:21 [464] vm3        cib: (membership.c:399   )    info: crm_get_peer: 	Created entry da0b5f3f-8aeb-43e5-a5b8-2adea07b94d5/0x1a6ebc0 for node (null)/3232261517 (2 total)
Nov 13 13:44:21 [464] vm3        cib: (membership.c:432   )    info: crm_get_peer: 	Node 3232261517 has uuid 3232261517
Nov 13 13:44:21 [464] vm3        cib: (       cpg.c:384   )    info: pcmk_cpg_membership: 	Member[0.0] cib.3232261517 
Nov 13 13:44:21 [464] vm3        cib: (membership.c:550   )    info: crm_update_peer_proc: 	pcmk_cpg_membership: Node (null)[3232261517] - corosync-cpg is now online
Nov 13 13:44:21 [464] vm3        cib: (membership.c:399   )    info: crm_get_peer: 	Created entry be1aae7f-b01f-45a8-9001-7da35423f919/0x1a6def0 for node (null)/3232261518 (3 total)
Nov 13 13:44:21 [464] vm3        cib: (membership.c:432   )    info: crm_get_peer: 	Node 3232261518 has uuid 3232261518
Nov 13 13:44:21 [464] vm3        cib: (       cpg.c:384   )    info: pcmk_cpg_membership: 	Member[0.1] cib.3232261518 
Nov 13 13:44:21 [464] vm3        cib: (membership.c:550   )    info: crm_update_peer_proc: 	pcmk_cpg_membership: Node (null)[3232261518] - corosync-cpg is now online
Nov 13 13:44:21 [464] vm3        cib: (       cpg.c:384   )    info: pcmk_cpg_membership: 	Member[0.2] cib.3232261519 
Nov 13 13:44:21 [450] vm3 corosync debug   [QB    ] ipcs.c:qb_ipcs_dispatch_connection_request:757 HUP conn (451-464-31)
Nov 13 13:44:21 [450] vm3 corosync debug   [QB    ] ipcs.c:qb_ipcs_disconnect:605 qb_ipcs_disconnect(451-464-31) state:2
Nov 13 13:44:21 [450] vm3 corosync debug   [QB    ] loop_poll_epoll.c:_del:117 epoll_ctl(del): Bad file descriptor (9)
Nov 13 13:44:21 [450] vm3 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_closed:417 cs_ipcs_connection_closed() 
Nov 13 13:44:21 [450] vm3 corosync debug   [CMAP  ] cmap.c:cmap_lib_exit_fn:325 exit_fn for conn=0x7ffdfcd7ce80
Nov 13 13:44:21 [450] vm3 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_destroyed:390 cs_ipcs_connection_destroyed() 
Nov 13 13:44:21 [450] vm3 corosync debug   [QB    ] ringbuffer.c:qb_rb_close:299 Free'ing ringbuffer: /dev/shm/qb-cmap-response-451-464-31-header
Nov 13 13:44:21 [450] vm3 corosync debug   [QB    ] ringbuffer.c:qb_rb_close:299 Free'ing ringbuffer: /dev/shm/qb-cmap-event-451-464-31-header
Nov 13 13:44:21 [450] vm3 corosync debug   [QB    ] ringbuffer.c:qb_rb_close:299 Free'ing ringbuffer: /dev/shm/qb-cmap-request-451-464-31-header
Nov 13 13:44:21 [464] vm3        cib: (     utils.c:1216  )   debug: get_last_sequence: 	Series file /var/lib/pacemaker/cib/cib.last does not exist
Nov 13 13:44:21 [464] vm3        cib: (        io.c:748   )   debug: write_cib_contents: 	Writing CIB to disk
Nov 13 13:44:21 [464] vm3        cib: (        io.c:773   )    info: write_cib_contents: 	Wrote version 0.0.0 of the CIB to disk (digest: 5a2fda2a744a4dcae8dfd552c5909754)
Nov 13 13:44:21 [464] vm3        cib: (        io.c:781   )   debug: write_cib_contents: 	Wrote digest 5a2fda2a744a4dcae8dfd552c5909754 to disk
Nov 13 13:44:21 [464] vm3        cib: (        io.c:259   )    info: retrieveCib: 	Reading cluster configuration from: /var/lib/pacemaker/cib/cib.bTQsPv (digest: /var/lib/pacemaker/cib/cib.ZGUMyJ)
Nov 13 13:44:21 [464] vm3        cib: (        io.c:786   )   debug: write_cib_contents: 	Activating /var/lib/pacemaker/cib/cib.bTQsPv
Nov 13 13:44:21 [450] vm3 corosync debug   [CPG   ] cpg.c:message_handler_req_exec_cpg_procjoin:1260 got procjoin message from cluster node -1062705779 (r(0) ip(192.168.101.141) r(1) ip(192.168.102.141) ) for pid 15883
Nov 13 13:44:21 [450] vm3 corosync debug   [CPG   ] cpg.c:message_handler_req_exec_cpg_procjoin:1260 got procjoin message from cluster node -1062705778 (r(0) ip(192.168.101.142) r(1) ip(192.168.102.142) ) for pid 31464
Nov 13 13:44:22 [464] vm3        cib: (       ipc.c:327   )    info: crm_client_new: 	Connecting 0x1a6f340 for uid=496 gid=492 pid=469 id=4c21d8b3-8500-4e75-a428-bf457cc36a6c
Nov 13 13:44:22 [464] vm3        cib: ( ipc_setup.c:484   )   debug: handle_new_connection: 	IPC credentials authenticated (464-469-10)
Nov 13 13:44:22 [464] vm3        cib: (   ipc_shm.c:295   )   debug: qb_ipcs_shm_connect: 	connecting to client [469]
Nov 13 13:44:22 [464] vm3        cib: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:524301; real_size:528384; rb->word_size:132096
Nov 13 13:44:22 [464] vm3        cib: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:524301; real_size:528384; rb->word_size:132096
Nov 13 13:44:22 [464] vm3        cib: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:524301; real_size:528384; rb->word_size:132096
Nov 13 13:44:22 [469] vm3       crmd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:524301; real_size:528384; rb->word_size:132096
Nov 13 13:44:22 [469] vm3       crmd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:524301; real_size:528384; rb->word_size:132096
Nov 13 13:44:22 [469] vm3       crmd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:524301; real_size:528384; rb->word_size:132096
Nov 13 13:44:22 [469] vm3       crmd: (cib_native.c:268   )   debug: cib_native_signon_raw: 	Connection to CIB successful
Nov 13 13:44:22 [464] vm3        cib: ( callbacks.c:189   )   debug: cib_common_callback_worker: 	Setting cib_refresh_notify callbacks for crmd (4c21d8b3-8500-4e75-a428-bf457cc36a6c): on
Nov 13 13:44:22 [464] vm3        cib: ( callbacks.c:189   )   debug: cib_common_callback_worker: 	Setting cib_diff_notify callbacks for crmd (4c21d8b3-8500-4e75-a428-bf457cc36a6c): on
Nov 13 13:44:22 [469] vm3       crmd: (       cib.c:215   )    info: do_cib_control: 	CIB connection established
Nov 13 13:44:22 [469] vm3       crmd: (   cluster.c:179   )  notice: crm_cluster_connect: 	Connecting to cluster infrastructure: corosync
Nov 13 13:44:22 [464] vm3        cib: ( callbacks.c:761   )    info: cib_process_request: 	Completed cib_query operation for section 'all': OK (rc=0, origin=local/crmd/2, version=0.0.0)
Nov 13 13:44:22 [450] vm3 corosync debug   [QB    ] ipc_setup.c:handle_new_connection:484 IPC credentials authenticated (451-469-31)
Nov 13 13:44:22 [450] vm3 corosync debug   [QB    ] ipc_shm.c:qb_ipcs_shm_connect:295 connecting to client [469]
Nov 13 13:44:22 [450] vm3 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:22 [450] vm3 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:22 [450] vm3 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:22 [469] vm3       crmd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:22 [469] vm3       crmd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:22 [469] vm3       crmd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:22 [450] vm3 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_created:272 connection created
Nov 13 13:44:22 [450] vm3 corosync debug   [CPG   ] cpg.c:cpg_lib_init_fn:1459 lib_init_fn: conn=0x7ffdfd080b60, cpd=0x7ffdfd0810f4
Nov 13 13:44:22 [469] vm3       crmd: (       cpg.c:110   )   debug: get_local_nodeid: 	Local nodeid is 3232261519
Nov 13 13:44:22 [469] vm3       crmd: (membership.c:399   )    info: crm_get_peer: 	Created entry 1b84e6c1-a836-47c9-bdcb-98c835f079db/0x275fc60 for node (null)/3232261519 (1 total)
Nov 13 13:44:22 [469] vm3       crmd: (membership.c:432   )    info: crm_get_peer: 	Node 3232261519 has uuid 3232261519
Nov 13 13:44:22 [469] vm3       crmd: (membership.c:550   )    info: crm_update_peer_proc: 	cluster_connect_cpg: Node (null)[3232261519] - corosync-cpg is now online
Nov 13 13:44:22 [469] vm3       crmd: (  corosync.c:345   )    info: init_cs_connection_once: 	Connection to 'corosync': established
Nov 13 13:44:22 [450] vm3 corosync debug   [CPG   ] cpg.c:message_handler_req_exec_cpg_procjoin:1260 got procjoin message from cluster node -1062705777 (r(0) ip(192.168.101.143) r(1) ip(192.168.102.143) ) for pid 469
Nov 13 13:44:22 [450] vm3 corosync debug   [QB    ] ipc_setup.c:handle_new_connection:484 IPC credentials authenticated (451-469-32)
Nov 13 13:44:22 [450] vm3 corosync debug   [QB    ] ipc_shm.c:qb_ipcs_shm_connect:295 connecting to client [469]
Nov 13 13:44:22 [450] vm3 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:22 [450] vm3 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:22 [450] vm3 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:22 [469] vm3       crmd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:22 [469] vm3       crmd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:22 [469] vm3       crmd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:22 [450] vm3 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_created:272 connection created
Nov 13 13:44:22 [450] vm3 corosync debug   [CMAP  ] cmap.c:cmap_lib_init_fn:306 lib_init_fn: conn=0x7ffdfcd7da30
Nov 13 13:44:22 [469] vm3       crmd: (      ipcc.c:378   )   debug: qb_ipcc_disconnect: 	qb_ipcc_disconnect()
Nov 13 13:44:22 [469] vm3       crmd: (ringbuffer.c:302   )   debug: qb_rb_close: 	Closing ringbuffer: /dev/shm/qb-cmap-request-451-469-32-header
Nov 13 13:44:22 [469] vm3       crmd: (ringbuffer.c:302   )   debug: qb_rb_close: 	Closing ringbuffer: /dev/shm/qb-cmap-response-451-469-32-header
Nov 13 13:44:22 [469] vm3       crmd: (ringbuffer.c:302   )   debug: qb_rb_close: 	Closing ringbuffer: /dev/shm/qb-cmap-event-451-469-32-header
Nov 13 13:44:22 [469] vm3       crmd: (  corosync.c:134   )  notice: corosync_node_name: 	Unable to get node name for nodeid 3232261519
Nov 13 13:44:22 [469] vm3       crmd: (   cluster.c:338   )  notice: get_node_name: 	Defaulting to uname -n for the local corosync node name
Nov 13 13:44:22 [469] vm3       crmd: (membership.c:404   )    info: crm_get_peer: 	Node 3232261519 is now known as vm3
Nov 13 13:44:22 [469] vm3       crmd: ( callbacks.c:118   )    info: peer_update_callback: 	vm3 is now (null)
Nov 13 13:44:22 [469] vm3       crmd: (  corosync.c:255   )   debug: cluster_connect_quorum: 	Configuring Pacemaker to obtain quorum from Corosync
Nov 13 13:44:22 [450] vm3 corosync debug   [QB    ] ipcs.c:qb_ipcs_dispatch_connection_request:757 HUP conn (451-469-32)
Nov 13 13:44:22 [450] vm3 corosync debug   [QB    ] ipcs.c:qb_ipcs_disconnect:605 qb_ipcs_disconnect(451-469-32) state:2
Nov 13 13:44:22 [450] vm3 corosync debug   [QB    ] loop_poll_epoll.c:_del:117 epoll_ctl(del): Bad file descriptor (9)
Nov 13 13:44:22 [450] vm3 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_closed:417 cs_ipcs_connection_closed() 
Nov 13 13:44:22 [450] vm3 corosync debug   [CMAP  ] cmap.c:cmap_lib_exit_fn:325 exit_fn for conn=0x7ffdfcd7da30
Nov 13 13:44:22 [450] vm3 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_destroyed:390 cs_ipcs_connection_destroyed() 
Nov 13 13:44:22 [450] vm3 corosync debug   [QB    ] ringbuffer.c:qb_rb_close:299 Free'ing ringbuffer: /dev/shm/qb-cmap-response-451-469-32-header
Nov 13 13:44:22 [450] vm3 corosync debug   [QB    ] ringbuffer.c:qb_rb_close:299 Free'ing ringbuffer: /dev/shm/qb-cmap-event-451-469-32-header
Nov 13 13:44:22 [450] vm3 corosync debug   [QB    ] ringbuffer.c:qb_rb_close:299 Free'ing ringbuffer: /dev/shm/qb-cmap-request-451-469-32-header
Nov 13 13:44:22 [450] vm3 corosync debug   [QB    ] ipc_setup.c:handle_new_connection:484 IPC credentials authenticated (451-469-32)
Nov 13 13:44:22 [450] vm3 corosync debug   [QB    ] ipc_shm.c:qb_ipcs_shm_connect:295 connecting to client [469]
Nov 13 13:44:22 [450] vm3 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:22 [450] vm3 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:22 [450] vm3 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:22 [469] vm3       crmd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:22 [469] vm3       crmd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:22 [469] vm3       crmd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:22 [450] vm3 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_created:272 connection created
Nov 13 13:44:22 [450] vm3 corosync debug   [QUORUM] vsf_quorum.c:quorum_lib_init_fn:316 lib_init_fn: conn=0x7ffdfcd7d880
Nov 13 13:44:22 [450] vm3 corosync debug   [QUORUM] vsf_quorum.c:message_handler_req_lib_quorum_gettype:471 got quorum_type request on 0x7ffdfcd7d880
Nov 13 13:44:22 [450] vm3 corosync debug   [QUORUM] vsf_quorum.c:message_handler_req_lib_quorum_getquorate:395 got quorate request on 0x7ffdfcd7d880
Nov 13 13:44:22 [469] vm3       crmd: (  corosync.c:273   )  notice: cluster_connect_quorum: 	Quorum acquired
Nov 13 13:44:22 [450] vm3 corosync debug   [QUORUM] vsf_quorum.c:message_handler_req_lib_quorum_trackstart:412 got trackstart request on 0x7ffdfcd7d880
Nov 13 13:44:22 [450] vm3 corosync debug   [QUORUM] vsf_quorum.c:message_handler_req_lib_quorum_trackstart:420 sending initial status to 0x7ffdfcd7d880
Nov 13 13:44:22 [450] vm3 corosync debug   [QUORUM] vsf_quorum.c:send_library_notification:359 sending quorum notification to 0x7ffdfcd7d880, length = 60
Nov 13 13:44:22 [450] vm3 corosync debug   [QB    ] ipc_setup.c:handle_new_connection:484 IPC credentials authenticated (451-469-33)
Nov 13 13:44:22 [450] vm3 corosync debug   [QB    ] ipc_shm.c:qb_ipcs_shm_connect:295 connecting to client [469]
Nov 13 13:44:22 [467] vm3      attrd: (      main.c:149   )   debug: attrd_cib_connect: 	CIB signon attempt 2
Nov 13 13:44:22 [464] vm3        cib: (       ipc.c:327   )    info: crm_client_new: 	Connecting 0x1af22b0 for uid=496 gid=492 pid=467 id=15b61ae7-62d1-4555-b4ac-84143ce0eda3
Nov 13 13:44:22 [464] vm3        cib: ( ipc_setup.c:484   )   debug: handle_new_connection: 	IPC credentials authenticated (464-467-11)
Nov 13 13:44:22 [464] vm3        cib: (   ipc_shm.c:295   )   debug: qb_ipcs_shm_connect: 	connecting to client [467]
Nov 13 13:44:22 [464] vm3        cib: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:524301; real_size:528384; rb->word_size:132096
Nov 13 13:44:22 [464] vm3        cib: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:524301; real_size:528384; rb->word_size:132096
Nov 13 13:44:22 [464] vm3        cib: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:524301; real_size:528384; rb->word_size:132096
Nov 13 13:44:22 [450] vm3 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:22 [450] vm3 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:22 [467] vm3      attrd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:524301; real_size:528384; rb->word_size:132096
Nov 13 13:44:22 [467] vm3      attrd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:524301; real_size:528384; rb->word_size:132096
Nov 13 13:44:22 [467] vm3      attrd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:524301; real_size:528384; rb->word_size:132096
Nov 13 13:44:22 [467] vm3      attrd: (cib_native.c:268   )   debug: cib_native_signon_raw: 	Connection to CIB successful
Nov 13 13:44:22 [467] vm3      attrd: (      main.c:159   )    info: attrd_cib_connect: 	Connected to the CIB after 2 attempts
Nov 13 13:44:22 [464] vm3        cib: (       ipc.c:327   )    info: crm_client_new: 	Connecting 0x1af3640 for uid=0 gid=0 pid=465 id=3b3586a2-45ff-4ba6-8242-af9114a2a8e7
Nov 13 13:44:22 [464] vm3        cib: ( ipc_setup.c:484   )   debug: handle_new_connection: 	IPC credentials authenticated (464-465-12)
Nov 13 13:44:22 [464] vm3        cib: (   ipc_shm.c:295   )   debug: qb_ipcs_shm_connect: 	connecting to client [465]
Nov 13 13:44:22 [464] vm3        cib: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:524301; real_size:528384; rb->word_size:132096
Nov 13 13:44:22 [464] vm3        cib: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:524301; real_size:528384; rb->word_size:132096
Nov 13 13:44:22 [450] vm3 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:22 [464] vm3        cib: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:524301; real_size:528384; rb->word_size:132096
Nov 13 13:44:22 [465] vm3 stonith-ng: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:524301; real_size:528384; rb->word_size:132096
Nov 13 13:44:22 [465] vm3 stonith-ng: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:524301; real_size:528384; rb->word_size:132096
Nov 13 13:44:22 [465] vm3 stonith-ng: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:524301; real_size:528384; rb->word_size:132096
Nov 13 13:44:22 [469] vm3       crmd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:22 [469] vm3       crmd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:22 [469] vm3       crmd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:22 [464] vm3        cib: ( callbacks.c:189   )   debug: cib_common_callback_worker: 	Setting cib_refresh_notify callbacks for attrd (15b61ae7-62d1-4555-b4ac-84143ce0eda3): on
Nov 13 13:44:22 [467] vm3      attrd: (      main.c:335   )    info: main: 	CIB connection active
Nov 13 13:44:22 [467] vm3      attrd: (       cpg.c:378   )    info: pcmk_cpg_membership: 	Joined[0.0] attrd.3232261519 
Nov 13 13:44:22 [467] vm3      attrd: (membership.c:399   )    info: crm_get_peer: 	Created entry aeb1dc66-e6a9-4bb6-85bf-03cffd5517e5/0x1cd3f10 for node (null)/3232261517 (2 total)
Nov 13 13:44:22 [467] vm3      attrd: (membership.c:432   )    info: crm_get_peer: 	Node 3232261517 has uuid 3232261517
Nov 13 13:44:22 [467] vm3      attrd: (       cpg.c:384   )    info: pcmk_cpg_membership: 	Member[0.0] attrd.3232261517 
Nov 13 13:44:22 [467] vm3      attrd: (membership.c:550   )    info: crm_update_peer_proc: 	pcmk_cpg_membership: Node (null)[3232261517] - corosync-cpg is now online
Nov 13 13:44:22 [467] vm3      attrd: (membership.c:607   )  notice: crm_update_peer_state: 	attrd_peer_change_cb: Node (null)[3232261517] - state is now member (was (null))
Nov 13 13:44:22 [467] vm3      attrd: (membership.c:399   )    info: crm_get_peer: 	Created entry bcf3b23c-c185-4ca8-90aa-68b1949e44ea/0x1cd1fa0 for node (null)/3232261518 (3 total)
Nov 13 13:44:22 [467] vm3      attrd: (membership.c:432   )    info: crm_get_peer: 	Node 3232261518 has uuid 3232261518
Nov 13 13:44:22 [467] vm3      attrd: (       cpg.c:384   )    info: pcmk_cpg_membership: 	Member[0.1] attrd.3232261518 
Nov 13 13:44:22 [467] vm3      attrd: (membership.c:550   )    info: crm_update_peer_proc: 	pcmk_cpg_membership: Node (null)[3232261518] - corosync-cpg is now online
Nov 13 13:44:22 [467] vm3      attrd: (membership.c:607   )  notice: crm_update_peer_state: 	attrd_peer_change_cb: Node (null)[3232261518] - state is now member (was (null))
Nov 13 13:44:22 [467] vm3      attrd: (       cpg.c:384   )    info: pcmk_cpg_membership: 	Member[0.2] attrd.3232261519 
Nov 13 13:44:22 [450] vm3 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_created:272 connection created
Nov 13 13:44:22 [450] vm3 corosync debug   [CMAP  ] cmap.c:cmap_lib_init_fn:306 lib_init_fn: conn=0x7ffdfd0818b0
Nov 13 13:44:22 [469] vm3       crmd: (      ipcc.c:378   )   debug: qb_ipcc_disconnect: 	qb_ipcc_disconnect()
Nov 13 13:44:22 [469] vm3       crmd: (ringbuffer.c:302   )   debug: qb_rb_close: 	Closing ringbuffer: /dev/shm/qb-cmap-request-451-469-33-header
Nov 13 13:44:22 [469] vm3       crmd: (ringbuffer.c:302   )   debug: qb_rb_close: 	Closing ringbuffer: /dev/shm/qb-cmap-response-451-469-33-header
Nov 13 13:44:22 [469] vm3       crmd: (ringbuffer.c:302   )   debug: qb_rb_close: 	Closing ringbuffer: /dev/shm/qb-cmap-event-451-469-33-header
Nov 13 13:44:22 [465] vm3 stonith-ng: (cib_native.c:268   )   debug: cib_native_signon_raw: 	Connection to CIB successful
Nov 13 13:44:22 [464] vm3        cib: ( callbacks.c:189   )   debug: cib_common_callback_worker: 	Setting cib_diff_notify callbacks for crmd (3b3586a2-45ff-4ba6-8242-af9114a2a8e7): on
Nov 13 13:44:22 [465] vm3 stonith-ng: (      main.c:978   )  notice: setup_cib: 	Watching for stonith topology changes
Nov 13 13:44:22 [465] vm3 stonith-ng: ( ipc_setup.c:377   )    info: qb_ipcs_us_publish: 	server name: stonith-ng
Nov 13 13:44:22 [465] vm3 stonith-ng: (      main.c:1208  )    info: main: 	Starting stonith-ng mainloop
Nov 13 13:44:22 [465] vm3 stonith-ng: (       cpg.c:378   )    info: pcmk_cpg_membership: 	Joined[0.0] stonith-ng.3232261519 
Nov 13 13:44:22 [465] vm3 stonith-ng: (membership.c:399   )    info: crm_get_peer: 	Created entry 13cad9fd-c8f5-4e86-9b27-1cbcc9db7a6f/0x1b05680 for node (null)/3232261517 (2 total)
Nov 13 13:44:22 [465] vm3 stonith-ng: (membership.c:432   )    info: crm_get_peer: 	Node 3232261517 has uuid 3232261517
Nov 13 13:44:22 [465] vm3 stonith-ng: (       cpg.c:384   )    info: pcmk_cpg_membership: 	Member[0.0] stonith-ng.3232261517 
Nov 13 13:44:22 [465] vm3 stonith-ng: (membership.c:550   )    info: crm_update_peer_proc: 	pcmk_cpg_membership: Node (null)[3232261517] - corosync-cpg is now online
Nov 13 13:44:22 [465] vm3 stonith-ng: (      main.c:1003  )   debug: st_peer_update_callback: 	Broadcasting our uname because of node 3232261517
Nov 13 13:44:22 [450] vm3 corosync debug   [QB    ] ipcs.c:qb_ipcs_dispatch_connection_request:757 HUP conn (451-469-33)
Nov 13 13:44:22 [464] vm3        cib: ( callbacks.c:761   )    info: cib_process_request: 	Completed cib_query operation for section 'all': OK (rc=0, origin=local/crmd/2, version=0.0.0)
Nov 13 13:44:22 [450] vm3 corosync debug   [QB    ] ipcs.c:qb_ipcs_disconnect:605 qb_ipcs_disconnect(451-469-33) state:2
Nov 13 13:44:22 [450] vm3 corosync debug   [QB    ] loop_poll_epoll.c:_del:117 epoll_ctl(del): Bad file descriptor (9)
Nov 13 13:44:22 [450] vm3 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_closed:417 cs_ipcs_connection_closed() 
Nov 13 13:44:22 [450] vm3 corosync debug   [CMAP  ] cmap.c:cmap_lib_exit_fn:325 exit_fn for conn=0x7ffdfd0818b0
Nov 13 13:44:22 [450] vm3 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_destroyed:390 cs_ipcs_connection_destroyed() 
Nov 13 13:44:22 [450] vm3 corosync debug   [QB    ] ringbuffer.c:qb_rb_close:299 Free'ing ringbuffer: /dev/shm/qb-cmap-response-451-469-33-header
Nov 13 13:44:22 [450] vm3 corosync debug   [QB    ] ringbuffer.c:qb_rb_close:299 Free'ing ringbuffer: /dev/shm/qb-cmap-event-451-469-33-header
Nov 13 13:44:22 [450] vm3 corosync debug   [QB    ] ringbuffer.c:qb_rb_close:299 Free'ing ringbuffer: /dev/shm/qb-cmap-request-451-469-33-header
Nov 13 13:44:22 [450] vm3 corosync debug   [QB    ] ipc_setup.c:handle_new_connection:484 IPC credentials authenticated (451-469-33)
Nov 13 13:44:22 [450] vm3 corosync debug   [QB    ] ipc_shm.c:qb_ipcs_shm_connect:295 connecting to client [469]
Nov 13 13:44:22 [450] vm3 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:22 [450] vm3 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:22 [450] vm3 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:22 [469] vm3       crmd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:22 [469] vm3       crmd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:22 [469] vm3       crmd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:22 [450] vm3 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_created:272 connection created
Nov 13 13:44:22 [450] vm3 corosync debug   [CMAP  ] cmap.c:cmap_lib_init_fn:306 lib_init_fn: conn=0x7ffdfd0818b0
Nov 13 13:44:22 [469] vm3       crmd: (      ipcc.c:378   )   debug: qb_ipcc_disconnect: 	qb_ipcc_disconnect()
Nov 13 13:44:22 [469] vm3       crmd: (ringbuffer.c:302   )   debug: qb_rb_close: 	Closing ringbuffer: /dev/shm/qb-cmap-request-451-469-33-header
Nov 13 13:44:22 [469] vm3       crmd: (ringbuffer.c:302   )   debug: qb_rb_close: 	Closing ringbuffer: /dev/shm/qb-cmap-response-451-469-33-header
Nov 13 13:44:22 [469] vm3       crmd: (ringbuffer.c:302   )   debug: qb_rb_close: 	Closing ringbuffer: /dev/shm/qb-cmap-event-451-469-33-header
Nov 13 13:44:22 [469] vm3       crmd: (   control.c:146   )    info: do_ha_control: 	Connected to the cluster
Nov 13 13:44:22 [469] vm3       crmd: (       lrm.c:299   )   debug: do_lrm_control: 	Connecting to the LRM
Nov 13 13:44:22 [469] vm3       crmd: (lrmd_client.:938   )    info: lrmd_ipc_connect: 	Connecting to lrmd
Nov 13 13:44:22 [464] vm3        cib: ( callbacks.c:761   )    info: cib_process_request: 	Completed cib_modify operation for section nodes: OK (rc=0, origin=local/crmd/3, version=0.0.0)
Nov 13 13:44:22 [466] vm3       lrmd: (      main.c:89    )   trace: lrmd_ipc_accept: 	Connection 0xfcac10
Nov 13 13:44:22 [466] vm3       lrmd: (       ipc.c:327   )    info: crm_client_new: 	Connecting 0xfcac10 for uid=496 gid=492 pid=469 id=d1a560c5-bbbd-475e-a3cb-6fca0227063d
Nov 13 13:44:22 [466] vm3       lrmd: ( ipc_setup.c:484   )   debug: handle_new_connection: 	IPC credentials authenticated (466-469-6)
Nov 13 13:44:22 [466] vm3       lrmd: (   ipc_shm.c:295   )   debug: qb_ipcs_shm_connect: 	connecting to client [469]
Nov 13 13:44:22 [466] vm3       lrmd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:131085; real_size:135168; rb->word_size:33792
Nov 13 13:44:22 [466] vm3       lrmd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:131085; real_size:135168; rb->word_size:33792
Nov 13 13:44:22 [466] vm3       lrmd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:131085; real_size:135168; rb->word_size:33792
Nov 13 13:44:22 [469] vm3       crmd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:131085; real_size:135168; rb->word_size:33792
Nov 13 13:44:22 [469] vm3       crmd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:131085; real_size:135168; rb->word_size:33792
Nov 13 13:44:22 [469] vm3       crmd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:131085; real_size:135168; rb->word_size:33792
Nov 13 13:44:22 [450] vm3 corosync debug   [QB    ] ipc_setup.c:handle_new_connection:484 IPC credentials authenticated (451-465-34)
Nov 13 13:44:22 [450] vm3 corosync debug   [QB    ] ipc_shm.c:qb_ipcs_shm_connect:295 connecting to client [465]
Nov 13 13:44:22 [450] vm3 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:22 [466] vm3       lrmd: (      main.c:99    )   trace: lrmd_ipc_created: 	Connection 0xfcac10
Nov 13 13:44:22 [466] vm3       lrmd: (      main.c:179   )   trace: lrmd_server_send_reply: 	sending reply to client (d1a560c5-bbbd-475e-a3cb-6fca0227063d) with msg id 6
Nov 13 13:44:22 [469] vm3       crmd: (       lrm.c:321   )    info: do_lrm_control: 	LRM connection established
Nov 13 13:44:22 [469] vm3       crmd: (   control.c:768   )    info: do_started: 	Delaying start, no membership data (0000000000100000)
Nov 13 13:44:22 [469] vm3       crmd: (  messages.c:90    )   debug: register_fsa_input_adv: 	Stalling the FSA pending further input: source=do_started cause=C_FSA_INTERNAL data=(nil) queue=0
Nov 13 13:44:22 [469] vm3       crmd: (       fsa.c:240   )   debug: s_crmd_fsa: 	Exiting the FSA: queue=0, fsa_actions=0x2, stalled=true
Nov 13 13:44:22 [469] vm3       crmd: (      main.c:142   )   trace: crmd_init: 	Starting crmd's mainloop
Nov 13 13:44:22 [469] vm3       crmd: (  corosync.c:191   )    info: pcmk_quorum_notification: 	Membership 12: quorum retained (3)
Nov 13 13:44:22 [469] vm3       crmd: (  corosync.c:210   )   debug: pcmk_quorum_notification: 	Member[0] 3232261517 
Nov 13 13:44:22 [469] vm3       crmd: (membership.c:399   )    info: crm_get_peer: 	Created entry a2f5c386-a159-43d7-a313-c73e5ec261c8/0x28a6dc0 for node (null)/3232261517 (2 total)
Nov 13 13:44:22 [469] vm3       crmd: (membership.c:432   )    info: crm_get_peer: 	Node 3232261517 has uuid 3232261517
Nov 13 13:44:22 [469] vm3       crmd: (  corosync.c:214   )    info: pcmk_quorum_notification: 	Obtaining name for new node 3232261517
Nov 13 13:44:22 [464] vm3        cib: ( callbacks.c:761   )    info: cib_process_request: 	Completed cib_query operation for section crm_config: OK (rc=0, origin=local/crmd/4, version=0.0.0)
Nov 13 13:44:22 [466] vm3       lrmd: (      lrmd.c:1313  )   debug: process_lrmd_message: 	Processed register operation from d1a560c5-bbbd-475e-a3cb-6fca0227063d: rc=0, reply=0, notify=0, exit=4201920
Nov 13 13:44:22 [450] vm3 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:22 [450] vm3 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:22 [465] vm3 stonith-ng: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:22 [465] vm3 stonith-ng: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:22 [465] vm3 stonith-ng: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:22 [450] vm3 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_created:272 connection created
Nov 13 13:44:22 [450] vm3 corosync debug   [CMAP  ] cmap.c:cmap_lib_init_fn:306 lib_init_fn: conn=0x7ffdfd082900
Nov 13 13:44:22 [450] vm3 corosync debug   [QB    ] ipcs.c:qb_ipcs_dispatch_connection_request:757 HUP conn (451-469-33)
Nov 13 13:44:22 [450] vm3 corosync debug   [QB    ] ipcs.c:qb_ipcs_disconnect:605 qb_ipcs_disconnect(451-469-33) state:2
Nov 13 13:44:22 [450] vm3 corosync debug   [QB    ] loop_poll_epoll.c:_del:117 epoll_ctl(del): Bad file descriptor (9)
Nov 13 13:44:22 [450] vm3 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_closed:417 cs_ipcs_connection_closed() 
Nov 13 13:44:22 [450] vm3 corosync debug   [CMAP  ] cmap.c:cmap_lib_exit_fn:325 exit_fn for conn=0x7ffdfd0818b0
Nov 13 13:44:22 [450] vm3 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_destroyed:390 cs_ipcs_connection_destroyed() 
Nov 13 13:44:22 [450] vm3 corosync debug   [QB    ] ringbuffer.c:qb_rb_close:299 Free'ing ringbuffer: /dev/shm/qb-cmap-response-451-469-33-header
Nov 13 13:44:22 [450] vm3 corosync debug   [QB    ] ringbuffer.c:qb_rb_close:299 Free'ing ringbuffer: /dev/shm/qb-cmap-event-451-469-33-header
Nov 13 13:44:22 [450] vm3 corosync debug   [QB    ] ringbuffer.c:qb_rb_close:299 Free'ing ringbuffer: /dev/shm/qb-cmap-request-451-469-33-header
Nov 13 13:44:22 [465] vm3 stonith-ng: (      ipcc.c:378   )   debug: qb_ipcc_disconnect: 	qb_ipcc_disconnect()
Nov 13 13:44:22 [465] vm3 stonith-ng: (ringbuffer.c:302   )   debug: qb_rb_close: 	Closing ringbuffer: /dev/shm/qb-cmap-request-451-465-34-header
Nov 13 13:44:22 [465] vm3 stonith-ng: (ringbuffer.c:302   )   debug: qb_rb_close: 	Closing ringbuffer: /dev/shm/qb-cmap-response-451-465-34-header
Nov 13 13:44:22 [465] vm3 stonith-ng: (ringbuffer.c:302   )   debug: qb_rb_close: 	Closing ringbuffer: /dev/shm/qb-cmap-event-451-465-34-header
Nov 13 13:44:22 [465] vm3 stonith-ng: (  corosync.c:134   )  notice: corosync_node_name: 	Unable to get node name for nodeid 3232261519
Nov 13 13:44:22 [465] vm3 stonith-ng: (   cluster.c:338   )  notice: get_node_name: 	Defaulting to uname -n for the local corosync node name
Nov 13 13:44:22 [465] vm3 stonith-ng: (membership.c:399   )    info: crm_get_peer: 	Created entry 09b9096c-6acc-48a3-9a8b-ae3b3f5f00da/0x1b041c0 for node (null)/3232261518 (3 total)
Nov 13 13:44:22 [465] vm3 stonith-ng: (membership.c:432   )    info: crm_get_peer: 	Node 3232261518 has uuid 3232261518
Nov 13 13:44:22 [465] vm3 stonith-ng: (       cpg.c:384   )    info: pcmk_cpg_membership: 	Member[0.1] stonith-ng.3232261518 
Nov 13 13:44:22 [465] vm3 stonith-ng: (membership.c:550   )    info: crm_update_peer_proc: 	pcmk_cpg_membership: Node (null)[3232261518] - corosync-cpg is now online
Nov 13 13:44:22 [465] vm3 stonith-ng: (      main.c:1003  )   debug: st_peer_update_callback: 	Broadcasting our uname because of node 3232261518
Nov 13 13:44:22 [465] vm3 stonith-ng: (       cpg.c:384   )    info: pcmk_cpg_membership: 	Member[0.2] stonith-ng.3232261519 
Nov 13 13:44:22 [465] vm3 stonith-ng: (      main.c:878   )    info: init_cib_cache_cb: 	Updating device list from the cib: init
Nov 13 13:44:22 [465] vm3 stonith-ng: (      main.c:568   )   trace: fencing_topology_init: 	Pushing in stonith topology
Nov 13 13:44:22 [465] vm3 stonith-ng: (    unpack.c:98    )   debug: unpack_config: 	STONITH timeout: 60000
Nov 13 13:44:22 [465] vm3 stonith-ng: (    unpack.c:102   )   debug: unpack_config: 	STONITH of failed nodes is enabled
Nov 13 13:44:22 [465] vm3 stonith-ng: (    unpack.c:109   )   debug: unpack_config: 	Stop all active resources: false
Nov 13 13:44:22 [465] vm3 stonith-ng: (    unpack.c:113   )   debug: unpack_config: 	Cluster is symmetric - resources can run anywhere by default
Nov 13 13:44:22 [465] vm3 stonith-ng: (    unpack.c:118   )   debug: unpack_config: 	Default stickiness: 0
Nov 13 13:44:22 [465] vm3 stonith-ng: (    unpack.c:155   )   debug: unpack_config: 	On loss of CCM Quorum: Stop ALL resources
Nov 13 13:44:22 [465] vm3 stonith-ng: (    unpack.c:201   )   debug: unpack_config: 	Node scores: 'red' = -INFINITY, 'yellow' = 0, 'green' = 0
Nov 13 13:44:22 [465] vm3 stonith-ng: (    unpack.c:486   )    info: unpack_nodes: 	Creating a fake local node
Nov 13 13:44:22 [465] vm3 stonith-ng: (    unpack.c:509   )   debug: unpack_domains: 	Unpacking domains
Nov 13 13:44:22 [450] vm3 corosync debug   [QB    ] ipcs.c:qb_ipcs_dispatch_connection_request:757 HUP conn (451-465-34)
Nov 13 13:44:22 [450] vm3 corosync debug   [QB    ] ipcs.c:qb_ipcs_disconnect:605 qb_ipcs_disconnect(451-465-34) state:2
Nov 13 13:44:22 [450] vm3 corosync debug   [QB    ] loop_poll_epoll.c:_del:117 epoll_ctl(del): Bad file descriptor (9)
Nov 13 13:44:22 [450] vm3 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_closed:417 cs_ipcs_connection_closed() 
Nov 13 13:44:22 [450] vm3 corosync debug   [CMAP  ] cmap.c:cmap_lib_exit_fn:325 exit_fn for conn=0x7ffdfd082900
Nov 13 13:44:22 [450] vm3 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_destroyed:390 cs_ipcs_connection_destroyed() 
Nov 13 13:44:22 [450] vm3 corosync debug   [QB    ] ringbuffer.c:qb_rb_close:299 Free'ing ringbuffer: /dev/shm/qb-cmap-response-451-465-34-header
Nov 13 13:44:22 [450] vm3 corosync debug   [QB    ] ringbuffer.c:qb_rb_close:299 Free'ing ringbuffer: /dev/shm/qb-cmap-event-451-465-34-header
Nov 13 13:44:22 [465] vm3 stonith-ng: (membership.c:404   )    info: crm_get_peer: 	Node 3232261517 is now known as vm1
Nov 13 13:44:22 [465] vm3 stonith-ng: (      main.c:1003  )   debug: st_peer_update_callback: 	Broadcasting our uname because of node 3232261517
Nov 13 13:44:22 [465] vm3 stonith-ng: (membership.c:404   )    info: crm_get_peer: 	Node 3232261518 is now known as vm2
Nov 13 13:44:22 [465] vm3 stonith-ng: (      main.c:1003  )   debug: st_peer_update_callback: 	Broadcasting our uname because of node 3232261518
Nov 13 13:44:22 [450] vm3 corosync debug   [QB    ] ringbuffer.c:qb_rb_close:299 Free'ing ringbuffer: /dev/shm/qb-cmap-request-451-465-34-header
Nov 13 13:44:22 [450] vm3 corosync debug   [QB    ] ipc_setup.c:handle_new_connection:484 IPC credentials authenticated (451-469-33)
Nov 13 13:44:22 [450] vm3 corosync debug   [QB    ] ipc_shm.c:qb_ipcs_shm_connect:295 connecting to client [469]
Nov 13 13:44:22 [450] vm3 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:22 [450] vm3 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:22 [450] vm3 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:22 [469] vm3       crmd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:22 [450] vm3 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_created:272 connection created
Nov 13 13:44:22 [450] vm3 corosync debug   [CMAP  ] cmap.c:cmap_lib_init_fn:306 lib_init_fn: conn=0x7ffdfd0818b0
Nov 13 13:44:22 [469] vm3       crmd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:22 [469] vm3       crmd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:22 [469] vm3       crmd: (      ipcc.c:378   )   debug: qb_ipcc_disconnect: 	qb_ipcc_disconnect()
Nov 13 13:44:22 [469] vm3       crmd: (ringbuffer.c:302   )   debug: qb_rb_close: 	Closing ringbuffer: /dev/shm/qb-cmap-request-451-469-33-header
Nov 13 13:44:22 [469] vm3       crmd: (ringbuffer.c:302   )   debug: qb_rb_close: 	Closing ringbuffer: /dev/shm/qb-cmap-response-451-469-33-header
Nov 13 13:44:22 [469] vm3       crmd: (ringbuffer.c:302   )   debug: qb_rb_close: 	Closing ringbuffer: /dev/shm/qb-cmap-event-451-469-33-header
Nov 13 13:44:22 [469] vm3       crmd: (  corosync.c:134   )  notice: corosync_node_name: 	Unable to get node name for nodeid 3232261517
Nov 13 13:44:22 [469] vm3       crmd: (membership.c:607   )  notice: crm_update_peer_state: 	pcmk_quorum_notification: Node (null)[3232261517] - state is now member (was (null))
Nov 13 13:44:22 [469] vm3       crmd: (  corosync.c:210   )   debug: pcmk_quorum_notification: 	Member[1] 3232261518 
Nov 13 13:44:22 [469] vm3       crmd: (membership.c:399   )    info: crm_get_peer: 	Created entry de18ede3-e03b-4761-b647-647f7cbe6228/0x28a48f0 for node (null)/3232261518 (3 total)
Nov 13 13:44:22 [469] vm3       crmd: (membership.c:432   )    info: crm_get_peer: 	Node 3232261518 has uuid 3232261518
Nov 13 13:44:22 [469] vm3       crmd: (  corosync.c:214   )    info: pcmk_quorum_notification: 	Obtaining name for new node 3232261518
Nov 13 13:44:22 [450] vm3 corosync debug   [QB    ] ipcs.c:qb_ipcs_dispatch_connection_request:757 HUP conn (451-469-33)
Nov 13 13:44:22 [450] vm3 corosync debug   [QB    ] ipcs.c:qb_ipcs_disconnect:605 qb_ipcs_disconnect(451-469-33) state:2
Nov 13 13:44:22 [450] vm3 corosync debug   [QB    ] loop_poll_epoll.c:_del:117 epoll_ctl(del): Bad file descriptor (9)
Nov 13 13:44:22 [450] vm3 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_closed:417 cs_ipcs_connection_closed() 
Nov 13 13:44:22 [450] vm3 corosync debug   [CMAP  ] cmap.c:cmap_lib_exit_fn:325 exit_fn for conn=0x7ffdfd0818b0
Nov 13 13:44:22 [450] vm3 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_destroyed:390 cs_ipcs_connection_destroyed() 
Nov 13 13:44:22 [450] vm3 corosync debug   [QB    ] ringbuffer.c:qb_rb_close:299 Free'ing ringbuffer: /dev/shm/qb-cmap-response-451-469-33-header
Nov 13 13:44:22 [450] vm3 corosync debug   [QB    ] ringbuffer.c:qb_rb_close:299 Free'ing ringbuffer: /dev/shm/qb-cmap-event-451-469-33-header
Nov 13 13:44:22 [450] vm3 corosync debug   [QB    ] ringbuffer.c:qb_rb_close:299 Free'ing ringbuffer: /dev/shm/qb-cmap-request-451-469-33-header
Nov 13 13:44:22 [450] vm3 corosync debug   [QB    ] ipc_setup.c:handle_new_connection:484 IPC credentials authenticated (451-469-33)
Nov 13 13:44:22 [450] vm3 corosync debug   [QB    ] ipc_shm.c:qb_ipcs_shm_connect:295 connecting to client [469]
Nov 13 13:44:22 [450] vm3 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:22 [450] vm3 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:22 [450] vm3 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:22 [469] vm3       crmd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:22 [469] vm3       crmd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:22 [469] vm3       crmd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:22 [450] vm3 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_created:272 connection created
Nov 13 13:44:22 [450] vm3 corosync debug   [CMAP  ] cmap.c:cmap_lib_init_fn:306 lib_init_fn: conn=0x7ffdfd0818b0
Nov 13 13:44:22 [469] vm3       crmd: (      ipcc.c:378   )   debug: qb_ipcc_disconnect: 	qb_ipcc_disconnect()
Nov 13 13:44:22 [469] vm3       crmd: (ringbuffer.c:302   )   debug: qb_rb_close: 	Closing ringbuffer: /dev/shm/qb-cmap-request-451-469-33-header
Nov 13 13:44:22 [469] vm3       crmd: (ringbuffer.c:302   )   debug: qb_rb_close: 	Closing ringbuffer: /dev/shm/qb-cmap-response-451-469-33-header
Nov 13 13:44:22 [469] vm3       crmd: (ringbuffer.c:302   )   debug: qb_rb_close: 	Closing ringbuffer: /dev/shm/qb-cmap-event-451-469-33-header
Nov 13 13:44:22 [469] vm3       crmd: (  corosync.c:134   )  notice: corosync_node_name: 	Unable to get node name for nodeid 3232261518
Nov 13 13:44:22 [469] vm3       crmd: (membership.c:607   )  notice: crm_update_peer_state: 	pcmk_quorum_notification: Node (null)[3232261518] - state is now member (was (null))
Nov 13 13:44:22 [469] vm3       crmd: (  corosync.c:210   )   debug: pcmk_quorum_notification: 	Member[2] 3232261519 
Nov 13 13:44:22 [469] vm3       crmd: (membership.c:607   )  notice: crm_update_peer_state: 	pcmk_quorum_notification: Node vm3[3232261519] - state is now member (was (null))
Nov 13 13:44:22 [469] vm3       crmd: ( callbacks.c:124   )    info: peer_update_callback: 	vm3 is now member (was (null))
Nov 13 13:44:22 [469] vm3       crmd: (membership.c:81    )   debug: post_cache_update: 	Updated cache after membership event 12.
Nov 13 13:44:22 [469] vm3       crmd: (membership.c:95    )   debug: post_cache_update: 	post_cache_update added action A_ELECTION_CHECK to the FSA
Nov 13 13:44:22 [450] vm3 corosync debug   [QB    ] ipcs.c:qb_ipcs_dispatch_connection_request:757 HUP conn (451-469-33)
Nov 13 13:44:22 [450] vm3 corosync debug   [QB    ] ipcs.c:qb_ipcs_disconnect:605 qb_ipcs_disconnect(451-469-33) state:2
Nov 13 13:44:22 [450] vm3 corosync debug   [QB    ] loop_poll_epoll.c:_del:117 epoll_ctl(del): Bad file descriptor (9)
Nov 13 13:44:22 [450] vm3 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_closed:417 cs_ipcs_connection_closed() 
Nov 13 13:44:22 [450] vm3 corosync debug   [CMAP  ] cmap.c:cmap_lib_exit_fn:325 exit_fn for conn=0x7ffdfd0818b0
Nov 13 13:44:22 [450] vm3 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_destroyed:390 cs_ipcs_connection_destroyed() 
Nov 13 13:44:22 [450] vm3 corosync debug   [QB    ] ringbuffer.c:qb_rb_close:299 Free'ing ringbuffer: /dev/shm/qb-cmap-response-451-469-33-header
Nov 13 13:44:22 [450] vm3 corosync debug   [QB    ] ringbuffer.c:qb_rb_close:299 Free'ing ringbuffer: /dev/shm/qb-cmap-event-451-469-33-header
Nov 13 13:44:22 [450] vm3 corosync debug   [QB    ] ringbuffer.c:qb_rb_close:299 Free'ing ringbuffer: /dev/shm/qb-cmap-request-451-469-33-header
Nov 13 13:44:22 [450] vm3 corosync debug   [QB    ] ipc_setup.c:handle_new_connection:484 IPC credentials authenticated (451-469-33)
Nov 13 13:44:22 [450] vm3 corosync debug   [QB    ] ipc_shm.c:qb_ipcs_shm_connect:295 connecting to client [469]
Nov 13 13:44:22 [450] vm3 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:22 [450] vm3 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:22 [450] vm3 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:22 [469] vm3       crmd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:22 [469] vm3       crmd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:22 [469] vm3       crmd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:22 [450] vm3 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_created:272 connection created
Nov 13 13:44:22 [450] vm3 corosync debug   [CMAP  ] cmap.c:cmap_lib_init_fn:306 lib_init_fn: conn=0x7ffdfd0822f0
Nov 13 13:44:22 [469] vm3       crmd: (      ipcc.c:378   )   debug: qb_ipcc_disconnect: 	qb_ipcc_disconnect()
Nov 13 13:44:22 [469] vm3       crmd: (ringbuffer.c:302   )   debug: qb_rb_close: 	Closing ringbuffer: /dev/shm/qb-cmap-request-451-469-33-header
Nov 13 13:44:22 [469] vm3       crmd: (ringbuffer.c:302   )   debug: qb_rb_close: 	Closing ringbuffer: /dev/shm/qb-cmap-response-451-469-33-header
Nov 13 13:44:22 [469] vm3       crmd: (ringbuffer.c:302   )   debug: qb_rb_close: 	Closing ringbuffer: /dev/shm/qb-cmap-event-451-469-33-header
Nov 13 13:44:22 [469] vm3       crmd: (  corosync.c:134   )  notice: corosync_node_name: 	Unable to get node name for nodeid 3232261519
Nov 13 13:44:22 [469] vm3       crmd: (   cluster.c:338   )  notice: get_node_name: 	Defaulting to uname -n for the local corosync node name
Nov 13 13:44:22 [469] vm3       crmd: (   control.c:786   )    info: do_started: 	Delaying start, Config not read (0000000000000040)
Nov 13 13:44:22 [469] vm3       crmd: (  messages.c:90    )   debug: register_fsa_input_adv: 	Stalling the FSA pending further input: source=do_started cause=C_FSA_INTERNAL data=(nil) queue=0
Nov 13 13:44:22 [469] vm3       crmd: (       fsa.c:240   )   debug: s_crmd_fsa: 	Exiting the FSA: queue=0, fsa_actions=0x200000002, stalled=true
Nov 13 13:44:22 [469] vm3       crmd: (   control.c:915   )   debug: config_query_callback: 	Call 4 : Parsing CIB options
Nov 13 13:44:22 [469] vm3       crmd: (   control.c:939   )   debug: config_query_callback: 	Shutdown escalation occurs after: 1200000ms
Nov 13 13:44:22 [469] vm3       crmd: (   control.c:946   )   debug: config_query_callback: 	Checking for expired actions every 900000ms
Nov 13 13:44:22 [469] vm3       crmd: (   control.c:812   )   debug: do_started: 	Init server comms
Nov 13 13:44:22 [469] vm3       crmd: ( ipc_setup.c:377   )    info: qb_ipcs_us_publish: 	server name: crmd
Nov 13 13:44:22 [469] vm3       crmd: (   control.c:827   )  notice: do_started: 	The local CRM is operational
Nov 13 13:44:22 [469] vm3       crmd: (  election.c:91    )   debug: do_election_check: 	Ignore election check: we not in an election
Nov 13 13:44:22 [469] vm3       crmd: (       fsa.c:193   )   debug: s_crmd_fsa: 	Processing I_PENDING: [ state=S_STARTING cause=C_FSA_INTERNAL origin=do_started ]
Nov 13 13:44:22 [469] vm3       crmd: (      misc.c:47    )    info: do_log: 	FSA: Input I_PENDING from do_started() received in state S_STARTING
Nov 13 13:44:22 [469] vm3       crmd: (       fsa.c:502   )  notice: do_state_transition: 	State transition S_STARTING -> S_PENDING [ input=I_PENDING cause=C_FSA_INTERNAL origin=do_started ]
Nov 13 13:44:22 [464] vm3        cib: ( callbacks.c:761   )    info: cib_process_request: 	Completed cib_slave operation for section 'all': OK (rc=0, origin=local/crmd/5, version=0.0.0)
Nov 13 13:44:22 [450] vm3 corosync debug   [QB    ] ipcs.c:qb_ipcs_dispatch_connection_request:757 HUP conn (451-469-33)
Nov 13 13:44:22 [450] vm3 corosync debug   [QB    ] ipcs.c:qb_ipcs_disconnect:605 qb_ipcs_disconnect(451-469-33) state:2
Nov 13 13:44:22 [450] vm3 corosync debug   [QB    ] loop_poll_epoll.c:_del:117 epoll_ctl(del): Bad file descriptor (9)
Nov 13 13:44:22 [450] vm3 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_closed:417 cs_ipcs_connection_closed() 
Nov 13 13:44:22 [450] vm3 corosync debug   [CMAP  ] cmap.c:cmap_lib_exit_fn:325 exit_fn for conn=0x7ffdfd0822f0
Nov 13 13:44:22 [450] vm3 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_destroyed:390 cs_ipcs_connection_destroyed() 
Nov 13 13:44:22 [450] vm3 corosync debug   [QB    ] ringbuffer.c:qb_rb_close:299 Free'ing ringbuffer: /dev/shm/qb-cmap-response-451-469-33-header
Nov 13 13:44:22 [450] vm3 corosync debug   [QB    ] ringbuffer.c:qb_rb_close:299 Free'ing ringbuffer: /dev/shm/qb-cmap-event-451-469-33-header
Nov 13 13:44:22 [450] vm3 corosync debug   [QB    ] ringbuffer.c:qb_rb_close:299 Free'ing ringbuffer: /dev/shm/qb-cmap-request-451-469-33-header
Nov 13 13:44:23 [469] vm3       crmd: (join_client.:46    )   debug: do_cl_join_query: 	Querying for a DC
Nov 13 13:44:23 [469] vm3       crmd: (     utils.c:192   )   debug: crm_timer_start: 	Started Election Trigger (I_DC_TIMEOUT:20000ms), src=17
Nov 13 13:44:23 [469] vm3       crmd: (       cpg.c:378   )    info: pcmk_cpg_membership: 	Joined[0.0] crmd.3232261519 
Nov 13 13:44:23 [469] vm3       crmd: (       cpg.c:384   )    info: pcmk_cpg_membership: 	Member[0.0] crmd.3232261517 
Nov 13 13:44:23 [469] vm3       crmd: (membership.c:550   )    info: crm_update_peer_proc: 	pcmk_cpg_membership: Node (null)[3232261517] - corosync-cpg is now online
Nov 13 13:44:23 [469] vm3       crmd: (       cpg.c:384   )    info: pcmk_cpg_membership: 	Member[0.1] crmd.3232261518 
Nov 13 13:44:23 [469] vm3       crmd: (membership.c:550   )    info: crm_update_peer_proc: 	pcmk_cpg_membership: Node (null)[3232261518] - corosync-cpg is now online
Nov 13 13:44:23 [469] vm3       crmd: (       cpg.c:384   )    info: pcmk_cpg_membership: 	Member[0.2] crmd.3232261519 
Nov 13 13:44:23 [469] vm3       crmd: (membership.c:404   )    info: crm_get_peer: 	Node 3232261518 is now known as vm2
Nov 13 13:44:23 [469] vm3       crmd: ( callbacks.c:118   )    info: peer_update_callback: 	vm2 is now member
Nov 13 13:44:23 [469] vm3       crmd: (membership.c:404   )    info: crm_get_peer: 	Node 3232261517 is now known as vm1
Nov 13 13:44:23 [469] vm3       crmd: ( callbacks.c:118   )    info: peer_update_callback: 	vm1 is now member
Nov 13 13:44:23 [469] vm3       crmd: (  te_utils.c:248   )   debug: te_connect_stonith: 	Attempting connection to fencing daemon...
Nov 13 13:44:24 [465] vm3 stonith-ng: (       ipc.c:327   )    info: crm_client_new: 	Connecting 0x1b0bc30 for uid=496 gid=492 pid=469 id=f2a5be16-b954-4398-be77-60a78e6c70a6
Nov 13 13:44:24 [465] vm3 stonith-ng: ( ipc_setup.c:484   )   debug: handle_new_connection: 	IPC credentials authenticated (465-469-9)
Nov 13 13:44:24 [465] vm3 stonith-ng: (   ipc_shm.c:295   )   debug: qb_ipcs_shm_connect: 	connecting to client [469]
Nov 13 13:44:24 [465] vm3 stonith-ng: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:131085; real_size:135168; rb->word_size:33792
Nov 13 13:44:24 [465] vm3 stonith-ng: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:131085; real_size:135168; rb->word_size:33792
Nov 13 13:44:24 [465] vm3 stonith-ng: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:131085; real_size:135168; rb->word_size:33792
Nov 13 13:44:24 [469] vm3       crmd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:131085; real_size:135168; rb->word_size:33792
Nov 13 13:44:24 [469] vm3       crmd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:131085; real_size:135168; rb->word_size:33792
Nov 13 13:44:24 [469] vm3       crmd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:131085; real_size:135168; rb->word_size:33792
Nov 13 13:44:24 [465] vm3 stonith-ng: (      main.c:87    )   trace: st_ipc_created: 	Connection created for 0x1b0bc30
Nov 13 13:44:24 [465] vm3 stonith-ng: (      main.c:121   )   trace: st_ipc_dispatch: 	Flags 512/0 for command 9 from crmd.469
Nov 13 13:44:24 [465] vm3 stonith-ng: (      main.c:133   )   trace: st_ipc_dispatch: 	Client[inbound]   <stonith_command t="stonith-ng" st_op="register" st_clientname="crmd.469" st_clientid="f2a5be16-b954-4398-be77-60a78e6c70a6" st_clientnode="vm3"/>
Nov 13 13:44:24 [465] vm3 stonith-ng: (  commands.c:2049  )   debug: stonith_command: 	Processing register 9 from crmd.469 (               0)
Nov 13 13:44:24 [469] vm3       crmd: ( st_client.c:1639  )   debug: stonith_api_signon: 	Connection to STONITH successful
Nov 13 13:44:24 [465] vm3 stonith-ng: (  commands.c:2063  )    info: stonith_command: 	Processed register from crmd.469: OK (0)
Nov 13 13:44:24 [465] vm3 stonith-ng: (      main.c:121   )   trace: st_ipc_dispatch: 	Flags 512/0 for command 10 from crmd.469
Nov 13 13:44:24 [465] vm3 stonith-ng: (      main.c:133   )   trace: st_ipc_dispatch: 	Client[inbound]   <stonith_set_notification st_op="st_notify" st_notify_activate="st_notify_disconnect" st_clientid="f2a5be16-b954-4398-be77-60a78e6c70a6" st_clientname="crmd.469" st_clientnode="vm3"/>
Nov 13 13:44:24 [465] vm3 stonith-ng: (  commands.c:2049  )   debug: stonith_command: 	Processing st_notify 10 from crmd.469 (               0)
Nov 13 13:44:24 [465] vm3 stonith-ng: (  commands.c:1822  )   debug: handle_request: 	Setting st_notify_disconnect callbacks for crmd.469 (f2a5be16-b954-4398-be77-60a78e6c70a6): ON
Nov 13 13:44:24 [465] vm3 stonith-ng: (  commands.c:2063  )    info: stonith_command: 	Processed st_notify from crmd.469: OK (0)
Nov 13 13:44:24 [465] vm3 stonith-ng: (      main.c:121   )   trace: st_ipc_dispatch: 	Flags 512/0 for command 11 from crmd.469
Nov 13 13:44:24 [465] vm3 stonith-ng: (      main.c:133   )   trace: st_ipc_dispatch: 	Client[inbound]   <stonith_set_notification st_op="st_notify" st_notify_activate="st_notify_fence" st_clientid="f2a5be16-b954-4398-be77-60a78e6c70a6" st_clientname="crmd.469" st_clientnode="vm3"/>
Nov 13 13:44:24 [465] vm3 stonith-ng: (  commands.c:2049  )   debug: stonith_command: 	Processing st_notify 11 from crmd.469 (               0)
Nov 13 13:44:24 [465] vm3 stonith-ng: (  commands.c:1822  )   debug: handle_request: 	Setting st_notify_fence callbacks for crmd.469 (f2a5be16-b954-4398-be77-60a78e6c70a6): ON
Nov 13 13:44:24 [465] vm3 stonith-ng: (  commands.c:2063  )    info: stonith_command: 	Processed st_notify from crmd.469: OK (0)
Nov 13 13:44:42 [469] vm3       crmd: (  election.c:352   )   debug: election_count_vote: 	Created voted hash
Nov 13 13:44:42 [469] vm3       crmd: (  election.c:172   )   debug: crm_uptime: 	Current CPU usage is: 0s, 24996us
Nov 13 13:44:42 [469] vm3       crmd: (  election.c:204   )   debug: crm_compare_age: 	Loose: 0.24996 vs 0.31995 (usec)
Nov 13 13:44:42 [469] vm3       crmd: (  election.c:511   )    info: election_count_vote: 	Election 1 (owner: 3232261517) lost: vote from vm1 (Uptime)
Nov 13 13:44:42 [469] vm3       crmd: (  election.c:91    )   debug: do_election_check: 	Ignore election check: we not in an election
Nov 13 13:44:42 [469] vm3       crmd: (       fsa.c:193   )   debug: s_crmd_fsa: 	Processing I_PENDING: [ state=S_PENDING cause=C_FSA_INTERNAL origin=do_election_count_vote ]
Nov 13 13:44:42 [469] vm3       crmd: (      misc.c:47    )    info: do_log: 	FSA: Input I_PENDING from do_election_count_vote() received in state S_PENDING
Nov 13 13:44:42 [469] vm3       crmd: (     utils.c:192   )   debug: crm_timer_start: 	Started Election Trigger (I_DC_TIMEOUT:20000ms), src=19
Nov 13 13:44:43 [450] vm3 corosync debug   [QB    ] ipc_setup.c:handle_new_connection:484 IPC credentials authenticated (451-464-33)
Nov 13 13:44:43 [450] vm3 corosync debug   [QB    ] ipc_shm.c:qb_ipcs_shm_connect:295 connecting to client [464]
Nov 13 13:44:43 [450] vm3 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:43 [450] vm3 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:43 [450] vm3 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:43 [464] vm3        cib: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:43 [464] vm3        cib: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:43 [464] vm3        cib: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:43 [450] vm3 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_created:272 connection created
Nov 13 13:44:43 [450] vm3 corosync debug   [CMAP  ] cmap.c:cmap_lib_init_fn:306 lib_init_fn: conn=0x7ffdfd0822f0
Nov 13 13:44:43 [464] vm3        cib: (      ipcc.c:378   )   debug: qb_ipcc_disconnect: 	qb_ipcc_disconnect()
Nov 13 13:44:43 [464] vm3        cib: (ringbuffer.c:302   )   debug: qb_rb_close: 	Closing ringbuffer: /dev/shm/qb-cmap-request-451-464-33-header
Nov 13 13:44:43 [464] vm3        cib: (ringbuffer.c:302   )   debug: qb_rb_close: 	Closing ringbuffer: /dev/shm/qb-cmap-response-451-464-33-header
Nov 13 13:44:43 [464] vm3        cib: (ringbuffer.c:302   )   debug: qb_rb_close: 	Closing ringbuffer: /dev/shm/qb-cmap-event-451-464-33-header
Nov 13 13:44:43 [464] vm3        cib: (  corosync.c:134   )  notice: corosync_node_name: 	Unable to get node name for nodeid 3232261519
Nov 13 13:44:43 [464] vm3        cib: (   cluster.c:338   )  notice: get_node_name: 	Defaulting to uname -n for the local corosync node name
Nov 13 13:44:43 [464] vm3        cib: (membership.c:404   )    info: crm_get_peer: 	Node 3232261517 is now known as vm1
Nov 13 13:44:43 [465] vm3 stonith-ng: ( cib_utils.c:167   )   debug: Config update: 	Diff: --- 0.0.0
Nov 13 13:44:43 [465] vm3 stonith-ng: ( cib_utils.c:169   )   debug: Config update: 	Diff: +++ 0.0.1 335eff11d8e47ed96126ba44f4ec45e7
Nov 13 13:44:43 [465] vm3 stonith-ng: (       xml.c:1496  )   debug: Config update: 	-- <cib num_updates="0"/>
Nov 13 13:44:43 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++ <cib epoch="0" num_updates="1" admin_epoch="0" validate-with="pacemaker-1.2" crm_feature_set="3.0.8"/>
Nov 13 13:44:43 [450] vm3 corosync debug   [QB    ] ipcs.c:qb_ipcs_dispatch_connection_request:757 HUP conn (451-464-33)
Nov 13 13:44:43 [450] vm3 corosync debug   [QB    ] ipcs.c:qb_ipcs_disconnect:605 qb_ipcs_disconnect(451-464-33) state:2
Nov 13 13:44:43 [450] vm3 corosync debug   [QB    ] loop_poll_epoll.c:_del:117 epoll_ctl(del): Bad file descriptor (9)
Nov 13 13:44:43 [450] vm3 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_closed:417 cs_ipcs_connection_closed() 
Nov 13 13:44:43 [450] vm3 corosync debug   [CMAP  ] cmap.c:cmap_lib_exit_fn:325 exit_fn for conn=0x7ffdfd0822f0
Nov 13 13:44:43 [450] vm3 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_destroyed:390 cs_ipcs_connection_destroyed() 
Nov 13 13:44:43 [450] vm3 corosync debug   [QB    ] ringbuffer.c:qb_rb_close:299 Free'ing ringbuffer: /dev/shm/qb-cmap-response-451-464-33-header
Nov 13 13:44:43 [450] vm3 corosync debug   [QB    ] ringbuffer.c:qb_rb_close:299 Free'ing ringbuffer: /dev/shm/qb-cmap-event-451-464-33-header
Nov 13 13:44:43 [450] vm3 corosync debug   [QB    ] ringbuffer.c:qb_rb_close:299 Free'ing ringbuffer: /dev/shm/qb-cmap-request-451-464-33-header
Nov 13 13:44:43 [464] vm3        cib: ( callbacks.c:761   )    info: cib_process_request: 	Completed cib_apply_diff operation for section cib: OK (rc=0, origin=vm1/crmd/7, version=0.0.1)
Nov 13 13:44:43 [469] vm3       crmd: (  messages.c:729   )   debug: handle_request: 	Raising I_JOIN_OFFER: join-1
Nov 13 13:44:43 [469] vm3       crmd: (       fsa.c:193   )   debug: s_crmd_fsa: 	Processing I_JOIN_OFFER: [ state=S_PENDING cause=C_HA_MESSAGE origin=route_message ]
Nov 13 13:44:43 [469] vm3       crmd: (     utils.c:981   )    info: update_dc: 	Set DC to vm1 (3.0.8)
Nov 13 13:44:43 [469] vm3       crmd: (join_client.:135   )   debug: do_cl_join_offer_respond: 	do_cl_join_offer_respond added action A_DC_TIMER_STOP to the FSA
Nov 13 13:44:43 [464] vm3        cib: (        io.c:596   )   debug: activateCibXml: 	Triggering CIB write for cib_apply_diff op
Nov 13 13:44:43 [465] vm3 stonith-ng: ( cib_utils.c:174   )   debug: log_cib_diff: 	Config update: Local-only Change: 0.1.1
Nov 13 13:44:43 [465] vm3 stonith-ng: (       xml.c:1496  )   debug: Config update: 	-- <cib epoch="0" admin_epoch="0" num_updates="1"/>
Nov 13 13:44:43 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+  <cib epoch="1" num_updates="1" admin_epoch="0" validate-with="pacemaker-1.2" crm_feature_set="3.0.8" cib-last-written="Wed Nov 13 13:44:43 2013" update-origin="vm1" update-client="crmd">
Nov 13 13:44:43 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+    <configuration>
Nov 13 13:44:43 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+      <crm_config>
Nov 13 13:44:43 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++       <cluster_property_set id="cib-bootstrap-options">
Nov 13 13:44:43 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++         <nvpair id="cib-bootstrap-options-dc-version" name="dc-version" value="1.1.10-2383f6c"/>
Nov 13 13:44:43 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++       </cluster_property_set>
Nov 13 13:44:43 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+      </crm_config>
Nov 13 13:44:43 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+    </configuration>
Nov 13 13:44:43 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+  </cib>
Nov 13 13:44:43 [464] vm3        cib: ( callbacks.c:761   )    info: cib_process_request: 	Completed cib_apply_diff operation for section crm_config: OK (rc=0, origin=vm1/crmd/9, version=0.1.1)
Nov 13 13:44:43 [464] vm3        cib: ( callbacks.c:761   )    info: cib_process_request: 	Completed cib_query operation for section 'all': OK (rc=0, origin=local/crmd/6, version=0.1.1)
Nov 13 13:44:43 [469] vm3       crmd: (join_client.:157   )   debug: join_query_callback: 	Respond to join offer join-1
Nov 13 13:44:43 [469] vm3       crmd: (join_client.:158   )   debug: join_query_callback: 	Acknowledging vm1 as our DC
Nov 13 13:44:43 [464] vm3        cib: ( callbacks.c:761   )    info: cib_process_request: 	Completed cib_query operation for section crm_config: OK (rc=0, origin=local/crmd/7, version=0.1.1)
Nov 13 13:44:43 [469] vm3       crmd: (   control.c:915   )   debug: config_query_callback: 	Call 7 : Parsing CIB options
Nov 13 13:44:43 [469] vm3       crmd: (   control.c:939   )   debug: config_query_callback: 	Shutdown escalation occurs after: 1200000ms
Nov 13 13:44:43 [469] vm3       crmd: (   control.c:946   )   debug: config_query_callback: 	Checking for expired actions every 900000ms
Nov 13 13:44:43 [469] vm3       crmd: (  election.c:204   )   debug: crm_compare_age: 	Loose: 0.24996 vs 0.31995 (usec)
Nov 13 13:44:43 [469] vm3       crmd: (  election.c:511   )    info: election_count_vote: 	Election 2 (owner: 3232261517) lost: vote from vm1 (Uptime)
Nov 13 13:44:43 [469] vm3       crmd: (     utils.c:984   )    info: update_dc: 	Unset DC. Was vm1
Nov 13 13:44:43 [469] vm3       crmd: (  election.c:91    )   debug: do_election_check: 	Ignore election check: we not in an election
Nov 13 13:44:43 [469] vm3       crmd: (       fsa.c:193   )   debug: s_crmd_fsa: 	Processing I_PENDING: [ state=S_PENDING cause=C_FSA_INTERNAL origin=do_election_count_vote ]
Nov 13 13:44:43 [469] vm3       crmd: (      misc.c:47    )    info: do_log: 	FSA: Input I_PENDING from do_election_count_vote() received in state S_PENDING
Nov 13 13:44:43 [469] vm3       crmd: (     utils.c:192   )   debug: crm_timer_start: 	Started Election Trigger (I_DC_TIMEOUT:20000ms), src=22
Nov 13 13:44:43 [469] vm3       crmd: (  messages.c:729   )   debug: handle_request: 	Raising I_JOIN_OFFER: join-2
Nov 13 13:44:43 [469] vm3       crmd: (       fsa.c:193   )   debug: s_crmd_fsa: 	Processing I_JOIN_OFFER: [ state=S_PENDING cause=C_HA_MESSAGE origin=route_message ]
Nov 13 13:44:43 [469] vm3       crmd: (     utils.c:981   )    info: update_dc: 	Set DC to vm1 (3.0.8)
Nov 13 13:44:43 [469] vm3       crmd: (join_client.:135   )   debug: do_cl_join_offer_respond: 	do_cl_join_offer_respond added action A_DC_TIMER_STOP to the FSA
Nov 13 13:44:43 [464] vm3        cib: (        io.c:596   )   debug: activateCibXml: 	Triggering CIB write for cib_apply_diff op
Nov 13 13:44:43 [465] vm3 stonith-ng: ( cib_utils.c:174   )   debug: log_cib_diff: 	Config update: Local-only Change: 0.2.1
Nov 13 13:44:43 [465] vm3 stonith-ng: (       xml.c:1496  )   debug: Config update: 	-- <cib epoch="1" admin_epoch="0" num_updates="1"/>
Nov 13 13:44:43 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+  <cib epoch="2" num_updates="1" admin_epoch="0" validate-with="pacemaker-1.2" crm_feature_set="3.0.8" cib-last-written="Wed Nov 13 13:44:43 2013" update-origin="vm1" update-client="crmd">
Nov 13 13:44:43 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+    <configuration>
Nov 13 13:44:43 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+      <crm_config>
Nov 13 13:44:43 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+        <cluster_property_set id="cib-bootstrap-options">
Nov 13 13:44:43 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++         <nvpair id="cib-bootstrap-options-cluster-infrastructure" name="cluster-infrastructure" value="corosync"/>
Nov 13 13:44:43 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+        </cluster_property_set>
Nov 13 13:44:43 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+      </crm_config>
Nov 13 13:44:43 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+    </configuration>
Nov 13 13:44:43 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+  </cib>
Nov 13 13:44:43 [464] vm3        cib: ( callbacks.c:761   )    info: cib_process_request: 	Completed cib_apply_diff operation for section crm_config: OK (rc=0, origin=vm1/crmd/11, version=0.2.1)
Nov 13 13:44:43 [464] vm3        cib: ( callbacks.c:761   )    info: cib_process_request: 	Completed cib_query operation for section 'all': OK (rc=0, origin=local/crmd/8, version=0.2.1)
Nov 13 13:44:43 [469] vm3       crmd: (join_client.:157   )   debug: join_query_callback: 	Respond to join offer join-2
Nov 13 13:44:43 [469] vm3       crmd: (join_client.:158   )   debug: join_query_callback: 	Acknowledging vm1 as our DC
Nov 13 13:44:43 [464] vm3        cib: ( callbacks.c:761   )    info: cib_process_request: 	Completed cib_query operation for section crm_config: OK (rc=0, origin=local/crmd/9, version=0.2.1)
Nov 13 13:44:43 [469] vm3       crmd: (   control.c:915   )   debug: config_query_callback: 	Call 9 : Parsing CIB options
Nov 13 13:44:43 [469] vm3       crmd: (   control.c:939   )   debug: config_query_callback: 	Shutdown escalation occurs after: 1200000ms
Nov 13 13:44:43 [469] vm3       crmd: (   control.c:946   )   debug: config_query_callback: 	Checking for expired actions every 900000ms
Nov 13 13:44:43 [469] vm3       crmd: (  messages.c:733   )   debug: handle_request: 	Raising I_JOIN_RESULT: join-2
Nov 13 13:44:43 [469] vm3       crmd: (       fsa.c:193   )   debug: s_crmd_fsa: 	Processing I_JOIN_RESULT: [ state=S_PENDING cause=C_HA_MESSAGE origin=route_message ]
Nov 13 13:44:43 [469] vm3       crmd: (join_client.:231   )   debug: do_cl_join_finalize_respond: 	Confirming join join-2: join_ack_nack
Nov 13 13:44:43 [469] vm3       crmd: (join_client.:240   )   debug: do_cl_join_finalize_respond: 	join-2: Join complete.  Sending local LRM status to vm1
Nov 13 13:44:43 [469] vm3       crmd: (     utils.c:1011  )    info: erase_status_tag: 	Deleting xpath: //node_state[@uname='vm3']/transient_attributes
Nov 13 13:44:43 [469] vm3       crmd: (     utils.c:1032  )    info: update_attrd_helper: 	Connecting to attrd... 5 retries remaining
Nov 13 13:44:43 [467] vm3      attrd: (      main.c:186   )   trace: attrd_ipc_accept: 	Connection 0x1cd1010
Nov 13 13:44:43 [467] vm3      attrd: (       ipc.c:327   )    info: crm_client_new: 	Connecting 0x1cd1010 for uid=496 gid=492 pid=469 id=45bf9972-26c8-429f-bfaa-3b8d09a6e952
Nov 13 13:44:43 [467] vm3      attrd: ( ipc_setup.c:484   )   debug: handle_new_connection: 	IPC credentials authenticated (467-469-9)
Nov 13 13:44:43 [467] vm3      attrd: (   ipc_shm.c:295   )   debug: qb_ipcs_shm_connect: 	connecting to client [469]
Nov 13 13:44:43 [467] vm3      attrd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:131085; real_size:135168; rb->word_size:33792
Nov 13 13:44:43 [467] vm3      attrd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:131085; real_size:135168; rb->word_size:33792
Nov 13 13:44:43 [467] vm3      attrd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:131085; real_size:135168; rb->word_size:33792
Nov 13 13:44:43 [464] vm3        cib: (   cib_ops.c:222   )    info: cib_process_replace: 	Digest matched on replace from vm1: 360fde11e7cf93696f974eea17cffd9b
Nov 13 13:44:43 [464] vm3        cib: (   cib_ops.c:258   )    info: cib_process_replace: 	Replaced 0.2.1 with 0.2.1 from vm1
Nov 13 13:44:43 [464] vm3        cib: (        io.c:596   )   debug: activateCibXml: 	Triggering CIB write for cib_replace op
Nov 13 13:44:43 [464] vm3        cib: ( callbacks.c:761   )    info: cib_process_request: 	Completed cib_replace operation for section 'all': OK (rc=0, origin=vm1/crmd/17, version=0.2.1)
Nov 13 13:44:43 [469] vm3       crmd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:131085; real_size:135168; rb->word_size:33792
Nov 13 13:44:43 [469] vm3       crmd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:131085; real_size:135168; rb->word_size:33792
Nov 13 13:44:43 [469] vm3       crmd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:131085; real_size:135168; rb->word_size:33792
Nov 13 13:44:43 [469] vm3       crmd: (     utils.c:2001  )   debug: attrd_update_delegate: 	Sent update: terminate=(null) for vm3
Nov 13 13:44:43 [469] vm3       crmd: (     utils.c:2001  )   debug: attrd_update_delegate: 	Sent update: shutdown=(null) for vm3
Nov 13 13:44:43 [469] vm3       crmd: (     utils.c:2001  )   debug: attrd_update_delegate: 	Sent update: (null)=(null) for localhost
Nov 13 13:44:43 [469] vm3       crmd: (       fsa.c:193   )   debug: s_crmd_fsa: 	Processing I_NOT_DC: [ state=S_PENDING cause=C_HA_MESSAGE origin=do_cl_join_finalize_respond ]
Nov 13 13:44:43 [469] vm3       crmd: (      misc.c:47    )    info: do_log: 	FSA: Input I_NOT_DC from do_cl_join_finalize_respond() received in state S_PENDING
Nov 13 13:44:43 [469] vm3       crmd: (       fsa.c:502   )  notice: do_state_transition: 	State transition S_PENDING -> S_NOT_DC [ input=I_NOT_DC cause=C_HA_MESSAGE origin=do_cl_join_finalize_respond ]
Nov 13 13:44:43 [467] vm3      attrd: (      main.c:201   )   trace: attrd_ipc_created: 	Connection 0x1cd1010
Nov 13 13:44:43 [467] vm3      attrd: (      main.c:221   )   trace: attrd_ipc_dispatch: 	Processing msg from 469 (0x1cd1010)
Nov 13 13:44:43 [467] vm3      attrd: (      main.c:222   )   trace: attrd_ipc_dispatch: 	attrd_ipc_dispatch   <attrd_update_delegate t="attrd" src="crmd" task="update" attr_name="terminate" attr_section="status" attr_host="vm3" attr_is_remote="0"/>
Nov 13 13:44:43 [464] vm3        cib: (        io.c:596   )   debug: activateCibXml: 	Triggering CIB write for cib_apply_diff op
Nov 13 13:44:43 [465] vm3 stonith-ng: ( cib_utils.c:174   )   debug: log_cib_diff: 	Config update: Local-only Change: 0.3.1
Nov 13 13:44:43 [465] vm3 stonith-ng: (       xml.c:1496  )   debug: Config update: 	-- <cib epoch="2" admin_epoch="0" num_updates="1"/>
Nov 13 13:44:43 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+  <cib epoch="3" num_updates="1" admin_epoch="0" validate-with="pacemaker-1.2" crm_feature_set="3.0.8" cib-last-written="Wed Nov 13 13:44:43 2013" update-origin="vm1" update-client="crmd">
Nov 13 13:44:43 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+    <configuration>
Nov 13 13:44:43 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+      <nodes>
Nov 13 13:44:43 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++       <node id="3232261519" uname="vm3"/>
Nov 13 13:44:43 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+      </nodes>
Nov 13 13:44:43 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+    </configuration>
Nov 13 13:44:43 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+  </cib>
Nov 13 13:44:43 [467] vm3      attrd: (  commands.c:227   )    info: attrd_client_message: 	Starting an election to determine the writer
Nov 13 13:44:43 [467] vm3      attrd: (  election.c:172   )   debug: crm_uptime: 	Current CPU usage is: 0s, 14997us
Nov 13 13:44:43 [450] vm3 corosync debug   [QB    ] ipc_setup.c:handle_new_connection:484 IPC credentials authenticated (451-467-33)
Nov 13 13:44:43 [450] vm3 corosync debug   [QB    ] ipc_shm.c:qb_ipcs_shm_connect:295 connecting to client [467]
Nov 13 13:44:43 [464] vm3        cib: ( callbacks.c:761   )    info: cib_process_request: 	Completed cib_apply_diff operation for section nodes: OK (rc=0, origin=vm1/crmd/18, version=0.3.1)
Nov 13 13:44:43 [464] vm3        cib: (        io.c:596   )   debug: activateCibXml: 	Triggering CIB write for cib_apply_diff op
Nov 13 13:44:43 [465] vm3 stonith-ng: ( cib_utils.c:174   )   debug: log_cib_diff: 	Config update: Local-only Change: 0.4.1
Nov 13 13:44:43 [465] vm3 stonith-ng: (       xml.c:1496  )   debug: Config update: 	-- <cib epoch="3" admin_epoch="0" num_updates="1"/>
Nov 13 13:44:43 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+  <cib epoch="4" num_updates="1" admin_epoch="0" validate-with="pacemaker-1.2" crm_feature_set="3.0.8" cib-last-written="Wed Nov 13 13:44:43 2013" update-origin="vm1" update-client="crmd">
Nov 13 13:44:43 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+    <configuration>
Nov 13 13:44:43 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+      <nodes>
Nov 13 13:44:43 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++       <node id="3232261517" uname="vm1"/>
Nov 13 13:44:43 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+      </nodes>
Nov 13 13:44:43 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+    </configuration>
Nov 13 13:44:43 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+  </cib>
Nov 13 13:44:43 [464] vm3        cib: ( callbacks.c:761   )    info: cib_process_request: 	Completed cib_apply_diff operation for section nodes: OK (rc=0, origin=vm1/crmd/19, version=0.4.1)
Nov 13 13:44:43 [464] vm3        cib: (     utils.c:1216  )   debug: get_last_sequence: 	Series file /var/lib/pacemaker/cib/cib.last does not exist
Nov 13 13:44:43 [464] vm3        cib: ( callbacks.c:688   )    info: cib_process_request: 	Forwarding cib_delete operation for section //node_state[@uname='vm3']/transient_attributes to master (origin=local/crmd/10)
Nov 13 13:44:43 [450] vm3 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:43 [450] vm3 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:43 [450] vm3 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:43 [464] vm3        cib: (        io.c:738   )    info: write_cib_contents: 	Archived previous version as /var/lib/pacemaker/cib/cib-0.raw
Nov 13 13:44:43 [464] vm3        cib: (        io.c:748   )   debug: write_cib_contents: 	Writing CIB to disk
Nov 13 13:44:43 [467] vm3      attrd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:43 [467] vm3      attrd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:43 [467] vm3      attrd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Nov 13 13:44:43 [450] vm3 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_created:272 connection created
Nov 13 13:44:43 [450] vm3 corosync debug   [CMAP  ] cmap.c:cmap_lib_init_fn:306 lib_init_fn: conn=0x7ffdfd083900
Nov 13 13:44:43 [467] vm3      attrd: (      ipcc.c:378   )   debug: qb_ipcc_disconnect: 	qb_ipcc_disconnect()
Nov 13 13:44:43 [467] vm3      attrd: (ringbuffer.c:302   )   debug: qb_rb_close: 	Closing ringbuffer: /dev/shm/qb-cmap-request-451-467-33-header
Nov 13 13:44:43 [467] vm3      attrd: (ringbuffer.c:302   )   debug: qb_rb_close: 	Closing ringbuffer: /dev/shm/qb-cmap-response-451-467-33-header
Nov 13 13:44:43 [467] vm3      attrd: (ringbuffer.c:302   )   debug: qb_rb_close: 	Closing ringbuffer: /dev/shm/qb-cmap-event-451-467-33-header
Nov 13 13:44:43 [467] vm3      attrd: (  corosync.c:134   )  notice: corosync_node_name: 	Unable to get node name for nodeid 3232261519
Nov 13 13:44:43 [467] vm3      attrd: (   cluster.c:338   )  notice: get_node_name: 	Defaulting to uname -n for the local corosync node name
Nov 13 13:44:43 [467] vm3      attrd: (  election.c:242   )   debug: election_vote: 	Started election 1
Nov 13 13:44:43 [467] vm3      attrd: (  commands.c:231   )    info: attrd_client_message: 	Broadcasting terminate[vm3] = (null)
Nov 13 13:44:43 [467] vm3      attrd: (      main.c:221   )   trace: attrd_ipc_dispatch: 	Processing msg from 469 (0x1cd1010)
Nov 13 13:44:43 [467] vm3      attrd: (      main.c:222   )   trace: attrd_ipc_dispatch: 	attrd_ipc_dispatch   <attrd_update_delegate t="attrd" src="crmd" task="update" attr_name="shutdown" attr_section="status" attr_host="vm3" attr_is_remote="0"/>
Nov 13 13:44:43 [467] vm3      attrd: (  commands.c:231   )    info: attrd_client_message: 	Broadcasting shutdown[vm3] = (null)
Nov 13 13:44:43 [467] vm3      attrd: (      main.c:221   )   trace: attrd_ipc_dispatch: 	Processing msg from 469 (0x1cd1010)
Nov 13 13:44:43 [467] vm3      attrd: (      main.c:222   )   trace: attrd_ipc_dispatch: 	attrd_ipc_dispatch   <attrd_update_delegate t="attrd" src="crmd" task="refresh" attr_section="status" attr_is_remote="0"/>
Nov 13 13:44:43 [464] vm3        cib: (        io.c:773   )    info: write_cib_contents: 	Wrote version 0.1.0 of the CIB to disk (digest: b8fe3a8159b940d26780cf9ea797cc0e)
Nov 13 13:44:43 [450] vm3 corosync debug   [QB    ] ipcs.c:qb_ipcs_dispatch_connection_request:757 HUP conn (451-467-33)
Nov 13 13:44:43 [450] vm3 corosync debug   [QB    ] ipcs.c:qb_ipcs_disconnect:605 qb_ipcs_disconnect(451-467-33) state:2
Nov 13 13:44:43 [450] vm3 corosync debug   [QB    ] loop_poll_epoll.c:_del:117 epoll_ctl(del): Bad file descriptor (9)
Nov 13 13:44:43 [450] vm3 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_closed:417 cs_ipcs_connection_closed() 
Nov 13 13:44:43 [450] vm3 corosync debug   [CMAP  ] cmap.c:cmap_lib_exit_fn:325 exit_fn for conn=0x7ffdfd083900
Nov 13 13:44:43 [464] vm3        cib: (        io.c:596   )   debug: activateCibXml: 	Triggering CIB write for cib_apply_diff op
Nov 13 13:44:43 [465] vm3 stonith-ng: ( cib_utils.c:174   )   debug: log_cib_diff: 	Config update: Local-only Change: 0.5.1
Nov 13 13:44:43 [465] vm3 stonith-ng: (       xml.c:1496  )   debug: Config update: 	-- <cib epoch="4" admin_epoch="0" num_updates="1"/>
Nov 13 13:44:43 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+  <cib epoch="5" num_updates="1" admin_epoch="0" validate-with="pacemaker-1.2" crm_feature_set="3.0.8" cib-last-written="Wed Nov 13 13:44:43 2013" update-origin="vm1" update-client="crmd">
Nov 13 13:44:43 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+    <configuration>
Nov 13 13:44:43 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+      <nodes>
Nov 13 13:44:43 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++       <node id="3232261518" uname="vm2"/>
Nov 13 13:44:43 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+      </nodes>
Nov 13 13:44:43 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+    </configuration>
Nov 13 13:44:43 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+  </cib>
Nov 13 13:44:43 [464] vm3        cib: (        io.c:781   )   debug: write_cib_contents: 	Wrote digest b8fe3a8159b940d26780cf9ea797cc0e to disk
Nov 13 13:44:43 [464] vm3        cib: (        io.c:259   )    info: retrieveCib: 	Reading cluster configuration from: /var/lib/pacemaker/cib/cib.lZ8hFu (digest: /var/lib/pacemaker/cib/cib.gsdjuG)
Nov 13 13:44:43 [464] vm3        cib: ( callbacks.c:761   )    info: cib_process_request: 	Completed cib_apply_diff operation for section nodes: OK (rc=0, origin=vm1/crmd/20, version=0.5.1)
Nov 13 13:44:43 [464] vm3        cib: (        io.c:786   )   debug: write_cib_contents: 	Activating /var/lib/pacemaker/cib/cib.lZ8hFu
Nov 13 13:44:43 [450] vm3 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_destroyed:390 cs_ipcs_connection_destroyed() 
Nov 13 13:44:43 [450] vm3 corosync debug   [QB    ] ringbuffer.c:qb_rb_close:299 Free'ing ringbuffer: /dev/shm/qb-cmap-response-451-467-33-header
Nov 13 13:44:43 [450] vm3 corosync debug   [QB    ] ringbuffer.c:qb_rb_close:299 Free'ing ringbuffer: /dev/shm/qb-cmap-event-451-467-33-header
Nov 13 13:44:43 [450] vm3 corosync debug   [QB    ] ringbuffer.c:qb_rb_close:299 Free'ing ringbuffer: /dev/shm/qb-cmap-request-451-467-33-header
Nov 13 13:44:43 [467] vm3      attrd: (membership.c:404   )    info: crm_get_peer: 	Node 3232261517 is now known as vm1
Nov 13 13:44:43 [467] vm3      attrd: (  election.c:352   )   debug: election_count_vote: 	Created voted hash
Nov 13 13:44:43 [467] vm3      attrd: (  election.c:204   )   debug: crm_compare_age: 	Loose: 0.14997 vs 0.16997 (usec)
Nov 13 13:44:43 [467] vm3      attrd: (  election.c:511   )    info: election_count_vote: 	Election 1 (owner: 3232261517) lost: vote from vm1 (Uptime)
Nov 13 13:44:43 [467] vm3      attrd: (  commands.c:146   )   trace: create_attribute: 	Created attribute terminate with no delay
Nov 13 13:44:43 [467] vm3      attrd: (  commands.c:382   )   trace: attrd_peer_update: 	Setting terminate[vm1] to (null) from vm1
Nov 13 13:44:43 [467] vm3      attrd: (  commands.c:466   )   trace: write_or_elect_attribute: 	vm1 will write out terminate, we are in state 2
Nov 13 13:44:43 [465] vm3 stonith-ng: ( cib_utils.c:167   )   debug: Config update: 	Diff: --- 0.5.1
Nov 13 13:44:43 [465] vm3 stonith-ng: ( cib_utils.c:169   )   debug: Config update: 	Diff: +++ 0.5.2 96afa5ea158751708b6aaa2afbd9266e
Nov 13 13:44:43 [465] vm3 stonith-ng: (       xml.c:1496  )   debug: Config update: 	-- <cib num_updates="1"/>
Nov 13 13:44:43 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+  <cib epoch="5" num_updates="2" admin_epoch="0" validate-with="pacemaker-1.2" crm_feature_set="3.0.8" cib-last-written="Wed Nov 13 13:44:43 2013" update-origin="vm1" update-client="crmd">
Nov 13 13:44:43 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+    <status>
Nov 13 13:44:43 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++     <node_state id="3232261519" uname="vm3" in_ccm="true" crmd="online" crm-debug-origin="do_lrm_query_internal" join="member" expected="member">
Nov 13 13:44:43 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++       <lrm id="3232261519">
Nov 13 13:44:43 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++         <lrm_resources/>
Nov 13 13:44:43 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++       </lrm>
Nov 13 13:44:43 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++     </node_state>
Nov 13 13:44:43 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+    </status>
Nov 13 13:44:43 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+  </cib>
Nov 13 13:44:43 [467] vm3      attrd: (  commands.c:146   )   trace: create_attribute: 	Created attribute shutdown with no delay
Nov 13 13:44:43 [467] vm3      attrd: (  commands.c:382   )   trace: attrd_peer_update: 	Setting shutdown[vm1] to (null) from vm1
Nov 13 13:44:43 [467] vm3      attrd: (  commands.c:466   )   trace: write_or_elect_attribute: 	vm1 will write out shutdown, we are in state 2
Nov 13 13:44:43 [467] vm3      attrd: (  election.c:485   )   debug: election_count_vote: 	Election 1 (current: 1, owner: 3232261519): Processed vote from vm3 (Recorded)
Nov 13 13:44:43 [464] vm3        cib: ( callbacks.c:761   )    info: cib_process_request: 	Completed cib_apply_diff operation for section status: OK (rc=0, origin=vm1/crmd/23, version=0.5.2)
Nov 13 13:44:43 [469] vm3       crmd: (     utils.c:998   )   debug: erase_xpath_callback: 	Deletion of "//node_state[@uname='vm3']/transient_attributes": OK (rc=0)
Nov 13 13:44:43 [464] vm3        cib: (membership.c:404   )    info: crm_get_peer: 	Node 3232261518 is now known as vm2
Nov 13 13:44:43 [467] vm3      attrd: (  commands.c:382   )   trace: attrd_peer_update: 	Setting terminate[vm3] to (null) from vm3
Nov 13 13:44:43 [467] vm3      attrd: (  commands.c:466   )   trace: write_or_elect_attribute: 	vm3 will write out terminate, we are in state 2
Nov 13 13:44:43 [467] vm3      attrd: (  commands.c:382   )   trace: attrd_peer_update: 	Setting shutdown[vm3] to (null) from vm3
Nov 13 13:44:43 [467] vm3      attrd: (  commands.c:466   )   trace: write_or_elect_attribute: 	vm3 will write out shutdown, we are in state 2
Nov 13 13:44:43 [467] vm3      attrd: (  election.c:204   )   debug: crm_compare_age: 	Loose: 0.14997 vs 0.16997 (usec)
Nov 13 13:44:43 [467] vm3      attrd: (  election.c:511   )    info: election_count_vote: 	Election 2 (owner: 3232261517) lost: vote from vm1 (Uptime)
Nov 13 13:44:43 [465] vm3 stonith-ng: ( cib_utils.c:167   )   debug: Config update: 	Diff: --- 0.5.2
Nov 13 13:44:43 [465] vm3 stonith-ng: ( cib_utils.c:169   )   debug: Config update: 	Diff: +++ 0.5.3 dd02c1675f04ba6ab7d94c1f96067ad9
Nov 13 13:44:43 [465] vm3 stonith-ng: (       xml.c:1496  )   debug: Config update: 	-- <cib num_updates="2"/>
Nov 13 13:44:43 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+  <cib epoch="5" num_updates="3" admin_epoch="0" validate-with="pacemaker-1.2" crm_feature_set="3.0.8" cib-last-written="Wed Nov 13 13:44:43 2013" update-origin="vm1" update-client="crmd">
Nov 13 13:44:43 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+    <status>
Nov 13 13:44:43 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++     <node_state id="3232261517" uname="vm1" in_ccm="true" crmd="online" crm-debug-origin="do_lrm_query_internal" join="member" expected="member">
Nov 13 13:44:43 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++       <lrm id="3232261517">
Nov 13 13:44:43 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++         <lrm_resources/>
Nov 13 13:44:43 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++       </lrm>
Nov 13 13:44:43 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++     </node_state>
Nov 13 13:44:43 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+    </status>
Nov 13 13:44:43 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+  </cib>
Nov 13 13:44:43 [464] vm3        cib: ( callbacks.c:761   )    info: cib_process_request: 	Completed cib_apply_diff operation for section status: OK (rc=0, origin=vm1/crmd/25, version=0.5.3)
Nov 13 13:44:43 [465] vm3 stonith-ng: ( cib_utils.c:167   )   debug: Config update: 	Diff: --- 0.5.3
Nov 13 13:44:43 [465] vm3 stonith-ng: ( cib_utils.c:169   )   debug: Config update: 	Diff: +++ 0.5.4 f4a55aa279b990bb05b0a588767e25f0
Nov 13 13:44:43 [465] vm3 stonith-ng: (       xml.c:1496  )   debug: Config update: 	-- <cib num_updates="3"/>
Nov 13 13:44:43 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+  <cib epoch="5" num_updates="4" admin_epoch="0" validate-with="pacemaker-1.2" crm_feature_set="3.0.8" cib-last-written="Wed Nov 13 13:44:43 2013" update-origin="vm1" update-client="crmd">
Nov 13 13:44:43 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+    <status>
Nov 13 13:44:43 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++     <node_state id="3232261518" uname="vm2" in_ccm="true" crmd="online" crm-debug-origin="do_lrm_query_internal" join="member" expected="member">
Nov 13 13:44:43 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++       <lrm id="3232261518">
Nov 13 13:44:43 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++         <lrm_resources/>
Nov 13 13:44:43 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++       </lrm>
Nov 13 13:44:43 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++     </node_state>
Nov 13 13:44:43 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+    </status>
Nov 13 13:44:43 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+  </cib>
Nov 13 13:44:43 [464] vm3        cib: ( callbacks.c:761   )    info: cib_process_request: 	Completed cib_apply_diff operation for section status: OK (rc=0, origin=vm1/crmd/27, version=0.5.4)
Nov 13 13:44:43 [464] vm3        cib: ( callbacks.c:761   )    info: cib_process_request: 	Completed cib_apply_diff operation for section cib: OK (rc=0, origin=vm1/crmd/30, version=0.5.5)
Nov 13 13:44:43 [465] vm3 stonith-ng: ( cib_utils.c:167   )   debug: Config update: 	Diff: --- 0.5.4
Nov 13 13:44:43 [465] vm3 stonith-ng: ( cib_utils.c:169   )   debug: Config update: 	Diff: +++ 0.5.5 3118925a5f456f332e09aade04800ea0
Nov 13 13:44:43 [465] vm3 stonith-ng: (       xml.c:1496  )   debug: Config update: 	-- <cib num_updates="4"/>
Nov 13 13:44:43 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++ <cib epoch="5" num_updates="5" admin_epoch="0" validate-with="pacemaker-1.2" crm_feature_set="3.0.8" cib-last-written="Wed Nov 13 13:44:43 2013" update-origin="vm1" update-client="crmd" have-quorum="1" dc-uuid="3232261517"/>
Nov 13 13:44:43 [467] vm3      attrd: (membership.c:404   )    info: crm_get_peer: 	Node 3232261518 is now known as vm2
Nov 13 13:44:43 [467] vm3      attrd: (  election.c:204   )   debug: crm_compare_age: 	Loose: 0.14997 vs 0.20996 (usec)
Nov 13 13:44:43 [467] vm3      attrd: (  election.c:511   )    info: election_count_vote: 	Election 1 (owner: 3232261518) lost: vote from vm2 (Uptime)
Nov 13 13:44:43 [467] vm3      attrd: (  commands.c:382   )   trace: attrd_peer_update: 	Setting terminate[vm2] to (null) from vm2
Nov 13 13:44:43 [467] vm3      attrd: (  commands.c:466   )   trace: write_or_elect_attribute: 	vm2 will write out terminate, we are in state 2
Nov 13 13:44:43 [467] vm3      attrd: (  commands.c:382   )   trace: attrd_peer_update: 	Setting shutdown[vm2] to (null) from vm2
Nov 13 13:44:43 [467] vm3      attrd: (  commands.c:466   )   trace: write_or_elect_attribute: 	vm2 will write out shutdown, we are in state 2
Nov 13 13:44:43 [467] vm3      attrd: (  election.c:204   )   debug: crm_compare_age: 	Loose: 0.14997 vs 0.20996 (usec)
Nov 13 13:44:43 [467] vm3      attrd: (  election.c:511   )    info: election_count_vote: 	Election 2 (owner: 3232261518) lost: vote from vm2 (Uptime)
Nov 13 13:44:43 [467] vm3      attrd: (  election.c:204   )   debug: crm_compare_age: 	Loose: 0.14997 vs 0.20996 (usec)
Nov 13 13:44:43 [467] vm3      attrd: (  election.c:511   )    info: election_count_vote: 	Election 3 (owner: 3232261518) lost: vote from vm2 (Uptime)
Nov 13 13:44:43 [467] vm3      attrd: (  election.c:204   )   debug: crm_compare_age: 	Loose: 0.14997 vs 0.20996 (usec)
Nov 13 13:44:43 [467] vm3      attrd: (  election.c:511   )    info: election_count_vote: 	Election 4 (owner: 3232261518) lost: vote from vm2 (Uptime)
Nov 13 13:44:43 [464] vm3        cib: (        io.c:738   )    info: write_cib_contents: 	Archived previous version as /var/lib/pacemaker/cib/cib-1.raw
Nov 13 13:44:43 [464] vm3        cib: (        io.c:748   )   debug: write_cib_contents: 	Writing CIB to disk
Nov 13 13:44:43 [464] vm3        cib: (        io.c:773   )    info: write_cib_contents: 	Wrote version 0.5.0 of the CIB to disk (digest: 630d79f602055b52fd2ea79fdbd1baf8)
Nov 13 13:44:43 [469] vm3       crmd: (     utils.c:2001  )   debug: attrd_update_delegate: 	Sent update: probe_complete=true for vm3
Nov 13 13:44:43 [467] vm3      attrd: (      main.c:221   )   trace: attrd_ipc_dispatch: 	Processing msg from 469 (0x1cd1010)
Nov 13 13:44:43 [467] vm3      attrd: (      main.c:222   )   trace: attrd_ipc_dispatch: 	attrd_ipc_dispatch   <attrd_update_delegate t="attrd" src="crmd" task="update" attr_name="probe_complete" attr_value="true" attr_section="status" attr_host="vm3" attr_is_remote="0"/>
Nov 13 13:44:43 [467] vm3      attrd: (  commands.c:231   )    info: attrd_client_message: 	Broadcasting probe_complete[vm3] = true
Nov 13 13:44:43 [464] vm3        cib: (        io.c:781   )   debug: write_cib_contents: 	Wrote digest 630d79f602055b52fd2ea79fdbd1baf8 to disk
Nov 13 13:44:43 [464] vm3        cib: (        io.c:259   )    info: retrieveCib: 	Reading cluster configuration from: /var/lib/pacemaker/cib/cib.wdhXOy (digest: /var/lib/pacemaker/cib/cib.VixlNK)
Nov 13 13:44:43 [464] vm3        cib: (        io.c:786   )   debug: write_cib_contents: 	Activating /var/lib/pacemaker/cib/cib.wdhXOy
Nov 13 13:44:43 [467] vm3      attrd: (  commands.c:307   )  notice: attrd_peer_message: 	Processing sync-response from vm2
Nov 13 13:44:43 [467] vm3      attrd: (  commands.c:423   )   trace: attrd_peer_update: 	Unchanged shutdown[vm1] from vm2 is (null)
Nov 13 13:44:43 [467] vm3      attrd: (  commands.c:433   )   trace: attrd_peer_update: 	We know vm1's node id now: 3232261517
Nov 13 13:44:43 [467] vm3      attrd: (  commands.c:423   )   trace: attrd_peer_update: 	Unchanged shutdown[vm2] from vm2 is (null)
Nov 13 13:44:43 [467] vm3      attrd: (  commands.c:433   )   trace: attrd_peer_update: 	We know vm2's node id now: 3232261518
Nov 13 13:44:43 [467] vm3      attrd: (  commands.c:423   )   trace: attrd_peer_update: 	Unchanged shutdown[vm3] from vm2 is (null)
Nov 13 13:44:43 [467] vm3      attrd: (  commands.c:433   )   trace: attrd_peer_update: 	We know vm3's node id now: 3232261519
Nov 13 13:44:43 [467] vm3      attrd: (  commands.c:423   )   trace: attrd_peer_update: 	Unchanged terminate[vm1] from vm2 is (null)
Nov 13 13:44:43 [467] vm3      attrd: (  commands.c:433   )   trace: attrd_peer_update: 	We know vm1's node id now: 3232261517
Nov 13 13:44:43 [467] vm3      attrd: (  commands.c:423   )   trace: attrd_peer_update: 	Unchanged terminate[vm2] from vm2 is (null)
Nov 13 13:44:43 [467] vm3      attrd: (  commands.c:433   )   trace: attrd_peer_update: 	We know vm2's node id now: 3232261518
Nov 13 13:44:43 [467] vm3      attrd: (  commands.c:423   )   trace: attrd_peer_update: 	Unchanged terminate[vm3] from vm2 is (null)
Nov 13 13:44:43 [467] vm3      attrd: (  commands.c:433   )   trace: attrd_peer_update: 	We know vm3's node id now: 3232261519
Nov 13 13:44:43 [467] vm3      attrd: (  commands.c:146   )   trace: create_attribute: 	Created attribute probe_complete with no delay
Nov 13 13:44:43 [467] vm3      attrd: (  commands.c:382   )   trace: attrd_peer_update: 	Setting probe_complete[vm3] to true from vm3
Nov 13 13:44:43 [467] vm3      attrd: (  commands.c:466   )   trace: write_or_elect_attribute: 	vm2 will write out probe_complete, we are in state 2
Nov 13 13:44:43 [467] vm3      attrd: (  commands.c:382   )   trace: attrd_peer_update: 	Setting probe_complete[vm1] to true from vm1
Nov 13 13:44:43 [467] vm3      attrd: (  commands.c:466   )   trace: write_or_elect_attribute: 	vm2 will write out probe_complete, we are in state 2
Nov 13 13:44:43 [464] vm3        cib: ( callbacks.c:761   )    info: cib_process_request: 	Completed cib_apply_diff operation for section status: OK (rc=0, origin=vm1/attrd/2, version=0.5.6)
Nov 13 13:44:43 [465] vm3 stonith-ng: ( cib_utils.c:167   )   debug: Config update: 	Diff: --- 0.5.5
Nov 13 13:44:43 [465] vm3 stonith-ng: ( cib_utils.c:169   )   debug: Config update: 	Diff: +++ 0.5.6 1eda82bbbd77cf8a880d3d455765d8d6
Nov 13 13:44:43 [465] vm3 stonith-ng: (       xml.c:1496  )   debug: Config update: 	-- <cib num_updates="5"/>
Nov 13 13:44:43 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+  <cib epoch="5" num_updates="6" admin_epoch="0" validate-with="pacemaker-1.2" crm_feature_set="3.0.8" cib-last-written="Wed Nov 13 13:44:43 2013" update-origin="vm1" update-client="crmd" have-quorum="1" dc-uuid="3232261517">
Nov 13 13:44:43 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+    <status>
Nov 13 13:44:43 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+      <node_state id="3232261519" uname="vm3" in_ccm="true" crmd="online" crm-debug-origin="do_state_transition" join="member" expected="member">
Nov 13 13:44:43 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++       <transient_attributes id="3232261519">
Nov 13 13:44:43 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++         <instance_attributes id="status-3232261519"/>
Nov 13 13:44:43 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++       </transient_attributes>
Nov 13 13:44:43 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+      </node_state>
Nov 13 13:44:43 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+      <node_state id="3232261517" uname="vm1" in_ccm="true" crmd="online" crm-debug-origin="do_state_transition" join="member" expected="member">
Nov 13 13:44:43 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++       <transient_attributes id="3232261517">
Nov 13 13:44:43 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++         <instance_attributes id="status-3232261517"/>
Nov 13 13:44:43 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++       </transient_attributes>
Nov 13 13:44:43 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+      </node_state>
Nov 13 13:44:43 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+      <node_state id="3232261518" uname="vm2" in_ccm="true" crmd="online" crm-debug-origin="do_state_transition" join="member" expected="member">
Nov 13 13:44:43 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++       <transient_attributes id="3232261518">
Nov 13 13:44:43 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++         <instance_attributes id="status-3232261518"/>
Nov 13 13:44:43 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++       </transient_attributes>
Nov 13 13:44:43 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+      </node_state>
Nov 13 13:44:43 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+    </status>
Nov 13 13:44:43 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+  </cib>
Nov 13 13:44:43 [467] vm3      attrd: (  commands.c:382   )   trace: attrd_peer_update: 	Setting probe_complete[vm2] to true from vm2
Nov 13 13:44:43 [467] vm3      attrd: (  commands.c:466   )   trace: write_or_elect_attribute: 	vm2 will write out probe_complete, we are in state 2
Nov 13 13:44:43 [464] vm3        cib: ( callbacks.c:761   )    info: cib_process_request: 	Completed cib_apply_diff operation for section status: OK (rc=0, origin=vm1/attrd/4, version=0.5.7)
Nov 13 13:44:43 [465] vm3 stonith-ng: ( cib_utils.c:167   )   debug: Config update: 	Diff: --- 0.5.6
Nov 13 13:44:43 [465] vm3 stonith-ng: ( cib_utils.c:169   )   debug: Config update: 	Diff: +++ 0.5.7 ff617ff8b610f67d2056a9b012bdfc03
Nov 13 13:44:43 [465] vm3 stonith-ng: (       xml.c:1496  )   debug: Config update: 	-- <cib num_updates="6"/>
Nov 13 13:44:43 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+  <cib epoch="5" num_updates="7" admin_epoch="0" validate-with="pacemaker-1.2" crm_feature_set="3.0.8" cib-last-written="Wed Nov 13 13:44:43 2013" update-origin="vm1" update-client="crmd" have-quorum="1" dc-uuid="3232261517">
Nov 13 13:44:43 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+    <status>
Nov 13 13:44:43 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+      <node_state id="3232261519" uname="vm3" in_ccm="true" crmd="online" crm-debug-origin="do_state_transition" join="member" expected="member">
Nov 13 13:44:43 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+        <transient_attributes id="3232261519">
Nov 13 13:44:43 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+          <instance_attributes id="status-3232261519">
Nov 13 13:44:43 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++           <nvpair id="status-3232261519-probe_complete" name="probe_complete" value="true"/>
Nov 13 13:44:43 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+          </instance_attributes>
Nov 13 13:44:43 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+        </transient_attributes>
Nov 13 13:44:43 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+      </node_state>
Nov 13 13:44:43 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+    </status>
Nov 13 13:44:43 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+  </cib>
Nov 13 13:44:43 [464] vm3        cib: ( callbacks.c:761   )    info: cib_process_request: 	Completed cib_apply_diff operation for section status: OK (rc=0, origin=vm1/attrd/5, version=0.5.8)
Nov 13 13:44:43 [465] vm3 stonith-ng: ( cib_utils.c:167   )   debug: Config update: 	Diff: --- 0.5.7
Nov 13 13:44:43 [465] vm3 stonith-ng: ( cib_utils.c:169   )   debug: Config update: 	Diff: +++ 0.5.8 f2fe37326dd3c20276f6447b1667415b
Nov 13 13:44:43 [465] vm3 stonith-ng: (       xml.c:1496  )   debug: Config update: 	-- <cib num_updates="7"/>
Nov 13 13:44:43 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+  <cib epoch="5" num_updates="8" admin_epoch="0" validate-with="pacemaker-1.2" crm_feature_set="3.0.8" cib-last-written="Wed Nov 13 13:44:43 2013" update-origin="vm1" update-client="crmd" have-quorum="1" dc-uuid="3232261517">
Nov 13 13:44:43 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+    <status>
Nov 13 13:44:43 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+      <node_state id="3232261517" uname="vm1" in_ccm="true" crmd="online" crm-debug-origin="do_state_transition" join="member" expected="member">
Nov 13 13:44:43 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+        <transient_attributes id="3232261517">
Nov 13 13:44:43 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+          <instance_attributes id="status-3232261517">
Nov 13 13:44:43 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++           <nvpair id="status-3232261517-probe_complete" name="probe_complete" value="true"/>
Nov 13 13:44:43 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+          </instance_attributes>
Nov 13 13:44:43 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+        </transient_attributes>
Nov 13 13:44:43 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+      </node_state>
Nov 13 13:44:43 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+      <node_state id="3232261518" uname="vm2" in_ccm="true" crmd="online" crm-debug-origin="do_state_transition" join="member" expected="member">
Nov 13 13:44:43 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+        <transient_attributes id="3232261518">
Nov 13 13:44:43 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+          <instance_attributes id="status-3232261518">
Nov 13 13:44:43 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++           <nvpair id="status-3232261518-probe_complete" name="probe_complete" value="true"/>
Nov 13 13:44:43 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+          </instance_attributes>
Nov 13 13:44:43 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+        </transient_attributes>
Nov 13 13:44:43 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+      </node_state>
Nov 13 13:44:43 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+    </status>
Nov 13 13:44:43 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+  </cib>
Nov 13 13:44:51 [469] vm3       crmd: (  throttle.c:651   )   debug: throttle_update: 	Host vm1 supports a maximum of 2 jobs and throttle mode 0000.  New job limit is 2
Nov 13 13:44:51 [469] vm3       crmd: (  throttle.c:651   )   debug: throttle_update: 	Host vm2 supports a maximum of 2 jobs and throttle mode 0000.  New job limit is 2
Nov 13 13:44:52 [469] vm3       crmd: (  throttle.c:259   )   debug: throttle_cib_load: 	Init 5 + 5 ticks at 1384317892 (100 tps)
Nov 13 13:44:52 [469] vm3       crmd: (  throttle.c:298   )   debug: throttle_load_avg: 	Current load is 0.010000 (full: 0.01 0.00 0.00 1/108 479)
Nov 13 13:44:52 [469] vm3       crmd: (  throttle.c:378   )   debug: throttle_io_load: 	Current IO load is 0.000000
Nov 13 13:44:52 [469] vm3       crmd: (  throttle.c:520   )   debug: throttle_timer_cb: 	New throttle mode: 0000 (was 0000)
Nov 13 13:44:52 [469] vm3       crmd: (  throttle.c:499   )    info: throttle_send_command: 	Updated throttle state to 0000
Nov 13 13:44:52 [469] vm3       crmd: (  throttle.c:651   )   debug: throttle_update: 	Host vm3 supports a maximum of 2 jobs and throttle mode 0000.  New job limit is 2
Nov 13 13:45:22 [469] vm3       crmd: (  throttle.c:256   )   debug: throttle_cib_load: 	Current CIB load from 0 ticks in 30s is 0.000000 (@100 tps)
Nov 13 13:45:22 [469] vm3       crmd: (  throttle.c:298   )   debug: throttle_load_avg: 	Current load is 0.000000 (full: 0.00 0.00 0.00 1/108 480)
Nov 13 13:45:22 [469] vm3       crmd: (  throttle.c:378   )   debug: throttle_io_load: 	Current IO load is 0.000000
Nov 13 13:45:33 [469] vm3       crmd: (  election.c:172   )   debug: crm_uptime: 	Current CPU usage is: 0s, 31995us
Nov 13 13:45:33 [469] vm3       crmd: (  election.c:204   )   debug: crm_compare_age: 	Loose: 0.31995 vs 0.53991 (usec)
Nov 13 13:45:33 [469] vm3       crmd: (  election.c:511   )    info: election_count_vote: 	Election 3 (owner: 3232261517) lost: vote from vm1 (Uptime)
Nov 13 13:45:33 [469] vm3       crmd: (     utils.c:984   )    info: update_dc: 	Unset DC. Was vm1
Nov 13 13:45:33 [469] vm3       crmd: (  election.c:91    )   debug: do_election_check: 	Ignore election check: we not in an election
Nov 13 13:45:33 [469] vm3       crmd: (       fsa.c:193   )   debug: s_crmd_fsa: 	Processing I_PENDING: [ state=S_NOT_DC cause=C_FSA_INTERNAL origin=do_election_count_vote ]
Nov 13 13:45:33 [469] vm3       crmd: (      misc.c:47    )    info: do_log: 	FSA: Input I_PENDING from do_election_count_vote() received in state S_NOT_DC
Nov 13 13:45:33 [469] vm3       crmd: (       fsa.c:502   )  notice: do_state_transition: 	State transition S_NOT_DC -> S_PENDING [ input=I_PENDING cause=C_FSA_INTERNAL origin=do_election_count_vote ]
Nov 13 13:45:33 [464] vm3        cib: ( callbacks.c:761   )    info: cib_process_request: 	Completed cib_slave operation for section 'all': OK (rc=0, origin=local/crmd/11, version=0.5.8)
Nov 13 13:45:33 [469] vm3       crmd: (     utils.c:192   )   debug: crm_timer_start: 	Started Election Trigger (I_DC_TIMEOUT:20000ms), src=26
Nov 13 13:45:33 [464] vm3        cib: (        io.c:596   )   debug: activateCibXml: 	Triggering CIB write for cib_apply_diff op
Nov 13 13:45:33 [465] vm3 stonith-ng: ( cib_utils.c:167   )   debug: Config update: 	Diff: --- 0.5.8
Nov 13 13:45:33 [465] vm3 stonith-ng: ( cib_utils.c:169   )   debug: Config update: 	Diff: +++ 0.6.1 e65af88559035840dce69eaec2069fba
Nov 13 13:45:33 [465] vm3 stonith-ng: (       xml.c:1496  )   debug: Config update: 	-  <cib epoch="5" num_updates="8" admin_epoch="0">
Nov 13 13:45:33 [465] vm3 stonith-ng: (       xml.c:1496  )   debug: Config update: 	-    <configuration>
Nov 13 13:45:33 [465] vm3 stonith-ng: (       xml.c:1496  )   debug: Config update: 	-      <crm_config>
Nov 13 13:45:33 [465] vm3 stonith-ng: (       xml.c:1496  )   debug: Config update: 	-        <cluster_property_set id="cib-bootstrap-options">
Nov 13 13:45:33 [465] vm3 stonith-ng: (       xml.c:1496  )   debug: Config update: 	--         <nvpair id="cib-bootstrap-options-dc-version" name="dc-version" value="1.1.10-2383f6c"/>
Nov 13 13:45:33 [465] vm3 stonith-ng: (       xml.c:1496  )   debug: Config update: 	--         <nvpair id="cib-bootstrap-options-cluster-infrastructure" name="cluster-infrastructure" value="corosync"/>
Nov 13 13:45:33 [465] vm3 stonith-ng: (       xml.c:1496  )   debug: Config update: 	-        </cluster_property_set>
Nov 13 13:45:33 [465] vm3 stonith-ng: (       xml.c:1496  )   debug: Config update: 	-      </crm_config>
Nov 13 13:45:33 [465] vm3 stonith-ng: (       xml.c:1496  )   debug: Config update: 	-    </configuration>
Nov 13 13:45:33 [465] vm3 stonith-ng: (       xml.c:1496  )   debug: Config update: 	-  </cib>
Nov 13 13:45:33 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+  <cib epoch="6" num_updates="1" admin_epoch="0" validate-with="pacemaker-1.2" crm_feature_set="3.0.8" cib-last-written="Wed Nov 13 13:45:33 2013" update-origin="vm1" update-client="cibadmin" have-quorum="1" dc-uuid="3232261517">
Nov 13 13:45:33 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+    <configuration>
Nov 13 13:45:33 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+      <crm_config>
Nov 13 13:45:33 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+        <cluster_property_set id="cib-bootstrap-options">
Nov 13 13:45:33 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++         <nvpair name="no-quorum-policy" value="freeze" id="cib-bootstrap-options-no-quorum-policy"/>
Nov 13 13:45:33 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++         <nvpair name="stonith-enabled" value="true" id="cib-bootstrap-options-stonith-enabled"/>
Nov 13 13:45:33 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++         <nvpair name="startup-fencing" value="false" id="cib-bootstrap-options-startup-fencing"/>
Nov 13 13:45:33 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++         <nvpair name="stonith-timeout" value="60s" id="cib-bootstrap-options-stonith-timeout"/>
Nov 13 13:45:33 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++         <nvpair name="crmd-transition-delay" value="2s" id="cib-bootstrap-options-crmd-transition-delay"/>
Nov 13 13:45:33 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+        </cluster_property_set>
Nov 13 13:45:33 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+      </crm_config>
Nov 13 13:45:33 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+      <resources>
Nov 13 13:45:33 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++       <primitive id="F1" class="stonith" type="external/libvirt">
Nov 13 13:45:33 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++         <instance_attributes id="F1-instance_attributes">
Nov 13 13:45:33 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++           <nvpair name="hostlist" value="vm3" id="F1-instance_attributes-hostlist"/>
Nov 13 13:45:33 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++           <nvpair name="hypervisor_uri" value="qemu+ssh://bl460g1n6/system" id="F1-instance_attributes-hypervisor_uri"/>
Nov 13 13:45:33 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++         </instance_attributes>
Nov 13 13:45:33 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++         <operations>
Nov 13 13:45:33 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++           <op name="start" interval="0s" timeout="60s" id="F1-start-0s"/>
Nov 13 13:45:33 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++           <op name="monitor" interval="3600s" timeout="60s" id="F1-monitor-3600s"/>
Nov 13 13:45:33 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++           <op name="stop" interval="0s" timeout="60s" id="F1-stop-0s"/>
Nov 13 13:45:33 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++         </operations>
Nov 13 13:45:33 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++       </primitive>
Nov 13 13:45:33 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++       <primitive id="pDummy" class="ocf" provider="pacemaker" type="Dummy">
Nov 13 13:45:33 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++         <operations>
Nov 13 13:45:33 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++           <op name="monitor" interval="10s" timeout="300s" on-fail="fence" id="pDummy-monitor-10s"/>
Nov 13 13:45:33 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++         </operations>
Nov 13 13:45:33 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++       </primitive>
Nov 13 13:45:33 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+      </resources>
Nov 13 13:45:33 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+      <constraints>
Nov 13 13:45:33 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++       <rsc_location id="l1" rsc="pDummy">
Nov 13 13:45:33 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++         <rule score="100" id="l1-rule">
Nov 13 13:45:33 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++           <expression attribute="#uname" operation="eq" value="vm3" id="l1-expression"/>
Nov 13 13:45:33 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++         </rule>
Nov 13 13:45:33 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++       </rsc_location>
Nov 13 13:45:33 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++       <rsc_location id="l2" rsc="F1">
Nov 13 13:45:33 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++         <rule score="100" id="l2-rule">
Nov 13 13:45:33 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++           <expression attribute="#uname" operation="eq" value="vm1" id="l2-expression"/>
Nov 13 13:45:33 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++         </rule>
Nov 13 13:45:33 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++         <rule score="100" id="l2-rule-0">
Nov 13 13:45:33 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++           <expression attribute="#uname" operation="eq" value="vm2" id="l2-expression-0"/>
Nov 13 13:45:33 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++         </rule>
Nov 13 13:45:33 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++         <rule score="-INFINITY" id="l2-rule-1">
Nov 13 13:45:33 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++           <expression attribute="#uname" operation="eq" value="vm3" id="l2-expression-1"/>
Nov 13 13:45:33 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++         </rule>
Nov 13 13:45:33 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++       </rsc_location>
Nov 13 13:45:33 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+      </constraints>
Nov 13 13:45:33 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++     <fencing-topology>
Nov 13 13:45:33 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++       <fencing-level target="vm3" devices="F1" index="1" id="fencing"/>
Nov 13 13:45:33 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++     </fencing-topology>
Nov 13 13:45:33 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++     <rsc_defaults>
Nov 13 13:45:33 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++       <meta_attributes id="rsc-options">
Nov 13 13:45:33 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++         <nvpair name="resource-stickiness" value="INFINITY" id="rsc-options-resource-stickiness"/>
Nov 13 13:45:33 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++         <nvpair name="migration-threshold" value="1" id="rsc-options-migration-threshold"/>
Nov 13 13:45:33 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++       </meta_attributes>
Nov 13 13:45:33 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++     </rsc_defaults>
Nov 13 13:45:33 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+    </configuration>
Nov 13 13:45:33 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+  </cib>
Nov 13 13:45:33 [465] vm3 stonith-ng: (      main.c:521   )   trace: register_fencing_topology: 	Updating vm3[1] (fencing) to F1
Nov 13 13:45:33 [465] vm3 stonith-ng: (  commands.c:970   )    info: stonith_level_remove: 	Node vm3 not found (0 active entries)
Nov 13 13:45:33 [465] vm3 stonith-ng: (      main.c:369   )   trace: do_stonith_notify: 	Notifying clients
Nov 13 13:45:33 [465] vm3 stonith-ng: (      main.c:372   )   trace: do_stonith_notify: 	Notify complete
Nov 13 13:45:33 [465] vm3 stonith-ng: (  commands.c:937   )   trace: stonith_level_register: 	Added vm3 to the topology (1 active entries)
Nov 13 13:45:33 [465] vm3 stonith-ng: (  commands.c:948   )   trace: stonith_level_register: 	Adding device 'F1' for vm3 (1)
Nov 13 13:45:33 [464] vm3        cib: ( callbacks.c:761   )    info: cib_process_request: 	Completed cib_apply_diff operation for section 'all': OK (rc=0, origin=vm1/cibadmin/2, version=0.6.1)
Nov 13 13:45:33 [465] vm3 stonith-ng: (  commands.c:952   )    info: stonith_level_register: 	Node vm3 has 1 active fencing levels
Nov 13 13:45:33 [465] vm3 stonith-ng: (      main.c:369   )   trace: do_stonith_notify: 	Notifying clients
Nov 13 13:45:33 [465] vm3 stonith-ng: (      main.c:372   )   trace: do_stonith_notify: 	Notify complete
Nov 13 13:45:33 [465] vm3 stonith-ng: (      main.c:758   )   trace: update_cib_stonith_devices: 	new constraint   <rsc_location id="l1" rsc="pDummy" __crm_diff_marker__="added:top">
Nov 13 13:45:33 [465] vm3 stonith-ng: (      main.c:758   )   trace: update_cib_stonith_devices: 	new constraint     <rule score="100" id="l1-rule">
Nov 13 13:45:33 [465] vm3 stonith-ng: (      main.c:758   )   trace: update_cib_stonith_devices: 	new constraint       <expression attribute="#uname" operation="eq" value="vm3" id="l1-expression"/>
Nov 13 13:45:33 [465] vm3 stonith-ng: (      main.c:758   )   trace: update_cib_stonith_devices: 	new constraint     </rule>
Nov 13 13:45:33 [465] vm3 stonith-ng: (      main.c:758   )   trace: update_cib_stonith_devices: 	new constraint   </rsc_location>
Nov 13 13:45:33 [465] vm3 stonith-ng: (      main.c:758   )   trace: update_cib_stonith_devices: 	new constraint   <rsc_location id="l2" rsc="F1" __crm_diff_marker__="added:top">
Nov 13 13:45:33 [465] vm3 stonith-ng: (      main.c:758   )   trace: update_cib_stonith_devices: 	new constraint     <rule score="100" id="l2-rule">
Nov 13 13:45:33 [465] vm3 stonith-ng: (      main.c:758   )   trace: update_cib_stonith_devices: 	new constraint       <expression attribute="#uname" operation="eq" value="vm1" id="l2-expression"/>
Nov 13 13:45:33 [465] vm3 stonith-ng: (      main.c:758   )   trace: update_cib_stonith_devices: 	new constraint     </rule>
Nov 13 13:45:33 [465] vm3 stonith-ng: (      main.c:758   )   trace: update_cib_stonith_devices: 	new constraint     <rule score="100" id="l2-rule-0">
Nov 13 13:45:33 [465] vm3 stonith-ng: (      main.c:758   )   trace: update_cib_stonith_devices: 	new constraint       <expression attribute="#uname" operation="eq" value="vm2" id="l2-expression-0"/>
Nov 13 13:45:33 [465] vm3 stonith-ng: (      main.c:758   )   trace: update_cib_stonith_devices: 	new constraint     </rule>
Nov 13 13:45:33 [465] vm3 stonith-ng: (      main.c:758   )   trace: update_cib_stonith_devices: 	new constraint     <rule score="-INFINITY" id="l2-rule-1">
Nov 13 13:45:33 [465] vm3 stonith-ng: (      main.c:758   )   trace: update_cib_stonith_devices: 	new constraint       <expression attribute="#uname" operation="eq" value="vm3" id="l2-expression-1"/>
Nov 13 13:45:33 [465] vm3 stonith-ng: (      main.c:758   )   trace: update_cib_stonith_devices: 	new constraint     </rule>
Nov 13 13:45:33 [465] vm3 stonith-ng: (      main.c:758   )   trace: update_cib_stonith_devices: 	new constraint   </rsc_location>
Nov 13 13:45:33 [465] vm3 stonith-ng: (      main.c:787   )   trace: update_cib_stonith_devices: 	Fencing resource F1 was added or modified
Nov 13 13:45:33 [464] vm3        cib: ( callbacks.c:761   )    info: cib_process_request: 	Completed cib_query operation for section crm_config: OK (rc=0, origin=local/crmd/12, version=0.6.1)
Nov 13 13:45:33 [469] vm3       crmd: (   control.c:915   )   debug: config_query_callback: 	Call 12 : Parsing CIB options
Nov 13 13:45:33 [469] vm3       crmd: (   control.c:939   )   debug: config_query_callback: 	Shutdown escalation occurs after: 1200000ms
Nov 13 13:45:33 [469] vm3       crmd: (   control.c:946   )   debug: config_query_callback: 	Checking for expired actions every 900000ms
Nov 13 13:45:33 [465] vm3 stonith-ng: (      main.c:795   )    info: update_cib_stonith_devices: 	Updating device list from the cib: new resource
Nov 13 13:45:33 [465] vm3 stonith-ng: (    unpack.c:98    )   debug: unpack_config: 	STONITH timeout: 60000
Nov 13 13:45:33 [465] vm3 stonith-ng: (    unpack.c:102   )   debug: unpack_config: 	STONITH of failed nodes is enabled
Nov 13 13:45:33 [465] vm3 stonith-ng: (    unpack.c:109   )   debug: unpack_config: 	Stop all active resources: false
Nov 13 13:45:33 [465] vm3 stonith-ng: (    unpack.c:113   )   debug: unpack_config: 	Cluster is symmetric - resources can run anywhere by default
Nov 13 13:45:33 [465] vm3 stonith-ng: (    unpack.c:118   )   debug: unpack_config: 	Default stickiness: 0
Nov 13 13:45:33 [465] vm3 stonith-ng: (    unpack.c:152   )   debug: unpack_config: 	On loss of CCM Quorum: Freeze resources
Nov 13 13:45:33 [465] vm3 stonith-ng: (    unpack.c:201   )   debug: unpack_config: 	Node scores: 'red' = -INFINITY, 'yellow' = 0, 'green' = 0
Nov 13 13:45:33 [465] vm3 stonith-ng: (    unpack.c:418   ) warning: handle_startup_fencing: 	Blind faith: not fencing unseen nodes
Nov 13 13:45:33 [465] vm3 stonith-ng: (    unpack.c:509   )   debug: unpack_domains: 	Unpacking domains
Nov 13 13:45:33 [465] vm3 stonith-ng: (      main.c:652   )    info: cib_device_update: 	Device F1 has been disabled on vm3: score=-INFINITY
Nov 13 13:45:33 [464] vm3        cib: (        io.c:738   )    info: write_cib_contents: 	Archived previous version as /var/lib/pacemaker/cib/cib-2.raw
Nov 13 13:45:33 [464] vm3        cib: (        io.c:748   )   debug: write_cib_contents: 	Writing CIB to disk
Nov 13 13:45:33 [464] vm3        cib: (        io.c:773   )    info: write_cib_contents: 	Wrote version 0.6.0 of the CIB to disk (digest: 2db643db6cb3c3f1825600265949deb4)
Nov 13 13:45:33 [469] vm3       crmd: (  messages.c:729   )   debug: handle_request: 	Raising I_JOIN_OFFER: join-3
Nov 13 13:45:33 [469] vm3       crmd: (       fsa.c:193   )   debug: s_crmd_fsa: 	Processing I_JOIN_OFFER: [ state=S_PENDING cause=C_HA_MESSAGE origin=route_message ]
Nov 13 13:45:33 [469] vm3       crmd: (     utils.c:981   )    info: update_dc: 	Set DC to vm1 (3.0.8)
Nov 13 13:45:33 [469] vm3       crmd: (join_client.:135   )   debug: do_cl_join_offer_respond: 	do_cl_join_offer_respond added action A_DC_TIMER_STOP to the FSA
Nov 13 13:45:33 [469] vm3       crmd: (  election.c:204   )   debug: crm_compare_age: 	Loose: 0.31995 vs 0.53991 (usec)
Nov 13 13:45:33 [469] vm3       crmd: (  election.c:511   )    info: election_count_vote: 	Election 4 (owner: 3232261517) lost: vote from vm1 (Uptime)
Nov 13 13:45:33 [469] vm3       crmd: (     utils.c:984   )    info: update_dc: 	Unset DC. Was vm1
Nov 13 13:45:33 [469] vm3       crmd: (  election.c:91    )   debug: do_election_check: 	Ignore election check: we not in an election
Nov 13 13:45:33 [469] vm3       crmd: (       fsa.c:193   )   debug: s_crmd_fsa: 	Processing I_PENDING: [ state=S_PENDING cause=C_FSA_INTERNAL origin=do_election_count_vote ]
Nov 13 13:45:33 [469] vm3       crmd: (      misc.c:47    )    info: do_log: 	FSA: Input I_PENDING from do_election_count_vote() received in state S_PENDING
Nov 13 13:45:33 [469] vm3       crmd: (     utils.c:192   )   debug: crm_timer_start: 	Started Election Trigger (I_DC_TIMEOUT:20000ms), src=29
Nov 13 13:45:33 [464] vm3        cib: (        io.c:596   )   debug: activateCibXml: 	Triggering CIB write for cib_apply_diff op
Nov 13 13:45:33 [465] vm3 stonith-ng: ( cib_utils.c:174   )   debug: log_cib_diff: 	Config update: Local-only Change: 0.7.1
Nov 13 13:45:33 [465] vm3 stonith-ng: (       xml.c:1496  )   debug: Config update: 	-- <cib epoch="6" admin_epoch="0" num_updates="1"/>
Nov 13 13:45:33 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+  <cib epoch="7" num_updates="1" admin_epoch="0" validate-with="pacemaker-1.2" crm_feature_set="3.0.8" cib-last-written="Wed Nov 13 13:45:33 2013" update-origin="vm1" update-client="crmd" have-quorum="1" dc-uuid="3232261517">
Nov 13 13:45:33 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+    <configuration>
Nov 13 13:45:33 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+      <crm_config>
Nov 13 13:45:33 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+        <cluster_property_set id="cib-bootstrap-options">
Nov 13 13:45:33 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++         <nvpair id="cib-bootstrap-options-dc-version" name="dc-version" value="1.1.10-2383f6c"/>
Nov 13 13:45:33 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+        </cluster_property_set>
Nov 13 13:45:33 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+      </crm_config>
Nov 13 13:45:33 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+    </configuration>
Nov 13 13:45:33 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+  </cib>
Nov 13 13:45:33 [469] vm3       crmd: (  messages.c:729   )   debug: handle_request: 	Raising I_JOIN_OFFER: join-4
Nov 13 13:45:33 [469] vm3       crmd: (       fsa.c:193   )   debug: s_crmd_fsa: 	Processing I_JOIN_OFFER: [ state=S_PENDING cause=C_HA_MESSAGE origin=route_message ]
Nov 13 13:45:33 [469] vm3       crmd: (     utils.c:981   )    info: update_dc: 	Set DC to vm1 (3.0.8)
Nov 13 13:45:33 [469] vm3       crmd: (join_client.:135   )   debug: do_cl_join_offer_respond: 	do_cl_join_offer_respond added action A_DC_TIMER_STOP to the FSA
Nov 13 13:45:33 [464] vm3        cib: (        io.c:781   )   debug: write_cib_contents: 	Wrote digest 2db643db6cb3c3f1825600265949deb4 to disk
Nov 13 13:45:33 [464] vm3        cib: (        io.c:259   )    info: retrieveCib: 	Reading cluster configuration from: /var/lib/pacemaker/cib/cib.2qZrBi (digest: /var/lib/pacemaker/cib/cib.T7iEhI)
Nov 13 13:45:33 [464] vm3        cib: ( callbacks.c:761   )    info: cib_process_request: 	Completed cib_apply_diff operation for section crm_config: OK (rc=0, origin=vm1/crmd/39, version=0.7.1)
Nov 13 13:45:33 [464] vm3        cib: (        io.c:786   )   debug: write_cib_contents: 	Activating /var/lib/pacemaker/cib/cib.2qZrBi
Nov 13 13:45:33 [464] vm3        cib: ( callbacks.c:761   )    info: cib_process_request: 	Completed cib_query operation for section 'all': OK (rc=0, origin=local/crmd/13, version=0.7.1)
Nov 13 13:45:33 [464] vm3        cib: ( callbacks.c:761   )    info: cib_process_request: 	Completed cib_query operation for section crm_config: OK (rc=0, origin=local/crmd/14, version=0.7.1)
Nov 13 13:45:33 [469] vm3       crmd: (   control.c:915   )   debug: config_query_callback: 	Call 14 : Parsing CIB options
Nov 13 13:45:33 [469] vm3       crmd: (   control.c:939   )   debug: config_query_callback: 	Shutdown escalation occurs after: 1200000ms
Nov 13 13:45:33 [469] vm3       crmd: (   control.c:946   )   debug: config_query_callback: 	Checking for expired actions every 900000ms
Nov 13 13:45:33 [464] vm3        cib: ( callbacks.c:761   )    info: cib_process_request: 	Completed cib_query operation for section 'all': OK (rc=0, origin=local/crmd/15, version=0.7.1)
Nov 13 13:45:33 [469] vm3       crmd: (join_client.:157   )   debug: join_query_callback: 	Respond to join offer join-4
Nov 13 13:45:33 [469] vm3       crmd: (join_client.:158   )   debug: join_query_callback: 	Acknowledging vm1 as our DC
Nov 13 13:45:33 [464] vm3        cib: (        io.c:596   )   debug: activateCibXml: 	Triggering CIB write for cib_apply_diff op
Nov 13 13:45:33 [465] vm3 stonith-ng: ( cib_utils.c:174   )   debug: log_cib_diff: 	Config update: Local-only Change: 0.8.1
Nov 13 13:45:33 [465] vm3 stonith-ng: (       xml.c:1496  )   debug: Config update: 	-- <cib epoch="7" admin_epoch="0" num_updates="1"/>
Nov 13 13:45:33 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+  <cib epoch="8" num_updates="1" admin_epoch="0" validate-with="pacemaker-1.2" crm_feature_set="3.0.8" cib-last-written="Wed Nov 13 13:45:33 2013" update-origin="vm1" update-client="crmd" have-quorum="1" dc-uuid="3232261517">
Nov 13 13:45:33 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+    <configuration>
Nov 13 13:45:33 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+      <crm_config>
Nov 13 13:45:33 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+        <cluster_property_set id="cib-bootstrap-options">
Nov 13 13:45:33 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++         <nvpair id="cib-bootstrap-options-cluster-infrastructure" name="cluster-infrastructure" value="corosync"/>
Nov 13 13:45:33 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+        </cluster_property_set>
Nov 13 13:45:33 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+      </crm_config>
Nov 13 13:45:33 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+    </configuration>
Nov 13 13:45:33 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+  </cib>
Nov 13 13:45:33 [464] vm3        cib: ( callbacks.c:761   )    info: cib_process_request: 	Completed cib_apply_diff operation for section crm_config: OK (rc=0, origin=vm1/crmd/41, version=0.8.1)
Nov 13 13:45:33 [464] vm3        cib: ( callbacks.c:761   )    info: cib_process_request: 	Completed cib_query operation for section crm_config: OK (rc=0, origin=local/crmd/16, version=0.8.1)
Nov 13 13:45:33 [469] vm3       crmd: (   control.c:915   )   debug: config_query_callback: 	Call 16 : Parsing CIB options
Nov 13 13:45:33 [469] vm3       crmd: (   control.c:939   )   debug: config_query_callback: 	Shutdown escalation occurs after: 1200000ms
Nov 13 13:45:33 [469] vm3       crmd: (   control.c:946   )   debug: config_query_callback: 	Checking for expired actions every 900000ms
Nov 13 13:45:33 [464] vm3        cib: (   cib_ops.c:222   )    info: cib_process_replace: 	Digest matched on replace from vm1: b65668c649a0f8a465a42db6c017bc19
Nov 13 13:45:33 [469] vm3       crmd: (  messages.c:733   )   debug: handle_request: 	Raising I_JOIN_RESULT: join-4
Nov 13 13:45:33 [469] vm3       crmd: (       fsa.c:193   )   debug: s_crmd_fsa: 	Processing I_JOIN_RESULT: [ state=S_PENDING cause=C_HA_MESSAGE origin=route_message ]
Nov 13 13:45:33 [469] vm3       crmd: (join_client.:231   )   debug: do_cl_join_finalize_respond: 	Confirming join join-4: join_ack_nack
Nov 13 13:45:33 [469] vm3       crmd: (join_client.:240   )   debug: do_cl_join_finalize_respond: 	join-4: Join complete.  Sending local LRM status to vm1
Nov 13 13:45:33 [469] vm3       crmd: (     utils.c:2001  )   debug: attrd_update_delegate: 	Sent update: (null)=(null) for localhost
Nov 13 13:45:33 [469] vm3       crmd: (       fsa.c:193   )   debug: s_crmd_fsa: 	Processing I_NOT_DC: [ state=S_PENDING cause=C_HA_MESSAGE origin=do_cl_join_finalize_respond ]
Nov 13 13:45:33 [469] vm3       crmd: (      misc.c:47    )    info: do_log: 	FSA: Input I_NOT_DC from do_cl_join_finalize_respond() received in state S_PENDING
Nov 13 13:45:33 [469] vm3       crmd: (       fsa.c:502   )  notice: do_state_transition: 	State transition S_PENDING -> S_NOT_DC [ input=I_NOT_DC cause=C_HA_MESSAGE origin=do_cl_join_finalize_respond ]
Nov 13 13:45:33 [467] vm3      attrd: (      main.c:221   )   trace: attrd_ipc_dispatch: 	Processing msg from 469 (0x1cd1010)
Nov 13 13:45:33 [467] vm3      attrd: (      main.c:222   )   trace: attrd_ipc_dispatch: 	attrd_ipc_dispatch   <attrd_update_delegate t="attrd" src="crmd" task="refresh" attr_section="status" attr_is_remote="0"/>
Nov 13 13:45:33 [464] vm3        cib: (   cib_ops.c:258   )    info: cib_process_replace: 	Replaced 0.8.1 with 0.8.1 from vm1
Nov 13 13:45:33 [464] vm3        cib: (        io.c:596   )   debug: activateCibXml: 	Triggering CIB write for cib_replace op
Nov 13 13:45:33 [464] vm3        cib: (        io.c:738   )    info: write_cib_contents: 	Archived previous version as /var/lib/pacemaker/cib/cib-3.raw
Nov 13 13:45:33 [464] vm3        cib: (        io.c:748   )   debug: write_cib_contents: 	Writing CIB to disk
Nov 13 13:45:33 [464] vm3        cib: ( callbacks.c:761   )    info: cib_process_request: 	Completed cib_replace operation for section 'all': OK (rc=0, origin=vm1/crmd/47, version=0.8.1)
Nov 13 13:45:33 [464] vm3        cib: (        io.c:773   )    info: write_cib_contents: 	Wrote version 0.8.0 of the CIB to disk (digest: 9db35554f5ac4e48336f1bae33d89abc)
Nov 13 13:45:33 [464] vm3        cib: (        io.c:781   )   debug: write_cib_contents: 	Wrote digest 9db35554f5ac4e48336f1bae33d89abc to disk
Nov 13 13:45:33 [464] vm3        cib: (        io.c:259   )    info: retrieveCib: 	Reading cluster configuration from: /var/lib/pacemaker/cib/cib.ivmZKn (digest: /var/lib/pacemaker/cib/cib.fbDlGN)
Nov 13 13:45:33 [464] vm3        cib: (        io.c:786   )   debug: write_cib_contents: 	Activating /var/lib/pacemaker/cib/cib.ivmZKn
Nov 13 13:45:33 [465] vm3 stonith-ng: ( cib_utils.c:167   )   debug: Config update: 	Diff: --- 0.8.1
Nov 13 13:45:33 [465] vm3 stonith-ng: ( cib_utils.c:169   )   debug: Config update: 	Diff: +++ 0.8.2 761ef3207c9a00ca3a190046e551df6b
Nov 13 13:45:33 [465] vm3 stonith-ng: (       xml.c:1496  )   debug: Config update: 	-  <cib num_updates="1">
Nov 13 13:45:33 [465] vm3 stonith-ng: (       xml.c:1496  )   debug: Config update: 	-    <status>
Nov 13 13:45:33 [465] vm3 stonith-ng: (       xml.c:1496  )   debug: Config update: 	-      <node_state id="3232261519">
Nov 13 13:45:33 [465] vm3 stonith-ng: (       xml.c:1496  )   debug: Config update: 	--       <lrm id="3232261519">
Nov 13 13:45:33 [465] vm3 stonith-ng: (       xml.c:1496  )   debug: Config update: 	--         <lrm_resources/>
Nov 13 13:45:33 [465] vm3 stonith-ng: (       xml.c:1496  )   debug: Config update: 	--       </lrm>
Nov 13 13:45:33 [465] vm3 stonith-ng: (       xml.c:1496  )   debug: Config update: 	-      </node_state>
Nov 13 13:45:33 [465] vm3 stonith-ng: (       xml.c:1496  )   debug: Config update: 	-    </status>
Nov 13 13:45:33 [465] vm3 stonith-ng: (       xml.c:1496  )   debug: Config update: 	-  </cib>
Nov 13 13:45:33 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++ <cib epoch="8" num_updates="2" admin_epoch="0" validate-with="pacemaker-1.2" crm_feature_set="3.0.8" cib-last-written="Wed Nov 13 13:45:33 2013" update-origin="vm1" update-client="crmd" have-quorum="1" dc-uuid="3232261517"/>
Nov 13 13:45:33 [464] vm3        cib: ( callbacks.c:761   )    info: cib_process_request: 	Completed cib_apply_diff operation for section //node_state[@uname='vm3']/lrm: OK (rc=0, origin=vm1/crmd/51, version=0.8.2)
Nov 13 13:45:33 [465] vm3 stonith-ng: ( cib_utils.c:167   )   debug: Config update: 	Diff: --- 0.8.2
Nov 13 13:45:33 [465] vm3 stonith-ng: ( cib_utils.c:169   )   debug: Config update: 	Diff: +++ 0.8.3 70b3476c40002d2b8afe79070f45ed65
Nov 13 13:45:33 [465] vm3 stonith-ng: (       xml.c:1496  )   debug: Config update: 	-- <cib num_updates="2"/>
Nov 13 13:45:33 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+  <cib epoch="8" num_updates="3" admin_epoch="0" validate-with="pacemaker-1.2" crm_feature_set="3.0.8" cib-last-written="Wed Nov 13 13:45:33 2013" update-origin="vm1" update-client="crmd" have-quorum="1" dc-uuid="3232261517">
Nov 13 13:45:33 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+    <status>
Nov 13 13:45:33 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+      <node_state id="3232261519" uname="vm3" in_ccm="true" crmd="online" crm-debug-origin="do_lrm_query_internal" join="member" expected="member">
Nov 13 13:45:33 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++       <lrm id="3232261519">
Nov 13 13:45:33 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++         <lrm_resources/>
Nov 13 13:45:33 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++       </lrm>
Nov 13 13:45:33 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+      </node_state>
Nov 13 13:45:33 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+    </status>
Nov 13 13:45:33 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+  </cib>
Nov 13 13:45:33 [464] vm3        cib: ( callbacks.c:761   )    info: cib_process_request: 	Completed cib_apply_diff operation for section status: OK (rc=0, origin=vm1/crmd/52, version=0.8.3)
Nov 13 13:45:33 [465] vm3 stonith-ng: ( cib_utils.c:167   )   debug: Config update: 	Diff: --- 0.8.3
Nov 13 13:45:33 [465] vm3 stonith-ng: ( cib_utils.c:169   )   debug: Config update: 	Diff: +++ 0.8.4 6223769ad880e2cfd731d4ae34ea4603
Nov 13 13:45:33 [465] vm3 stonith-ng: (       xml.c:1496  )   debug: Config update: 	-  <cib num_updates="3">
Nov 13 13:45:33 [465] vm3 stonith-ng: (       xml.c:1496  )   debug: Config update: 	-    <status>
Nov 13 13:45:33 [465] vm3 stonith-ng: (       xml.c:1496  )   debug: Config update: 	-      <node_state id="3232261517">
Nov 13 13:45:33 [465] vm3 stonith-ng: (       xml.c:1496  )   debug: Config update: 	--       <lrm id="3232261517">
Nov 13 13:45:33 [465] vm3 stonith-ng: (       xml.c:1496  )   debug: Config update: 	--         <lrm_resources/>
Nov 13 13:45:33 [465] vm3 stonith-ng: (       xml.c:1496  )   debug: Config update: 	--       </lrm>
Nov 13 13:45:33 [465] vm3 stonith-ng: (       xml.c:1496  )   debug: Config update: 	-      </node_state>
Nov 13 13:45:33 [465] vm3 stonith-ng: (       xml.c:1496  )   debug: Config update: 	-    </status>
Nov 13 13:45:33 [465] vm3 stonith-ng: (       xml.c:1496  )   debug: Config update: 	-  </cib>
Nov 13 13:45:33 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++ <cib epoch="8" num_updates="4" admin_epoch="0" validate-with="pacemaker-1.2" crm_feature_set="3.0.8" cib-last-written="Wed Nov 13 13:45:33 2013" update-origin="vm1" update-client="crmd" have-quorum="1" dc-uuid="3232261517"/>
Nov 13 13:45:33 [464] vm3        cib: ( callbacks.c:761   )    info: cib_process_request: 	Completed cib_apply_diff operation for section //node_state[@uname='vm1']/lrm: OK (rc=0, origin=vm1/crmd/53, version=0.8.4)
Nov 13 13:45:33 [464] vm3        cib: (        io.c:738   )    info: write_cib_contents: 	Archived previous version as /var/lib/pacemaker/cib/cib-4.raw
Nov 13 13:45:33 [464] vm3        cib: (        io.c:748   )   debug: write_cib_contents: 	Writing CIB to disk
Nov 13 13:45:33 [464] vm3        cib: (        io.c:773   )    info: write_cib_contents: 	Wrote version 0.8.0 of the CIB to disk (digest: 9db35554f5ac4e48336f1bae33d89abc)
Nov 13 13:45:33 [464] vm3        cib: (        io.c:781   )   debug: write_cib_contents: 	Wrote digest 9db35554f5ac4e48336f1bae33d89abc to disk
Nov 13 13:45:33 [464] vm3        cib: (        io.c:259   )    info: retrieveCib: 	Reading cluster configuration from: /var/lib/pacemaker/cib/cib.yntCRv (digest: /var/lib/pacemaker/cib/cib.cKNXXV)
Nov 13 13:45:33 [464] vm3        cib: (        io.c:786   )   debug: write_cib_contents: 	Activating /var/lib/pacemaker/cib/cib.yntCRv
Nov 13 13:45:33 [465] vm3 stonith-ng: ( cib_utils.c:167   )   debug: Config update: 	Diff: --- 0.8.4
Nov 13 13:45:33 [465] vm3 stonith-ng: ( cib_utils.c:169   )   debug: Config update: 	Diff: +++ 0.8.5 ec848f93df58e6ea8292c890ceeba4d9
Nov 13 13:45:33 [465] vm3 stonith-ng: (       xml.c:1496  )   debug: Config update: 	-- <cib num_updates="4"/>
Nov 13 13:45:33 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+  <cib epoch="8" num_updates="5" admin_epoch="0" validate-with="pacemaker-1.2" crm_feature_set="3.0.8" cib-last-written="Wed Nov 13 13:45:33 2013" update-origin="vm1" update-client="crmd" have-quorum="1" dc-uuid="3232261517">
Nov 13 13:45:33 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+    <status>
Nov 13 13:45:33 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+      <node_state id="3232261517" uname="vm1" in_ccm="true" crmd="online" crm-debug-origin="do_lrm_query_internal" join="member" expected="member">
Nov 13 13:45:33 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++       <lrm id="3232261517">
Nov 13 13:45:33 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++         <lrm_resources/>
Nov 13 13:45:33 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++       </lrm>
Nov 13 13:45:33 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+      </node_state>
Nov 13 13:45:33 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+    </status>
Nov 13 13:45:33 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+  </cib>
Nov 13 13:45:33 [464] vm3        cib: ( callbacks.c:761   )    info: cib_process_request: 	Completed cib_apply_diff operation for section status: OK (rc=0, origin=vm1/crmd/54, version=0.8.5)
Nov 13 13:45:33 [465] vm3 stonith-ng: ( cib_utils.c:167   )   debug: Config update: 	Diff: --- 0.8.5
Nov 13 13:45:33 [465] vm3 stonith-ng: ( cib_utils.c:169   )   debug: Config update: 	Diff: +++ 0.8.6 b9a94f1abf0121139641067408b3dbe0
Nov 13 13:45:33 [465] vm3 stonith-ng: (       xml.c:1496  )   debug: Config update: 	-  <cib num_updates="5">
Nov 13 13:45:33 [465] vm3 stonith-ng: (       xml.c:1496  )   debug: Config update: 	-    <status>
Nov 13 13:45:33 [465] vm3 stonith-ng: (       xml.c:1496  )   debug: Config update: 	-      <node_state id="3232261518">
Nov 13 13:45:33 [465] vm3 stonith-ng: (       xml.c:1496  )   debug: Config update: 	--       <lrm id="3232261518">
Nov 13 13:45:33 [465] vm3 stonith-ng: (       xml.c:1496  )   debug: Config update: 	--         <lrm_resources/>
Nov 13 13:45:33 [465] vm3 stonith-ng: (       xml.c:1496  )   debug: Config update: 	--       </lrm>
Nov 13 13:45:33 [465] vm3 stonith-ng: (       xml.c:1496  )   debug: Config update: 	-      </node_state>
Nov 13 13:45:33 [465] vm3 stonith-ng: (       xml.c:1496  )   debug: Config update: 	-    </status>
Nov 13 13:45:33 [465] vm3 stonith-ng: (       xml.c:1496  )   debug: Config update: 	-  </cib>
Nov 13 13:45:33 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++ <cib epoch="8" num_updates="6" admin_epoch="0" validate-with="pacemaker-1.2" crm_feature_set="3.0.8" cib-last-written="Wed Nov 13 13:45:33 2013" update-origin="vm1" update-client="crmd" have-quorum="1" dc-uuid="3232261517"/>
Nov 13 13:45:33 [464] vm3        cib: ( callbacks.c:761   )    info: cib_process_request: 	Completed cib_apply_diff operation for section //node_state[@uname='vm2']/lrm: OK (rc=0, origin=vm1/crmd/55, version=0.8.6)
Nov 13 13:45:33 [465] vm3 stonith-ng: ( cib_utils.c:167   )   debug: Config update: 	Diff: --- 0.8.6
Nov 13 13:45:33 [465] vm3 stonith-ng: ( cib_utils.c:169   )   debug: Config update: 	Diff: +++ 0.8.7 8afde5af943ecd2a85a00f392002038c
Nov 13 13:45:33 [465] vm3 stonith-ng: (       xml.c:1496  )   debug: Config update: 	-- <cib num_updates="6"/>
Nov 13 13:45:33 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+  <cib epoch="8" num_updates="7" admin_epoch="0" validate-with="pacemaker-1.2" crm_feature_set="3.0.8" cib-last-written="Wed Nov 13 13:45:33 2013" update-origin="vm1" update-client="crmd" have-quorum="1" dc-uuid="3232261517">
Nov 13 13:45:33 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+    <status>
Nov 13 13:45:33 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+      <node_state id="3232261518" uname="vm2" in_ccm="true" crmd="online" crm-debug-origin="do_lrm_query_internal" join="member" expected="member">
Nov 13 13:45:33 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++       <lrm id="3232261518">
Nov 13 13:45:33 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++         <lrm_resources/>
Nov 13 13:45:33 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++       </lrm>
Nov 13 13:45:33 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+      </node_state>
Nov 13 13:45:33 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+    </status>
Nov 13 13:45:33 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+  </cib>
Nov 13 13:45:33 [464] vm3        cib: ( callbacks.c:761   )    info: cib_process_request: 	Completed cib_apply_diff operation for section status: OK (rc=0, origin=vm1/crmd/56, version=0.8.7)
Nov 13 13:45:35 [466] vm3       lrmd: (      lrmd.c:1072  )    info: process_lrmd_get_rsc_info: 	Resource 'F1' not found (0 active resources)
Nov 13 13:45:35 [466] vm3       lrmd: (      main.c:179   )   trace: lrmd_server_send_reply: 	sending reply to client (d1a560c5-bbbd-475e-a3cb-6fca0227063d) with msg id 28
Nov 13 13:45:35 [466] vm3       lrmd: (      lrmd.c:1313  )   debug: process_lrmd_message: 	Processed lrmd_rsc_info operation from d1a560c5-bbbd-475e-a3cb-6fca0227063d: rc=0, reply=0, notify=0, exit=4201920
Nov 13 13:45:35 [466] vm3       lrmd: (      lrmd.c:1047  )    info: process_lrmd_rsc_register: 	Added 'F1' to the rsc list (1 active resources)
Nov 13 13:45:35 [466] vm3       lrmd: (      lrmd.c:1313  )   debug: process_lrmd_message: 	Processed lrmd_rsc_register operation from d1a560c5-bbbd-475e-a3cb-6fca0227063d: rc=0, reply=1, notify=1, exit=4201920
Nov 13 13:45:35 [466] vm3       lrmd: (      main.c:179   )   trace: lrmd_server_send_reply: 	sending reply to client (d1a560c5-bbbd-475e-a3cb-6fca0227063d) with msg id 29
Nov 13 13:45:35 [466] vm3       lrmd: (      main.c:196   )   trace: lrmd_server_send_notify: 	sending notify to client (d1a560c5-bbbd-475e-a3cb-6fca0227063d)
Nov 13 13:45:35 [466] vm3       lrmd: (      main.c:179   )   trace: lrmd_server_send_reply: 	sending reply to client (d1a560c5-bbbd-475e-a3cb-6fca0227063d) with msg id 30
Nov 13 13:45:35 [466] vm3       lrmd: (      lrmd.c:1313  )   debug: process_lrmd_message: 	Processed lrmd_rsc_info operation from d1a560c5-bbbd-475e-a3cb-6fca0227063d: rc=0, reply=0, notify=0, exit=4201920
Nov 13 13:45:35 [469] vm3       crmd: (       lrm.c:1784  )    info: do_lrm_rsc_op: 	Performing key=10:1:7:154fb289-24e8-407e-9a03-69a510480b60 op=F1_monitor_0
Nov 13 13:45:35 [466] vm3       lrmd: (      lrmd.c:1313  )   debug: process_lrmd_message: 	Processed lrmd_rsc_exec operation from d1a560c5-bbbd-475e-a3cb-6fca0227063d: rc=5, reply=1, notify=0, exit=4201920
Nov 13 13:45:35 [466] vm3       lrmd: (      main.c:179   )   trace: lrmd_server_send_reply: 	sending reply to client (d1a560c5-bbbd-475e-a3cb-6fca0227063d) with msg id 31
Nov 13 13:45:35 [466] vm3       lrmd: (      lrmd.c:122   )   debug: log_execute: 	executing - rsc:F1 action:monitor call_id:5
Nov 13 13:45:35 [465] vm3 stonith-ng: (       ipc.c:327   )    info: crm_client_new: 	Connecting 0x1b3c150 for uid=0 gid=0 pid=466 id=68ee14d5-71bb-4403-b659-fc37fabfd715
Nov 13 13:45:35 [465] vm3 stonith-ng: ( ipc_setup.c:484   )   debug: handle_new_connection: 	IPC credentials authenticated (465-466-10)
Nov 13 13:45:35 [465] vm3 stonith-ng: (   ipc_shm.c:295   )   debug: qb_ipcs_shm_connect: 	connecting to client [466]
Nov 13 13:45:35 [465] vm3 stonith-ng: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:131085; real_size:135168; rb->word_size:33792
Nov 13 13:45:35 [465] vm3 stonith-ng: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:131085; real_size:135168; rb->word_size:33792
Nov 13 13:45:35 [465] vm3 stonith-ng: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:131085; real_size:135168; rb->word_size:33792
Nov 13 13:45:35 [466] vm3       lrmd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:131085; real_size:135168; rb->word_size:33792
Nov 13 13:45:35 [466] vm3       lrmd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:131085; real_size:135168; rb->word_size:33792
Nov 13 13:45:35 [466] vm3       lrmd: (ringbuffer.c:236   )   debug: qb_rb_open_2: 	shm size:131085; real_size:135168; rb->word_size:33792
Nov 13 13:45:35 [465] vm3 stonith-ng: (      main.c:87    )   trace: st_ipc_created: 	Connection created for 0x1b3c150
Nov 13 13:45:35 [465] vm3 stonith-ng: (      main.c:121   )   trace: st_ipc_dispatch: 	Flags 512/0 for command 1 from lrmd.466
Nov 13 13:45:35 [465] vm3 stonith-ng: (      main.c:133   )   trace: st_ipc_dispatch: 	Client[inbound]   <stonith_command t="stonith-ng" st_op="register" st_clientname="lrmd.466" st_clientid="68ee14d5-71bb-4403-b659-fc37fabfd715" st_clientnode="vm3"/>
Nov 13 13:45:35 [465] vm3 stonith-ng: (  commands.c:2049  )   debug: stonith_command: 	Processing register 1 from lrmd.466 (               0)
Nov 13 13:45:35 [466] vm3       lrmd: ( st_client.c:1639  )   debug: stonith_api_signon: 	Connection to STONITH successful
Nov 13 13:45:35 [465] vm3 stonith-ng: (  commands.c:2063  )    info: stonith_command: 	Processed register from lrmd.466: OK (0)
Nov 13 13:45:35 [465] vm3 stonith-ng: (      main.c:121   )   trace: st_ipc_dispatch: 	Flags 512/0 for command 2 from lrmd.466
Nov 13 13:45:35 [465] vm3 stonith-ng: (      main.c:133   )   trace: st_ipc_dispatch: 	Client[inbound]   <stonith_set_notification st_op="st_notify" st_notify_activate="st_notify_disconnect" st_clientid="68ee14d5-71bb-4403-b659-fc37fabfd715" st_clientname="lrmd.466" st_clientnode="vm3"/>
Nov 13 13:45:35 [465] vm3 stonith-ng: (  commands.c:2049  )   debug: stonith_command: 	Processing st_notify 2 from lrmd.466 (               0)
Nov 13 13:45:35 [465] vm3 stonith-ng: (  commands.c:1822  )   debug: handle_request: 	Setting st_notify_disconnect callbacks for lrmd.466 (68ee14d5-71bb-4403-b659-fc37fabfd715): ON
Nov 13 13:45:35 [466] vm3       lrmd: (      lrmd.c:104   )   debug: log_finished: 	finished - rsc:F1 action:monitor call_id:5  exit-code:7 exec-time:15ms queue-time:0ms
Nov 13 13:45:35 [466] vm3       lrmd: (      main.c:196   )   trace: lrmd_server_send_notify: 	sending notify to client (d1a560c5-bbbd-475e-a3cb-6fca0227063d)
Nov 13 13:45:35 [466] vm3       lrmd: (      lrmd.c:1072  )    info: process_lrmd_get_rsc_info: 	Resource 'pDummy' not found (1 active resources)
Nov 13 13:45:35 [466] vm3       lrmd: (      main.c:179   )   trace: lrmd_server_send_reply: 	sending reply to client (d1a560c5-bbbd-475e-a3cb-6fca0227063d) with msg id 32
Nov 13 13:45:35 [466] vm3       lrmd: (      lrmd.c:1313  )   debug: process_lrmd_message: 	Processed lrmd_rsc_info operation from d1a560c5-bbbd-475e-a3cb-6fca0227063d: rc=0, reply=0, notify=0, exit=4201920
Nov 13 13:45:35 [466] vm3       lrmd: (      lrmd.c:1047  )    info: process_lrmd_rsc_register: 	Added 'pDummy' to the rsc list (2 active resources)
Nov 13 13:45:35 [466] vm3       lrmd: (      lrmd.c:1313  )   debug: process_lrmd_message: 	Processed lrmd_rsc_register operation from d1a560c5-bbbd-475e-a3cb-6fca0227063d: rc=0, reply=1, notify=1, exit=4201920
Nov 13 13:45:35 [466] vm3       lrmd: (      main.c:179   )   trace: lrmd_server_send_reply: 	sending reply to client (d1a560c5-bbbd-475e-a3cb-6fca0227063d) with msg id 33
Nov 13 13:45:35 [466] vm3       lrmd: (      main.c:196   )   trace: lrmd_server_send_notify: 	sending notify to client (d1a560c5-bbbd-475e-a3cb-6fca0227063d)
Nov 13 13:45:35 [466] vm3       lrmd: (      main.c:179   )   trace: lrmd_server_send_reply: 	sending reply to client (d1a560c5-bbbd-475e-a3cb-6fca0227063d) with msg id 34
Nov 13 13:45:35 [469] vm3       crmd: (       lrm.c:1784  )    info: do_lrm_rsc_op: 	Performing key=11:1:7:154fb289-24e8-407e-9a03-69a510480b60 op=pDummy_monitor_0
Nov 13 13:45:35 [466] vm3       lrmd: (      lrmd.c:1313  )   debug: process_lrmd_message: 	Processed lrmd_rsc_info operation from d1a560c5-bbbd-475e-a3cb-6fca0227063d: rc=0, reply=0, notify=0, exit=4201920
Nov 13 13:45:35 [466] vm3       lrmd: (      lrmd.c:1313  )   debug: process_lrmd_message: 	Processed lrmd_rsc_exec operation from d1a560c5-bbbd-475e-a3cb-6fca0227063d: rc=9, reply=1, notify=0, exit=4201920
Nov 13 13:45:35 [466] vm3       lrmd: (      main.c:179   )   trace: lrmd_server_send_reply: 	sending reply to client (d1a560c5-bbbd-475e-a3cb-6fca0227063d) with msg id 35
Nov 13 13:45:35 [469] vm3       crmd: (     utils.c:2104  )   debug: create_operation_update: 	do_update_resource: Updating resource F1 after monitor op complete (interval=0)
Nov 13 13:45:35 [466] vm3       lrmd: (      lrmd.c:122   )   debug: log_execute: 	executing - rsc:pDummy action:monitor call_id:9
Nov 13 13:45:35 [465] vm3 stonith-ng: (  commands.c:2063  )    info: stonith_command: 	Processed st_notify from lrmd.466: OK (0)
Dummy(pDummy)[490]:	2013/11/13_13:45:35 DEBUG: pDummy monitor : 7
Nov 13 13:45:35 [466] vm3       lrmd: (services_lin:301   )   debug: operation_finished: 	pDummy_monitor_0:490 - exited with rc=7
Nov 13 13:45:35 [466] vm3       lrmd: (services_lin:306   )   debug: operation_finished: 	pDummy_monitor_0:490:stderr [ -- empty -- ]
Nov 13 13:45:35 [466] vm3       lrmd: (services_lin:310   )   debug: operation_finished: 	pDummy_monitor_0:490:stdout [ -- empty -- ]
Nov 13 13:45:35 [466] vm3       lrmd: (      lrmd.c:104   )   debug: log_finished: 	finished - rsc:pDummy action:monitor call_id:9 pid:490 exit-code:7 exec-time:130ms queue-time:2ms
Nov 13 13:45:35 [466] vm3       lrmd: (      main.c:196   )   trace: lrmd_server_send_notify: 	sending notify to client (d1a560c5-bbbd-475e-a3cb-6fca0227063d)
Nov 13 13:45:36 [465] vm3 stonith-ng: ( cib_utils.c:167   )   debug: Config update: 	Diff: --- 0.8.7
Nov 13 13:45:36 [465] vm3 stonith-ng: ( cib_utils.c:169   )   debug: Config update: 	Diff: +++ 0.8.8 26ed7a0b48fc4ae623a5aee9a3d14dcf
Nov 13 13:45:36 [465] vm3 stonith-ng: (       xml.c:1496  )   debug: Config update: 	-- <cib num_updates="7"/>
Nov 13 13:45:36 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+  <cib epoch="8" num_updates="8" admin_epoch="0" validate-with="pacemaker-1.2" crm_feature_set="3.0.8" cib-last-written="Wed Nov 13 13:45:33 2013" update-origin="vm1" update-client="crmd" have-quorum="1" dc-uuid="3232261517">
Nov 13 13:45:36 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+    <status>
Nov 13 13:45:36 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+      <node_state id="3232261518" uname="vm2" in_ccm="true" crmd="online" crm-debug-origin="do_update_resource" join="member" expected="member">
Nov 13 13:45:36 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+        <lrm id="3232261518">
Nov 13 13:45:36 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+          <lrm_resources>
Nov 13 13:45:36 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++           <lrm_resource id="F1" type="external/libvirt" class="stonith">
Nov 13 13:45:36 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++             <lrm_rsc_op id="F1_last_0" operation_key="F1_monitor_0" operation="monitor" crm-debug-origin="do_update_resource" crm_feature_set="3.0.8" transition-key="7:1:7:154fb289-24e8-407e-9a03-69a510480b60" transition-magic="0:7;7:1:7:154fb289-24e8-407e-9a03-69a510480b60" call-id="5" rc-code="7" op-status="0" interval="0" last-run="1384317935" last-rc-change="1384317935" exec-time="13" queue-time="0" op-digest="28866
Nov 13 13:45:36 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++           </lrm_resource>
Nov 13 13:45:36 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+          </lrm_resources>
Nov 13 13:45:36 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+        </lrm>
Nov 13 13:45:36 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+      </node_state>
Nov 13 13:45:36 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+    </status>
Nov 13 13:45:36 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+  </cib>
Nov 13 13:45:36 [464] vm3        cib: ( callbacks.c:761   )    info: cib_process_request: 	Completed cib_apply_diff operation for section status: OK (rc=0, origin=vm1/crmd/17, version=0.8.8)
Nov 13 13:45:36 [465] vm3 stonith-ng: ( cib_utils.c:167   )   debug: Config update: 	Diff: --- 0.8.8
Nov 13 13:45:36 [465] vm3 stonith-ng: ( cib_utils.c:169   )   debug: Config update: 	Diff: +++ 0.8.9 a620ba287b7786990c988e5680eea772
Nov 13 13:45:36 [465] vm3 stonith-ng: (       xml.c:1496  )   debug: Config update: 	-- <cib num_updates="8"/>
Nov 13 13:45:36 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+  <cib epoch="8" num_updates="9" admin_epoch="0" validate-with="pacemaker-1.2" crm_feature_set="3.0.8" cib-last-written="Wed Nov 13 13:45:33 2013" update-origin="vm1" update-client="crmd" have-quorum="1" dc-uuid="3232261517">
Nov 13 13:45:36 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+    <status>
Nov 13 13:45:36 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+      <node_state id="3232261517" uname="vm1" in_ccm="true" crmd="online" crm-debug-origin="do_update_resource" join="member" expected="member">
Nov 13 13:45:36 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+        <lrm id="3232261517">
Nov 13 13:45:36 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+          <lrm_resources>
Nov 13 13:45:36 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++           <lrm_resource id="F1" type="external/libvirt" class="stonith">
Nov 13 13:45:36 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++             <lrm_rsc_op id="F1_last_0" operation_key="F1_monitor_0" operation="monitor" crm-debug-origin="do_update_resource" crm_feature_set="3.0.8" transition-key="4:1:7:154fb289-24e8-407e-9a03-69a510480b60" transition-magic="0:7;4:1:7:154fb289-24e8-407e-9a03-69a510480b60" call-id="5" rc-code="7" op-status="0" interval="0" last-run="1384317935" last-rc-change="1384317935" exec-time="23" queue-time="0" op-digest="28866
Nov 13 13:45:36 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++           </lrm_resource>
Nov 13 13:45:36 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+          </lrm_resources>
Nov 13 13:45:36 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+        </lrm>
Nov 13 13:45:36 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+      </node_state>
Nov 13 13:45:36 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+    </status>
Nov 13 13:45:36 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+  </cib>
Nov 13 13:45:36 [464] vm3        cib: ( callbacks.c:761   )    info: cib_process_request: 	Completed cib_apply_diff operation for section status: OK (rc=0, origin=vm1/crmd/61, version=0.8.9)
Nov 13 13:45:36 [464] vm3        cib: ( callbacks.c:688   )    info: cib_process_request: 	Forwarding cib_modify operation for section status to master (origin=local/crmd/17)
Nov 13 13:45:36 [469] vm3       crmd: (       lrm.c:2101  )    info: process_lrm_event: 	LRM operation F1_monitor_0 (call=5, rc=7, cib-update=17, confirmed=true) not running
Nov 13 13:45:36 [469] vm3       crmd: (       lrm.c:122   )   debug: update_history_cache: 	Updating history for 'F1' with monitor op
Nov 13 13:45:36 [469] vm3       crmd: (     utils.c:2104  )   debug: create_operation_update: 	do_update_resource: Updating resource pDummy after monitor op complete (interval=0)
Nov 13 13:45:36 [465] vm3 stonith-ng: ( cib_utils.c:167   )   debug: Config update: 	Diff: --- 0.8.9
Nov 13 13:45:36 [465] vm3 stonith-ng: ( cib_utils.c:169   )   debug: Config update: 	Diff: +++ 0.8.10 1bf63487cfb2465e5f9305b2b310410c
Nov 13 13:45:36 [465] vm3 stonith-ng: (       xml.c:1496  )   debug: Config update: 	-- <cib num_updates="9"/>
Nov 13 13:45:36 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+  <cib epoch="8" num_updates="10" admin_epoch="0" validate-with="pacemaker-1.2" crm_feature_set="3.0.8" cib-last-written="Wed Nov 13 13:45:33 2013" update-origin="vm1" update-client="crmd" have-quorum="1" dc-uuid="3232261517">
Nov 13 13:45:36 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+    <status>
Nov 13 13:45:36 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+      <node_state id="3232261519" uname="vm3" in_ccm="true" crmd="online" crm-debug-origin="do_update_resource" join="member" expected="member">
Nov 13 13:45:36 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+        <lrm id="3232261519">
Nov 13 13:45:36 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+          <lrm_resources>
Nov 13 13:45:36 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++           <lrm_resource id="F1" type="external/libvirt" class="stonith">
Nov 13 13:45:36 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++             <lrm_rsc_op id="F1_last_0" operation_key="F1_monitor_0" operation="monitor" crm-debug-origin="do_update_resource" crm_feature_set="3.0.8" transition-key="10:1:7:154fb289-24e8-407e-9a03-69a510480b60" transition-magic="0:7;10:1:7:154fb289-24e8-407e-9a03-69a510480b60" call-id="5" rc-code="7" op-status="0" interval="0" last-run="1384317935" last-rc-change="1384317935" exec-time="15" queue-time="0" op-digest="288
Nov 13 13:45:36 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++           </lrm_resource>
Nov 13 13:45:36 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+          </lrm_resources>
Nov 13 13:45:36 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+        </lrm>
Nov 13 13:45:36 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+      </node_state>
Nov 13 13:45:36 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+    </status>
Nov 13 13:45:36 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+  </cib>
Nov 13 13:45:36 [464] vm3        cib: ( callbacks.c:761   )    info: cib_process_request: 	Completed cib_apply_diff operation for section status: OK (rc=0, origin=vm1/crmd/17, version=0.8.10)
Nov 13 13:45:36 [465] vm3 stonith-ng: ( cib_utils.c:167   )   debug: Config update: 	Diff: --- 0.8.10
Nov 13 13:45:36 [465] vm3 stonith-ng: ( cib_utils.c:169   )   debug: Config update: 	Diff: +++ 0.8.11 66f83af99c165cdf0a74a520b9474f1b
Nov 13 13:45:36 [465] vm3 stonith-ng: (       xml.c:1496  )   debug: Config update: 	-- <cib num_updates="10"/>
Nov 13 13:45:36 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+  <cib epoch="8" num_updates="11" admin_epoch="0" validate-with="pacemaker-1.2" crm_feature_set="3.0.8" cib-last-written="Wed Nov 13 13:45:33 2013" update-origin="vm1" update-client="crmd" have-quorum="1" dc-uuid="3232261517">
Nov 13 13:45:36 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+    <status>
Nov 13 13:45:36 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+      <node_state id="3232261518" uname="vm2" in_ccm="true" crmd="online" crm-debug-origin="do_update_resource" join="member" expected="member">
Nov 13 13:45:36 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+        <lrm id="3232261518">
Nov 13 13:45:36 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+          <lrm_resources>
Nov 13 13:45:36 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++           <lrm_resource id="pDummy" type="Dummy" class="ocf" provider="pacemaker">
Nov 13 13:45:36 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++             <lrm_rsc_op id="pDummy_last_0" operation_key="pDummy_monitor_0" operation="monitor" crm-debug-origin="do_update_resource" crm_feature_set="3.0.8" transition-key="8:1:7:154fb289-24e8-407e-9a03-69a510480b60" transition-magic="0:7;8:1:7:154fb289-24e8-407e-9a03-69a510480b60" call-id="9" rc-code="7" op-status="0" interval="0" last-run="1384317935" last-rc-change="1384317935" exec-time="120" queue-time="2" op-dige
Nov 13 13:45:36 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++           </lrm_resource>
Nov 13 13:45:36 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+          </lrm_resources>
Nov 13 13:45:36 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+        </lrm>
Nov 13 13:45:36 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+      </node_state>
Nov 13 13:45:36 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+    </status>
Nov 13 13:45:36 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+  </cib>
Nov 13 13:45:36 [464] vm3        cib: ( callbacks.c:761   )    info: cib_process_request: 	Completed cib_apply_diff operation for section status: OK (rc=0, origin=vm1/crmd/18, version=0.8.11)
Nov 13 13:45:36 [465] vm3 stonith-ng: ( cib_utils.c:167   )   debug: Config update: 	Diff: --- 0.8.11
Nov 13 13:45:36 [465] vm3 stonith-ng: ( cib_utils.c:169   )   debug: Config update: 	Diff: +++ 0.8.12 69b87249349fec166963d00574c1e8d9
Nov 13 13:45:36 [465] vm3 stonith-ng: (       xml.c:1496  )   debug: Config update: 	-- <cib num_updates="11"/>
Nov 13 13:45:36 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+  <cib epoch="8" num_updates="12" admin_epoch="0" validate-with="pacemaker-1.2" crm_feature_set="3.0.8" cib-last-written="Wed Nov 13 13:45:33 2013" update-origin="vm1" update-client="crmd" have-quorum="1" dc-uuid="3232261517">
Nov 13 13:45:36 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+    <status>
Nov 13 13:45:36 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+      <node_state id="3232261517" uname="vm1" in_ccm="true" crmd="online" crm-debug-origin="do_update_resource" join="member" expected="member">
Nov 13 13:45:36 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+        <lrm id="3232261517">
Nov 13 13:45:36 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+          <lrm_resources>
Nov 13 13:45:36 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++           <lrm_resource id="pDummy" type="Dummy" class="ocf" provider="pacemaker">
Nov 13 13:45:36 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++             <lrm_rsc_op id="pDummy_last_0" operation_key="pDummy_monitor_0" operation="monitor" crm-debug-origin="do_update_resource" crm_feature_set="3.0.8" transition-key="5:1:7:154fb289-24e8-407e-9a03-69a510480b60" transition-magic="0:7;5:1:7:154fb289-24e8-407e-9a03-69a510480b60" call-id="9" rc-code="7" op-status="0" interval="0" last-run="1384317935" last-rc-change="1384317935" exec-time="121" queue-time="2" op-dige
Nov 13 13:45:36 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++           </lrm_resource>
Nov 13 13:45:36 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+          </lrm_resources>
Nov 13 13:45:36 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+        </lrm>
Nov 13 13:45:36 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+      </node_state>
Nov 13 13:45:36 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+    </status>
Nov 13 13:45:36 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+  </cib>
Nov 13 13:45:36 [464] vm3        cib: ( callbacks.c:761   )    info: cib_process_request: 	Completed cib_apply_diff operation for section status: OK (rc=0, origin=vm1/crmd/62, version=0.8.12)
Nov 13 13:45:36 [467] vm3      attrd: (  commands.c:423   )   trace: attrd_peer_update: 	Unchanged probe_complete[vm1] from vm1 is true
Nov 13 13:45:36 [467] vm3      attrd: (  commands.c:423   )   trace: attrd_peer_update: 	Unchanged probe_complete[vm2] from vm2 is true
Nov 13 13:45:36 [469] vm3       crmd: (services_lin:604   )    info: services_os_action_execute: 	Managed Dummy_meta-data_0 process 512 exited with rc=0
Nov 13 13:45:36 [469] vm3       crmd: (       lrm.c:565   )   debug: get_rsc_restart_list: 	Attr state is not reloadable
Nov 13 13:45:36 [469] vm3       crmd: (       lrm.c:565   )   debug: get_rsc_restart_list: 	Attr op_sleep is not reloadable
Nov 13 13:45:36 [469] vm3       crmd: (       lrm.c:2101  )  notice: process_lrm_event: 	LRM operation pDummy_monitor_0 (call=9, rc=7, cib-update=18, confirmed=true) not running
Nov 13 13:45:36 [469] vm3       crmd: (       lrm.c:122   )   debug: update_history_cache: 	Updating history for 'pDummy' with monitor op
Nov 13 13:45:36 [464] vm3        cib: ( callbacks.c:688   )    info: cib_process_request: 	Forwarding cib_modify operation for section status to master (origin=local/crmd/18)
Nov 13 13:45:36 [465] vm3 stonith-ng: ( cib_utils.c:167   )   debug: Config update: 	Diff: --- 0.8.12
Nov 13 13:45:36 [465] vm3 stonith-ng: ( cib_utils.c:169   )   debug: Config update: 	Diff: +++ 0.8.13 92a008f0ef62d500c66d49ae54262e4e
Nov 13 13:45:36 [465] vm3 stonith-ng: (       xml.c:1496  )   debug: Config update: 	-- <cib num_updates="12"/>
Nov 13 13:45:36 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+  <cib epoch="8" num_updates="13" admin_epoch="0" validate-with="pacemaker-1.2" crm_feature_set="3.0.8" cib-last-written="Wed Nov 13 13:45:33 2013" update-origin="vm1" update-client="crmd" have-quorum="1" dc-uuid="3232261517">
Nov 13 13:45:36 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+    <status>
Nov 13 13:45:36 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+      <node_state id="3232261519" uname="vm3" in_ccm="true" crmd="online" crm-debug-origin="do_update_resource" join="member" expected="member">
Nov 13 13:45:36 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+        <lrm id="3232261519">
Nov 13 13:45:36 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+          <lrm_resources>
Nov 13 13:45:36 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++           <lrm_resource id="pDummy" type="Dummy" class="ocf" provider="pacemaker">
Nov 13 13:45:36 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++             <lrm_rsc_op id="pDummy_last_0" operation_key="pDummy_monitor_0" operation="monitor" crm-debug-origin="do_update_resource" crm_feature_set="3.0.8" transition-key="11:1:7:154fb289-24e8-407e-9a03-69a510480b60" transition-magic="0:7;11:1:7:154fb289-24e8-407e-9a03-69a510480b60" call-id="9" rc-code="7" op-status="0" interval="0" last-run="1384317935" last-rc-change="1384317935" exec-time="130" queue-time="2" op-di
Nov 13 13:45:36 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++           </lrm_resource>
Nov 13 13:45:36 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+          </lrm_resources>
Nov 13 13:45:36 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+        </lrm>
Nov 13 13:45:36 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+      </node_state>
Nov 13 13:45:36 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+    </status>
Nov 13 13:45:36 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+  </cib>
Nov 13 13:45:36 [467] vm3      attrd: (      main.c:221   )   trace: attrd_ipc_dispatch: 	Processing msg from 469 (0x1cd1010)
Nov 13 13:45:36 [467] vm3      attrd: (      main.c:222   )   trace: attrd_ipc_dispatch: 	attrd_ipc_dispatch   <attrd_update_delegate t="attrd" src="crmd" task="update" attr_name="probe_complete" attr_value="true" attr_section="status" attr_host="vm3" attr_is_remote="0"/>
Nov 13 13:45:36 [467] vm3      attrd: (  commands.c:231   )    info: attrd_client_message: 	Broadcasting probe_complete[vm3] = true
Nov 13 13:45:36 [469] vm3       crmd: (     utils.c:2001  )   debug: attrd_update_delegate: 	Sent update: probe_complete=true for vm3
Nov 13 13:45:36 [469] vm3       crmd: (       lrm.c:1780  )   debug: do_lrm_rsc_op: 	Stopped 0 recurring operations in preparation for pDummy_start_0
Nov 13 13:45:36 [467] vm3      attrd: (  commands.c:423   )   trace: attrd_peer_update: 	Unchanged probe_complete[vm3] from vm3 is true
Nov 13 13:45:36 [464] vm3        cib: ( callbacks.c:761   )    info: cib_process_request: 	Completed cib_apply_diff operation for section status: OK (rc=0, origin=vm1/crmd/18, version=0.8.13)
Nov 13 13:45:36 [469] vm3       crmd: (       lrm.c:1784  )    info: do_lrm_rsc_op: 	Performing key=14:1:0:154fb289-24e8-407e-9a03-69a510480b60 op=pDummy_start_0
Nov 13 13:45:36 [466] vm3       lrmd: (      lrmd.c:1313  )   debug: process_lrmd_message: 	Processed lrmd_rsc_exec operation from d1a560c5-bbbd-475e-a3cb-6fca0227063d: rc=10, reply=1, notify=0, exit=4201920
Nov 13 13:45:36 [466] vm3       lrmd: (      main.c:179   )   trace: lrmd_server_send_reply: 	sending reply to client (d1a560c5-bbbd-475e-a3cb-6fca0227063d) with msg id 39
Nov 13 13:45:36 [466] vm3       lrmd: (      lrmd.c:122   )    info: log_execute: 	executing - rsc:pDummy action:start call_id:10
Dummy(pDummy)[516]:	2013/11/13_13:45:36 DEBUG: pDummy start : 0
Nov 13 13:45:36 [466] vm3       lrmd: (services_lin:301   )   debug: operation_finished: 	pDummy_start_0:516 - exited with rc=0
Nov 13 13:45:36 [466] vm3       lrmd: (services_lin:306   )   debug: operation_finished: 	pDummy_start_0:516:stderr [ -- empty -- ]
Nov 13 13:45:36 [466] vm3       lrmd: (services_lin:310   )   debug: operation_finished: 	pDummy_start_0:516:stdout [ -- empty -- ]
Nov 13 13:45:36 [466] vm3       lrmd: (      lrmd.c:104   )    info: log_finished: 	finished - rsc:pDummy action:start call_id:10 pid:516 exit-code:0 exec-time:51ms queue-time:0ms
Nov 13 13:45:36 [466] vm3       lrmd: (      main.c:196   )   trace: lrmd_server_send_notify: 	sending notify to client (d1a560c5-bbbd-475e-a3cb-6fca0227063d)
Nov 13 13:45:36 [469] vm3       crmd: (     utils.c:2104  )   debug: create_operation_update: 	do_update_resource: Updating resource pDummy after start op complete (interval=0)
Nov 13 13:45:36 [469] vm3       crmd: (       lrm.c:2101  )  notice: process_lrm_event: 	LRM operation pDummy_start_0 (call=10, rc=0, cib-update=19, confirmed=true) ok
Nov 13 13:45:36 [469] vm3       crmd: (       lrm.c:122   )   debug: update_history_cache: 	Updating history for 'pDummy' with start op
Nov 13 13:45:36 [464] vm3        cib: ( callbacks.c:688   )    info: cib_process_request: 	Forwarding cib_modify operation for section status to master (origin=local/crmd/19)
Nov 13 13:45:36 [469] vm3       crmd: (       lrm.c:1784  )    info: do_lrm_rsc_op: 	Performing key=15:1:0:154fb289-24e8-407e-9a03-69a510480b60 op=pDummy_monitor_10000
Nov 13 13:45:36 [466] vm3       lrmd: (      lrmd.c:1313  )   debug: process_lrmd_message: 	Processed lrmd_rsc_exec operation from d1a560c5-bbbd-475e-a3cb-6fca0227063d: rc=11, reply=1, notify=0, exit=4201920
Nov 13 13:45:36 [466] vm3       lrmd: (      main.c:179   )   trace: lrmd_server_send_reply: 	sending reply to client (d1a560c5-bbbd-475e-a3cb-6fca0227063d) with msg id 41
Nov 13 13:45:36 [466] vm3       lrmd: (      lrmd.c:122   )   debug: log_execute: 	executing - rsc:pDummy action:monitor call_id:11
Nov 13 13:45:36 [465] vm3 stonith-ng: ( cib_utils.c:167   )   debug: Config update: 	Diff: --- 0.8.13
Nov 13 13:45:36 [465] vm3 stonith-ng: ( cib_utils.c:169   )   debug: Config update: 	Diff: +++ 0.8.14 5fa7d2824d05d801e557350c4ebe869b
Nov 13 13:45:36 [465] vm3 stonith-ng: (       xml.c:1496  )   debug: Config update: 	-  <cib num_updates="13">
Nov 13 13:45:36 [465] vm3 stonith-ng: (       xml.c:1496  )   debug: Config update: 	-    <status>
Nov 13 13:45:36 [465] vm3 stonith-ng: (       xml.c:1496  )   debug: Config update: 	-      <node_state id="3232261519">
Nov 13 13:45:36 [465] vm3 stonith-ng: (       xml.c:1496  )   debug: Config update: 	-        <lrm id="3232261519">
Nov 13 13:45:36 [465] vm3 stonith-ng: (       xml.c:1496  )   debug: Config update: 	-          <lrm_resources>
Nov 13 13:45:36 [465] vm3 stonith-ng: (       xml.c:1496  )   debug: Config update: 	-            <lrm_resource id="pDummy">
Nov 13 13:45:36 [465] vm3 stonith-ng: (       xml.c:1496  )   debug: Config update: 	--             <lrm_rsc_op operation_key="pDummy_monitor_0" operation="monitor" transition-key="11:1:7:154fb289-24e8-407e-9a03-69a510480b60" transition-magic="0:7;11:1:7:154fb289-24e8-407e-9a03-69a510480b60" call-id="9" rc-code="7" last-run="1384317935" last-rc-change="1384317935" exec-time="130" queue-time="2" id="pDummy_last_0"/>
Nov 13 13:45:36 [465] vm3 stonith-ng: (       xml.c:1496  )   debug: Config update: 	-            </lrm_resource>
Nov 13 13:45:36 [465] vm3 stonith-ng: (       xml.c:1496  )   debug: Config update: 	-          </lrm_resources>
Nov 13 13:45:36 [465] vm3 stonith-ng: (       xml.c:1496  )   debug: Config update: 	-        </lrm>
Nov 13 13:45:36 [465] vm3 stonith-ng: (       xml.c:1496  )   debug: Config update: 	-      </node_state>
Nov 13 13:45:36 [465] vm3 stonith-ng: (       xml.c:1496  )   debug: Config update: 	-    </status>
Nov 13 13:45:36 [465] vm3 stonith-ng: (       xml.c:1496  )   debug: Config update: 	-  </cib>
Nov 13 13:45:36 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+  <cib epoch="8" num_updates="14" admin_epoch="0" validate-with="pacemaker-1.2" crm_feature_set="3.0.8" cib-last-written="Wed Nov 13 13:45:33 2013" update-origin="vm1" update-client="crmd" have-quorum="1" dc-uuid="3232261517">
Nov 13 13:45:36 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+    <status>
Nov 13 13:45:36 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+      <node_state id="3232261519" uname="vm3" in_ccm="true" crmd="online" crm-debug-origin="do_update_resource" join="member" expected="member">
Nov 13 13:45:36 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+        <lrm id="3232261519">
Nov 13 13:45:36 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+          <lrm_resources>
Nov 13 13:45:36 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+            <lrm_resource id="pDummy" type="Dummy" class="ocf" provider="pacemaker">
Nov 13 13:45:36 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++             <lrm_rsc_op id="pDummy_last_0" operation_key="pDummy_start_0" operation="start" crm-debug-origin="do_update_resource" crm_feature_set="3.0.8" transition-key="14:1:0:154fb289-24e8-407e-9a03-69a510480b60" transition-magic="0:0;14:1:0:154fb289-24e8-407e-9a03-69a510480b60" call-id="10" rc-code="0" op-status="0" interval="0" last-run="1384317936" last-rc-change="1384317936" exec-time="51" queue-time="0" op-digest
Nov 13 13:45:36 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+            </lrm_resource>
Nov 13 13:45:36 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+          </lrm_resources>
Nov 13 13:45:36 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+        </lrm>
Nov 13 13:45:36 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+      </node_state>
Nov 13 13:45:36 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+    </status>
Nov 13 13:45:36 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+  </cib>
Nov 13 13:45:36 [464] vm3        cib: ( callbacks.c:761   )    info: cib_process_request: 	Completed cib_apply_diff operation for section status: OK (rc=0, origin=vm1/crmd/19, version=0.8.14)
Dummy(pDummy)[525]:	2013/11/13_13:45:36 DEBUG: pDummy monitor : 0
Nov 13 13:45:36 [466] vm3       lrmd: (services_lin:301   )   debug: operation_finished: 	pDummy_monitor_10000:525 - exited with rc=0
Nov 13 13:45:36 [466] vm3       lrmd: (services_lin:306   )   debug: operation_finished: 	pDummy_monitor_10000:525:stderr [ -- empty -- ]
Nov 13 13:45:36 [466] vm3       lrmd: (services_lin:310   )   debug: operation_finished: 	pDummy_monitor_10000:525:stdout [ -- empty -- ]
Nov 13 13:45:36 [466] vm3       lrmd: (      lrmd.c:104   )   debug: log_finished: 	finished - rsc:pDummy action:monitor call_id:11 pid:525 exit-code:0 exec-time:48ms queue-time:1ms
Nov 13 13:45:36 [466] vm3       lrmd: (      main.c:196   )   trace: lrmd_server_send_notify: 	sending notify to client (d1a560c5-bbbd-475e-a3cb-6fca0227063d)
Nov 13 13:45:36 [469] vm3       crmd: (     utils.c:2104  )   debug: create_operation_update: 	do_update_resource: Updating resource pDummy after monitor op complete (interval=10000)
Nov 13 13:45:36 [469] vm3       crmd: (       lrm.c:2101  )  notice: process_lrm_event: 	LRM operation pDummy_monitor_10000 (call=11, rc=0, cib-update=20, confirmed=false) ok
Nov 13 13:45:36 [469] vm3       crmd: (       lrm.c:122   )   debug: update_history_cache: 	Updating history for 'pDummy' with monitor op
Nov 13 13:45:36 [464] vm3        cib: ( callbacks.c:688   )    info: cib_process_request: 	Forwarding cib_modify operation for section status to master (origin=local/crmd/20)
Nov 13 13:45:36 [465] vm3 stonith-ng: ( cib_utils.c:167   )   debug: Config update: 	Diff: --- 0.8.14
Nov 13 13:45:36 [465] vm3 stonith-ng: ( cib_utils.c:169   )   debug: Config update: 	Diff: +++ 0.8.15 c644ff6f35c372b7784ca430760c5a21
Nov 13 13:45:36 [465] vm3 stonith-ng: (       xml.c:1496  )   debug: Config update: 	-- <cib num_updates="14"/>
Nov 13 13:45:36 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+  <cib epoch="8" num_updates="15" admin_epoch="0" validate-with="pacemaker-1.2" crm_feature_set="3.0.8" cib-last-written="Wed Nov 13 13:45:33 2013" update-origin="vm1" update-client="crmd" have-quorum="1" dc-uuid="3232261517">
Nov 13 13:45:36 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+    <status>
Nov 13 13:45:36 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+      <node_state id="3232261519" uname="vm3" in_ccm="true" crmd="online" crm-debug-origin="do_update_resource" join="member" expected="member">
Nov 13 13:45:36 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+        <lrm id="3232261519">
Nov 13 13:45:36 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+          <lrm_resources>
Nov 13 13:45:36 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+            <lrm_resource id="pDummy" type="Dummy" class="ocf" provider="pacemaker">
Nov 13 13:45:36 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++             <lrm_rsc_op id="pDummy_monitor_10000" operation_key="pDummy_monitor_10000" operation="monitor" crm-debug-origin="do_update_resource" crm_feature_set="3.0.8" transition-key="15:1:0:154fb289-24e8-407e-9a03-69a510480b60" transition-magic="0:0;15:1:0:154fb289-24e8-407e-9a03-69a510480b60" call-id="11" rc-code="0" op-status="0" interval="10000" last-rc-change="1384317936" exec-time="48" queue-time="1" op-digest="5
Nov 13 13:45:36 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+            </lrm_resource>
Nov 13 13:45:36 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+          </lrm_resources>
Nov 13 13:45:36 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+        </lrm>
Nov 13 13:45:36 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+      </node_state>
Nov 13 13:45:36 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+    </status>
Nov 13 13:45:36 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+  </cib>
Nov 13 13:45:36 [464] vm3        cib: ( callbacks.c:761   )    info: cib_process_request: 	Completed cib_apply_diff operation for section status: OK (rc=0, origin=vm1/crmd/20, version=0.8.15)
Nov 13 13:45:38 [465] vm3 stonith-ng: ( cib_utils.c:167   )   debug: Config update: 	Diff: --- 0.8.15
Nov 13 13:45:38 [465] vm3 stonith-ng: ( cib_utils.c:169   )   debug: Config update: 	Diff: +++ 0.8.16 d2057dbbd6d1a45d7f5bc3432ef649f3
Nov 13 13:45:38 [465] vm3 stonith-ng: (       xml.c:1496  )   debug: Config update: 	-  <cib num_updates="15">
Nov 13 13:45:38 [465] vm3 stonith-ng: (       xml.c:1496  )   debug: Config update: 	-    <status>
Nov 13 13:45:38 [465] vm3 stonith-ng: (       xml.c:1496  )   debug: Config update: 	-      <node_state id="3232261517">
Nov 13 13:45:38 [465] vm3 stonith-ng: (       xml.c:1496  )   debug: Config update: 	-        <lrm id="3232261517">
Nov 13 13:45:38 [465] vm3 stonith-ng: (       xml.c:1496  )   debug: Config update: 	-          <lrm_resources>
Nov 13 13:45:38 [465] vm3 stonith-ng: (       xml.c:1496  )   debug: Config update: 	-            <lrm_resource id="F1">
Nov 13 13:45:38 [465] vm3 stonith-ng: (       xml.c:1496  )   debug: Config update: 	--             <lrm_rsc_op operation_key="F1_monitor_0" operation="monitor" transition-key="4:1:7:154fb289-24e8-407e-9a03-69a510480b60" transition-magic="0:7;4:1:7:154fb289-24e8-407e-9a03-69a510480b60" call-id="5" rc-code="7" last-run="1384317935" last-rc-change="1384317935" exec-time="23" id="F1_last_0"/>
Nov 13 13:45:38 [465] vm3 stonith-ng: (       xml.c:1496  )   debug: Config update: 	-            </lrm_resource>
Nov 13 13:45:38 [465] vm3 stonith-ng: (       xml.c:1496  )   debug: Config update: 	-          </lrm_resources>
Nov 13 13:45:38 [465] vm3 stonith-ng: (       xml.c:1496  )   debug: Config update: 	-        </lrm>
Nov 13 13:45:38 [465] vm3 stonith-ng: (       xml.c:1496  )   debug: Config update: 	-      </node_state>
Nov 13 13:45:38 [465] vm3 stonith-ng: (       xml.c:1496  )   debug: Config update: 	-    </status>
Nov 13 13:45:38 [465] vm3 stonith-ng: (       xml.c:1496  )   debug: Config update: 	-  </cib>
Nov 13 13:45:38 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+  <cib epoch="8" num_updates="16" admin_epoch="0" validate-with="pacemaker-1.2" crm_feature_set="3.0.8" cib-last-written="Wed Nov 13 13:45:33 2013" update-origin="vm1" update-client="crmd" have-quorum="1" dc-uuid="3232261517">
Nov 13 13:45:38 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+    <status>
Nov 13 13:45:38 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+      <node_state id="3232261517" uname="vm1" in_ccm="true" crmd="online" crm-debug-origin="do_update_resource" join="member" expected="member">
Nov 13 13:45:38 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+        <lrm id="3232261517">
Nov 13 13:45:38 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+          <lrm_resources>
Nov 13 13:45:38 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+            <lrm_resource id="F1" type="external/libvirt" class="stonith">
Nov 13 13:45:38 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++             <lrm_rsc_op id="F1_last_0" operation_key="F1_start_0" operation="start" crm-debug-origin="do_update_resource" crm_feature_set="3.0.8" transition-key="12:1:0:154fb289-24e8-407e-9a03-69a510480b60" transition-magic="0:0;12:1:0:154fb289-24e8-407e-9a03-69a510480b60" call-id="10" rc-code="0" op-status="0" interval="0" last-run="1384317936" last-rc-change="1384317936" exec-time="2305" queue-time="0" op-digest="2886
Nov 13 13:45:38 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+            </lrm_resource>
Nov 13 13:45:38 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+          </lrm_resources>
Nov 13 13:45:38 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+        </lrm>
Nov 13 13:45:38 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+      </node_state>
Nov 13 13:45:38 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+    </status>
Nov 13 13:45:38 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+  </cib>
Nov 13 13:45:38 [464] vm3        cib: ( callbacks.c:761   )    info: cib_process_request: 	Completed cib_apply_diff operation for section status: OK (rc=0, origin=vm1/crmd/63, version=0.8.16)
Nov 13 13:45:39 [465] vm3 stonith-ng: ( cib_utils.c:167   )   debug: Config update: 	Diff: --- 0.8.16
Nov 13 13:45:39 [465] vm3 stonith-ng: ( cib_utils.c:169   )   debug: Config update: 	Diff: +++ 0.8.17 a97e80b9595cae69da19fce0899b09d9
Nov 13 13:45:39 [465] vm3 stonith-ng: (       xml.c:1496  )   debug: Config update: 	-- <cib num_updates="16"/>
Nov 13 13:45:39 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+  <cib epoch="8" num_updates="17" admin_epoch="0" validate-with="pacemaker-1.2" crm_feature_set="3.0.8" cib-last-written="Wed Nov 13 13:45:33 2013" update-origin="vm1" update-client="crmd" have-quorum="1" dc-uuid="3232261517">
Nov 13 13:45:39 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+    <status>
Nov 13 13:45:39 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+      <node_state id="3232261517" uname="vm1" in_ccm="true" crmd="online" crm-debug-origin="do_update_resource" join="member" expected="member">
Nov 13 13:45:39 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+        <lrm id="3232261517">
Nov 13 13:45:39 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+          <lrm_resources>
Nov 13 13:45:39 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+            <lrm_resource id="F1" type="external/libvirt" class="stonith">
Nov 13 13:45:39 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++             <lrm_rsc_op id="F1_monitor_3600000" operation_key="F1_monitor_3600000" operation="monitor" crm-debug-origin="do_update_resource" crm_feature_set="3.0.8" transition-key="13:1:0:154fb289-24e8-407e-9a03-69a510480b60" transition-magic="0:0;13:1:0:154fb289-24e8-407e-9a03-69a510480b60" call-id="11" rc-code="0" op-status="0" interval="3600000" last-rc-change="1384317938" exec-time="1312" queue-time="0" op-digest="6
Nov 13 13:45:39 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+            </lrm_resource>
Nov 13 13:45:39 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+          </lrm_resources>
Nov 13 13:45:39 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+        </lrm>
Nov 13 13:45:39 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+      </node_state>
Nov 13 13:45:39 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+    </status>
Nov 13 13:45:39 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+  </cib>
Nov 13 13:45:39 [464] vm3        cib: ( callbacks.c:761   )    info: cib_process_request: 	Completed cib_apply_diff operation for section status: OK (rc=0, origin=vm1/crmd/64, version=0.8.17)
Nov 13 13:45:46 [466] vm3       lrmd: (services_lin:217   )   debug: recurring_action_timer: 	Scheduling another invokation of pDummy_monitor_10000
Dummy(pDummy)[552]:	2013/11/13_13:45:46 DEBUG: pDummy monitor : 0
Nov 13 13:45:46 [466] vm3       lrmd: (services_lin:301   )   debug: operation_finished: 	pDummy_monitor_10000:552 - exited with rc=0
Nov 13 13:45:46 [466] vm3       lrmd: (services_lin:306   )   debug: operation_finished: 	pDummy_monitor_10000:552:stderr [ -- empty -- ]
Nov 13 13:45:46 [466] vm3       lrmd: (services_lin:310   )   debug: operation_finished: 	pDummy_monitor_10000:552:stdout [ -- empty -- ]
Nov 13 13:45:46 [466] vm3       lrmd: (      lrmd.c:104   )   debug: log_finished: 	finished - rsc:pDummy action:monitor call_id:11 pid:552 exit-code:0 exec-time:0ms queue-time:0ms
Nov 13 13:45:52 [469] vm3       crmd: (  throttle.c:256   )   debug: throttle_cib_load: 	Current CIB load from 10 ticks in 30s is 0.003333 (@100 tps)
Nov 13 13:45:52 [469] vm3       crmd: (  throttle.c:298   )   debug: throttle_load_avg: 	Current load is 0.000000 (full: 0.00 0.00 0.00 1/108 560)
Nov 13 13:45:52 [469] vm3       crmd: (  throttle.c:378   )   debug: throttle_io_load: 	Current IO load is 0.000000
Nov 13 13:45:56 [466] vm3       lrmd: (services_lin:217   )   debug: recurring_action_timer: 	Scheduling another invokation of pDummy_monitor_10000
Dummy(pDummy)[561]:	2013/11/13_13:45:56 DEBUG: pDummy monitor : 0
Nov 13 13:45:56 [466] vm3       lrmd: (services_lin:301   )   debug: operation_finished: 	pDummy_monitor_10000:561 - exited with rc=0
Nov 13 13:45:56 [466] vm3       lrmd: (services_lin:306   )   debug: operation_finished: 	pDummy_monitor_10000:561:stderr [ -- empty -- ]
Nov 13 13:45:56 [466] vm3       lrmd: (services_lin:310   )   debug: operation_finished: 	pDummy_monitor_10000:561:stdout [ -- empty -- ]
Nov 13 13:45:56 [466] vm3       lrmd: (      lrmd.c:104   )   debug: log_finished: 	finished - rsc:pDummy action:monitor call_id:11 pid:561 exit-code:0 exec-time:0ms queue-time:0ms
Nov 13 13:46:06 [466] vm3       lrmd: (services_lin:217   )   debug: recurring_action_timer: 	Scheduling another invokation of pDummy_monitor_10000
Dummy(pDummy)[569]:	2013/11/13_13:46:06 DEBUG: pDummy monitor : 0
Nov 13 13:46:06 [466] vm3       lrmd: (services_lin:301   )   debug: operation_finished: 	pDummy_monitor_10000:569 - exited with rc=0
Nov 13 13:46:06 [466] vm3       lrmd: (services_lin:306   )   debug: operation_finished: 	pDummy_monitor_10000:569:stderr [ -- empty -- ]
Nov 13 13:46:06 [466] vm3       lrmd: (services_lin:310   )   debug: operation_finished: 	pDummy_monitor_10000:569:stdout [ -- empty -- ]
Nov 13 13:46:06 [466] vm3       lrmd: (      lrmd.c:104   )   debug: log_finished: 	finished - rsc:pDummy action:monitor call_id:11 pid:569 exit-code:0 exec-time:0ms queue-time:0ms
Nov 13 13:46:16 [466] vm3       lrmd: (services_lin:217   )   debug: recurring_action_timer: 	Scheduling another invokation of pDummy_monitor_10000
Dummy(pDummy)[577]:	2013/11/13_13:46:16 DEBUG: pDummy monitor : 0
Nov 13 13:46:16 [466] vm3       lrmd: (services_lin:301   )   debug: operation_finished: 	pDummy_monitor_10000:577 - exited with rc=0
Nov 13 13:46:16 [466] vm3       lrmd: (services_lin:306   )   debug: operation_finished: 	pDummy_monitor_10000:577:stderr [ -- empty -- ]
Nov 13 13:46:16 [466] vm3       lrmd: (services_lin:310   )   debug: operation_finished: 	pDummy_monitor_10000:577:stdout [ -- empty -- ]
Nov 13 13:46:16 [466] vm3       lrmd: (      lrmd.c:104   )   debug: log_finished: 	finished - rsc:pDummy action:monitor call_id:11 pid:577 exit-code:0 exec-time:0ms queue-time:0ms
Nov 13 13:46:22 [469] vm3       crmd: (  throttle.c:256   )   debug: throttle_cib_load: 	Current CIB load from 0 ticks in 30s is 0.000000 (@100 tps)
Nov 13 13:46:22 [469] vm3       crmd: (  throttle.c:298   )   debug: throttle_load_avg: 	Current load is 0.000000 (full: 0.00 0.00 0.00 1/108 585)
Nov 13 13:46:22 [469] vm3       crmd: (  throttle.c:378   )   debug: throttle_io_load: 	Current IO load is 0.000000
Nov 13 13:46:26 [466] vm3       lrmd: (services_lin:217   )   debug: recurring_action_timer: 	Scheduling another invokation of pDummy_monitor_10000
Dummy(pDummy)[586]:	2013/11/13_13:46:26 DEBUG: pDummy monitor : 0
Nov 13 13:46:26 [466] vm3       lrmd: (services_lin:301   )   debug: operation_finished: 	pDummy_monitor_10000:586 - exited with rc=0
Nov 13 13:46:26 [466] vm3       lrmd: (services_lin:306   )   debug: operation_finished: 	pDummy_monitor_10000:586:stderr [ -- empty -- ]
Nov 13 13:46:26 [466] vm3       lrmd: (services_lin:310   )   debug: operation_finished: 	pDummy_monitor_10000:586:stdout [ -- empty -- ]
Nov 13 13:46:26 [466] vm3       lrmd: (      lrmd.c:104   )   debug: log_finished: 	finished - rsc:pDummy action:monitor call_id:11 pid:586 exit-code:0 exec-time:0ms queue-time:0ms
Nov 13 13:46:36 [466] vm3       lrmd: (services_lin:217   )   debug: recurring_action_timer: 	Scheduling another invokation of pDummy_monitor_10000
Dummy(pDummy)[600]:	2013/11/13_13:46:36 DEBUG: pDummy monitor : 0
Nov 13 13:46:36 [466] vm3       lrmd: (services_lin:301   )   debug: operation_finished: 	pDummy_monitor_10000:600 - exited with rc=0
Nov 13 13:46:36 [466] vm3       lrmd: (services_lin:306   )   debug: operation_finished: 	pDummy_monitor_10000:600:stderr [ -- empty -- ]
Nov 13 13:46:36 [466] vm3       lrmd: (services_lin:310   )   debug: operation_finished: 	pDummy_monitor_10000:600:stdout [ -- empty -- ]
Nov 13 13:46:36 [466] vm3       lrmd: (      lrmd.c:104   )   debug: log_finished: 	finished - rsc:pDummy action:monitor call_id:11 pid:600 exit-code:0 exec-time:0ms queue-time:0ms
Nov 13 13:46:46 [466] vm3       lrmd: (services_lin:217   )   debug: recurring_action_timer: 	Scheduling another invokation of pDummy_monitor_10000
Dummy(pDummy)[608]:	2013/11/13_13:46:46 DEBUG: pDummy monitor : 0
Nov 13 13:46:46 [466] vm3       lrmd: (services_lin:301   )   debug: operation_finished: 	pDummy_monitor_10000:608 - exited with rc=0
Nov 13 13:46:46 [466] vm3       lrmd: (services_lin:306   )   debug: operation_finished: 	pDummy_monitor_10000:608:stderr [ -- empty -- ]
Nov 13 13:46:46 [466] vm3       lrmd: (services_lin:310   )   debug: operation_finished: 	pDummy_monitor_10000:608:stdout [ -- empty -- ]
Nov 13 13:46:46 [466] vm3       lrmd: (      lrmd.c:104   )   debug: log_finished: 	finished - rsc:pDummy action:monitor call_id:11 pid:608 exit-code:0 exec-time:0ms queue-time:0ms
Nov 13 13:46:52 [469] vm3       crmd: (  throttle.c:256   )   debug: throttle_cib_load: 	Current CIB load from 0 ticks in 30s is 0.000000 (@100 tps)
Nov 13 13:46:52 [469] vm3       crmd: (  throttle.c:298   )   debug: throttle_load_avg: 	Current load is 0.000000 (full: 0.00 0.00 0.00 1/108 615)
Nov 13 13:46:52 [469] vm3       crmd: (  throttle.c:378   )   debug: throttle_io_load: 	Current IO load is 0.000000
Nov 13 13:46:56 [466] vm3       lrmd: (services_lin:217   )   debug: recurring_action_timer: 	Scheduling another invokation of pDummy_monitor_10000
Dummy(pDummy)[616]:	2013/11/13_13:46:56 DEBUG: pDummy monitor : 0
Nov 13 13:46:56 [466] vm3       lrmd: (services_lin:301   )   debug: operation_finished: 	pDummy_monitor_10000:616 - exited with rc=0
Nov 13 13:46:56 [466] vm3       lrmd: (services_lin:306   )   debug: operation_finished: 	pDummy_monitor_10000:616:stderr [ -- empty -- ]
Nov 13 13:46:56 [466] vm3       lrmd: (services_lin:310   )   debug: operation_finished: 	pDummy_monitor_10000:616:stdout [ -- empty -- ]
Nov 13 13:46:56 [466] vm3       lrmd: (      lrmd.c:104   )   debug: log_finished: 	finished - rsc:pDummy action:monitor call_id:11 pid:616 exit-code:0 exec-time:0ms queue-time:0ms
Nov 13 13:47:06 [466] vm3       lrmd: (services_lin:217   )   debug: recurring_action_timer: 	Scheduling another invokation of pDummy_monitor_10000
Dummy(pDummy)[661]:	2013/11/13_13:47:06 DEBUG: pDummy monitor : 7
Nov 13 13:47:06 [466] vm3       lrmd: (services_lin:301   )   debug: operation_finished: 	pDummy_monitor_10000:661 - exited with rc=7
Nov 13 13:47:06 [466] vm3       lrmd: (services_lin:306   )   debug: operation_finished: 	pDummy_monitor_10000:661:stderr [ -- empty -- ]
Nov 13 13:47:06 [466] vm3       lrmd: (services_lin:310   )   debug: operation_finished: 	pDummy_monitor_10000:661:stdout [ -- empty -- ]
Nov 13 13:47:06 [466] vm3       lrmd: (      lrmd.c:104   )   debug: log_finished: 	finished - rsc:pDummy action:monitor call_id:11 pid:661 exit-code:7 exec-time:0ms queue-time:0ms
Nov 13 13:47:06 [466] vm3       lrmd: (      main.c:196   )   trace: lrmd_server_send_notify: 	sending notify to client (d1a560c5-bbbd-475e-a3cb-6fca0227063d)
Nov 13 13:47:06 [469] vm3       crmd: (     utils.c:2104  )   debug: create_operation_update: 	do_update_resource: Updating resource pDummy after monitor op complete (interval=10000)
Nov 13 13:47:06 [469] vm3       crmd: (       lrm.c:2101  )  notice: process_lrm_event: 	LRM operation pDummy_monitor_10000 (call=11, rc=7, cib-update=21, confirmed=false) not running
Nov 13 13:47:06 [469] vm3       crmd: (       lrm.c:122   )   debug: update_history_cache: 	Updating history for 'pDummy' with monitor op
Nov 13 13:47:06 [464] vm3        cib: ( callbacks.c:688   )    info: cib_process_request: 	Forwarding cib_modify operation for section status to master (origin=local/crmd/21)
Nov 13 13:47:06 [467] vm3      attrd: (  commands.c:146   )   trace: create_attribute: 	Created attribute fail-count-pDummy with no delay
Nov 13 13:47:06 [467] vm3      attrd: (  commands.c:382   )   trace: attrd_peer_update: 	Setting fail-count-pDummy[vm3] to 1 from vm1
Nov 13 13:47:06 [467] vm3      attrd: (  commands.c:466   )   trace: write_or_elect_attribute: 	vm2 will write out fail-count-pDummy, we are in state 2
Nov 13 13:47:06 [467] vm3      attrd: (  commands.c:146   )   trace: create_attribute: 	Created attribute last-failure-pDummy with no delay
Nov 13 13:47:06 [467] vm3      attrd: (  commands.c:382   )   trace: attrd_peer_update: 	Setting last-failure-pDummy[vm3] to 1384318026 from vm1
Nov 13 13:47:06 [467] vm3      attrd: (  commands.c:466   )   trace: write_or_elect_attribute: 	vm2 will write out last-failure-pDummy, we are in state 2
Nov 13 13:47:06 [465] vm3 stonith-ng: ( cib_utils.c:167   )   debug: Config update: 	Diff: --- 0.8.17
Nov 13 13:47:06 [465] vm3 stonith-ng: ( cib_utils.c:169   )   debug: Config update: 	Diff: +++ 0.8.18 a51e1a3b91717c93641fe986a68f690b
Nov 13 13:47:06 [465] vm3 stonith-ng: (       xml.c:1496  )   debug: Config update: 	-- <cib num_updates="17"/>
Nov 13 13:47:06 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+  <cib epoch="8" num_updates="18" admin_epoch="0" validate-with="pacemaker-1.2" crm_feature_set="3.0.8" cib-last-written="Wed Nov 13 13:45:33 2013" update-origin="vm1" update-client="crmd" have-quorum="1" dc-uuid="3232261517">
Nov 13 13:47:06 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+    <status>
Nov 13 13:47:06 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+      <node_state id="3232261519" uname="vm3" in_ccm="true" crmd="online" crm-debug-origin="do_update_resource" join="member" expected="member">
Nov 13 13:47:06 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+        <lrm id="3232261519">
Nov 13 13:47:06 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+          <lrm_resources>
Nov 13 13:47:06 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+            <lrm_resource id="pDummy" type="Dummy" class="ocf" provider="pacemaker">
Nov 13 13:47:06 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++             <lrm_rsc_op id="pDummy_last_failure_0" operation_key="pDummy_monitor_10000" operation="monitor" crm-debug-origin="do_update_resource" crm_feature_set="3.0.8" transition-key="15:1:0:154fb289-24e8-407e-9a03-69a510480b60" transition-magic="0:7;15:1:0:154fb289-24e8-407e-9a03-69a510480b60" call-id="11" rc-code="7" op-status="0" interval="10000" last-rc-change="1384318026" exec-time="0" queue-time="0" op-digest="5
Nov 13 13:47:06 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+            </lrm_resource>
Nov 13 13:47:06 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+          </lrm_resources>
Nov 13 13:47:06 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+        </lrm>
Nov 13 13:47:06 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+      </node_state>
Nov 13 13:47:06 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+    </status>
Nov 13 13:47:06 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+  </cib>
Nov 13 13:47:06 [464] vm3        cib: ( callbacks.c:761   )    info: cib_process_request: 	Completed cib_apply_diff operation for section status: OK (rc=0, origin=vm1/crmd/21, version=0.8.18)
Nov 13 13:47:06 [465] vm3 stonith-ng: ( cib_utils.c:167   )   debug: Config update: 	Diff: --- 0.8.18
Nov 13 13:47:06 [465] vm3 stonith-ng: ( cib_utils.c:169   )   debug: Config update: 	Diff: +++ 0.8.19 5197717035f45f7bbfddb1efd89c2360
Nov 13 13:47:06 [465] vm3 stonith-ng: (       xml.c:1496  )   debug: Config update: 	-- <cib num_updates="18"/>
Nov 13 13:47:06 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+  <cib epoch="8" num_updates="19" admin_epoch="0" validate-with="pacemaker-1.2" crm_feature_set="3.0.8" cib-last-written="Wed Nov 13 13:45:33 2013" update-origin="vm1" update-client="crmd" have-quorum="1" dc-uuid="3232261517">
Nov 13 13:47:06 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+    <status>
Nov 13 13:47:06 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+      <node_state id="3232261519" uname="vm3" in_ccm="true" crmd="online" crm-debug-origin="do_update_resource" join="member" expected="member">
Nov 13 13:47:06 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+        <transient_attributes id="3232261519">
Nov 13 13:47:06 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+          <instance_attributes id="status-3232261519">
Nov 13 13:47:06 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++           <nvpair id="status-3232261519-fail-count-pDummy" name="fail-count-pDummy" value="1"/>
Nov 13 13:47:06 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+          </instance_attributes>
Nov 13 13:47:06 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+        </transient_attributes>
Nov 13 13:47:06 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+      </node_state>
Nov 13 13:47:06 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+    </status>
Nov 13 13:47:06 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+  </cib>
Nov 13 13:47:06 [464] vm3        cib: ( callbacks.c:761   )    info: cib_process_request: 	Completed cib_apply_diff operation for section status: OK (rc=0, origin=vm1/attrd/6, version=0.8.19)
Nov 13 13:47:06 [465] vm3 stonith-ng: ( cib_utils.c:167   )   debug: Config update: 	Diff: --- 0.8.19
Nov 13 13:47:06 [465] vm3 stonith-ng: ( cib_utils.c:169   )   debug: Config update: 	Diff: +++ 0.8.20 20d147ac83adce3d53784ce1a7e6304d
Nov 13 13:47:06 [465] vm3 stonith-ng: (       xml.c:1496  )   debug: Config update: 	-- <cib num_updates="19"/>
Nov 13 13:47:06 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+  <cib epoch="8" num_updates="20" admin_epoch="0" validate-with="pacemaker-1.2" crm_feature_set="3.0.8" cib-last-written="Wed Nov 13 13:45:33 2013" update-origin="vm1" update-client="crmd" have-quorum="1" dc-uuid="3232261517">
Nov 13 13:47:06 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+    <status>
Nov 13 13:47:06 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+      <node_state id="3232261519" uname="vm3" in_ccm="true" crmd="online" crm-debug-origin="do_update_resource" join="member" expected="member">
Nov 13 13:47:06 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+        <transient_attributes id="3232261519">
Nov 13 13:47:06 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+          <instance_attributes id="status-3232261519">
Nov 13 13:47:06 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	++           <nvpair id="status-3232261519-last-failure-pDummy" name="last-failure-pDummy" value="1384318026"/>
Nov 13 13:47:06 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+          </instance_attributes>
Nov 13 13:47:06 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+        </transient_attributes>
Nov 13 13:47:06 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+      </node_state>
Nov 13 13:47:06 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+    </status>
Nov 13 13:47:06 [465] vm3 stonith-ng: (       xml.c:1507  )   debug: Config update: 	+  </cib>
Nov 13 13:47:06 [464] vm3        cib: ( callbacks.c:761   )    info: cib_process_request: 	Completed cib_apply_diff operation for section status: OK (rc=0, origin=vm1/attrd/7, version=0.8.20)
Nov 13 13:47:08 [465] vm3 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]   <stonith_command __name__="stonith_command" t="stonith-ng" st_async_id="696fb2c3-e11a-4124-ba9b-bafc9ab28426" st_op="st_query" st_callid="2" st_callopt="0" st_remote_op="696fb2c3-e11a-4124-ba9b-bafc9ab28426" st_target="vm3" st_device_action="reboot" st_origin="vm1" st_clientid="e6cedc3f-e233-4a66-9291-ce359bd76aad" st_clientname="crmd.15883" st_timeout="60" src="vm1"/>
Nov 13 13:47:08 [465] vm3 stonith-ng: (  commands.c:2049  )   debug: stonith_command: 	Processing st_query 0 from vm1 (               0)
Nov 13 13:47:08 [465] vm3 stonith-ng: (    remote.c:593   )   trace: create_remote_stonith_op: 	Created 696fb2c3-e11a-4124-ba9b-bafc9ab28426
Nov 13 13:47:08 [465] vm3 stonith-ng: (    remote.c:623   )   trace: create_remote_stonith_op: 	Recorded new stonith op: 696fb2c3-e11a-4124-ba9b-bafc9ab28426 - reboot of vm3 for crmd.15883
Nov 13 13:47:08 [465] vm3 stonith-ng: (    remote.c:474   )   trace: merge_duplicates: 	Must be for different clients: crmd.15883
Nov 13 13:47:08 [465] vm3 stonith-ng: (  commands.c:1296  )   debug: stonith_query: 	Query   <stonith_command __name__="stonith_command" t="stonith-ng" st_async_id="696fb2c3-e11a-4124-ba9b-bafc9ab28426" st_op="st_query" st_callid="2" st_callopt="0" st_remote_op="696fb2c3-e11a-4124-ba9b-bafc9ab28426" st_target="vm3" st_device_action="reboot" st_origin="vm1" st_clientid="e6cedc3f-e233-4a66-9291-ce359bd76aad" st_clientname="crmd.15883" st_timeout="60" src="vm1"/>
Nov 13 13:47:08 [465] vm3 stonith-ng: (  commands.c:1650  )   trace: stonith_construct_reply: 	Creating a basic reply
Nov 13 13:47:08 [465] vm3 stonith-ng: (  commands.c:1254  )   debug: stonith_query_capable_device_cb: 	Found 0 matching devices for 'vm3'
Nov 13 13:47:08 [465] vm3 stonith-ng: (  commands.c:1260  )   trace: stonith_query_capable_device_cb: 	Attaching query list output
Nov 13 13:47:08 [465] vm3 stonith-ng: (  commands.c:2063  )    info: stonith_command: 	Processed st_query from vm1: OK (0)
Nov 13 13:47:12 [465] vm3 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]   <st-reply t="st_notify" subt="broadcast" st_op="st_notify" count="1" src="vm1">
Nov 13 13:47:12 [465] vm3 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]     <st_calldata>
Nov 13 13:47:12 [465] vm3 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]       <st_notify_fence state="4" st_rc="-201" st_target="vm3" st_device_action="reboot" st_delegate="vm2" st_remote_op="696fb2c3-e11a-4124-ba9b-bafc9ab28426" st_origin="vm1" st_clientid="e6cedc3f-e233-4a66-9291-ce359bd76aad" st_clientname="crmd.15883"/>
Nov 13 13:47:12 [465] vm3 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]     </st_calldata>
Nov 13 13:47:12 [465] vm3 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]   </st-reply>
Nov 13 13:47:12 [465] vm3 stonith-ng: (  commands.c:2049  )   debug: stonith_command: 	Processing st_notify reply 0 from vm1 (               0)
Nov 13 13:47:12 [465] vm3 stonith-ng: (    remote.c:1281  )   debug: process_remote_stonith_exec: 	Marking call to reboot for vm3 on behalf of crmd.15883@696fb2c3-e11a-4124-ba9b-bafc9ab28426.vm1: Generic Pacemaker error (-201)
Nov 13 13:47:12 [465] vm3 stonith-ng: (    remote.c:297   )  notice: remote_op_done: 	Operation reboot of vm3 by vm1 for crmd.15883@vm1.696fb2c3: Generic Pacemaker error
Nov 13 13:47:12 [465] vm3 stonith-ng: (  commands.c:1650  )   trace: stonith_construct_reply: 	Creating a basic reply
Nov 13 13:47:12 [465] vm3 stonith-ng: (  commands.c:1666  )   trace: stonith_construct_reply: 	Attaching reply output
Nov 13 13:47:12 [465] vm3 stonith-ng: (      main.c:241   )   trace: do_local_reply: 	Sending response
Nov 13 13:47:12 [465] vm3 stonith-ng: (      main.c:244   )   trace: do_local_reply: 	Sending callback to request originator
Nov 13 13:47:12 [465] vm3 stonith-ng: (      main.c:247   )   trace: do_local_reply: 	No client to sent the response to.  F_STONITH_CLIENTID not set.
Nov 13 13:47:12 [465] vm3 stonith-ng: (      main.c:369   )   trace: do_stonith_notify: 	Notifying clients
Nov 13 13:47:12 [469] vm3       crmd: (  te_utils.c:167   )  notice: tengine_stonith_notify: 	Peer vm3 was not terminated (reboot) by vm1 for vm1: Generic Pacemaker error (ref=696fb2c3-e11a-4124-ba9b-bafc9ab28426) by client crmd.15883
Nov 13 13:47:12 [465] vm3 stonith-ng: (      main.c:318   )   trace: stonith_notify_client: 	Sent st_notify_fence notification to client crmd.469.f2a5be
Nov 13 13:47:12 [465] vm3 stonith-ng: (      main.c:372   )   trace: do_stonith_notify: 	Notify complete
Nov 13 13:47:12 [465] vm3 stonith-ng: (  commands.c:2063  )    info: stonith_command: 	Processed st_notify reply from vm1: OK (0)
Nov 13 13:47:14 [465] vm3 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]   <stonith_command __name__="stonith_command" t="stonith-ng" st_async_id="431c7488-013e-4900-bde7-a3ce154b35a3" st_op="st_query" st_callid="3" st_callopt="0" st_remote_op="431c7488-013e-4900-bde7-a3ce154b35a3" st_target="vm3" st_device_action="reboot" st_origin="vm1" st_clientid="e6cedc3f-e233-4a66-9291-ce359bd76aad" st_clientname="crmd.15883" st_timeout="60" src="vm1"/>
Nov 13 13:47:14 [465] vm3 stonith-ng: (  commands.c:2049  )   debug: stonith_command: 	Processing st_query 0 from vm1 (               0)
Nov 13 13:47:14 [465] vm3 stonith-ng: (    remote.c:593   )   trace: create_remote_stonith_op: 	Created 431c7488-013e-4900-bde7-a3ce154b35a3
Nov 13 13:47:14 [465] vm3 stonith-ng: (    remote.c:623   )   trace: create_remote_stonith_op: 	Recorded new stonith op: 431c7488-013e-4900-bde7-a3ce154b35a3 - reboot of vm3 for crmd.15883
Nov 13 13:47:14 [465] vm3 stonith-ng: (    remote.c:474   )   trace: merge_duplicates: 	Must be for different clients: crmd.15883
Nov 13 13:47:14 [465] vm3 stonith-ng: (  commands.c:1296  )   debug: stonith_query: 	Query   <stonith_command __name__="stonith_command" t="stonith-ng" st_async_id="431c7488-013e-4900-bde7-a3ce154b35a3" st_op="st_query" st_callid="3" st_callopt="0" st_remote_op="431c7488-013e-4900-bde7-a3ce154b35a3" st_target="vm3" st_device_action="reboot" st_origin="vm1" st_clientid="e6cedc3f-e233-4a66-9291-ce359bd76aad" st_clientname="crmd.15883" st_timeout="60" src="vm1"/>
Nov 13 13:47:14 [465] vm3 stonith-ng: (  commands.c:1650  )   trace: stonith_construct_reply: 	Creating a basic reply
Nov 13 13:47:14 [465] vm3 stonith-ng: (  commands.c:1254  )   debug: stonith_query_capable_device_cb: 	Found 0 matching devices for 'vm3'
Nov 13 13:47:14 [465] vm3 stonith-ng: (  commands.c:1260  )   trace: stonith_query_capable_device_cb: 	Attaching query list output
Nov 13 13:47:14 [465] vm3 stonith-ng: (  commands.c:2063  )    info: stonith_command: 	Processed st_query from vm1: OK (0)
Nov 13 13:47:16 [466] vm3       lrmd: (services_lin:217   )   debug: recurring_action_timer: 	Scheduling another invokation of pDummy_monitor_10000
Dummy(pDummy)[669]:	2013/11/13_13:47:16 DEBUG: pDummy monitor : 7
Nov 13 13:47:16 [466] vm3       lrmd: (services_lin:301   )   debug: operation_finished: 	pDummy_monitor_10000:669 - exited with rc=7
Nov 13 13:47:16 [466] vm3       lrmd: (services_lin:306   )   debug: operation_finished: 	pDummy_monitor_10000:669:stderr [ -- empty -- ]
Nov 13 13:47:16 [466] vm3       lrmd: (services_lin:310   )   debug: operation_finished: 	pDummy_monitor_10000:669:stdout [ -- empty -- ]
Nov 13 13:47:16 [466] vm3       lrmd: (      lrmd.c:104   )   debug: log_finished: 	finished - rsc:pDummy action:monitor call_id:11 pid:669 exit-code:7 exec-time:0ms queue-time:0ms
Nov 13 13:47:17 [465] vm3 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]   <st-reply t="st_notify" subt="broadcast" st_op="st_notify" count="2" src="vm1">
Nov 13 13:47:17 [465] vm3 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]     <st_calldata>
Nov 13 13:47:17 [465] vm3 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]       <st_notify_fence state="4" st_rc="-201" st_target="vm3" st_device_action="reboot" st_delegate="vm2" st_remote_op="431c7488-013e-4900-bde7-a3ce154b35a3" st_origin="vm1" st_clientid="e6cedc3f-e233-4a66-9291-ce359bd76aad" st_clientname="crmd.15883"/>
Nov 13 13:47:17 [465] vm3 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]     </st_calldata>
Nov 13 13:47:17 [465] vm3 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]   </st-reply>
Nov 13 13:47:17 [465] vm3 stonith-ng: (  commands.c:2049  )   debug: stonith_command: 	Processing st_notify reply 0 from vm1 (               0)
Nov 13 13:47:17 [465] vm3 stonith-ng: (    remote.c:1281  )   debug: process_remote_stonith_exec: 	Marking call to reboot for vm3 on behalf of crmd.15883@431c7488-013e-4900-bde7-a3ce154b35a3.vm1: Generic Pacemaker error (-201)
Nov 13 13:47:17 [465] vm3 stonith-ng: (    remote.c:297   )  notice: remote_op_done: 	Operation reboot of vm3 by vm1 for crmd.15883@vm1.431c7488: Generic Pacemaker error
Nov 13 13:47:17 [465] vm3 stonith-ng: (  commands.c:1650  )   trace: stonith_construct_reply: 	Creating a basic reply
Nov 13 13:47:17 [465] vm3 stonith-ng: (  commands.c:1666  )   trace: stonith_construct_reply: 	Attaching reply output
Nov 13 13:47:17 [465] vm3 stonith-ng: (      main.c:241   )   trace: do_local_reply: 	Sending response
Nov 13 13:47:17 [465] vm3 stonith-ng: (      main.c:244   )   trace: do_local_reply: 	Sending callback to request originator
Nov 13 13:47:17 [465] vm3 stonith-ng: (      main.c:247   )   trace: do_local_reply: 	No client to sent the response to.  F_STONITH_CLIENTID not set.
Nov 13 13:47:17 [465] vm3 stonith-ng: (      main.c:369   )   trace: do_stonith_notify: 	Notifying clients
Nov 13 13:47:17 [469] vm3       crmd: (  te_utils.c:167   )  notice: tengine_stonith_notify: 	Peer vm3 was not terminated (reboot) by vm1 for vm1: Generic Pacemaker error (ref=431c7488-013e-4900-bde7-a3ce154b35a3) by client crmd.15883
Nov 13 13:47:17 [465] vm3 stonith-ng: (      main.c:318   )   trace: stonith_notify_client: 	Sent st_notify_fence notification to client crmd.469.f2a5be
Nov 13 13:47:17 [465] vm3 stonith-ng: (      main.c:372   )   trace: do_stonith_notify: 	Notify complete
Nov 13 13:47:17 [465] vm3 stonith-ng: (  commands.c:2063  )    info: stonith_command: 	Processed st_notify reply from vm1: OK (0)
Nov 13 13:47:19 [465] vm3 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]   <stonith_command __name__="stonith_command" t="stonith-ng" st_async_id="682bdc12-35a4-431a-8773-4862cc8c39ef" st_op="st_query" st_callid="4" st_callopt="0" st_remote_op="682bdc12-35a4-431a-8773-4862cc8c39ef" st_target="vm3" st_device_action="reboot" st_origin="vm1" st_clientid="e6cedc3f-e233-4a66-9291-ce359bd76aad" st_clientname="crmd.15883" st_timeout="60" src="vm1"/>
Nov 13 13:47:19 [465] vm3 stonith-ng: (  commands.c:2049  )   debug: stonith_command: 	Processing st_query 0 from vm1 (               0)
Nov 13 13:47:19 [465] vm3 stonith-ng: (    remote.c:593   )   trace: create_remote_stonith_op: 	Created 682bdc12-35a4-431a-8773-4862cc8c39ef
Nov 13 13:47:19 [465] vm3 stonith-ng: (    remote.c:623   )   trace: create_remote_stonith_op: 	Recorded new stonith op: 682bdc12-35a4-431a-8773-4862cc8c39ef - reboot of vm3 for crmd.15883
Nov 13 13:47:19 [465] vm3 stonith-ng: (    remote.c:474   )   trace: merge_duplicates: 	Must be for different clients: crmd.15883
Nov 13 13:47:19 [465] vm3 stonith-ng: (  commands.c:1296  )   debug: stonith_query: 	Query   <stonith_command __name__="stonith_command" t="stonith-ng" st_async_id="682bdc12-35a4-431a-8773-4862cc8c39ef" st_op="st_query" st_callid="4" st_callopt="0" st_remote_op="682bdc12-35a4-431a-8773-4862cc8c39ef" st_target="vm3" st_device_action="reboot" st_origin="vm1" st_clientid="e6cedc3f-e233-4a66-9291-ce359bd76aad" st_clientname="crmd.15883" st_timeout="60" src="vm1"/>
Nov 13 13:47:19 [465] vm3 stonith-ng: (  commands.c:1650  )   trace: stonith_construct_reply: 	Creating a basic reply
Nov 13 13:47:19 [465] vm3 stonith-ng: (  commands.c:1254  )   debug: stonith_query_capable_device_cb: 	Found 0 matching devices for 'vm3'
Nov 13 13:47:19 [465] vm3 stonith-ng: (  commands.c:1260  )   trace: stonith_query_capable_device_cb: 	Attaching query list output
Nov 13 13:47:19 [465] vm3 stonith-ng: (  commands.c:2063  )    info: stonith_command: 	Processed st_query from vm1: OK (0)
Nov 13 13:47:22 [469] vm3       crmd: (  throttle.c:256   )   debug: throttle_cib_load: 	Current CIB load from 2 ticks in 30s is 0.000667 (@100 tps)
Nov 13 13:47:22 [469] vm3       crmd: (  throttle.c:298   )   debug: throttle_load_avg: 	Current load is 0.000000 (full: 0.00 0.00 0.00 1/108 676)
Nov 13 13:47:22 [469] vm3       crmd: (  throttle.c:378   )   debug: throttle_io_load: 	Current IO load is 0.000000
Nov 13 13:47:22 [465] vm3 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]   <st-reply t="st_notify" subt="broadcast" st_op="st_notify" count="3" src="vm1">
Nov 13 13:47:22 [465] vm3 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]     <st_calldata>
Nov 13 13:47:22 [465] vm3 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]       <st_notify_fence state="4" st_rc="-201" st_target="vm3" st_device_action="reboot" st_delegate="vm2" st_remote_op="682bdc12-35a4-431a-8773-4862cc8c39ef" st_origin="vm1" st_clientid="e6cedc3f-e233-4a66-9291-ce359bd76aad" st_clientname="crmd.15883"/>
Nov 13 13:47:22 [465] vm3 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]     </st_calldata>
Nov 13 13:47:22 [465] vm3 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]   </st-reply>
Nov 13 13:47:22 [465] vm3 stonith-ng: (  commands.c:2049  )   debug: stonith_command: 	Processing st_notify reply 0 from vm1 (               0)
Nov 13 13:47:22 [465] vm3 stonith-ng: (    remote.c:1281  )   debug: process_remote_stonith_exec: 	Marking call to reboot for vm3 on behalf of crmd.15883@682bdc12-35a4-431a-8773-4862cc8c39ef.vm1: Generic Pacemaker error (-201)
Nov 13 13:47:22 [465] vm3 stonith-ng: (    remote.c:297   )  notice: remote_op_done: 	Operation reboot of vm3 by vm1 for crmd.15883@vm1.682bdc12: Generic Pacemaker error
Nov 13 13:47:22 [465] vm3 stonith-ng: (  commands.c:1650  )   trace: stonith_construct_reply: 	Creating a basic reply
Nov 13 13:47:22 [465] vm3 stonith-ng: (  commands.c:1666  )   trace: stonith_construct_reply: 	Attaching reply output
Nov 13 13:47:22 [465] vm3 stonith-ng: (      main.c:241   )   trace: do_local_reply: 	Sending response
Nov 13 13:47:22 [465] vm3 stonith-ng: (      main.c:244   )   trace: do_local_reply: 	Sending callback to request originator
Nov 13 13:47:22 [465] vm3 stonith-ng: (      main.c:247   )   trace: do_local_reply: 	No client to sent the response to.  F_STONITH_CLIENTID not set.
Nov 13 13:47:22 [465] vm3 stonith-ng: (      main.c:369   )   trace: do_stonith_notify: 	Notifying clients
Nov 13 13:47:22 [469] vm3       crmd: (  te_utils.c:167   )  notice: tengine_stonith_notify: 	Peer vm3 was not terminated (reboot) by vm1 for vm1: Generic Pacemaker error (ref=682bdc12-35a4-431a-8773-4862cc8c39ef) by client crmd.15883
Nov 13 13:47:22 [465] vm3 stonith-ng: (      main.c:318   )   trace: stonith_notify_client: 	Sent st_notify_fence notification to client crmd.469.f2a5be
Nov 13 13:47:22 [465] vm3 stonith-ng: (      main.c:372   )   trace: do_stonith_notify: 	Notify complete
Nov 13 13:47:22 [465] vm3 stonith-ng: (  commands.c:2063  )    info: stonith_command: 	Processed st_notify reply from vm1: OK (0)
Nov 13 13:47:24 [465] vm3 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]   <stonith_command __name__="stonith_command" t="stonith-ng" st_async_id="d761e73f-f337-48cc-b2a1-5b2d722d2738" st_op="st_query" st_callid="5" st_callopt="0" st_remote_op="d761e73f-f337-48cc-b2a1-5b2d722d2738" st_target="vm3" st_device_action="reboot" st_origin="vm1" st_clientid="e6cedc3f-e233-4a66-9291-ce359bd76aad" st_clientname="crmd.15883" st_timeout="60" src="vm1"/>
Nov 13 13:47:24 [465] vm3 stonith-ng: (  commands.c:2049  )   debug: stonith_command: 	Processing st_query 0 from vm1 (               0)
Nov 13 13:47:24 [465] vm3 stonith-ng: (    remote.c:593   )   trace: create_remote_stonith_op: 	Created d761e73f-f337-48cc-b2a1-5b2d722d2738
Nov 13 13:47:24 [465] vm3 stonith-ng: (    remote.c:623   )   trace: create_remote_stonith_op: 	Recorded new stonith op: d761e73f-f337-48cc-b2a1-5b2d722d2738 - reboot of vm3 for crmd.15883
Nov 13 13:47:24 [465] vm3 stonith-ng: (    remote.c:474   )   trace: merge_duplicates: 	Must be for different clients: crmd.15883
Nov 13 13:47:24 [465] vm3 stonith-ng: (  commands.c:1296  )   debug: stonith_query: 	Query   <stonith_command __name__="stonith_command" t="stonith-ng" st_async_id="d761e73f-f337-48cc-b2a1-5b2d722d2738" st_op="st_query" st_callid="5" st_callopt="0" st_remote_op="d761e73f-f337-48cc-b2a1-5b2d722d2738" st_target="vm3" st_device_action="reboot" st_origin="vm1" st_clientid="e6cedc3f-e233-4a66-9291-ce359bd76aad" st_clientname="crmd.15883" st_timeout="60" src="vm1"/>
Nov 13 13:47:24 [465] vm3 stonith-ng: (  commands.c:1650  )   trace: stonith_construct_reply: 	Creating a basic reply
Nov 13 13:47:24 [465] vm3 stonith-ng: (  commands.c:1254  )   debug: stonith_query_capable_device_cb: 	Found 0 matching devices for 'vm3'
Nov 13 13:47:24 [465] vm3 stonith-ng: (  commands.c:1260  )   trace: stonith_query_capable_device_cb: 	Attaching query list output
Nov 13 13:47:24 [465] vm3 stonith-ng: (  commands.c:2063  )    info: stonith_command: 	Processed st_query from vm1: OK (0)
Nov 13 13:47:26 [466] vm3       lrmd: (services_lin:217   )   debug: recurring_action_timer: 	Scheduling another invokation of pDummy_monitor_10000
Dummy(pDummy)[677]:	2013/11/13_13:47:26 DEBUG: pDummy monitor : 7
Nov 13 13:47:26 [466] vm3       lrmd: (services_lin:301   )   debug: operation_finished: 	pDummy_monitor_10000:677 - exited with rc=7
Nov 13 13:47:26 [466] vm3       lrmd: (services_lin:306   )   debug: operation_finished: 	pDummy_monitor_10000:677:stderr [ -- empty -- ]
Nov 13 13:47:26 [466] vm3       lrmd: (services_lin:310   )   debug: operation_finished: 	pDummy_monitor_10000:677:stdout [ -- empty -- ]
Nov 13 13:47:26 [466] vm3       lrmd: (      lrmd.c:104   )   debug: log_finished: 	finished - rsc:pDummy action:monitor call_id:11 pid:677 exit-code:7 exec-time:0ms queue-time:0ms
Nov 13 13:47:27 [465] vm3 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]   <st-reply t="st_notify" subt="broadcast" st_op="st_notify" count="4" src="vm1">
Nov 13 13:47:27 [465] vm3 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]     <st_calldata>
Nov 13 13:47:27 [465] vm3 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]       <st_notify_fence state="4" st_rc="-201" st_target="vm3" st_device_action="reboot" st_delegate="vm2" st_remote_op="d761e73f-f337-48cc-b2a1-5b2d722d2738" st_origin="vm1" st_clientid="e6cedc3f-e233-4a66-9291-ce359bd76aad" st_clientname="crmd.15883"/>
Nov 13 13:47:27 [465] vm3 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]     </st_calldata>
Nov 13 13:47:27 [465] vm3 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]   </st-reply>
Nov 13 13:47:27 [465] vm3 stonith-ng: (  commands.c:2049  )   debug: stonith_command: 	Processing st_notify reply 0 from vm1 (               0)
Nov 13 13:47:27 [465] vm3 stonith-ng: (    remote.c:1281  )   debug: process_remote_stonith_exec: 	Marking call to reboot for vm3 on behalf of crmd.15883@d761e73f-f337-48cc-b2a1-5b2d722d2738.vm1: Generic Pacemaker error (-201)
Nov 13 13:47:27 [465] vm3 stonith-ng: (    remote.c:297   )  notice: remote_op_done: 	Operation reboot of vm3 by vm1 for crmd.15883@vm1.d761e73f: Generic Pacemaker error
Nov 13 13:47:27 [465] vm3 stonith-ng: (  commands.c:1650  )   trace: stonith_construct_reply: 	Creating a basic reply
Nov 13 13:47:27 [465] vm3 stonith-ng: (  commands.c:1666  )   trace: stonith_construct_reply: 	Attaching reply output
Nov 13 13:47:27 [465] vm3 stonith-ng: (      main.c:241   )   trace: do_local_reply: 	Sending response
Nov 13 13:47:27 [465] vm3 stonith-ng: (      main.c:244   )   trace: do_local_reply: 	Sending callback to request originator
Nov 13 13:47:27 [465] vm3 stonith-ng: (      main.c:247   )   trace: do_local_reply: 	No client to sent the response to.  F_STONITH_CLIENTID not set.
Nov 13 13:47:27 [465] vm3 stonith-ng: (      main.c:369   )   trace: do_stonith_notify: 	Notifying clients
Nov 13 13:47:27 [469] vm3       crmd: (  te_utils.c:167   )  notice: tengine_stonith_notify: 	Peer vm3 was not terminated (reboot) by vm1 for vm1: Generic Pacemaker error (ref=d761e73f-f337-48cc-b2a1-5b2d722d2738) by client crmd.15883
Nov 13 13:47:27 [465] vm3 stonith-ng: (      main.c:318   )   trace: stonith_notify_client: 	Sent st_notify_fence notification to client crmd.469.f2a5be
Nov 13 13:47:27 [465] vm3 stonith-ng: (      main.c:372   )   trace: do_stonith_notify: 	Notify complete
Nov 13 13:47:27 [465] vm3 stonith-ng: (  commands.c:2063  )    info: stonith_command: 	Processed st_notify reply from vm1: OK (0)
Nov 13 13:47:29 [465] vm3 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]   <stonith_command __name__="stonith_command" t="stonith-ng" st_async_id="11df91ab-fc81-43aa-941d-ffa1204df1c9" st_op="st_query" st_callid="6" st_callopt="0" st_remote_op="11df91ab-fc81-43aa-941d-ffa1204df1c9" st_target="vm3" st_device_action="reboot" st_origin="vm1" st_clientid="e6cedc3f-e233-4a66-9291-ce359bd76aad" st_clientname="crmd.15883" st_timeout="60" src="vm1"/>
Nov 13 13:47:29 [465] vm3 stonith-ng: (  commands.c:2049  )   debug: stonith_command: 	Processing st_query 0 from vm1 (               0)
Nov 13 13:47:29 [465] vm3 stonith-ng: (    remote.c:593   )   trace: create_remote_stonith_op: 	Created 11df91ab-fc81-43aa-941d-ffa1204df1c9
Nov 13 13:47:29 [465] vm3 stonith-ng: (    remote.c:623   )   trace: create_remote_stonith_op: 	Recorded new stonith op: 11df91ab-fc81-43aa-941d-ffa1204df1c9 - reboot of vm3 for crmd.15883
Nov 13 13:47:29 [465] vm3 stonith-ng: (    remote.c:474   )   trace: merge_duplicates: 	Must be for different clients: crmd.15883
Nov 13 13:47:29 [465] vm3 stonith-ng: (  commands.c:1296  )   debug: stonith_query: 	Query   <stonith_command __name__="stonith_command" t="stonith-ng" st_async_id="11df91ab-fc81-43aa-941d-ffa1204df1c9" st_op="st_query" st_callid="6" st_callopt="0" st_remote_op="11df91ab-fc81-43aa-941d-ffa1204df1c9" st_target="vm3" st_device_action="reboot" st_origin="vm1" st_clientid="e6cedc3f-e233-4a66-9291-ce359bd76aad" st_clientname="crmd.15883" st_timeout="60" src="vm1"/>
Nov 13 13:47:29 [465] vm3 stonith-ng: (  commands.c:1650  )   trace: stonith_construct_reply: 	Creating a basic reply
Nov 13 13:47:29 [465] vm3 stonith-ng: (  commands.c:1254  )   debug: stonith_query_capable_device_cb: 	Found 0 matching devices for 'vm3'
Nov 13 13:47:29 [465] vm3 stonith-ng: (  commands.c:1260  )   trace: stonith_query_capable_device_cb: 	Attaching query list output
Nov 13 13:47:29 [465] vm3 stonith-ng: (  commands.c:2063  )    info: stonith_command: 	Processed st_query from vm1: OK (0)
Nov 13 13:47:33 [465] vm3 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]   <st-reply t="st_notify" subt="broadcast" st_op="st_notify" count="5" src="vm1">
Nov 13 13:47:33 [465] vm3 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]     <st_calldata>
Nov 13 13:47:33 [465] vm3 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]       <st_notify_fence state="4" st_rc="-201" st_target="vm3" st_device_action="reboot" st_delegate="vm1" st_remote_op="11df91ab-fc81-43aa-941d-ffa1204df1c9" st_origin="vm1" st_clientid="e6cedc3f-e233-4a66-9291-ce359bd76aad" st_clientname="crmd.15883"/>
Nov 13 13:47:33 [465] vm3 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]     </st_calldata>
Nov 13 13:47:33 [465] vm3 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]   </st-reply>
Nov 13 13:47:33 [465] vm3 stonith-ng: (  commands.c:2049  )   debug: stonith_command: 	Processing st_notify reply 0 from vm1 (               0)
Nov 13 13:47:33 [465] vm3 stonith-ng: (    remote.c:1281  )   debug: process_remote_stonith_exec: 	Marking call to reboot for vm3 on behalf of crmd.15883@11df91ab-fc81-43aa-941d-ffa1204df1c9.vm1: Generic Pacemaker error (-201)
Nov 13 13:47:33 [465] vm3 stonith-ng: (    remote.c:297   )  notice: remote_op_done: 	Operation reboot of vm3 by vm1 for crmd.15883@vm1.11df91ab: Generic Pacemaker error
Nov 13 13:47:33 [465] vm3 stonith-ng: (  commands.c:1650  )   trace: stonith_construct_reply: 	Creating a basic reply
Nov 13 13:47:33 [465] vm3 stonith-ng: (  commands.c:1666  )   trace: stonith_construct_reply: 	Attaching reply output
Nov 13 13:47:33 [465] vm3 stonith-ng: (      main.c:241   )   trace: do_local_reply: 	Sending response
Nov 13 13:47:33 [465] vm3 stonith-ng: (      main.c:244   )   trace: do_local_reply: 	Sending callback to request originator
Nov 13 13:47:33 [465] vm3 stonith-ng: (      main.c:247   )   trace: do_local_reply: 	No client to sent the response to.  F_STONITH_CLIENTID not set.
Nov 13 13:47:33 [465] vm3 stonith-ng: (      main.c:369   )   trace: do_stonith_notify: 	Notifying clients
Nov 13 13:47:33 [469] vm3       crmd: (  te_utils.c:167   )  notice: tengine_stonith_notify: 	Peer vm3 was not terminated (reboot) by vm1 for vm1: Generic Pacemaker error (ref=11df91ab-fc81-43aa-941d-ffa1204df1c9) by client crmd.15883
Nov 13 13:47:33 [465] vm3 stonith-ng: (      main.c:318   )   trace: stonith_notify_client: 	Sent st_notify_fence notification to client crmd.469.f2a5be
Nov 13 13:47:33 [465] vm3 stonith-ng: (      main.c:372   )   trace: do_stonith_notify: 	Notify complete
Nov 13 13:47:33 [465] vm3 stonith-ng: (  commands.c:2063  )    info: stonith_command: 	Processed st_notify reply from vm1: OK (0)
Nov 13 13:47:35 [465] vm3 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]   <stonith_command __name__="stonith_command" t="stonith-ng" st_async_id="84777767-aa8b-4e04-8dec-b26dae36aaff" st_op="st_query" st_callid="7" st_callopt="0" st_remote_op="84777767-aa8b-4e04-8dec-b26dae36aaff" st_target="vm3" st_device_action="reboot" st_origin="vm1" st_clientid="e6cedc3f-e233-4a66-9291-ce359bd76aad" st_clientname="crmd.15883" st_timeout="60" src="vm1"/>
Nov 13 13:47:35 [465] vm3 stonith-ng: (  commands.c:2049  )   debug: stonith_command: 	Processing st_query 0 from vm1 (               0)
Nov 13 13:47:35 [465] vm3 stonith-ng: (    remote.c:593   )   trace: create_remote_stonith_op: 	Created 84777767-aa8b-4e04-8dec-b26dae36aaff
Nov 13 13:47:35 [465] vm3 stonith-ng: (    remote.c:623   )   trace: create_remote_stonith_op: 	Recorded new stonith op: 84777767-aa8b-4e04-8dec-b26dae36aaff - reboot of vm3 for crmd.15883
Nov 13 13:47:35 [465] vm3 stonith-ng: (    remote.c:474   )   trace: merge_duplicates: 	Must be for different clients: crmd.15883
Nov 13 13:47:35 [465] vm3 stonith-ng: (  commands.c:1296  )   debug: stonith_query: 	Query   <stonith_command __name__="stonith_command" t="stonith-ng" st_async_id="84777767-aa8b-4e04-8dec-b26dae36aaff" st_op="st_query" st_callid="7" st_callopt="0" st_remote_op="84777767-aa8b-4e04-8dec-b26dae36aaff" st_target="vm3" st_device_action="reboot" st_origin="vm1" st_clientid="e6cedc3f-e233-4a66-9291-ce359bd76aad" st_clientname="crmd.15883" st_timeout="60" src="vm1"/>
Nov 13 13:47:35 [465] vm3 stonith-ng: (  commands.c:1650  )   trace: stonith_construct_reply: 	Creating a basic reply
Nov 13 13:47:35 [465] vm3 stonith-ng: (  commands.c:1254  )   debug: stonith_query_capable_device_cb: 	Found 0 matching devices for 'vm3'
Nov 13 13:47:35 [465] vm3 stonith-ng: (  commands.c:1260  )   trace: stonith_query_capable_device_cb: 	Attaching query list output
Nov 13 13:47:35 [465] vm3 stonith-ng: (  commands.c:2063  )    info: stonith_command: 	Processed st_query from vm1: OK (0)
Nov 13 13:47:36 [466] vm3       lrmd: (services_lin:217   )   debug: recurring_action_timer: 	Scheduling another invokation of pDummy_monitor_10000
Dummy(pDummy)[695]:	2013/11/13_13:47:36 DEBUG: pDummy monitor : 7
Nov 13 13:47:36 [466] vm3       lrmd: (services_lin:301   )   debug: operation_finished: 	pDummy_monitor_10000:695 - exited with rc=7
Nov 13 13:47:36 [466] vm3       lrmd: (services_lin:306   )   debug: operation_finished: 	pDummy_monitor_10000:695:stderr [ -- empty -- ]
Nov 13 13:47:36 [466] vm3       lrmd: (services_lin:310   )   debug: operation_finished: 	pDummy_monitor_10000:695:stdout [ -- empty -- ]
Nov 13 13:47:36 [466] vm3       lrmd: (      lrmd.c:104   )   debug: log_finished: 	finished - rsc:pDummy action:monitor call_id:11 pid:695 exit-code:7 exec-time:0ms queue-time:0ms
Nov 13 13:47:38 [465] vm3 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]   <st-reply t="st_notify" subt="broadcast" st_op="st_notify" count="6" src="vm1">
Nov 13 13:47:38 [465] vm3 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]     <st_calldata>
Nov 13 13:47:38 [465] vm3 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]       <st_notify_fence state="4" st_rc="-201" st_target="vm3" st_device_action="reboot" st_delegate="vm2" st_remote_op="84777767-aa8b-4e04-8dec-b26dae36aaff" st_origin="vm1" st_clientid="e6cedc3f-e233-4a66-9291-ce359bd76aad" st_clientname="crmd.15883"/>
Nov 13 13:47:38 [465] vm3 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]     </st_calldata>
Nov 13 13:47:38 [465] vm3 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]   </st-reply>
Nov 13 13:47:38 [465] vm3 stonith-ng: (  commands.c:2049  )   debug: stonith_command: 	Processing st_notify reply 0 from vm1 (               0)
Nov 13 13:47:38 [465] vm3 stonith-ng: (    remote.c:1281  )   debug: process_remote_stonith_exec: 	Marking call to reboot for vm3 on behalf of crmd.15883@84777767-aa8b-4e04-8dec-b26dae36aaff.vm1: Generic Pacemaker error (-201)
Nov 13 13:47:38 [465] vm3 stonith-ng: (    remote.c:297   )  notice: remote_op_done: 	Operation reboot of vm3 by vm1 for crmd.15883@vm1.84777767: Generic Pacemaker error
Nov 13 13:47:38 [465] vm3 stonith-ng: (  commands.c:1650  )   trace: stonith_construct_reply: 	Creating a basic reply
Nov 13 13:47:38 [465] vm3 stonith-ng: (  commands.c:1666  )   trace: stonith_construct_reply: 	Attaching reply output
Nov 13 13:47:38 [465] vm3 stonith-ng: (      main.c:241   )   trace: do_local_reply: 	Sending response
Nov 13 13:47:38 [465] vm3 stonith-ng: (      main.c:244   )   trace: do_local_reply: 	Sending callback to request originator
Nov 13 13:47:38 [465] vm3 stonith-ng: (      main.c:247   )   trace: do_local_reply: 	No client to sent the response to.  F_STONITH_CLIENTID not set.
Nov 13 13:47:38 [465] vm3 stonith-ng: (      main.c:369   )   trace: do_stonith_notify: 	Notifying clients
Nov 13 13:47:38 [469] vm3       crmd: (  te_utils.c:167   )  notice: tengine_stonith_notify: 	Peer vm3 was not terminated (reboot) by vm1 for vm1: Generic Pacemaker error (ref=84777767-aa8b-4e04-8dec-b26dae36aaff) by client crmd.15883
Nov 13 13:47:38 [465] vm3 stonith-ng: (      main.c:318   )   trace: stonith_notify_client: 	Sent st_notify_fence notification to client crmd.469.f2a5be
Nov 13 13:47:38 [465] vm3 stonith-ng: (      main.c:372   )   trace: do_stonith_notify: 	Notify complete
Nov 13 13:47:38 [465] vm3 stonith-ng: (  commands.c:2063  )    info: stonith_command: 	Processed st_notify reply from vm1: OK (0)
Nov 13 13:47:40 [465] vm3 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]   <stonith_command __name__="stonith_command" t="stonith-ng" st_async_id="588ca7d3-cb01-4e1b-9c7d-5fcdd5b66a27" st_op="st_query" st_callid="8" st_callopt="0" st_remote_op="588ca7d3-cb01-4e1b-9c7d-5fcdd5b66a27" st_target="vm3" st_device_action="reboot" st_origin="vm1" st_clientid="e6cedc3f-e233-4a66-9291-ce359bd76aad" st_clientname="crmd.15883" st_timeout="60" src="vm1"/>
Nov 13 13:47:40 [465] vm3 stonith-ng: (  commands.c:2049  )   debug: stonith_command: 	Processing st_query 0 from vm1 (               0)
Nov 13 13:47:40 [465] vm3 stonith-ng: (    remote.c:593   )   trace: create_remote_stonith_op: 	Created 588ca7d3-cb01-4e1b-9c7d-5fcdd5b66a27
Nov 13 13:47:40 [465] vm3 stonith-ng: (    remote.c:623   )   trace: create_remote_stonith_op: 	Recorded new stonith op: 588ca7d3-cb01-4e1b-9c7d-5fcdd5b66a27 - reboot of vm3 for crmd.15883
Nov 13 13:47:40 [465] vm3 stonith-ng: (    remote.c:474   )   trace: merge_duplicates: 	Must be for different clients: crmd.15883
Nov 13 13:47:40 [465] vm3 stonith-ng: (  commands.c:1296  )   debug: stonith_query: 	Query   <stonith_command __name__="stonith_command" t="stonith-ng" st_async_id="588ca7d3-cb01-4e1b-9c7d-5fcdd5b66a27" st_op="st_query" st_callid="8" st_callopt="0" st_remote_op="588ca7d3-cb01-4e1b-9c7d-5fcdd5b66a27" st_target="vm3" st_device_action="reboot" st_origin="vm1" st_clientid="e6cedc3f-e233-4a66-9291-ce359bd76aad" st_clientname="crmd.15883" st_timeout="60" src="vm1"/>
Nov 13 13:47:40 [465] vm3 stonith-ng: (  commands.c:1650  )   trace: stonith_construct_reply: 	Creating a basic reply
Nov 13 13:47:40 [465] vm3 stonith-ng: (  commands.c:1254  )   debug: stonith_query_capable_device_cb: 	Found 0 matching devices for 'vm3'
Nov 13 13:47:40 [465] vm3 stonith-ng: (  commands.c:1260  )   trace: stonith_query_capable_device_cb: 	Attaching query list output
Nov 13 13:47:40 [465] vm3 stonith-ng: (  commands.c:2063  )    info: stonith_command: 	Processed st_query from vm1: OK (0)
Nov 13 13:47:43 [465] vm3 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]   <st-reply t="st_notify" subt="broadcast" st_op="st_notify" count="7" src="vm1">
Nov 13 13:47:43 [465] vm3 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]     <st_calldata>
Nov 13 13:47:43 [465] vm3 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]       <st_notify_fence state="4" st_rc="-201" st_target="vm3" st_device_action="reboot" st_delegate="vm2" st_remote_op="588ca7d3-cb01-4e1b-9c7d-5fcdd5b66a27" st_origin="vm1" st_clientid="e6cedc3f-e233-4a66-9291-ce359bd76aad" st_clientname="crmd.15883"/>
Nov 13 13:47:43 [465] vm3 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]     </st_calldata>
Nov 13 13:47:43 [465] vm3 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]   </st-reply>
Nov 13 13:47:43 [465] vm3 stonith-ng: (  commands.c:2049  )   debug: stonith_command: 	Processing st_notify reply 0 from vm1 (               0)
Nov 13 13:47:43 [465] vm3 stonith-ng: (    remote.c:1281  )   debug: process_remote_stonith_exec: 	Marking call to reboot for vm3 on behalf of crmd.15883@588ca7d3-cb01-4e1b-9c7d-5fcdd5b66a27.vm1: Generic Pacemaker error (-201)
Nov 13 13:47:43 [465] vm3 stonith-ng: (    remote.c:297   )  notice: remote_op_done: 	Operation reboot of vm3 by vm1 for crmd.15883@vm1.588ca7d3: Generic Pacemaker error
Nov 13 13:47:43 [465] vm3 stonith-ng: (  commands.c:1650  )   trace: stonith_construct_reply: 	Creating a basic reply
Nov 13 13:47:43 [465] vm3 stonith-ng: (  commands.c:1666  )   trace: stonith_construct_reply: 	Attaching reply output
Nov 13 13:47:43 [465] vm3 stonith-ng: (      main.c:241   )   trace: do_local_reply: 	Sending response
Nov 13 13:47:43 [465] vm3 stonith-ng: (      main.c:244   )   trace: do_local_reply: 	Sending callback to request originator
Nov 13 13:47:43 [465] vm3 stonith-ng: (      main.c:247   )   trace: do_local_reply: 	No client to sent the response to.  F_STONITH_CLIENTID not set.
Nov 13 13:47:43 [465] vm3 stonith-ng: (      main.c:369   )   trace: do_stonith_notify: 	Notifying clients
Nov 13 13:47:43 [469] vm3       crmd: (  te_utils.c:167   )  notice: tengine_stonith_notify: 	Peer vm3 was not terminated (reboot) by vm1 for vm1: Generic Pacemaker error (ref=588ca7d3-cb01-4e1b-9c7d-5fcdd5b66a27) by client crmd.15883
Nov 13 13:47:43 [465] vm3 stonith-ng: (      main.c:318   )   trace: stonith_notify_client: 	Sent st_notify_fence notification to client crmd.469.f2a5be
Nov 13 13:47:43 [465] vm3 stonith-ng: (      main.c:372   )   trace: do_stonith_notify: 	Notify complete
Nov 13 13:47:43 [465] vm3 stonith-ng: (  commands.c:2063  )    info: stonith_command: 	Processed st_notify reply from vm1: OK (0)
Nov 13 13:47:45 [465] vm3 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]   <stonith_command __name__="stonith_command" t="stonith-ng" st_async_id="a3379e0c-d206-4ced-9e7e-1c915f08a0ae" st_op="st_query" st_callid="9" st_callopt="0" st_remote_op="a3379e0c-d206-4ced-9e7e-1c915f08a0ae" st_target="vm3" st_device_action="reboot" st_origin="vm1" st_clientid="e6cedc3f-e233-4a66-9291-ce359bd76aad" st_clientname="crmd.15883" st_timeout="60" src="vm1"/>
Nov 13 13:47:45 [465] vm3 stonith-ng: (  commands.c:2049  )   debug: stonith_command: 	Processing st_query 0 from vm1 (               0)
Nov 13 13:47:45 [465] vm3 stonith-ng: (    remote.c:593   )   trace: create_remote_stonith_op: 	Created a3379e0c-d206-4ced-9e7e-1c915f08a0ae
Nov 13 13:47:45 [465] vm3 stonith-ng: (    remote.c:623   )   trace: create_remote_stonith_op: 	Recorded new stonith op: a3379e0c-d206-4ced-9e7e-1c915f08a0ae - reboot of vm3 for crmd.15883
Nov 13 13:47:45 [465] vm3 stonith-ng: (    remote.c:474   )   trace: merge_duplicates: 	Must be for different clients: crmd.15883
Nov 13 13:47:45 [465] vm3 stonith-ng: (  commands.c:1296  )   debug: stonith_query: 	Query   <stonith_command __name__="stonith_command" t="stonith-ng" st_async_id="a3379e0c-d206-4ced-9e7e-1c915f08a0ae" st_op="st_query" st_callid="9" st_callopt="0" st_remote_op="a3379e0c-d206-4ced-9e7e-1c915f08a0ae" st_target="vm3" st_device_action="reboot" st_origin="vm1" st_clientid="e6cedc3f-e233-4a66-9291-ce359bd76aad" st_clientname="crmd.15883" st_timeout="60" src="vm1"/>
Nov 13 13:47:45 [465] vm3 stonith-ng: (  commands.c:1650  )   trace: stonith_construct_reply: 	Creating a basic reply
Nov 13 13:47:45 [465] vm3 stonith-ng: (  commands.c:1254  )   debug: stonith_query_capable_device_cb: 	Found 0 matching devices for 'vm3'
Nov 13 13:47:45 [465] vm3 stonith-ng: (  commands.c:1260  )   trace: stonith_query_capable_device_cb: 	Attaching query list output
Nov 13 13:47:45 [465] vm3 stonith-ng: (  commands.c:2063  )    info: stonith_command: 	Processed st_query from vm1: OK (0)
Nov 13 13:47:46 [466] vm3       lrmd: (services_lin:217   )   debug: recurring_action_timer: 	Scheduling another invokation of pDummy_monitor_10000
Dummy(pDummy)[704]:	2013/11/13_13:47:46 DEBUG: pDummy monitor : 7
Nov 13 13:47:46 [466] vm3       lrmd: (services_lin:301   )   debug: operation_finished: 	pDummy_monitor_10000:704 - exited with rc=7
Nov 13 13:47:46 [466] vm3       lrmd: (services_lin:306   )   debug: operation_finished: 	pDummy_monitor_10000:704:stderr [ -- empty -- ]
Nov 13 13:47:46 [466] vm3       lrmd: (services_lin:310   )   debug: operation_finished: 	pDummy_monitor_10000:704:stdout [ -- empty -- ]
Nov 13 13:47:46 [466] vm3       lrmd: (      lrmd.c:104   )   debug: log_finished: 	finished - rsc:pDummy action:monitor call_id:11 pid:704 exit-code:7 exec-time:0ms queue-time:0ms
Nov 13 13:47:48 [465] vm3 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]   <st-reply t="st_notify" subt="broadcast" st_op="st_notify" count="8" src="vm1">
Nov 13 13:47:48 [465] vm3 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]     <st_calldata>
Nov 13 13:47:48 [465] vm3 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]       <st_notify_fence state="4" st_rc="-201" st_target="vm3" st_device_action="reboot" st_delegate="vm2" st_remote_op="a3379e0c-d206-4ced-9e7e-1c915f08a0ae" st_origin="vm1" st_clientid="e6cedc3f-e233-4a66-9291-ce359bd76aad" st_clientname="crmd.15883"/>
Nov 13 13:47:48 [465] vm3 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]     </st_calldata>
Nov 13 13:47:48 [465] vm3 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]   </st-reply>
Nov 13 13:47:48 [465] vm3 stonith-ng: (  commands.c:2049  )   debug: stonith_command: 	Processing st_notify reply 0 from vm1 (               0)
Nov 13 13:47:48 [465] vm3 stonith-ng: (    remote.c:1281  )   debug: process_remote_stonith_exec: 	Marking call to reboot for vm3 on behalf of crmd.15883@a3379e0c-d206-4ced-9e7e-1c915f08a0ae.vm1: Generic Pacemaker error (-201)
Nov 13 13:47:48 [465] vm3 stonith-ng: (    remote.c:297   )  notice: remote_op_done: 	Operation reboot of vm3 by vm1 for crmd.15883@vm1.a3379e0c: Generic Pacemaker error
Nov 13 13:47:48 [465] vm3 stonith-ng: (  commands.c:1650  )   trace: stonith_construct_reply: 	Creating a basic reply
Nov 13 13:47:48 [465] vm3 stonith-ng: (  commands.c:1666  )   trace: stonith_construct_reply: 	Attaching reply output
Nov 13 13:47:48 [465] vm3 stonith-ng: (      main.c:241   )   trace: do_local_reply: 	Sending response
Nov 13 13:47:48 [465] vm3 stonith-ng: (      main.c:244   )   trace: do_local_reply: 	Sending callback to request originator
Nov 13 13:47:48 [465] vm3 stonith-ng: (      main.c:247   )   trace: do_local_reply: 	No client to sent the response to.  F_STONITH_CLIENTID not set.
Nov 13 13:47:48 [465] vm3 stonith-ng: (      main.c:369   )   trace: do_stonith_notify: 	Notifying clients
Nov 13 13:47:48 [469] vm3       crmd: (  te_utils.c:167   )  notice: tengine_stonith_notify: 	Peer vm3 was not terminated (reboot) by vm1 for vm1: Generic Pacemaker error (ref=a3379e0c-d206-4ced-9e7e-1c915f08a0ae) by client crmd.15883
Nov 13 13:47:48 [465] vm3 stonith-ng: (      main.c:318   )   trace: stonith_notify_client: 	Sent st_notify_fence notification to client crmd.469.f2a5be
Nov 13 13:47:48 [465] vm3 stonith-ng: (      main.c:372   )   trace: do_stonith_notify: 	Notify complete
Nov 13 13:47:48 [465] vm3 stonith-ng: (  commands.c:2063  )    info: stonith_command: 	Processed st_notify reply from vm1: OK (0)
Nov 13 13:47:50 [465] vm3 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]   <stonith_command __name__="stonith_command" t="stonith-ng" st_async_id="9ab4c26b-da3e-40cd-ba98-c89017db4953" st_op="st_query" st_callid="10" st_callopt="0" st_remote_op="9ab4c26b-da3e-40cd-ba98-c89017db4953" st_target="vm3" st_device_action="reboot" st_origin="vm1" st_clientid="e6cedc3f-e233-4a66-9291-ce359bd76aad" st_clientname="crmd.15883" st_timeout="60" src="vm1"/>
Nov 13 13:47:50 [465] vm3 stonith-ng: (  commands.c:2049  )   debug: stonith_command: 	Processing st_query 0 from vm1 (               0)
Nov 13 13:47:50 [465] vm3 stonith-ng: (    remote.c:593   )   trace: create_remote_stonith_op: 	Created 9ab4c26b-da3e-40cd-ba98-c89017db4953
Nov 13 13:47:50 [465] vm3 stonith-ng: (    remote.c:623   )   trace: create_remote_stonith_op: 	Recorded new stonith op: 9ab4c26b-da3e-40cd-ba98-c89017db4953 - reboot of vm3 for crmd.15883
Nov 13 13:47:50 [465] vm3 stonith-ng: (    remote.c:474   )   trace: merge_duplicates: 	Must be for different clients: crmd.15883
Nov 13 13:47:50 [465] vm3 stonith-ng: (  commands.c:1296  )   debug: stonith_query: 	Query   <stonith_command __name__="stonith_command" t="stonith-ng" st_async_id="9ab4c26b-da3e-40cd-ba98-c89017db4953" st_op="st_query" st_callid="10" st_callopt="0" st_remote_op="9ab4c26b-da3e-40cd-ba98-c89017db4953" st_target="vm3" st_device_action="reboot" st_origin="vm1" st_clientid="e6cedc3f-e233-4a66-9291-ce359bd76aad" st_clientname="crmd.15883" st_timeout="60" src="vm1"/>
Nov 13 13:47:50 [465] vm3 stonith-ng: (  commands.c:1650  )   trace: stonith_construct_reply: 	Creating a basic reply
Nov 13 13:47:50 [465] vm3 stonith-ng: (  commands.c:1254  )   debug: stonith_query_capable_device_cb: 	Found 0 matching devices for 'vm3'
Nov 13 13:47:50 [465] vm3 stonith-ng: (  commands.c:1260  )   trace: stonith_query_capable_device_cb: 	Attaching query list output
Nov 13 13:47:50 [465] vm3 stonith-ng: (  commands.c:2063  )    info: stonith_command: 	Processed st_query from vm1: OK (0)
Nov 13 13:47:52 [469] vm3       crmd: (  throttle.c:256   )   debug: throttle_cib_load: 	Current CIB load from 0 ticks in 30s is 0.000000 (@100 tps)
Nov 13 13:47:52 [469] vm3       crmd: (  throttle.c:298   )   debug: throttle_load_avg: 	Current load is 0.000000 (full: 0.00 0.00 0.00 1/108 712)
Nov 13 13:47:52 [469] vm3       crmd: (  throttle.c:378   )   debug: throttle_io_load: 	Current IO load is 0.000000
Nov 13 13:47:54 [465] vm3 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]   <st-reply t="st_notify" subt="broadcast" st_op="st_notify" count="9" src="vm1">
Nov 13 13:47:54 [465] vm3 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]     <st_calldata>
Nov 13 13:47:54 [465] vm3 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]       <st_notify_fence state="4" st_rc="-201" st_target="vm3" st_device_action="reboot" st_delegate="vm2" st_remote_op="9ab4c26b-da3e-40cd-ba98-c89017db4953" st_origin="vm1" st_clientid="e6cedc3f-e233-4a66-9291-ce359bd76aad" st_clientname="crmd.15883"/>
Nov 13 13:47:54 [465] vm3 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]     </st_calldata>
Nov 13 13:47:54 [465] vm3 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]   </st-reply>
Nov 13 13:47:54 [465] vm3 stonith-ng: (  commands.c:2049  )   debug: stonith_command: 	Processing st_notify reply 0 from vm1 (               0)
Nov 13 13:47:54 [465] vm3 stonith-ng: (    remote.c:1281  )   debug: process_remote_stonith_exec: 	Marking call to reboot for vm3 on behalf of crmd.15883@9ab4c26b-da3e-40cd-ba98-c89017db4953.vm1: Generic Pacemaker error (-201)
Nov 13 13:47:54 [465] vm3 stonith-ng: (    remote.c:297   )  notice: remote_op_done: 	Operation reboot of vm3 by vm1 for crmd.15883@vm1.9ab4c26b: Generic Pacemaker error
Nov 13 13:47:54 [465] vm3 stonith-ng: (  commands.c:1650  )   trace: stonith_construct_reply: 	Creating a basic reply
Nov 13 13:47:54 [465] vm3 stonith-ng: (  commands.c:1666  )   trace: stonith_construct_reply: 	Attaching reply output
Nov 13 13:47:54 [465] vm3 stonith-ng: (      main.c:241   )   trace: do_local_reply: 	Sending response
Nov 13 13:47:54 [465] vm3 stonith-ng: (      main.c:244   )   trace: do_local_reply: 	Sending callback to request originator
Nov 13 13:47:54 [465] vm3 stonith-ng: (      main.c:247   )   trace: do_local_reply: 	No client to sent the response to.  F_STONITH_CLIENTID not set.
Nov 13 13:47:54 [465] vm3 stonith-ng: (      main.c:369   )   trace: do_stonith_notify: 	Notifying clients
Nov 13 13:47:54 [469] vm3       crmd: (  te_utils.c:167   )  notice: tengine_stonith_notify: 	Peer vm3 was not terminated (reboot) by vm1 for vm1: Generic Pacemaker error (ref=9ab4c26b-da3e-40cd-ba98-c89017db4953) by client crmd.15883
Nov 13 13:47:54 [465] vm3 stonith-ng: (      main.c:318   )   trace: stonith_notify_client: 	Sent st_notify_fence notification to client crmd.469.f2a5be
Nov 13 13:47:54 [465] vm3 stonith-ng: (      main.c:372   )   trace: do_stonith_notify: 	Notify complete
Nov 13 13:47:54 [465] vm3 stonith-ng: (  commands.c:2063  )    info: stonith_command: 	Processed st_notify reply from vm1: OK (0)
Nov 13 13:47:56 [465] vm3 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]   <stonith_command __name__="stonith_command" t="stonith-ng" st_async_id="1ba836f2-328d-45c7-adbb-1db9b0a1ca4c" st_op="st_query" st_callid="11" st_callopt="0" st_remote_op="1ba836f2-328d-45c7-adbb-1db9b0a1ca4c" st_target="vm3" st_device_action="reboot" st_origin="vm1" st_clientid="e6cedc3f-e233-4a66-9291-ce359bd76aad" st_clientname="crmd.15883" st_timeout="60" src="vm1"/>
Nov 13 13:47:56 [465] vm3 stonith-ng: (  commands.c:2049  )   debug: stonith_command: 	Processing st_query 0 from vm1 (               0)
Nov 13 13:47:56 [465] vm3 stonith-ng: (    remote.c:593   )   trace: create_remote_stonith_op: 	Created 1ba836f2-328d-45c7-adbb-1db9b0a1ca4c
Nov 13 13:47:56 [465] vm3 stonith-ng: (    remote.c:623   )   trace: create_remote_stonith_op: 	Recorded new stonith op: 1ba836f2-328d-45c7-adbb-1db9b0a1ca4c - reboot of vm3 for crmd.15883
Nov 13 13:47:56 [465] vm3 stonith-ng: (    remote.c:474   )   trace: merge_duplicates: 	Must be for different clients: crmd.15883
Nov 13 13:47:56 [465] vm3 stonith-ng: (  commands.c:1296  )   debug: stonith_query: 	Query   <stonith_command __name__="stonith_command" t="stonith-ng" st_async_id="1ba836f2-328d-45c7-adbb-1db9b0a1ca4c" st_op="st_query" st_callid="11" st_callopt="0" st_remote_op="1ba836f2-328d-45c7-adbb-1db9b0a1ca4c" st_target="vm3" st_device_action="reboot" st_origin="vm1" st_clientid="e6cedc3f-e233-4a66-9291-ce359bd76aad" st_clientname="crmd.15883" st_timeout="60" src="vm1"/>
Nov 13 13:47:56 [465] vm3 stonith-ng: (  commands.c:1650  )   trace: stonith_construct_reply: 	Creating a basic reply
Nov 13 13:47:56 [465] vm3 stonith-ng: (  commands.c:1254  )   debug: stonith_query_capable_device_cb: 	Found 0 matching devices for 'vm3'
Nov 13 13:47:56 [465] vm3 stonith-ng: (  commands.c:1260  )   trace: stonith_query_capable_device_cb: 	Attaching query list output
Nov 13 13:47:56 [465] vm3 stonith-ng: (  commands.c:2063  )    info: stonith_command: 	Processed st_query from vm1: OK (0)
Nov 13 13:47:56 [466] vm3       lrmd: (services_lin:217   )   debug: recurring_action_timer: 	Scheduling another invokation of pDummy_monitor_10000
Dummy(pDummy)[713]:	2013/11/13_13:47:56 DEBUG: pDummy monitor : 7
Nov 13 13:47:56 [466] vm3       lrmd: (services_lin:301   )   debug: operation_finished: 	pDummy_monitor_10000:713 - exited with rc=7
Nov 13 13:47:56 [466] vm3       lrmd: (services_lin:306   )   debug: operation_finished: 	pDummy_monitor_10000:713:stderr [ -- empty -- ]
Nov 13 13:47:56 [466] vm3       lrmd: (services_lin:310   )   debug: operation_finished: 	pDummy_monitor_10000:713:stdout [ -- empty -- ]
Nov 13 13:47:56 [466] vm3       lrmd: (      lrmd.c:104   )   debug: log_finished: 	finished - rsc:pDummy action:monitor call_id:11 pid:713 exit-code:7 exec-time:0ms queue-time:0ms
Nov 13 13:47:59 [465] vm3 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]   <st-reply t="st_notify" subt="broadcast" st_op="st_notify" count="10" src="vm1">
Nov 13 13:47:59 [465] vm3 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]     <st_calldata>
Nov 13 13:47:59 [465] vm3 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]       <st_notify_fence state="4" st_rc="-201" st_target="vm3" st_device_action="reboot" st_delegate="vm2" st_remote_op="1ba836f2-328d-45c7-adbb-1db9b0a1ca4c" st_origin="vm1" st_clientid="e6cedc3f-e233-4a66-9291-ce359bd76aad" st_clientname="crmd.15883"/>
Nov 13 13:47:59 [465] vm3 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]     </st_calldata>
Nov 13 13:47:59 [465] vm3 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]   </st-reply>
Nov 13 13:47:59 [465] vm3 stonith-ng: (  commands.c:2049  )   debug: stonith_command: 	Processing st_notify reply 0 from vm1 (               0)
Nov 13 13:47:59 [465] vm3 stonith-ng: (    remote.c:1281  )   debug: process_remote_stonith_exec: 	Marking call to reboot for vm3 on behalf of crmd.15883@1ba836f2-328d-45c7-adbb-1db9b0a1ca4c.vm1: Generic Pacemaker error (-201)
Nov 13 13:47:59 [465] vm3 stonith-ng: (    remote.c:297   )  notice: remote_op_done: 	Operation reboot of vm3 by vm1 for crmd.15883@vm1.1ba836f2: Generic Pacemaker error
Nov 13 13:47:59 [465] vm3 stonith-ng: (  commands.c:1650  )   trace: stonith_construct_reply: 	Creating a basic reply
Nov 13 13:47:59 [465] vm3 stonith-ng: (  commands.c:1666  )   trace: stonith_construct_reply: 	Attaching reply output
Nov 13 13:47:59 [465] vm3 stonith-ng: (      main.c:241   )   trace: do_local_reply: 	Sending response
Nov 13 13:47:59 [465] vm3 stonith-ng: (      main.c:244   )   trace: do_local_reply: 	Sending callback to request originator
Nov 13 13:47:59 [465] vm3 stonith-ng: (      main.c:247   )   trace: do_local_reply: 	No client to sent the response to.  F_STONITH_CLIENTID not set.
Nov 13 13:47:59 [465] vm3 stonith-ng: (      main.c:369   )   trace: do_stonith_notify: 	Notifying clients
Nov 13 13:47:59 [469] vm3       crmd: (  te_utils.c:167   )  notice: tengine_stonith_notify: 	Peer vm3 was not terminated (reboot) by vm1 for vm1: Generic Pacemaker error (ref=1ba836f2-328d-45c7-adbb-1db9b0a1ca4c) by client crmd.15883
Nov 13 13:47:59 [465] vm3 stonith-ng: (      main.c:318   )   trace: stonith_notify_client: 	Sent st_notify_fence notification to client crmd.469.f2a5be
Nov 13 13:47:59 [465] vm3 stonith-ng: (      main.c:372   )   trace: do_stonith_notify: 	Notify complete
Nov 13 13:47:59 [465] vm3 stonith-ng: (  commands.c:2063  )    info: stonith_command: 	Processed st_notify reply from vm1: OK (0)
Nov 13 13:48:01 [465] vm3 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]   <stonith_command __name__="stonith_command" t="stonith-ng" st_async_id="00825b71-24e3-4f14-a0b8-6945f050dfd1" st_op="st_query" st_callid="12" st_callopt="0" st_remote_op="00825b71-24e3-4f14-a0b8-6945f050dfd1" st_target="vm3" st_device_action="reboot" st_origin="vm1" st_clientid="e6cedc3f-e233-4a66-9291-ce359bd76aad" st_clientname="crmd.15883" st_timeout="60" src="vm1"/>
Nov 13 13:48:01 [465] vm3 stonith-ng: (  commands.c:2049  )   debug: stonith_command: 	Processing st_query 0 from vm1 (               0)
Nov 13 13:48:01 [465] vm3 stonith-ng: (    remote.c:593   )   trace: create_remote_stonith_op: 	Created 00825b71-24e3-4f14-a0b8-6945f050dfd1
Nov 13 13:48:01 [465] vm3 stonith-ng: (    remote.c:623   )   trace: create_remote_stonith_op: 	Recorded new stonith op: 00825b71-24e3-4f14-a0b8-6945f050dfd1 - reboot of vm3 for crmd.15883
Nov 13 13:48:01 [465] vm3 stonith-ng: (    remote.c:474   )   trace: merge_duplicates: 	Must be for different clients: crmd.15883
Nov 13 13:48:01 [465] vm3 stonith-ng: (  commands.c:1296  )   debug: stonith_query: 	Query   <stonith_command __name__="stonith_command" t="stonith-ng" st_async_id="00825b71-24e3-4f14-a0b8-6945f050dfd1" st_op="st_query" st_callid="12" st_callopt="0" st_remote_op="00825b71-24e3-4f14-a0b8-6945f050dfd1" st_target="vm3" st_device_action="reboot" st_origin="vm1" st_clientid="e6cedc3f-e233-4a66-9291-ce359bd76aad" st_clientname="crmd.15883" st_timeout="60" src="vm1"/>
Nov 13 13:48:01 [465] vm3 stonith-ng: (  commands.c:1650  )   trace: stonith_construct_reply: 	Creating a basic reply
Nov 13 13:48:01 [465] vm3 stonith-ng: (  commands.c:1254  )   debug: stonith_query_capable_device_cb: 	Found 0 matching devices for 'vm3'
Nov 13 13:48:01 [465] vm3 stonith-ng: (  commands.c:1260  )   trace: stonith_query_capable_device_cb: 	Attaching query list output
Nov 13 13:48:01 [465] vm3 stonith-ng: (  commands.c:2063  )    info: stonith_command: 	Processed st_query from vm1: OK (0)
Nov 13 13:48:04 [465] vm3 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]   <st-reply t="st_notify" subt="broadcast" st_op="st_notify" count="11" src="vm1">
Nov 13 13:48:04 [465] vm3 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]     <st_calldata>
Nov 13 13:48:04 [465] vm3 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]       <st_notify_fence state="4" st_rc="-201" st_target="vm3" st_device_action="reboot" st_delegate="vm2" st_remote_op="00825b71-24e3-4f14-a0b8-6945f050dfd1" st_origin="vm1" st_clientid="e6cedc3f-e233-4a66-9291-ce359bd76aad" st_clientname="crmd.15883"/>
Nov 13 13:48:04 [465] vm3 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]     </st_calldata>
Nov 13 13:48:04 [465] vm3 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]   </st-reply>
Nov 13 13:48:04 [465] vm3 stonith-ng: (  commands.c:2049  )   debug: stonith_command: 	Processing st_notify reply 0 from vm1 (               0)
Nov 13 13:48:04 [465] vm3 stonith-ng: (    remote.c:1281  )   debug: process_remote_stonith_exec: 	Marking call to reboot for vm3 on behalf of crmd.15883@00825b71-24e3-4f14-a0b8-6945f050dfd1.vm1: Generic Pacemaker error (-201)
Nov 13 13:48:04 [465] vm3 stonith-ng: (    remote.c:297   )  notice: remote_op_done: 	Operation reboot of vm3 by vm1 for crmd.15883@vm1.00825b71: Generic Pacemaker error
Nov 13 13:48:04 [465] vm3 stonith-ng: (  commands.c:1650  )   trace: stonith_construct_reply: 	Creating a basic reply
Nov 13 13:48:04 [465] vm3 stonith-ng: (  commands.c:1666  )   trace: stonith_construct_reply: 	Attaching reply output
Nov 13 13:48:04 [465] vm3 stonith-ng: (      main.c:241   )   trace: do_local_reply: 	Sending response
Nov 13 13:48:04 [465] vm3 stonith-ng: (      main.c:244   )   trace: do_local_reply: 	Sending callback to request originator
Nov 13 13:48:04 [465] vm3 stonith-ng: (      main.c:247   )   trace: do_local_reply: 	No client to sent the response to.  F_STONITH_CLIENTID not set.
Nov 13 13:48:04 [465] vm3 stonith-ng: (      main.c:369   )   trace: do_stonith_notify: 	Notifying clients
Nov 13 13:48:04 [469] vm3       crmd: (  te_utils.c:167   )  notice: tengine_stonith_notify: 	Peer vm3 was not terminated (reboot) by vm1 for vm1: Generic Pacemaker error (ref=00825b71-24e3-4f14-a0b8-6945f050dfd1) by client crmd.15883
Nov 13 13:48:04 [465] vm3 stonith-ng: (      main.c:318   )   trace: stonith_notify_client: 	Sent st_notify_fence notification to client crmd.469.f2a5be
Nov 13 13:48:04 [465] vm3 stonith-ng: (      main.c:372   )   trace: do_stonith_notify: 	Notify complete
Nov 13 13:48:04 [465] vm3 stonith-ng: (  commands.c:2063  )    info: stonith_command: 	Processed st_notify reply from vm1: OK (0)
Nov 13 13:48:06 [466] vm3       lrmd: (services_lin:217   )   debug: recurring_action_timer: 	Scheduling another invokation of pDummy_monitor_10000
Dummy(pDummy)[740]:	2013/11/13_13:48:06 DEBUG: pDummy monitor : 7
Nov 13 13:48:06 [466] vm3       lrmd: (services_lin:301   )   debug: operation_finished: 	pDummy_monitor_10000:740 - exited with rc=7
Nov 13 13:48:06 [466] vm3       lrmd: (services_lin:306   )   debug: operation_finished: 	pDummy_monitor_10000:740:stderr [ -- empty -- ]
Nov 13 13:48:06 [466] vm3       lrmd: (services_lin:310   )   debug: operation_finished: 	pDummy_monitor_10000:740:stdout [ -- empty -- ]
Nov 13 13:48:06 [466] vm3       lrmd: (      lrmd.c:104   )   debug: log_finished: 	finished - rsc:pDummy action:monitor call_id:11 pid:740 exit-code:7 exec-time:0ms queue-time:0ms
Nov 13 13:48:16 [466] vm3       lrmd: (services_lin:217   )   debug: recurring_action_timer: 	Scheduling another invokation of pDummy_monitor_10000
Dummy(pDummy)[749]:	2013/11/13_13:48:17 DEBUG: pDummy monitor : 7
Nov 13 13:48:17 [466] vm3       lrmd: (services_lin:301   )   debug: operation_finished: 	pDummy_monitor_10000:749 - exited with rc=7
Nov 13 13:48:17 [466] vm3       lrmd: (services_lin:306   )   debug: operation_finished: 	pDummy_monitor_10000:749:stderr [ -- empty -- ]
Nov 13 13:48:17 [466] vm3       lrmd: (services_lin:310   )   debug: operation_finished: 	pDummy_monitor_10000:749:stdout [ -- empty -- ]
Nov 13 13:48:17 [466] vm3       lrmd: (      lrmd.c:104   )   debug: log_finished: 	finished - rsc:pDummy action:monitor call_id:11 pid:749 exit-code:7 exec-time:0ms queue-time:0ms
Nov 13 13:48:22 [469] vm3       crmd: (  throttle.c:256   )   debug: throttle_cib_load: 	Current CIB load from 0 ticks in 30s is 0.000000 (@100 tps)
Nov 13 13:48:22 [469] vm3       crmd: (  throttle.c:298   )   debug: throttle_load_avg: 	Current load is 0.000000 (full: 0.00 0.00 0.00 1/108 775)
Nov 13 13:48:22 [469] vm3       crmd: (  throttle.c:378   )   debug: throttle_io_load: 	Current IO load is 0.000000
Nov 13 13:48:27 [466] vm3       lrmd: (services_lin:217   )   debug: recurring_action_timer: 	Scheduling another invokation of pDummy_monitor_10000
Dummy(pDummy)[776]:	2013/11/13_13:48:27 DEBUG: pDummy monitor : 7
Nov 13 13:48:27 [466] vm3       lrmd: (services_lin:301   )   debug: operation_finished: 	pDummy_monitor_10000:776 - exited with rc=7
Nov 13 13:48:27 [466] vm3       lrmd: (services_lin:306   )   debug: operation_finished: 	pDummy_monitor_10000:776:stderr [ -- empty -- ]
Nov 13 13:48:27 [466] vm3       lrmd: (services_lin:310   )   debug: operation_finished: 	pDummy_monitor_10000:776:stdout [ -- empty -- ]
Nov 13 13:48:27 [466] vm3       lrmd: (      lrmd.c:104   )   debug: log_finished: 	finished - rsc:pDummy action:monitor call_id:11 pid:776 exit-code:7 exec-time:0ms queue-time:0ms
Nov 13 13:48:37 [466] vm3       lrmd: (services_lin:217   )   debug: recurring_action_timer: 	Scheduling another invokation of pDummy_monitor_10000
Dummy(pDummy)[790]:	2013/11/13_13:48:37 DEBUG: pDummy monitor : 7
Nov 13 13:48:37 [466] vm3       lrmd: (services_lin:301   )   debug: operation_finished: 	pDummy_monitor_10000:790 - exited with rc=7
Nov 13 13:48:37 [466] vm3       lrmd: (services_lin:306   )   debug: operation_finished: 	pDummy_monitor_10000:790:stderr [ -- empty -- ]
Nov 13 13:48:37 [466] vm3       lrmd: (services_lin:310   )   debug: operation_finished: 	pDummy_monitor_10000:790:stdout [ -- empty -- ]
Nov 13 13:48:37 [466] vm3       lrmd: (      lrmd.c:104   )   debug: log_finished: 	finished - rsc:pDummy action:monitor call_id:11 pid:790 exit-code:7 exec-time:0ms queue-time:0ms
Nov 13 13:48:47 [466] vm3       lrmd: (services_lin:217   )   debug: recurring_action_timer: 	Scheduling another invokation of pDummy_monitor_10000
Dummy(pDummy)[798]:	2013/11/13_13:48:47 DEBUG: pDummy monitor : 7
Nov 13 13:48:47 [466] vm3       lrmd: (services_lin:301   )   debug: operation_finished: 	pDummy_monitor_10000:798 - exited with rc=7
Nov 13 13:48:47 [466] vm3       lrmd: (services_lin:306   )   debug: operation_finished: 	pDummy_monitor_10000:798:stderr [ -- empty -- ]
Nov 13 13:48:47 [466] vm3       lrmd: (services_lin:310   )   debug: operation_finished: 	pDummy_monitor_10000:798:stdout [ -- empty -- ]
Nov 13 13:48:47 [466] vm3       lrmd: (      lrmd.c:104   )   debug: log_finished: 	finished - rsc:pDummy action:monitor call_id:11 pid:798 exit-code:7 exec-time:0ms queue-time:0ms
Nov 13 13:48:52 [469] vm3       crmd: (  throttle.c:256   )   debug: throttle_cib_load: 	Current CIB load from 0 ticks in 30s is 0.000000 (@100 tps)
Nov 13 13:48:52 [469] vm3       crmd: (  throttle.c:298   )   debug: throttle_load_avg: 	Current load is 0.000000 (full: 0.00 0.00 0.00 1/108 823)
Nov 13 13:48:52 [469] vm3       crmd: (  throttle.c:378   )   debug: throttle_io_load: 	Current IO load is 0.000000
Nov 13 13:48:57 [466] vm3       lrmd: (services_lin:217   )   debug: recurring_action_timer: 	Scheduling another invokation of pDummy_monitor_10000
Dummy(pDummy)[824]:	2013/11/13_13:48:57 DEBUG: pDummy monitor : 7
Nov 13 13:48:57 [466] vm3       lrmd: (services_lin:301   )   debug: operation_finished: 	pDummy_monitor_10000:824 - exited with rc=7
Nov 13 13:48:57 [466] vm3       lrmd: (services_lin:306   )   debug: operation_finished: 	pDummy_monitor_10000:824:stderr [ -- empty -- ]
Nov 13 13:48:57 [466] vm3       lrmd: (services_lin:310   )   debug: operation_finished: 	pDummy_monitor_10000:824:stdout [ -- empty -- ]
Nov 13 13:48:57 [466] vm3       lrmd: (      lrmd.c:104   )   debug: log_finished: 	finished - rsc:pDummy action:monitor call_id:11 pid:824 exit-code:7 exec-time:0ms queue-time:0ms
Nov 13 13:49:07 [466] vm3       lrmd: (services_lin:217   )   debug: recurring_action_timer: 	Scheduling another invokation of pDummy_monitor_10000
Dummy(pDummy)[832]:	2013/11/13_13:49:07 DEBUG: pDummy monitor : 7
Nov 13 13:49:07 [466] vm3       lrmd: (services_lin:301   )   debug: operation_finished: 	pDummy_monitor_10000:832 - exited with rc=7
Nov 13 13:49:07 [466] vm3       lrmd: (services_lin:306   )   debug: operation_finished: 	pDummy_monitor_10000:832:stderr [ -- empty -- ]
Nov 13 13:49:07 [466] vm3       lrmd: (services_lin:310   )   debug: operation_finished: 	pDummy_monitor_10000:832:stdout [ -- empty -- ]
Nov 13 13:49:07 [466] vm3       lrmd: (      lrmd.c:104   )   debug: log_finished: 	finished - rsc:pDummy action:monitor call_id:11 pid:832 exit-code:7 exec-time:0ms queue-time:0ms
Nov 13 13:49:17 [466] vm3       lrmd: (services_lin:217   )   debug: recurring_action_timer: 	Scheduling another invokation of pDummy_monitor_10000
Dummy(pDummy)[840]:	2013/11/13_13:49:17 DEBUG: pDummy monitor : 7
Nov 13 13:49:17 [466] vm3       lrmd: (services_lin:301   )   debug: operation_finished: 	pDummy_monitor_10000:840 - exited with rc=7
Nov 13 13:49:17 [466] vm3       lrmd: (services_lin:306   )   debug: operation_finished: 	pDummy_monitor_10000:840:stderr [ -- empty -- ]
Nov 13 13:49:17 [466] vm3       lrmd: (services_lin:310   )   debug: operation_finished: 	pDummy_monitor_10000:840:stdout [ -- empty -- ]
Nov 13 13:49:17 [466] vm3       lrmd: (      lrmd.c:104   )   debug: log_finished: 	finished - rsc:pDummy action:monitor call_id:11 pid:840 exit-code:7 exec-time:0ms queue-time:0ms
Nov 13 13:49:22 [469] vm3       crmd: (  throttle.c:256   )   debug: throttle_cib_load: 	Current CIB load from 0 ticks in 30s is 0.000000 (@100 tps)
Nov 13 13:49:22 [469] vm3       crmd: (  throttle.c:298   )   debug: throttle_load_avg: 	Current load is 0.000000 (full: 0.00 0.00 0.00 1/108 847)
Nov 13 13:49:22 [469] vm3       crmd: (  throttle.c:378   )   debug: throttle_io_load: 	Current IO load is 0.000000
Nov 13 13:49:27 [466] vm3       lrmd: (services_lin:217   )   debug: recurring_action_timer: 	Scheduling another invokation of pDummy_monitor_10000
Dummy(pDummy)[848]:	2013/11/13_13:49:27 DEBUG: pDummy monitor : 7
Nov 13 13:49:27 [466] vm3       lrmd: (services_lin:301   )   debug: operation_finished: 	pDummy_monitor_10000:848 - exited with rc=7
Nov 13 13:49:27 [466] vm3       lrmd: (services_lin:306   )   debug: operation_finished: 	pDummy_monitor_10000:848:stderr [ -- empty -- ]
Nov 13 13:49:27 [466] vm3       lrmd: (services_lin:310   )   debug: operation_finished: 	pDummy_monitor_10000:848:stdout [ -- empty -- ]
Nov 13 13:49:27 [466] vm3       lrmd: (      lrmd.c:104   )   debug: log_finished: 	finished - rsc:pDummy action:monitor call_id:11 pid:848 exit-code:7 exec-time:0ms queue-time:0ms
Nov 13 13:49:37 [466] vm3       lrmd: (services_lin:217   )   debug: recurring_action_timer: 	Scheduling another invokation of pDummy_monitor_10000
Dummy(pDummy)[863]:	2013/11/13_13:49:37 DEBUG: pDummy monitor : 7
Nov 13 13:49:37 [466] vm3       lrmd: (services_lin:301   )   debug: operation_finished: 	pDummy_monitor_10000:863 - exited with rc=7
Nov 13 13:49:37 [466] vm3       lrmd: (services_lin:306   )   debug: operation_finished: 	pDummy_monitor_10000:863:stderr [ -- empty -- ]
Nov 13 13:49:37 [466] vm3       lrmd: (services_lin:310   )   debug: operation_finished: 	pDummy_monitor_10000:863:stdout [ -- empty -- ]
Nov 13 13:49:37 [466] vm3       lrmd: (      lrmd.c:104   )   debug: log_finished: 	finished - rsc:pDummy action:monitor call_id:11 pid:863 exit-code:7 exec-time:0ms queue-time:0ms
Nov 13 13:49:47 [466] vm3       lrmd: (services_lin:217   )   debug: recurring_action_timer: 	Scheduling another invokation of pDummy_monitor_10000
Dummy(pDummy)[871]:	2013/11/13_13:49:47 DEBUG: pDummy monitor : 7
Nov 13 13:49:47 [466] vm3       lrmd: (services_lin:301   )   debug: operation_finished: 	pDummy_monitor_10000:871 - exited with rc=7
Nov 13 13:49:47 [466] vm3       lrmd: (services_lin:306   )   debug: operation_finished: 	pDummy_monitor_10000:871:stderr [ -- empty -- ]
Nov 13 13:49:47 [466] vm3       lrmd: (services_lin:310   )   debug: operation_finished: 	pDummy_monitor_10000:871:stdout [ -- empty -- ]
Nov 13 13:49:47 [466] vm3       lrmd: (      lrmd.c:104   )   debug: log_finished: 	finished - rsc:pDummy action:monitor call_id:11 pid:871 exit-code:7 exec-time:0ms queue-time:0ms
Nov 13 13:49:52 [469] vm3       crmd: (  throttle.c:256   )   debug: throttle_cib_load: 	Current CIB load from 0 ticks in 30s is 0.000000 (@100 tps)
Nov 13 13:49:52 [469] vm3       crmd: (  throttle.c:298   )   debug: throttle_load_avg: 	Current load is 0.000000 (full: 0.00 0.00 0.00 1/108 878)
Nov 13 13:49:52 [469] vm3       crmd: (  throttle.c:378   )   debug: throttle_io_load: 	Current IO load is 0.000000
Nov 13 13:49:57 [466] vm3       lrmd: (services_lin:217   )   debug: recurring_action_timer: 	Scheduling another invokation of pDummy_monitor_10000
Dummy(pDummy)[879]:	2013/11/13_13:49:57 DEBUG: pDummy monitor : 7
Nov 13 13:49:57 [466] vm3       lrmd: (services_lin:301   )   debug: operation_finished: 	pDummy_monitor_10000:879 - exited with rc=7
Nov 13 13:49:57 [466] vm3       lrmd: (services_lin:306   )   debug: operation_finished: 	pDummy_monitor_10000:879:stderr [ -- empty -- ]
Nov 13 13:49:57 [466] vm3       lrmd: (services_lin:310   )   debug: operation_finished: 	pDummy_monitor_10000:879:stdout [ -- empty -- ]
Nov 13 13:49:57 [466] vm3       lrmd: (      lrmd.c:104   )   debug: log_finished: 	finished - rsc:pDummy action:monitor call_id:11 pid:879 exit-code:7 exec-time:0ms queue-time:0ms
Nov 13 13:50:07 [466] vm3       lrmd: (services_lin:217   )   debug: recurring_action_timer: 	Scheduling another invokation of pDummy_monitor_10000
Dummy(pDummy)[890]:	2013/11/13_13:50:07 DEBUG: pDummy monitor : 7
Nov 13 13:50:07 [466] vm3       lrmd: (services_lin:301   )   debug: operation_finished: 	pDummy_monitor_10000:890 - exited with rc=7
Nov 13 13:50:07 [466] vm3       lrmd: (services_lin:306   )   debug: operation_finished: 	pDummy_monitor_10000:890:stderr [ -- empty -- ]
Nov 13 13:50:07 [466] vm3       lrmd: (services_lin:310   )   debug: operation_finished: 	pDummy_monitor_10000:890:stdout [ -- empty -- ]
Nov 13 13:50:07 [466] vm3       lrmd: (      lrmd.c:104   )   debug: log_finished: 	finished - rsc:pDummy action:monitor call_id:11 pid:890 exit-code:7 exec-time:0ms queue-time:0ms
Nov 13 13:50:17 [466] vm3       lrmd: (services_lin:217   )   debug: recurring_action_timer: 	Scheduling another invokation of pDummy_monitor_10000
Dummy(pDummy)[898]:	2013/11/13_13:50:17 DEBUG: pDummy monitor : 7
Nov 13 13:50:17 [466] vm3       lrmd: (services_lin:301   )   debug: operation_finished: 	pDummy_monitor_10000:898 - exited with rc=7
Nov 13 13:50:17 [466] vm3       lrmd: (services_lin:306   )   debug: operation_finished: 	pDummy_monitor_10000:898:stderr [ -- empty -- ]
Nov 13 13:50:17 [466] vm3       lrmd: (services_lin:310   )   debug: operation_finished: 	pDummy_monitor_10000:898:stdout [ -- empty -- ]
Nov 13 13:50:17 [466] vm3       lrmd: (      lrmd.c:104   )   debug: log_finished: 	finished - rsc:pDummy action:monitor call_id:11 pid:898 exit-code:7 exec-time:0ms queue-time:0ms
Nov 13 13:50:22 [469] vm3       crmd: (  throttle.c:256   )   debug: throttle_cib_load: 	Current CIB load from 0 ticks in 30s is 0.000000 (@100 tps)
Nov 13 13:50:22 [469] vm3       crmd: (  throttle.c:298   )   debug: throttle_load_avg: 	Current load is 0.000000 (full: 0.00 0.00 0.00 1/108 905)
Nov 13 13:50:22 [469] vm3       crmd: (  throttle.c:378   )   debug: throttle_io_load: 	Current IO load is 0.000000
Nov 13 13:50:27 [466] vm3       lrmd: (services_lin:217   )   debug: recurring_action_timer: 	Scheduling another invokation of pDummy_monitor_10000
Dummy(pDummy)[906]:	2013/11/13_13:50:27 DEBUG: pDummy monitor : 7
Nov 13 13:50:27 [466] vm3       lrmd: (services_lin:301   )   debug: operation_finished: 	pDummy_monitor_10000:906 - exited with rc=7
Nov 13 13:50:27 [466] vm3       lrmd: (services_lin:306   )   debug: operation_finished: 	pDummy_monitor_10000:906:stderr [ -- empty -- ]
Nov 13 13:50:27 [466] vm3       lrmd: (services_lin:310   )   debug: operation_finished: 	pDummy_monitor_10000:906:stdout [ -- empty -- ]
Nov 13 13:50:27 [466] vm3       lrmd: (      lrmd.c:104   )   debug: log_finished: 	finished - rsc:pDummy action:monitor call_id:11 pid:906 exit-code:7 exec-time:0ms queue-time:0ms
Nov 13 13:50:37 [466] vm3       lrmd: (services_lin:217   )   debug: recurring_action_timer: 	Scheduling another invokation of pDummy_monitor_10000
Dummy(pDummy)[920]:	2013/11/13_13:50:37 DEBUG: pDummy monitor : 7
Nov 13 13:50:37 [466] vm3       lrmd: (services_lin:301   )   debug: operation_finished: 	pDummy_monitor_10000:920 - exited with rc=7
Nov 13 13:50:37 [466] vm3       lrmd: (services_lin:306   )   debug: operation_finished: 	pDummy_monitor_10000:920:stderr [ -- empty -- ]
Nov 13 13:50:37 [466] vm3       lrmd: (services_lin:310   )   debug: operation_finished: 	pDummy_monitor_10000:920:stdout [ -- empty -- ]
Nov 13 13:50:37 [466] vm3       lrmd: (      lrmd.c:104   )   debug: log_finished: 	finished - rsc:pDummy action:monitor call_id:11 pid:920 exit-code:7 exec-time:0ms queue-time:0ms
Nov 13 13:50:47 [466] vm3       lrmd: (services_lin:217   )   debug: recurring_action_timer: 	Scheduling another invokation of pDummy_monitor_10000
Dummy(pDummy)[929]:	2013/11/13_13:50:47 DEBUG: pDummy monitor : 7
Nov 13 13:50:47 [466] vm3       lrmd: (services_lin:301   )   debug: operation_finished: 	pDummy_monitor_10000:929 - exited with rc=7
Nov 13 13:50:47 [466] vm3       lrmd: (services_lin:306   )   debug: operation_finished: 	pDummy_monitor_10000:929:stderr [ -- empty -- ]
Nov 13 13:50:47 [466] vm3       lrmd: (services_lin:310   )   debug: operation_finished: 	pDummy_monitor_10000:929:stdout [ -- empty -- ]
Nov 13 13:50:47 [466] vm3       lrmd: (      lrmd.c:104   )   debug: log_finished: 	finished - rsc:pDummy action:monitor call_id:11 pid:929 exit-code:7 exec-time:0ms queue-time:0ms
Nov 13 13:50:52 [469] vm3       crmd: (  throttle.c:256   )   debug: throttle_cib_load: 	Current CIB load from 0 ticks in 30s is 0.000000 (@100 tps)
Nov 13 13:50:52 [469] vm3       crmd: (  throttle.c:298   )   debug: throttle_load_avg: 	Current load is 0.000000 (full: 0.00 0.00 0.00 1/108 936)
Nov 13 13:50:52 [469] vm3       crmd: (  throttle.c:378   )   debug: throttle_io_load: 	Current IO load is 0.000000
Nov 13 13:50:57 [466] vm3       lrmd: (services_lin:217   )   debug: recurring_action_timer: 	Scheduling another invokation of pDummy_monitor_10000
Dummy(pDummy)[937]:	2013/11/13_13:50:57 DEBUG: pDummy monitor : 7
Nov 13 13:50:57 [466] vm3       lrmd: (services_lin:301   )   debug: operation_finished: 	pDummy_monitor_10000:937 - exited with rc=7
Nov 13 13:50:57 [466] vm3       lrmd: (services_lin:306   )   debug: operation_finished: 	pDummy_monitor_10000:937:stderr [ -- empty -- ]
Nov 13 13:50:57 [466] vm3       lrmd: (services_lin:310   )   debug: operation_finished: 	pDummy_monitor_10000:937:stdout [ -- empty -- ]
Nov 13 13:50:57 [466] vm3       lrmd: (      lrmd.c:104   )   debug: log_finished: 	finished - rsc:pDummy action:monitor call_id:11 pid:937 exit-code:7 exec-time:0ms queue-time:0ms
Nov 13 13:51:07 [466] vm3       lrmd: (services_lin:217   )   debug: recurring_action_timer: 	Scheduling another invokation of pDummy_monitor_10000
Dummy(pDummy)[945]:	2013/11/13_13:51:07 DEBUG: pDummy monitor : 7
Nov 13 13:51:07 [466] vm3       lrmd: (services_lin:301   )   debug: operation_finished: 	pDummy_monitor_10000:945 - exited with rc=7
Nov 13 13:51:07 [466] vm3       lrmd: (services_lin:306   )   debug: operation_finished: 	pDummy_monitor_10000:945:stderr [ -- empty -- ]
Nov 13 13:51:07 [466] vm3       lrmd: (services_lin:310   )   debug: operation_finished: 	pDummy_monitor_10000:945:stdout [ -- empty -- ]
Nov 13 13:51:07 [466] vm3       lrmd: (      lrmd.c:104   )   debug: log_finished: 	finished - rsc:pDummy action:monitor call_id:11 pid:945 exit-code:7 exec-time:0ms queue-time:0ms
Nov 13 13:51:17 [466] vm3       lrmd: (services_lin:217   )   debug: recurring_action_timer: 	Scheduling another invokation of pDummy_monitor_10000
Dummy(pDummy)[953]:	2013/11/13_13:51:17 DEBUG: pDummy monitor : 7
Nov 13 13:51:17 [466] vm3       lrmd: (services_lin:301   )   debug: operation_finished: 	pDummy_monitor_10000:953 - exited with rc=7
Nov 13 13:51:17 [466] vm3       lrmd: (services_lin:306   )   debug: operation_finished: 	pDummy_monitor_10000:953:stderr [ -- empty -- ]
Nov 13 13:51:17 [466] vm3       lrmd: (services_lin:310   )   debug: operation_finished: 	pDummy_monitor_10000:953:stdout [ -- empty -- ]
Nov 13 13:51:17 [466] vm3       lrmd: (      lrmd.c:104   )   debug: log_finished: 	finished - rsc:pDummy action:monitor call_id:11 pid:953 exit-code:7 exec-time:0ms queue-time:0ms
Nov 13 13:51:22 [469] vm3       crmd: (  throttle.c:256   )   debug: throttle_cib_load: 	Current CIB load from 0 ticks in 30s is 0.000000 (@100 tps)
Nov 13 13:51:22 [469] vm3       crmd: (  throttle.c:298   )   debug: throttle_load_avg: 	Current load is 0.000000 (full: 0.00 0.00 0.00 1/108 961)
Nov 13 13:51:22 [469] vm3       crmd: (  throttle.c:378   )   debug: throttle_io_load: 	Current IO load is 0.000000
Nov 13 13:51:27 [466] vm3       lrmd: (services_lin:217   )   debug: recurring_action_timer: 	Scheduling another invokation of pDummy_monitor_10000
Dummy(pDummy)[962]:	2013/11/13_13:51:27 DEBUG: pDummy monitor : 7
Nov 13 13:51:27 [466] vm3       lrmd: (services_lin:301   )   debug: operation_finished: 	pDummy_monitor_10000:962 - exited with rc=7
Nov 13 13:51:27 [466] vm3       lrmd: (services_lin:306   )   debug: operation_finished: 	pDummy_monitor_10000:962:stderr [ -- empty -- ]
Nov 13 13:51:27 [466] vm3       lrmd: (services_lin:310   )   debug: operation_finished: 	pDummy_monitor_10000:962:stdout [ -- empty -- ]
Nov 13 13:51:27 [466] vm3       lrmd: (      lrmd.c:104   )   debug: log_finished: 	finished - rsc:pDummy action:monitor call_id:11 pid:962 exit-code:7 exec-time:0ms queue-time:0ms
Nov 13 13:51:37 [466] vm3       lrmd: (services_lin:217   )   debug: recurring_action_timer: 	Scheduling another invokation of pDummy_monitor_10000
Dummy(pDummy)[976]:	2013/11/13_13:51:37 DEBUG: pDummy monitor : 7
Nov 13 13:51:37 [466] vm3       lrmd: (services_lin:301   )   debug: operation_finished: 	pDummy_monitor_10000:976 - exited with rc=7
Nov 13 13:51:37 [466] vm3       lrmd: (services_lin:306   )   debug: operation_finished: 	pDummy_monitor_10000:976:stderr [ -- empty -- ]
Nov 13 13:51:37 [466] vm3       lrmd: (services_lin:310   )   debug: operation_finished: 	pDummy_monitor_10000:976:stdout [ -- empty -- ]
Nov 13 13:51:37 [466] vm3       lrmd: (      lrmd.c:104   )   debug: log_finished: 	finished - rsc:pDummy action:monitor call_id:11 pid:976 exit-code:7 exec-time:0ms queue-time:0ms
Nov 13 13:51:47 [466] vm3       lrmd: (services_lin:217   )   debug: recurring_action_timer: 	Scheduling another invokation of pDummy_monitor_10000
Dummy(pDummy)[984]:	2013/11/13_13:51:47 DEBUG: pDummy monitor : 7
Nov 13 13:51:47 [466] vm3       lrmd: (services_lin:301   )   debug: operation_finished: 	pDummy_monitor_10000:984 - exited with rc=7
Nov 13 13:51:47 [466] vm3       lrmd: (services_lin:306   )   debug: operation_finished: 	pDummy_monitor_10000:984:stderr [ -- empty -- ]
Nov 13 13:51:47 [466] vm3       lrmd: (services_lin:310   )   debug: operation_finished: 	pDummy_monitor_10000:984:stdout [ -- empty -- ]
Nov 13 13:51:47 [466] vm3       lrmd: (      lrmd.c:104   )   debug: log_finished: 	finished - rsc:pDummy action:monitor call_id:11 pid:984 exit-code:7 exec-time:0ms queue-time:0ms
Nov 13 13:51:52 [469] vm3       crmd: (  throttle.c:256   )   debug: throttle_cib_load: 	Current CIB load from 0 ticks in 30s is 0.000000 (@100 tps)
Nov 13 13:51:52 [469] vm3       crmd: (  throttle.c:298   )   debug: throttle_load_avg: 	Current load is 0.000000 (full: 0.00 0.00 0.00 1/108 991)
Nov 13 13:51:52 [469] vm3       crmd: (  throttle.c:378   )   debug: throttle_io_load: 	Current IO load is 0.000000
Nov 13 13:51:57 [466] vm3       lrmd: (services_lin:217   )   debug: recurring_action_timer: 	Scheduling another invokation of pDummy_monitor_10000
Dummy(pDummy)[992]:	2013/11/13_13:51:58 DEBUG: pDummy monitor : 7
Nov 13 13:51:58 [466] vm3       lrmd: (services_lin:301   )   debug: operation_finished: 	pDummy_monitor_10000:992 - exited with rc=7
Nov 13 13:51:58 [466] vm3       lrmd: (services_lin:306   )   debug: operation_finished: 	pDummy_monitor_10000:992:stderr [ -- empty -- ]
Nov 13 13:51:58 [466] vm3       lrmd: (services_lin:310   )   debug: operation_finished: 	pDummy_monitor_10000:992:stdout [ -- empty -- ]
Nov 13 13:51:58 [466] vm3       lrmd: (      lrmd.c:104   )   debug: log_finished: 	finished - rsc:pDummy action:monitor call_id:11 pid:992 exit-code:7 exec-time:0ms queue-time:0ms
Nov 13 13:52:08 [466] vm3       lrmd: (services_lin:217   )   debug: recurring_action_timer: 	Scheduling another invokation of pDummy_monitor_10000
Dummy(pDummy)[1001]:	2013/11/13_13:52:08 DEBUG: pDummy monitor : 7
Nov 13 13:52:08 [466] vm3       lrmd: (services_lin:301   )   debug: operation_finished: 	pDummy_monitor_10000:1001 - exited with rc=7
Nov 13 13:52:08 [466] vm3       lrmd: (services_lin:306   )   debug: operation_finished: 	pDummy_monitor_10000:1001:stderr [ -- empty -- ]
Nov 13 13:52:08 [466] vm3       lrmd: (services_lin:310   )   debug: operation_finished: 	pDummy_monitor_10000:1001:stdout [ -- empty -- ]
Nov 13 13:52:08 [466] vm3       lrmd: (      lrmd.c:104   )   debug: log_finished: 	finished - rsc:pDummy action:monitor call_id:11 pid:1001 exit-code:7 exec-time:0ms queue-time:0ms
Nov 13 13:52:18 [466] vm3       lrmd: (services_lin:217   )   debug: recurring_action_timer: 	Scheduling another invokation of pDummy_monitor_10000
Dummy(pDummy)[1009]:	2013/11/13_13:52:18 DEBUG: pDummy monitor : 7
Nov 13 13:52:18 [466] vm3       lrmd: (services_lin:301   )   debug: operation_finished: 	pDummy_monitor_10000:1009 - exited with rc=7
Nov 13 13:52:18 [466] vm3       lrmd: (services_lin:306   )   debug: operation_finished: 	pDummy_monitor_10000:1009:stderr [ -- empty -- ]
Nov 13 13:52:18 [466] vm3       lrmd: (services_lin:310   )   debug: operation_finished: 	pDummy_monitor_10000:1009:stdout [ -- empty -- ]
Nov 13 13:52:18 [466] vm3       lrmd: (      lrmd.c:104   )   debug: log_finished: 	finished - rsc:pDummy action:monitor call_id:11 pid:1009 exit-code:7 exec-time:0ms queue-time:0ms
Nov 13 13:52:22 [469] vm3       crmd: (  throttle.c:256   )   debug: throttle_cib_load: 	Current CIB load from 0 ticks in 30s is 0.000000 (@100 tps)
Nov 13 13:52:22 [469] vm3       crmd: (  throttle.c:298   )   debug: throttle_load_avg: 	Current load is 0.000000 (full: 0.00 0.00 0.00 1/108 1018)
Nov 13 13:52:22 [469] vm3       crmd: (  throttle.c:378   )   debug: throttle_io_load: 	Current IO load is 0.000000
Nov 13 13:52:28 [466] vm3       lrmd: (services_lin:217   )   debug: recurring_action_timer: 	Scheduling another invokation of pDummy_monitor_10000
Dummy(pDummy)[1019]:	2013/11/13_13:52:28 DEBUG: pDummy monitor : 7
Nov 13 13:52:28 [466] vm3       lrmd: (services_lin:301   )   debug: operation_finished: 	pDummy_monitor_10000:1019 - exited with rc=7
Nov 13 13:52:28 [466] vm3       lrmd: (services_lin:306   )   debug: operation_finished: 	pDummy_monitor_10000:1019:stderr [ -- empty -- ]
Nov 13 13:52:28 [466] vm3       lrmd: (services_lin:310   )   debug: operation_finished: 	pDummy_monitor_10000:1019:stdout [ -- empty -- ]
Nov 13 13:52:28 [466] vm3       lrmd: (      lrmd.c:104   )   debug: log_finished: 	finished - rsc:pDummy action:monitor call_id:11 pid:1019 exit-code:7 exec-time:0ms queue-time:0ms
Nov 13 13:52:38 [466] vm3       lrmd: (services_lin:217   )   debug: recurring_action_timer: 	Scheduling another invokation of pDummy_monitor_10000
Dummy(pDummy)[1034]:	2013/11/13_13:52:38 DEBUG: pDummy monitor : 7
Nov 13 13:52:38 [466] vm3       lrmd: (services_lin:301   )   debug: operation_finished: 	pDummy_monitor_10000:1034 - exited with rc=7
Nov 13 13:52:38 [466] vm3       lrmd: (services_lin:306   )   debug: operation_finished: 	pDummy_monitor_10000:1034:stderr [ -- empty -- ]
Nov 13 13:52:38 [466] vm3       lrmd: (services_lin:310   )   debug: operation_finished: 	pDummy_monitor_10000:1034:stdout [ -- empty -- ]
Nov 13 13:52:38 [466] vm3       lrmd: (      lrmd.c:104   )   debug: log_finished: 	finished - rsc:pDummy action:monitor call_id:11 pid:1034 exit-code:7 exec-time:0ms queue-time:0ms
Nov 13 13:52:48 [466] vm3       lrmd: (services_lin:217   )   debug: recurring_action_timer: 	Scheduling another invokation of pDummy_monitor_10000
Dummy(pDummy)[1042]:	2013/11/13_13:52:48 DEBUG: pDummy monitor : 7
Nov 13 13:52:48 [466] vm3       lrmd: (services_lin:301   )   debug: operation_finished: 	pDummy_monitor_10000:1042 - exited with rc=7
Nov 13 13:52:48 [466] vm3       lrmd: (services_lin:306   )   debug: operation_finished: 	pDummy_monitor_10000:1042:stderr [ -- empty -- ]
Nov 13 13:52:48 [466] vm3       lrmd: (services_lin:310   )   debug: operation_finished: 	pDummy_monitor_10000:1042:stdout [ -- empty -- ]
Nov 13 13:52:48 [466] vm3       lrmd: (      lrmd.c:104   )   debug: log_finished: 	finished - rsc:pDummy action:monitor call_id:11 pid:1042 exit-code:7 exec-time:0ms queue-time:0ms
Nov 13 13:52:52 [469] vm3       crmd: (  throttle.c:256   )   debug: throttle_cib_load: 	Current CIB load from 1 ticks in 30s is 0.000333 (@100 tps)
Nov 13 13:52:52 [469] vm3       crmd: (  throttle.c:298   )   debug: throttle_load_avg: 	Current load is 0.000000 (full: 0.00 0.00 0.00 2/108 1049)
Nov 13 13:52:52 [469] vm3       crmd: (  throttle.c:378   )   debug: throttle_io_load: 	Current IO load is 0.000000
Nov 13 13:52:58 [466] vm3       lrmd: (services_lin:217   )   debug: recurring_action_timer: 	Scheduling another invokation of pDummy_monitor_10000
Dummy(pDummy)[1050]:	2013/11/13_13:52:58 DEBUG: pDummy monitor : 7
Nov 13 13:52:58 [466] vm3       lrmd: (services_lin:301   )   debug: operation_finished: 	pDummy_monitor_10000:1050 - exited with rc=7
Nov 13 13:52:58 [466] vm3       lrmd: (services_lin:306   )   debug: operation_finished: 	pDummy_monitor_10000:1050:stderr [ -- empty -- ]
Nov 13 13:52:58 [466] vm3       lrmd: (services_lin:310   )   debug: operation_finished: 	pDummy_monitor_10000:1050:stdout [ -- empty -- ]
Nov 13 13:52:58 [466] vm3       lrmd: (      lrmd.c:104   )   debug: log_finished: 	finished - rsc:pDummy action:monitor call_id:11 pid:1050 exit-code:7 exec-time:0ms queue-time:0ms
Nov 13 13:53:08 [466] vm3       lrmd: (services_lin:217   )   debug: recurring_action_timer: 	Scheduling another invokation of pDummy_monitor_10000
Dummy(pDummy)[1058]:	2013/11/13_13:53:08 DEBUG: pDummy monitor : 7
Nov 13 13:53:08 [466] vm3       lrmd: (services_lin:301   )   debug: operation_finished: 	pDummy_monitor_10000:1058 - exited with rc=7
Nov 13 13:53:08 [466] vm3       lrmd: (services_lin:306   )   debug: operation_finished: 	pDummy_monitor_10000:1058:stderr [ -- empty -- ]
Nov 13 13:53:08 [466] vm3       lrmd: (services_lin:310   )   debug: operation_finished: 	pDummy_monitor_10000:1058:stdout [ -- empty -- ]
Nov 13 13:53:08 [466] vm3       lrmd: (      lrmd.c:104   )   debug: log_finished: 	finished - rsc:pDummy action:monitor call_id:11 pid:1058 exit-code:7 exec-time:0ms queue-time:0ms
Nov 13 13:53:18 [466] vm3       lrmd: (services_lin:217   )   debug: recurring_action_timer: 	Scheduling another invokation of pDummy_monitor_10000
Dummy(pDummy)[1069]:	2013/11/13_13:53:18 DEBUG: pDummy monitor : 7
Nov 13 13:53:18 [466] vm3       lrmd: (services_lin:301   )   debug: operation_finished: 	pDummy_monitor_10000:1069 - exited with rc=7
Nov 13 13:53:18 [466] vm3       lrmd: (services_lin:306   )   debug: operation_finished: 	pDummy_monitor_10000:1069:stderr [ -- empty -- ]
Nov 13 13:53:18 [466] vm3       lrmd: (services_lin:310   )   debug: operation_finished: 	pDummy_monitor_10000:1069:stdout [ -- empty -- ]
Nov 13 13:53:18 [466] vm3       lrmd: (      lrmd.c:104   )   debug: log_finished: 	finished - rsc:pDummy action:monitor call_id:11 pid:1069 exit-code:7 exec-time:0ms queue-time:0ms
Nov 13 13:53:22 [469] vm3       crmd: (  throttle.c:256   )   debug: throttle_cib_load: 	Current CIB load from 0 ticks in 30s is 0.000000 (@100 tps)
Nov 13 13:53:22 [469] vm3       crmd: (  throttle.c:298   )   debug: throttle_load_avg: 	Current load is 0.000000 (full: 0.00 0.00 0.00 1/108 1076)
Nov 13 13:53:22 [469] vm3       crmd: (  throttle.c:378   )   debug: throttle_io_load: 	Current IO load is 0.000000
Nov 13 13:53:28 [466] vm3       lrmd: (services_lin:217   )   debug: recurring_action_timer: 	Scheduling another invokation of pDummy_monitor_10000
Dummy(pDummy)[1077]:	2013/11/13_13:53:28 DEBUG: pDummy monitor : 7
Nov 13 13:53:28 [466] vm3       lrmd: (services_lin:301   )   debug: operation_finished: 	pDummy_monitor_10000:1077 - exited with rc=7
Nov 13 13:53:28 [466] vm3       lrmd: (services_lin:306   )   debug: operation_finished: 	pDummy_monitor_10000:1077:stderr [ -- empty -- ]
Nov 13 13:53:28 [466] vm3       lrmd: (services_lin:310   )   debug: operation_finished: 	pDummy_monitor_10000:1077:stdout [ -- empty -- ]
Nov 13 13:53:28 [466] vm3       lrmd: (      lrmd.c:104   )   debug: log_finished: 	finished - rsc:pDummy action:monitor call_id:11 pid:1077 exit-code:7 exec-time:0ms queue-time:0ms
Nov 13 13:53:38 [466] vm3       lrmd: (services_lin:217   )   debug: recurring_action_timer: 	Scheduling another invokation of pDummy_monitor_10000
Dummy(pDummy)[1091]:	2013/11/13_13:53:38 DEBUG: pDummy monitor : 7
Nov 13 13:53:38 [466] vm3       lrmd: (services_lin:301   )   debug: operation_finished: 	pDummy_monitor_10000:1091 - exited with rc=7
Nov 13 13:53:38 [466] vm3       lrmd: (services_lin:306   )   debug: operation_finished: 	pDummy_monitor_10000:1091:stderr [ -- empty -- ]
Nov 13 13:53:38 [466] vm3       lrmd: (services_lin:310   )   debug: operation_finished: 	pDummy_monitor_10000:1091:stdout [ -- empty -- ]
Nov 13 13:53:38 [466] vm3       lrmd: (      lrmd.c:104   )   debug: log_finished: 	finished - rsc:pDummy action:monitor call_id:11 pid:1091 exit-code:7 exec-time:0ms queue-time:0ms
Nov 13 13:53:48 [466] vm3       lrmd: (services_lin:217   )   debug: recurring_action_timer: 	Scheduling another invokation of pDummy_monitor_10000
Dummy(pDummy)[1099]:	2013/11/13_13:53:48 DEBUG: pDummy monitor : 7
Nov 13 13:53:48 [466] vm3       lrmd: (services_lin:301   )   debug: operation_finished: 	pDummy_monitor_10000:1099 - exited with rc=7
Nov 13 13:53:48 [466] vm3       lrmd: (services_lin:306   )   debug: operation_finished: 	pDummy_monitor_10000:1099:stderr [ -- empty -- ]
Nov 13 13:53:48 [466] vm3       lrmd: (services_lin:310   )   debug: operation_finished: 	pDummy_monitor_10000:1099:stdout [ -- empty -- ]
Nov 13 13:53:48 [466] vm3       lrmd: (      lrmd.c:104   )   debug: log_finished: 	finished - rsc:pDummy action:monitor call_id:11 pid:1099 exit-code:7 exec-time:0ms queue-time:0ms
Nov 13 13:53:52 [469] vm3       crmd: (  throttle.c:256   )   debug: throttle_cib_load: 	Current CIB load from 1 ticks in 30s is 0.000333 (@100 tps)
Nov 13 13:53:52 [469] vm3       crmd: (  throttle.c:298   )   debug: throttle_load_avg: 	Current load is 0.000000 (full: 0.00 0.00 0.00 1/108 1108)
Nov 13 13:53:52 [469] vm3       crmd: (  throttle.c:378   )   debug: throttle_io_load: 	Current IO load is 0.000000
Nov 13 13:53:58 [466] vm3       lrmd: (services_lin:217   )   debug: recurring_action_timer: 	Scheduling another invokation of pDummy_monitor_10000
Dummy(pDummy)[1109]:	2013/11/13_13:53:58 DEBUG: pDummy monitor : 7
Nov 13 13:53:58 [466] vm3       lrmd: (services_lin:301   )   debug: operation_finished: 	pDummy_monitor_10000:1109 - exited with rc=7
Nov 13 13:53:58 [466] vm3       lrmd: (services_lin:306   )   debug: operation_finished: 	pDummy_monitor_10000:1109:stderr [ -- empty -- ]
Nov 13 13:53:58 [466] vm3       lrmd: (services_lin:310   )   debug: operation_finished: 	pDummy_monitor_10000:1109:stdout [ -- empty -- ]
Nov 13 13:53:58 [466] vm3       lrmd: (      lrmd.c:104   )   debug: log_finished: 	finished - rsc:pDummy action:monitor call_id:11 pid:1109 exit-code:7 exec-time:0ms queue-time:0ms
Nov 13 13:54:08 [466] vm3       lrmd: (services_lin:217   )   debug: recurring_action_timer: 	Scheduling another invokation of pDummy_monitor_10000
Dummy(pDummy)[1117]:	2013/11/13_13:54:08 DEBUG: pDummy monitor : 7
Nov 13 13:54:08 [466] vm3       lrmd: (services_lin:301   )   debug: operation_finished: 	pDummy_monitor_10000:1117 - exited with rc=7
Nov 13 13:54:08 [466] vm3       lrmd: (services_lin:306   )   debug: operation_finished: 	pDummy_monitor_10000:1117:stderr [ -- empty -- ]
Nov 13 13:54:08 [466] vm3       lrmd: (services_lin:310   )   debug: operation_finished: 	pDummy_monitor_10000:1117:stdout [ -- empty -- ]
Nov 13 13:54:08 [466] vm3       lrmd: (      lrmd.c:104   )   debug: log_finished: 	finished - rsc:pDummy action:monitor call_id:11 pid:1117 exit-code:7 exec-time:0ms queue-time:0ms
Nov 13 13:54:18 [466] vm3       lrmd: (services_lin:217   )   debug: recurring_action_timer: 	Scheduling another invokation of pDummy_monitor_10000
Dummy(pDummy)[1125]:	2013/11/13_13:54:18 DEBUG: pDummy monitor : 7
Nov 13 13:54:18 [466] vm3       lrmd: (services_lin:301   )   debug: operation_finished: 	pDummy_monitor_10000:1125 - exited with rc=7
Nov 13 13:54:18 [466] vm3       lrmd: (services_lin:306   )   debug: operation_finished: 	pDummy_monitor_10000:1125:stderr [ -- empty -- ]
Nov 13 13:54:18 [466] vm3       lrmd: (services_lin:310   )   debug: operation_finished: 	pDummy_monitor_10000:1125:stdout [ -- empty -- ]
Nov 13 13:54:18 [466] vm3       lrmd: (      lrmd.c:104   )   debug: log_finished: 	finished - rsc:pDummy action:monitor call_id:11 pid:1125 exit-code:7 exec-time:0ms queue-time:0ms
Nov 13 13:54:22 [469] vm3       crmd: (  throttle.c:256   )   debug: throttle_cib_load: 	Current CIB load from 0 ticks in 30s is 0.000000 (@100 tps)
Nov 13 13:54:22 [469] vm3       crmd: (  throttle.c:298   )   debug: throttle_load_avg: 	Current load is 0.000000 (full: 0.00 0.00 0.00 1/108 1132)
Nov 13 13:54:22 [469] vm3       crmd: (  throttle.c:378   )   debug: throttle_io_load: 	Current IO load is 0.000000
Nov 13 13:54:28 [466] vm3       lrmd: (services_lin:217   )   debug: recurring_action_timer: 	Scheduling another invokation of pDummy_monitor_10000
Dummy(pDummy)[1133]:	2013/11/13_13:54:28 DEBUG: pDummy monitor : 7
Nov 13 13:54:28 [466] vm3       lrmd: (services_lin:301   )   debug: operation_finished: 	pDummy_monitor_10000:1133 - exited with rc=7
Nov 13 13:54:28 [466] vm3       lrmd: (services_lin:306   )   debug: operation_finished: 	pDummy_monitor_10000:1133:stderr [ -- empty -- ]
Nov 13 13:54:28 [466] vm3       lrmd: (services_lin:310   )   debug: operation_finished: 	pDummy_monitor_10000:1133:stdout [ -- empty -- ]
Nov 13 13:54:28 [466] vm3       lrmd: (      lrmd.c:104   )   debug: log_finished: 	finished - rsc:pDummy action:monitor call_id:11 pid:1133 exit-code:7 exec-time:0ms queue-time:0ms
Nov 13 13:54:38 [466] vm3       lrmd: (services_lin:217   )   debug: recurring_action_timer: 	Scheduling another invokation of pDummy_monitor_10000
Dummy(pDummy)[1148]:	2013/11/13_13:54:38 DEBUG: pDummy monitor : 7
Nov 13 13:54:38 [466] vm3       lrmd: (services_lin:301   )   debug: operation_finished: 	pDummy_monitor_10000:1148 - exited with rc=7
Nov 13 13:54:38 [466] vm3       lrmd: (services_lin:306   )   debug: operation_finished: 	pDummy_monitor_10000:1148:stderr [ -- empty -- ]
Nov 13 13:54:38 [466] vm3       lrmd: (services_lin:310   )   debug: operation_finished: 	pDummy_monitor_10000:1148:stdout [ -- empty -- ]
Nov 13 13:54:38 [466] vm3       lrmd: (      lrmd.c:104   )   debug: log_finished: 	finished - rsc:pDummy action:monitor call_id:11 pid:1148 exit-code:7 exec-time:0ms queue-time:0ms
Nov 13 13:54:48 [466] vm3       lrmd: (services_lin:217   )   debug: recurring_action_timer: 	Scheduling another invokation of pDummy_monitor_10000
Dummy(pDummy)[1156]:	2013/11/13_13:54:48 DEBUG: pDummy monitor : 7
Nov 13 13:54:48 [466] vm3       lrmd: (services_lin:301   )   debug: operation_finished: 	pDummy_monitor_10000:1156 - exited with rc=7
Nov 13 13:54:48 [466] vm3       lrmd: (services_lin:306   )   debug: operation_finished: 	pDummy_monitor_10000:1156:stderr [ -- empty -- ]
Nov 13 13:54:48 [466] vm3       lrmd: (services_lin:310   )   debug: operation_finished: 	pDummy_monitor_10000:1156:stdout [ -- empty -- ]
Nov 13 13:54:48 [466] vm3       lrmd: (      lrmd.c:104   )   debug: log_finished: 	finished - rsc:pDummy action:monitor call_id:11 pid:1156 exit-code:7 exec-time:0ms queue-time:0ms
Nov 13 13:54:52 [469] vm3       crmd: (  throttle.c:256   )   debug: throttle_cib_load: 	Current CIB load from 0 ticks in 30s is 0.000000 (@100 tps)
Nov 13 13:54:52 [469] vm3       crmd: (  throttle.c:298   )   debug: throttle_load_avg: 	Current load is 0.000000 (full: 0.00 0.00 0.00 2/108 1163)
Nov 13 13:54:52 [469] vm3       crmd: (  throttle.c:378   )   debug: throttle_io_load: 	Current IO load is 0.000000
Nov 13 13:54:58 [466] vm3       lrmd: (services_lin:217   )   debug: recurring_action_timer: 	Scheduling another invokation of pDummy_monitor_10000
Dummy(pDummy)[1164]:	2013/11/13_13:54:58 DEBUG: pDummy monitor : 7
Nov 13 13:54:58 [466] vm3       lrmd: (services_lin:301   )   debug: operation_finished: 	pDummy_monitor_10000:1164 - exited with rc=7
Nov 13 13:54:58 [466] vm3       lrmd: (services_lin:306   )   debug: operation_finished: 	pDummy_monitor_10000:1164:stderr [ -- empty -- ]
Nov 13 13:54:58 [466] vm3       lrmd: (services_lin:310   )   debug: operation_finished: 	pDummy_monitor_10000:1164:stdout [ -- empty -- ]
Nov 13 13:54:58 [466] vm3       lrmd: (      lrmd.c:104   )   debug: log_finished: 	finished - rsc:pDummy action:monitor call_id:11 pid:1164 exit-code:7 exec-time:0ms queue-time:0ms
Nov 13 13:55:08 [466] vm3       lrmd: (services_lin:217   )   debug: recurring_action_timer: 	Scheduling another invokation of pDummy_monitor_10000
Dummy(pDummy)[1173]:	2013/11/13_13:55:08 DEBUG: pDummy monitor : 7
Nov 13 13:55:08 [466] vm3       lrmd: (services_lin:301   )   debug: operation_finished: 	pDummy_monitor_10000:1173 - exited with rc=7
Nov 13 13:55:08 [466] vm3       lrmd: (services_lin:306   )   debug: operation_finished: 	pDummy_monitor_10000:1173:stderr [ -- empty -- ]
Nov 13 13:55:08 [466] vm3       lrmd: (services_lin:310   )   debug: operation_finished: 	pDummy_monitor_10000:1173:stdout [ -- empty -- ]
Nov 13 13:55:08 [466] vm3       lrmd: (      lrmd.c:104   )   debug: log_finished: 	finished - rsc:pDummy action:monitor call_id:11 pid:1173 exit-code:7 exec-time:0ms queue-time:0ms
Nov 13 13:55:18 [466] vm3       lrmd: (services_lin:217   )   debug: recurring_action_timer: 	Scheduling another invokation of pDummy_monitor_10000
Dummy(pDummy)[1181]:	2013/11/13_13:55:18 DEBUG: pDummy monitor : 7
Nov 13 13:55:18 [466] vm3       lrmd: (services_lin:301   )   debug: operation_finished: 	pDummy_monitor_10000:1181 - exited with rc=7
Nov 13 13:55:18 [466] vm3       lrmd: (services_lin:306   )   debug: operation_finished: 	pDummy_monitor_10000:1181:stderr [ -- empty -- ]
Nov 13 13:55:18 [466] vm3       lrmd: (services_lin:310   )   debug: operation_finished: 	pDummy_monitor_10000:1181:stdout [ -- empty -- ]
Nov 13 13:55:18 [466] vm3       lrmd: (      lrmd.c:104   )   debug: log_finished: 	finished - rsc:pDummy action:monitor call_id:11 pid:1181 exit-code:7 exec-time:0ms queue-time:0ms
Nov 13 13:55:22 [469] vm3       crmd: (  throttle.c:256   )   debug: throttle_cib_load: 	Current CIB load from 0 ticks in 30s is 0.000000 (@100 tps)
Nov 13 13:55:22 [469] vm3       crmd: (  throttle.c:298   )   debug: throttle_load_avg: 	Current load is 0.000000 (full: 0.00 0.00 0.00 1/108 1188)
Nov 13 13:55:22 [469] vm3       crmd: (  throttle.c:378   )   debug: throttle_io_load: 	Current IO load is 0.000000
Nov 13 13:55:28 [466] vm3       lrmd: (services_lin:217   )   debug: recurring_action_timer: 	Scheduling another invokation of pDummy_monitor_10000
Dummy(pDummy)[1189]:	2013/11/13_13:55:28 DEBUG: pDummy monitor : 7
Nov 13 13:55:28 [466] vm3       lrmd: (services_lin:301   )   debug: operation_finished: 	pDummy_monitor_10000:1189 - exited with rc=7
Nov 13 13:55:28 [466] vm3       lrmd: (services_lin:306   )   debug: operation_finished: 	pDummy_monitor_10000:1189:stderr [ -- empty -- ]
Nov 13 13:55:28 [466] vm3       lrmd: (services_lin:310   )   debug: operation_finished: 	pDummy_monitor_10000:1189:stdout [ -- empty -- ]
Nov 13 13:55:28 [466] vm3       lrmd: (      lrmd.c:104   )   debug: log_finished: 	finished - rsc:pDummy action:monitor call_id:11 pid:1189 exit-code:7 exec-time:0ms queue-time:0ms
Nov 13 13:55:38 [466] vm3       lrmd: (services_lin:217   )   debug: recurring_action_timer: 	Scheduling another invokation of pDummy_monitor_10000
Dummy(pDummy)[1203]:	2013/11/13_13:55:39 DEBUG: pDummy monitor : 7
Nov 13 13:55:39 [466] vm3       lrmd: (services_lin:301   )   debug: operation_finished: 	pDummy_monitor_10000:1203 - exited with rc=7
Nov 13 13:55:39 [466] vm3       lrmd: (services_lin:306   )   debug: operation_finished: 	pDummy_monitor_10000:1203:stderr [ -- empty -- ]
Nov 13 13:55:39 [466] vm3       lrmd: (services_lin:310   )   debug: operation_finished: 	pDummy_monitor_10000:1203:stdout [ -- empty -- ]
Nov 13 13:55:39 [466] vm3       lrmd: (      lrmd.c:104   )   debug: log_finished: 	finished - rsc:pDummy action:monitor call_id:11 pid:1203 exit-code:7 exec-time:0ms queue-time:0ms
Nov 13 13:55:49 [466] vm3       lrmd: (services_lin:217   )   debug: recurring_action_timer: 	Scheduling another invokation of pDummy_monitor_10000
Dummy(pDummy)[1212]:	2013/11/13_13:55:49 DEBUG: pDummy monitor : 7
Nov 13 13:55:49 [466] vm3       lrmd: (services_lin:301   )   debug: operation_finished: 	pDummy_monitor_10000:1212 - exited with rc=7
Nov 13 13:55:49 [466] vm3       lrmd: (services_lin:306   )   debug: operation_finished: 	pDummy_monitor_10000:1212:stderr [ -- empty -- ]
Nov 13 13:55:49 [466] vm3       lrmd: (services_lin:310   )   debug: operation_finished: 	pDummy_monitor_10000:1212:stdout [ -- empty -- ]
Nov 13 13:55:49 [466] vm3       lrmd: (      lrmd.c:104   )   debug: log_finished: 	finished - rsc:pDummy action:monitor call_id:11 pid:1212 exit-code:7 exec-time:0ms queue-time:0ms
Nov 13 13:55:52 [469] vm3       crmd: (  throttle.c:256   )   debug: throttle_cib_load: 	Current CIB load from 0 ticks in 30s is 0.000000 (@100 tps)
Nov 13 13:55:52 [469] vm3       crmd: (  throttle.c:298   )   debug: throttle_load_avg: 	Current load is 0.000000 (full: 0.00 0.00 0.00 2/108 1219)
Nov 13 13:55:52 [469] vm3       crmd: (  throttle.c:378   )   debug: throttle_io_load: 	Current IO load is 0.000000
Nov 13 13:55:59 [466] vm3       lrmd: (services_lin:217   )   debug: recurring_action_timer: 	Scheduling another invokation of pDummy_monitor_10000
Dummy(pDummy)[1220]:	2013/11/13_13:55:59 DEBUG: pDummy monitor : 7
Nov 13 13:55:59 [466] vm3       lrmd: (services_lin:301   )   debug: operation_finished: 	pDummy_monitor_10000:1220 - exited with rc=7
Nov 13 13:55:59 [466] vm3       lrmd: (services_lin:306   )   debug: operation_finished: 	pDummy_monitor_10000:1220:stderr [ -- empty -- ]
Nov 13 13:55:59 [466] vm3       lrmd: (services_lin:310   )   debug: operation_finished: 	pDummy_monitor_10000:1220:stdout [ -- empty -- ]
Nov 13 13:55:59 [466] vm3       lrmd: (      lrmd.c:104   )   debug: log_finished: 	finished - rsc:pDummy action:monitor call_id:11 pid:1220 exit-code:7 exec-time:0ms queue-time:0ms
Nov 13 13:56:09 [466] vm3       lrmd: (services_lin:217   )   debug: recurring_action_timer: 	Scheduling another invokation of pDummy_monitor_10000
Dummy(pDummy)[1228]:	2013/11/13_13:56:09 DEBUG: pDummy monitor : 7
Nov 13 13:56:09 [466] vm3       lrmd: (services_lin:301   )   debug: operation_finished: 	pDummy_monitor_10000:1228 - exited with rc=7
Nov 13 13:56:09 [466] vm3       lrmd: (services_lin:306   )   debug: operation_finished: 	pDummy_monitor_10000:1228:stderr [ -- empty -- ]
Nov 13 13:56:09 [466] vm3       lrmd: (services_lin:310   )   debug: operation_finished: 	pDummy_monitor_10000:1228:stdout [ -- empty -- ]
Nov 13 13:56:09 [466] vm3       lrmd: (      lrmd.c:104   )   debug: log_finished: 	finished - rsc:pDummy action:monitor call_id:11 pid:1228 exit-code:7 exec-time:0ms queue-time:0ms
Nov 13 13:56:19 [466] vm3       lrmd: (services_lin:217   )   debug: recurring_action_timer: 	Scheduling another invokation of pDummy_monitor_10000
Dummy(pDummy)[1236]:	2013/11/13_13:56:19 DEBUG: pDummy monitor : 7
Nov 13 13:56:19 [466] vm3       lrmd: (services_lin:301   )   debug: operation_finished: 	pDummy_monitor_10000:1236 - exited with rc=7
Nov 13 13:56:19 [466] vm3       lrmd: (services_lin:306   )   debug: operation_finished: 	pDummy_monitor_10000:1236:stderr [ -- empty -- ]
Nov 13 13:56:19 [466] vm3       lrmd: (services_lin:310   )   debug: operation_finished: 	pDummy_monitor_10000:1236:stdout [ -- empty -- ]
Nov 13 13:56:19 [466] vm3       lrmd: (      lrmd.c:104   )   debug: log_finished: 	finished - rsc:pDummy action:monitor call_id:11 pid:1236 exit-code:7 exec-time:0ms queue-time:0ms
Nov 13 13:56:22 [469] vm3       crmd: (  throttle.c:256   )   debug: throttle_cib_load: 	Current CIB load from 0 ticks in 30s is 0.000000 (@100 tps)
Nov 13 13:56:22 [469] vm3       crmd: (  throttle.c:298   )   debug: throttle_load_avg: 	Current load is 0.000000 (full: 0.00 0.00 0.00 1/108 1244)
Nov 13 13:56:22 [469] vm3       crmd: (  throttle.c:378   )   debug: throttle_io_load: 	Current IO load is 0.000000
Nov 13 13:56:29 [466] vm3       lrmd: (services_lin:217   )   debug: recurring_action_timer: 	Scheduling another invokation of pDummy_monitor_10000
Dummy(pDummy)[1245]:	2013/11/13_13:56:29 DEBUG: pDummy monitor : 7
Nov 13 13:56:29 [466] vm3       lrmd: (services_lin:301   )   debug: operation_finished: 	pDummy_monitor_10000:1245 - exited with rc=7
Nov 13 13:56:29 [466] vm3       lrmd: (services_lin:306   )   debug: operation_finished: 	pDummy_monitor_10000:1245:stderr [ -- empty -- ]
Nov 13 13:56:29 [466] vm3       lrmd: (services_lin:310   )   debug: operation_finished: 	pDummy_monitor_10000:1245:stdout [ -- empty -- ]
Nov 13 13:56:29 [466] vm3       lrmd: (      lrmd.c:104   )   debug: log_finished: 	finished - rsc:pDummy action:monitor call_id:11 pid:1245 exit-code:7 exec-time:0ms queue-time:0ms
Nov 13 13:56:39 [466] vm3       lrmd: (services_lin:217   )   debug: recurring_action_timer: 	Scheduling another invokation of pDummy_monitor_10000
Dummy(pDummy)[1259]:	2013/11/13_13:56:39 DEBUG: pDummy monitor : 7
Nov 13 13:56:39 [466] vm3       lrmd: (services_lin:301   )   debug: operation_finished: 	pDummy_monitor_10000:1259 - exited with rc=7
Nov 13 13:56:39 [466] vm3       lrmd: (services_lin:306   )   debug: operation_finished: 	pDummy_monitor_10000:1259:stderr [ -- empty -- ]
Nov 13 13:56:39 [466] vm3       lrmd: (services_lin:310   )   debug: operation_finished: 	pDummy_monitor_10000:1259:stdout [ -- empty -- ]
Nov 13 13:56:39 [466] vm3       lrmd: (      lrmd.c:104   )   debug: log_finished: 	finished - rsc:pDummy action:monitor call_id:11 pid:1259 exit-code:7 exec-time:0ms queue-time:0ms
Nov 13 13:56:49 [466] vm3       lrmd: (services_lin:217   )   debug: recurring_action_timer: 	Scheduling another invokation of pDummy_monitor_10000
Dummy(pDummy)[1267]:	2013/11/13_13:56:49 DEBUG: pDummy monitor : 7
Nov 13 13:56:49 [466] vm3       lrmd: (services_lin:301   )   debug: operation_finished: 	pDummy_monitor_10000:1267 - exited with rc=7
Nov 13 13:56:49 [466] vm3       lrmd: (services_lin:306   )   debug: operation_finished: 	pDummy_monitor_10000:1267:stderr [ -- empty -- ]
Nov 13 13:56:49 [466] vm3       lrmd: (services_lin:310   )   debug: operation_finished: 	pDummy_monitor_10000:1267:stdout [ -- empty -- ]
Nov 13 13:56:49 [466] vm3       lrmd: (      lrmd.c:104   )   debug: log_finished: 	finished - rsc:pDummy action:monitor call_id:11 pid:1267 exit-code:7 exec-time:0ms queue-time:0ms
Nov 13 13:56:52 [469] vm3       crmd: (  throttle.c:256   )   debug: throttle_cib_load: 	Current CIB load from 0 ticks in 30s is 0.000000 (@100 tps)
Nov 13 13:56:52 [469] vm3       crmd: (  throttle.c:298   )   debug: throttle_load_avg: 	Current load is 0.000000 (full: 0.00 0.00 0.00 2/108 1274)
Nov 13 13:56:52 [469] vm3       crmd: (  throttle.c:378   )   debug: throttle_io_load: 	Current IO load is 0.000000
Nov 13 13:56:59 [466] vm3       lrmd: (services_lin:217   )   debug: recurring_action_timer: 	Scheduling another invokation of pDummy_monitor_10000
Dummy(pDummy)[1275]:	2013/11/13_13:56:59 DEBUG: pDummy monitor : 7
Nov 13 13:56:59 [466] vm3       lrmd: (services_lin:301   )   debug: operation_finished: 	pDummy_monitor_10000:1275 - exited with rc=7
Nov 13 13:56:59 [466] vm3       lrmd: (services_lin:306   )   debug: operation_finished: 	pDummy_monitor_10000:1275:stderr [ -- empty -- ]
Nov 13 13:56:59 [466] vm3       lrmd: (services_lin:310   )   debug: operation_finished: 	pDummy_monitor_10000:1275:stdout [ -- empty -- ]
Nov 13 13:56:59 [466] vm3       lrmd: (      lrmd.c:104   )   debug: log_finished: 	finished - rsc:pDummy action:monitor call_id:11 pid:1275 exit-code:7 exec-time:0ms queue-time:0ms
Nov 13 13:57:09 [466] vm3       lrmd: (services_lin:217   )   debug: recurring_action_timer: 	Scheduling another invokation of pDummy_monitor_10000
Dummy(pDummy)[1284]:	2013/11/13_13:57:09 DEBUG: pDummy monitor : 7
Nov 13 13:57:09 [466] vm3       lrmd: (services_lin:301   )   debug: operation_finished: 	pDummy_monitor_10000:1284 - exited with rc=7
Nov 13 13:57:09 [466] vm3       lrmd: (services_lin:306   )   debug: operation_finished: 	pDummy_monitor_10000:1284:stderr [ -- empty -- ]
Nov 13 13:57:09 [466] vm3       lrmd: (services_lin:310   )   debug: operation_finished: 	pDummy_monitor_10000:1284:stdout [ -- empty -- ]
Nov 13 13:57:09 [466] vm3       lrmd: (      lrmd.c:104   )   debug: log_finished: 	finished - rsc:pDummy action:monitor call_id:11 pid:1284 exit-code:7 exec-time:0ms queue-time:0ms
Nov 13 13:57:19 [466] vm3       lrmd: (services_lin:217   )   debug: recurring_action_timer: 	Scheduling another invokation of pDummy_monitor_10000
Dummy(pDummy)[1293]:	2013/11/13_13:57:19 DEBUG: pDummy monitor : 7
Nov 13 13:57:19 [466] vm3       lrmd: (services_lin:301   )   debug: operation_finished: 	pDummy_monitor_10000:1293 - exited with rc=7
Nov 13 13:57:19 [466] vm3       lrmd: (services_lin:306   )   debug: operation_finished: 	pDummy_monitor_10000:1293:stderr [ -- empty -- ]
Nov 13 13:57:19 [466] vm3       lrmd: (services_lin:310   )   debug: operation_finished: 	pDummy_monitor_10000:1293:stdout [ -- empty -- ]
Nov 13 13:57:19 [466] vm3       lrmd: (      lrmd.c:104   )   debug: log_finished: 	finished - rsc:pDummy action:monitor call_id:11 pid:1293 exit-code:7 exec-time:0ms queue-time:0ms
Nov 13 13:57:22 [469] vm3       crmd: (  throttle.c:256   )   debug: throttle_cib_load: 	Current CIB load from 0 ticks in 30s is 0.000000 (@100 tps)
Nov 13 13:57:22 [469] vm3       crmd: (  throttle.c:298   )   debug: throttle_load_avg: 	Current load is 0.080000 (full: 0.08 0.02 0.01 1/108 1300)
Nov 13 13:57:22 [469] vm3       crmd: (  throttle.c:378   )   debug: throttle_io_load: 	Current IO load is 0.000000
Nov 13 13:57:29 [466] vm3       lrmd: (services_lin:217   )   debug: recurring_action_timer: 	Scheduling another invokation of pDummy_monitor_10000
Dummy(pDummy)[1301]:	2013/11/13_13:57:29 DEBUG: pDummy monitor : 7
Nov 13 13:57:29 [466] vm3       lrmd: (services_lin:301   )   debug: operation_finished: 	pDummy_monitor_10000:1301 - exited with rc=7
Nov 13 13:57:29 [466] vm3       lrmd: (services_lin:306   )   debug: operation_finished: 	pDummy_monitor_10000:1301:stderr [ -- empty -- ]
Nov 13 13:57:29 [466] vm3       lrmd: (services_lin:310   )   debug: operation_finished: 	pDummy_monitor_10000:1301:stdout [ -- empty -- ]
Nov 13 13:57:29 [466] vm3       lrmd: (      lrmd.c:104   )   debug: log_finished: 	finished - rsc:pDummy action:monitor call_id:11 pid:1301 exit-code:7 exec-time:0ms queue-time:0ms
Nov 13 13:57:39 [466] vm3       lrmd: (services_lin:217   )   debug: recurring_action_timer: 	Scheduling another invokation of pDummy_monitor_10000
Dummy(pDummy)[1316]:	2013/11/13_13:57:39 DEBUG: pDummy monitor : 7
Nov 13 13:57:39 [466] vm3       lrmd: (services_lin:301   )   debug: operation_finished: 	pDummy_monitor_10000:1316 - exited with rc=7
Nov 13 13:57:39 [466] vm3       lrmd: (services_lin:306   )   debug: operation_finished: 	pDummy_monitor_10000:1316:stderr [ -- empty -- ]
Nov 13 13:57:39 [466] vm3       lrmd: (services_lin:310   )   debug: operation_finished: 	pDummy_monitor_10000:1316:stdout [ -- empty -- ]
Nov 13 13:57:39 [466] vm3       lrmd: (      lrmd.c:104   )   debug: log_finished: 	finished - rsc:pDummy action:monitor call_id:11 pid:1316 exit-code:7 exec-time:0ms queue-time:0ms
Nov 13 13:57:49 [466] vm3       lrmd: (services_lin:217   )   debug: recurring_action_timer: 	Scheduling another invokation of pDummy_monitor_10000
Dummy(pDummy)[1324]:	2013/11/13_13:57:49 DEBUG: pDummy monitor : 7
Nov 13 13:57:49 [466] vm3       lrmd: (services_lin:301   )   debug: operation_finished: 	pDummy_monitor_10000:1324 - exited with rc=7
Nov 13 13:57:49 [466] vm3       lrmd: (services_lin:306   )   debug: operation_finished: 	pDummy_monitor_10000:1324:stderr [ -- empty -- ]
Nov 13 13:57:49 [466] vm3       lrmd: (services_lin:310   )   debug: operation_finished: 	pDummy_monitor_10000:1324:stdout [ -- empty -- ]
Nov 13 13:57:49 [466] vm3       lrmd: (      lrmd.c:104   )   debug: log_finished: 	finished - rsc:pDummy action:monitor call_id:11 pid:1324 exit-code:7 exec-time:0ms queue-time:0ms
Nov 13 13:57:52 [469] vm3       crmd: (  throttle.c:256   )   debug: throttle_cib_load: 	Current CIB load from 0 ticks in 30s is 0.000000 (@100 tps)
Nov 13 13:57:52 [469] vm3       crmd: (  throttle.c:298   )   debug: throttle_load_avg: 	Current load is 0.050000 (full: 0.05 0.01 0.00 1/108 1331)
Nov 13 13:57:52 [469] vm3       crmd: (  throttle.c:378   )   debug: throttle_io_load: 	Current IO load is 0.000000
Nov 13 13:57:59 [466] vm3       lrmd: (services_lin:217   )   debug: recurring_action_timer: 	Scheduling another invokation of pDummy_monitor_10000
Dummy(pDummy)[1332]:	2013/11/13_13:57:59 DEBUG: pDummy monitor : 7
Nov 13 13:57:59 [466] vm3       lrmd: (services_lin:301   )   debug: operation_finished: 	pDummy_monitor_10000:1332 - exited with rc=7
Nov 13 13:57:59 [466] vm3       lrmd: (services_lin:306   )   debug: operation_finished: 	pDummy_monitor_10000:1332:stderr [ -- empty -- ]
Nov 13 13:57:59 [466] vm3       lrmd: (services_lin:310   )   debug: operation_finished: 	pDummy_monitor_10000:1332:stdout [ -- empty -- ]
Nov 13 13:57:59 [466] vm3       lrmd: (      lrmd.c:104   )   debug: log_finished: 	finished - rsc:pDummy action:monitor call_id:11 pid:1332 exit-code:7 exec-time:0ms queue-time:0ms
Nov 13 13:58:09 [466] vm3       lrmd: (services_lin:217   )   debug: recurring_action_timer: 	Scheduling another invokation of pDummy_monitor_10000
Dummy(pDummy)[1340]:	2013/11/13_13:58:09 DEBUG: pDummy monitor : 7
Nov 13 13:58:09 [466] vm3       lrmd: (services_lin:301   )   debug: operation_finished: 	pDummy_monitor_10000:1340 - exited with rc=7
Nov 13 13:58:09 [466] vm3       lrmd: (services_lin:306   )   debug: operation_finished: 	pDummy_monitor_10000:1340:stderr [ -- empty -- ]
Nov 13 13:58:09 [466] vm3       lrmd: (services_lin:310   )   debug: operation_finished: 	pDummy_monitor_10000:1340:stdout [ -- empty -- ]
Nov 13 13:58:09 [466] vm3       lrmd: (      lrmd.c:104   )   debug: log_finished: 	finished - rsc:pDummy action:monitor call_id:11 pid:1340 exit-code:7 exec-time:0ms queue-time:0ms
Nov 13 13:58:19 [466] vm3       lrmd: (services_lin:217   )   debug: recurring_action_timer: 	Scheduling another invokation of pDummy_monitor_10000
Dummy(pDummy)[1349]:	2013/11/13_13:58:19 DEBUG: pDummy monitor : 7
Nov 13 13:58:19 [466] vm3       lrmd: (services_lin:301   )   debug: operation_finished: 	pDummy_monitor_10000:1349 - exited with rc=7
Nov 13 13:58:19 [466] vm3       lrmd: (services_lin:306   )   debug: operation_finished: 	pDummy_monitor_10000:1349:stderr [ -- empty -- ]
Nov 13 13:58:19 [466] vm3       lrmd: (services_lin:310   )   debug: operation_finished: 	pDummy_monitor_10000:1349:stdout [ -- empty -- ]
Nov 13 13:58:19 [466] vm3       lrmd: (      lrmd.c:104   )   debug: log_finished: 	finished - rsc:pDummy action:monitor call_id:11 pid:1349 exit-code:7 exec-time:0ms queue-time:0ms
Nov 13 13:58:22 [469] vm3       crmd: (  throttle.c:256   )   debug: throttle_cib_load: 	Current CIB load from 0 ticks in 30s is 0.000000 (@100 tps)
Nov 13 13:58:22 [469] vm3       crmd: (  throttle.c:298   )   debug: throttle_load_avg: 	Current load is 0.030000 (full: 0.03 0.01 0.00 1/108 1356)
Nov 13 13:58:22 [469] vm3       crmd: (  throttle.c:378   )   debug: throttle_io_load: 	Current IO load is 0.000000
Nov 13 13:58:29 [466] vm3       lrmd: (services_lin:217   )   debug: recurring_action_timer: 	Scheduling another invokation of pDummy_monitor_10000
Dummy(pDummy)[1357]:	2013/11/13_13:58:29 DEBUG: pDummy monitor : 7
Nov 13 13:58:29 [466] vm3       lrmd: (services_lin:301   )   debug: operation_finished: 	pDummy_monitor_10000:1357 - exited with rc=7
Nov 13 13:58:29 [466] vm3       lrmd: (services_lin:306   )   debug: operation_finished: 	pDummy_monitor_10000:1357:stderr [ -- empty -- ]
Nov 13 13:58:29 [466] vm3       lrmd: (services_lin:310   )   debug: operation_finished: 	pDummy_monitor_10000:1357:stdout [ -- empty -- ]
Nov 13 13:58:29 [466] vm3       lrmd: (      lrmd.c:104   )   debug: log_finished: 	finished - rsc:pDummy action:monitor call_id:11 pid:1357 exit-code:7 exec-time:0ms queue-time:0ms
Nov 13 13:58:39 [466] vm3       lrmd: (services_lin:217   )   debug: recurring_action_timer: 	Scheduling another invokation of pDummy_monitor_10000
Dummy(pDummy)[1371]:	2013/11/13_13:58:39 DEBUG: pDummy monitor : 7
Nov 13 13:58:39 [466] vm3       lrmd: (services_lin:301   )   debug: operation_finished: 	pDummy_monitor_10000:1371 - exited with rc=7
Nov 13 13:58:39 [466] vm3       lrmd: (services_lin:306   )   debug: operation_finished: 	pDummy_monitor_10000:1371:stderr [ -- empty -- ]
Nov 13 13:58:39 [466] vm3       lrmd: (services_lin:310   )   debug: operation_finished: 	pDummy_monitor_10000:1371:stdout [ -- empty -- ]
Nov 13 13:58:39 [466] vm3       lrmd: (      lrmd.c:104   )   debug: log_finished: 	finished - rsc:pDummy action:monitor call_id:11 pid:1371 exit-code:7 exec-time:0ms queue-time:0ms
Nov 13 13:58:49 [466] vm3       lrmd: (services_lin:217   )   debug: recurring_action_timer: 	Scheduling another invokation of pDummy_monitor_10000
Dummy(pDummy)[1379]:	2013/11/13_13:58:49 DEBUG: pDummy monitor : 7
Nov 13 13:58:49 [466] vm3       lrmd: (services_lin:301   )   debug: operation_finished: 	pDummy_monitor_10000:1379 - exited with rc=7
Nov 13 13:58:49 [466] vm3       lrmd: (services_lin:306   )   debug: operation_finished: 	pDummy_monitor_10000:1379:stderr [ -- empty -- ]
Nov 13 13:58:49 [466] vm3       lrmd: (services_lin:310   )   debug: operation_finished: 	pDummy_monitor_10000:1379:stdout [ -- empty -- ]
Nov 13 13:58:49 [466] vm3       lrmd: (      lrmd.c:104   )   debug: log_finished: 	finished - rsc:pDummy action:monitor call_id:11 pid:1379 exit-code:7 exec-time:0ms queue-time:0ms
Nov 13 13:58:52 [469] vm3       crmd: (  throttle.c:256   )   debug: throttle_cib_load: 	Current CIB load from 1 ticks in 30s is 0.000333 (@100 tps)
Nov 13 13:58:52 [469] vm3       crmd: (  throttle.c:298   )   debug: throttle_load_avg: 	Current load is 0.020000 (full: 0.02 0.01 0.00 1/108 1387)
Nov 13 13:58:52 [469] vm3       crmd: (  throttle.c:378   )   debug: throttle_io_load: 	Current IO load is 0.000000
Nov 13 13:58:59 [466] vm3       lrmd: (services_lin:217   )   debug: recurring_action_timer: 	Scheduling another invokation of pDummy_monitor_10000
Dummy(pDummy)[1388]:	2013/11/13_13:58:59 DEBUG: pDummy monitor : 7
Nov 13 13:58:59 [466] vm3       lrmd: (services_lin:301   )   debug: operation_finished: 	pDummy_monitor_10000:1388 - exited with rc=7
Nov 13 13:58:59 [466] vm3       lrmd: (services_lin:306   )   debug: operation_finished: 	pDummy_monitor_10000:1388:stderr [ -- empty -- ]
Nov 13 13:58:59 [466] vm3       lrmd: (services_lin:310   )   debug: operation_finished: 	pDummy_monitor_10000:1388:stdout [ -- empty -- ]
Nov 13 13:58:59 [466] vm3       lrmd: (      lrmd.c:104   )   debug: log_finished: 	finished - rsc:pDummy action:monitor call_id:11 pid:1388 exit-code:7 exec-time:0ms queue-time:0ms
Nov 13 13:59:09 [466] vm3       lrmd: (services_lin:217   )   debug: recurring_action_timer: 	Scheduling another invokation of pDummy_monitor_10000
Dummy(pDummy)[1396]:	2013/11/13_13:59:09 DEBUG: pDummy monitor : 7
Nov 13 13:59:09 [466] vm3       lrmd: (services_lin:301   )   debug: operation_finished: 	pDummy_monitor_10000:1396 - exited with rc=7
Nov 13 13:59:09 [466] vm3       lrmd: (services_lin:306   )   debug: operation_finished: 	pDummy_monitor_10000:1396:stderr [ -- empty -- ]
Nov 13 13:59:09 [466] vm3       lrmd: (services_lin:310   )   debug: operation_finished: 	pDummy_monitor_10000:1396:stdout [ -- empty -- ]
Nov 13 13:59:09 [466] vm3       lrmd: (      lrmd.c:104   )   debug: log_finished: 	finished - rsc:pDummy action:monitor call_id:11 pid:1396 exit-code:7 exec-time:0ms queue-time:0ms
Nov 13 13:59:19 [466] vm3       lrmd: (services_lin:217   )   debug: recurring_action_timer: 	Scheduling another invokation of pDummy_monitor_10000
Dummy(pDummy)[1404]:	2013/11/13_13:59:20 DEBUG: pDummy monitor : 7
Nov 13 13:59:20 [466] vm3       lrmd: (services_lin:301   )   debug: operation_finished: 	pDummy_monitor_10000:1404 - exited with rc=7
Nov 13 13:59:20 [466] vm3       lrmd: (services_lin:306   )   debug: operation_finished: 	pDummy_monitor_10000:1404:stderr [ -- empty -- ]
Nov 13 13:59:20 [466] vm3       lrmd: (services_lin:310   )   debug: operation_finished: 	pDummy_monitor_10000:1404:stdout [ -- empty -- ]
Nov 13 13:59:20 [466] vm3       lrmd: (      lrmd.c:104   )   debug: log_finished: 	finished - rsc:pDummy action:monitor call_id:11 pid:1404 exit-code:7 exec-time:0ms queue-time:0ms
Nov 13 13:59:22 [469] vm3       crmd: (  throttle.c:256   )   debug: throttle_cib_load: 	Current CIB load from 0 ticks in 30s is 0.000000 (@100 tps)
Nov 13 13:59:22 [469] vm3       crmd: (  throttle.c:298   )   debug: throttle_load_avg: 	Current load is 0.010000 (full: 0.01 0.00 0.00 1/108 1411)
Nov 13 13:59:22 [469] vm3       crmd: (  throttle.c:378   )   debug: throttle_io_load: 	Current IO load is 0.000000
Nov 13 13:59:30 [466] vm3       lrmd: (services_lin:217   )   debug: recurring_action_timer: 	Scheduling another invokation of pDummy_monitor_10000
Dummy(pDummy)[1412]:	2013/11/13_13:59:30 DEBUG: pDummy monitor : 7
Nov 13 13:59:30 [466] vm3       lrmd: (services_lin:301   )   debug: operation_finished: 	pDummy_monitor_10000:1412 - exited with rc=7
Nov 13 13:59:30 [466] vm3       lrmd: (services_lin:306   )   debug: operation_finished: 	pDummy_monitor_10000:1412:stderr [ -- empty -- ]
Nov 13 13:59:30 [466] vm3       lrmd: (services_lin:310   )   debug: operation_finished: 	pDummy_monitor_10000:1412:stdout [ -- empty -- ]
Nov 13 13:59:30 [466] vm3       lrmd: (      lrmd.c:104   )   debug: log_finished: 	finished - rsc:pDummy action:monitor call_id:11 pid:1412 exit-code:7 exec-time:0ms queue-time:0ms
Nov 13 13:59:40 [466] vm3       lrmd: (services_lin:217   )   debug: recurring_action_timer: 	Scheduling another invokation of pDummy_monitor_10000
Dummy(pDummy)[1432]:	2013/11/13_13:59:40 DEBUG: pDummy monitor : 7
Nov 13 13:59:40 [466] vm3       lrmd: (services_lin:301   )   debug: operation_finished: 	pDummy_monitor_10000:1432 - exited with rc=7
Nov 13 13:59:40 [466] vm3       lrmd: (services_lin:306   )   debug: operation_finished: 	pDummy_monitor_10000:1432:stderr [ -- empty -- ]
Nov 13 13:59:40 [466] vm3       lrmd: (services_lin:310   )   debug: operation_finished: 	pDummy_monitor_10000:1432:stdout [ -- empty -- ]
Nov 13 13:59:40 [466] vm3       lrmd: (      lrmd.c:104   )   debug: log_finished: 	finished - rsc:pDummy action:monitor call_id:11 pid:1432 exit-code:7 exec-time:0ms queue-time:0ms
Nov 13 13:59:50 [466] vm3       lrmd: (services_lin:217   )   debug: recurring_action_timer: 	Scheduling another invokation of pDummy_monitor_10000
Dummy(pDummy)[1440]:	2013/11/13_13:59:50 DEBUG: pDummy monitor : 7
Nov 13 13:59:50 [466] vm3       lrmd: (services_lin:301   )   debug: operation_finished: 	pDummy_monitor_10000:1440 - exited with rc=7
Nov 13 13:59:50 [466] vm3       lrmd: (services_lin:306   )   debug: operation_finished: 	pDummy_monitor_10000:1440:stderr [ -- empty -- ]
Nov 13 13:59:50 [466] vm3       lrmd: (services_lin:310   )   debug: operation_finished: 	pDummy_monitor_10000:1440:stdout [ -- empty -- ]
Nov 13 13:59:50 [466] vm3       lrmd: (      lrmd.c:104   )   debug: log_finished: 	finished - rsc:pDummy action:monitor call_id:11 pid:1440 exit-code:7 exec-time:0ms queue-time:0ms
Nov 13 13:59:52 [469] vm3       crmd: (  throttle.c:256   )   debug: throttle_cib_load: 	Current CIB load from 0 ticks in 30s is 0.000000 (@100 tps)
Nov 13 13:59:52 [469] vm3       crmd: (  throttle.c:298   )   debug: throttle_load_avg: 	Current load is 0.000000 (full: 0.00 0.00 0.00 2/108 1447)
Nov 13 13:59:52 [469] vm3       crmd: (  throttle.c:378   )   debug: throttle_io_load: 	Current IO load is 0.000000
Nov 13 14:00:00 [466] vm3       lrmd: (services_lin:217   )   debug: recurring_action_timer: 	Scheduling another invokation of pDummy_monitor_10000
Dummy(pDummy)[1448]:	2013/11/13_14:00:00 DEBUG: pDummy monitor : 7
Nov 13 14:00:00 [466] vm3       lrmd: (services_lin:301   )   debug: operation_finished: 	pDummy_monitor_10000:1448 - exited with rc=7
Nov 13 14:00:00 [466] vm3       lrmd: (services_lin:306   )   debug: operation_finished: 	pDummy_monitor_10000:1448:stderr [ -- empty -- ]
Nov 13 14:00:00 [466] vm3       lrmd: (services_lin:310   )   debug: operation_finished: 	pDummy_monitor_10000:1448:stdout [ -- empty -- ]
Nov 13 14:00:00 [466] vm3       lrmd: (      lrmd.c:104   )   debug: log_finished: 	finished - rsc:pDummy action:monitor call_id:11 pid:1448 exit-code:7 exec-time:0ms queue-time:0ms
Nov 13 14:00:10 [466] vm3       lrmd: (services_lin:217   )   debug: recurring_action_timer: 	Scheduling another invokation of pDummy_monitor_10000
Dummy(pDummy)[1459]:	2013/11/13_14:00:10 DEBUG: pDummy monitor : 7
Nov 13 14:00:10 [466] vm3       lrmd: (services_lin:301   )   debug: operation_finished: 	pDummy_monitor_10000:1459 - exited with rc=7
Nov 13 14:00:10 [466] vm3       lrmd: (services_lin:306   )   debug: operation_finished: 	pDummy_monitor_10000:1459:stderr [ -- empty -- ]
Nov 13 14:00:10 [466] vm3       lrmd: (services_lin:310   )   debug: operation_finished: 	pDummy_monitor_10000:1459:stdout [ -- empty -- ]
Nov 13 14:00:10 [466] vm3       lrmd: (      lrmd.c:104   )   debug: log_finished: 	finished - rsc:pDummy action:monitor call_id:11 pid:1459 exit-code:7 exec-time:0ms queue-time:0ms
Nov 13 14:00:20 [466] vm3       lrmd: (services_lin:217   )   debug: recurring_action_timer: 	Scheduling another invokation of pDummy_monitor_10000
Dummy(pDummy)[1467]:	2013/11/13_14:00:20 DEBUG: pDummy monitor : 7
Nov 13 14:00:20 [466] vm3       lrmd: (services_lin:301   )   debug: operation_finished: 	pDummy_monitor_10000:1467 - exited with rc=7
Nov 13 14:00:20 [466] vm3       lrmd: (services_lin:306   )   debug: operation_finished: 	pDummy_monitor_10000:1467:stderr [ -- empty -- ]
Nov 13 14:00:20 [466] vm3       lrmd: (services_lin:310   )   debug: operation_finished: 	pDummy_monitor_10000:1467:stdout [ -- empty -- ]
Nov 13 14:00:20 [466] vm3       lrmd: (      lrmd.c:104   )   debug: log_finished: 	finished - rsc:pDummy action:monitor call_id:11 pid:1467 exit-code:7 exec-time:0ms queue-time:0ms
Nov 13 14:00:22 [469] vm3       crmd: (  throttle.c:256   )   debug: throttle_cib_load: 	Current CIB load from 0 ticks in 30s is 0.000000 (@100 tps)
Nov 13 14:00:22 [469] vm3       crmd: (  throttle.c:298   )   debug: throttle_load_avg: 	Current load is 0.000000 (full: 0.00 0.00 0.00 1/108 1475)
Nov 13 14:00:22 [469] vm3       crmd: (  throttle.c:378   )   debug: throttle_io_load: 	Current IO load is 0.000000
Nov 13 14:00:30 [466] vm3       lrmd: (services_lin:217   )   debug: recurring_action_timer: 	Scheduling another invokation of pDummy_monitor_10000
Dummy(pDummy)[1476]:	2013/11/13_14:00:30 DEBUG: pDummy monitor : 7
Nov 13 14:00:30 [466] vm3       lrmd: (services_lin:301   )   debug: operation_finished: 	pDummy_monitor_10000:1476 - exited with rc=7
Nov 13 14:00:30 [466] vm3       lrmd: (services_lin:306   )   debug: operation_finished: 	pDummy_monitor_10000:1476:stderr [ -- empty -- ]
Nov 13 14:00:30 [466] vm3       lrmd: (services_lin:310   )   debug: operation_finished: 	pDummy_monitor_10000:1476:stdout [ -- empty -- ]
Nov 13 14:00:30 [466] vm3       lrmd: (      lrmd.c:104   )   debug: log_finished: 	finished - rsc:pDummy action:monitor call_id:11 pid:1476 exit-code:7 exec-time:0ms queue-time:0ms
Nov 13 14:00:40 [466] vm3       lrmd: (services_lin:217   )   debug: recurring_action_timer: 	Scheduling another invokation of pDummy_monitor_10000
Dummy(pDummy)[1491]:	2013/11/13_14:00:40 DEBUG: pDummy monitor : 7
Nov 13 14:00:40 [466] vm3       lrmd: (services_lin:301   )   debug: operation_finished: 	pDummy_monitor_10000:1491 - exited with rc=7
Nov 13 14:00:40 [466] vm3       lrmd: (services_lin:306   )   debug: operation_finished: 	pDummy_monitor_10000:1491:stderr [ -- empty -- ]
Nov 13 14:00:40 [466] vm3       lrmd: (services_lin:310   )   debug: operation_finished: 	pDummy_monitor_10000:1491:stdout [ -- empty -- ]
Nov 13 14:00:40 [466] vm3       lrmd: (      lrmd.c:104   )   debug: log_finished: 	finished - rsc:pDummy action:monitor call_id:11 pid:1491 exit-code:7 exec-time:0ms queue-time:0ms
Nov 13 14:00:50 [466] vm3       lrmd: (services_lin:217   )   debug: recurring_action_timer: 	Scheduling another invokation of pDummy_monitor_10000
Dummy(pDummy)[1500]:	2013/11/13_14:00:50 DEBUG: pDummy monitor : 7
Nov 13 14:00:50 [466] vm3       lrmd: (services_lin:301   )   debug: operation_finished: 	pDummy_monitor_10000:1500 - exited with rc=7
Nov 13 14:00:50 [466] vm3       lrmd: (services_lin:306   )   debug: operation_finished: 	pDummy_monitor_10000:1500:stderr [ -- empty -- ]
Nov 13 14:00:50 [466] vm3       lrmd: (services_lin:310   )   debug: operation_finished: 	pDummy_monitor_10000:1500:stdout [ -- empty -- ]
Nov 13 14:00:50 [466] vm3       lrmd: (      lrmd.c:104   )   debug: log_finished: 	finished - rsc:pDummy action:monitor call_id:11 pid:1500 exit-code:7 exec-time:0ms queue-time:0ms
Nov 13 14:00:52 [469] vm3       crmd: (  throttle.c:256   )   debug: throttle_cib_load: 	Current CIB load from 0 ticks in 30s is 0.000000 (@100 tps)
Nov 13 14:00:52 [469] vm3       crmd: (  throttle.c:298   )   debug: throttle_load_avg: 	Current load is 0.000000 (full: 0.00 0.00 0.00 1/108 1508)
Nov 13 14:00:52 [469] vm3       crmd: (  throttle.c:378   )   debug: throttle_io_load: 	Current IO load is 0.000000
Nov 13 14:01:00 [466] vm3       lrmd: (services_lin:217   )   debug: recurring_action_timer: 	Scheduling another invokation of pDummy_monitor_10000
Dummy(pDummy)[1509]:	2013/11/13_14:01:00 DEBUG: pDummy monitor : 7
Nov 13 14:01:00 [466] vm3       lrmd: (services_lin:301   )   debug: operation_finished: 	pDummy_monitor_10000:1509 - exited with rc=7
Nov 13 14:01:00 [466] vm3       lrmd: (services_lin:306   )   debug: operation_finished: 	pDummy_monitor_10000:1509:stderr [ -- empty -- ]
Nov 13 14:01:00 [466] vm3       lrmd: (services_lin:310   )   debug: operation_finished: 	pDummy_monitor_10000:1509:stdout [ -- empty -- ]
Nov 13 14:01:00 [466] vm3       lrmd: (      lrmd.c:104   )   debug: log_finished: 	finished - rsc:pDummy action:monitor call_id:11 pid:1509 exit-code:7 exec-time:0ms queue-time:0ms
Nov 13 14:01:10 [466] vm3       lrmd: (services_lin:217   )   debug: recurring_action_timer: 	Scheduling another invokation of pDummy_monitor_10000
Dummy(pDummy)[1528]:	2013/11/13_14:01:10 DEBUG: pDummy monitor : 7
Nov 13 14:01:10 [466] vm3       lrmd: (services_lin:301   )   debug: operation_finished: 	pDummy_monitor_10000:1528 - exited with rc=7
Nov 13 14:01:10 [466] vm3       lrmd: (services_lin:306   )   debug: operation_finished: 	pDummy_monitor_10000:1528:stderr [ -- empty -- ]
Nov 13 14:01:10 [466] vm3       lrmd: (services_lin:310   )   debug: operation_finished: 	pDummy_monitor_10000:1528:stdout [ -- empty -- ]
Nov 13 14:01:10 [466] vm3       lrmd: (      lrmd.c:104   )   debug: log_finished: 	finished - rsc:pDummy action:monitor call_id:11 pid:1528 exit-code:7 exec-time:0ms queue-time:0ms
Nov 13 14:01:20 [466] vm3       lrmd: (services_lin:217   )   debug: recurring_action_timer: 	Scheduling another invokation of pDummy_monitor_10000
Dummy(pDummy)[1537]:	2013/11/13_14:01:20 DEBUG: pDummy monitor : 7
Nov 13 14:01:20 [466] vm3       lrmd: (services_lin:301   )   debug: operation_finished: 	pDummy_monitor_10000:1537 - exited with rc=7
Nov 13 14:01:20 [466] vm3       lrmd: (services_lin:306   )   debug: operation_finished: 	pDummy_monitor_10000:1537:stderr [ -- empty -- ]
Nov 13 14:01:20 [466] vm3       lrmd: (services_lin:310   )   debug: operation_finished: 	pDummy_monitor_10000:1537:stdout [ -- empty -- ]
Nov 13 14:01:20 [466] vm3       lrmd: (      lrmd.c:104   )   debug: log_finished: 	finished - rsc:pDummy action:monitor call_id:11 pid:1537 exit-code:7 exec-time:0ms queue-time:0ms
Nov 13 14:01:22 [469] vm3       crmd: (  throttle.c:256   )   debug: throttle_cib_load: 	Current CIB load from 0 ticks in 30s is 0.000000 (@100 tps)
Nov 13 14:01:22 [469] vm3       crmd: (  throttle.c:298   )   debug: throttle_load_avg: 	Current load is 0.000000 (full: 0.00 0.00 0.00 1/108 1546)
Nov 13 14:01:22 [469] vm3       crmd: (  throttle.c:378   )   debug: throttle_io_load: 	Current IO load is 0.000000
Nov 13 14:01:30 [466] vm3       lrmd: (services_lin:217   )   debug: recurring_action_timer: 	Scheduling another invokation of pDummy_monitor_10000
Dummy(pDummy)[1547]:	2013/11/13_14:01:30 DEBUG: pDummy monitor : 7
Nov 13 14:01:30 [466] vm3       lrmd: (services_lin:301   )   debug: operation_finished: 	pDummy_monitor_10000:1547 - exited with rc=7
Nov 13 14:01:30 [466] vm3       lrmd: (services_lin:306   )   debug: operation_finished: 	pDummy_monitor_10000:1547:stderr [ -- empty -- ]
Nov 13 14:01:30 [466] vm3       lrmd: (services_lin:310   )   debug: operation_finished: 	pDummy_monitor_10000:1547:stdout [ -- empty -- ]
Nov 13 14:01:30 [466] vm3       lrmd: (      lrmd.c:104   )   debug: log_finished: 	finished - rsc:pDummy action:monitor call_id:11 pid:1547 exit-code:7 exec-time:0ms queue-time:0ms
Nov 13 14:01:40 [466] vm3       lrmd: (services_lin:217   )   debug: recurring_action_timer: 	Scheduling another invokation of pDummy_monitor_10000
Dummy(pDummy)[1562]:	2013/11/13_14:01:40 DEBUG: pDummy monitor : 7
Nov 13 14:01:40 [466] vm3       lrmd: (services_lin:301   )   debug: operation_finished: 	pDummy_monitor_10000:1562 - exited with rc=7
Nov 13 14:01:40 [466] vm3       lrmd: (services_lin:306   )   debug: operation_finished: 	pDummy_monitor_10000:1562:stderr [ -- empty -- ]
Nov 13 14:01:40 [466] vm3       lrmd: (services_lin:310   )   debug: operation_finished: 	pDummy_monitor_10000:1562:stdout [ -- empty -- ]
Nov 13 14:01:40 [466] vm3       lrmd: (      lrmd.c:104   )   debug: log_finished: 	finished - rsc:pDummy action:monitor call_id:11 pid:1562 exit-code:7 exec-time:0ms queue-time:0ms
Nov 13 14:01:50 [466] vm3       lrmd: (services_lin:217   )   debug: recurring_action_timer: 	Scheduling another invokation of pDummy_monitor_10000
Dummy(pDummy)[1570]:	2013/11/13_14:01:50 DEBUG: pDummy monitor : 7
Nov 13 14:01:50 [466] vm3       lrmd: (services_lin:301   )   debug: operation_finished: 	pDummy_monitor_10000:1570 - exited with rc=7
Nov 13 14:01:50 [466] vm3       lrmd: (services_lin:306   )   debug: operation_finished: 	pDummy_monitor_10000:1570:stderr [ -- empty -- ]
Nov 13 14:01:50 [466] vm3       lrmd: (services_lin:310   )   debug: operation_finished: 	pDummy_monitor_10000:1570:stdout [ -- empty -- ]
Nov 13 14:01:50 [466] vm3       lrmd: (      lrmd.c:104   )   debug: log_finished: 	finished - rsc:pDummy action:monitor call_id:11 pid:1570 exit-code:7 exec-time:0ms queue-time:0ms
Nov 13 14:01:52 [469] vm3       crmd: (  throttle.c:256   )   debug: throttle_cib_load: 	Current CIB load from 1 ticks in 30s is 0.000333 (@100 tps)
Nov 13 14:01:52 [469] vm3       crmd: (  throttle.c:298   )   debug: throttle_load_avg: 	Current load is 0.000000 (full: 0.00 0.00 0.00 1/108 1577)
Nov 13 14:01:52 [469] vm3       crmd: (  throttle.c:378   )   debug: throttle_io_load: 	Current IO load is 0.000000
Nov 13 14:02:00 [466] vm3       lrmd: (services_lin:217   )   debug: recurring_action_timer: 	Scheduling another invokation of pDummy_monitor_10000
Dummy(pDummy)[1578]:	2013/11/13_14:02:00 DEBUG: pDummy monitor : 7
Nov 13 14:02:00 [466] vm3       lrmd: (services_lin:301   )   debug: operation_finished: 	pDummy_monitor_10000:1578 - exited with rc=7
Nov 13 14:02:00 [466] vm3       lrmd: (services_lin:306   )   debug: operation_finished: 	pDummy_monitor_10000:1578:stderr [ -- empty -- ]
Nov 13 14:02:00 [466] vm3       lrmd: (services_lin:310   )   debug: operation_finished: 	pDummy_monitor_10000:1578:stdout [ -- empty -- ]
Nov 13 14:02:00 [466] vm3       lrmd: (      lrmd.c:104   )   debug: log_finished: 	finished - rsc:pDummy action:monitor call_id:11 pid:1578 exit-code:7 exec-time:0ms queue-time:0ms
Nov 13 14:02:10 [466] vm3       lrmd: (services_lin:217   )   debug: recurring_action_timer: 	Scheduling another invokation of pDummy_monitor_10000
Dummy(pDummy)[1588]:	2013/11/13_14:02:10 DEBUG: pDummy monitor : 7
Nov 13 14:02:10 [466] vm3       lrmd: (services_lin:301   )   debug: operation_finished: 	pDummy_monitor_10000:1588 - exited with rc=7
Nov 13 14:02:10 [466] vm3       lrmd: (services_lin:306   )   debug: operation_finished: 	pDummy_monitor_10000:1588:stderr [ -- empty -- ]
Nov 13 14:02:10 [466] vm3       lrmd: (services_lin:310   )   debug: operation_finished: 	pDummy_monitor_10000:1588:stdout [ -- empty -- ]
Nov 13 14:02:10 [466] vm3       lrmd: (      lrmd.c:104   )   debug: log_finished: 	finished - rsc:pDummy action:monitor call_id:11 pid:1588 exit-code:7 exec-time:0ms queue-time:0ms
Nov 13 14:02:20 [466] vm3       lrmd: (services_lin:217   )   debug: recurring_action_timer: 	Scheduling another invokation of pDummy_monitor_10000
Dummy(pDummy)[1598]:	2013/11/13_14:02:20 DEBUG: pDummy monitor : 7
Nov 13 14:02:20 [466] vm3       lrmd: (services_lin:301   )   debug: operation_finished: 	pDummy_monitor_10000:1598 - exited with rc=7
Nov 13 14:02:20 [466] vm3       lrmd: (services_lin:306   )   debug: operation_finished: 	pDummy_monitor_10000:1598:stderr [ -- empty -- ]
Nov 13 14:02:20 [466] vm3       lrmd: (services_lin:310   )   debug: operation_finished: 	pDummy_monitor_10000:1598:stdout [ -- empty -- ]
Nov 13 14:02:20 [466] vm3       lrmd: (      lrmd.c:104   )   debug: log_finished: 	finished - rsc:pDummy action:monitor call_id:11 pid:1598 exit-code:7 exec-time:0ms queue-time:0ms
Nov 13 14:02:22 [469] vm3       crmd: (  throttle.c:256   )   debug: throttle_cib_load: 	Current CIB load from 0 ticks in 30s is 0.000000 (@100 tps)
Nov 13 14:02:22 [469] vm3       crmd: (  throttle.c:298   )   debug: throttle_load_avg: 	Current load is 0.000000 (full: 0.00 0.00 0.00 1/108 1605)
Nov 13 14:02:22 [469] vm3       crmd: (  throttle.c:378   )   debug: throttle_io_load: 	Current IO load is 0.000000
Nov 13 14:02:30 [466] vm3       lrmd: (services_lin:217   )   debug: recurring_action_timer: 	Scheduling another invokation of pDummy_monitor_10000
Dummy(pDummy)[1606]:	2013/11/13_14:02:30 DEBUG: pDummy monitor : 7
Nov 13 14:02:30 [466] vm3       lrmd: (services_lin:301   )   debug: operation_finished: 	pDummy_monitor_10000:1606 - exited with rc=7
Nov 13 14:02:30 [466] vm3       lrmd: (services_lin:306   )   debug: operation_finished: 	pDummy_monitor_10000:1606:stderr [ -- empty -- ]
Nov 13 14:02:30 [466] vm3       lrmd: (services_lin:310   )   debug: operation_finished: 	pDummy_monitor_10000:1606:stdout [ -- empty -- ]
Nov 13 14:02:30 [466] vm3       lrmd: (      lrmd.c:104   )   debug: log_finished: 	finished - rsc:pDummy action:monitor call_id:11 pid:1606 exit-code:7 exec-time:0ms queue-time:0ms
Nov 13 14:02:40 [466] vm3       lrmd: (services_lin:217   )   debug: recurring_action_timer: 	Scheduling another invokation of pDummy_monitor_10000
Dummy(pDummy)[1621]:	2013/11/13_14:02:40 DEBUG: pDummy monitor : 7
Nov 13 14:02:40 [466] vm3       lrmd: (services_lin:301   )   debug: operation_finished: 	pDummy_monitor_10000:1621 - exited with rc=7
Nov 13 14:02:40 [466] vm3       lrmd: (services_lin:306   )   debug: operation_finished: 	pDummy_monitor_10000:1621:stderr [ -- empty -- ]
Nov 13 14:02:40 [466] vm3       lrmd: (services_lin:310   )   debug: operation_finished: 	pDummy_monitor_10000:1621:stdout [ -- empty -- ]
Nov 13 14:02:40 [466] vm3       lrmd: (      lrmd.c:104   )   debug: log_finished: 	finished - rsc:pDummy action:monitor call_id:11 pid:1621 exit-code:7 exec-time:0ms queue-time:0ms
Nov 13 14:02:50 [466] vm3       lrmd: (services_lin:217   )   debug: recurring_action_timer: 	Scheduling another invokation of pDummy_monitor_10000
Dummy(pDummy)[1629]:	2013/11/13_14:02:50 DEBUG: pDummy monitor : 7
Nov 13 14:02:50 [466] vm3       lrmd: (services_lin:301   )   debug: operation_finished: 	pDummy_monitor_10000:1629 - exited with rc=7
Nov 13 14:02:50 [466] vm3       lrmd: (services_lin:306   )   debug: operation_finished: 	pDummy_monitor_10000:1629:stderr [ -- empty -- ]
Nov 13 14:02:50 [466] vm3       lrmd: (services_lin:310   )   debug: operation_finished: 	pDummy_monitor_10000:1629:stdout [ -- empty -- ]
Nov 13 14:02:50 [466] vm3       lrmd: (      lrmd.c:104   )   debug: log_finished: 	finished - rsc:pDummy action:monitor call_id:11 pid:1629 exit-code:7 exec-time:0ms queue-time:0ms
Nov 13 14:02:52 [469] vm3       crmd: (  throttle.c:256   )   debug: throttle_cib_load: 	Current CIB load from 0 ticks in 30s is 0.000000 (@100 tps)
Nov 13 14:02:52 [469] vm3       crmd: (  throttle.c:298   )   debug: throttle_load_avg: 	Current load is 0.000000 (full: 0.00 0.00 0.00 1/108 1637)
Nov 13 14:02:52 [469] vm3       crmd: (  throttle.c:378   )   debug: throttle_io_load: 	Current IO load is 0.000000
Nov 13 14:03:00 [466] vm3       lrmd: (services_lin:217   )   debug: recurring_action_timer: 	Scheduling another invokation of pDummy_monitor_10000
Dummy(pDummy)[1638]:	2013/11/13_14:03:01 DEBUG: pDummy monitor : 7
Nov 13 14:03:01 [466] vm3       lrmd: (services_lin:301   )   debug: operation_finished: 	pDummy_monitor_10000:1638 - exited with rc=7
Nov 13 14:03:01 [466] vm3       lrmd: (services_lin:306   )   debug: operation_finished: 	pDummy_monitor_10000:1638:stderr [ -- empty -- ]
Nov 13 14:03:01 [466] vm3       lrmd: (services_lin:310   )   debug: operation_finished: 	pDummy_monitor_10000:1638:stdout [ -- empty -- ]
Nov 13 14:03:01 [466] vm3       lrmd: (      lrmd.c:104   )   debug: log_finished: 	finished - rsc:pDummy action:monitor call_id:11 pid:1638 exit-code:7 exec-time:0ms queue-time:0ms
Nov 13 14:03:04 [465] vm3 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]   <stonith_command __name__="stonith_command" t="stonith-ng" st_async_id="893bcd8c-11ea-4f2c-b5d5-e2c9d3883c1b" st_op="st_query" st_callid="13" st_callopt="0" st_remote_op="893bcd8c-11ea-4f2c-b5d5-e2c9d3883c1b" st_target="vm3" st_device_action="reboot" st_origin="vm1" st_clientid="e6cedc3f-e233-4a66-9291-ce359bd76aad" st_clientname="crmd.15883" st_timeout="60" src="vm1"/>
Nov 13 14:03:04 [465] vm3 stonith-ng: (  commands.c:2049  )   debug: stonith_command: 	Processing st_query 0 from vm1 (               0)
Nov 13 14:03:04 [465] vm3 stonith-ng: (    remote.c:593   )   trace: create_remote_stonith_op: 	Created 893bcd8c-11ea-4f2c-b5d5-e2c9d3883c1b
Nov 13 14:03:04 [465] vm3 stonith-ng: (    remote.c:623   )   trace: create_remote_stonith_op: 	Recorded new stonith op: 893bcd8c-11ea-4f2c-b5d5-e2c9d3883c1b - reboot of vm3 for crmd.15883
Nov 13 14:03:04 [465] vm3 stonith-ng: (    remote.c:474   )   trace: merge_duplicates: 	Must be for different clients: crmd.15883
Nov 13 14:03:04 [465] vm3 stonith-ng: (  commands.c:1296  )   debug: stonith_query: 	Query   <stonith_command __name__="stonith_command" t="stonith-ng" st_async_id="893bcd8c-11ea-4f2c-b5d5-e2c9d3883c1b" st_op="st_query" st_callid="13" st_callopt="0" st_remote_op="893bcd8c-11ea-4f2c-b5d5-e2c9d3883c1b" st_target="vm3" st_device_action="reboot" st_origin="vm1" st_clientid="e6cedc3f-e233-4a66-9291-ce359bd76aad" st_clientname="crmd.15883" st_timeout="60" src="vm1"/>
Nov 13 14:03:04 [465] vm3 stonith-ng: (  commands.c:1650  )   trace: stonith_construct_reply: 	Creating a basic reply
Nov 13 14:03:04 [465] vm3 stonith-ng: (  commands.c:1254  )   debug: stonith_query_capable_device_cb: 	Found 0 matching devices for 'vm3'
Nov 13 14:03:04 [465] vm3 stonith-ng: (  commands.c:1260  )   trace: stonith_query_capable_device_cb: 	Attaching query list output
Nov 13 14:03:04 [465] vm3 stonith-ng: (  commands.c:2063  )    info: stonith_command: 	Processed st_query from vm1: OK (0)
Nov 13 14:03:07 [465] vm3 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]   <st-reply t="st_notify" subt="broadcast" st_op="st_notify" count="12" src="vm1">
Nov 13 14:03:07 [465] vm3 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]     <st_calldata>
Nov 13 14:03:07 [465] vm3 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]       <st_notify_fence state="4" st_rc="-201" st_target="vm3" st_device_action="reboot" st_delegate="vm1" st_remote_op="893bcd8c-11ea-4f2c-b5d5-e2c9d3883c1b" st_origin="vm1" st_clientid="e6cedc3f-e233-4a66-9291-ce359bd76aad" st_clientname="crmd.15883"/>
Nov 13 14:03:07 [465] vm3 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]     </st_calldata>
Nov 13 14:03:07 [465] vm3 stonith-ng: (      main.c:169   )   trace: stonith_peer_callback: 	Peer[inbound]   </st-reply>
Nov 13 14:03:07 [465] vm3 stonith-ng: (  commands.c:2049  )   debug: stonith_command: 	Processing st_notify reply 0 from vm1 (               0)
Nov 13 14:03:07 [465] vm3 stonith-ng: (    remote.c:1281  )   debug: process_remote_stonith_exec: 	Marking call to reboot for vm3 on behalf of crmd.15883@893bcd8c-11ea-4f2c-b5d5-e2c9d3883c1b.vm1: Generic Pacemaker error (-201)
Nov 13 14:03:07 [465] vm3 stonith-ng: (    remote.c:297   )  notice: remote_op_done: 	Operation reboot of vm3 by vm1 for crmd.15883@vm1.893bcd8c: Generic Pacemaker error
Nov 13 14:03:07 [465] vm3 stonith-ng: (  commands.c:1650  )   trace: stonith_construct_reply: 	Creating a basic reply
Nov 13 14:03:07 [465] vm3 stonith-ng: (  commands.c:1666  )   trace: stonith_construct_reply: 	Attaching reply output
Nov 13 14:03:07 [465] vm3 stonith-ng: (      main.c:241   )   trace: do_local_reply: 	Sending response
Nov 13 14:03:07 [465] vm3 stonith-ng: (      main.c:244   )   trace: do_local_reply: 	Sending callback to request originator
Nov 13 14:03:07 [465] vm3 stonith-ng: (      main.c:247   )   trace: do_local_reply: 	No client to sent the response to.  F_STONITH_CLIENTID not set.
Nov 13 14:03:07 [465] vm3 stonith-ng: (      main.c:369   )   trace: do_stonith_notify: 	Notifying clients
Nov 13 14:03:07 [469] vm3       crmd: (  te_utils.c:167   )  notice: tengine_stonith_notify: 	Peer vm3 was not terminated (reboot) by vm1 for vm1: Generic Pacemaker error (ref=893bcd8c-11ea-4f2c-b5d5-e2c9d3883c1b) by client crmd.15883
Nov 13 14:03:07 [465] vm3 stonith-ng: (      main.c:318   )   trace: stonith_notify_client: 	Sent st_notify_fence notification to client crmd.469.f2a5be
Nov 13 14:03:07 [465] vm3 stonith-ng: (      main.c:372   )   trace: do_stonith_notify: 	Notify complete
Nov 13 14:03:07 [465] vm3 stonith-ng: (  commands.c:2063  )    info: stonith_command: 	Processed st_notify reply from vm1: OK (0)
Nov 13 14:03:11 [466] vm3       lrmd: (services_lin:217   )   debug: recurring_action_timer: 	Scheduling another invokation of pDummy_monitor_10000
Dummy(pDummy)[1647]:	2013/11/13_14:03:11 DEBUG: pDummy monitor : 7
Nov 13 14:03:11 [466] vm3       lrmd: (services_lin:301   )   debug: operation_finished: 	pDummy_monitor_10000:1647 - exited with rc=7
Nov 13 14:03:11 [466] vm3       lrmd: (services_lin:306   )   debug: operation_finished: 	pDummy_monitor_10000:1647:stderr [ -- empty -- ]
Nov 13 14:03:11 [466] vm3       lrmd: (services_lin:310   )   debug: operation_finished: 	pDummy_monitor_10000:1647:stdout [ -- empty -- ]
Nov 13 14:03:11 [466] vm3       lrmd: (      lrmd.c:104   )   debug: log_finished: 	finished - rsc:pDummy action:monitor call_id:11 pid:1647 exit-code:7 exec-time:0ms queue-time:0ms
Nov 13 14:03:21 [466] vm3       lrmd: (services_lin:217   )   debug: recurring_action_timer: 	Scheduling another invokation of pDummy_monitor_10000
