Nov 13 13:44:17 vm3 corosync[450]:   [MAIN  ] main.c:main:1171 Corosync Cluster Engine ('2.3.2.7-a911'): started and ready to provide service.
Nov 13 13:44:17 vm3 corosync[450]:   [MAIN  ] main.c:main:1172 Corosync built-in features: watchdog upstart snmp pie relro bindnow
Nov 13 13:44:17 vm3 corosync[451]:   [TOTEM ] totemnet.c:totemnet_instance_initialize:242 Initializing transport (UDP/IP Multicast).
Nov 13 13:44:17 vm3 corosync[451]:   [TOTEM ] totemcrypto.c:init_nss:579 Initializing transmit/receive security (NSS) crypto: aes256 hash: sha1
Nov 13 13:44:17 vm3 corosync[451]:   [TOTEM ] totemnet.c:totemnet_instance_initialize:242 Initializing transport (UDP/IP Multicast).
Nov 13 13:44:17 vm3 corosync[451]:   [TOTEM ] totemcrypto.c:init_nss:579 Initializing transmit/receive security (NSS) crypto: aes256 hash: sha1
Nov 13 13:44:18 vm3 corosync[451]:   [TOTEM ] totemudp.c:timer_function_netif_check_timeout:670 The network interface [192.168.101.143] is now up.
Nov 13 13:44:18 vm3 corosync[451]:   [SERV  ] service.c:corosync_service_link_and_init:174 Service engine loaded: corosync configuration map access [0]
Nov 13 13:44:18 vm3 corosync[451]:   [QB    ] ipc_setup.c:qb_ipcs_us_publish:377 server name: cmap
Nov 13 13:44:18 vm3 corosync[451]:   [SERV  ] service.c:corosync_service_link_and_init:174 Service engine loaded: corosync configuration service [1]
Nov 13 13:44:18 vm3 corosync[451]:   [QB    ] ipc_setup.c:qb_ipcs_us_publish:377 server name: cfg
Nov 13 13:44:18 vm3 corosync[451]:   [SERV  ] service.c:corosync_service_link_and_init:174 Service engine loaded: corosync cluster closed process group service v1.01 [2]
Nov 13 13:44:18 vm3 corosync[451]:   [QB    ] ipc_setup.c:qb_ipcs_us_publish:377 server name: cpg
Nov 13 13:44:18 vm3 corosync[451]:   [SERV  ] service.c:corosync_service_link_and_init:174 Service engine loaded: corosync profile loading service [4]
Nov 13 13:44:18 vm3 corosync[451]:   [WD    ] wd.c:setup_watchdog:631 No Watchdog, try modprobe <a watchdog>
Nov 13 13:44:18 vm3 corosync[451]:   [WD    ] wd.c:wd_scan_resources:580 no resources configured.
Nov 13 13:44:18 vm3 corosync[451]:   [SERV  ] service.c:corosync_service_link_and_init:174 Service engine loaded: corosync watchdog service [7]
Nov 13 13:44:18 vm3 corosync[451]:   [QUORUM] vsf_quorum.c:quorum_exec_init_fn:274 Using quorum provider corosync_votequorum
Nov 13 13:44:18 vm3 corosync[451]:   [SERV  ] service.c:corosync_service_link_and_init:174 Service engine loaded: corosync vote quorum service v1.0 [5]
Nov 13 13:44:18 vm3 corosync[451]:   [QB    ] ipc_setup.c:qb_ipcs_us_publish:377 server name: votequorum
Nov 13 13:44:18 vm3 corosync[451]:   [SERV  ] service.c:corosync_service_link_and_init:174 Service engine loaded: corosync cluster quorum service v0.1 [3]
Nov 13 13:44:18 vm3 corosync[451]:   [QB    ] ipc_setup.c:qb_ipcs_us_publish:377 server name: quorum
Nov 13 13:44:18 vm3 corosync[451]:   [TOTEM ] totemudp.c:timer_function_netif_check_timeout:670 The network interface [192.168.102.143] is now up.
Nov 13 13:44:18 vm3 corosync[451]:   [TOTEM ] totemsrp.c:memb_state_operational_enter:2016 A new membership (192.168.101.143:4) was formed. Members joined: -1062705777
Nov 13 13:44:18 vm3 corosync[451]:   [QUORUM] vsf_quorum.c:log_view_list:132 Members[1]: -1062705777
Nov 13 13:44:18 vm3 corosync[451]:   [MAIN  ] main.c:corosync_sync_completed:276 Completed service synchronization, ready to provide service.
Nov 13 13:44:18 vm3 corosync[451]:   [TOTEM ] totemsrp.c:memb_state_operational_enter:2016 A new membership (192.168.101.141:12) was formed. Members joined: -1062705779 -1062705778
Nov 13 13:44:18 vm3 corosync[451]:   [QUORUM] vsf_quorum.c:quorum_api_set_quorum:148 This node is within the primary component and will provide service.
Nov 13 13:44:18 vm3 corosync[451]:   [QUORUM] vsf_quorum.c:log_view_list:132 Members[3]: -1062705779 -1062705778 -1062705777
Nov 13 13:44:18 vm3 corosync[451]:   [MAIN  ] main.c:corosync_sync_completed:276 Completed service synchronization, ready to provide service.
Nov 13 13:44:20 vm3 pacemakerd[460]:   notice: crm_add_logfile: Additional logging available in /var/log/ha-debug
Nov 13 13:44:20 vm3 pacemakerd[460]:    debug: crm_update_callsites: Enabling callsites based on priority=7, files=remote.c,commands.c,main.c, functions=(null), formats=(null), tags=(null)
Nov 13 13:44:20 vm3 pacemakerd[460]:     info: crm_log_init: Changed active directory to /var/lib/heartbeat/cores/root
Nov 13 13:44:20 vm3 pacemakerd[460]:     info: crm_ipc_connect: Could not establish pacemakerd connection: Connection refused (111)
Nov 13 13:44:20 vm3 pacemakerd[460]:     info: get_cluster_type: Detected an active 'corosync' cluster
Nov 13 13:44:20 vm3 pacemakerd[460]:     info: mcp_read_config: Reading configure for stack: corosync
Nov 13 13:44:20 vm3 pacemakerd[460]:   notice: mcp_read_config: Configured corosync to accept connections from group 492: OK (1)
Nov 13 13:44:20 vm3 pacemakerd[460]:   notice: crm_add_logfile: Additional logging available in /var/log/ha-debug
Nov 13 13:44:20 vm3 pacemakerd[460]:   notice: main: Starting Pacemaker 1.1.10 (Build: 2383f6c):  ncurses libqb-logging libqb-ipc lha-fencing nagios  corosync-native snmp
Nov 13 13:44:20 vm3 pacemakerd[460]:     info: main: Maximum core file size is: 18446744073709551615
Nov 13 13:44:20 vm3 pacemakerd[460]:     info: qb_ipcs_us_publish: server name: pacemakerd
Nov 13 13:44:20 vm3 pacemakerd[460]:     info: crm_get_peer: Created entry 786452f5-9344-412f-9259-eaccbe3445f1/0x8e9030 for node (null)/3232261519 (1 total)
Nov 13 13:44:20 vm3 pacemakerd[460]:     info: crm_get_peer: Node 3232261519 has uuid 3232261519
Nov 13 13:44:20 vm3 pacemakerd[460]:     info: crm_update_peer_proc: cluster_connect_cpg: Node (null)[3232261519] - corosync-cpg is now online
Nov 13 13:44:20 vm3 pacemakerd[460]:   notice: cluster_connect_quorum: Quorum acquired
Nov 13 13:44:20 vm3 pacemakerd[460]:   notice: corosync_node_name: Unable to get node name for nodeid 3232261519
Nov 13 13:44:20 vm3 pacemakerd[460]:   notice: get_node_name: Defaulting to uname -n for the local corosync node name
Nov 13 13:44:20 vm3 pacemakerd[460]:     info: crm_get_peer: Node 3232261519 is now known as vm3
Nov 13 13:44:20 vm3 pacemakerd[460]:     info: start_child: Using uid=496 and group=492 for process cib
Nov 13 13:44:20 vm3 pacemakerd[460]:     info: start_child: Forked child 464 for process cib
Nov 13 13:44:20 vm3 pacemakerd[460]:     info: start_child: Forked child 465 for process stonith-ng
Nov 13 13:44:20 vm3 pacemakerd[460]:     info: start_child: Forked child 466 for process lrmd
Nov 13 13:44:20 vm3 pacemakerd[460]:     info: start_child: Using uid=496 and group=492 for process attrd
Nov 13 13:44:20 vm3 pacemakerd[460]:     info: start_child: Forked child 467 for process attrd
Nov 13 13:44:20 vm3 pacemakerd[460]:     info: start_child: Using uid=496 and group=492 for process pengine
Nov 13 13:44:20 vm3 pacemakerd[460]:     info: start_child: Forked child 468 for process pengine
Nov 13 13:44:20 vm3 pacemakerd[460]:     info: start_child: Using uid=496 and group=492 for process crmd
Nov 13 13:44:20 vm3 pacemakerd[460]:     info: start_child: Forked child 469 for process crmd
Nov 13 13:44:20 vm3 pacemakerd[460]:     info: main: Starting mainloop
Nov 13 13:44:20 vm3 pacemakerd[460]:     info: pcmk_quorum_notification: Membership 12: quorum retained (3)
Nov 13 13:44:20 vm3 pacemakerd[460]:     info: crm_get_peer: Created entry 3f9e83b4-157d-444a-b6c9-aaad497b7f23/0x9eb3d0 for node (null)/3232261517 (2 total)
Nov 13 13:44:20 vm3 pacemakerd[460]:     info: crm_get_peer: Node 3232261517 has uuid 3232261517
Nov 13 13:44:20 vm3 pacemakerd[460]:     info: pcmk_quorum_notification: Obtaining name for new node 3232261517
Nov 13 13:44:20 vm3 cib[464]:   notice: crm_add_logfile: Additional logging available in /var/log/ha-debug
Nov 13 13:44:20 vm3 cib[464]:    debug: crm_update_callsites: Enabling callsites based on priority=7, files=remote.c,commands.c,main.c, functions=(null), formats=(null), tags=(null)
Nov 13 13:44:20 vm3 cib[464]:     info: crm_log_init: Changed active directory to /var/lib/heartbeat/cores/hacluster
Nov 13 13:44:20 vm3 cib[464]:   notice: main: Using new config location: /var/lib/pacemaker/cib
Nov 13 13:44:20 vm3 attrd[467]:   notice: crm_add_logfile: Additional logging available in /var/log/ha-debug
Nov 13 13:44:20 vm3 attrd[467]:    debug: crm_update_callsites: Enabling callsites based on priority=7, files=remote.c,commands.c,main.c, functions=(null), formats=(null), tags=(null)
Nov 13 13:44:20 vm3 lrmd[466]:   notice: crm_add_logfile: Additional logging available in /var/log/ha-debug
Nov 13 13:44:20 vm3 lrmd[466]:    debug: crm_update_callsites: Enabling callsites based on priority=7, files=remote.c,commands.c,main.c, functions=(null), formats=(null), tags=(null)
Nov 13 13:44:20 vm3 lrmd[466]:     info: crm_log_init: Changed active directory to /var/lib/heartbeat/cores/root
Nov 13 13:44:20 vm3 lrmd[466]:     info: qb_ipcs_us_publish: server name: lrmd
Nov 13 13:44:20 vm3 stonith-ng[465]:   notice: crm_add_logfile: Additional logging available in /var/log/ha-debug
Nov 13 13:44:20 vm3 stonith-ng[465]:    debug: crm_update_callsites: Enabling callsites based on priority=7, files=remote.c,commands.c,main.c, functions=(null), formats=(null), tags=(null)
Nov 13 13:44:20 vm3 attrd[467]:     info: crm_log_init: Changed active directory to /var/lib/heartbeat/cores/hacluster
Nov 13 13:44:20 vm3 cib[464]:     info: get_cluster_type: Verifying cluster type: 'corosync'
Nov 13 13:44:20 vm3 cib[464]:     info: get_cluster_type: Assuming an active 'corosync' cluster
Nov 13 13:44:20 vm3 cib[464]:     info: retrieveCib: Reading cluster configuration from: /var/lib/pacemaker/cib/cib.xml (digest: /var/lib/pacemaker/cib/cib.xml.sig)
Nov 13 13:44:20 vm3 cib[464]:  warning: retrieveCib: Cluster configuration not found: /var/lib/pacemaker/cib/cib.xml
Nov 13 13:44:20 vm3 cib[464]:  warning: readCibXmlFile: Primary configuration corrupt or unusable, trying backups in /var/lib/pacemaker/cib
Nov 13 13:44:20 vm3 cib[464]:  warning: readCibXmlFile: Continuing with an empty configuration.
Nov 13 13:44:20 vm3 cib[464]:     info: validate_with_relaxng: Creating RNG parser context
Nov 13 13:44:20 vm3 lrmd[466]:     info: main: Starting
Nov 13 13:44:20 vm3 stonith-ng[465]:     info: crm_log_init: Changed active directory to /var/lib/heartbeat/cores/root
Nov 13 13:44:20 vm3 stonith-ng[465]:     info: get_cluster_type: Verifying cluster type: 'corosync'
Nov 13 13:44:20 vm3 attrd[467]:     info: main: Starting up
Nov 13 13:44:20 vm3 attrd[467]:     info: get_cluster_type: Verifying cluster type: 'corosync'
Nov 13 13:44:20 vm3 attrd[467]:     info: get_cluster_type: Assuming an active 'corosync' cluster
Nov 13 13:44:20 vm3 attrd[467]:   notice: crm_cluster_connect: Connecting to cluster infrastructure: corosync
Nov 13 13:44:20 vm3 stonith-ng[465]:     info: get_cluster_type: Assuming an active 'corosync' cluster
Nov 13 13:44:20 vm3 stonith-ng[465]:   notice: crm_cluster_connect: Connecting to cluster infrastructure: corosync
Nov 13 13:44:21 vm3 pengine[468]:   notice: crm_add_logfile: Additional logging available in /var/log/ha-debug
Nov 13 13:44:21 vm3 pengine[468]:    debug: crm_update_callsites: Enabling callsites based on priority=7, files=remote.c,commands.c,main.c, functions=(null), formats=(null), tags=(null)
Nov 13 13:44:21 vm3 crmd[469]:   notice: crm_add_logfile: Additional logging available in /var/log/ha-debug
Nov 13 13:44:21 vm3 crmd[469]:    debug: crm_update_callsites: Enabling callsites based on priority=7, files=remote.c,commands.c,main.c, functions=(null), formats=(null), tags=(null)
Nov 13 13:44:21 vm3 crmd[469]:     info: crm_log_init: Changed active directory to /var/lib/heartbeat/cores/hacluster
Nov 13 13:44:21 vm3 pengine[468]:     info: crm_log_init: Changed active directory to /var/lib/heartbeat/cores/hacluster
Nov 13 13:44:21 vm3 pengine[468]:     info: qb_ipcs_us_publish: server name: pengine
Nov 13 13:44:21 vm3 pengine[468]:     info: main: Starting pengine
Nov 13 13:44:21 vm3 crmd[469]:   notice: main: CRM Git Version: 2383f6c
Nov 13 13:44:21 vm3 crmd[469]:     info: do_log: FSA: Input I_STARTUP from crmd_init() received in state S_STARTING
Nov 13 13:44:21 vm3 crmd[469]:     info: get_cluster_type: Verifying cluster type: 'corosync'
Nov 13 13:44:21 vm3 crmd[469]:     info: get_cluster_type: Assuming an active 'corosync' cluster
Nov 13 13:44:21 vm3 crmd[469]:     info: crm_ipc_connect: Could not establish cib_shm connection: Connection refused (111)
Nov 13 13:44:21 vm3 pacemakerd[460]:   notice: corosync_node_name: Unable to get node name for nodeid 3232261517
Nov 13 13:44:21 vm3 pacemakerd[460]:   notice: crm_update_peer_state: pcmk_quorum_notification: Node (null)[3232261517] - state is now member (was (null))
Nov 13 13:44:21 vm3 pacemakerd[460]:     info: crm_get_peer: Created entry 18628256-f88a-4204-8634-d1c2f5f976b2/0x9ea7d0 for node (null)/3232261518 (3 total)
Nov 13 13:44:21 vm3 pacemakerd[460]:     info: crm_get_peer: Node 3232261518 has uuid 3232261518
Nov 13 13:44:21 vm3 pacemakerd[460]:     info: pcmk_quorum_notification: Obtaining name for new node 3232261518
Nov 13 13:44:21 vm3 attrd[467]:     info: crm_get_peer: Created entry 00b9766b-a28b-4016-9e84-6ff891d8164a/0x1cce140 for node (null)/3232261519 (1 total)
Nov 13 13:44:21 vm3 attrd[467]:     info: crm_get_peer: Node 3232261519 has uuid 3232261519
Nov 13 13:44:21 vm3 attrd[467]:     info: crm_update_peer_proc: cluster_connect_cpg: Node (null)[3232261519] - corosync-cpg is now online
Nov 13 13:44:21 vm3 attrd[467]:   notice: crm_update_peer_state: attrd_peer_change_cb: Node (null)[3232261519] - state is now member (was (null))
Nov 13 13:44:21 vm3 attrd[467]:     info: init_cs_connection_once: Connection to 'corosync': established
Nov 13 13:44:21 vm3 cib[464]:     info: startCib: CIB Initialization completed successfully
Nov 13 13:44:21 vm3 cib[464]:   notice: crm_cluster_connect: Connecting to cluster infrastructure: corosync
Nov 13 13:44:21 vm3 pacemakerd[460]:   notice: corosync_node_name: Unable to get node name for nodeid 3232261518
Nov 13 13:44:21 vm3 pacemakerd[460]:   notice: crm_update_peer_state: pcmk_quorum_notification: Node (null)[3232261518] - state is now member (was (null))
Nov 13 13:44:21 vm3 pacemakerd[460]:   notice: crm_update_peer_state: pcmk_quorum_notification: Node vm3[3232261519] - state is now member (was (null))
Nov 13 13:44:21 vm3 pacemakerd[460]:     info: crm_get_peer: Node 3232261517 is now known as vm1
Nov 13 13:44:21 vm3 pacemakerd[460]:     info: crm_get_peer: Node 3232261518 is now known as vm2
Nov 13 13:44:21 vm3 stonith-ng[465]:     info: crm_get_peer: Created entry a67a7722-f78f-46cf-b8b4-b989353591b2/0x1b006d0 for node (null)/3232261519 (1 total)
Nov 13 13:44:21 vm3 stonith-ng[465]:     info: crm_get_peer: Node 3232261519 has uuid 3232261519
Nov 13 13:44:21 vm3 stonith-ng[465]:     info: crm_update_peer_proc: cluster_connect_cpg: Node (null)[3232261519] - corosync-cpg is now online
Nov 13 13:44:21 vm3 stonith-ng[465]:     info: init_cs_connection_once: Connection to 'corosync': established
Nov 13 13:44:21 vm3 attrd[467]:   notice: corosync_node_name: Unable to get node name for nodeid 3232261519
Nov 13 13:44:21 vm3 attrd[467]:   notice: get_node_name: Defaulting to uname -n for the local corosync node name
Nov 13 13:44:21 vm3 attrd[467]:     info: crm_get_peer: Node 3232261519 is now known as vm3
Nov 13 13:44:21 vm3 attrd[467]:     info: main: Cluster connection active
Nov 13 13:44:21 vm3 attrd[467]:     info: qb_ipcs_us_publish: server name: attrd
Nov 13 13:44:21 vm3 attrd[467]:     info: main: Accepting attribute updates
Nov 13 13:44:21 vm3 attrd[467]:     info: crm_ipc_connect: Could not establish cib_rw connection: Connection refused (111)
Nov 13 13:44:21 vm3 cib[464]:     info: crm_get_peer: Created entry 3d5cecc8-ba89-4ad7-8053-78b2ef795126/0x1a6c360 for node (null)/3232261519 (1 total)
Nov 13 13:44:21 vm3 cib[464]:     info: crm_get_peer: Node 3232261519 has uuid 3232261519
Nov 13 13:44:21 vm3 cib[464]:     info: crm_update_peer_proc: cluster_connect_cpg: Node (null)[3232261519] - corosync-cpg is now online
Nov 13 13:44:21 vm3 cib[464]:     info: init_cs_connection_once: Connection to 'corosync': established
Nov 13 13:44:21 vm3 stonith-ng[465]:   notice: corosync_node_name: Unable to get node name for nodeid 3232261519
Nov 13 13:44:21 vm3 stonith-ng[465]:   notice: get_node_name: Defaulting to uname -n for the local corosync node name
Nov 13 13:44:21 vm3 stonith-ng[465]:     info: crm_get_peer: Node 3232261519 is now known as vm3
Nov 13 13:44:21 vm3 stonith-ng[465]:     info: crm_ipc_connect: Could not establish cib_rw connection: Connection refused (111)
Nov 13 13:44:21 vm3 cib[464]:   notice: corosync_node_name: Unable to get node name for nodeid 3232261519
Nov 13 13:44:21 vm3 cib[464]:   notice: get_node_name: Defaulting to uname -n for the local corosync node name
Nov 13 13:44:21 vm3 cib[464]:     info: crm_get_peer: Node 3232261519 is now known as vm3
Nov 13 13:44:21 vm3 cib[464]:     info: qb_ipcs_us_publish: server name: cib_ro
Nov 13 13:44:21 vm3 cib[464]:     info: qb_ipcs_us_publish: server name: cib_rw
Nov 13 13:44:21 vm3 cib[464]:     info: qb_ipcs_us_publish: server name: cib_shm
Nov 13 13:44:21 vm3 cib[464]:     info: cib_init: Starting cib mainloop
Nov 13 13:44:21 vm3 cib[464]:     info: pcmk_cpg_membership: Joined[0.0] cib.3232261519 
Nov 13 13:44:21 vm3 cib[464]:     info: crm_get_peer: Created entry da0b5f3f-8aeb-43e5-a5b8-2adea07b94d5/0x1a6ebc0 for node (null)/3232261517 (2 total)
Nov 13 13:44:21 vm3 cib[464]:     info: crm_get_peer: Node 3232261517 has uuid 3232261517
Nov 13 13:44:21 vm3 cib[464]:     info: pcmk_cpg_membership: Member[0.0] cib.3232261517 
Nov 13 13:44:21 vm3 cib[464]:     info: crm_update_peer_proc: pcmk_cpg_membership: Node (null)[3232261517] - corosync-cpg is now online
Nov 13 13:44:21 vm3 cib[464]:     info: crm_get_peer: Created entry be1aae7f-b01f-45a8-9001-7da35423f919/0x1a6def0 for node (null)/3232261518 (3 total)
Nov 13 13:44:21 vm3 cib[464]:     info: crm_get_peer: Node 3232261518 has uuid 3232261518
Nov 13 13:44:21 vm3 cib[464]:     info: pcmk_cpg_membership: Member[0.1] cib.3232261518 
Nov 13 13:44:21 vm3 cib[464]:     info: crm_update_peer_proc: pcmk_cpg_membership: Node (null)[3232261518] - corosync-cpg is now online
Nov 13 13:44:21 vm3 cib[464]:     info: pcmk_cpg_membership: Member[0.2] cib.3232261519 
Nov 13 13:44:21 vm3 cib[470]:     info: write_cib_contents: Wrote version 0.0.0 of the CIB to disk (digest: 5a2fda2a744a4dcae8dfd552c5909754)
Nov 13 13:44:21 vm3 cib[470]:     info: retrieveCib: Reading cluster configuration from: /var/lib/pacemaker/cib/cib.bTQsPv (digest: /var/lib/pacemaker/cib/cib.ZGUMyJ)
Nov 13 13:44:22 vm3 cib[464]:     info: crm_client_new: Connecting 0x1a6f340 for uid=496 gid=492 pid=469 id=4c21d8b3-8500-4e75-a428-bf457cc36a6c
Nov 13 13:44:22 vm3 crmd[469]:     info: do_cib_control: CIB connection established
Nov 13 13:44:22 vm3 crmd[469]:   notice: crm_cluster_connect: Connecting to cluster infrastructure: corosync
Nov 13 13:44:22 vm3 cib[464]:     info: cib_process_request: Completed cib_query operation for section 'all': OK (rc=0, origin=local/crmd/2, version=0.0.0)
Nov 13 13:44:22 vm3 crmd[469]:     info: crm_get_peer: Created entry 1b84e6c1-a836-47c9-bdcb-98c835f079db/0x275fc60 for node (null)/3232261519 (1 total)
Nov 13 13:44:22 vm3 crmd[469]:     info: crm_get_peer: Node 3232261519 has uuid 3232261519
Nov 13 13:44:22 vm3 crmd[469]:     info: crm_update_peer_proc: cluster_connect_cpg: Node (null)[3232261519] - corosync-cpg is now online
Nov 13 13:44:22 vm3 crmd[469]:     info: init_cs_connection_once: Connection to 'corosync': established
Nov 13 13:44:22 vm3 crmd[469]:   notice: corosync_node_name: Unable to get node name for nodeid 3232261519
Nov 13 13:44:22 vm3 crmd[469]:   notice: get_node_name: Defaulting to uname -n for the local corosync node name
Nov 13 13:44:22 vm3 crmd[469]:     info: crm_get_peer: Node 3232261519 is now known as vm3
Nov 13 13:44:22 vm3 crmd[469]:     info: peer_update_callback: vm3 is now (null)
Nov 13 13:44:22 vm3 crmd[469]:   notice: cluster_connect_quorum: Quorum acquired
Nov 13 13:44:22 vm3 cib[464]:     info: crm_client_new: Connecting 0x1af22b0 for uid=496 gid=492 pid=467 id=15b61ae7-62d1-4555-b4ac-84143ce0eda3
Nov 13 13:44:22 vm3 attrd[467]:     info: attrd_cib_connect: Connected to the CIB after 2 attempts
Nov 13 13:44:22 vm3 cib[464]:     info: crm_client_new: Connecting 0x1af3640 for uid=0 gid=0 pid=465 id=3b3586a2-45ff-4ba6-8242-af9114a2a8e7
Nov 13 13:44:22 vm3 attrd[467]:     info: main: CIB connection active
Nov 13 13:44:22 vm3 attrd[467]:     info: pcmk_cpg_membership: Joined[0.0] attrd.3232261519 
Nov 13 13:44:22 vm3 attrd[467]:     info: crm_get_peer: Created entry aeb1dc66-e6a9-4bb6-85bf-03cffd5517e5/0x1cd3f10 for node (null)/3232261517 (2 total)
Nov 13 13:44:22 vm3 attrd[467]:     info: crm_get_peer: Node 3232261517 has uuid 3232261517
Nov 13 13:44:22 vm3 attrd[467]:     info: pcmk_cpg_membership: Member[0.0] attrd.3232261517 
Nov 13 13:44:22 vm3 attrd[467]:     info: crm_update_peer_proc: pcmk_cpg_membership: Node (null)[3232261517] - corosync-cpg is now online
Nov 13 13:44:22 vm3 attrd[467]:   notice: crm_update_peer_state: attrd_peer_change_cb: Node (null)[3232261517] - state is now member (was (null))
Nov 13 13:44:22 vm3 attrd[467]:     info: crm_get_peer: Created entry bcf3b23c-c185-4ca8-90aa-68b1949e44ea/0x1cd1fa0 for node (null)/3232261518 (3 total)
Nov 13 13:44:22 vm3 attrd[467]:     info: crm_get_peer: Node 3232261518 has uuid 3232261518
Nov 13 13:44:22 vm3 attrd[467]:     info: pcmk_cpg_membership: Member[0.1] attrd.3232261518 
Nov 13 13:44:22 vm3 attrd[467]:     info: crm_update_peer_proc: pcmk_cpg_membership: Node (null)[3232261518] - corosync-cpg is now online
Nov 13 13:44:22 vm3 attrd[467]:   notice: crm_update_peer_state: attrd_peer_change_cb: Node (null)[3232261518] - state is now member (was (null))
Nov 13 13:44:22 vm3 attrd[467]:     info: pcmk_cpg_membership: Member[0.2] attrd.3232261519 
Nov 13 13:44:22 vm3 stonith-ng[465]:   notice: setup_cib: Watching for stonith topology changes
Nov 13 13:44:22 vm3 stonith-ng[465]:     info: qb_ipcs_us_publish: server name: stonith-ng
Nov 13 13:44:22 vm3 stonith-ng[465]:     info: main: Starting stonith-ng mainloop
Nov 13 13:44:22 vm3 stonith-ng[465]:     info: pcmk_cpg_membership: Joined[0.0] stonith-ng.3232261519 
Nov 13 13:44:22 vm3 stonith-ng[465]:     info: crm_get_peer: Created entry 13cad9fd-c8f5-4e86-9b27-1cbcc9db7a6f/0x1b05680 for node (null)/3232261517 (2 total)
Nov 13 13:44:22 vm3 stonith-ng[465]:     info: crm_get_peer: Node 3232261517 has uuid 3232261517
Nov 13 13:44:22 vm3 stonith-ng[465]:     info: pcmk_cpg_membership: Member[0.0] stonith-ng.3232261517 
Nov 13 13:44:22 vm3 stonith-ng[465]:     info: crm_update_peer_proc: pcmk_cpg_membership: Node (null)[3232261517] - corosync-cpg is now online
Nov 13 13:44:22 vm3 cib[464]:     info: cib_process_request: Completed cib_query operation for section 'all': OK (rc=0, origin=local/crmd/2, version=0.0.0)
Nov 13 13:44:22 vm3 crmd[469]:     info: do_ha_control: Connected to the cluster
Nov 13 13:44:22 vm3 crmd[469]:     info: lrmd_ipc_connect: Connecting to lrmd
Nov 13 13:44:22 vm3 cib[464]:     info: cib_process_request: Completed cib_modify operation for section nodes: OK (rc=0, origin=local/crmd/3, version=0.0.0)
Nov 13 13:44:22 vm3 lrmd[466]:     info: crm_client_new: Connecting 0xfcac10 for uid=496 gid=492 pid=469 id=d1a560c5-bbbd-475e-a3cb-6fca0227063d
Nov 13 13:44:22 vm3 crmd[469]:     info: do_lrm_control: LRM connection established
Nov 13 13:44:22 vm3 crmd[469]:     info: do_started: Delaying start, no membership data (0000000000100000)
Nov 13 13:44:22 vm3 crmd[469]:     info: pcmk_quorum_notification: Membership 12: quorum retained (3)
Nov 13 13:44:22 vm3 crmd[469]:     info: crm_get_peer: Created entry a2f5c386-a159-43d7-a313-c73e5ec261c8/0x28a6dc0 for node (null)/3232261517 (2 total)
Nov 13 13:44:22 vm3 crmd[469]:     info: crm_get_peer: Node 3232261517 has uuid 3232261517
Nov 13 13:44:22 vm3 crmd[469]:     info: pcmk_quorum_notification: Obtaining name for new node 3232261517
Nov 13 13:44:22 vm3 cib[464]:     info: cib_process_request: Completed cib_query operation for section crm_config: OK (rc=0, origin=local/crmd/4, version=0.0.0)
Nov 13 13:44:22 vm3 stonith-ng[465]:   notice: corosync_node_name: Unable to get node name for nodeid 3232261519
Nov 13 13:44:22 vm3 stonith-ng[465]:   notice: get_node_name: Defaulting to uname -n for the local corosync node name
Nov 13 13:44:22 vm3 stonith-ng[465]:     info: crm_get_peer: Created entry 09b9096c-6acc-48a3-9a8b-ae3b3f5f00da/0x1b041c0 for node (null)/3232261518 (3 total)
Nov 13 13:44:22 vm3 stonith-ng[465]:     info: crm_get_peer: Node 3232261518 has uuid 3232261518
Nov 13 13:44:22 vm3 stonith-ng[465]:     info: pcmk_cpg_membership: Member[0.1] stonith-ng.3232261518 
Nov 13 13:44:22 vm3 stonith-ng[465]:     info: crm_update_peer_proc: pcmk_cpg_membership: Node (null)[3232261518] - corosync-cpg is now online
Nov 13 13:44:22 vm3 stonith-ng[465]:     info: pcmk_cpg_membership: Member[0.2] stonith-ng.3232261519 
Nov 13 13:44:22 vm3 stonith-ng[465]:     info: init_cib_cache_cb: Updating device list from the cib: init
Nov 13 13:44:22 vm3 stonith-ng[465]:     info: unpack_nodes: Creating a fake local node
Nov 13 13:44:22 vm3 stonith-ng[465]:     info: crm_get_peer: Node 3232261517 is now known as vm1
Nov 13 13:44:22 vm3 stonith-ng[465]:     info: crm_get_peer: Node 3232261518 is now known as vm2
Nov 13 13:44:22 vm3 crmd[469]:   notice: corosync_node_name: Unable to get node name for nodeid 3232261517
Nov 13 13:44:22 vm3 crmd[469]:   notice: crm_update_peer_state: pcmk_quorum_notification: Node (null)[3232261517] - state is now member (was (null))
Nov 13 13:44:22 vm3 crmd[469]:     info: crm_get_peer: Created entry de18ede3-e03b-4761-b647-647f7cbe6228/0x28a48f0 for node (null)/3232261518 (3 total)
Nov 13 13:44:22 vm3 crmd[469]:     info: crm_get_peer: Node 3232261518 has uuid 3232261518
Nov 13 13:44:22 vm3 crmd[469]:     info: pcmk_quorum_notification: Obtaining name for new node 3232261518
Nov 13 13:44:22 vm3 crmd[469]:   notice: corosync_node_name: Unable to get node name for nodeid 3232261518
Nov 13 13:44:22 vm3 crmd[469]:   notice: crm_update_peer_state: pcmk_quorum_notification: Node (null)[3232261518] - state is now member (was (null))
Nov 13 13:44:22 vm3 crmd[469]:   notice: crm_update_peer_state: pcmk_quorum_notification: Node vm3[3232261519] - state is now member (was (null))
Nov 13 13:44:22 vm3 crmd[469]:     info: peer_update_callback: vm3 is now member (was (null))
Nov 13 13:44:22 vm3 crmd[469]:   notice: corosync_node_name: Unable to get node name for nodeid 3232261519
Nov 13 13:44:22 vm3 crmd[469]:   notice: get_node_name: Defaulting to uname -n for the local corosync node name
Nov 13 13:44:22 vm3 crmd[469]:     info: do_started: Delaying start, Config not read (0000000000000040)
Nov 13 13:44:22 vm3 crmd[469]:     info: qb_ipcs_us_publish: server name: crmd
Nov 13 13:44:22 vm3 crmd[469]:   notice: do_started: The local CRM is operational
Nov 13 13:44:22 vm3 crmd[469]:     info: do_log: FSA: Input I_PENDING from do_started() received in state S_STARTING
Nov 13 13:44:22 vm3 crmd[469]:   notice: do_state_transition: State transition S_STARTING -> S_PENDING [ input=I_PENDING cause=C_FSA_INTERNAL origin=do_started ]
Nov 13 13:44:22 vm3 cib[464]:     info: cib_process_request: Completed cib_slave operation for section 'all': OK (rc=0, origin=local/crmd/5, version=0.0.0)
Nov 13 13:44:23 vm3 crmd[469]:     info: pcmk_cpg_membership: Joined[0.0] crmd.3232261519 
Nov 13 13:44:23 vm3 crmd[469]:     info: pcmk_cpg_membership: Member[0.0] crmd.3232261517 
Nov 13 13:44:23 vm3 crmd[469]:     info: crm_update_peer_proc: pcmk_cpg_membership: Node (null)[3232261517] - corosync-cpg is now online
Nov 13 13:44:23 vm3 crmd[469]:     info: pcmk_cpg_membership: Member[0.1] crmd.3232261518 
Nov 13 13:44:23 vm3 crmd[469]:     info: crm_update_peer_proc: pcmk_cpg_membership: Node (null)[3232261518] - corosync-cpg is now online
Nov 13 13:44:23 vm3 crmd[469]:     info: pcmk_cpg_membership: Member[0.2] crmd.3232261519 
Nov 13 13:44:23 vm3 crmd[469]:     info: crm_get_peer: Node 3232261518 is now known as vm2
Nov 13 13:44:23 vm3 crmd[469]:     info: peer_update_callback: vm2 is now member
Nov 13 13:44:23 vm3 crmd[469]:     info: crm_get_peer: Node 3232261517 is now known as vm1
Nov 13 13:44:23 vm3 crmd[469]:     info: peer_update_callback: vm1 is now member
Nov 13 13:44:24 vm3 stonith-ng[465]:     info: crm_client_new: Connecting 0x1b0bc30 for uid=496 gid=492 pid=469 id=f2a5be16-b954-4398-be77-60a78e6c70a6
Nov 13 13:44:24 vm3 stonith-ng[465]:     info: stonith_command: Processed register from crmd.469: OK (0)
Nov 13 13:44:24 vm3 stonith-ng[465]:     info: stonith_command: Processed st_notify from crmd.469: OK (0)
Nov 13 13:44:24 vm3 stonith-ng[465]:     info: stonith_command: Processed st_notify from crmd.469: OK (0)
Nov 13 13:44:42 vm3 crmd[469]:     info: election_count_vote: Election 1 (owner: 3232261517) lost: vote from vm1 (Uptime)
Nov 13 13:44:42 vm3 crmd[469]:     info: do_log: FSA: Input I_PENDING from do_election_count_vote() received in state S_PENDING
Nov 13 13:44:43 vm3 cib[464]:   notice: corosync_node_name: Unable to get node name for nodeid 3232261519
Nov 13 13:44:43 vm3 cib[464]:   notice: get_node_name: Defaulting to uname -n for the local corosync node name
Nov 13 13:44:43 vm3 cib[464]:     info: crm_get_peer: Node 3232261517 is now known as vm1
Nov 13 13:44:43 vm3 cib[464]:     info: cib_process_request: Completed cib_apply_diff operation for section cib: OK (rc=0, origin=vm1/crmd/7, version=0.0.1)
Nov 13 13:44:43 vm3 crmd[469]:     info: update_dc: Set DC to vm1 (3.0.8)
Nov 13 13:44:43 vm3 cib[464]:     info: cib_process_request: Completed cib_apply_diff operation for section crm_config: OK (rc=0, origin=vm1/crmd/9, version=0.1.1)
Nov 13 13:44:43 vm3 cib[464]:     info: cib_process_request: Completed cib_query operation for section 'all': OK (rc=0, origin=local/crmd/6, version=0.1.1)
Nov 13 13:44:43 vm3 cib[464]:     info: cib_process_request: Completed cib_query operation for section crm_config: OK (rc=0, origin=local/crmd/7, version=0.1.1)
Nov 13 13:44:43 vm3 crmd[469]:     info: election_count_vote: Election 2 (owner: 3232261517) lost: vote from vm1 (Uptime)
Nov 13 13:44:43 vm3 crmd[469]:     info: update_dc: Unset DC. Was vm1
Nov 13 13:44:43 vm3 crmd[469]:     info: do_log: FSA: Input I_PENDING from do_election_count_vote() received in state S_PENDING
Nov 13 13:44:43 vm3 crmd[469]:     info: update_dc: Set DC to vm1 (3.0.8)
Nov 13 13:44:43 vm3 cib[464]:     info: cib_process_request: Completed cib_apply_diff operation for section crm_config: OK (rc=0, origin=vm1/crmd/11, version=0.2.1)
Nov 13 13:44:43 vm3 cib[464]:     info: cib_process_request: Completed cib_query operation for section 'all': OK (rc=0, origin=local/crmd/8, version=0.2.1)
Nov 13 13:44:43 vm3 cib[464]:     info: cib_process_request: Completed cib_query operation for section crm_config: OK (rc=0, origin=local/crmd/9, version=0.2.1)
Nov 13 13:44:43 vm3 crmd[469]:     info: erase_status_tag: Deleting xpath: //node_state[@uname='vm3']/transient_attributes
Nov 13 13:44:43 vm3 crmd[469]:     info: update_attrd_helper: Connecting to attrd... 5 retries remaining
Nov 13 13:44:43 vm3 attrd[467]:     info: crm_client_new: Connecting 0x1cd1010 for uid=496 gid=492 pid=469 id=45bf9972-26c8-429f-bfaa-3b8d09a6e952
Nov 13 13:44:43 vm3 cib[464]:     info: cib_process_replace: Digest matched on replace from vm1: 360fde11e7cf93696f974eea17cffd9b
Nov 13 13:44:43 vm3 cib[464]:     info: cib_process_replace: Replaced 0.2.1 with 0.2.1 from vm1
Nov 13 13:44:43 vm3 cib[464]:     info: cib_process_request: Completed cib_replace operation for section 'all': OK (rc=0, origin=vm1/crmd/17, version=0.2.1)
Nov 13 13:44:43 vm3 crmd[469]:     info: do_log: FSA: Input I_NOT_DC from do_cl_join_finalize_respond() received in state S_PENDING
Nov 13 13:44:43 vm3 crmd[469]:   notice: do_state_transition: State transition S_PENDING -> S_NOT_DC [ input=I_NOT_DC cause=C_HA_MESSAGE origin=do_cl_join_finalize_respond ]
Nov 13 13:44:43 vm3 attrd[467]:     info: attrd_client_message: Starting an election to determine the writer
Nov 13 13:44:43 vm3 cib[464]:     info: cib_process_request: Completed cib_apply_diff operation for section nodes: OK (rc=0, origin=vm1/crmd/18, version=0.3.1)
Nov 13 13:44:43 vm3 cib[464]:     info: cib_process_request: Completed cib_apply_diff operation for section nodes: OK (rc=0, origin=vm1/crmd/19, version=0.4.1)
Nov 13 13:44:43 vm3 cib[464]:     info: cib_process_request: Forwarding cib_delete operation for section //node_state[@uname='vm3']/transient_attributes to master (origin=local/crmd/10)
Nov 13 13:44:43 vm3 cib[478]:     info: write_cib_contents: Archived previous version as /var/lib/pacemaker/cib/cib-0.raw
Nov 13 13:44:43 vm3 attrd[467]:   notice: corosync_node_name: Unable to get node name for nodeid 3232261519
Nov 13 13:44:43 vm3 attrd[467]:   notice: get_node_name: Defaulting to uname -n for the local corosync node name
Nov 13 13:44:43 vm3 attrd[467]:     info: attrd_client_message: Broadcasting terminate[vm3] = (null)
Nov 13 13:44:43 vm3 attrd[467]:     info: attrd_client_message: Broadcasting shutdown[vm3] = (null)
Nov 13 13:44:43 vm3 cib[478]:     info: write_cib_contents: Wrote version 0.1.0 of the CIB to disk (digest: b8fe3a8159b940d26780cf9ea797cc0e)
Nov 13 13:44:43 vm3 cib[478]:     info: retrieveCib: Reading cluster configuration from: /var/lib/pacemaker/cib/cib.lZ8hFu (digest: /var/lib/pacemaker/cib/cib.gsdjuG)
Nov 13 13:44:43 vm3 cib[464]:     info: cib_process_request: Completed cib_apply_diff operation for section nodes: OK (rc=0, origin=vm1/crmd/20, version=0.5.1)
Nov 13 13:44:43 vm3 attrd[467]:     info: crm_get_peer: Node 3232261517 is now known as vm1
Nov 13 13:44:43 vm3 attrd[467]:     info: election_count_vote: Election 1 (owner: 3232261517) lost: vote from vm1 (Uptime)
Nov 13 13:44:43 vm3 cib[464]:     info: cib_process_request: Completed cib_apply_diff operation for section status: OK (rc=0, origin=vm1/crmd/23, version=0.5.2)
Nov 13 13:44:43 vm3 cib[464]:     info: crm_get_peer: Node 3232261518 is now known as vm2
Nov 13 13:44:43 vm3 attrd[467]:     info: election_count_vote: Election 2 (owner: 3232261517) lost: vote from vm1 (Uptime)
Nov 13 13:44:43 vm3 cib[464]:     info: cib_process_request: Completed cib_apply_diff operation for section status: OK (rc=0, origin=vm1/crmd/25, version=0.5.3)
Nov 13 13:44:43 vm3 cib[464]:     info: cib_process_request: Completed cib_apply_diff operation for section status: OK (rc=0, origin=vm1/crmd/27, version=0.5.4)
Nov 13 13:44:43 vm3 cib[464]:     info: cib_process_request: Completed cib_apply_diff operation for section cib: OK (rc=0, origin=vm1/crmd/30, version=0.5.5)
Nov 13 13:44:43 vm3 attrd[467]:     info: crm_get_peer: Node 3232261518 is now known as vm2
Nov 13 13:44:43 vm3 attrd[467]:     info: election_count_vote: Election 1 (owner: 3232261518) lost: vote from vm2 (Uptime)
Nov 13 13:44:43 vm3 attrd[467]:     info: election_count_vote: Election 2 (owner: 3232261518) lost: vote from vm2 (Uptime)
Nov 13 13:44:43 vm3 cib[479]:     info: write_cib_contents: Archived previous version as /var/lib/pacemaker/cib/cib-1.raw
Nov 13 13:44:43 vm3 attrd[467]:     info: election_count_vote: Election 3 (owner: 3232261518) lost: vote from vm2 (Uptime)
Nov 13 13:44:43 vm3 attrd[467]:     info: election_count_vote: Election 4 (owner: 3232261518) lost: vote from vm2 (Uptime)
Nov 13 13:44:43 vm3 cib[479]:     info: write_cib_contents: Wrote version 0.5.0 of the CIB to disk (digest: 630d79f602055b52fd2ea79fdbd1baf8)
Nov 13 13:44:43 vm3 attrd[467]:     info: attrd_client_message: Broadcasting probe_complete[vm3] = true
Nov 13 13:44:43 vm3 cib[479]:     info: retrieveCib: Reading cluster configuration from: /var/lib/pacemaker/cib/cib.wdhXOy (digest: /var/lib/pacemaker/cib/cib.VixlNK)
Nov 13 13:44:43 vm3 attrd[467]:   notice: attrd_peer_message: Processing sync-response from vm2
Nov 13 13:44:43 vm3 cib[464]:     info: cib_process_request: Completed cib_apply_diff operation for section status: OK (rc=0, origin=vm1/attrd/2, version=0.5.6)
Nov 13 13:44:43 vm3 cib[464]:     info: cib_process_request: Completed cib_apply_diff operation for section status: OK (rc=0, origin=vm1/attrd/4, version=0.5.7)
Nov 13 13:44:43 vm3 cib[464]:     info: cib_process_request: Completed cib_apply_diff operation for section status: OK (rc=0, origin=vm1/attrd/5, version=0.5.8)
Nov 13 13:44:52 vm3 crmd[469]:     info: throttle_send_command: Updated throttle state to 0000
Nov 13 13:45:33 vm3 crmd[469]:     info: election_count_vote: Election 3 (owner: 3232261517) lost: vote from vm1 (Uptime)
Nov 13 13:45:33 vm3 crmd[469]:     info: update_dc: Unset DC. Was vm1
Nov 13 13:45:33 vm3 crmd[469]:     info: do_log: FSA: Input I_PENDING from do_election_count_vote() received in state S_NOT_DC
Nov 13 13:45:33 vm3 crmd[469]:   notice: do_state_transition: State transition S_NOT_DC -> S_PENDING [ input=I_PENDING cause=C_FSA_INTERNAL origin=do_election_count_vote ]
Nov 13 13:45:33 vm3 cib[464]:     info: cib_process_request: Completed cib_slave operation for section 'all': OK (rc=0, origin=local/crmd/11, version=0.5.8)
Nov 13 13:45:33 vm3 stonith-ng[465]:     info: stonith_level_remove: Node vm3 not found (0 active entries)
Nov 13 13:45:33 vm3 stonith-ng[465]:     info: stonith_level_register: Node vm3 has 1 active fencing levels
Nov 13 13:45:33 vm3 cib[464]:     info: cib_process_request: Completed cib_apply_diff operation for section 'all': OK (rc=0, origin=vm1/cibadmin/2, version=0.6.1)
Nov 13 13:45:33 vm3 cib[464]:     info: cib_process_request: Completed cib_query operation for section crm_config: OK (rc=0, origin=local/crmd/12, version=0.6.1)
Nov 13 13:45:33 vm3 stonith-ng[465]:     info: update_cib_stonith_devices: Updating device list from the cib: new resource
Nov 13 13:45:33 vm3 stonith-ng[465]:  warning: handle_startup_fencing: Blind faith: not fencing unseen nodes
Nov 13 13:45:33 vm3 stonith-ng[465]:     info: cib_device_update: Device F1 has been disabled on vm3: score=-INFINITY
Nov 13 13:45:33 vm3 cib[487]:     info: write_cib_contents: Archived previous version as /var/lib/pacemaker/cib/cib-2.raw
Nov 13 13:45:33 vm3 cib[487]:     info: write_cib_contents: Wrote version 0.6.0 of the CIB to disk (digest: 2db643db6cb3c3f1825600265949deb4)
Nov 13 13:45:33 vm3 crmd[469]:     info: update_dc: Set DC to vm1 (3.0.8)
Nov 13 13:45:33 vm3 crmd[469]:     info: election_count_vote: Election 4 (owner: 3232261517) lost: vote from vm1 (Uptime)
Nov 13 13:45:33 vm3 crmd[469]:     info: update_dc: Unset DC. Was vm1
Nov 13 13:45:33 vm3 crmd[469]:     info: do_log: FSA: Input I_PENDING from do_election_count_vote() received in state S_PENDING
Nov 13 13:45:33 vm3 crmd[469]:     info: update_dc: Set DC to vm1 (3.0.8)
Nov 13 13:45:33 vm3 cib[487]:     info: retrieveCib: Reading cluster configuration from: /var/lib/pacemaker/cib/cib.2qZrBi (digest: /var/lib/pacemaker/cib/cib.T7iEhI)
Nov 13 13:45:33 vm3 cib[464]:     info: cib_process_request: Completed cib_apply_diff operation for section crm_config: OK (rc=0, origin=vm1/crmd/39, version=0.7.1)
Nov 13 13:45:33 vm3 cib[464]:     info: cib_process_request: Completed cib_query operation for section 'all': OK (rc=0, origin=local/crmd/13, version=0.7.1)
Nov 13 13:45:33 vm3 cib[464]:     info: cib_process_request: Completed cib_query operation for section crm_config: OK (rc=0, origin=local/crmd/14, version=0.7.1)
Nov 13 13:45:33 vm3 cib[464]:     info: cib_process_request: Completed cib_query operation for section 'all': OK (rc=0, origin=local/crmd/15, version=0.7.1)
Nov 13 13:45:33 vm3 cib[464]:     info: cib_process_request: Completed cib_apply_diff operation for section crm_config: OK (rc=0, origin=vm1/crmd/41, version=0.8.1)
Nov 13 13:45:33 vm3 cib[464]:     info: cib_process_request: Completed cib_query operation for section crm_config: OK (rc=0, origin=local/crmd/16, version=0.8.1)
Nov 13 13:45:33 vm3 cib[464]:     info: cib_process_replace: Digest matched on replace from vm1: b65668c649a0f8a465a42db6c017bc19
Nov 13 13:45:33 vm3 cib[464]:     info: cib_process_replace: Replaced 0.8.1 with 0.8.1 from vm1
Nov 13 13:45:33 vm3 cib[488]:     info: write_cib_contents: Archived previous version as /var/lib/pacemaker/cib/cib-3.raw
Nov 13 13:45:33 vm3 crmd[469]:     info: do_log: FSA: Input I_NOT_DC from do_cl_join_finalize_respond() received in state S_PENDING
Nov 13 13:45:33 vm3 crmd[469]:   notice: do_state_transition: State transition S_PENDING -> S_NOT_DC [ input=I_NOT_DC cause=C_HA_MESSAGE origin=do_cl_join_finalize_respond ]
Nov 13 13:45:33 vm3 cib[464]:     info: cib_process_request: Completed cib_replace operation for section 'all': OK (rc=0, origin=vm1/crmd/47, version=0.8.1)
Nov 13 13:45:33 vm3 cib[488]:     info: write_cib_contents: Wrote version 0.8.0 of the CIB to disk (digest: 9db35554f5ac4e48336f1bae33d89abc)
Nov 13 13:45:33 vm3 cib[488]:     info: retrieveCib: Reading cluster configuration from: /var/lib/pacemaker/cib/cib.ivmZKn (digest: /var/lib/pacemaker/cib/cib.fbDlGN)
Nov 13 13:45:33 vm3 cib[464]:     info: cib_process_request: Completed cib_apply_diff operation for section //node_state[@uname='vm3']/lrm: OK (rc=0, origin=vm1/crmd/51, version=0.8.2)
Nov 13 13:45:33 vm3 cib[464]:     info: cib_process_request: Completed cib_apply_diff operation for section status: OK (rc=0, origin=vm1/crmd/52, version=0.8.3)
Nov 13 13:45:33 vm3 cib[464]:     info: cib_process_request: Completed cib_apply_diff operation for section //node_state[@uname='vm1']/lrm: OK (rc=0, origin=vm1/crmd/53, version=0.8.4)
Nov 13 13:45:33 vm3 cib[489]:     info: write_cib_contents: Archived previous version as /var/lib/pacemaker/cib/cib-4.raw
Nov 13 13:45:33 vm3 cib[489]:     info: write_cib_contents: Wrote version 0.8.0 of the CIB to disk (digest: 9db35554f5ac4e48336f1bae33d89abc)
Nov 13 13:45:33 vm3 cib[489]:     info: retrieveCib: Reading cluster configuration from: /var/lib/pacemaker/cib/cib.yntCRv (digest: /var/lib/pacemaker/cib/cib.cKNXXV)
Nov 13 13:45:33 vm3 cib[464]:     info: cib_process_request: Completed cib_apply_diff operation for section status: OK (rc=0, origin=vm1/crmd/54, version=0.8.5)
Nov 13 13:45:33 vm3 cib[464]:     info: cib_process_request: Completed cib_apply_diff operation for section //node_state[@uname='vm2']/lrm: OK (rc=0, origin=vm1/crmd/55, version=0.8.6)
Nov 13 13:45:33 vm3 cib[464]:     info: cib_process_request: Completed cib_apply_diff operation for section status: OK (rc=0, origin=vm1/crmd/56, version=0.8.7)
Nov 13 13:45:35 vm3 lrmd[466]:     info: process_lrmd_get_rsc_info: Resource 'F1' not found (0 active resources)
Nov 13 13:45:35 vm3 lrmd[466]:     info: process_lrmd_rsc_register: Added 'F1' to the rsc list (1 active resources)
Nov 13 13:45:35 vm3 crmd[469]:     info: do_lrm_rsc_op: Performing key=10:1:7:154fb289-24e8-407e-9a03-69a510480b60 op=F1_monitor_0
Nov 13 13:45:35 vm3 stonith-ng[465]:     info: crm_client_new: Connecting 0x1b3c150 for uid=0 gid=0 pid=466 id=68ee14d5-71bb-4403-b659-fc37fabfd715
Nov 13 13:45:35 vm3 stonith-ng[465]:     info: stonith_command: Processed register from lrmd.466: OK (0)
Nov 13 13:45:35 vm3 lrmd[466]:     info: process_lrmd_get_rsc_info: Resource 'pDummy' not found (1 active resources)
Nov 13 13:45:35 vm3 lrmd[466]:     info: process_lrmd_rsc_register: Added 'pDummy' to the rsc list (2 active resources)
Nov 13 13:45:35 vm3 crmd[469]:     info: do_lrm_rsc_op: Performing key=11:1:7:154fb289-24e8-407e-9a03-69a510480b60 op=pDummy_monitor_0
Nov 13 13:45:35 vm3 stonith-ng[465]:     info: stonith_command: Processed st_notify from lrmd.466: OK (0)
Nov 13 13:45:35 vm3 Dummy(pDummy)[490]: DEBUG: pDummy monitor : 7
Nov 13 13:45:36 vm3 cib[464]:     info: cib_process_request: Completed cib_apply_diff operation for section status: OK (rc=0, origin=vm1/crmd/17, version=0.8.8)
Nov 13 13:45:36 vm3 cib[464]:     info: cib_process_request: Completed cib_apply_diff operation for section status: OK (rc=0, origin=vm1/crmd/61, version=0.8.9)
Nov 13 13:45:36 vm3 cib[464]:     info: cib_process_request: Forwarding cib_modify operation for section status to master (origin=local/crmd/17)
Nov 13 13:45:36 vm3 crmd[469]:     info: process_lrm_event: LRM operation F1_monitor_0 (call=5, rc=7, cib-update=17, confirmed=true) not running
Nov 13 13:45:36 vm3 cib[464]:     info: cib_process_request: Completed cib_apply_diff operation for section status: OK (rc=0, origin=vm1/crmd/17, version=0.8.10)
Nov 13 13:45:36 vm3 cib[464]:     info: cib_process_request: Completed cib_apply_diff operation for section status: OK (rc=0, origin=vm1/crmd/18, version=0.8.11)
Nov 13 13:45:36 vm3 cib[464]:     info: cib_process_request: Completed cib_apply_diff operation for section status: OK (rc=0, origin=vm1/crmd/62, version=0.8.12)
Nov 13 13:45:36 vm3 crmd[469]:     info: services_os_action_execute: Managed Dummy_meta-data_0 process 512 exited with rc=0
Nov 13 13:45:36 vm3 crmd[469]:   notice: process_lrm_event: LRM operation pDummy_monitor_0 (call=9, rc=7, cib-update=18, confirmed=true) not running
Nov 13 13:45:36 vm3 cib[464]:     info: cib_process_request: Forwarding cib_modify operation for section status to master (origin=local/crmd/18)
Nov 13 13:45:36 vm3 attrd[467]:     info: attrd_client_message: Broadcasting probe_complete[vm3] = true
Nov 13 13:45:36 vm3 crmd[469]:     info: do_lrm_rsc_op: Performing key=14:1:0:154fb289-24e8-407e-9a03-69a510480b60 op=pDummy_start_0
Nov 13 13:45:36 vm3 cib[464]:     info: cib_process_request: Completed cib_apply_diff operation for section status: OK (rc=0, origin=vm1/crmd/18, version=0.8.13)
Nov 13 13:45:36 vm3 lrmd[466]:     info: log_execute: executing - rsc:pDummy action:start call_id:10
Nov 13 13:45:36 vm3 Dummy(pDummy)[516]: DEBUG: pDummy start : 0
Nov 13 13:45:36 vm3 lrmd[466]:     info: log_finished: finished - rsc:pDummy action:start call_id:10 pid:516 exit-code:0 exec-time:51ms queue-time:0ms
Nov 13 13:45:36 vm3 crmd[469]:   notice: process_lrm_event: LRM operation pDummy_start_0 (call=10, rc=0, cib-update=19, confirmed=true) ok
Nov 13 13:45:36 vm3 cib[464]:     info: cib_process_request: Forwarding cib_modify operation for section status to master (origin=local/crmd/19)
Nov 13 13:45:36 vm3 crmd[469]:     info: do_lrm_rsc_op: Performing key=15:1:0:154fb289-24e8-407e-9a03-69a510480b60 op=pDummy_monitor_10000
Nov 13 13:45:36 vm3 cib[464]:     info: cib_process_request: Completed cib_apply_diff operation for section status: OK (rc=0, origin=vm1/crmd/19, version=0.8.14)
Nov 13 13:45:36 vm3 Dummy(pDummy)[525]: DEBUG: pDummy monitor : 0
Nov 13 13:45:36 vm3 crmd[469]:   notice: process_lrm_event: LRM operation pDummy_monitor_10000 (call=11, rc=0, cib-update=20, confirmed=false) ok
Nov 13 13:45:36 vm3 cib[464]:     info: cib_process_request: Forwarding cib_modify operation for section status to master (origin=local/crmd/20)
Nov 13 13:45:36 vm3 cib[464]:     info: cib_process_request: Completed cib_apply_diff operation for section status: OK (rc=0, origin=vm1/crmd/20, version=0.8.15)
Nov 13 13:45:38 vm3 cib[464]:     info: cib_process_request: Completed cib_apply_diff operation for section status: OK (rc=0, origin=vm1/crmd/63, version=0.8.16)
Nov 13 13:45:39 vm3 cib[464]:     info: cib_process_request: Completed cib_apply_diff operation for section status: OK (rc=0, origin=vm1/crmd/64, version=0.8.17)
Nov 13 13:45:46 vm3 Dummy(pDummy)[552]: DEBUG: pDummy monitor : 0
Nov 13 13:45:56 vm3 Dummy(pDummy)[561]: DEBUG: pDummy monitor : 0
Nov 13 13:46:06 vm3 Dummy(pDummy)[569]: DEBUG: pDummy monitor : 0
Nov 13 13:46:16 vm3 Dummy(pDummy)[577]: DEBUG: pDummy monitor : 0
Nov 13 13:46:26 vm3 Dummy(pDummy)[586]: DEBUG: pDummy monitor : 0
Nov 13 13:46:36 vm3 Dummy(pDummy)[600]: DEBUG: pDummy monitor : 0
Nov 13 13:46:46 vm3 Dummy(pDummy)[608]: DEBUG: pDummy monitor : 0
Nov 13 13:46:56 vm3 Dummy(pDummy)[616]: DEBUG: pDummy monitor : 0
Nov 13 13:47:06 vm3 Dummy(pDummy)[661]: DEBUG: pDummy monitor : 7
Nov 13 13:47:06 vm3 crmd[469]:   notice: process_lrm_event: LRM operation pDummy_monitor_10000 (call=11, rc=7, cib-update=21, confirmed=false) not running
Nov 13 13:47:06 vm3 cib[464]:     info: cib_process_request: Forwarding cib_modify operation for section status to master (origin=local/crmd/21)
Nov 13 13:47:06 vm3 cib[464]:     info: cib_process_request: Completed cib_apply_diff operation for section status: OK (rc=0, origin=vm1/crmd/21, version=0.8.18)
Nov 13 13:47:06 vm3 cib[464]:     info: cib_process_request: Completed cib_apply_diff operation for section status: OK (rc=0, origin=vm1/attrd/6, version=0.8.19)
Nov 13 13:47:06 vm3 cib[464]:     info: cib_process_request: Completed cib_apply_diff operation for section status: OK (rc=0, origin=vm1/attrd/7, version=0.8.20)
Nov 13 13:47:08 vm3 stonith-ng[465]:     info: stonith_command: Processed st_query from vm1: OK (0)
Nov 13 13:47:12 vm3 stonith-ng[465]:   notice: remote_op_done: Operation reboot of vm3 by vm1 for crmd.15883@vm1.696fb2c3: Generic Pacemaker error
Nov 13 13:47:12 vm3 crmd[469]:   notice: tengine_stonith_notify: Peer vm3 was not terminated (reboot) by vm1 for vm1: Generic Pacemaker error (ref=696fb2c3-e11a-4124-ba9b-bafc9ab28426) by client crmd.15883
Nov 13 13:47:12 vm3 stonith-ng[465]:     info: stonith_command: Processed st_notify reply from vm1: OK (0)
Nov 13 13:47:14 vm3 stonith-ng[465]:     info: stonith_command: Processed st_query from vm1: OK (0)
Nov 13 13:47:16 vm3 Dummy(pDummy)[669]: DEBUG: pDummy monitor : 7
Nov 13 13:47:17 vm3 stonith-ng[465]:   notice: remote_op_done: Operation reboot of vm3 by vm1 for crmd.15883@vm1.431c7488: Generic Pacemaker error
Nov 13 13:47:17 vm3 crmd[469]:   notice: tengine_stonith_notify: Peer vm3 was not terminated (reboot) by vm1 for vm1: Generic Pacemaker error (ref=431c7488-013e-4900-bde7-a3ce154b35a3) by client crmd.15883
Nov 13 13:47:17 vm3 stonith-ng[465]:     info: stonith_command: Processed st_notify reply from vm1: OK (0)
Nov 13 13:47:19 vm3 stonith-ng[465]:     info: stonith_command: Processed st_query from vm1: OK (0)
Nov 13 13:47:22 vm3 stonith-ng[465]:   notice: remote_op_done: Operation reboot of vm3 by vm1 for crmd.15883@vm1.682bdc12: Generic Pacemaker error
Nov 13 13:47:22 vm3 crmd[469]:   notice: tengine_stonith_notify: Peer vm3 was not terminated (reboot) by vm1 for vm1: Generic Pacemaker error (ref=682bdc12-35a4-431a-8773-4862cc8c39ef) by client crmd.15883
Nov 13 13:47:22 vm3 stonith-ng[465]:     info: stonith_command: Processed st_notify reply from vm1: OK (0)
Nov 13 13:47:24 vm3 stonith-ng[465]:     info: stonith_command: Processed st_query from vm1: OK (0)
Nov 13 13:47:26 vm3 Dummy(pDummy)[677]: DEBUG: pDummy monitor : 7
Nov 13 13:47:27 vm3 stonith-ng[465]:   notice: remote_op_done: Operation reboot of vm3 by vm1 for crmd.15883@vm1.d761e73f: Generic Pacemaker error
Nov 13 13:47:27 vm3 crmd[469]:   notice: tengine_stonith_notify: Peer vm3 was not terminated (reboot) by vm1 for vm1: Generic Pacemaker error (ref=d761e73f-f337-48cc-b2a1-5b2d722d2738) by client crmd.15883
Nov 13 13:47:27 vm3 stonith-ng[465]:     info: stonith_command: Processed st_notify reply from vm1: OK (0)
Nov 13 13:47:29 vm3 stonith-ng[465]:     info: stonith_command: Processed st_query from vm1: OK (0)
Nov 13 13:47:33 vm3 stonith-ng[465]:   notice: remote_op_done: Operation reboot of vm3 by vm1 for crmd.15883@vm1.11df91ab: Generic Pacemaker error
Nov 13 13:47:33 vm3 crmd[469]:   notice: tengine_stonith_notify: Peer vm3 was not terminated (reboot) by vm1 for vm1: Generic Pacemaker error (ref=11df91ab-fc81-43aa-941d-ffa1204df1c9) by client crmd.15883
Nov 13 13:47:33 vm3 stonith-ng[465]:     info: stonith_command: Processed st_notify reply from vm1: OK (0)
Nov 13 13:47:35 vm3 stonith-ng[465]:     info: stonith_command: Processed st_query from vm1: OK (0)
Nov 13 13:47:36 vm3 Dummy(pDummy)[695]: DEBUG: pDummy monitor : 7
Nov 13 13:47:38 vm3 stonith-ng[465]:   notice: remote_op_done: Operation reboot of vm3 by vm1 for crmd.15883@vm1.84777767: Generic Pacemaker error
Nov 13 13:47:38 vm3 crmd[469]:   notice: tengine_stonith_notify: Peer vm3 was not terminated (reboot) by vm1 for vm1: Generic Pacemaker error (ref=84777767-aa8b-4e04-8dec-b26dae36aaff) by client crmd.15883
Nov 13 13:47:38 vm3 stonith-ng[465]:     info: stonith_command: Processed st_notify reply from vm1: OK (0)
Nov 13 13:47:40 vm3 stonith-ng[465]:     info: stonith_command: Processed st_query from vm1: OK (0)
Nov 13 13:47:43 vm3 stonith-ng[465]:   notice: remote_op_done: Operation reboot of vm3 by vm1 for crmd.15883@vm1.588ca7d3: Generic Pacemaker error
Nov 13 13:47:43 vm3 crmd[469]:   notice: tengine_stonith_notify: Peer vm3 was not terminated (reboot) by vm1 for vm1: Generic Pacemaker error (ref=588ca7d3-cb01-4e1b-9c7d-5fcdd5b66a27) by client crmd.15883
Nov 13 13:47:43 vm3 stonith-ng[465]:     info: stonith_command: Processed st_notify reply from vm1: OK (0)
Nov 13 13:47:45 vm3 stonith-ng[465]:     info: stonith_command: Processed st_query from vm1: OK (0)
Nov 13 13:47:46 vm3 Dummy(pDummy)[704]: DEBUG: pDummy monitor : 7
Nov 13 13:47:48 vm3 stonith-ng[465]:   notice: remote_op_done: Operation reboot of vm3 by vm1 for crmd.15883@vm1.a3379e0c: Generic Pacemaker error
Nov 13 13:47:48 vm3 crmd[469]:   notice: tengine_stonith_notify: Peer vm3 was not terminated (reboot) by vm1 for vm1: Generic Pacemaker error (ref=a3379e0c-d206-4ced-9e7e-1c915f08a0ae) by client crmd.15883
Nov 13 13:47:48 vm3 stonith-ng[465]:     info: stonith_command: Processed st_notify reply from vm1: OK (0)
Nov 13 13:47:50 vm3 stonith-ng[465]:     info: stonith_command: Processed st_query from vm1: OK (0)
Nov 13 13:47:54 vm3 stonith-ng[465]:   notice: remote_op_done: Operation reboot of vm3 by vm1 for crmd.15883@vm1.9ab4c26b: Generic Pacemaker error
Nov 13 13:47:54 vm3 crmd[469]:   notice: tengine_stonith_notify: Peer vm3 was not terminated (reboot) by vm1 for vm1: Generic Pacemaker error (ref=9ab4c26b-da3e-40cd-ba98-c89017db4953) by client crmd.15883
Nov 13 13:47:54 vm3 stonith-ng[465]:     info: stonith_command: Processed st_notify reply from vm1: OK (0)
Nov 13 13:47:56 vm3 stonith-ng[465]:     info: stonith_command: Processed st_query from vm1: OK (0)
Nov 13 13:47:56 vm3 Dummy(pDummy)[713]: DEBUG: pDummy monitor : 7
Nov 13 13:47:59 vm3 stonith-ng[465]:   notice: remote_op_done: Operation reboot of vm3 by vm1 for crmd.15883@vm1.1ba836f2: Generic Pacemaker error
Nov 13 13:47:59 vm3 crmd[469]:   notice: tengine_stonith_notify: Peer vm3 was not terminated (reboot) by vm1 for vm1: Generic Pacemaker error (ref=1ba836f2-328d-45c7-adbb-1db9b0a1ca4c) by client crmd.15883
Nov 13 13:47:59 vm3 stonith-ng[465]:     info: stonith_command: Processed st_notify reply from vm1: OK (0)
Nov 13 13:48:01 vm3 stonith-ng[465]:     info: stonith_command: Processed st_query from vm1: OK (0)
Nov 13 13:48:04 vm3 stonith-ng[465]:   notice: remote_op_done: Operation reboot of vm3 by vm1 for crmd.15883@vm1.00825b71: Generic Pacemaker error
Nov 13 13:48:04 vm3 crmd[469]:   notice: tengine_stonith_notify: Peer vm3 was not terminated (reboot) by vm1 for vm1: Generic Pacemaker error (ref=00825b71-24e3-4f14-a0b8-6945f050dfd1) by client crmd.15883
Nov 13 13:48:04 vm3 stonith-ng[465]:     info: stonith_command: Processed st_notify reply from vm1: OK (0)
Nov 13 13:48:06 vm3 Dummy(pDummy)[740]: DEBUG: pDummy monitor : 7
Nov 13 13:48:17 vm3 Dummy(pDummy)[749]: DEBUG: pDummy monitor : 7
Nov 13 13:48:27 vm3 Dummy(pDummy)[776]: DEBUG: pDummy monitor : 7
Nov 13 13:48:37 vm3 Dummy(pDummy)[790]: DEBUG: pDummy monitor : 7
Nov 13 13:48:47 vm3 Dummy(pDummy)[798]: DEBUG: pDummy monitor : 7
Nov 13 13:48:57 vm3 Dummy(pDummy)[824]: DEBUG: pDummy monitor : 7
Nov 13 13:49:07 vm3 Dummy(pDummy)[832]: DEBUG: pDummy monitor : 7
Nov 13 13:49:17 vm3 Dummy(pDummy)[840]: DEBUG: pDummy monitor : 7
Nov 13 13:49:27 vm3 Dummy(pDummy)[848]: DEBUG: pDummy monitor : 7
Nov 13 13:49:37 vm3 Dummy(pDummy)[863]: DEBUG: pDummy monitor : 7
Nov 13 13:49:47 vm3 Dummy(pDummy)[871]: DEBUG: pDummy monitor : 7
Nov 13 13:49:57 vm3 Dummy(pDummy)[879]: DEBUG: pDummy monitor : 7
Nov 13 13:50:07 vm3 Dummy(pDummy)[890]: DEBUG: pDummy monitor : 7
Nov 13 13:50:17 vm3 Dummy(pDummy)[898]: DEBUG: pDummy monitor : 7
Nov 13 13:50:27 vm3 Dummy(pDummy)[906]: DEBUG: pDummy monitor : 7
Nov 13 13:50:37 vm3 Dummy(pDummy)[920]: DEBUG: pDummy monitor : 7
Nov 13 13:50:47 vm3 Dummy(pDummy)[929]: DEBUG: pDummy monitor : 7
Nov 13 13:50:57 vm3 Dummy(pDummy)[937]: DEBUG: pDummy monitor : 7
Nov 13 13:51:07 vm3 Dummy(pDummy)[945]: DEBUG: pDummy monitor : 7
Nov 13 13:51:17 vm3 Dummy(pDummy)[953]: DEBUG: pDummy monitor : 7
Nov 13 13:51:27 vm3 Dummy(pDummy)[962]: DEBUG: pDummy monitor : 7
Nov 13 13:51:37 vm3 Dummy(pDummy)[976]: DEBUG: pDummy monitor : 7
Nov 13 13:51:47 vm3 Dummy(pDummy)[984]: DEBUG: pDummy monitor : 7
Nov 13 13:51:58 vm3 Dummy(pDummy)[992]: DEBUG: pDummy monitor : 7
Nov 13 13:52:08 vm3 Dummy(pDummy)[1001]: DEBUG: pDummy monitor : 7
Nov 13 13:52:18 vm3 Dummy(pDummy)[1009]: DEBUG: pDummy monitor : 7
Nov 13 13:52:28 vm3 Dummy(pDummy)[1019]: DEBUG: pDummy monitor : 7
Nov 13 13:52:38 vm3 Dummy(pDummy)[1034]: DEBUG: pDummy monitor : 7
Nov 13 13:52:48 vm3 Dummy(pDummy)[1042]: DEBUG: pDummy monitor : 7
Nov 13 13:52:58 vm3 Dummy(pDummy)[1050]: DEBUG: pDummy monitor : 7
Nov 13 13:53:08 vm3 Dummy(pDummy)[1058]: DEBUG: pDummy monitor : 7
Nov 13 13:53:18 vm3 Dummy(pDummy)[1069]: DEBUG: pDummy monitor : 7
Nov 13 13:53:28 vm3 Dummy(pDummy)[1077]: DEBUG: pDummy monitor : 7
Nov 13 13:53:38 vm3 Dummy(pDummy)[1091]: DEBUG: pDummy monitor : 7
Nov 13 13:53:48 vm3 Dummy(pDummy)[1099]: DEBUG: pDummy monitor : 7
Nov 13 13:53:58 vm3 Dummy(pDummy)[1109]: DEBUG: pDummy monitor : 7
Nov 13 13:54:08 vm3 Dummy(pDummy)[1117]: DEBUG: pDummy monitor : 7
Nov 13 13:54:18 vm3 Dummy(pDummy)[1125]: DEBUG: pDummy monitor : 7
Nov 13 13:54:28 vm3 Dummy(pDummy)[1133]: DEBUG: pDummy monitor : 7
Nov 13 13:54:38 vm3 Dummy(pDummy)[1148]: DEBUG: pDummy monitor : 7
Nov 13 13:54:48 vm3 Dummy(pDummy)[1156]: DEBUG: pDummy monitor : 7
Nov 13 13:54:58 vm3 Dummy(pDummy)[1164]: DEBUG: pDummy monitor : 7
Nov 13 13:55:08 vm3 Dummy(pDummy)[1173]: DEBUG: pDummy monitor : 7
Nov 13 13:55:18 vm3 Dummy(pDummy)[1181]: DEBUG: pDummy monitor : 7
Nov 13 13:55:28 vm3 Dummy(pDummy)[1189]: DEBUG: pDummy monitor : 7
Nov 13 13:55:39 vm3 Dummy(pDummy)[1203]: DEBUG: pDummy monitor : 7
Nov 13 13:55:49 vm3 Dummy(pDummy)[1212]: DEBUG: pDummy monitor : 7
Nov 13 13:55:59 vm3 Dummy(pDummy)[1220]: DEBUG: pDummy monitor : 7
Nov 13 13:56:09 vm3 Dummy(pDummy)[1228]: DEBUG: pDummy monitor : 7
Nov 13 13:56:19 vm3 Dummy(pDummy)[1236]: DEBUG: pDummy monitor : 7
Nov 13 13:56:29 vm3 Dummy(pDummy)[1245]: DEBUG: pDummy monitor : 7
Nov 13 13:56:39 vm3 Dummy(pDummy)[1259]: DEBUG: pDummy monitor : 7
Nov 13 13:56:49 vm3 Dummy(pDummy)[1267]: DEBUG: pDummy monitor : 7
Nov 13 13:56:59 vm3 Dummy(pDummy)[1275]: DEBUG: pDummy monitor : 7
Nov 13 13:57:09 vm3 Dummy(pDummy)[1284]: DEBUG: pDummy monitor : 7
Nov 13 13:57:19 vm3 Dummy(pDummy)[1293]: DEBUG: pDummy monitor : 7
Nov 13 13:57:29 vm3 Dummy(pDummy)[1301]: DEBUG: pDummy monitor : 7
Nov 13 13:57:39 vm3 Dummy(pDummy)[1316]: DEBUG: pDummy monitor : 7
Nov 13 13:57:49 vm3 Dummy(pDummy)[1324]: DEBUG: pDummy monitor : 7
Nov 13 13:57:59 vm3 Dummy(pDummy)[1332]: DEBUG: pDummy monitor : 7
Nov 13 13:58:09 vm3 Dummy(pDummy)[1340]: DEBUG: pDummy monitor : 7
Nov 13 13:58:19 vm3 Dummy(pDummy)[1349]: DEBUG: pDummy monitor : 7
Nov 13 13:58:29 vm3 Dummy(pDummy)[1357]: DEBUG: pDummy monitor : 7
Nov 13 13:58:39 vm3 Dummy(pDummy)[1371]: DEBUG: pDummy monitor : 7
Nov 13 13:58:49 vm3 Dummy(pDummy)[1379]: DEBUG: pDummy monitor : 7
Nov 13 13:58:59 vm3 Dummy(pDummy)[1388]: DEBUG: pDummy monitor : 7
Nov 13 13:59:09 vm3 Dummy(pDummy)[1396]: DEBUG: pDummy monitor : 7
Nov 13 13:59:20 vm3 Dummy(pDummy)[1404]: DEBUG: pDummy monitor : 7
Nov 13 13:59:30 vm3 Dummy(pDummy)[1412]: DEBUG: pDummy monitor : 7
Nov 13 13:59:40 vm3 Dummy(pDummy)[1432]: DEBUG: pDummy monitor : 7
Nov 13 13:59:50 vm3 Dummy(pDummy)[1440]: DEBUG: pDummy monitor : 7
Nov 13 14:00:00 vm3 Dummy(pDummy)[1448]: DEBUG: pDummy monitor : 7
Nov 13 14:00:10 vm3 Dummy(pDummy)[1459]: DEBUG: pDummy monitor : 7
Nov 13 14:00:20 vm3 Dummy(pDummy)[1467]: DEBUG: pDummy monitor : 7
Nov 13 14:00:30 vm3 Dummy(pDummy)[1476]: DEBUG: pDummy monitor : 7
Nov 13 14:00:40 vm3 Dummy(pDummy)[1491]: DEBUG: pDummy monitor : 7
Nov 13 14:00:50 vm3 Dummy(pDummy)[1500]: DEBUG: pDummy monitor : 7
Nov 13 14:01:00 vm3 Dummy(pDummy)[1509]: DEBUG: pDummy monitor : 7
Nov 13 14:01:10 vm3 Dummy(pDummy)[1528]: DEBUG: pDummy monitor : 7
Nov 13 14:01:20 vm3 Dummy(pDummy)[1537]: DEBUG: pDummy monitor : 7
Nov 13 14:01:30 vm3 Dummy(pDummy)[1547]: DEBUG: pDummy monitor : 7
Nov 13 14:01:40 vm3 Dummy(pDummy)[1562]: DEBUG: pDummy monitor : 7
Nov 13 14:01:50 vm3 Dummy(pDummy)[1570]: DEBUG: pDummy monitor : 7
Nov 13 14:02:00 vm3 Dummy(pDummy)[1578]: DEBUG: pDummy monitor : 7
Nov 13 14:02:10 vm3 Dummy(pDummy)[1588]: DEBUG: pDummy monitor : 7
Nov 13 14:02:20 vm3 Dummy(pDummy)[1598]: DEBUG: pDummy monitor : 7
Nov 13 14:02:30 vm3 Dummy(pDummy)[1606]: DEBUG: pDummy monitor : 7
Nov 13 14:02:40 vm3 Dummy(pDummy)[1621]: DEBUG: pDummy monitor : 7
Nov 13 14:02:50 vm3 Dummy(pDummy)[1629]: DEBUG: pDummy monitor : 7
Nov 13 14:03:01 vm3 Dummy(pDummy)[1638]: DEBUG: pDummy monitor : 7
Nov 13 14:03:04 vm3 stonith-ng[465]:     info: stonith_command: Processed st_query from vm1: OK (0)
Nov 13 14:03:07 vm3 stonith-ng[465]:   notice: remote_op_done: Operation reboot of vm3 by vm1 for crmd.15883@vm1.893bcd8c: Generic Pacemaker error
Nov 13 14:03:07 vm3 crmd[469]:   notice: tengine_stonith_notify: Peer vm3 was not terminated (reboot) by vm1 for vm1: Generic Pacemaker error (ref=893bcd8c-11ea-4f2c-b5d5-e2c9d3883c1b) by client crmd.15883
Nov 13 14:03:07 vm3 stonith-ng[465]:     info: stonith_command: Processed st_notify reply from vm1: OK (0)
Nov 13 14:03:11 vm3 Dummy(pDummy)[1647]: DEBUG: pDummy monitor : 7