Oct 15 15:15:47 vm2 corosync[9145]:   [MAIN  ] main.c:main:1171 Corosync Cluster Engine ('2.3.2.4-805b3'): started and ready to provide service.
Oct 15 15:15:47 vm2 corosync[9145]:   [MAIN  ] main.c:main:1172 Corosync built-in features: watchdog upstart snmp pie relro bindnow
Oct 15 15:15:47 vm2 corosync[9146]:   [TOTEM ] totemnet.c:totemnet_instance_initialize:242 Initializing transport (UDP/IP Multicast).
Oct 15 15:15:47 vm2 corosync[9146]:   [TOTEM ] totemcrypto.c:init_nss:579 Initializing transmit/receive security (NSS) crypto: aes256 hash: sha1
Oct 15 15:15:47 vm2 corosync[9146]:   [TOTEM ] totemnet.c:totemnet_instance_initialize:242 Initializing transport (UDP/IP Multicast).
Oct 15 15:15:47 vm2 corosync[9146]:   [TOTEM ] totemcrypto.c:init_nss:579 Initializing transmit/receive security (NSS) crypto: aes256 hash: sha1
Oct 15 15:15:47 vm2 corosync[9146]:   [TOTEM ] totemudp.c:timer_function_netif_check_timeout:670 The network interface [192.168.101.142] is now up.
Oct 15 15:15:47 vm2 corosync[9146]:   [SERV  ] service.c:corosync_service_link_and_init:174 Service engine loaded: corosync configuration map access [0]
Oct 15 15:15:47 vm2 corosync[9146]:   [QB    ] ipc_setup.c:qb_ipcs_us_publish:374 server name: cmap
Oct 15 15:15:47 vm2 corosync[9146]:   [SERV  ] service.c:corosync_service_link_and_init:174 Service engine loaded: corosync configuration service [1]
Oct 15 15:15:47 vm2 corosync[9146]:   [QB    ] ipc_setup.c:qb_ipcs_us_publish:374 server name: cfg
Oct 15 15:15:47 vm2 corosync[9146]:   [SERV  ] service.c:corosync_service_link_and_init:174 Service engine loaded: corosync cluster closed process group service v1.01 [2]
Oct 15 15:15:47 vm2 corosync[9146]:   [QB    ] ipc_setup.c:qb_ipcs_us_publish:374 server name: cpg
Oct 15 15:15:47 vm2 corosync[9146]:   [SERV  ] service.c:corosync_service_link_and_init:174 Service engine loaded: corosync profile loading service [4]
Oct 15 15:15:47 vm2 corosync[9146]:   [WD    ] wd.c:setup_watchdog:651 Watchdog is now been tickled by corosync.
Oct 15 15:15:47 vm2 corosync[9146]:   [WD    ] wd.c:wd_scan_resources:580 no resources configured.
Oct 15 15:15:47 vm2 corosync[9146]:   [SERV  ] service.c:corosync_service_link_and_init:174 Service engine loaded: corosync watchdog service [7]
Oct 15 15:15:47 vm2 corosync[9146]:   [QUORUM] vsf_quorum.c:quorum_exec_init_fn:274 Using quorum provider corosync_votequorum
Oct 15 15:15:47 vm2 corosync[9146]:   [VOTEQ ] votequorum.c:are_we_quorate:744 Waiting for all cluster members. Current votes: 1 expected_votes: 2
Oct 15 15:15:47 vm2 corosync[9146]:   [SERV  ] service.c:corosync_service_link_and_init:174 Service engine loaded: corosync vote quorum service v1.0 [5]
Oct 15 15:15:47 vm2 corosync[9146]:   [QB    ] ipc_setup.c:qb_ipcs_us_publish:374 server name: votequorum
Oct 15 15:15:47 vm2 corosync[9146]:   [SERV  ] service.c:corosync_service_link_and_init:174 Service engine loaded: corosync cluster quorum service v0.1 [3]
Oct 15 15:15:47 vm2 corosync[9146]:   [QB    ] ipc_setup.c:qb_ipcs_us_publish:374 server name: quorum
Oct 15 15:15:47 vm2 corosync[9146]:   [TOTEM ] totemudp.c:timer_function_netif_check_timeout:670 The network interface [192.168.102.142] is now up.
Oct 15 15:15:48 vm2 corosync[9146]:   [TOTEM ] totemsrp.c:memb_state_operational_enter:1966 A new membership (192.168.101.142:4) was formed. Members joined: -1062705778
Oct 15 15:15:48 vm2 corosync[9146]:   [VOTEQ ] votequorum.c:are_we_quorate:744 Waiting for all cluster members. Current votes: 1 expected_votes: 2
Oct 15 15:15:48 vm2 corosync[9146]:   [VOTEQ ] votequorum.c:are_we_quorate:744 Waiting for all cluster members. Current votes: 1 expected_votes: 2
Oct 15 15:15:48 vm2 corosync[9146]:   [VOTEQ ] votequorum.c:are_we_quorate:744 Waiting for all cluster members. Current votes: 1 expected_votes: 2
Oct 15 15:15:48 vm2 corosync[9146]:   [QUORUM] vsf_quorum.c:log_view_list:132 Members[1]: -1062705778
Oct 15 15:15:48 vm2 corosync[9146]:   [MAIN  ] main.c:corosync_sync_completed:276 Completed service synchronization, ready to provide service.
Oct 15 15:15:49 vm2 corosync[9146]:   [TOTEM ] totemsrp.c:memb_state_operational_enter:1966 A new membership (192.168.101.141:12) was formed. Members joined: -1062705779 -1062705777
Oct 15 15:15:49 vm2 corosync[9146]:   [VOTEQ ] votequorum.c:are_we_quorate:744 Waiting for all cluster members. Current votes: 1 expected_votes: 2
Oct 15 15:15:49 vm2 corosync[9146]:   [QUORUM] vsf_quorum.c:quorum_api_set_quorum:148 This node is within the primary component and will provide service.
Oct 15 15:15:49 vm2 corosync[9146]:   [QUORUM] vsf_quorum.c:log_view_list:132 Members[3]: -1062705779 -1062705778 -1062705777
Oct 15 15:15:49 vm2 corosync[9146]:   [MAIN  ] main.c:corosync_sync_completed:276 Completed service synchronization, ready to provide service.
Oct 15 15:15:50 vm2 pacemakerd[9155]:   notice: crm_add_logfile: Additional logging available in /var/log/ha-debug
Oct 15 15:15:50 vm2 pacemakerd[9155]:    debug: crm_update_callsites: Enabling callsites based on priority=7, files=(null), functions=(null), formats=(null), tags=(null)
Oct 15 15:15:50 vm2 pacemakerd[9155]:     info: crm_log_init: Changed active directory to /var/lib/heartbeat/cores/root
Oct 15 15:15:50 vm2 pacemakerd[9155]:     info: crm_ipc_connect: Could not establish pacemakerd connection: Connection refused (111)
Oct 15 15:15:50 vm2 pacemakerd[9155]:     info: get_cluster_type: Detected an active 'corosync' cluster
Oct 15 15:15:50 vm2 pacemakerd[9155]:     info: mcp_read_config: Reading configure for stack: corosync
Oct 15 15:15:50 vm2 pacemakerd[9155]:   notice: mcp_read_config: Configured corosync to accept connections from group 492: OK (1)
Oct 15 15:15:50 vm2 pacemakerd[9155]:   notice: crm_add_logfile: Additional logging available in /var/log/ha-debug
Oct 15 15:15:50 vm2 pacemakerd[9155]:   notice: main: Starting Pacemaker 1.1.11-0.284.6a5e863.git.el6 (Build: 6a5e863):  generated-manpages agent-manpages ascii-docs ncurses libqb-logging libqb-ipc lha-fencing nagios  corosync-native snmp
Oct 15 15:15:50 vm2 pacemakerd[9155]:     info: main: Maximum core file size is: 18446744073709551615
Oct 15 15:15:50 vm2 pacemakerd[9155]:     info: qb_ipcs_us_publish: server name: pacemakerd
Oct 15 15:15:50 vm2 pacemakerd[9155]:     info: crm_get_peer: Created entry 4a70838f-0cd8-4afe-81b7-f4248d1205ff/0x1684140 for node (null)/3232261518 (1 total)
Oct 15 15:15:50 vm2 pacemakerd[9155]:     info: crm_get_peer: Node 3232261518 has uuid 3232261518
Oct 15 15:15:50 vm2 pacemakerd[9155]:     info: crm_update_peer_proc: cluster_connect_cpg: Node (null)[3232261518] - corosync-cpg is now online
Oct 15 15:15:50 vm2 pacemakerd[9155]:   notice: cluster_connect_quorum: Quorum acquired
Oct 15 15:15:50 vm2 pacemakerd[9155]:   notice: corosync_node_name: Unable to get node name for nodeid 3232261518
Oct 15 15:15:50 vm2 pacemakerd[9155]:   notice: get_node_name: Defaulting to uname -n for the local corosync node name
Oct 15 15:15:50 vm2 pacemakerd[9155]:     info: crm_get_peer: Node 3232261518 is now known as vm2
Oct 15 15:15:50 vm2 pacemakerd[9155]:     info: start_child: Using uid=496 and group=492 for process cib
Oct 15 15:15:50 vm2 pacemakerd[9155]:     info: start_child: Forked child 9159 for process cib
Oct 15 15:15:50 vm2 pacemakerd[9155]:     info: start_child: Forked child 9160 for process stonith-ng
Oct 15 15:15:50 vm2 pacemakerd[9155]:     info: start_child: Forked child 9161 for process lrmd
Oct 15 15:15:50 vm2 pacemakerd[9155]:     info: start_child: Using uid=496 and group=492 for process attrd
Oct 15 15:15:50 vm2 pacemakerd[9155]:     info: start_child: Forked child 9162 for process attrd
Oct 15 15:15:50 vm2 pacemakerd[9155]:     info: start_child: Using uid=496 and group=492 for process pengine
Oct 15 15:15:50 vm2 pacemakerd[9155]:     info: start_child: Forked child 9163 for process pengine
Oct 15 15:15:50 vm2 pacemakerd[9155]:     info: start_child: Using uid=496 and group=492 for process crmd
Oct 15 15:15:50 vm2 pacemakerd[9155]:     info: start_child: Forked child 9164 for process crmd
Oct 15 15:15:50 vm2 pacemakerd[9155]:     info: main: Starting mainloop
Oct 15 15:15:50 vm2 pacemakerd[9155]:     info: pcmk_quorum_notification: Membership 12: quorum retained (3)
Oct 15 15:15:50 vm2 pacemakerd[9155]:     info: crm_get_peer: Created entry 1b04417e-9ed6-4e1f-9750-c4f235d63972/0x17863d0 for node (null)/3232261517 (2 total)
Oct 15 15:15:50 vm2 pacemakerd[9155]:     info: crm_get_peer: Node 3232261517 has uuid 3232261517
Oct 15 15:15:50 vm2 pacemakerd[9155]:     info: pcmk_quorum_notification: Obtaining name for new node 3232261517
Oct 15 15:15:50 vm2 stonith-ng[9160]:   notice: crm_add_logfile: Additional logging available in /var/log/ha-debug
Oct 15 15:15:50 vm2 cib[9159]:   notice: crm_add_logfile: Additional logging available in /var/log/ha-debug
Oct 15 15:15:50 vm2 cib[9159]:    debug: crm_update_callsites: Enabling callsites based on priority=7, files=(null), functions=(null), formats=(null), tags=(null)
Oct 15 15:15:50 vm2 stonith-ng[9160]:    debug: crm_update_callsites: Enabling callsites based on priority=7, files=(null), functions=(null), formats=(null), tags=(null)
Oct 15 15:15:50 vm2 stonith-ng[9160]:     info: crm_log_init: Changed active directory to /var/lib/heartbeat/cores/root
Oct 15 15:15:50 vm2 stonith-ng[9160]:     info: get_cluster_type: Verifying cluster type: 'corosync'
Oct 15 15:15:50 vm2 stonith-ng[9160]:     info: get_cluster_type: Assuming an active 'corosync' cluster
Oct 15 15:15:50 vm2 stonith-ng[9160]:   notice: crm_cluster_connect: Connecting to cluster infrastructure: corosync
Oct 15 15:15:50 vm2 cib[9159]:     info: crm_log_init: Changed active directory to /var/lib/heartbeat/cores/hacluster
Oct 15 15:15:50 vm2 cib[9159]:   notice: main: Using new config location: /var/lib/pacemaker/cib
Oct 15 15:15:50 vm2 cib[9159]:     info: get_cluster_type: Verifying cluster type: 'corosync'
Oct 15 15:15:50 vm2 cib[9159]:     info: get_cluster_type: Assuming an active 'corosync' cluster
Oct 15 15:15:50 vm2 cib[9159]:     info: retrieveCib: Reading cluster configuration from: /var/lib/pacemaker/cib/cib.xml (digest: /var/lib/pacemaker/cib/cib.xml.sig)
Oct 15 15:15:50 vm2 cib[9159]:  warning: retrieveCib: Cluster configuration not found: /var/lib/pacemaker/cib/cib.xml
Oct 15 15:15:50 vm2 cib[9159]:  warning: readCibXmlFile: Primary configuration corrupt or unusable, trying backups in /var/lib/pacemaker/cib
Oct 15 15:15:50 vm2 cib[9159]:  warning: readCibXmlFile: Continuing with an empty configuration.
Oct 15 15:15:50 vm2 cib[9159]:     info: validate_with_relaxng: Creating RNG parser context
Oct 15 15:15:50 vm2 attrd[9162]:   notice: crm_add_logfile: Additional logging available in /var/log/ha-debug
Oct 15 15:15:50 vm2 attrd[9162]:    debug: crm_update_callsites: Enabling callsites based on priority=7, files=(null), functions=(null), formats=(null), tags=(null)
Oct 15 15:15:50 vm2 lrmd[9161]:   notice: crm_add_logfile: Additional logging available in /var/log/ha-debug
Oct 15 15:15:50 vm2 lrmd[9161]:    debug: crm_update_callsites: Enabling callsites based on priority=7, files=(null), functions=(null), formats=(null), tags=(null)
Oct 15 15:15:50 vm2 lrmd[9161]:     info: crm_log_init: Changed active directory to /var/lib/heartbeat/cores/root
Oct 15 15:15:50 vm2 lrmd[9161]:     info: qb_ipcs_us_publish: server name: lrmd
Oct 15 15:15:50 vm2 lrmd[9161]:     info: main: Starting
Oct 15 15:15:50 vm2 attrd[9162]:     info: crm_log_init: Changed active directory to /var/lib/heartbeat/cores/hacluster
Oct 15 15:15:50 vm2 attrd[9162]:     info: main: Starting up
Oct 15 15:15:50 vm2 attrd[9162]:     info: get_cluster_type: Verifying cluster type: 'corosync'
Oct 15 15:15:50 vm2 attrd[9162]:     info: get_cluster_type: Assuming an active 'corosync' cluster
Oct 15 15:15:50 vm2 attrd[9162]:   notice: crm_cluster_connect: Connecting to cluster infrastructure: corosync
Oct 15 15:15:50 vm2 pengine[9163]:   notice: crm_add_logfile: Additional logging available in /var/log/ha-debug
Oct 15 15:15:50 vm2 pengine[9163]:    debug: crm_update_callsites: Enabling callsites based on priority=7, files=(null), functions=(null), formats=(null), tags=(null)
Oct 15 15:15:50 vm2 pengine[9163]:     info: crm_log_init: Changed active directory to /var/lib/heartbeat/cores/hacluster
Oct 15 15:15:50 vm2 pengine[9163]:     info: qb_ipcs_us_publish: server name: pengine
Oct 15 15:15:50 vm2 crmd[9164]:   notice: crm_add_logfile: Additional logging available in /var/log/ha-debug
Oct 15 15:15:50 vm2 crmd[9164]:    debug: crm_update_callsites: Enabling callsites based on priority=7, files=(null), functions=(null), formats=(null), tags=(null)
Oct 15 15:15:50 vm2 pengine[9163]:     info: main: Starting pengine
Oct 15 15:15:50 vm2 crmd[9164]:     info: crm_log_init: Changed active directory to /var/lib/heartbeat/cores/hacluster
Oct 15 15:15:50 vm2 crmd[9164]:   notice: main: CRM Git Version: 6a5e863
Oct 15 15:15:50 vm2 crmd[9164]:     info: do_log: FSA: Input I_STARTUP from crmd_init() received in state S_STARTING
Oct 15 15:15:50 vm2 crmd[9164]:     info: get_cluster_type: Verifying cluster type: 'corosync'
Oct 15 15:15:50 vm2 crmd[9164]:     info: get_cluster_type: Assuming an active 'corosync' cluster
Oct 15 15:15:50 vm2 crmd[9164]:     info: crm_ipc_connect: Could not establish cib_shm connection: Connection refused (111)
Oct 15 15:15:50 vm2 pacemakerd[9155]:   notice: corosync_node_name: Unable to get node name for nodeid 3232261517
Oct 15 15:15:50 vm2 pacemakerd[9155]:   notice: crm_update_peer_state: pcmk_quorum_notification: Node (null)[3232261517] - state is now member (was (null))
Oct 15 15:15:50 vm2 pacemakerd[9155]:   notice: crm_update_peer_state: pcmk_quorum_notification: Node vm2[3232261518] - state is now member (was (null))
Oct 15 15:15:50 vm2 pacemakerd[9155]:     info: crm_get_peer: Created entry 72ea006f-c99c-4eac-843e-c7bd0e77f36c/0x1785bb0 for node (null)/3232261519 (3 total)
Oct 15 15:15:50 vm2 pacemakerd[9155]:     info: crm_get_peer: Node 3232261519 has uuid 3232261519
Oct 15 15:15:50 vm2 pacemakerd[9155]:     info: pcmk_quorum_notification: Obtaining name for new node 3232261519
Oct 15 15:15:50 vm2 stonith-ng[9160]:     info: crm_get_peer: Created entry a720d2db-7ec4-48aa-9594-b9dc9418fb95/0x15b3660 for node (null)/3232261518 (1 total)
Oct 15 15:15:50 vm2 stonith-ng[9160]:     info: crm_get_peer: Node 3232261518 has uuid 3232261518
Oct 15 15:15:50 vm2 stonith-ng[9160]:     info: crm_update_peer_proc: cluster_connect_cpg: Node (null)[3232261518] - corosync-cpg is now online
Oct 15 15:15:50 vm2 stonith-ng[9160]:     info: init_cs_connection_once: Connection to 'corosync': established
Oct 15 15:15:50 vm2 cib[9159]:     info: startCib: CIB Initialization completed successfully
Oct 15 15:15:50 vm2 cib[9159]:   notice: crm_cluster_connect: Connecting to cluster infrastructure: corosync
Oct 15 15:15:50 vm2 attrd[9162]:     info: crm_get_peer: Created entry 8cc26550-62c8-48b4-85b1-99ce7f38b52d/0x1a26120 for node (null)/3232261518 (1 total)
Oct 15 15:15:50 vm2 attrd[9162]:     info: crm_get_peer: Node 3232261518 has uuid 3232261518
Oct 15 15:15:50 vm2 attrd[9162]:     info: crm_update_peer_proc: cluster_connect_cpg: Node (null)[3232261518] - corosync-cpg is now online
Oct 15 15:15:50 vm2 attrd[9162]:   notice: crm_update_peer_state: attrd_peer_change_cb: Node (null)[3232261518] - state is now member (was (null))
Oct 15 15:15:50 vm2 attrd[9162]:     info: init_cs_connection_once: Connection to 'corosync': established
Oct 15 15:15:50 vm2 pacemakerd[9155]:   notice: corosync_node_name: Unable to get node name for nodeid 3232261519
Oct 15 15:15:50 vm2 pacemakerd[9155]:   notice: crm_update_peer_state: pcmk_quorum_notification: Node (null)[3232261519] - state is now member (was (null))
Oct 15 15:15:50 vm2 pacemakerd[9155]:     info: crm_get_peer: Node 3232261517 is now known as vm1
Oct 15 15:15:50 vm2 cib[9159]:     info: crm_get_peer: Created entry 77755654-1b91-4441-bca7-a626f29dd6dd/0x1ad00b0 for node (null)/3232261518 (1 total)
Oct 15 15:15:50 vm2 cib[9159]:     info: crm_get_peer: Node 3232261518 has uuid 3232261518
Oct 15 15:15:50 vm2 cib[9159]:     info: crm_update_peer_proc: cluster_connect_cpg: Node (null)[3232261518] - corosync-cpg is now online
Oct 15 15:15:50 vm2 cib[9159]:     info: init_cs_connection_once: Connection to 'corosync': established
Oct 15 15:15:50 vm2 stonith-ng[9160]:   notice: corosync_node_name: Unable to get node name for nodeid 3232261518
Oct 15 15:15:50 vm2 stonith-ng[9160]:   notice: get_node_name: Defaulting to uname -n for the local corosync node name
Oct 15 15:15:50 vm2 stonith-ng[9160]:     info: crm_get_peer: Node 3232261518 is now known as vm2
Oct 15 15:15:50 vm2 stonith-ng[9160]:     info: crm_ipc_connect: Could not establish cib_rw connection: Connection refused (111)
Oct 15 15:15:50 vm2 attrd[9162]:   notice: corosync_node_name: Unable to get node name for nodeid 3232261518
Oct 15 15:15:50 vm2 attrd[9162]:   notice: get_node_name: Defaulting to uname -n for the local corosync node name
Oct 15 15:15:50 vm2 attrd[9162]:     info: crm_get_peer: Node 3232261518 is now known as vm2
Oct 15 15:15:50 vm2 attrd[9162]:     info: main: Cluster connection active
Oct 15 15:15:50 vm2 attrd[9162]:     info: qb_ipcs_us_publish: server name: attrd
Oct 15 15:15:50 vm2 attrd[9162]:     info: main: Accepting attribute updates
Oct 15 15:15:50 vm2 attrd[9162]:     info: crm_ipc_connect: Could not establish cib_rw connection: Connection refused (111)
Oct 15 15:15:50 vm2 cib[9159]:   notice: corosync_node_name: Unable to get node name for nodeid 3232261518
Oct 15 15:15:50 vm2 cib[9159]:   notice: get_node_name: Defaulting to uname -n for the local corosync node name
Oct 15 15:15:50 vm2 cib[9159]:     info: crm_get_peer: Node 3232261518 is now known as vm2
Oct 15 15:15:50 vm2 cib[9159]:     info: qb_ipcs_us_publish: server name: cib_ro
Oct 15 15:15:50 vm2 cib[9159]:     info: qb_ipcs_us_publish: server name: cib_rw
Oct 15 15:15:50 vm2 cib[9159]:     info: qb_ipcs_us_publish: server name: cib_shm
Oct 15 15:15:50 vm2 cib[9159]:     info: cib_init: Starting cib mainloop
Oct 15 15:15:50 vm2 cib[9159]:     info: pcmk_cpg_membership: Joined[0.0] cib.3232261518 
Oct 15 15:15:50 vm2 cib[9159]:     info: pcmk_cpg_membership: Member[0.0] cib.3232261518 
Oct 15 15:15:50 vm2 pacemakerd[9155]:     info: crm_get_peer: Node 3232261519 is now known as vm3
Oct 15 15:15:50 vm2 cib[9165]:     info: write_cib_contents: Wrote version 0.0.0 of the CIB to disk (digest: 3930c46445d2289a49a22e68ead11aaf)
Oct 15 15:15:50 vm2 cib[9165]:     info: retrieveCib: Reading cluster configuration from: /var/lib/pacemaker/cib/cib.VjPStY (digest: /var/lib/pacemaker/cib/cib.Kjn880)
Oct 15 15:15:50 vm2 cib[9159]:     info: pcmk_cpg_membership: Joined[1.0] cib.3232261517 
Oct 15 15:15:50 vm2 cib[9159]:     info: crm_get_peer: Created entry 39ce7640-a96a-4b46-a901-433bd5070c70/0x1ad30e0 for node (null)/3232261517 (2 total)
Oct 15 15:15:50 vm2 cib[9159]:     info: crm_get_peer: Node 3232261517 has uuid 3232261517
Oct 15 15:15:50 vm2 cib[9159]:     info: pcmk_cpg_membership: Member[1.0] cib.3232261517 
Oct 15 15:15:50 vm2 cib[9159]:     info: crm_update_peer_proc: pcmk_cpg_membership: Node (null)[3232261517] - corosync-cpg is now online
Oct 15 15:15:50 vm2 cib[9159]:     info: pcmk_cpg_membership: Member[1.1] cib.3232261518 
Oct 15 15:15:50 vm2 cib[9159]:     info: pcmk_cpg_membership: Joined[2.0] cib.3232261519 
Oct 15 15:15:50 vm2 cib[9159]:     info: pcmk_cpg_membership: Member[2.0] cib.3232261517 
Oct 15 15:15:50 vm2 cib[9159]:     info: pcmk_cpg_membership: Member[2.1] cib.3232261518 
Oct 15 15:15:50 vm2 cib[9159]:     info: crm_get_peer: Created entry 0b2da8ff-404c-4bbe-ba5b-32bc7fe73247/0x1ad3150 for node (null)/3232261519 (3 total)
Oct 15 15:15:50 vm2 cib[9159]:     info: crm_get_peer: Node 3232261519 has uuid 3232261519
Oct 15 15:15:50 vm2 cib[9159]:     info: pcmk_cpg_membership: Member[2.2] cib.3232261519 
Oct 15 15:15:50 vm2 cib[9159]:     info: crm_update_peer_proc: pcmk_cpg_membership: Node (null)[3232261519] - corosync-cpg is now online
Oct 15 15:15:51 vm2 cib[9159]:     info: crm_client_new: Connecting 0x1ad31c0 for uid=496 gid=492 pid=9164 id=e9411326-2fb2-4d92-a3a2-c12a11547c4e
Oct 15 15:15:51 vm2 crmd[9164]:     info: do_cib_control: CIB connection established
Oct 15 15:15:51 vm2 crmd[9164]:   notice: crm_cluster_connect: Connecting to cluster infrastructure: corosync
Oct 15 15:15:51 vm2 cib[9159]:     info: cib_process_request: Completed cib_query operation for section 'all': OK (rc=0, origin=local/crmd/2, version=0.0.0)
Oct 15 15:15:51 vm2 crmd[9164]:     info: crm_get_peer: Created entry 235c0fb5-3d3f-4d96-94ac-af4a60abd85a/0x2957ed0 for node (null)/3232261518 (1 total)
Oct 15 15:15:51 vm2 crmd[9164]:     info: crm_get_peer: Node 3232261518 has uuid 3232261518
Oct 15 15:15:51 vm2 crmd[9164]:     info: crm_update_peer_proc: cluster_connect_cpg: Node (null)[3232261518] - corosync-cpg is now online
Oct 15 15:15:51 vm2 crmd[9164]:     info: init_cs_connection_once: Connection to 'corosync': established
Oct 15 15:15:51 vm2 crmd[9164]:   notice: corosync_node_name: Unable to get node name for nodeid 3232261518
Oct 15 15:15:51 vm2 crmd[9164]:   notice: get_node_name: Defaulting to uname -n for the local corosync node name
Oct 15 15:15:51 vm2 crmd[9164]:     info: crm_get_peer: Node 3232261518 is now known as vm2
Oct 15 15:15:51 vm2 crmd[9164]:     info: peer_update_callback: vm2 is now (null)
Oct 15 15:15:51 vm2 crmd[9164]:   notice: cluster_connect_quorum: Quorum acquired
Oct 15 15:15:51 vm2 cib[9159]:     info: crm_client_new: Connecting 0x1b57560 for uid=0 gid=0 pid=9160 id=1c14b757-e503-499c-b445-d4c1d089aa46
Oct 15 15:15:51 vm2 stonith-ng[9160]:   notice: setup_cib: Watching for stonith topology changes
Oct 15 15:15:51 vm2 stonith-ng[9160]:     info: qb_ipcs_us_publish: server name: stonith-ng
Oct 15 15:15:51 vm2 stonith-ng[9160]:     info: main: Starting stonith-ng mainloop
Oct 15 15:15:51 vm2 stonith-ng[9160]:     info: pcmk_cpg_membership: Joined[0.0] stonith-ng.3232261518 
Oct 15 15:15:51 vm2 stonith-ng[9160]:     info: pcmk_cpg_membership: Member[0.0] stonith-ng.3232261518 
Oct 15 15:15:51 vm2 stonith-ng[9160]:     info: pcmk_cpg_membership: Joined[1.0] stonith-ng.3232261517 
Oct 15 15:15:51 vm2 stonith-ng[9160]:     info: crm_get_peer: Created entry b0481155-0a6a-4ae3-969d-4a0a5257f91a/0x15b7690 for node (null)/3232261517 (2 total)
Oct 15 15:15:51 vm2 stonith-ng[9160]:     info: crm_get_peer: Node 3232261517 has uuid 3232261517
Oct 15 15:15:51 vm2 stonith-ng[9160]:     info: pcmk_cpg_membership: Member[1.0] stonith-ng.3232261517 
Oct 15 15:15:51 vm2 stonith-ng[9160]:     info: crm_update_peer_proc: pcmk_cpg_membership: Node (null)[3232261517] - corosync-cpg is now online
Oct 15 15:15:51 vm2 cib[9159]:     info: cib_process_request: Completed cib_query operation for section 'all': OK (rc=0, origin=local/crmd/2, version=0.0.0)
Oct 15 15:15:51 vm2 crmd[9164]:     info: do_ha_control: Connected to the cluster
Oct 15 15:15:51 vm2 crmd[9164]:     info: lrmd_ipc_connect: Connecting to lrmd
Oct 15 15:15:51 vm2 cib[9159]:     info: cib_process_request: Completed cib_modify operation for section nodes: OK (rc=0, origin=local/crmd/3, version=0.0.0)
Oct 15 15:15:51 vm2 lrmd[9161]:     info: crm_client_new: Connecting 0xf45d10 for uid=496 gid=492 pid=9164 id=93dbc285-b009-4602-8cfa-85f7c9395fa1
Oct 15 15:15:51 vm2 cib[9159]:     info: crm_client_new: Connecting 0x1921be0 for uid=496 gid=492 pid=9162 id=a21bc4ed-bbe4-4406-8969-ac99d7d2680e
Oct 15 15:15:51 vm2 crmd[9164]:     info: do_lrm_control: LRM connection established
Oct 15 15:15:51 vm2 crmd[9164]:     info: do_started: Delaying start, no membership data (0000000000100000)
Oct 15 15:15:51 vm2 crmd[9164]:     info: pcmk_quorum_notification: Membership 12: quorum retained (3)
Oct 15 15:15:51 vm2 crmd[9164]:     info: crm_get_peer: Created entry 47979f84-c707-4fa1-859b-c3cd620cc8c4/0x2a9d980 for node (null)/3232261517 (2 total)
Oct 15 15:15:51 vm2 crmd[9164]:     info: crm_get_peer: Node 3232261517 has uuid 3232261517
Oct 15 15:15:51 vm2 crmd[9164]:     info: pcmk_quorum_notification: Obtaining name for new node 3232261517
Oct 15 15:15:51 vm2 cib[9159]:     info: cib_process_request: Completed cib_query operation for section crm_config: OK (rc=0, origin=local/crmd/4, version=0.0.0)
Oct 15 15:15:51 vm2 attrd[9162]:     info: attrd_cib_connect: Connected to the CIB after 2 attempts
Oct 15 15:15:51 vm2 attrd[9162]:     info: main: CIB connection active
Oct 15 15:15:51 vm2 attrd[9162]:     info: pcmk_cpg_membership: Joined[0.0] attrd.3232261518 
Oct 15 15:15:51 vm2 attrd[9162]:     info: pcmk_cpg_membership: Member[0.0] attrd.3232261518 
Oct 15 15:15:51 vm2 attrd[9162]:     info: pcmk_cpg_membership: Joined[1.0] attrd.3232261517 
Oct 15 15:15:51 vm2 attrd[9162]:     info: crm_get_peer: Created entry 80e46a7a-18c8-4f4b-b022-037c93c051de/0x1a2bf60 for node (null)/3232261517 (2 total)
Oct 15 15:15:51 vm2 attrd[9162]:     info: crm_get_peer: Node 3232261517 has uuid 3232261517
Oct 15 15:15:51 vm2 attrd[9162]:     info: pcmk_cpg_membership: Member[1.0] attrd.3232261517 
Oct 15 15:15:51 vm2 attrd[9162]:     info: crm_update_peer_proc: pcmk_cpg_membership: Node (null)[3232261517] - corosync-cpg is now online
Oct 15 15:15:51 vm2 attrd[9162]:   notice: crm_update_peer_state: attrd_peer_change_cb: Node (null)[3232261517] - state is now member (was (null))
Oct 15 15:15:51 vm2 attrd[9162]:     info: pcmk_cpg_membership: Member[1.1] attrd.3232261518 
Oct 15 15:15:51 vm2 attrd[9162]:     info: pcmk_cpg_membership: Joined[2.0] attrd.3232261519 
Oct 15 15:15:51 vm2 attrd[9162]:     info: pcmk_cpg_membership: Member[2.0] attrd.3232261517 
Oct 15 15:15:51 vm2 attrd[9162]:     info: pcmk_cpg_membership: Member[2.1] attrd.3232261518 
Oct 15 15:15:51 vm2 attrd[9162]:     info: crm_get_peer: Created entry 1f1257fb-9977-4a84-b92d-3c883d40443a/0x1a2bfd0 for node (null)/3232261519 (3 total)
Oct 15 15:15:51 vm2 attrd[9162]:     info: crm_get_peer: Node 3232261519 has uuid 3232261519
Oct 15 15:15:51 vm2 attrd[9162]:     info: pcmk_cpg_membership: Member[2.2] attrd.3232261519 
Oct 15 15:15:51 vm2 attrd[9162]:     info: crm_update_peer_proc: pcmk_cpg_membership: Node (null)[3232261519] - corosync-cpg is now online
Oct 15 15:15:51 vm2 attrd[9162]:   notice: crm_update_peer_state: attrd_peer_change_cb: Node (null)[3232261519] - state is now member (was (null))
Oct 15 15:15:51 vm2 stonith-ng[9160]:   notice: corosync_node_name: Unable to get node name for nodeid 3232261518
Oct 15 15:15:51 vm2 stonith-ng[9160]:   notice: get_node_name: Defaulting to uname -n for the local corosync node name
Oct 15 15:15:51 vm2 stonith-ng[9160]:     info: pcmk_cpg_membership: Member[1.1] stonith-ng.3232261518 
Oct 15 15:15:51 vm2 stonith-ng[9160]:     info: pcmk_cpg_membership: Joined[2.0] stonith-ng.3232261519 
Oct 15 15:15:51 vm2 stonith-ng[9160]:     info: pcmk_cpg_membership: Member[2.0] stonith-ng.3232261517 
Oct 15 15:15:51 vm2 stonith-ng[9160]:     info: pcmk_cpg_membership: Member[2.1] stonith-ng.3232261518 
Oct 15 15:15:51 vm2 stonith-ng[9160]:     info: crm_get_peer: Created entry eae37db6-7b15-403f-8015-9791869d7939/0x15b5970 for node (null)/3232261519 (3 total)
Oct 15 15:15:51 vm2 stonith-ng[9160]:     info: crm_get_peer: Node 3232261519 has uuid 3232261519
Oct 15 15:15:51 vm2 stonith-ng[9160]:     info: pcmk_cpg_membership: Member[2.2] stonith-ng.3232261519 
Oct 15 15:15:51 vm2 stonith-ng[9160]:     info: crm_update_peer_proc: pcmk_cpg_membership: Node (null)[3232261519] - corosync-cpg is now online
Oct 15 15:15:51 vm2 stonith-ng[9160]:     info: init_cib_cache_cb: Updating device list from the cib: init
Oct 15 15:15:51 vm2 stonith-ng[9160]:     info: unpack_nodes: Creating a fake local node
Oct 15 15:15:51 vm2 crmd[9164]:   notice: corosync_node_name: Unable to get node name for nodeid 3232261517
Oct 15 15:15:51 vm2 crmd[9164]:   notice: crm_update_peer_state: pcmk_quorum_notification: Node (null)[3232261517] - state is now member (was (null))
Oct 15 15:15:51 vm2 crmd[9164]:   notice: crm_update_peer_state: pcmk_quorum_notification: Node vm2[3232261518] - state is now member (was (null))
Oct 15 15:15:51 vm2 crmd[9164]:     info: peer_update_callback: vm2 is now member (was (null))
Oct 15 15:15:51 vm2 crmd[9164]:     info: crm_get_peer: Created entry f08f8254-f59c-4749-bca3-e4b099bdcc20/0x2a9d9f0 for node (null)/3232261519 (3 total)
Oct 15 15:15:51 vm2 crmd[9164]:     info: crm_get_peer: Node 3232261519 has uuid 3232261519
Oct 15 15:15:51 vm2 crmd[9164]:     info: pcmk_quorum_notification: Obtaining name for new node 3232261519
Oct 15 15:15:51 vm2 crmd[9164]:   notice: corosync_node_name: Unable to get node name for nodeid 3232261519
Oct 15 15:15:51 vm2 crmd[9164]:   notice: crm_update_peer_state: pcmk_quorum_notification: Node (null)[3232261519] - state is now member (was (null))
Oct 15 15:15:51 vm2 crmd[9164]:   notice: corosync_node_name: Unable to get node name for nodeid 3232261518
Oct 15 15:15:51 vm2 crmd[9164]:   notice: get_node_name: Defaulting to uname -n for the local corosync node name
Oct 15 15:15:51 vm2 crmd[9164]:     info: do_started: Delaying start, Config not read (0000000000000040)
Oct 15 15:15:51 vm2 crmd[9164]:     info: qb_ipcs_us_publish: server name: crmd
Oct 15 15:15:51 vm2 crmd[9164]:   notice: do_started: The local CRM is operational
Oct 15 15:15:51 vm2 crmd[9164]:     info: do_log: FSA: Input I_PENDING from do_started() received in state S_STARTING
Oct 15 15:15:51 vm2 crmd[9164]:   notice: do_state_transition: State transition S_STARTING -> S_PENDING [ input=I_PENDING cause=C_FSA_INTERNAL origin=do_started ]
Oct 15 15:15:51 vm2 cib[9159]:     info: cib_process_request: Completed cib_slave operation for section 'all': OK (rc=0, origin=local/crmd/5, version=0.0.0)
Oct 15 15:15:51 vm2 stonith-ng[9160]:     info: crm_get_peer: Node 3232261519 is now known as vm3
Oct 15 15:15:51 vm2 stonith-ng[9160]:     info: crm_get_peer: Node 3232261517 is now known as vm1
Oct 15 15:15:52 vm2 crmd[9164]:     info: pcmk_cpg_membership: Joined[0.0] crmd.3232261518 
Oct 15 15:15:52 vm2 crmd[9164]:     info: pcmk_cpg_membership: Member[0.0] crmd.3232261518 
Oct 15 15:15:52 vm2 crmd[9164]:     info: pcmk_cpg_membership: Joined[1.0] crmd.3232261517 
Oct 15 15:15:52 vm2 crmd[9164]:     info: pcmk_cpg_membership: Member[1.0] crmd.3232261517 
Oct 15 15:15:52 vm2 crmd[9164]:     info: crm_update_peer_proc: pcmk_cpg_membership: Node (null)[3232261517] - corosync-cpg is now online
Oct 15 15:15:52 vm2 crmd[9164]:     info: pcmk_cpg_membership: Member[1.1] crmd.3232261518 
Oct 15 15:15:52 vm2 crmd[9164]:     info: pcmk_cpg_membership: Joined[2.0] crmd.3232261519 
Oct 15 15:15:52 vm2 crmd[9164]:     info: pcmk_cpg_membership: Member[2.0] crmd.3232261517 
Oct 15 15:15:52 vm2 crmd[9164]:     info: pcmk_cpg_membership: Member[2.1] crmd.3232261518 
Oct 15 15:15:52 vm2 crmd[9164]:     info: pcmk_cpg_membership: Member[2.2] crmd.3232261519 
Oct 15 15:15:52 vm2 crmd[9164]:     info: crm_update_peer_proc: pcmk_cpg_membership: Node (null)[3232261519] - corosync-cpg is now online
Oct 15 15:15:52 vm2 crmd[9164]:     info: crm_get_peer: Node 3232261517 is now known as vm1
Oct 15 15:15:52 vm2 crmd[9164]:     info: peer_update_callback: vm1 is now member
Oct 15 15:15:52 vm2 crmd[9164]:     info: crm_get_peer: Node 3232261519 is now known as vm3
Oct 15 15:15:52 vm2 crmd[9164]:     info: peer_update_callback: vm3 is now member
Oct 15 15:15:53 vm2 stonith-ng[9160]:     info: crm_client_new: Connecting 0x15bb0f0 for uid=496 gid=492 pid=9164 id=6818deee-f242-4158-ba0b-cdf12be65c10
Oct 15 15:15:53 vm2 stonith-ng[9160]:     info: stonith_command: Processed register from crmd.9164: OK (0)
Oct 15 15:15:53 vm2 stonith-ng[9160]:     info: stonith_command: Processed st_notify from crmd.9164: OK (0)
Oct 15 15:15:53 vm2 stonith-ng[9160]:     info: stonith_command: Processed st_notify from crmd.9164: OK (0)
Oct 15 15:16:12 vm2 crmd[9164]:     info: crm_timer_popped: Election Trigger (I_DC_TIMEOUT) just popped (20000ms)
Oct 15 15:16:12 vm2 crmd[9164]:  warning: do_log: FSA: Input I_DC_TIMEOUT from crm_timer_popped() received in state S_PENDING
Oct 15 15:16:12 vm2 crmd[9164]:     info: do_state_transition: State transition S_PENDING -> S_ELECTION [ input=I_DC_TIMEOUT cause=C_TIMER_POPPED origin=crm_timer_popped ]
Oct 15 15:16:12 vm2 crmd[9164]:     info: election_count_vote: Election 1 (owner: 3232261519) lost: vote from vm3 (Uptime)
Oct 15 15:16:12 vm2 crmd[9164]:   notice: do_state_transition: State transition S_ELECTION -> S_PENDING [ input=I_PENDING cause=C_FSA_INTERNAL origin=do_election_count_vote ]
Oct 15 15:16:12 vm2 crmd[9164]:     info: do_dc_release: DC role released
Oct 15 15:16:12 vm2 cib[9159]:     info: cib_process_request: Completed cib_slave operation for section 'all': OK (rc=0, origin=local/crmd/6, version=0.0.0)
Oct 15 15:16:12 vm2 crmd[9164]:     info: do_te_control: Transitioner is now inactive
Oct 15 15:16:12 vm2 crmd[9164]:     info: do_log: FSA: Input I_RELEASE_SUCCESS from do_dc_release() received in state S_PENDING
Oct 15 15:16:12 vm2 crmd[9164]:     info: election_count_vote: Election 1 (owner: 3232261517) lost: vote from vm1 (Uptime)
Oct 15 15:16:12 vm2 crmd[9164]:     info: do_log: FSA: Input I_PENDING from do_election_count_vote() received in state S_PENDING
Oct 15 15:16:12 vm2 crmd[9164]:     info: election_count_vote: Election 2 (owner: 3232261517) lost: vote from vm1 (Uptime)
Oct 15 15:16:12 vm2 crmd[9164]:     info: do_log: FSA: Input I_PENDING from do_election_count_vote() received in state S_PENDING
Oct 15 15:16:12 vm2 cib[9159]:   notice: corosync_node_name: Unable to get node name for nodeid 3232261518
Oct 15 15:16:12 vm2 cib[9159]:   notice: get_node_name: Defaulting to uname -n for the local corosync node name
Oct 15 15:16:12 vm2 cib[9159]:     info: crm_get_peer: Node 3232261517 is now known as vm1
Oct 15 15:16:12 vm2 cib[9159]:     info: cib_process_request: Completed cib_apply_diff operation for section cib: OK (rc=0, origin=vm1/crmd/7, version=0.0.1)
Oct 15 15:16:12 vm2 cib[9159]:     info: cib_process_request: Completed cib_apply_diff operation for section crm_config: OK (rc=0, origin=vm1/crmd/9, version=0.1.1)
Oct 15 15:16:12 vm2 crmd[9164]:     info: update_dc: Set DC to vm1 (3.0.7)
Oct 15 15:16:12 vm2 crmd[9164]:     info: election_count_vote: Election 3 (owner: 3232261517) lost: vote from vm1 (Uptime)
Oct 15 15:16:12 vm2 crmd[9164]:     info: update_dc: Unset DC. Was vm1
Oct 15 15:16:12 vm2 crmd[9164]:     info: do_log: FSA: Input I_PENDING from do_election_count_vote() received in state S_PENDING
Oct 15 15:16:12 vm2 cib[9159]:     info: cib_process_request: Completed cib_query operation for section 'all': OK (rc=0, origin=local/crmd/7, version=0.1.1)
Oct 15 15:16:12 vm2 crmd[9164]:  warning: join_query_callback: No DC for join-1
Oct 15 15:16:12 vm2 crmd[9164]:     info: update_dc: Set DC to vm1 (3.0.7)
Oct 15 15:16:12 vm2 cib[9159]:     info: cib_process_request: Completed cib_apply_diff operation for section crm_config: OK (rc=0, origin=vm1/crmd/11, version=0.2.1)
Oct 15 15:16:12 vm2 cib[9159]:     info: cib_process_request: Completed cib_query operation for section 'all': OK (rc=0, origin=local/crmd/8, version=0.2.1)
Oct 15 15:16:12 vm2 cib[9174]:     info: write_cib_contents: Archived previous version as /var/lib/pacemaker/cib/cib-0.raw
Oct 15 15:16:12 vm2 cib[9159]:     info: cib_process_replace: Digest matched on replace from vm1: 534642ef8fd2ad2ffdb8dd568d6f7c3a
Oct 15 15:16:12 vm2 crmd[9164]:     info: erase_status_tag: Deleting xpath: //node_state[@uname='vm2']/transient_attributes
Oct 15 15:16:12 vm2 crmd[9164]:     info: update_attrd_helper: Connecting to attrd... 5 retries remaining
Oct 15 15:16:12 vm2 cib[9159]:     info: cib_process_replace: Replaced 0.2.1 with 0.2.1 from vm1
Oct 15 15:16:12 vm2 cib[9159]:     info: cib_process_request: Completed cib_replace operation for section 'all': OK (rc=0, origin=vm1/crmd/17, version=0.2.1)
Oct 15 15:16:12 vm2 attrd[9162]:     info: crm_client_new: Connecting 0x1a29460 for uid=496 gid=492 pid=9164 id=d1ea2815-a17d-44fd-b7e9-5b4c705071ce
Oct 15 15:16:12 vm2 cib[9159]:     info: cib_process_request: Forwarding cib_delete operation for section //node_state[@uname='vm2']/transient_attributes to master (origin=local/crmd/9)
Oct 15 15:16:12 vm2 crmd[9164]:     info: do_log: FSA: Input I_NOT_DC from do_cl_join_finalize_respond() received in state S_PENDING
Oct 15 15:16:12 vm2 crmd[9164]:   notice: do_state_transition: State transition S_PENDING -> S_NOT_DC [ input=I_NOT_DC cause=C_HA_MESSAGE origin=do_cl_join_finalize_respond ]
Oct 15 15:16:12 vm2 attrd[9162]:     info: attrd_client_message: Starting an election to determine the writer
Oct 15 15:16:12 vm2 cib[9174]:     info: write_cib_contents: Wrote version 0.1.0 of the CIB to disk (digest: 51521f2153f57cb386e10b5d3317b80b)
Oct 15 15:16:12 vm2 cib[9174]:     info: retrieveCib: Reading cluster configuration from: /var/lib/pacemaker/cib/cib.XCNilc (digest: /var/lib/pacemaker/cib/cib.uFJDvd)
Oct 15 15:16:12 vm2 cib[9159]:     info: crm_get_peer: Node 3232261519 is now known as vm3
Oct 15 15:16:12 vm2 attrd[9162]:   notice: corosync_node_name: Unable to get node name for nodeid 3232261518
Oct 15 15:16:12 vm2 attrd[9162]:   notice: get_node_name: Defaulting to uname -n for the local corosync node name
Oct 15 15:16:12 vm2 attrd[9162]:     info: attrd_client_message: Broadcasting terminate[vm2] = (null)
Oct 15 15:16:12 vm2 attrd[9162]:     info: attrd_client_message: Broadcasting shutdown[vm2] = (null)
Oct 15 15:16:12 vm2 cib[9175]:     info: write_cib_contents: Archived previous version as /var/lib/pacemaker/cib/cib-1.raw
Oct 15 15:16:12 vm2 cib[9159]:     info: cib_process_request: Completed cib_apply_diff operation for section nodes: OK (rc=0, origin=vm1/crmd/18, version=0.3.1)
Oct 15 15:16:12 vm2 attrd[9162]:     info: crm_get_peer: Node 3232261519 is now known as vm3
Oct 15 15:16:12 vm2 attrd[9162]:     info: election_count_vote: Election 1 (owner: 3232261519) pass: vote from vm3 (Host name)
Oct 15 15:16:12 vm2 cib[9175]:     info: write_cib_contents: Wrote version 0.2.0 of the CIB to disk (digest: de05f774f8d9c411f636e16f725bf956)
Oct 15 15:16:12 vm2 cib[9159]:     info: cib_process_request: Completed cib_apply_diff operation for section nodes: OK (rc=0, origin=vm1/crmd/19, version=0.4.1)
Oct 15 15:16:12 vm2 cib[9175]:     info: retrieveCib: Reading cluster configuration from: /var/lib/pacemaker/cib/cib.uLJote (digest: /var/lib/pacemaker/cib/cib.P8plJf)
Oct 15 15:16:12 vm2 cib[9159]:     info: cib_process_request: Completed cib_apply_diff operation for section nodes: OK (rc=0, origin=vm1/crmd/20, version=0.5.1)
Oct 15 15:16:12 vm2 attrd[9162]:     info: crm_get_peer: Node 3232261517 is now known as vm1
Oct 15 15:16:12 vm2 attrd[9162]:     info: election_count_vote: Election 1 (owner: 3232261517) lost: vote from vm1 (Uptime)
Oct 15 15:16:12 vm2 attrd[9162]:     info: election_count_vote: Election 2 (owner: 3232261517) lost: vote from vm1 (Uptime)
Oct 15 15:16:12 vm2 cib[9159]:     info: cib_process_request: Completed cib_apply_diff operation for section status: OK (rc=0, origin=vm1/crmd/23, version=0.5.2)
Oct 15 15:16:12 vm2 cib[9176]:     info: write_cib_contents: Archived previous version as /var/lib/pacemaker/cib/cib-2.raw
Oct 15 15:16:12 vm2 cib[9159]:     info: cib_process_request: Completed cib_apply_diff operation for section status: OK (rc=0, origin=vm1/crmd/25, version=0.5.3)
Oct 15 15:16:12 vm2 attrd[9162]:     info: election_count_vote: Election 3 (owner: 3232261517) lost: vote from vm1 (Uptime)
Oct 15 15:16:12 vm2 attrd[9162]:     info: election_count_vote: Election 4 (owner: 3232261517) lost: vote from vm1 (Uptime)
Oct 15 15:16:12 vm2 cib[9159]:     info: cib_process_request: Completed cib_apply_diff operation for section status: OK (rc=0, origin=vm1/crmd/27, version=0.5.4)
Oct 15 15:16:12 vm2 attrd[9162]:   notice: attrd_peer_message: Processing sync-response from vm1
Oct 15 15:16:12 vm2 cib[9176]:     info: write_cib_contents: Wrote version 0.5.0 of the CIB to disk (digest: 034de6ab489c499a76b66b8122cf302a)
Oct 15 15:16:12 vm2 cib[9159]:     info: cib_process_request: Completed cib_apply_diff operation for section cib: OK (rc=0, origin=vm1/crmd/30, version=0.5.5)
Oct 15 15:16:12 vm2 cib[9176]:     info: retrieveCib: Reading cluster configuration from: /var/lib/pacemaker/cib/cib.e6aqrh (digest: /var/lib/pacemaker/cib/cib.RB6GMi)
Oct 15 15:16:12 vm2 cib[9159]:     info: cib_process_request: Completed cib_apply_diff operation for section status: OK (rc=0, origin=vm1/attrd/2, version=0.5.6)
Oct 15 15:16:12 vm2 attrd[9162]:     info: attrd_client_message: Broadcasting probe_complete[vm2] = true
Oct 15 15:16:12 vm2 cib[9159]:     info: cib_process_request: Completed cib_apply_diff operation for section status: OK (rc=0, origin=vm1/attrd/4, version=0.5.7)
Oct 15 15:16:12 vm2 cib[9159]:     info: cib_process_request: Completed cib_apply_diff operation for section status: OK (rc=0, origin=vm1/attrd/5, version=0.5.8)
Oct 15 15:16:21 vm2 crmd[9164]:     info: election_count_vote: Election 4 (owner: 3232261517) lost: vote from vm1 (Uptime)
Oct 15 15:16:21 vm2 crmd[9164]:     info: update_dc: Unset DC. Was vm1
Oct 15 15:16:21 vm2 crmd[9164]:     info: do_log: FSA: Input I_PENDING from do_election_count_vote() received in state S_NOT_DC
Oct 15 15:16:21 vm2 crmd[9164]:   notice: do_state_transition: State transition S_NOT_DC -> S_PENDING [ input=I_PENDING cause=C_FSA_INTERNAL origin=do_election_count_vote ]
Oct 15 15:16:21 vm2 cib[9159]:     info: cib_process_request: Completed cib_slave operation for section 'all': OK (rc=0, origin=local/crmd/10, version=0.5.8)
Oct 15 15:16:21 vm2 stonith-ng[9160]:     info: stonith_level_remove: Node vm3 not found (0 active entries)
Oct 15 15:16:21 vm2 stonith-ng[9160]:     info: stonith_level_register: Node vm3 has 1 active fencing levels
Oct 15 15:16:21 vm2 stonith-ng[9160]:     info: stonith_level_register: Node vm3 has 2 active fencing levels
Oct 15 15:16:21 vm2 stonith-ng[9160]:     info: update_cib_stonith_devices: Updating device list from the cib: new resource
Oct 15 15:16:21 vm2 cib[9159]:     info: cib_process_request: Completed cib_apply_diff operation for section 'all': OK (rc=0, origin=vm1/cibadmin/2, version=0.6.1)
Oct 15 15:16:21 vm2 stonith-ng[9160]:  warning: handle_startup_fencing: Blind faith: not fencing unseen nodes
Oct 15 15:16:21 vm2 stonith-ng[9160]:     info: cib_device_update: Device f1 is allowed on vm2: score=0
Oct 15 15:16:21 vm2 stonith-ng[9160]:     info: stonith_action_create: Initiating action metadata for agent fence_legacy (target=(null))
Oct 15 15:16:21 vm2 cib[9177]:     info: write_cib_contents: Archived previous version as /var/lib/pacemaker/cib/cib-3.raw
Oct 15 15:16:21 vm2 cib[9159]:     info: cib_process_request: Completed cib_apply_diff operation for section crm_config: OK (rc=0, origin=vm1/crmd/39, version=0.7.1)
Oct 15 15:16:21 vm2 crmd[9164]:     info: update_dc: Set DC to vm1 (3.0.7)
Oct 15 15:16:21 vm2 crmd[9164]:     info: election_count_vote: Election 5 (owner: 3232261517) lost: vote from vm1 (Uptime)
Oct 15 15:16:21 vm2 crmd[9164]:     info: update_dc: Unset DC. Was vm1
Oct 15 15:16:21 vm2 cib[9159]:     info: cib_process_request: Completed cib_query operation for section 'all': OK (rc=0, origin=local/crmd/11, version=0.7.1)
Oct 15 15:16:21 vm2 crmd[9164]:     info: update_dc: Set DC to vm1 (3.0.7)
Oct 15 15:16:21 vm2 crmd[9164]:     info: do_log: FSA: Input I_PENDING from do_election_count_vote() received in state S_PENDING
Oct 15 15:16:21 vm2 cib[9159]:     info: cib_process_request: Completed cib_query operation for section 'all': OK (rc=0, origin=local/crmd/12, version=0.7.1)
Oct 15 15:16:21 vm2 cib[9159]:     info: cib_process_request: Completed cib_apply_diff operation for section crm_config: OK (rc=0, origin=vm1/crmd/41, version=0.8.1)
Oct 15 15:16:21 vm2 cib[9177]:     info: write_cib_contents: Wrote version 0.6.0 of the CIB to disk (digest: 45e2cff77ab7d1f709fb4b3254f8e13b)
Oct 15 15:16:21 vm2 cib[9177]:     info: retrieveCib: Reading cluster configuration from: /var/lib/pacemaker/cib/cib.crsAcE (digest: /var/lib/pacemaker/cib/cib.h6A973)
Oct 15 15:16:21 vm2 cib[9159]:     info: cib_process_replace: Digest matched on replace from vm1: b72b19965fead91bc6f322209ac3483d
Oct 15 15:16:21 vm2 cib[9159]:     info: cib_process_replace: Replaced 0.8.1 with 0.8.1 from vm1
Oct 15 15:16:21 vm2 cib[9159]:     info: cib_process_request: Completed cib_replace operation for section 'all': OK (rc=0, origin=vm1/crmd/47, version=0.8.1)
Oct 15 15:16:21 vm2 cib[9179]:     info: write_cib_contents: Archived previous version as /var/lib/pacemaker/cib/cib-4.raw
Oct 15 15:16:21 vm2 crmd[9164]:     info: do_log: FSA: Input I_NOT_DC from do_cl_join_finalize_respond() received in state S_PENDING
Oct 15 15:16:21 vm2 crmd[9164]:   notice: do_state_transition: State transition S_PENDING -> S_NOT_DC [ input=I_NOT_DC cause=C_HA_MESSAGE origin=do_cl_join_finalize_respond ]
Oct 15 15:16:21 vm2 cib[9179]:     info: write_cib_contents: Wrote version 0.8.0 of the CIB to disk (digest: 560938b85dddf6a8da1def7aa23e6520)
Oct 15 15:16:21 vm2 cib[9159]:     info: cib_process_request: Completed cib_apply_diff operation for section //node_state[@uname='vm3']/lrm: OK (rc=0, origin=vm1/crmd/51, version=0.8.2)
Oct 15 15:16:21 vm2 cib[9179]:     info: retrieveCib: Reading cluster configuration from: /var/lib/pacemaker/cib/cib.BbJDdG (digest: /var/lib/pacemaker/cib/cib.bKGjf6)
Oct 15 15:16:21 vm2 cib[9159]:     info: cib_process_request: Completed cib_apply_diff operation for section status: OK (rc=0, origin=vm1/crmd/52, version=0.8.3)
Oct 15 15:16:22 vm2 cib[9159]:     info: cib_process_request: Completed cib_apply_diff operation for section //node_state[@uname='vm1']/lrm: OK (rc=0, origin=vm1/crmd/53, version=0.8.4)
Oct 15 15:16:22 vm2 cib[9159]:     info: cib_process_request: Completed cib_apply_diff operation for section status: OK (rc=0, origin=vm1/crmd/54, version=0.8.5)
Oct 15 15:16:22 vm2 cib[9159]:     info: cib_process_request: Completed cib_apply_diff operation for section //node_state[@uname='vm2']/lrm: OK (rc=0, origin=vm1/crmd/55, version=0.8.6)
Oct 15 15:16:22 vm2 cib[9180]:     info: write_cib_contents: Archived previous version as /var/lib/pacemaker/cib/cib-5.raw
Oct 15 15:16:22 vm2 cib[9159]:     info: cib_process_request: Completed cib_apply_diff operation for section status: OK (rc=0, origin=vm1/crmd/56, version=0.8.7)
Oct 15 15:16:22 vm2 cib[9180]:     info: write_cib_contents: Wrote version 0.8.0 of the CIB to disk (digest: b0ac20e1d3cc6eb5c3645ab477efa91f)
Oct 15 15:16:22 vm2 cib[9180]:     info: retrieveCib: Reading cluster configuration from: /var/lib/pacemaker/cib/cib.YUhMlC (digest: /var/lib/pacemaker/cib/cib.ykdmt2)
Oct 15 15:16:22 vm2 stonith-ng[9160]:   notice: stonith_device_register: Added 'f1' to the device list (1 active devices)
Oct 15 15:16:22 vm2 stonith-ng[9160]:     info: cib_device_update: Device f2 is allowed on vm2: score=0
Oct 15 15:16:22 vm2 stonith-ng[9160]:     info: stonith_action_create: Initiating action metadata for agent fence_legacy (target=(null))
Oct 15 15:16:23 vm2 stonith-ng[9160]:   notice: stonith_device_register: Added 'f2' to the device list (2 active devices)
Oct 15 15:16:24 vm2 lrmd[9161]:     info: process_lrmd_get_rsc_info: Resource 'pDummy' not found (0 active resources)
Oct 15 15:16:24 vm2 lrmd[9161]:     info: process_lrmd_rsc_register: Added 'pDummy' to the rsc list (1 active resources)
Oct 15 15:16:24 vm2 crmd[9164]:     info: do_lrm_rsc_op: Performing key=8:1:7:cffe5b98-3c92-4ed3-8992-426ef00df4ed op=pDummy_monitor_0
Oct 15 15:16:24 vm2 lrmd[9161]:     info: process_lrmd_get_rsc_info: Resource 'f1' not found (1 active resources)
Oct 15 15:16:24 vm2 lrmd[9161]:     info: process_lrmd_rsc_register: Added 'f1' to the rsc list (2 active resources)
Oct 15 15:16:24 vm2 crmd[9164]:     info: do_lrm_rsc_op: Performing key=9:1:7:cffe5b98-3c92-4ed3-8992-426ef00df4ed op=f1_monitor_0
Oct 15 15:16:24 vm2 stonith-ng[9160]:     info: crm_client_new: Connecting 0x1604c30 for uid=0 gid=0 pid=9161 id=e2934c5f-55c7-4b9c-8f94-f26c1faabe8d
Oct 15 15:16:24 vm2 stonith-ng[9160]:     info: stonith_command: Processed register from lrmd.9161: OK (0)
Oct 15 15:16:24 vm2 stonith-ng[9160]:     info: stonith_command: Processed st_notify from lrmd.9161: OK (0)
Oct 15 15:16:24 vm2 lrmd[9161]:     info: process_lrmd_get_rsc_info: Resource 'f2' not found (2 active resources)
Oct 15 15:16:24 vm2 lrmd[9161]:     info: process_lrmd_rsc_register: Added 'f2' to the rsc list (3 active resources)
Oct 15 15:16:24 vm2 crmd[9164]:     info: do_lrm_rsc_op: Performing key=10:1:7:cffe5b98-3c92-4ed3-8992-426ef00df4ed op=f2_monitor_0
Oct 15 15:16:24 vm2 Dummy(pDummy)[9182]: DEBUG: pDummy monitor : 7
Oct 15 15:16:24 vm2 crmd[9164]:     info: process_lrm_event: LRM operation f1_monitor_0 (call=9, rc=7, cib-update=13, confirmed=true) not running
Oct 15 15:16:24 vm2 cib[9159]:     info: cib_process_request: Forwarding cib_modify operation for section status to master (origin=local/crmd/13)
Oct 15 15:16:24 vm2 cib[9159]:     info: cib_process_request: Completed cib_apply_diff operation for section status: OK (rc=0, origin=vm1/crmd/13, version=0.8.8)
Oct 15 15:16:24 vm2 cib[9159]:     info: cib_process_request: Completed cib_apply_diff operation for section status: OK (rc=0, origin=vm1/crmd/13, version=0.8.9)
Oct 15 15:16:24 vm2 cib[9159]:     info: cib_process_request: Completed cib_apply_diff operation for section status: OK (rc=0, origin=vm1/crmd/61, version=0.8.10)
Oct 15 15:16:24 vm2 crmd[9164]:     info: process_lrm_event: LRM operation f2_monitor_0 (call=13, rc=7, cib-update=14, confirmed=true) not running
Oct 15 15:16:24 vm2 cib[9159]:     info: cib_process_request: Forwarding cib_modify operation for section status to master (origin=local/crmd/14)
Oct 15 15:16:24 vm2 cib[9159]:     info: cib_process_request: Completed cib_apply_diff operation for section status: OK (rc=0, origin=vm1/crmd/14, version=0.8.11)
Oct 15 15:16:24 vm2 cib[9159]:     info: cib_process_request: Completed cib_apply_diff operation for section status: OK (rc=0, origin=vm1/crmd/14, version=0.8.12)
Oct 15 15:16:24 vm2 crmd[9164]:     info: services_os_action_execute: Managed Dummy_meta-data_0 process 9216 exited with rc=0
Oct 15 15:16:24 vm2 cib[9159]:     info: cib_process_request: Forwarding cib_modify operation for section status to master (origin=local/crmd/15)
Oct 15 15:16:24 vm2 crmd[9164]:   notice: process_lrm_event: LRM operation pDummy_monitor_0 (call=5, rc=7, cib-update=15, confirmed=true) not running
Oct 15 15:16:24 vm2 cib[9159]:     info: cib_process_request: Completed cib_apply_diff operation for section status: OK (rc=0, origin=vm1/crmd/15, version=0.8.13)
Oct 15 15:16:24 vm2 cib[9159]:     info: cib_process_request: Completed cib_apply_diff operation for section status: OK (rc=0, origin=vm1/crmd/15, version=0.8.14)
Oct 15 15:16:24 vm2 cib[9159]:     info: cib_process_request: Completed cib_apply_diff operation for section status: OK (rc=0, origin=vm1/crmd/62, version=0.8.15)
Oct 15 15:16:24 vm2 cib[9159]:     info: cib_process_request: Completed cib_apply_diff operation for section status: OK (rc=0, origin=vm1/crmd/63, version=0.8.16)
Oct 15 15:16:24 vm2 attrd[9162]:     info: attrd_client_message: Broadcasting probe_complete[vm2] = true
Oct 15 15:16:24 vm2 cib[9159]:     info: cib_process_request: Completed cib_apply_diff operation for section status: OK (rc=0, origin=vm1/crmd/16, version=0.8.17)
Oct 15 15:16:24 vm2 cib[9159]:     info: cib_process_request: Completed cib_apply_diff operation for section status: OK (rc=0, origin=vm1/crmd/17, version=0.8.18)
Oct 15 15:16:26 vm2 cib[9159]:     info: cib_process_request: Completed cib_apply_diff operation for section status: OK (rc=0, origin=vm1/crmd/64, version=0.8.19)
Oct 15 15:16:27 vm2 cib[9159]:     info: cib_process_request: Completed cib_apply_diff operation for section status: OK (rc=0, origin=vm1/crmd/65, version=0.8.20)
Oct 15 15:16:28 vm2 cib[9159]:     info: cib_process_request: Completed cib_apply_diff operation for section status: OK (rc=0, origin=vm1/crmd/66, version=0.8.21)
Oct 15 15:16:29 vm2 cib[9159]:     info: cib_process_request: Completed cib_apply_diff operation for section status: OK (rc=0, origin=vm1/crmd/67, version=0.8.22)
Oct 15 15:17:04 vm2 cib[9159]:     info: cib_process_request: Completed cib_apply_diff operation for section status: OK (rc=0, origin=vm1/crmd/18, version=0.8.23)
Oct 15 15:17:04 vm2 cib[9159]:     info: cib_process_request: Completed cib_apply_diff operation for section status: OK (rc=0, origin=vm1/attrd/9, version=0.8.24)
Oct 15 15:17:04 vm2 cib[9159]:     info: cib_process_request: Completed cib_apply_diff operation for section status: OK (rc=0, origin=vm1/attrd/10, version=0.8.25)
Oct 15 15:17:06 vm2 stonith-ng[9160]:     info: stonith_command: Processed st_query from vm1: OK (0)
Oct 15 15:17:06 vm2 stonith-ng[9160]:     info: stonith_action_create: Initiating action list for agent fence_legacy (target=(null))
Oct 15 15:17:06 vm2 stonith-ng[9160]:     info: stonith_action_create: Initiating action list for agent fence_legacy (target=(null))
Oct 15 15:17:06 vm2 stonith-ng[9160]:     info: dynamic_list_search_cb: Refreshing port list for f2
Oct 15 15:17:06 vm2 stonith-ng[9160]:     info: dynamic_list_search_cb: Refreshing port list for f1
Oct 15 15:17:06 vm2 stonith-ng[9160]:     info: stonith_command: Processed st_fence from vm1: Operation now in progress (-115)
Oct 15 15:17:06 vm2 stonith-ng[9160]:     info: stonith_action_create: Initiating action reboot for agent fence_legacy (target=vm3)
Oct 15 15:17:10 vm2 stonith: [9244]: CRIT: external_reset_req: 'libvirt reset' for host vm3 failed with rc 1
Oct 15 15:17:10 vm2 stonith-ng[9160]:     info: internal_stonith_action_execute: Attempt 2 to execute fence_legacy (reboot). remaining timeout is 56
Oct 15 15:17:16 vm2 stonith: [9280]: CRIT: external_reset_req: 'libvirt reset' for host vm3 failed with rc 1
Oct 15 15:17:16 vm2 stonith-ng[9160]:     info: update_remaining_timeout: Attempted to execute agent fence_legacy (reboot) the maximum number of times (2) allowed
Oct 15 15:17:16 vm2 stonith-ng[9160]:    error: log_operation: Operation 'reboot' [9273] (call 2 from crmd.14874) for host 'vm3' with device 'f1' returned: -201 (Generic Pacemaker error)
Oct 15 15:17:16 vm2 stonith-ng[9160]:  warning: log_operation: f1:9273 [ Performing: stonith -t external/libvirt -T reset vm3 ]
Oct 15 15:17:16 vm2 stonith-ng[9160]:  warning: log_operation: f1:9273 [ failed: vm3 5 ]
Oct 15 15:17:35 vm2 stonith-ng[9160]:   notice: remote_op_done: Operation reboot of vm3 by vm1 for crmd.14874@vm1.c9b3e4f1: Generic Pacemaker error
Oct 15 15:17:35 vm2 stonith-ng[9160]:     info: stonith_command: Processed st_notify reply from vm1: OK (0)
Oct 15 15:17:35 vm2 crmd[9164]:   notice: tengine_stonith_notify: Peer vm3 was not terminated (reboot) by vm1 for vm1: Generic Pacemaker error (ref=c9b3e4f1-269f-48a8-ba27-c7573dead8e2) by client crmd.14874
Oct 15 15:17:37 vm2 stonith-ng[9160]:   notice: can_fence_host_with_device: f1 can fence vm3: dynamic-list
Oct 15 15:17:37 vm2 stonith-ng[9160]:   notice: can_fence_host_with_device: f2 can fence vm3: dynamic-list
Oct 15 15:17:37 vm2 stonith-ng[9160]:     info: stonith_command: Processed st_query from vm1: OK (0)
Oct 15 15:17:37 vm2 stonith-ng[9160]:     info: stonith_command: Processed st_fence from vm1: Operation now in progress (-115)
Oct 15 15:17:37 vm2 stonith-ng[9160]:     info: stonith_action_create: Initiating action reboot for agent fence_legacy (target=vm3)
Oct 15 15:17:41 vm2 stonith: [9578]: CRIT: external_reset_req: 'libvirt reset' for host vm3 failed with rc 1
Oct 15 15:17:41 vm2 stonith-ng[9160]:     info: internal_stonith_action_execute: Attempt 2 to execute fence_legacy (reboot). remaining timeout is 56
Oct 15 15:17:46 vm2 stonith: [9589]: CRIT: external_reset_req: 'libvirt reset' for host vm3 failed with rc 1
Oct 15 15:17:46 vm2 stonith-ng[9160]:     info: update_remaining_timeout: Attempted to execute agent fence_legacy (reboot) the maximum number of times (2) allowed
Oct 15 15:17:46 vm2 stonith-ng[9160]:    error: log_operation: Operation 'reboot' [9588] (call 3 from crmd.14874) for host 'vm3' with device 'f1' returned: -201 (Generic Pacemaker error)
Oct 15 15:17:46 vm2 stonith-ng[9160]:  warning: log_operation: f1:9588 [ Performing: stonith -t external/libvirt -T reset vm3 ]
Oct 15 15:17:46 vm2 stonith-ng[9160]:  warning: log_operation: f1:9588 [ failed: vm3 5 ]
Oct 15 15:18:06 vm2 stonith-ng[9160]:   notice: remote_op_done: Operation reboot of vm3 by vm1 for crmd.14874@vm1.d5bd243d: Generic Pacemaker error
Oct 15 15:18:06 vm2 stonith-ng[9160]:     info: stonith_command: Processed st_notify reply from vm1: OK (0)
Oct 15 15:18:06 vm2 crmd[9164]:   notice: tengine_stonith_notify: Peer vm3 was not terminated (reboot) by vm1 for vm1: Generic Pacemaker error (ref=d5bd243d-da15-4098-80ab-c9f1bce3827f) by client crmd.14874
Oct 15 15:18:08 vm2 stonith-ng[9160]:     info: stonith_command: Processed st_query from vm1: OK (0)
Oct 15 15:18:08 vm2 stonith-ng[9160]:     info: stonith_action_create: Initiating action list for agent fence_legacy (target=(null))
Oct 15 15:18:08 vm2 stonith-ng[9160]:     info: stonith_action_create: Initiating action list for agent fence_legacy (target=(null))
Oct 15 15:18:08 vm2 stonith-ng[9160]:     info: dynamic_list_search_cb: Refreshing port list for f2
Oct 15 15:18:08 vm2 stonith-ng[9160]:     info: dynamic_list_search_cb: Refreshing port list for f1
Oct 15 15:18:08 vm2 stonith-ng[9160]:     info: stonith_command: Processed st_fence from vm1: Operation now in progress (-115)
Oct 15 15:18:08 vm2 stonith-ng[9160]:     info: stonith_action_create: Initiating action reboot for agent fence_legacy (target=vm3)
Oct 15 15:18:12 vm2 stonith: [9623]: CRIT: external_reset_req: 'libvirt reset' for host vm3 failed with rc 1
Oct 15 15:18:12 vm2 stonith-ng[9160]:     info: internal_stonith_action_execute: Attempt 2 to execute fence_legacy (reboot). remaining timeout is 56
Oct 15 15:18:17 vm2 stonith: [9640]: CRIT: external_reset_req: 'libvirt reset' for host vm3 failed with rc 1
Oct 15 15:18:17 vm2 stonith-ng[9160]:     info: update_remaining_timeout: Attempted to execute agent fence_legacy (reboot) the maximum number of times (2) allowed
Oct 15 15:18:17 vm2 stonith-ng[9160]:    error: log_operation: Operation 'reboot' [9639] (call 4 from crmd.14874) for host 'vm3' with device 'f1' returned: -201 (Generic Pacemaker error)
Oct 15 15:18:17 vm2 stonith-ng[9160]:  warning: log_operation: f1:9639 [ Performing: stonith -t external/libvirt -T reset vm3 ]
Oct 15 15:18:17 vm2 stonith-ng[9160]:  warning: log_operation: f1:9639 [ failed: vm3 5 ]
Oct 15 15:18:36 vm2 stonith-ng[9160]:   notice: remote_op_done: Operation reboot of vm3 by vm1 for crmd.14874@vm1.61dd2280: Generic Pacemaker error