[Pacemaker] Help needed crm did not write the resources

Winkler Norbert norbert.winkler at karmel.at
Tue Mar 16 13:38:28 EDT 2010


Norbert Winkler schrieb:
> Here are the logs:
> 1. corosync log
> 2. warn log
> 3. messages log
> ----------------------------------------------------------------------------------------------------------------------------------------
> 1.
>      Mar 17 07:05:25 corosync [MAIN  ] Corosync Cluster Engine 
> ('1.2.0'): started and ready to provide service.
> Mar 16 07:05:25 corosync [MAIN  ] Corosync built-in features: nss rdma
> Mar 16 07:05:25 corosync [MAIN  ] Successfully read main configuration 
> file '/etc/corosync/corosync.conf'.
> Mar 16 07:05:25 corosync [TOTEM ] Initializing transport (UDP/IP).
> Mar 16 07:05:25 corosync [TOTEM ] Initializing transmit/receive 
> security: libtomcrypt SOBER128/SHA1HMAC (mode 0).
> Mar 16 07:05:25 corosync [MAIN  ] Compatibility mode set to 
> whitetank.  Using V1 and V2 of the synchronization engine.
> Mar 16 07:05:25 corosync [TOTEM ] The network interface [192.168.1.21] 
> is now up.
> Mar 16 07:05:25 corosync [pcmk  ] info: process_ais_conf: Reading 
> configure
> Mar 16 07:05:25 corosync [pcmk  ] info: config_find_init: Local 
> handle: 5650605097994944514 for logging
> Mar 16 07:05:25 corosync [pcmk  ] info: config_find_next: Processing 
> additional logging options...
> Mar 16 07:05:25 corosync [pcmk  ] info: get_config_opt: Found 'off' 
> for option: debug
> Mar 16 07:05:25 corosync [pcmk  ] info: get_config_opt: Defaulting to 
> 'off' for option: to_file
> Mar 16 07:05:25 corosync [pcmk  ] info: get_config_opt: Defaulting to 
> 'daemon' for option: syslog_facility
> Mar 16 07:05:25 corosync [pcmk  ] info: config_find_init: Local 
> handle: 2730409743423111171 for service
> Mar 16 07:05:25 corosync [pcmk  ] info: config_find_next: Processing 
> additional service options...
> Mar 16 07:05:25 corosync [pcmk  ] info: get_config_opt: Defaulting to 
> 'pcmk' for option: clustername
> Mar 16 07:05:25 corosync [pcmk  ] info: get_config_opt: Found 'yes' 
> for option: use_logd
> Mar 16 07:05:25 corosync [pcmk  ] info: get_config_opt: Found 'yes' 
> for option: use_mgmtd
> Mar 16 07:05:25 corosync [pcmk  ] info: pcmk_startup: CRM: Initialized
> Mar 16 07:05:25 corosync [pcmk  ] Logging: Initialized pcmk_startup
> Mar 16 07:05:25 corosync [pcmk  ] info: pcmk_startup: Maximum core 
> file size is: 18446744073709551615
> Mar 16 07:05:25 corosync [pcmk  ] info: pcmk_startup: Service: 9
> Mar 16 07:05:25 corosync [pcmk  ] info: pcmk_startup: Local hostname: 
> vmstore1
> Mar 16 07:05:25 corosync [pcmk  ] info: pcmk_update_nodeid: Local node 
> id: 352430272
> Mar 16 07:05:25 corosync [pcmk  ] info: update_member: Creating entry 
> for node 352430272 born on 0
> Mar 16 07:05:25 corosync [pcmk  ] info: update_member: 0x63d2a0 Node 
> 352430272 now known as vmstore1 (was: (null))
> Mar 16 07:05:25 corosync [pcmk  ] info: update_member: Node vmstore1 
> now has 1 quorum votes (was 0)
> Mar 16 07:05:25 corosync [pcmk  ] info: update_member: Node 
> 352430272/vmstore1 is now: member
> Mar 16 07:05:25 corosync [pcmk  ] info: spawn_child: Forked child 
> 21696 for process stonithd
> Mar 16 07:05:25 corosync [pcmk  ] info: spawn_child: Forked child 
> 21697 for process cib
> Mar 16 07:05:25 corosync [pcmk  ] info: spawn_child: Forked child 
> 21698 for process lrmd
> Mar 16 07:05:25 corosync [pcmk  ] info: spawn_child: Forked child 
> 21700 for process attrd
> Mar 16 07:05:25 corosync [pcmk  ] info: spawn_child: Forked child 
> 21701 for process pengine
> Mar 16 07:05:25 corosync [pcmk  ] info: spawn_child: Forked child 
> 21702 for process crmd
> Mar 16 07:05:25 corosync [pcmk  ] info: spawn_child: Forked child 
> 21703 for process mgmtd
> Mar 16 07:05:25 corosync [SERV  ] Service engine loaded: Pacemaker 
> Cluster Manager 1.0.7
> Mar 16 07:05:25 corosync [SERV  ] Service engine loaded: corosync 
> extended virtual synchrony service
> Mar 16 07:05:25 corosync [SERV  ] Service engine loaded: corosync 
> configuration service
> Mar 16 07:05:25 corosync [SERV  ] Service engine loaded: corosync 
> cluster closed process group service v1.01
> Mar 16 07:05:25 corosync [SERV  ] Service engine loaded: corosync 
> cluster config database access v1.01
> Mar 16 07:05:25 corosync [SERV  ] Service engine loaded: corosync 
> profile loading service
> Mar 16 07:05:25 corosync [SERV  ] Service engine loaded: corosync 
> cluster quorum service v0.1
> Mar 16 07:05:25 corosync [pcmk  ] notice: pcmk_peer_update: 
> Transitional membership event on ring 11508: memb=0, new=0, lost=0
> Mar 16 07:05:25 corosync [pcmk  ] notice: pcmk_peer_update: Stable 
> membership event on ring 11508: memb=1, new=1, lost=0
> Mar 16 07:05:25 corosync [pcmk  ] info: pcmk_peer_update: NEW:  
> vmstore1 352430272
> Mar 16 07:05:25 corosync [pcmk  ] info: pcmk_peer_update: MEMB: 
> vmstore1 352430272
> Mar 16 07:05:25 corosync [pcmk  ] info: update_member: Node vmstore1 
> now has process list: 00000000000000000000000000053312 (340754)
> Mar 16 07:05:25 corosync [TOTEM ] A processor joined or left the 
> membership and a new membership was formed.
> Mar 16 07:05:25 corosync [MAIN  ] Completed service synchronization, 
> ready to provide service.
> Mar 16 07:05:25 corosync [pcmk  ] info: pcmk_ipc: Recorded connection 
> 0x6744e0 for stonithd/21696
> Mar 16 07:05:25 corosync [pcmk  ] info: pcmk_ipc: Recorded connection 
> 0x676380 for attrd/21700
> Mar 16 07:05:25 corosync [pcmk  ] info: pcmk_ipc: Recorded connection 
> 0x676ff0 for cib/21697
> Mar 16 07:05:25 corosync [pcmk  ] info: pcmk_ipc: Sending membership 
> update 11508 to cib
> Mar 16 07:05:25 corosync [pcmk  ] info: pcmk_ipc: Recorded connection 
> 0x677ce0 for crmd/21702
> Mar 16 07:05:25 corosync [pcmk  ] info: pcmk_ipc: Sending membership 
> update 11508 to crmd
> Mar 16 07:05:25 corosync [pcmk  ] info: update_expected_votes: 
> Expected quorum votes 1024 -> 2
> Mar 16 07:05:25 corosync [pcmk  ] notice: pcmk_peer_update: 
> Transitional membership event on ring 11512: memb=1, new=0, lost=0
> Mar 16 07:05:25 corosync [pcmk  ] info: pcmk_peer_update: memb: 
> vmstore1 352430272
>    
> ----------------------------------------------------------------------------------------
> 2. warn log
>
> Mar 16 07:05:24 vmstore1 cib: [15739]: WARN: send_ipc_message: IPC 
> Channel to 15745 is not connected
>
> Mar 16 07:05:24 vmstore1 cib: [15739]: WARN: 
> send_via_callback_channel: Delivery of reply to client 
> 15745/5a3d0106-376a-400b-b3b1-6fe2b6e96fac failed
>
> Mar 16 07:05:24 vmstore1 cib: [15739]: WARN: do_local_notify: A-Sync 
> reply to crmd failed: reply failed
>
> Mar 16 07:05:24 vmstore1 corosync[15730]: [MAIN ] Corosync Cluster 
> Engine exiting with status -1 at main.c:158.
>
> Mar 16 07:05:25 vmstore1 cib: [21697]: WARN: cib_diff_notify: 
> Local-only Change (client:crmd, call: 114): 0.0.0 (Application of an 
> update diff failed, requesting a full refresh)
>
> Mar 16 07:05:25 vmstore1 cib: [21697]: WARN: cib_server_process_diff: 
> Not applying diff 0.101.2 -> 0.101.3 (sync in progress)
>
> Mar 16 07:05:25 vmstore1 cib: [21697]: WARN: cib_server_process_diff: 
> Not applying diff 0.101.3 -> 0.101.4 (sync in progress)
>
> Mar 16 07:05:25 vmstore1 cib: [21697]: WARN: cib_server_process_diff: 
> Not applying diff 0.101.4 -> 0.102.1 (sync in progress)
>
> -----------------------------------------------------------------------------------------------------------------------------------------------------------
> 3. messages log
>
> Mar 16 07:02:12 vmstore1 cibadmin: [21660]: info: Invoked: cibadmin -C 
> -o resources -x resourceIP.xml
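(For context on the `cibadmin -C -o resources -x resourceIP.xml` call above: the file is not shown in this thread, but a resource file passed to `cibadmin -C -o resources` would typically contain a single primitive definition. A minimal sketch, assuming the resource is a virtual IP managed by the `ocf:heartbeat:IPaddr2` agent; the `id` values and the address are illustrative, not taken from the poster's setup:)

```xml
<!-- Hypothetical resourceIP.xml: one primitive to be created
     in the <resources> section of the CIB via cibadmin -C.
     The ids and the ip value below are made-up examples. -->
<primitive id="resourceIP" class="ocf" provider="heartbeat" type="IPaddr2">
  <instance_attributes id="resourceIP-instance_attributes">
    <nvpair id="resourceIP-ip" name="ip" value="192.168.1.100"/>
  </instance_attributes>
</primitive>
```

Note that changes made this way land in the live CIB, which is then synchronized between the nodes; the "Not applying diff ... (sync in progress)" and "Replaced: 0.100.0 -> 0.102.1 from vmstore2" lines in the logs below show the rejoining node discarding its local version in favour of the DC's copy, which is consistent with a locally created resource disappearing after a restart.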
>
> Mar 16 07:02:38 vmstore1 cibadmin: [21662]: info: Invoked: cibadmin -Q
>
> Mar 16 07:05:24 vmstore1 corosync[15730]: [SERV ] Unloading all 
> Corosync service engines.
>
> Mar 16 07:05:24 vmstore1 corosync[15730]: [pcmk ] notice: 
> pcmk_shutdown: Shuting down Pacemaker
>
> Mar 16 07:05:24 vmstore1 corosync[15730]: [pcmk ] notice: stop_child: 
> Sent -15 to mgmtd: [15746]
>
> Mar 16 07:05:24 vmstore1 corosync[15730]: [pcmk ] notice: 
> pcmk_shutdown: mgmtd confirmed stopped
>
> Mar 16 07:05:24 vmstore1 corosync[15730]: [pcmk ] notice: stop_child: 
> Sent -15 to crmd: [15745]
>
> Mar 16 07:05:24 vmstore1 mgmtd: [15746]: info: mgmtd is shutting down
>
> Mar 16 07:05:24 vmstore1 crmd: [15745]: info: crm_signal_dispatch: 
> Invoking handler for signal 15: Terminated
>
> Mar 16 07:05:24 vmstore1 mgmtd: [15746]: debug: [mgmtd] stopped
>
> Mar 16 07:05:24 vmstore1 crmd: [15745]: info: crm_shutdown: Requesting 
> shutdown
>
> Mar 16 07:05:24 vmstore1 crmd: [15745]: info: do_shutdown_req: Sending 
> shutdown request to DC: vmstore2
>
> Mar 16 07:05:24 vmstore1 attrd: [15741]: info: attrd_ais_dispatch: 
> Update relayed from vmstore2
>
> Mar 16 07:05:24 vmstore1 attrd: [15741]: info: attrd_trigger_update: 
> Sending flush op to all hosts for: shutdown (1268719524)
>
> Mar 16 07:05:24 vmstore1 attrd: [15741]: info: attrd_perform_update: 
> Sent update 11: shutdown=1268719524
>
> Mar 16 07:05:24 vmstore1 corosync[15730]: [pcmk ] notice: 
> pcmk_shutdown: crmd confirmed stopped
>
> Mar 16 07:05:24 vmstore1 corosync[15730]: [pcmk ] notice: stop_child: 
> Sent -15 to pengine: [15742]
>
> Mar 16 07:05:24 vmstore1 corosync[15730]: [pcmk ] notice: 
> pcmk_shutdown: pengine confirmed stopped
>
> Mar 16 07:05:24 vmstore1 corosync[15730]: [pcmk ] notice: stop_child: 
> Sent -15 to attrd: [15741]
>
> Mar 16 07:05:24 vmstore1 corosync[15730]: [pcmk ] notice: 
> pcmk_shutdown: attrd confirmed stopped
>
> Mar 16 07:05:24 vmstore1 corosync[15730]: [pcmk ] notice: stop_child: 
> Sent -15 to lrmd: [15740]
>
> Mar 16 07:05:24 vmstore1 pengine: [15742]: info: crm_signal_dispatch: 
> Invoking handler for signal 15: Terminated
>
> Mar 16 07:05:24 vmstore1 corosync[15730]: [pcmk ] notice: 
> pcmk_shutdown: lrmd confirmed stopped
>
> Mar 16 07:05:24 vmstore1 corosync[15730]: [pcmk ] notice: stop_child: 
> Sent -15 to cib: [15739]
>
> Mar 16 07:05:24 vmstore1 cib: [15739]: WARN: send_ipc_message: IPC 
> Channel to 15745 is not connected
>
> Mar 16 07:05:24 vmstore1 attrd: [15741]: info: crm_signal_dispatch: 
> Invoking handler for signal 15: Terminated
>
> Mar 16 07:05:24 vmstore1 crmd: [15745]: info: handle_request: Shutting 
> down
>
> Mar 16 07:05:24 vmstore1 cib: [15739]: WARN: 
> send_via_callback_channel: Delivery of reply to client 
> 15745/5a3d0106-376a-400b-b3b1-6fe2b6e96fac failed
>
> Mar 16 07:05:24 vmstore1 attrd: [15741]: info: attrd_shutdown: Exiting
>
> Mar 16 07:05:24 vmstore1 crmd: [15745]: info: do_state_transition: 
> State transition S_NOT_DC -> S_STOPPING [ input=I_STOP 
> cause=C_HA_MESSAGE origin=route_message ]
>
> Mar 16 07:05:24 vmstore1 cib: [15739]: WARN: do_local_notify: A-Sync 
> reply to crmd failed: reply failed
>
> Mar 16 07:05:24 vmstore1 attrd: [15741]: info: main: Exiting...
>
> Mar 16 07:05:24 vmstore1 crmd: [15745]: info: do_lrm_control: 
> Disconnected from the LRM
>
> Mar 16 07:05:24 vmstore1 attrd: [15741]: info: 
> attrd_cib_connection_destroy: Connection to the CIB terminated...
>
> Mar 16 07:05:24 vmstore1 corosync[15730]: [pcmk ] notice: 
> pcmk_shutdown: cib confirmed stopped
>
> Mar 16 07:05:24 vmstore1 crmd: [15745]: info: do_ha_control: 
> Disconnected from OpenAIS
>
> Mar 16 07:05:24 vmstore1 corosync[15730]: [pcmk ] notice: stop_child: 
> Sent -15 to stonithd: [15738]
>
> Mar 16 07:05:24 vmstore1 lrmd: [15740]: info: lrmd is shutting down
>
> Mar 16 07:05:24 vmstore1 corosync[15730]: [pcmk ] info: pcmk_ipc_exit: 
> Client stonithd (conn=0x674240, async-conn=0x674240) left
>
> Mar 16 07:05:24 vmstore1 crmd: [15745]: info: do_cib_control: 
> Disconnecting CIB
>
> Mar 16 07:05:24 vmstore1 crmd: [15745]: info: 
> crmd_cib_connection_destroy: Connection to the CIB terminated...
>
> Mar 16 07:05:24 vmstore1 crmd: [15745]: info: do_exit: Performing 
> A_EXIT_0 - gracefully exiting the CRMd
>
> Mar 16 07:05:24 vmstore1 crmd: [15745]: info: free_mem: Dropping 
> I_TERMINATE: [ state=S_STOPPING cause=C_FSA_INTERNAL origin=do_stop ]
>
> Mar 16 07:05:24 vmstore1 crmd: [15745]: info: do_exit: [crmd] stopped (0)
>
> Mar 16 07:05:24 vmstore1 cib: [15739]: info: crm_signal_dispatch: 
> Invoking handler for signal 15: Terminated
>
> Mar 16 07:05:24 vmstore1 cib: [15739]: info: cib_shutdown: 
> Disconnected 0 clients
>
> Mar 16 07:05:24 vmstore1 cib: [15739]: info: cib_process_disconnect: 
> All clients disconnected...
>
> Mar 16 07:05:24 vmstore1 cib: [15739]: info: initiate_exit: Sending 
> disconnect notification to 2 peers...
>
> Mar 16 07:05:24 vmstore1 cib: [15739]: info: cib_process_shutdown_req: 
> Shutdown ACK from vmstore2
>
> Mar 16 07:05:24 vmstore1 cib: [15739]: info: 
> cib_ha_connection_destroy: Heartbeat disconnection complete... exiting
>
> Mar 16 07:05:24 vmstore1 cib: [15739]: info: 
> cib_ha_connection_destroy: Exiting...
>
> Mar 16 07:05:24 vmstore1 cib: [15739]: info: cib_process_request: 
> Operation complete: op cib_shutdown_req for section 'all' 
> (origin=vmstore2/vmstore2/(null), version=0.0.0): ok (rc=0)
>
> Mar 16 07:05:24 vmstore1 cib: [15739]: info: main: Done
>
> Mar 16 07:05:24 vmstore1 stonithd: [15738]: notice: 
> /usr/lib64/heartbeat/stonithd normally quit.
>
> Mar 16 07:05:24 vmstore1 corosync[15730]: [pcmk ] notice: 
> pcmk_shutdown: stonithd confirmed stopped
>
> Mar 16 07:05:24 vmstore1 corosync[15730]: [pcmk ] info: update_member: 
> Node vmstore1 now has process list: 00000000000000000000000000000002 (2)
>
> Mar 16 07:05:24 vmstore1 corosync[15730]: [pcmk ] notice: 
> pcmk_shutdown: Shutdown complete
>
> Mar 16 07:05:24 vmstore1 corosync[15730]: [SERV ] Service engine 
> unloaded: Pacemaker Cluster Manager 1.0.7
>
> Mar 16 07:05:24 vmstore1 corosync[15730]: [SERV ] Service engine 
> unloaded: corosync extended virtual synchrony service
>
> Mar 16 07:05:24 vmstore1 corosync[15730]: [SERV ] Service engine 
> unloaded: corosync configuration service
>
> Mar 16 07:05:24 vmstore1 corosync[15730]: [SERV ] Service engine 
> unloaded: corosync cluster closed process group service v1.01
>
> Mar 16 07:05:24 vmstore1 corosync[15730]: [SERV ] Service engine 
> unloaded: corosync cluster config database access v1.01
>
> Mar 16 07:05:24 vmstore1 corosync[15730]: [SERV ] Service engine 
> unloaded: corosync profile loading service
>
> Mar 16 07:05:24 vmstore1 corosync[15730]: [SERV ] Service engine 
> unloaded: corosync cluster quorum service v0.1
>
> Mar 16 07:05:24 vmstore1 corosync[15730]: [MAIN ] Corosync Cluster 
> Engine exiting with status -1 at main.c:158.
>
> Mar 16 07:05:25 vmstore1 corosync[21689]: [MAIN ] Corosync Cluster 
> Engine ('1.2.0'): started and ready to provide service.
>
> Mar 16 07:05:25 vmstore1 corosync[21689]: [MAIN ] Corosync built-in 
> features: nss rdma
>
> Mar 16 07:05:25 vmstore1 corosync[21689]: [MAIN ] Successfully read 
> main configuration file '/etc/corosync/corosync.conf'.
>
> Mar 16 07:05:25 vmstore1 corosync[21689]: [TOTEM ] Initializing 
> transport (UDP/IP).
>
> Mar 16 07:05:25 vmstore1 corosync[21689]: [TOTEM ] Initializing 
> transmit/receive security: libtomcrypt SOBER128/SHA1HMAC (mode 0).
>
> Mar 16 07:05:25 vmstore1 corosync[21689]: [MAIN ] Compatibility mode 
> set to whitetank. Using V1 and V2 of the synchronization engine.
>
> Mar 16 07:05:25 vmstore1 corosync[21689]: [TOTEM ] The network 
> interface [192.168.1.21] is now up.
>
> Mar 16 07:05:25 vmstore1 corosync[21689]: [pcmk ] info: 
> process_ais_conf: Reading configure
>
> Mar 16 07:05:25 vmstore1 corosync[21689]: [pcmk ] info: 
> config_find_init: Local handle: 5650605097994944514 for logging
>
> Mar 16 07:05:25 vmstore1 corosync[21689]: [pcmk ] info: 
> config_find_next: Processing additional logging options...
>
> Mar 16 07:05:25 vmstore1 corosync[21689]: [pcmk ] info: 
> get_config_opt: Found 'off' for option: debug
>
> Mar 16 07:05:25 vmstore1 corosync[21689]: [pcmk ] info: 
> get_config_opt: Defaulting to 'off' for option: to_file
>
> Mar 16 07:05:25 vmstore1 corosync[21689]: [pcmk ] info: 
> get_config_opt: Defaulting to 'daemon' for option: syslog_facility
>
> Mar 16 07:05:25 vmstore1 corosync[21689]: [pcmk ] info: 
> config_find_init: Local handle: 2730409743423111171 for service
>
> Mar 16 07:05:25 vmstore1 corosync[21689]: [pcmk ] info: 
> config_find_next: Processing additional service options...
>
> Mar 16 07:05:25 vmstore1 corosync[21689]: [pcmk ] info: 
> get_config_opt: Defaulting to 'pcmk' for option: clustername
>
> Mar 16 07:05:25 vmstore1 corosync[21689]: [pcmk ] info: 
> get_config_opt: Found 'yes' for option: use_logd
>
> Mar 16 07:05:25 vmstore1 corosync[21689]: [pcmk ] info: 
> get_config_opt: Found 'yes' for option: use_mgmtd
>
> Mar 16 07:05:25 vmstore1 corosync[21689]: [pcmk ] info: pcmk_startup: 
> CRM: Initialized
>
> Mar 16 07:05:25 vmstore1 corosync[21689]: [pcmk ] Logging: Initialized 
> pcmk_startup
>
> Mar 16 07:05:25 vmstore1 corosync[21689]: [pcmk ] info: pcmk_startup: 
> Maximum core file size is: 18446744073709551615
>
> Mar 16 07:05:25 vmstore1 corosync[21689]: [pcmk ] info: pcmk_startup: 
> Service: 9
>
> Mar 16 07:05:25 vmstore1 corosync[21689]: [pcmk ] info: pcmk_startup: 
> Local hostname: vmstore1
>
> Mar 16 07:05:25 vmstore1 corosync[21689]: [pcmk ] info: 
> pcmk_update_nodeid: Local node id: 352430272
>
> Mar 16 07:05:25 vmstore1 corosync[21689]: [pcmk ] info: update_member: 
> Creating entry for node 352430272 born on 0
>
> Mar 16 07:05:25 vmstore1 corosync[21689]: [pcmk ] info: update_member: 
> 0x63d2a0 Node 352430272 now known as vmstore1 (was: (null))
>
> Mar 16 07:05:25 vmstore1 corosync[21689]: [pcmk ] info: update_member: 
> Node vmstore1 now has 1 quorum votes (was 0)
>
> Mar 16 07:05:25 vmstore1 corosync[21689]: [pcmk ] info: update_member: 
> Node 352430272/vmstore1 is now: member
>
> Mar 16 07:05:25 vmstore1 corosync[21689]: [pcmk ] info: spawn_child: 
> Forked child 21696 for process stonithd
>
> Mar 16 07:05:25 vmstore1 corosync[21689]: [pcmk ] info: spawn_child: 
> Forked child 21697 for process cib
>
> Mar 16 07:05:25 vmstore1 corosync[21689]: [pcmk ] info: spawn_child: 
> Forked child 21698 for process lrmd
>
> Mar 16 07:05:25 vmstore1 stonithd: [21696]: info: 
> G_main_add_SignalHandler: Added signal handler for signal 10
>
> Mar 16 07:05:25 vmstore1 lrmd: [21698]: info: 
> G_main_add_SignalHandler: Added signal handler for signal 15
>
> Mar 16 07:05:25 vmstore1 stonithd: [21696]: info: 
> G_main_add_SignalHandler: Added signal handler for signal 12
>
> Mar 16 07:05:25 vmstore1 stonithd: [21696]: info: Stack hogger failed 
> 0xffffffff
>
> Mar 16 07:05:25 vmstore1 stonithd: [21696]: info: crm_cluster_connect: 
> Connecting to OpenAIS
>
> Mar 16 07:05:25 vmstore1 stonithd: [21696]: info: init_ais_connection: 
> Creating connection to our AIS plugin
>
> Mar 16 07:05:25 vmstore1 corosync[21689]: [pcmk ] info: spawn_child: 
> Forked child 21700 for process attrd
>
> Mar 16 07:05:25 vmstore1 cib: [21697]: info: Invoked: 
> /usr/lib64/heartbeat/cib
>
> Mar 16 07:05:25 vmstore1 cib: [21697]: info: 
> G_main_add_TriggerHandler: Added signal manual handler
>
> Mar 16 07:05:25 vmstore1 cib: [21697]: info: G_main_add_SignalHandler: 
> Added signal handler for signal 17
>
> Mar 16 07:05:25 vmstore1 lrmd: [21698]: info: 
> G_main_add_SignalHandler: Added signal handler for signal 17
>
> Mar 16 07:05:25 vmstore1 corosync[21689]: [pcmk ] info: spawn_child: 
> Forked child 21701 for process pengine
>
> Mar 16 07:05:25 vmstore1 pengine: [21701]: info: Invoked: 
> /usr/lib64/heartbeat/pengine
>
> Mar 16 07:05:25 vmstore1 pengine: [21701]: info: main: Starting pengine
>
> Mar 16 07:05:25 vmstore1 lrmd: [21698]: info: 
> G_main_add_SignalHandler: Added signal handler for signal 10
>
> Mar 16 07:05:25 vmstore1 lrmd: [21698]: info: 
> G_main_add_SignalHandler: Added signal handler for signal 12
>
> Mar 16 07:05:25 vmstore1 lrmd: [21698]: info: Started.
>
> Mar 16 07:05:25 vmstore1 corosync[21689]: [pcmk ] info: spawn_child: 
> Forked child 21702 for process crmd
>
> Mar 16 07:05:25 vmstore1 corosync[21689]: [pcmk ] info: spawn_child: 
> Forked child 21703 for process mgmtd
>
> Mar 16 07:05:25 vmstore1 corosync[21689]: [SERV ] Service engine 
> loaded: Pacemaker Cluster Manager 1.0.7
>
> Mar 16 07:05:25 vmstore1 corosync[21689]: [SERV ] Service engine 
> loaded: corosync extended virtual synchrony service
>
> Mar 16 07:05:25 vmstore1 corosync[21689]: [SERV ] Service engine 
> loaded: corosync configuration service
>
> Mar 16 07:05:25 vmstore1 corosync[21689]: [SERV ] Service engine 
> loaded: corosync cluster closed process group service v1.01
>
> Mar 16 07:05:25 vmstore1 corosync[21689]: [SERV ] Service engine 
> loaded: corosync cluster config database access v1.01
>
> Mar 16 07:05:25 vmstore1 corosync[21689]: [SERV ] Service engine 
> loaded: corosync profile loading service
>
> Mar 16 07:05:25 vmstore1 corosync[21689]: [SERV ] Service engine 
> loaded: corosync cluster quorum service v0.1
>
> Mar 16 07:05:25 vmstore1 corosync[21689]: [pcmk ] notice: 
> pcmk_peer_update: Transitional membership event on ring 11508: memb=0, 
> new=0, lost=0
>
> Mar 16 07:05:25 vmstore1 corosync[21689]: [pcmk ] notice: 
> pcmk_peer_update: Stable membership event on ring 11508: memb=1, 
> new=1, lost=0
>
> Mar 16 07:05:25 vmstore1 corosync[21689]: [pcmk ] info: 
> pcmk_peer_update: NEW: vmstore1 352430272
>
> Mar 16 07:05:25 vmstore1 mgmtd: [21703]: info: 
> G_main_add_SignalHandler: Added signal handler for signal 15
>
> Mar 16 07:05:25 vmstore1 corosync[21689]: [pcmk ] info: 
> pcmk_peer_update: MEMB: vmstore1 352430272
>
> Mar 16 07:05:25 vmstore1 mgmtd: [21703]: debug: Enabling coredumps
>
> Mar 16 07:05:25 vmstore1 corosync[21689]: [pcmk ] info: update_member: 
> Node vmstore1 now has process list: 00000000000000000000000000053312 
> (340754)
>
> Mar 16 07:05:25 vmstore1 stonithd: [21696]: info: init_ais_connection: 
> AIS connection established
>
> Mar 16 07:05:25 vmstore1 corosync[21689]: [TOTEM ] A processor joined 
> or left the membership and a new membership was formed.
>
> Mar 16 07:05:25 vmstore1 stonithd: [21696]: info: get_ais_nodeid: 
> Server details: id=352430272 uname=vmstore1 cname=pcmk
>
> Mar 16 07:05:25 vmstore1 corosync[21689]: [MAIN ] Completed service 
> synchronization, ready to provide service.
>
> Mar 16 07:05:25 vmstore1 stonithd: [21696]: info: crm_new_peer: Node 
> vmstore1 now has id: 352430272
>
> Mar 16 07:05:25 vmstore1 corosync[21689]: [pcmk ] info: pcmk_ipc: 
> Recorded connection 0x6744e0 for stonithd/21696
>
> Mar 16 07:05:25 vmstore1 stonithd: [21696]: info: crm_new_peer: Node 
> 352430272 is now known as vmstore1
>
> Mar 16 07:05:25 vmstore1 stonithd: [21696]: notice: 
> /usr/lib64/heartbeat/stonithd start up successfully.
>
> Mar 16 07:05:25 vmstore1 stonithd: [21696]: info: 
> G_main_add_SignalHandler: Added signal handler for signal 17
>
> Mar 16 07:05:25 vmstore1 corosync[21689]: [pcmk ] info: pcmk_ipc: 
> Recorded connection 0x676380 for attrd/21700
>
> Mar 16 07:05:25 vmstore1 attrd: [21700]: info: Invoked: 
> /usr/lib64/heartbeat/attrd
>
> Mar 16 07:05:25 vmstore1 attrd: [21700]: info: main: Starting up
>
> Mar 16 07:05:25 vmstore1 attrd: [21700]: info: crm_cluster_connect: 
> Connecting to OpenAIS
>
> Mar 16 07:05:25 vmstore1 attrd: [21700]: info: init_ais_connection: 
> Creating connection to our AIS plugin
>
> Mar 16 07:05:25 vmstore1 attrd: [21700]: info: init_ais_connection: 
> AIS connection established
>
> Mar 16 07:05:25 vmstore1 attrd: [21700]: info: get_ais_nodeid: Server 
> details: id=352430272 uname=vmstore1 cname=pcmk
>
> Mar 16 07:05:25 vmstore1 attrd: [21700]: info: crm_new_peer: Node 
> vmstore1 now has id: 352430272
>
> Mar 16 07:05:25 vmstore1 attrd: [21700]: info: crm_new_peer: Node 
> 352430272 is now known as vmstore1
>
> Mar 16 07:05:25 vmstore1 attrd: [21700]: info: main: Cluster 
> connection active
>
> Mar 16 07:05:25 vmstore1 attrd: [21700]: info: main: Accepting 
> attribute updates
>
> Mar 16 07:05:25 vmstore1 attrd: [21700]: info: main: Starting mainloop...
>
> Mar 16 07:05:25 vmstore1 mgmtd: [21703]: info: 
> G_main_add_SignalHandler: Added signal handler for signal 10
>
> Mar 16 07:05:25 vmstore1 crmd: [21702]: info: Invoked: 
> /usr/lib64/heartbeat/crmd
>
> Mar 16 07:05:25 vmstore1 mgmtd: [21703]: info: 
> G_main_add_SignalHandler: Added signal handler for signal 12
>
> Mar 16 07:05:25 vmstore1 crmd: [21702]: info: main: CRM Hg Version: 
> d3fa20fc76c7947d6de66db7e52526dc6bd7d782
>
> Mar 16 07:05:25 vmstore1 mgmtd: [21703]: info: init_crm
>
> Mar 16 07:05:25 vmstore1 cib: [21697]: info: retrieveCib: Reading 
> cluster configuration from: /var/lib/heartbeat/crm/cib.xml (digest: 
> /var/lib/heartbeat/crm/cib.xml.sig)
>
> Mar 16 07:05:25 vmstore1 mgmtd: [21703]: info: login to cib: 0, ret:-10
>
> Mar 16 07:05:25 vmstore1 cib: [21697]: info: startCib: CIB 
> Initialization completed successfully
>
> Mar 16 07:05:25 vmstore1 cib: [21697]: info: crm_cluster_connect: 
> Connecting to OpenAIS
>
> Mar 16 07:05:25 vmstore1 corosync[21689]: [pcmk ] info: pcmk_ipc: 
> Recorded connection 0x676ff0 for cib/21697
>
> Mar 16 07:05:25 vmstore1 corosync[21689]: [pcmk ] info: pcmk_ipc: 
> Sending membership update 11508 to cib
>
> Mar 16 07:05:25 vmstore1 cib: [21697]: info: init_ais_connection: 
> Creating connection to our AIS plugin
>
> Mar 16 07:05:25 vmstore1 cib: [21697]: info: init_ais_connection: AIS 
> connection established
>
> Mar 16 07:05:25 vmstore1 cib: [21697]: info: get_ais_nodeid: Server 
> details: id=352430272 uname=vmstore1 cname=pcmk
>
> Mar 16 07:05:25 vmstore1 cib: [21697]: info: crm_new_peer: Node 
> vmstore1 now has id: 352430272
>
> Mar 16 07:05:25 vmstore1 cib: [21697]: info: crm_new_peer: Node 
> 352430272 is now known as vmstore1
>
> Mar 16 07:05:25 vmstore1 cib: [21697]: info: cib_init: Starting cib 
> mainloop
>
> Mar 16 07:05:25 vmstore1 cib: [21697]: info: ais_dispatch: Membership 
> 11508: quorum still lost
>
> Mar 16 07:05:25 vmstore1 cib: [21697]: info: crm_update_peer: Node 
> vmstore1: id=352430272 state=member (new) addr=r(0) ip(192.168.1.21) 
> (new) votes=1 (new) born=0 seen=11508 
> proc=00000000000000000000000000053312 (new)
>
> Mar 16 07:05:25 vmstore1 crmd: [21702]: info: crmd_init: Starting crmd
>
> Mar 16 07:05:25 vmstore1 crmd: [21702]: info: 
> G_main_add_SignalHandler: Added signal handler for signal 17
>
> Mar 16 07:05:25 vmstore1 corosync[21689]: [pcmk ] info: pcmk_ipc: 
> Recorded connection 0x677ce0 for crmd/21702
>
> Mar 16 07:05:25 vmstore1 crmd: [21702]: info: do_cib_control: CIB 
> connection established
>
> Mar 16 07:05:25 vmstore1 crmd: [21702]: info: crm_cluster_connect: 
> Connecting to OpenAIS
>
> Mar 16 07:05:25 vmstore1 crmd: [21702]: info: init_ais_connection: 
> Creating connection to our AIS plugin
>
> Mar 16 07:05:25 vmstore1 crmd: [21702]: info: init_ais_connection: AIS 
> connection established
>
> Mar 16 07:05:25 vmstore1 crmd: [21702]: info: get_ais_nodeid: Server 
> details: id=352430272 uname=vmstore1 cname=pcmk
>
> Mar 16 07:05:25 vmstore1 crmd: [21702]: info: crm_new_peer: Node 
> vmstore1 now has id: 352430272
>
> Mar 16 07:05:25 vmstore1 crmd: [21702]: info: crm_new_peer: Node 
> 352430272 is now known as vmstore1
>
> Mar 16 07:05:25 vmstore1 crmd: [21702]: info: do_ha_control: Connected 
> to the cluster
>
> Mar 16 07:05:25 vmstore1 crmd: [21702]: info: do_started: Delaying 
> start, CCM (0000000000100000) not connected
>
> Mar 16 07:05:25 vmstore1 crmd: [21702]: info: crmd_init: Starting 
> crmd's mainloop
>
> Mar 16 07:05:25 vmstore1 crmd: [21702]: info: config_query_callback: 
> Checking for expired actions every 900000ms
>
> Mar 16 07:05:25 vmstore1 crmd: [21702]: info: ais_dispatch: Membership 
> 11508: quorum still lost
>
> Mar 16 07:05:25 vmstore1 crmd: [21702]: info: crm_update_peer: Node 
> vmstore1: id=352430272 state=member (new) addr=r(0) ip(192.168.1.21) 
> (new) votes=1 (new) born=0 seen=11508 
> proc=00000000000000000000000000053312 (new)
>
> Mar 16 07:05:25 vmstore1 corosync[21689]: [pcmk ] info: pcmk_ipc: 
> Sending membership update 11508 to crmd
>
> Mar 16 07:05:25 vmstore1 corosync[21689]: [pcmk ] info: 
> update_expected_votes: Expected quorum votes 1024 -> 2
>
> Mar 16 07:05:25 vmstore1 crmd: [21702]: info: do_started: The local 
> CRM is operational
>
> Mar 16 07:05:25 vmstore1 crmd: [21702]: info: do_state_transition: 
> State transition S_STARTING -> S_PENDING [ input=I_PENDING 
> cause=C_FSA_INTERNAL origin=do_started ]
>
> Mar 16 07:05:25 vmstore1 cib: [21708]: info: write_cib_contents: 
> Archived previous version as /var/lib/heartbeat/crm/cib-2.raw
>
> Mar 16 07:05:25 vmstore1 cib: [21708]: info: write_cib_contents: Wrote 
> version 0.100.0 of the CIB to disk (digest: 
> eda82ffa55c5172f7b86363fd9031e6c)
>
> Mar 16 07:05:25 vmstore1 cib: [21708]: info: retrieveCib: Reading 
> cluster configuration from: /var/lib/heartbeat/crm/cib.yYxNE3 (digest: 
> /var/lib/heartbeat/crm/cib.WDLFgi)
>
> Mar 16 07:05:25 vmstore1 corosync[21689]: [pcmk ] notice: 
> pcmk_peer_update: Transitional membership event on ring 11512: memb=1, 
> new=0, lost=0
>
> Mar 16 07:05:25 vmstore1 corosync[21689]: [pcmk ] info: 
> pcmk_peer_update: memb: vmstore1 352430272
>
> Mar 16 07:05:25 vmstore1 corosync[21689]: [pcmk ] notice: 
> pcmk_peer_update: Stable membership event on ring 11512: memb=2, 
> new=1, lost=0
>
> Mar 16 07:05:25 vmstore1 corosync[21689]: [pcmk ] info: update_member: 
> Creating entry for node 369207488 born on 11512
>
> Mar 16 07:05:25 vmstore1 corosync[21689]: [pcmk ] info: update_member: 
> Node 369207488/unknown is now: member
>
> Mar 16 07:05:25 vmstore1 corosync[21689]: [pcmk ] info: 
> pcmk_peer_update: NEW: .pending. 369207488
>
> Mar 16 07:05:25 vmstore1 corosync[21689]: [pcmk ] info: 
> pcmk_peer_update: MEMB: vmstore1 352430272
>
> Mar 16 07:05:25 vmstore1 corosync[21689]: [pcmk ] info: 
> pcmk_peer_update: MEMB: .pending. 369207488
>
> Mar 16 07:05:25 vmstore1 corosync[21689]: [pcmk ] info: 
> send_member_notification: Sending membership update 11512 to 2 children
>
> Mar 16 07:05:25 vmstore1 corosync[21689]: [pcmk ] info: update_member: 
> 0x63d2a0 Node 352430272 ((null)) born on: 11512
>
> Mar 16 07:05:25 vmstore1 corosync[21689]: [TOTEM ] A processor joined 
> or left the membership and a new membership was formed.
>
> Mar 16 07:05:25 vmstore1 cib: [21697]: notice: ais_dispatch: 
> Membership 11512: quorum acquired
>
> Mar 16 07:05:25 vmstore1 cib: [21697]: info: crm_new_peer: Node <null> 
> now has id: 369207488
>
> Mar 16 07:05:25 vmstore1 cib: [21697]: info: crm_update_peer: Node 
> (null): id=369207488 state=member (new) addr=r(0) ip(192.168.1.22) 
> votes=0 born=0 seen=11512 proc=00000000000000000000000000000000
>
> Mar 16 07:05:25 vmstore1 corosync[21689]: [pcmk ] info: update_member: 
> 0x6399b0 Node 369207488 (vmstore2) born on: 11492
>
> Mar 16 07:05:25 vmstore1 corosync[21689]: [pcmk ] info: update_member: 
> 0x6399b0 Node 369207488 now known as vmstore2 (was: (null))
>
> Mar 16 07:05:25 vmstore1 corosync[21689]: [pcmk ] info: update_member: 
> Node vmstore2 now has process list: 00000000000000000000000000053312 
> (340754)
>
> Mar 16 07:05:25 vmstore1 corosync[21689]: [pcmk ] info: update_member: 
> Node vmstore2 now has 1 quorum votes (was 0)
>
> Mar 16 07:05:25 vmstore1 corosync[21689]: [pcmk ] info: 
> send_member_notification: Sending membership update 11512 to 2 children
>
> Mar 16 07:05:25 vmstore1 cib: [21697]: info: ais_dispatch: Membership 
> 11512: quorum retained
>
> Mar 16 07:05:25 vmstore1 cib: [21697]: info: crm_get_peer: Node 
> 369207488 is now known as vmstore2
>
> Mar 16 07:05:25 vmstore1 cib: [21697]: info: crm_update_peer: Node 
> vmstore2: id=369207488 state=member addr=r(0) ip(192.168.1.22) votes=1 
> (new) born=11492 seen=11512 proc=00000000000000000000000000053312 (new)
>
> Mar 16 07:05:25 vmstore1 cib: [21697]: info: cib_process_diff: Diff 
> 0.101.1 -> 0.101.2 not applied to 0.100.0: current "epoch" is less 
> than required
>
> Mar 16 07:05:25 vmstore1 cib: [21697]: info: cib_server_process_diff: 
> Requesting re-sync from peer
>
> Mar 16 07:05:25 vmstore1 cib: [21697]: WARN: cib_diff_notify: 
> Local-only Change (client:crmd, call: 114): 0.0.0 (Application of an 
> update diff failed, requesting a full refresh)
>
> Mar 16 07:05:25 vmstore1 cib: [21697]: WARN: cib_server_process_diff: 
> Not applying diff 0.101.2 -> 0.101.3 (sync in progress)
>
> Mar 16 07:05:25 vmstore1 cib: [21697]: WARN: cib_server_process_diff: 
> Not applying diff 0.101.3 -> 0.101.4 (sync in progress)
>
> Mar 16 07:05:25 vmstore1 cib: [21697]: WARN: cib_server_process_diff: 
> Not applying diff 0.101.4 -> 0.102.1 (sync in progress)
>
> Mar 16 07:05:25 vmstore1 cib: [21697]: info: cib_replace_notify: 
> Replaced: 0.100.0 -> 0.102.1 from vmstore2
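(Side note on the "not applied to 0.100.0: current "epoch" is less than required" warning above: CIB versions are the three fields admin_epoch.epoch.num_updates, compared field by field, and a diff only applies if the local CIB is exactly at the diff's starting version. A minimal sketch of that check, with hypothetical helper names rather than Pacemaker's actual code:)

```python
# Hypothetical illustration of the CIB version check behind the
# "current 'epoch' is less than required" message. Versions are
# "admin_epoch.epoch.num_updates" and compare lexicographically.

def parse_cib_version(s):
    """Parse 'admin_epoch.epoch.num_updates' into a comparable tuple."""
    return tuple(int(part) for part in s.split("."))

current = parse_cib_version("0.100.0")    # this node's on-disk CIB
diff_from = parse_cib_version("0.101.1")  # version the incoming diff starts from

# The diff is only applicable if the local CIB matches its starting
# version; here the local copy is behind, so the node requests a
# full re-sync from its peer instead (as seen in the log).
print(current == diff_from)   # False
print(current < diff_from)    # True: local CIB is older
```

(That is why the subsequent 0.101.x diffs are skipped with "sync in progress" and the whole CIB is then replaced with 0.102.1 from vmstore2.)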
>
> Mar 16 07:05:25 vmstore1 cib: [21710]: info: write_cib_contents: 
> Archived previous version as /var/lib/heartbeat/crm/cib-3.raw
>
> Mar 16 07:05:25 vmstore1 cib: [21710]: info: write_cib_contents: Wrote 
> version 0.102.0 of the CIB to disk (digest: 
> d8b60f5febb5d2520c7391a739ff58d0)
>
> Mar 16 07:05:25 vmstore1 cib: [21710]: info: retrieveCib: Reading 
> cluster configuration from: /var/lib/heartbeat/crm/cib.1BUB03 (digest: 
> /var/lib/heartbeat/crm/cib.VWmiYi)
>
> Mar 16 07:05:25 vmstore1 corosync[21689]: [MAIN ] Completed service 
> synchronization, ready to provide service.
>
> Mar 16 07:05:26 vmstore1 crmd: [21702]: info: ais_dispatch: Membership 
> 11508: quorum still lost
>
> Mar 16 07:05:26 vmstore1 crmd: [21702]: notice: ais_dispatch: 
> Membership 11512: quorum acquired
>
> Mar 16 07:05:26 vmstore1 crmd: [21702]: info: crm_new_peer: Node 
> <null> now has id: 369207488
>
> Mar 16 07:05:26 vmstore1 crmd: [21702]: info: crm_update_peer: Node 
> (null): id=369207488 state=member (new) addr=r(0) ip(192.168.1.22) 
> votes=0 born=0 seen=11512 proc=00000000000000000000000000000000
>
> Mar 16 07:05:26 vmstore1 crmd: [21702]: info: ais_dispatch: Membership 
> 11512: quorum retained
>
> Mar 16 07:05:26 vmstore1 crmd: [21702]: info: crm_get_peer: Node 
> 369207488 is now known as vmstore2
>
> Mar 16 07:05:26 vmstore1 crmd: [21702]: info: crm_update_peer: Node 
> vmstore2: id=369207488 state=member addr=r(0) ip(192.168.1.22) votes=1 
> (new) born=11492 seen=11512 proc=00000000000000000000000000053312 (new)
>
> Mar 16 07:05:26 vmstore1 crmd: [21702]: info: update_dc: Set DC to 
> vmstore2 (3.0.1)
>
> Mar 16 07:05:26 vmstore1 attrd: [21700]: info: find_hash_entry: 
> Creating hash entry for terminate
>
> Mar 16 07:05:26 vmstore1 crmd: [21702]: info: update_attrd: Connecting 
> to attrd...
>
> Mar 16 07:05:26 vmstore1 attrd: [21700]: info: find_hash_entry: 
> Creating hash entry for shutdown
>
> Mar 16 07:05:26 vmstore1 crmd: [21702]: info: do_state_transition: 
> State transition S_PENDING -> S_NOT_DC [ input=I_NOT_DC 
> cause=C_HA_MESSAGE origin=do_cl_join_finalize_respond ]
>
> Mar 16 07:05:26 vmstore1 attrd: [21700]: info: attrd_local_callback: 
> Sending full refresh (origin=crmd)
>
> Mar 16 07:05:26 vmstore1 attrd: [21700]: info: attrd_trigger_update: 
> Sending flush op to all hosts for: shutdown (<null>)
>
> Mar 16 07:05:26 vmstore1 attrd: [21700]: info: attrd_perform_update: 
> Delaying operation shutdown=<null>: cib not connected
>
> Mar 16 07:05:26 vmstore1 attrd: [21700]: info: attrd_trigger_update: 
> Sending flush op to all hosts for: terminate (<null>)
>
> Mar 16 07:05:26 vmstore1 attrd: [21700]: info: attrd_perform_update: 
> Delaying operation terminate=<null>: cib not connected
>
> Mar 16 07:05:26 vmstore1 attrd: [21700]: info: attrd_perform_update: 
> Delaying operation shutdown=<null>: cib not connected
>
> Mar 16 07:05:26 vmstore1 attrd: [21700]: info: attrd_perform_update: 
> Delaying operation terminate=<null>: cib not connected
>
> Mar 16 07:05:26 vmstore1 crmd: [21702]: info: erase_xpath_callback: 
> Deletion of "//node_state[@uname='vmstore1']/transient_attributes": ok 
> (rc=0)
>
> Mar 16 07:05:26 vmstore1 crmd: [21702]: info: erase_xpath_callback: 
> Deletion of "//node_state[@uname='vmstore1']/lrm": ok (rc=0)
>
> Mar 16 07:05:26 vmstore1 attrd: [21700]: info: crm_new_peer: Node 
> vmstore2 now has id: 369207488
>
> Mar 16 07:05:26 vmstore1 attrd: [21700]: info: crm_new_peer: Node 
> 369207488 is now known as vmstore2
>
> Mar 16 07:05:26 vmstore1 attrd: [21700]: info: attrd_perform_update: 
> Delaying operation shutdown=<null>: cib not connected
>
> Mar 16 07:05:26 vmstore1 attrd: [21700]: info: find_hash_entry: 
> Creating hash entry for probe_complete
>
> Mar 16 07:05:26 vmstore1 attrd: [21700]: info: attrd_trigger_update: 
> Sending flush op to all hosts for: probe_complete (true)
>
> Mar 16 07:05:26 vmstore1 attrd: [21700]: info: attrd_perform_update: 
> Delaying operation probe_complete=true: cib not connected
>
> Mar 16 07:05:26 vmstore1 attrd: [21700]: info: attrd_perform_update: 
> Delaying operation terminate=<null>: cib not connected
>
> Mar 16 07:05:26 vmstore1 attrd: [21700]: info: attrd_perform_update: 
> Delaying operation probe_complete=true: cib not connected
>
> Mar 16 07:05:27 vmstore1 mgmtd: [21703]: debug: main: run the loop...
>
> Mar 16 07:05:27 vmstore1 mgmtd: [21703]: info: Started.
>
> Mar 16 07:05:30 vmstore1 attrd: [21700]: info: cib_connect: Connected 
> to the CIB after 1 signon attempts
>
> Mar 16 07:05:30 vmstore1 attrd: [21700]: info: cib_connect: Sending 
> full refresh
>
> Mar 16 07:05:30 vmstore1 attrd: [21700]: info: attrd_trigger_update: 
> Sending flush op to all hosts for: shutdown (<null>)
>
> Mar 16 07:05:30 vmstore1 attrd: [21700]: info: attrd_trigger_update: 
> Sending flush op to all hosts for: terminate (<null>)
>
> Mar 16 07:05:30 vmstore1 attrd: [21700]: info: attrd_trigger_update: 
> Sending flush op to all hosts for: probe_complete (true)
>
> Mar 16 07:05:30 vmstore1 attrd: [21700]: info: attrd_perform_update: 
> Sent update 6: probe_complete=true
>
> Mar 16 07:05:43 vmstore1 cibadmin: [21715]: info: Invoked: cibadmin -C 
> -o resources -x resourceIP.xml
>
> Mar 16 07:05:55 vmstore1 cibadmin: [21716]: info: Invoked: cibadmin -Q ~
>
>
>
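Since the thread is about resources not being written: the create near the end of the log (`cibadmin -C -o resources -x resourceIP.xml`) only takes effect if the file contains a valid resource definition. The actual contents of resourceIP.xml are not shown in the log; a typical ocf:heartbeat:IPaddr2 primitive for this CIB schema would look roughly like the following (the id, address, and monitor interval are assumptions, not taken from the poster's config):

```xml
<primitive id="resourceIP" class="ocf" provider="heartbeat" type="IPaddr2">
  <instance_attributes id="resourceIP-instance_attributes">
    <nvpair id="resourceIP-ip" name="ip" value="192.168.1.100"/>
  </instance_attributes>
  <operations>
    <op id="resourceIP-monitor" name="monitor" interval="10s"/>
  </operations>
</primitive>
```

If the create succeeded, `cibadmin -Q -o resources` should show the primitive on both nodes once the CIB re-sync seen above has completed.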
More information about the Pacemaker mailing list