Apr 14 20:33:47 mgraid-S000030311-1 heartbeat: [16607]: debug: add_option(keepalive,500ms)
Apr 14 20:33:47 mgraid-S000030311-1 heartbeat: [16607]: debug: add_option(deadtime,2000ms)
Apr 14 20:33:47 mgraid-S000030311-1 heartbeat: [16607]: debug: add_option(warntime,1001ms)
Apr 14 20:33:47 mgraid-S000030311-1 heartbeat: [16607]: debug: add_option(initdead,3)
Apr 14 20:33:47 mgraid-S000030311-1 heartbeat: [16607]: debug: add_option(bcast,bond0)
Apr 14 20:33:47 mgraid-S000030311-1 heartbeat: [16607]: debug: add_option(udpport,694)
Apr 14 20:33:47 mgraid-S000030311-1 heartbeat: [16607]: debug: add_option(auto_failback,on)
Apr 14 20:33:47 mgraid-S000030311-1 heartbeat: [16607]: debug: add_option(node,mgraid-S000030311-0)
Apr 14 20:33:47 mgraid-S000030311-1 heartbeat: [16607]: debug: add_option(node,mgraid-S000030311-1)
Apr 14 20:33:47 mgraid-S000030311-1 heartbeat: [16607]: debug: uid=root, gid=<null>
Apr 14 20:33:47 mgraid-S000030311-1 heartbeat: [16607]: debug: uid=root, gid=<null>
Apr 14 20:33:47 mgraid-S000030311-1 heartbeat: [16607]: debug: uid=<null>, gid=root
Apr 14 20:33:47 mgraid-S000030311-1 heartbeat: [16607]: debug: uid=root, gid=<null>
Apr 14 20:33:47 mgraid-S000030311-1 heartbeat: [16607]: debug: uid=<null>, gid=root
Apr 14 20:33:47 mgraid-S000030311-1 heartbeat: [16607]: debug: Beginning authentication parsing
Apr 14 20:33:47 mgraid-S000030311-1 heartbeat: [16607]: debug: 16 max authentication methods
Apr 14 20:33:47 mgraid-S000030311-1 heartbeat: [16607]: debug: Keyfile opened
Apr 14 20:33:47 mgraid-S000030311-1 heartbeat: [16607]: debug: Keyfile perms OK
Apr 14 20:33:47 mgraid-S000030311-1 heartbeat: [16607]: debug: 16 max authentication methods
Apr 14 20:33:47 mgraid-S000030311-1 heartbeat: [16607]: debug: Found authentication method [sha1]
Apr 14 20:33:47 mgraid-S000030311-1 heartbeat: [16607]: info: AUTH: i=2: key = 0x717bd0, auth=0x7f1b106cd2c0, authname=sha1
Apr 14 20:33:47 mgraid-S000030311-1 heartbeat: [16607]: debug: Outbound signing method is 2
Apr 14 20:33:47 mgraid-S000030311-1 heartbeat: [16607]: debug: Authentication parsing complete [1]
Apr 14 20:33:47 mgraid-S000030311-1 heartbeat: [16607]: debug: add_option(cluster,linux-ha)
Apr 14 20:33:47 mgraid-S000030311-1 heartbeat: [16607]: debug: add_option(hopfudge,1)
Apr 14 20:33:47 mgraid-S000030311-1 heartbeat: [16607]: debug: add_option(baud,19200)
Apr 14 20:33:47 mgraid-S000030311-1 heartbeat: [16607]: debug: add_option(hbgenmethod,file)
Apr 14 20:33:47 mgraid-S000030311-1 heartbeat: [16607]: debug: add_option(realtime,true)
Apr 14 20:33:47 mgraid-S000030311-1 heartbeat: [16607]: debug: add_option(msgfmt,classic)
Apr 14 20:33:47 mgraid-S000030311-1 heartbeat: [16607]: debug: add_option(conn_logd_time,60)
Apr 14 20:33:47 mgraid-S000030311-1 heartbeat: [16607]: debug: add_option(log_badpack,true)
Apr 14 20:33:47 mgraid-S000030311-1 heartbeat: [16607]: debug: add_option(syslogmsgfmt,true)
Apr 14 20:33:47 mgraid-S000030311-1 heartbeat: [16607]: debug: add_option(coredumps,true)
Apr 14 20:33:47 mgraid-S000030311-1 heartbeat: [16607]: debug: add_option(autojoin,none)
Apr 14 20:33:47 mgraid-S000030311-1 heartbeat: [16607]: debug: add_option(uuidfrom,file)
Apr 14 20:33:47 mgraid-S000030311-1 heartbeat: [16607]: debug: add_option(compression,zlib)
Apr 14 20:33:47 mgraid-S000030311-1 heartbeat: [16607]: debug: add_option(compression_threshold,2)
Apr 14 20:33:47 mgraid-S000030311-1 heartbeat: [16607]: debug: add_option(traditional_compression,no)
Apr 14 20:33:47 mgraid-S000030311-1 heartbeat: [16607]: debug: add_option(max_rexmit_delay,250)
Apr 14 20:33:47 mgraid-S000030311-1 heartbeat: [16607]: debug: Setting max_rexmit_delay to 250 ms
Apr 14 20:33:47 mgraid-S000030311-1 heartbeat: [16607]: debug: add_option(record_config_changes,on)
Apr 14 20:33:47 mgraid-S000030311-1 heartbeat: [16607]: debug: add_option(record_pengine_inputs,on)
Apr 14 20:33:47 mgraid-S000030311-1 heartbeat: [16607]: debug: add_option(enable_config_writes,on)
Apr 14 20:33:47 mgraid-S000030311-1 heartbeat: [16607]: debug: add_option(memreserve,6500)
Apr 14 20:33:47 mgraid-S000030311-1 heartbeat: [16607]: WARN: Initial dead time [3000 ms] may be too small!
Apr 14 20:33:47 mgraid-S000030311-1 heartbeat: [16607]: info: Initial dead time accounts for slow network startup time
Apr 14 20:33:47 mgraid-S000030311-1 heartbeat: [16607]: info: It should be >= deadtime and >= 10 seconds
Apr 14 20:33:47 mgraid-S000030311-1 heartbeat: [16607]: WARN: Logging daemon is disabled --enabling logging daemon is recommended
Apr 14 20:33:47 mgraid-S000030311-1 heartbeat: [16607]: info: **************************
Apr 14 20:33:47 mgraid-S000030311-1 heartbeat: [16607]: info: Configuration validated. Starting heartbeat 3.0.2
Apr 14 20:33:47 mgraid-S000030311-1 heartbeat: [16607]: debug: HA configuration OK.  Heartbeat starting.
Apr 14 20:33:47 mgraid-S000030311-1 heartbeat: [16607]: info: Heartbeat Hg Version: node: 645cec2ec68eb0cd41aa12ce282a23df45885561
Apr 14 20:33:47 mgraid-S000030311-1 heartbeat: [16608]: info: heartbeat: version 3.0.2
Apr 14 20:33:47 mgraid-S000030311-1 heartbeat: [16608]: info: Heartbeat generation: 1302837504
Apr 14 20:33:47 mgraid-S000030311-1 heartbeat: [16608]: debug: uuid is:856c1f72-7cd1-4906-8183-8be87eef96f2
Apr 14 20:33:47 mgraid-S000030311-1 heartbeat: [16608]: debug: FIFO process pid: 16614
Apr 14 20:33:47 mgraid-S000030311-1 heartbeat: [16608]: debug: opening bcast bond0 (UDP/IP broadcast)
Apr 14 20:33:47 mgraid-S000030311-1 heartbeat: [16608]: debug: glib: SO_BINDTODEVICE(r) set for device bond0
Apr 14 20:33:47 mgraid-S000030311-1 heartbeat: [16614]: info: Stack hogger failed 0xffffffff
Apr 14 20:33:47 mgraid-S000030311-1 heartbeat: [16608]: info: glib: UDP Broadcast heartbeat started on port 694 (694) interface bond0
Apr 14 20:33:47 mgraid-S000030311-1 heartbeat: [16608]: debug: write process pid: 16615
Apr 14 20:33:47 mgraid-S000030311-1 heartbeat: [16608]: debug: read child process pid: 16616
Apr 14 20:33:47 mgraid-S000030311-1 heartbeat: [16615]: info: Stack hogger failed 0xffffffff
Apr 14 20:33:47 mgraid-S000030311-1 heartbeat: [16608]: info: glib: UDP Broadcast heartbeat closed on port 694 interface bond0 - Status: 1
Apr 14 20:33:47 mgraid-S000030311-1 heartbeat: [16608]: debug: make_io_childpair: CREATED childpair wchan socket 8
Apr 14 20:33:47 mgraid-S000030311-1 heartbeat: [16608]: debug: make_io_childpair: CREATED childpair rchan socket 10
Apr 14 20:33:47 mgraid-S000030311-1 heartbeat: [16608]: info: G_main_add_TriggerHandler: Added signal manual handler
Apr 14 20:33:47 mgraid-S000030311-1 heartbeat: [16608]: info: G_main_add_TriggerHandler: Added signal manual handler
Apr 14 20:33:47 mgraid-S000030311-1 heartbeat: [16608]: info: G_main_add_SignalHandler: Added signal handler for signal 17
Apr 14 20:33:47 mgraid-S000030311-1 heartbeat: [16608]: debug: Limiting CPU: 42 CPU seconds every 60000 milliseconds
Apr 14 20:33:47 mgraid-S000030311-1 heartbeat: [16614]: debug: pid 16614 locked in memory.
Apr 14 20:33:47 mgraid-S000030311-1 heartbeat: [16614]: debug: Limiting CPU: 6 CPU seconds every 60000 milliseconds
Apr 14 20:33:47 mgraid-S000030311-1 heartbeat: [16615]: debug: pid 16615 locked in memory.
Apr 14 20:33:47 mgraid-S000030311-1 heartbeat: [16615]: debug: Limiting CPU: 24 CPU seconds every 60000 milliseconds
Apr 14 20:33:47 mgraid-S000030311-1 heartbeat: [16608]: info: Stack hogger failed 0xffffffff
Apr 14 20:33:47 mgraid-S000030311-1 heartbeat: [16608]: debug: pid 16608 locked in memory.
Apr 14 20:33:47 mgraid-S000030311-1 heartbeat: [16608]: debug: Waiting for child processes to start
Apr 14 20:33:47 mgraid-S000030311-1 heartbeat: [16608]: info: Local status now set to: 'up'
Apr 14 20:33:47 mgraid-S000030311-1 heartbeat: [16608]: debug: All your child process are belong to us
Apr 14 20:33:47 mgraid-S000030311-1 heartbeat: [16608]: debug: Starting local status message @ 500 ms intervals
Apr 14 20:33:47 mgraid-S000030311-1 heartbeat: [16608]: debug: Forking temp process write_hostcachedata
Apr 14 20:33:47 mgraid-S000030311-1 heartbeat: [16608]: info: Managed write_hostcachedata process 16621 exited with return code 0.
Apr 14 20:33:48 mgraid-S000030311-1 heartbeat: [16616]: info: Stack hogger failed 0xffffffff
Apr 14 20:33:48 mgraid-S000030311-1 heartbeat: [16616]: debug: pid 16616 locked in memory.
Apr 14 20:33:48 mgraid-S000030311-1 heartbeat: [16616]: debug: Limiting CPU: 6 CPU seconds every 60000 milliseconds
Apr 14 20:33:48 mgraid-S000030311-1 heartbeat: [16608]: info: Link mgraid-s000030311-1:bond0 up.
Apr 14 20:33:49 mgraid-S000030311-1 heartbeat: [16608]: info: Link mgraid-s000030311-0:bond0 up.
Apr 14 20:33:49 mgraid-S000030311-1 heartbeat: [16608]: debug: sending reqnodes msg to node mgraid-s000030311-0
Apr 14 20:33:49 mgraid-S000030311-1 heartbeat: [16608]: info: Status update for node mgraid-s000030311-0: status up
Apr 14 20:33:49 mgraid-S000030311-1 heartbeat: [16608]: debug: Status seqno: 2 msgtime: 1302838429
Apr 14 20:33:49 mgraid-S000030311-1 heartbeat: [16608]: debug: Forking temp process write_hostcachedata
Apr 14 20:33:49 mgraid-S000030311-1 heartbeat: [16608]: info: Managed write_hostcachedata process 16626 exited with return code 0.
Apr 14 20:33:50 mgraid-S000030311-1 heartbeat: [16608]: debug: Get a reqnodes message from mgraid-s000030311-0
Apr 14 20:33:50 mgraid-S000030311-1 heartbeat: [16608]: debug: get_delnodelist: delnodelist= 
Apr 14 20:33:50 mgraid-S000030311-1 heartbeat: [16608]: debug: Get a repnodes msg from mgraid-s000030311-0
Apr 14 20:33:50 mgraid-S000030311-1 heartbeat: [16608]: debug: nodelist received:mgraid-s000030311-0 mgraid-s000030311-1 
Apr 14 20:33:50 mgraid-S000030311-1 heartbeat: [16608]: info: Comm_now_up(): updating status to active
Apr 14 20:33:50 mgraid-S000030311-1 heartbeat: [16608]: info: Local status now set to: 'active'
Apr 14 20:33:50 mgraid-S000030311-1 heartbeat: [16608]: info: Starting child client "/lib64/heartbeat/ccm" (0,0)
Apr 14 20:33:50 mgraid-S000030311-1 heartbeat: [16608]: info: Starting child client "/lib64/heartbeat/cib" (0,0)
Apr 14 20:33:50 mgraid-S000030311-1 heartbeat: [16630]: info: Starting "/lib64/heartbeat/ccm" as uid 0  gid 0 (pid 16630)
Apr 14 20:33:50 mgraid-S000030311-1 heartbeat: [16608]: info: Starting child client "/lib64/heartbeat/lrmd -r" (0,0)
Apr 14 20:33:50 mgraid-S000030311-1 heartbeat: [16631]: info: Starting "/lib64/heartbeat/cib" as uid 0  gid 0 (pid 16631)
Apr 14 20:33:50 mgraid-S000030311-1 heartbeat: [16608]: info: Starting child client "/lib64/heartbeat/stonithd" (0,0)
Apr 14 20:33:50 mgraid-S000030311-1 heartbeat: [16632]: info: Starting "/lib64/heartbeat/lrmd -r" as uid 0  gid 0 (pid 16632)
Apr 14 20:33:50 mgraid-S000030311-1 heartbeat: [16608]: info: Starting child client "/lib64/heartbeat/attrd" (0,0)
Apr 14 20:33:50 mgraid-S000030311-1 heartbeat: [16608]: info: Starting child client "/lib64/heartbeat/crmd" (0,0)
Apr 14 20:33:50 mgraid-S000030311-1 heartbeat: [16633]: info: Starting "/lib64/heartbeat/stonithd" as uid 0  gid 0 (pid 16633)
Apr 14 20:33:50 mgraid-S000030311-1 heartbeat: [16634]: info: Starting "/lib64/heartbeat/attrd" as uid 0  gid 0 (pid 16634)
Apr 14 20:33:50 mgraid-S000030311-1 heartbeat: [16635]: info: Starting "/lib64/heartbeat/crmd" as uid 0  gid 0 (pid 16635)
Apr 14 20:33:50 mgraid-S000030311-1 heartbeat: [16608]: debug: Forking temp process write_hostcachedata
Apr 14 20:33:50 mgraid-S000030311-1 heartbeat: [16608]: debug: Forking temp process write_delcachedata
Apr 14 20:33:50 mgraid-S000030311-1 heartbeat: [16608]: info: Managed write_hostcachedata process 16636 exited with return code 0.
Apr 14 20:33:50 mgraid-S000030311-1 ccm: [16630]: debug: Signing in with Heartbeat
Apr 14 20:33:50 mgraid-S000030311-1 heartbeat: [16608]: debug: APIregistration_dispatch() {
Apr 14 20:33:50 mgraid-S000030311-1 heartbeat: [16608]: debug: process_registerevent() {
Apr 14 20:33:50 mgraid-S000030311-1 heartbeat: [16608]: debug: client->gsource = 0x726090
Apr 14 20:33:50 mgraid-S000030311-1 heartbeat: [16608]: debug: }/*process_registerevent*/;
Apr 14 20:33:50 mgraid-S000030311-1 heartbeat: [16608]: debug: }/*APIregistration_dispatch*/;
Apr 14 20:33:50 mgraid-S000030311-1 heartbeat: [16608]: debug: Checking client authorization for client ccm (0:0)
Apr 14 20:33:50 mgraid-S000030311-1 heartbeat: [16608]: debug: create_seq_snapshot_table:no missing packets found for node mgraid-s000030311-0
Apr 14 20:33:50 mgraid-S000030311-1 heartbeat: [16608]: debug: create_seq_snapshot_table:no missing packets found for node mgraid-s000030311-1
Apr 14 20:33:50 mgraid-S000030311-1 heartbeat: [16608]: debug: Signing on API client 16630 (ccm)
Apr 14 20:33:50 mgraid-S000030311-1 heartbeat: [16608]: info: Managed write_delcachedata process 16637 exited with return code 0.
Apr 14 20:33:50 mgraid-S000030311-1 ccm: [16630]: info: Hostname: mgraid-s000030311-1
Apr 14 20:33:50 mgraid-S000030311-1 lrmd: [16632]: info: G_main_add_SignalHandler: Added signal handler for signal 15
Apr 14 20:33:50 mgraid-S000030311-1 stonithd: [16633]: info: G_main_add_SignalHandler: Added signal handler for signal 10
Apr 14 20:33:50 mgraid-S000030311-1 stonithd: [16633]: info: G_main_add_SignalHandler: Added signal handler for signal 12
Apr 14 20:33:50 mgraid-S000030311-1 stonithd: [16633]: info: Stack hogger failed 0xffffffff
Apr 14 20:33:50 mgraid-S000030311-1 cib: [16631]: info: Invoked: /lib64/heartbeat/cib 
Apr 14 20:33:50 mgraid-S000030311-1 stonithd: [16633]: debug: pid 16633 locked in memory.
Apr 14 20:33:50 mgraid-S000030311-1 cib: [16631]: info: G_main_add_TriggerHandler: Added signal manual handler
Apr 14 20:33:50 mgraid-S000030311-1 stonithd: [16633]: info: crm_cluster_connect: Connecting to Heartbeat
Apr 14 20:33:50 mgraid-S000030311-1 cib: [16631]: info: G_main_add_SignalHandler: Added signal handler for signal 17
Apr 14 20:33:50 mgraid-S000030311-1 attrd: [16634]: info: Invoked: /lib64/heartbeat/attrd 
Apr 14 20:33:50 mgraid-S000030311-1 crmd: [16635]: info: Invoked: /lib64/heartbeat/crmd 
Apr 14 20:33:50 mgraid-S000030311-1 heartbeat: [16608]: debug: APIregistration_dispatch() {
Apr 14 20:33:50 mgraid-S000030311-1 heartbeat: [16608]: debug: process_registerevent() {
Apr 14 20:33:50 mgraid-S000030311-1 heartbeat: [16608]: debug: client->gsource = 0x7256c0
Apr 14 20:33:50 mgraid-S000030311-1 crmd: [16635]: info: main: CRM Hg Version: 89bd754939df5150de7cd76835f98fe90851b677
Apr 14 20:33:50 mgraid-S000030311-1 attrd: [16634]: info: main: Starting up
Apr 14 20:33:50 mgraid-S000030311-1 heartbeat: [16608]: debug: }/*process_registerevent*/;
Apr 14 20:33:50 mgraid-S000030311-1 lrmd: [16632]: info: G_main_add_SignalHandler: Added signal handler for signal 17
Apr 14 20:33:50 mgraid-S000030311-1 cib: [16631]: info: retrieveCib: Reading cluster configuration from: /var/lib/heartbeat/crm/cib.xml (digest: /var/lib/heartbeat/crm/cib.xml.sig)
Apr 14 20:33:50 mgraid-S000030311-1 heartbeat: [16608]: debug: }/*APIregistration_dispatch*/;
Apr 14 20:33:50 mgraid-S000030311-1 attrd: [16634]: info: crm_cluster_connect: Connecting to Heartbeat
Apr 14 20:33:50 mgraid-S000030311-1 lrmd: [16632]: debug: Enabling coredumps
Apr 14 20:33:50 mgraid-S000030311-1 heartbeat: [16608]: debug: Checking client authorization for client stonithd (0:0)
Apr 14 20:33:50 mgraid-S000030311-1 cib: [16631]: WARN: retrieveCib: Cluster configuration not found: /var/lib/heartbeat/crm/cib.xml
Apr 14 20:33:50 mgraid-S000030311-1 attrd: [16634]: debug: register_heartbeat_conn: Signing in with Heartbeat
Apr 14 20:33:50 mgraid-S000030311-1 crmd: [16635]: info: crmd_init: Starting crmd
Apr 14 20:33:50 mgraid-S000030311-1 heartbeat: [16608]: debug: create_seq_snapshot_table:no missing packets found for node mgraid-s000030311-0
Apr 14 20:33:50 mgraid-S000030311-1 lrmd: [16632]: info: G_main_add_SignalHandler: Added signal handler for signal 10
Apr 14 20:33:50 mgraid-S000030311-1 heartbeat: [16608]: debug: create_seq_snapshot_table:no missing packets found for node mgraid-s000030311-1
Apr 14 20:33:50 mgraid-S000030311-1 heartbeat: [16608]: debug: Signing on API client 16633 (stonithd)
Apr 14 20:33:50 mgraid-S000030311-1 crmd: [16635]: debug: s_crmd_fsa: Processing I_STARTUP: [ state=S_STARTING cause=C_STARTUP origin=crmd_init ]
Apr 14 20:33:50 mgraid-S000030311-1 lrmd: [16632]: info: G_main_add_SignalHandler: Added signal handler for signal 12
Apr 14 20:33:50 mgraid-S000030311-1 cib: [16631]: WARN: readCibXmlFile: Primary configuration corrupt or unusable, trying backup...
Apr 14 20:33:50 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_LOG   
Apr 14 20:33:50 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_STARTUP
Apr 14 20:33:50 mgraid-S000030311-1 crmd: [16635]: debug: do_startup: Registering Signal Handlers
Apr 14 20:33:50 mgraid-S000030311-1 lrmd: [16632]: debug: main: run the loop...
Apr 14 20:33:50 mgraid-S000030311-1 heartbeat: [16608]: debug: APIregistration_dispatch() {
Apr 14 20:33:50 mgraid-S000030311-1 stonithd: [16633]: info: register_heartbeat_conn: Hostname: mgraid-s000030311-1
Apr 14 20:33:50 mgraid-S000030311-1 heartbeat: [16608]: debug: process_registerevent() {
Apr 14 20:33:50 mgraid-S000030311-1 cib: [16631]: debug: get_last_sequence: Series file /var/lib/heartbeat/crm/cib.last does not exist
Apr 14 20:33:50 mgraid-S000030311-1 crmd: [16635]: debug: do_startup: Creating CIB and LRM objects
Apr 14 20:33:50 mgraid-S000030311-1 lrmd: [16632]: info: Started.
Apr 14 20:33:50 mgraid-S000030311-1 heartbeat: [16608]: debug: client->gsource = 0x729b90
Apr 14 20:33:50 mgraid-S000030311-1 stonithd: [16633]: info: register_heartbeat_conn: UUID: 856c1f72-7cd1-4906-8183-8be87eef96f2
Apr 14 20:33:50 mgraid-S000030311-1 heartbeat: [16608]: debug: }/*process_registerevent*/;
Apr 14 20:33:50 mgraid-S000030311-1 crmd: [16635]: info: G_main_add_SignalHandler: Added signal handler for signal 17
Apr 14 20:33:50 mgraid-S000030311-1 cib: [16631]: debug: readCibXmlFile: Backup file /var/lib/heartbeat/crm/cib-99.raw not found
Apr 14 20:33:50 mgraid-S000030311-1 heartbeat: [16608]: debug: }/*APIregistration_dispatch*/;
Apr 14 20:33:50 mgraid-S000030311-1 stonithd: [16633]: debug: Setting message filter mode
Apr 14 20:33:50 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_CIB_START
Apr 14 20:33:50 mgraid-S000030311-1 heartbeat: [16608]: debug: Checking client authorization for client attrd (0:0)
Apr 14 20:33:50 mgraid-S000030311-1 cib: [16631]: WARN: readCibXmlFile: Continuing with an empty configuration.
Apr 14 20:33:50 mgraid-S000030311-1 crmd: [16635]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/cib_rw
Apr 14 20:33:50 mgraid-S000030311-1 heartbeat: [16608]: debug: create_seq_snapshot_table:no missing packets found for node mgraid-s000030311-0
Apr 14 20:33:50 mgraid-S000030311-1 cib: [16631]: debug: log_data_element: readCibXmlFile: [on-disk] <cib epoch="0" num_updates="0" admin_epoch="0" validate-with="pacemaker-1.0" >
Apr 14 20:33:50 mgraid-S000030311-1 heartbeat: [16608]: debug: create_seq_snapshot_table:no missing packets found for node mgraid-s000030311-1
Apr 14 20:33:50 mgraid-S000030311-1 heartbeat: [16608]: debug: Signing on API client 16634 (attrd)
Apr 14 20:33:50 mgraid-S000030311-1 cib: [16631]: debug: log_data_element: readCibXmlFile: [on-disk]   <configuration >
Apr 14 20:33:50 mgraid-S000030311-1 crmd: [16635]: debug: init_client_ipc_comms_nodispatch: Could not init comms on: /var/run/crm/cib_rw
Apr 14 20:33:50 mgraid-S000030311-1 cib: [16631]: debug: log_data_element: readCibXmlFile: [on-disk]     <crm_config />
Apr 14 20:33:50 mgraid-S000030311-1 cib: [16631]: debug: log_data_element: readCibXmlFile: [on-disk]     <nodes />
Apr 14 20:33:50 mgraid-S000030311-1 crmd: [16635]: debug: cib_native_signon_raw: Connection to command channel failed
Apr 14 20:33:50 mgraid-S000030311-1 cib: [16631]: debug: log_data_element: readCibXmlFile: [on-disk]     <resources />
Apr 14 20:33:50 mgraid-S000030311-1 crmd: [16635]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/cib_callback
Apr 14 20:33:50 mgraid-S000030311-1 cib: [16631]: debug: log_data_element: readCibXmlFile: [on-disk]     <constraints />
Apr 14 20:33:50 mgraid-S000030311-1 stonithd: [16633]: debug: apichan=0x6182f0
Apr 14 20:33:50 mgraid-S000030311-1 attrd: [16634]: info: register_heartbeat_conn: Hostname: mgraid-s000030311-1
Apr 14 20:33:50 mgraid-S000030311-1 crmd: [16635]: debug: init_client_ipc_comms_nodispatch: Could not init comms on: /var/run/crm/cib_callback
Apr 14 20:33:50 mgraid-S000030311-1 cib: [16631]: debug: log_data_element: readCibXmlFile: [on-disk]   </configuration>
Apr 14 20:33:50 mgraid-S000030311-1 attrd: [16634]: info: register_heartbeat_conn: UUID: 856c1f72-7cd1-4906-8183-8be87eef96f2
Apr 14 20:33:50 mgraid-S000030311-1 crmd: [16635]: debug: cib_native_signon_raw: Connection to callback channel failed
Apr 14 20:33:50 mgraid-S000030311-1 cib: [16631]: debug: log_data_element: readCibXmlFile: [on-disk]   <status />
Apr 14 20:33:50 mgraid-S000030311-1 stonithd: [16633]: debug: callback_chan=0x617f60
Apr 14 20:33:50 mgraid-S000030311-1 crmd: [16635]: debug: cib_native_signon_raw: Connection to CIB failed: connection failed
Apr 14 20:33:50 mgraid-S000030311-1 attrd: [16634]: info: main: Cluster connection active
Apr 14 20:33:50 mgraid-S000030311-1 cib: [16631]: debug: log_data_element: readCibXmlFile: [on-disk] </cib>
Apr 14 20:33:50 mgraid-S000030311-1 crmd: [16635]: debug: cib_native_signoff: Signing out of the CIB Service
Apr 14 20:33:50 mgraid-S000030311-1 stonithd: [16633]: notice: /lib64/heartbeat/stonithd start up successfully.
Apr 14 20:33:50 mgraid-S000030311-1 stonithd: [16633]: info: G_main_add_SignalHandler: Added signal handler for signal 17
Apr 14 20:33:50 mgraid-S000030311-1 attrd: [16634]: info: main: Accepting attribute updates
Apr 14 20:33:50 mgraid-S000030311-1 attrd: [16634]: info: main: Starting mainloop...
Apr 14 20:33:50 mgraid-S000030311-1 cib: [16631]: debug: activateCibXml: Triggering CIB write for start op
Apr 14 20:33:50 mgraid-S000030311-1 cib: [16631]: info: startCib: CIB Initialization completed successfully
Apr 14 20:33:50 mgraid-S000030311-1 cib: [16631]: info: crm_cluster_connect: Connecting to Heartbeat
Apr 14 20:33:50 mgraid-S000030311-1 cib: [16631]: debug: register_heartbeat_conn: Signing in with Heartbeat
Apr 14 20:33:50 mgraid-S000030311-1 heartbeat: [16608]: debug: APIregistration_dispatch() {
Apr 14 20:33:50 mgraid-S000030311-1 heartbeat: [16608]: debug: process_registerevent() {
Apr 14 20:33:50 mgraid-S000030311-1 heartbeat: [16608]: debug: client->gsource = 0x7253d0
Apr 14 20:33:50 mgraid-S000030311-1 heartbeat: [16608]: debug: }/*process_registerevent*/;
Apr 14 20:33:50 mgraid-S000030311-1 heartbeat: [16608]: debug: }/*APIregistration_dispatch*/;
Apr 14 20:33:50 mgraid-S000030311-1 heartbeat: [16608]: debug: Checking client authorization for client cib (0:0)
Apr 14 20:33:50 mgraid-S000030311-1 heartbeat: [16608]: debug: create_seq_snapshot_table:no missing packets found for node mgraid-s000030311-0
Apr 14 20:33:50 mgraid-S000030311-1 heartbeat: [16608]: debug: create_seq_snapshot_table:no missing packets found for node mgraid-s000030311-1
Apr 14 20:33:50 mgraid-S000030311-1 heartbeat: [16608]: debug: Signing on API client 16631 (cib)
Apr 14 20:33:50 mgraid-S000030311-1 cib: [16631]: info: register_heartbeat_conn: Hostname: mgraid-s000030311-1
Apr 14 20:33:50 mgraid-S000030311-1 cib: [16631]: info: register_heartbeat_conn: UUID: 856c1f72-7cd1-4906-8183-8be87eef96f2
Apr 14 20:33:50 mgraid-S000030311-1 cib: [16631]: info: ccm_connect: Registering with CCM...
Apr 14 20:33:50 mgraid-S000030311-1 cib: [16631]: WARN: ccm_connect: CCM Activation failed
Apr 14 20:33:50 mgraid-S000030311-1 cib: [16631]: WARN: ccm_connect: CCM Connection failed 1 times (30 max)
Apr 14 20:33:50 mgraid-S000030311-1 heartbeat: [16608]: info: Status update for node mgraid-s000030311-0: status active
Apr 14 20:33:50 mgraid-S000030311-1 heartbeat: [16608]: debug: Status seqno: 9 msgtime: 1302838430
Apr 14 20:33:50 mgraid-S000030311-1 ccm: [16630]: debug: node state CCM_STATE_NONE -> CCM_STATE_NONE
Apr 14 20:33:50 mgraid-S000030311-1 ccm: [16630]: debug: node state CCM_STATE_NONE -> CCM_STATE_NONE
Apr 14 20:33:50 mgraid-S000030311-1 ccm: [16630]: info: G_main_add_SignalHandler: Added signal handler for signal 15
Apr 14 20:33:51 mgraid-S000030311-1 crmd: [16635]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/cib_rw
Apr 14 20:33:51 mgraid-S000030311-1 crmd: [16635]: debug: init_client_ipc_comms_nodispatch: Could not init comms on: /var/run/crm/cib_rw
Apr 14 20:33:51 mgraid-S000030311-1 crmd: [16635]: debug: cib_native_signon_raw: Connection to command channel failed
Apr 14 20:33:51 mgraid-S000030311-1 crmd: [16635]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/cib_callback
Apr 14 20:33:51 mgraid-S000030311-1 crmd: [16635]: debug: init_client_ipc_comms_nodispatch: Could not init comms on: /var/run/crm/cib_callback
Apr 14 20:33:51 mgraid-S000030311-1 crmd: [16635]: debug: cib_native_signon_raw: Connection to callback channel failed
Apr 14 20:33:51 mgraid-S000030311-1 crmd: [16635]: debug: cib_native_signon_raw: Connection to CIB failed: connection failed
Apr 14 20:33:51 mgraid-S000030311-1 crmd: [16635]: debug: cib_native_signoff: Signing out of the CIB Service
Apr 14 20:33:51 mgraid-S000030311-1 crmd: [16635]: info: do_cib_control: Could not connect to the CIB service: connection failed
Apr 14 20:33:51 mgraid-S000030311-1 crmd: [16635]: WARN: do_cib_control: Couldn't complete CIB registration 1 times... pause and retry
Apr 14 20:33:51 mgraid-S000030311-1 crmd: [16635]: debug: crm_timer_start: Started Wait Timer (I_NULL:2000ms), src=5
Apr 14 20:33:51 mgraid-S000030311-1 crmd: [16635]: debug: register_fsa_input_adv: Stalling the FSA pending further input: cause=C_FSA_INTERNAL
Apr 14 20:33:51 mgraid-S000030311-1 crmd: [16635]: debug: s_crmd_fsa: Exiting the FSA: queue=0, fsa_actions=0x180021000000006, stalled=true
Apr 14 20:33:51 mgraid-S000030311-1 crmd: [16635]: info: crmd_init: Starting crmd's mainloop
Apr 14 20:33:51 mgraid-S000030311-1 ccm: [16630]: debug: recv msg hbapi-clstat from mgraid-s000030311-1, status:join
Apr 14 20:33:52 mgraid-S000030311-1 ccm: [16630]: debug: recv msg status from mgraid-s000030311-0, status:active
Apr 14 20:33:52 mgraid-S000030311-1 ccm: [16630]: debug: status of node mgraid-s000030311-0: up -> active
Apr 14 20:33:53 mgraid-S000030311-1 ccm: [16630]: debug: recv msg hbapi-clstat from mgraid-s000030311-0, status:join
Apr 14 20:33:53 mgraid-S000030311-1 ccm: [16630]: debug: recv msg CCM_TYPE_PROTOVERSION from mgraid-s000030311-0, status:[null ptr]
Apr 14 20:33:53 mgraid-S000030311-1 ccm: [16630]: debug: send msg CCM_TYPE_PROTOVERSION to cluster, status:[null]
Apr 14 20:33:53 mgraid-S000030311-1 ccm: [16630]: debug: node state CCM_STATE_NONE -> CCM_STATE_VERSION_REQUEST
Apr 14 20:33:53 mgraid-S000030311-1 ccm: [16630]: debug: recv msg CCM_TYPE_PROTOVERSION from mgraid-s000030311-1, status:[null ptr]
Apr 14 20:33:53 mgraid-S000030311-1 ccm: [16630]: debug: No quorum selected,using default quorum plugin(majority:twonodes)
Apr 14 20:33:53 mgraid-S000030311-1 ccm: [16630]: debug: quorum plugin: majority
Apr 14 20:33:53 mgraid-S000030311-1 ccm: [16630]: debug: cluster:linux-ha, member_count=1, member_quorum_votes=100
Apr 14 20:33:53 mgraid-S000030311-1 ccm: [16630]: debug: total_node_count=2, total_quorum_votes=200
Apr 14 20:33:53 mgraid-S000030311-1 ccm: [16630]: debug: quorum plugin: twonodes
Apr 14 20:33:53 mgraid-S000030311-1 ccm: [16630]: debug: cluster:linux-ha, member_count=1, member_quorum_votes=100
Apr 14 20:33:53 mgraid-S000030311-1 ccm: [16630]: debug: total_node_count=2, total_quorum_votes=200
Apr 14 20:33:53 mgraid-S000030311-1 ccm: [16630]: info: Break tie for 2 nodes cluster
Apr 14 20:33:53 mgraid-S000030311-1 ccm: [16630]: debug: node state CCM_STATE_VERSION_REQUEST -> CCM_STATE_JOINED
Apr 14 20:33:53 mgraid-S000030311-1 ccm: [16630]: debug: dump current membership 0x7f87776bc010
Apr 14 20:33:53 mgraid-S000030311-1 ccm: [16630]: debug: 	leader=mgraid-s000030311-1
Apr 14 20:33:53 mgraid-S000030311-1 ccm: [16630]: debug: 	transition=1
Apr 14 20:33:53 mgraid-S000030311-1 ccm: [16630]: debug: 	status=CCM_STATE_JOINED
Apr 14 20:33:53 mgraid-S000030311-1 ccm: [16630]: debug: 	has_quorum=1
Apr 14 20:33:53 mgraid-S000030311-1 ccm: [16630]: debug: 	nodename=mgraid-s000030311-1 bornon=1
Apr 14 20:33:53 mgraid-S000030311-1 ccm: [16630]: debug: quorum is 1
Apr 14 20:33:53 mgraid-S000030311-1 ccm: [16630]: debug: delivering new membership to 0 clients: 
Apr 14 20:33:53 mgraid-S000030311-1 ccm: [16630]: debug: send msg CCM_TYPE_PROTOVERSION_RESP to mgraid-s000030311-0, status:[null]
Apr 14 20:33:53 mgraid-S000030311-1 crmd: [16635]: info: crm_timer_popped: Wait Timer (I_NULL) just popped!
Apr 14 20:33:53 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_CIB_START
Apr 14 20:33:53 mgraid-S000030311-1 crmd: [16635]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/cib_rw
Apr 14 20:33:53 mgraid-S000030311-1 crmd: [16635]: debug: init_client_ipc_comms_nodispatch: Could not init comms on: /var/run/crm/cib_rw
Apr 14 20:33:53 mgraid-S000030311-1 crmd: [16635]: debug: cib_native_signon_raw: Connection to command channel failed
Apr 14 20:33:53 mgraid-S000030311-1 crmd: [16635]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/cib_callback
Apr 14 20:33:53 mgraid-S000030311-1 crmd: [16635]: debug: init_client_ipc_comms_nodispatch: Could not init comms on: /var/run/crm/cib_callback
Apr 14 20:33:53 mgraid-S000030311-1 crmd: [16635]: debug: cib_native_signon_raw: Connection to callback channel failed
Apr 14 20:33:53 mgraid-S000030311-1 crmd: [16635]: debug: cib_native_signon_raw: Connection to CIB failed: connection failed
Apr 14 20:33:53 mgraid-S000030311-1 crmd: [16635]: debug: cib_native_signoff: Signing out of the CIB Service
Apr 14 20:33:53 mgraid-S000030311-1 cib: [16631]: info: ccm_connect: Registering with CCM...
Apr 14 20:33:53 mgraid-S000030311-1 cib: [16631]: debug: ccm_connect: CCM Activation passed... all set to go!
Apr 14 20:33:53 mgraid-S000030311-1 cib: [16631]: info: cib_init: Requesting the list of configured nodes
Apr 14 20:33:53 mgraid-S000030311-1 ccm: [16630]: WARN: ccm_state_joined: received message with unknown cookie, just dropping
Apr 14 20:33:53 mgraid-S000030311-1 ccm: [16630]: debug: dump current membership 0x7f87776bc010
Apr 14 20:33:53 mgraid-S000030311-1 ccm: [16630]: debug: 	leader=mgraid-s000030311-1
Apr 14 20:33:53 mgraid-S000030311-1 ccm: [16630]: debug: 	transition=1
Apr 14 20:33:53 mgraid-S000030311-1 ccm: [16630]: debug: 	status=CCM_STATE_JOINED
Apr 14 20:33:53 mgraid-S000030311-1 ccm: [16630]: debug: 	has_quorum=1
Apr 14 20:33:53 mgraid-S000030311-1 ccm: [16630]: debug: 	nodename=mgraid-s000030311-1 bornon=1
Apr 14 20:33:53 mgraid-S000030311-1 cib: [16631]: debug: Delaying cstatus request for 174 ms
Apr 14 20:33:54 mgraid-S000030311-1 cib: [16631]: info: cib_init: Starting cib mainloop
Apr 14 20:33:54 mgraid-S000030311-1 cib: [16631]: info: cib_client_status_callback: Status update: Client mgraid-s000030311-1/cib now has status [join]
Apr 14 20:33:54 mgraid-S000030311-1 cib: [16631]: debug: crm_new_peer: Creating entry for node mgraid-s000030311-1/0
Apr 14 20:33:54 mgraid-S000030311-1 cib: [16631]: info: crm_new_peer: Node 0 is now known as mgraid-s000030311-1
Apr 14 20:33:54 mgraid-S000030311-1 cib: [16631]: info: crm_update_peer_proc: mgraid-s000030311-1.cib is now online
Apr 14 20:33:54 mgraid-S000030311-1 cib: [16631]: info: mem_handle_event: Got an event OC_EV_MS_NEW_MEMBERSHIP from ccm
Apr 14 20:33:54 mgraid-S000030311-1 cib: [16631]: info: mem_handle_event: instance=1, nodes=1, new=1, lost=0, n_idx=0, new_idx=0, old_idx=3
Apr 14 20:33:54 mgraid-S000030311-1 cib: [16631]: info: cib_ccm_msg_callback: Processing CCM event=NEW MEMBERSHIP (id=1)
Apr 14 20:33:54 mgraid-S000030311-1 cib: [16631]: info: crm_get_peer: Node mgraid-s000030311-1 now has id: 1
Apr 14 20:33:54 mgraid-S000030311-1 cib: [16631]: info: crm_update_peer: Node mgraid-s000030311-1: id=1 state=member (new) addr=(null) votes=-1 born=1 seen=1 proc=00000000000000000000000000000100
Apr 14 20:33:54 mgraid-S000030311-1 cib: [16631]: info: crm_update_peer_proc: mgraid-s000030311-1.ais is now online
Apr 14 20:33:54 mgraid-S000030311-1 cib: [16631]: info: crm_update_peer_proc: mgraid-s000030311-1.crmd is now online
Apr 14 20:33:54 mgraid-S000030311-1 cib: [16631]: info: cib_client_status_callback: Status update: Client mgraid-s000030311-0/cib now has status [join]
Apr 14 20:33:54 mgraid-S000030311-1 ccm: [16630]: debug: recv msg CCM_TYPE_ALIVE from mgraid-s000030311-0, status:[null ptr]
Apr 14 20:33:54 mgraid-S000030311-1 ccm: [16630]: debug: quorum plugin: majority
Apr 14 20:33:54 mgraid-S000030311-1 ccm: [16630]: debug: cluster:linux-ha, member_count=2, member_quorum_votes=200
Apr 14 20:33:54 mgraid-S000030311-1 ccm: [16630]: debug: total_node_count=2, total_quorum_votes=200
Apr 14 20:33:54 mgraid-S000030311-1 ccm: [16630]: debug: send msg CCM_TYPE_MEM_LIST to cluster, status:[null]
Apr 14 20:33:54 mgraid-S000030311-1 cib: [16631]: debug: crm_new_peer: Creating entry for node mgraid-s000030311-0/0
Apr 14 20:33:54 mgraid-S000030311-1 ccm: [16630]: debug: dump current membership 0x7f87776bc010
Apr 14 20:33:54 mgraid-S000030311-1 cib: [16631]: info: crm_new_peer: Node 0 is now known as mgraid-s000030311-0
Apr 14 20:33:54 mgraid-S000030311-1 ccm: [16630]: debug: 	leader=mgraid-s000030311-1
Apr 14 20:33:54 mgraid-S000030311-1 ccm: [16630]: debug: 	transition=2
Apr 14 20:33:54 mgraid-S000030311-1 cib: [16631]: info: crm_update_peer_proc: mgraid-s000030311-0.cib is now online
Apr 14 20:33:54 mgraid-S000030311-1 cib: [16631]: info: cib_client_status_callback: Status update: Client mgraid-s000030311-1/cib now has status [online]
Apr 14 20:33:54 mgraid-S000030311-1 cib: [16631]: debug: Forking temp process write_cib_contents
Apr 14 20:33:54 mgraid-S000030311-1 ccm: [16630]: debug: 	status=CCM_STATE_JOINED
Apr 14 20:33:54 mgraid-S000030311-1 ccm: [16630]: debug: 	has_quorum=1
Apr 14 20:33:54 mgraid-S000030311-1 ccm: [16630]: debug: 	nodename=mgraid-s000030311-1 bornon=1
Apr 14 20:33:54 mgraid-S000030311-1 ccm: [16630]: debug: 	nodename=mgraid-s000030311-0 bornon=2
Apr 14 20:33:54 mgraid-S000030311-1 ccm: [16630]: debug: quorum is 1
Apr 14 20:33:54 mgraid-S000030311-1 ccm: [16630]: debug: delivering new membership to 1 clients: 
Apr 14 20:33:54 mgraid-S000030311-1 ccm: [16630]: debug: client: pid =16631
Apr 14 20:33:54 mgraid-S000030311-1 ccm: [16630]: debug: recv msg CCM_TYPE_MEM_LIST from mgraid-s000030311-1, status:[null ptr]
Apr 14 20:33:54 mgraid-S000030311-1 ccm: [16630]: WARN: ccm_state_joined: received message with unknown cookie, just dropping
Apr 14 20:33:54 mgraid-S000030311-1 cib: [16650]: info: write_cib_contents: Wrote version 0.0.0 of the CIB to disk (digest: bbdd9581117d23fb21730393cadf92ed)
Apr 14 20:33:54 mgraid-S000030311-1 cib: [16631]: info: mem_handle_event: Got an event OC_EV_MS_INVALID from ccm
Apr 14 20:33:54 mgraid-S000030311-1 ccm: [16630]: debug: dump current membership 0x7f87776bc010
Apr 14 20:33:54 mgraid-S000030311-1 ccm: [16630]: debug: 	leader=mgraid-s000030311-1
Apr 14 20:33:54 mgraid-S000030311-1 ccm: [16630]: debug: 	transition=2
Apr 14 20:33:54 mgraid-S000030311-1 ccm: [16630]: debug: 	status=CCM_STATE_JOINED
Apr 14 20:33:54 mgraid-S000030311-1 ccm: [16630]: debug: 	has_quorum=1
Apr 14 20:33:54 mgraid-S000030311-1 ccm: [16630]: debug: 	nodename=mgraid-s000030311-1 bornon=1
Apr 14 20:33:54 mgraid-S000030311-1 ccm: [16630]: debug: 	nodename=mgraid-s000030311-0 bornon=2
Apr 14 20:33:54 mgraid-S000030311-1 cib: [16631]: info: mem_handle_event: no mbr_track info
Apr 14 20:33:54 mgraid-S000030311-1 cib: [16631]: info: mem_handle_event: Got an event OC_EV_MS_NEW_MEMBERSHIP from ccm
Apr 14 20:33:54 mgraid-S000030311-1 cib: [16631]: info: mem_handle_event: instance=2, nodes=2, new=1, lost=0, n_idx=0, new_idx=2, old_idx=4
Apr 14 20:33:54 mgraid-S000030311-1 cib: [16631]: info: cib_ccm_msg_callback: Processing CCM event=NEW MEMBERSHIP (id=2)
Apr 14 20:33:54 mgraid-S000030311-1 cib: [16631]: info: crm_update_peer: Node mgraid-s000030311-0: id=0 state=member (new) addr=(null) votes=-1 born=2 seen=2 proc=00000000000000000000000000000100
Apr 14 20:33:54 mgraid-S000030311-1 cib: [16631]: info: crm_update_peer_proc: mgraid-s000030311-0.ais is now online
Apr 14 20:33:54 mgraid-S000030311-1 cib: [16631]: info: crm_update_peer_proc: mgraid-s000030311-0.crmd is now online
Apr 14 20:33:54 mgraid-S000030311-1 cib: [16650]: info: retrieveCib: Reading cluster configuration from: /var/lib/heartbeat/crm/cib.uAIHac (digest: /var/lib/heartbeat/crm/cib.85lTQH)
Apr 14 20:33:54 mgraid-S000030311-1 cib: [16631]: info: Managed write_cib_contents process 16650 exited with return code 0.
Apr 14 20:33:54 mgraid-S000030311-1 crmd: [16635]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/cib_rw
Apr 14 20:33:54 mgraid-S000030311-1 crmd: [16635]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/cib_callback
Apr 14 20:33:54 mgraid-S000030311-1 crmd: [16635]: debug: cib_native_signon_raw: Connection to CIB successful
Apr 14 20:33:54 mgraid-S000030311-1 cib: [16631]: debug: cib_common_callback_worker: Setting cib_refresh_notify callbacks for 16635 (97bb3c20-704e-41a8-adac-15d5e8afb30d): on
Apr 14 20:33:54 mgraid-S000030311-1 crmd: [16635]: info: do_cib_control: CIB connection established
Apr 14 20:33:54 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_HA_CONNECT
Apr 14 20:33:54 mgraid-S000030311-1 crmd: [16635]: info: crm_cluster_connect: Connecting to Heartbeat
Apr 14 20:33:54 mgraid-S000030311-1 crmd: [16635]: debug: register_heartbeat_conn: Signing in with Heartbeat
Apr 14 20:33:54 mgraid-S000030311-1 heartbeat: [16608]: debug: APIregistration_dispatch() {
Apr 14 20:33:54 mgraid-S000030311-1 heartbeat: [16608]: debug: process_registerevent() {
Apr 14 20:33:54 mgraid-S000030311-1 heartbeat: [16608]: debug: client->gsource = 0x721900
Apr 14 20:33:54 mgraid-S000030311-1 heartbeat: [16608]: debug: }/*process_registerevent*/;
Apr 14 20:33:54 mgraid-S000030311-1 heartbeat: [16608]: debug: }/*APIregistration_dispatch*/;
Apr 14 20:33:54 mgraid-S000030311-1 heartbeat: [16608]: debug: Checking client authorization for client crmd (0:0)
Apr 14 20:33:54 mgraid-S000030311-1 heartbeat: [16608]: debug: create_seq_snapshot_table:no missing packets found for node mgraid-s000030311-0
Apr 14 20:33:54 mgraid-S000030311-1 heartbeat: [16608]: debug: create_seq_snapshot_table:no missing packets found for node mgraid-s000030311-1
Apr 14 20:33:54 mgraid-S000030311-1 heartbeat: [16608]: debug: Signing on API client 16635 (crmd)
Apr 14 20:33:54 mgraid-S000030311-1 heartbeat: [16608]: WARN: 1 lost packet(s) for [mgraid-s000030311-0] [25:27]
Apr 14 20:33:54 mgraid-S000030311-1 heartbeat: [16608]: info: No pkts missing from mgraid-s000030311-0!
Apr 14 20:33:54 mgraid-S000030311-1 cib: [16631]: info: cib_client_status_callback: Status update: Client mgraid-s000030311-0/cib now has status [online]
Apr 14 20:33:54 mgraid-S000030311-1 crmd: [16635]: info: register_heartbeat_conn: Hostname: mgraid-s000030311-1
Apr 14 20:33:54 mgraid-S000030311-1 crmd: [16635]: info: register_heartbeat_conn: UUID: 856c1f72-7cd1-4906-8183-8be87eef96f2
Apr 14 20:33:54 mgraid-S000030311-1 crmd: [16635]: debug: Delaying cstatus request for 112 ms
Apr 14 20:33:55 mgraid-S000030311-1 crmd: [16635]: info: do_ha_control: Connected to the cluster
Apr 14 20:33:55 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_READCONFIG
Apr 14 20:33:55 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_LRM_CONNECT
Apr 14 20:33:55 mgraid-S000030311-1 crmd: [16635]: debug: do_lrm_control: Connecting to the LRM
Apr 14 20:33:55 mgraid-S000030311-1 lrmd: [16632]: debug: on_msg_register:client crmd [16635] registered
Apr 14 20:33:55 mgraid-S000030311-1 crmd: [16635]: debug: do_lrm_control: LRM connection established
Apr 14 20:33:55 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_CCM_CONNECT
Apr 14 20:33:55 mgraid-S000030311-1 crmd: [16635]: info: do_ccm_control: CCM connection established... waiting for first callback
Apr 14 20:33:55 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_STARTED
Apr 14 20:33:55 mgraid-S000030311-1 crmd: [16635]: info: do_started: Delaying start, CCM (0000000000100000) not connected
Apr 14 20:33:55 mgraid-S000030311-1 crmd: [16635]: debug: register_fsa_input_adv: Stalling the FSA pending further input: cause=C_FSA_INTERNAL
Apr 14 20:33:55 mgraid-S000030311-1 crmd: [16635]: debug: s_crmd_fsa: Exiting the FSA: queue=0, fsa_actions=0x2, stalled=true
Apr 14 20:33:55 mgraid-S000030311-1 crmd: [16635]: debug: fsa_dump_inputs: Added input: 0000000000000100 (R_CIB_CONNECTED)
Apr 14 20:33:55 mgraid-S000030311-1 crmd: [16635]: debug: fsa_dump_inputs: Added input: 0000000000000800 (R_LRM_CONNECTED)
Apr 14 20:33:55 mgraid-S000030311-1 crmd: [16635]: debug: config_query_callback: Call 3 : Parsing CIB options
Apr 14 20:33:55 mgraid-S000030311-1 crmd: [16635]: debug: cluster_option: Using default value 'none' for cluster option 'dc-version'
Apr 14 20:33:55 mgraid-S000030311-1 crmd: [16635]: debug: cluster_option: Using default value 'heartbeat' for cluster option 'cluster-infrastructure'
Apr 14 20:33:55 mgraid-S000030311-1 crmd: [16635]: debug: cluster_option: Using default value '60s' for cluster option 'dc-deadtime'
Apr 14 20:33:55 mgraid-S000030311-1 crmd: [16635]: debug: cluster_option: Using default value '15min' for cluster option 'cluster-recheck-interval'
Apr 14 20:33:55 mgraid-S000030311-1 crmd: [16635]: debug: cluster_option: Using default value '2min' for cluster option 'election-timeout'
Apr 14 20:33:55 mgraid-S000030311-1 crmd: [16635]: debug: cluster_option: Using default value '20min' for cluster option 'shutdown-escalation'
Apr 14 20:33:55 mgraid-S000030311-1 crmd: [16635]: debug: cluster_option: Using default value '3min' for cluster option 'crmd-integration-timeout'
Apr 14 20:33:55 mgraid-S000030311-1 crmd: [16635]: debug: cluster_option: Using default value '30min' for cluster option 'crmd-finalization-timeout'
Apr 14 20:33:55 mgraid-S000030311-1 crmd: [16635]: debug: cluster_option: Using default value '2' for cluster option 'expected-quorum-votes'
Apr 14 20:33:55 mgraid-S000030311-1 crmd: [16635]: info: config_query_callback: Checking for expired actions every 900000ms
Apr 14 20:33:55 mgraid-S000030311-1 crmd: [16635]: notice: crmd_client_status_callback: Status update: Client mgraid-s000030311-1/crmd now has status [online] (DC=false)
Apr 14 20:33:55 mgraid-S000030311-1 crmd: [16635]: debug: crm_new_peer: Creating entry for node mgraid-s000030311-1/0
Apr 14 20:33:55 mgraid-S000030311-1 crmd: [16635]: info: crm_new_peer: Node 0 is now known as mgraid-s000030311-1
Apr 14 20:33:55 mgraid-S000030311-1 crmd: [16635]: info: crm_update_peer_proc: mgraid-s000030311-1.crmd is now online
Apr 14 20:33:55 mgraid-S000030311-1 crmd: [16635]: info: crmd_client_status_callback: Not the DC
Apr 14 20:33:55 mgraid-S000030311-1 crmd: [16635]: notice: crmd_client_status_callback: Status update: Client mgraid-s000030311-0/crmd now has status [online] (DC=false)
Apr 14 20:33:55 mgraid-S000030311-1 crmd: [16635]: debug: crm_new_peer: Creating entry for node mgraid-s000030311-0/0
Apr 14 20:33:55 mgraid-S000030311-1 crmd: [16635]: info: crm_new_peer: Node 0 is now known as mgraid-s000030311-0
Apr 14 20:33:55 mgraid-S000030311-1 crmd: [16635]: info: crm_update_peer_proc: mgraid-s000030311-0.crmd is now online
Apr 14 20:33:55 mgraid-S000030311-1 crmd: [16635]: info: crmd_client_status_callback: Not the DC
Apr 14 20:33:55 mgraid-S000030311-1 crmd: [16635]: notice: crmd_client_status_callback: Status update: Client mgraid-s000030311-1/crmd now has status [online] (DC=false)
Apr 14 20:33:55 mgraid-S000030311-1 attrd: [16634]: debug: cib_connect: CIB signon attempt 1
Apr 14 20:33:55 mgraid-S000030311-1 attrd: [16634]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/cib_rw
Apr 14 20:33:55 mgraid-S000030311-1 attrd: [16634]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/cib_callback
Apr 14 20:33:55 mgraid-S000030311-1 attrd: [16634]: debug: cib_native_signon_raw: Connection to CIB successful
Apr 14 20:33:55 mgraid-S000030311-1 attrd: [16634]: info: cib_connect: Connected to the CIB after 1 signon attempts
Apr 14 20:33:55 mgraid-S000030311-1 attrd: [16634]: info: cib_connect: Sending full refresh
Apr 14 20:33:55 mgraid-S000030311-1 cib: [16631]: debug: cib_common_callback_worker: Setting cib_refresh_notify callbacks for 16634 (0771e179-c0b2-4824-ba9e-9e9f834f54d7): on
Apr 14 20:33:55 mgraid-S000030311-1 heartbeat: [16608]: WARN: 1 lost packet(s) for [mgraid-s000030311-0] [31:33]
Apr 14 20:33:55 mgraid-S000030311-1 heartbeat: [16608]: info: No pkts missing from mgraid-s000030311-0!
Apr 14 20:33:55 mgraid-S000030311-1 crmd: [16635]: info: crmd_client_status_callback: Not the DC
Apr 14 20:33:55 mgraid-S000030311-1 crmd: [16635]: notice: crmd_client_status_callback: Status update: Client mgraid-s000030311-0/crmd now has status [online] (DC=false)
Apr 14 20:33:55 mgraid-S000030311-1 crmd: [16635]: info: crmd_client_status_callback: Not the DC
Apr 14 20:33:55 mgraid-S000030311-1 crmd: [16635]: info: mem_handle_event: Got an event OC_EV_MS_NEW_MEMBERSHIP from ccm
Apr 14 20:33:55 mgraid-S000030311-1 crmd: [16635]: info: mem_handle_event: instance=2, nodes=2, new=2, lost=0, n_idx=0, new_idx=0, old_idx=4
Apr 14 20:33:55 mgraid-S000030311-1 crmd: [16635]: info: crmd_ccm_msg_callback: Quorum (re)attained after event=NEW MEMBERSHIP (id=2)
Apr 14 20:33:55 mgraid-S000030311-1 crmd: [16635]: info: ccm_event_detail: NEW MEMBERSHIP: trans=2, nodes=2, new=2, lost=0 n_idx=0, new_idx=0, old_idx=4
Apr 14 20:33:55 mgraid-S000030311-1 crmd: [16635]: info: ccm_event_detail: 	CURRENT: mgraid-s000030311-1 [nodeid=1, born=1]
Apr 14 20:33:55 mgraid-S000030311-1 crmd: [16635]: info: ccm_event_detail: 	CURRENT: mgraid-s000030311-0 [nodeid=0, born=2]
Apr 14 20:33:55 mgraid-S000030311-1 crmd: [16635]: info: ccm_event_detail: 	NEW:     mgraid-s000030311-1 [nodeid=1, born=1]
Apr 14 20:33:55 mgraid-S000030311-1 crmd: [16635]: info: ccm_event_detail: 	NEW:     mgraid-s000030311-0 [nodeid=0, born=2]
Apr 14 20:33:55 mgraid-S000030311-1 crmd: [16635]: info: crm_get_peer: Node mgraid-s000030311-1 now has id: 1
Apr 14 20:33:55 mgraid-S000030311-1 crmd: [16635]: info: crm_update_peer: Node mgraid-s000030311-1: id=1 state=member (new) addr=(null) votes=-1 born=1 seen=2 proc=00000000000000000000000000000200
Apr 14 20:33:55 mgraid-S000030311-1 crmd: [16635]: info: crm_update_peer_proc: mgraid-s000030311-1.ais is now online
Apr 14 20:33:55 mgraid-S000030311-1 crmd: [16635]: info: crm_update_peer: Node mgraid-s000030311-0: id=0 state=member (new) addr=(null) votes=-1 born=2 seen=2 proc=00000000000000000000000000000200
Apr 14 20:33:55 mgraid-S000030311-1 crmd: [16635]: info: crm_update_peer_proc: mgraid-s000030311-0.ais is now online
Apr 14 20:33:55 mgraid-S000030311-1 crmd: [16635]: debug: post_cache_update: Updated cache after membership event 2.
Apr 14 20:33:55 mgraid-S000030311-1 crmd: [16635]: debug: post_cache_update: post_cache_update added action A_ELECTION_CHECK to the FSA
Apr 14 20:33:55 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_STARTED
Apr 14 20:33:55 mgraid-S000030311-1 crmd: [16635]: debug: do_started: Init server comms
Apr 14 20:33:55 mgraid-S000030311-1 crmd: [16635]: info: do_started: The local CRM is operational
Apr 14 20:33:55 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_CHECK
Apr 14 20:33:55 mgraid-S000030311-1 crmd: [16635]: debug: do_election_check: Ignore election check: we not in an election
Apr 14 20:33:55 mgraid-S000030311-1 crmd: [16635]: debug: s_crmd_fsa: Processing I_PENDING: [ state=S_STARTING cause=C_FSA_INTERNAL origin=do_started ]
Apr 14 20:33:55 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_LOG   
Apr 14 20:33:55 mgraid-S000030311-1 crmd: [16635]: info: do_state_transition: State transition S_STARTING -> S_PENDING [ input=I_PENDING cause=C_FSA_INTERNAL origin=do_started ]
Apr 14 20:33:55 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_INTEGRATE_TIMER_STOP
Apr 14 20:33:55 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_FINALIZE_TIMER_STOP
Apr 14 20:33:55 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_CL_JOIN_QUERY
Apr 14 20:33:56 mgraid-S000030311-1 crmd: [16635]: debug: do_cl_join_query: Querying for a DC
Apr 14 20:33:56 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_START
Apr 14 20:33:56 mgraid-S000030311-1 crmd: [16635]: debug: crm_timer_start: Started Election Trigger (I_DC_TIMEOUT:60000ms), src=13
Apr 14 20:34:56 mgraid-S000030311-1 crmd: [16635]: info: crm_timer_popped: Election Trigger (I_DC_TIMEOUT) just popped!
Apr 14 20:34:56 mgraid-S000030311-1 crmd: [16635]: debug: s_crmd_fsa: Processing I_DC_TIMEOUT: [ state=S_PENDING cause=C_TIMER_POPPED origin=crm_timer_popped ]
Apr 14 20:34:56 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_WARN  
Apr 14 20:34:56 mgraid-S000030311-1 crmd: [16635]: WARN: do_log: FSA: Input I_DC_TIMEOUT from crm_timer_popped() received in state S_PENDING
Apr 14 20:34:56 mgraid-S000030311-1 crmd: [16635]: info: do_state_transition: State transition S_PENDING -> S_ELECTION [ input=I_DC_TIMEOUT cause=C_TIMER_POPPED origin=crm_timer_popped ]
Apr 14 20:34:56 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
Apr 14 20:34:56 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_INTEGRATE_TIMER_STOP
Apr 14 20:34:56 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_FINALIZE_TIMER_STOP
Apr 14 20:34:56 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_VOTE
Apr 14 20:34:56 mgraid-S000030311-1 crmd: [16635]: debug: do_election_vote: Started election 2
Apr 14 20:34:56 mgraid-S000030311-1 crmd: [16635]: debug: crm_timer_start: Started Election Timeout (I_ELECTION_DC:120000ms), src=14
Apr 14 20:34:56 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_COUNT
Apr 14 20:34:56 mgraid-S000030311-1 crmd: [16635]: debug: do_election_count_vote: Created voted hash
Apr 14 20:34:56 mgraid-S000030311-1 crmd: [16635]: debug: do_election_count_vote: Election 2 (current: 2, owner: 856c1f72-7cd1-4906-8183-8be87eef96f2): Processed vote from mgraid-s000030311-1 (Recorded)
Apr 14 20:34:56 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_CHECK
Apr 14 20:34:56 mgraid-S000030311-1 crmd: [16635]: debug: do_election_check: Still waiting on 1 non-votes (2 total)
Apr 14 20:34:57 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_COUNT
Apr 14 20:34:57 mgraid-S000030311-1 crmd: [16635]: info: do_election_count_vote: Election 2 (owner: f4e5e15c-d06b-4e37-89b9-4621af05128f) pass: vote from mgraid-s000030311-0 (Age)
Apr 14 20:34:57 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_CHECK
Apr 14 20:34:57 mgraid-S000030311-1 crmd: [16635]: debug: do_election_check: Still waiting on 2 non-votes (2 total)
Apr 14 20:34:57 mgraid-S000030311-1 crmd: [16635]: debug: s_crmd_fsa: Processing I_ELECTION: [ state=S_ELECTION cause=C_FSA_INTERNAL origin=do_election_count_vote ]
Apr 14 20:34:57 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_VOTE
Apr 14 20:34:57 mgraid-S000030311-1 crmd: [16635]: debug: do_election_vote: Started election 3
Apr 14 20:34:57 mgraid-S000030311-1 crmd: [16635]: debug: crm_timer_start: Election Timeout (I_ELECTION_DC:120000ms) already running: src=14
Apr 14 20:34:57 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_COUNT
Apr 14 20:34:57 mgraid-S000030311-1 crmd: [16635]: debug: do_election_count_vote: Created voted hash
Apr 14 20:34:57 mgraid-S000030311-1 crmd: [16635]: debug: do_election_count_vote: Election 3 (current: 3, owner: 856c1f72-7cd1-4906-8183-8be87eef96f2): Processed vote from mgraid-s000030311-1 (Recorded)
Apr 14 20:34:57 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_CHECK
Apr 14 20:34:57 mgraid-S000030311-1 crmd: [16635]: debug: do_election_check: Still waiting on 1 non-votes (2 total)
Apr 14 20:34:57 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_COUNT
Apr 14 20:34:57 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_CHECK
Apr 14 20:34:57 mgraid-S000030311-1 crmd: [16635]: debug: do_election_check: Still waiting on 1 non-votes (2 total)
Apr 14 20:34:58 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_COUNT
Apr 14 20:34:58 mgraid-S000030311-1 crmd: [16635]: debug: do_election_count_vote: Election 3 (current: 3, owner: 856c1f72-7cd1-4906-8183-8be87eef96f2): Processed no-vote from mgraid-s000030311-0 (Recorded)
Apr 14 20:34:58 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_CHECK
Apr 14 20:34:58 mgraid-S000030311-1 crmd: [16635]: debug: do_election_check: Destroying voted hash
Apr 14 20:34:58 mgraid-S000030311-1 crmd: [16635]: debug: s_crmd_fsa: Processing I_ELECTION_DC: [ state=S_ELECTION cause=C_FSA_INTERNAL origin=do_election_check ]
Apr 14 20:34:58 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_LOG   
Apr 14 20:34:58 mgraid-S000030311-1 crmd: [16635]: info: do_state_transition: State transition S_ELECTION -> S_INTEGRATION [ input=I_ELECTION_DC cause=C_FSA_INTERNAL origin=do_election_check ]
Apr 14 20:34:58 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_TE_START
Apr 14 20:34:58 mgraid-S000030311-1 crmd: [16635]: info: do_te_control: Registering TE UUID: 469a0e5c-f535-4c85-84b2-fd971ee76592
Apr 14 20:34:58 mgraid-S000030311-1 crmd: [16635]: WARN: cib_client_add_notify_callback: Callback already present
Apr 14 20:34:58 mgraid-S000030311-1 cib: [16631]: debug: cib_common_callback_worker: Setting cib_diff_notify callbacks for 16635 (97bb3c20-704e-41a8-adac-15d5e8afb30d): on
Apr 14 20:34:58 mgraid-S000030311-1 crmd: [16635]: info: set_graph_functions: Setting custom graph functions
Apr 14 20:34:58 mgraid-S000030311-1 crmd: [16635]: debug: do_te_control: Transitioner is now active
Apr 14 20:34:58 mgraid-S000030311-1 crmd: [16635]: info: unpack_graph: Unpacked transition -1: 0 actions in 0 synapses
Apr 14 20:34:58 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_PE_START
Apr 14 20:34:58 mgraid-S000030311-1 crmd: [16635]: info: start_subsystem: Starting sub-system "pengine"
Apr 14 20:34:58 mgraid-S000030311-1 crmd: [16986]: debug: start_subsystem: Executing "/lib64/heartbeat/pengine (pengine)" (pid 16986)
Apr 14 20:34:58 mgraid-S000030311-1 pengine: [16986]: info: Invoked: /lib64/heartbeat/pengine 
Apr 14 20:34:58 mgraid-S000030311-1 pengine: [16986]: debug: main: Checking for old instances of pengine
Apr 14 20:34:58 mgraid-S000030311-1 pengine: [16986]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/pengine
Apr 14 20:34:58 mgraid-S000030311-1 pengine: [16986]: debug: init_client_ipc_comms_nodispatch: Could not init comms on: /var/run/crm/pengine
Apr 14 20:34:58 mgraid-S000030311-1 pengine: [16986]: debug: main: Init server comms
Apr 14 20:34:58 mgraid-S000030311-1 pengine: [16986]: info: main: Starting pengine
Apr 14 20:35:01 mgraid-S000030311-1 crmd: [16635]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/pengine
Apr 14 20:35:01 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
Apr 14 20:35:01 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_INTEGRATE_TIMER_START
Apr 14 20:35:01 mgraid-S000030311-1 crmd: [16635]: debug: crm_timer_start: Started Integration Timer (I_INTEGRATED:180000ms), src=18
Apr 14 20:35:01 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_FINALIZE_TIMER_STOP
Apr 14 20:35:01 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_DC_TAKEOVER
Apr 14 20:35:01 mgraid-S000030311-1 crmd: [16635]: info: do_dc_takeover: Taking over DC status for this partition
Apr 14 20:35:01 mgraid-S000030311-1 cib: [16631]: info: cib_process_readwrite: We are now in R/W mode
Apr 14 20:35:01 mgraid-S000030311-1 cib: [16631]: info: cib_process_request: Operation complete: op cib_master for section 'all' (origin=local/crmd/6, version=0.0.0): ok (rc=0)
Apr 14 20:35:01 mgraid-S000030311-1 cib: [16631]: debug: activateCibXml: Triggering CIB write for cib_modify op
Apr 14 20:35:01 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: - <cib admin_epoch="0" epoch="0" num_updates="0" />
Apr 14 20:35:01 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: + <cib crm_feature_set="3.0.1" admin_epoch="0" epoch="1" num_updates="1" />
Apr 14 20:35:01 mgraid-S000030311-1 cib: [16631]: info: cib_process_request: Operation complete: op cib_modify for section cib (origin=local/crmd/7, version=0.1.1): ok (rc=0)
Apr 14 20:35:01 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/configuration/crm_config//nvpair[@name='dc-version'] does not exist
Apr 14 20:35:01 mgraid-S000030311-1 cib: [16631]: debug: Forking temp process write_cib_contents
Apr 14 20:35:01 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: Processing cib_query op for /cib (/cib)
Apr 14 20:35:01 mgraid-S000030311-1 cib: [17013]: info: write_cib_contents: Archived previous version as /var/lib/heartbeat/crm/cib-0.raw
Apr 14 20:35:01 mgraid-S000030311-1 cib: [17013]: info: write_cib_contents: Wrote version 0.1.0 of the CIB to disk (digest: ac157315afe588d1a2044329101fa89b)
Apr 14 20:35:01 mgraid-S000030311-1 cib: [17013]: info: retrieveCib: Reading cluster configuration from: /var/lib/heartbeat/crm/cib.2yGVS3 (digest: /var/lib/heartbeat/crm/cib.hE4eap)
Apr 14 20:35:01 mgraid-S000030311-1 cib: [16631]: debug: activateCibXml: Triggering CIB write for cib_modify op
Apr 14 20:35:01 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: - <cib admin_epoch="0" epoch="1" num_updates="1" />
Apr 14 20:35:01 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: + <cib admin_epoch="0" epoch="2" num_updates="1" >
Apr 14 20:35:01 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +   <configuration >
Apr 14 20:35:01 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +     <crm_config >
Apr 14 20:35:01 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +       <cluster_property_set id="cib-bootstrap-options" __crm_diff_marker__="added:top" >
Apr 14 20:35:01 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +         <nvpair id="cib-bootstrap-options-dc-version" name="dc-version" value="1.0.9-89bd754939df5150de7cd76835f98fe90851b677" />
Apr 14 20:35:01 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +       </cluster_property_set>
Apr 14 20:35:01 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +     </crm_config>
Apr 14 20:35:01 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +   </configuration>
Apr 14 20:35:01 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: + </cib>
Apr 14 20:35:01 mgraid-S000030311-1 cib: [16631]: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/10, version=0.2.1): ok (rc=0)
Apr 14 20:35:01 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/configuration/crm_config//nvpair[@name='cluster-infrastructure'] does not exist
Apr 14 20:35:01 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: Processing cib_query op for /cib (/cib)
Apr 14 20:35:01 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_DC_JOIN_OFFER_ALL
Apr 14 20:35:01 mgraid-S000030311-1 crmd: [16635]: debug: initialize_join: join-1: Initializing join data (flag=true)
Apr 14 20:35:01 mgraid-S000030311-1 crmd: [16635]: info: join_make_offer: Making join offers based on membership 2
Apr 14 20:35:01 mgraid-S000030311-1 crmd: [16635]: debug: join_make_offer: join-1: Sending offer to mgraid-s000030311-0
Apr 14 20:35:01 mgraid-S000030311-1 crmd: [16635]: debug: join_make_offer: join-1: Sending offer to mgraid-s000030311-1
Apr 14 20:35:01 mgraid-S000030311-1 crmd: [16635]: info: do_dc_join_offer_all: join-1: Waiting on 2 outstanding join acks
Apr 14 20:35:01 mgraid-S000030311-1 crmd: [16635]: debug: fsa_dump_inputs: Added input: 0000000000000001 (R_THE_DC)
Apr 14 20:35:01 mgraid-S000030311-1 crmd: [16635]: debug: fsa_dump_inputs: Added input: 0000000000000010 (R_JOIN_OK)
Apr 14 20:35:01 mgraid-S000030311-1 crmd: [16635]: debug: fsa_dump_inputs: Added input: 0000000000000080 (R_INVOKE_PE)
Apr 14 20:35:01 mgraid-S000030311-1 crmd: [16635]: debug: fsa_dump_inputs: Added input: 0000000000000200 (R_PE_CONNECTED)
Apr 14 20:35:01 mgraid-S000030311-1 crmd: [16635]: debug: fsa_dump_inputs: Added input: 0000000000000400 (R_TE_CONNECTED)
Apr 14 20:35:01 mgraid-S000030311-1 crmd: [16635]: debug: fsa_dump_inputs: Added input: 0000000000002000 (R_PE_REQUIRED)
Apr 14 20:35:01 mgraid-S000030311-1 crmd: [16635]: info: te_connect_stonith: Attempting connection to fencing daemon...
Apr 14 20:35:01 mgraid-S000030311-1 cib: [16631]: debug: activateCibXml: Triggering CIB write for cib_modify op
Apr 14 20:35:01 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: - <cib admin_epoch="0" epoch="2" num_updates="1" />
Apr 14 20:35:01 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: + <cib admin_epoch="0" epoch="3" num_updates="1" >
Apr 14 20:35:01 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +   <configuration >
Apr 14 20:35:01 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +     <crm_config >
Apr 14 20:35:01 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +       <cluster_property_set id="cib-bootstrap-options" >
Apr 14 20:35:01 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +         <nvpair id="cib-bootstrap-options-cluster-infrastructure" name="cluster-infrastructure" value="Heartbeat" __crm_diff_marker__="added:top" />
Apr 14 20:35:01 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +       </cluster_property_set>
Apr 14 20:35:01 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +     </crm_config>
Apr 14 20:35:01 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +   </configuration>
Apr 14 20:35:01 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: + </cib>
Apr 14 20:35:01 mgraid-S000030311-1 cib: [16631]: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/13, version=0.3.1): ok (rc=0)
Apr 14 20:35:01 mgraid-S000030311-1 cib: [16631]: info: Managed write_cib_contents process 17013 exited with return code 0.
Apr 14 20:35:01 mgraid-S000030311-1 cib: [16631]: debug: Forking temp process write_cib_contents
Apr 14 20:35:01 mgraid-S000030311-1 cib: [17014]: info: write_cib_contents: Archived previous version as /var/lib/heartbeat/crm/cib-1.raw
Apr 14 20:35:01 mgraid-S000030311-1 cib: [17014]: info: write_cib_contents: Wrote version 0.3.0 of the CIB to disk (digest: 4eb33a37f260e1e5eadfa303bd0ac828)
Apr 14 20:35:01 mgraid-S000030311-1 cib: [17014]: info: retrieveCib: Reading cluster configuration from: /var/lib/heartbeat/crm/cib.dUwm55 (digest: /var/lib/heartbeat/crm/cib.zc99qr)
Apr 14 20:35:01 mgraid-S000030311-1 cib: [16631]: info: Managed write_cib_contents process 17014 exited with return code 0.
Apr 14 20:35:02 mgraid-S000030311-1 crmd: [16635]: debug: stonithd_signon: creating connection
Apr 14 20:35:02 mgraid-S000030311-1 crmd: [16635]: debug: sending out the signon msg.
Apr 14 20:35:02 mgraid-S000030311-1 stonithd: [16633]: debug: client tengine (pid=16635) succeeded to signon to stonithd.
Apr 14 20:35:02 mgraid-S000030311-1 crmd: [16635]: debug: signed on to stonithd.
Apr 14 20:35:02 mgraid-S000030311-1 crmd: [16635]: info: te_connect_stonith: Connected
Apr 14 20:35:02 mgraid-S000030311-1 crmd: [16635]: debug: config_query_callback: Call 14 : Parsing CIB options
Apr 14 20:35:02 mgraid-S000030311-1 crmd: [16635]: debug: cluster_option: Using default value '60s' for cluster option 'dc-deadtime'
Apr 14 20:35:02 mgraid-S000030311-1 crmd: [16635]: debug: cluster_option: Using default value '15min' for cluster option 'cluster-recheck-interval'
Apr 14 20:35:02 mgraid-S000030311-1 crmd: [16635]: debug: cluster_option: Using default value '2min' for cluster option 'election-timeout'
Apr 14 20:35:02 mgraid-S000030311-1 crmd: [16635]: debug: cluster_option: Using default value '20min' for cluster option 'shutdown-escalation'
Apr 14 20:35:02 mgraid-S000030311-1 crmd: [16635]: debug: cluster_option: Using default value '3min' for cluster option 'crmd-integration-timeout'
Apr 14 20:35:02 mgraid-S000030311-1 crmd: [16635]: debug: cluster_option: Using default value '30min' for cluster option 'crmd-finalization-timeout'
Apr 14 20:35:02 mgraid-S000030311-1 crmd: [16635]: debug: cluster_option: Using default value '2' for cluster option 'expected-quorum-votes'
Apr 14 20:35:02 mgraid-S000030311-1 crmd: [16635]: info: config_query_callback: Checking for expired actions every 900000ms
Apr 14 20:35:02 mgraid-S000030311-1 crmd: [16635]: debug: handle_request: Raising I_JOIN_OFFER: join-1
Apr 14 20:35:02 mgraid-S000030311-1 crmd: [16635]: debug: s_crmd_fsa: Processing I_JOIN_OFFER: [ state=S_INTEGRATION cause=C_HA_MESSAGE origin=route_message ]
Apr 14 20:35:02 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_CL_JOIN_REQUEST
Apr 14 20:35:02 mgraid-S000030311-1 crmd: [16635]: info: update_dc: Set DC to mgraid-s000030311-1 (3.0.1)
Apr 14 20:35:02 mgraid-S000030311-1 crmd: [16635]: debug: do_cl_join_offer_respond: do_cl_join_offer_respond added action A_DC_TIMER_STOP to the FSA
Apr 14 20:35:02 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
Apr 14 20:35:02 mgraid-S000030311-1 crmd: [16635]: debug: join_query_callback: Respond to join offer join-1
Apr 14 20:35:02 mgraid-S000030311-1 crmd: [16635]: debug: join_query_callback: Acknowledging mgraid-s000030311-1 as our DC
Apr 14 20:35:02 mgraid-S000030311-1 crmd: [16635]: debug: s_crmd_fsa: Processing I_JOIN_REQUEST: [ state=S_INTEGRATION cause=C_HA_MESSAGE origin=route_message ]
Apr 14 20:35:02 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_DC_JOIN_PROCESS_REQ
Apr 14 20:35:02 mgraid-S000030311-1 crmd: [16635]: debug: do_dc_join_filter_offer: Processing req from mgraid-s000030311-0
Apr 14 20:35:02 mgraid-S000030311-1 crmd: [16635]: debug: do_dc_join_filter_offer: join-1: Welcoming node mgraid-s000030311-0 (ref join_request-crmd-1302838502-5)
Apr 14 20:35:02 mgraid-S000030311-1 crmd: [16635]: debug: do_dc_join_filter_offer: 1 nodes have been integrated into join-1
Apr 14 20:35:02 mgraid-S000030311-1 crmd: [16635]: debug: check_join_state: Invoked by do_dc_join_filter_offer in state: S_INTEGRATION
Apr 14 20:35:02 mgraid-S000030311-1 crmd: [16635]: debug: do_dc_join_filter_offer: join-1: Still waiting on 1 outstanding offers
Apr 14 20:35:02 mgraid-S000030311-1 crmd: [16635]: debug: s_crmd_fsa: Processing I_JOIN_REQUEST: [ state=S_INTEGRATION cause=C_HA_MESSAGE origin=route_message ]
Apr 14 20:35:02 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_DC_JOIN_PROCESS_REQ
Apr 14 20:35:02 mgraid-S000030311-1 crmd: [16635]: debug: do_dc_join_filter_offer: Processing req from mgraid-s000030311-1
Apr 14 20:35:02 mgraid-S000030311-1 crmd: [16635]: debug: do_dc_join_filter_offer: mgraid-s000030311-1 has a better generation number than the current max mgraid-s000030311-0
Apr 14 20:35:02 mgraid-S000030311-1 crmd: [16635]: debug: log_data_element: do_dc_join_filter_offer: Max generation <generation_tuple validate-with="pacemaker-1.0" crm_feature_set="3.0.1" admin_epoch="0" epoch="3" num_updates="1" />
Apr 14 20:35:02 mgraid-S000030311-1 crmd: [16635]: debug: log_data_element: do_dc_join_filter_offer: Their generation <generation_tuple epoch="3" num_updates="1" admin_epoch="0" validate-with="pacemaker-1.0" crm_feature_set="3.0.1" />
Apr 14 20:35:02 mgraid-S000030311-1 crmd: [16635]: debug: do_dc_join_filter_offer: join-1: Welcoming node mgraid-s000030311-1 (ref join_request-crmd-1302838502-6)
Apr 14 20:35:02 mgraid-S000030311-1 crmd: [16635]: debug: do_dc_join_filter_offer: 2 nodes have been integrated into join-1
Apr 14 20:35:02 mgraid-S000030311-1 crmd: [16635]: debug: check_join_state: Invoked by do_dc_join_filter_offer in state: S_INTEGRATION
Apr 14 20:35:02 mgraid-S000030311-1 crmd: [16635]: debug: check_join_state: join-1: Integration of 2 peers complete: do_dc_join_filter_offer
Apr 14 20:35:02 mgraid-S000030311-1 crmd: [16635]: debug: s_crmd_fsa: Processing I_INTEGRATED: [ state=S_INTEGRATION cause=C_FSA_INTERNAL origin=check_join_state ]
Apr 14 20:35:02 mgraid-S000030311-1 crmd: [16635]: info: do_state_transition: State transition S_INTEGRATION -> S_FINALIZE_JOIN [ input=I_INTEGRATED cause=C_FSA_INTERNAL origin=check_join_state ]
Apr 14 20:35:02 mgraid-S000030311-1 crmd: [16635]: info: do_state_transition: All 2 cluster nodes responded to the join offer.
Apr 14 20:35:02 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
Apr 14 20:35:02 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_INTEGRATE_TIMER_STOP
Apr 14 20:35:02 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_FINALIZE_TIMER_START
Apr 14 20:35:02 mgraid-S000030311-1 crmd: [16635]: debug: crm_timer_start: Started Finalization Timer (I_ELECTION:1800000ms), src=23
Apr 14 20:35:02 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_DC_JOIN_FINALIZE
Apr 14 20:35:02 mgraid-S000030311-1 crmd: [16635]: debug: do_dc_join_finalize: Finializing join-1 for 2 clients
Apr 14 20:35:02 mgraid-S000030311-1 crmd: [16635]: info: do_dc_join_finalize: join-1: Syncing the CIB from mgraid-s000030311-1 to the rest of the cluster
Apr 14 20:35:02 mgraid-S000030311-1 crmd: [16635]: debug: fsa_dump_inputs: Added input: 0000000000020000 (R_HAVE_CIB)
Apr 14 20:35:02 mgraid-S000030311-1 cib: [16631]: debug: sync_our_cib: Syncing CIB to all peers
Apr 14 20:35:02 mgraid-S000030311-1 cib: [16631]: info: cib_process_request: Operation complete: op cib_sync for section 'all' (origin=local/crmd/16, version=0.3.1): ok (rc=0)
Apr 14 20:35:02 mgraid-S000030311-1 crmd: [16635]: debug: check_join_state: Invoked by finalize_sync_callback in state: S_FINALIZE_JOIN
Apr 14 20:35:02 mgraid-S000030311-1 crmd: [16635]: debug: check_join_state: join-1: Still waiting on 2 integrated nodes
Apr 14 20:35:02 mgraid-S000030311-1 crmd: [16635]: debug: finalize_sync_callback: Notifying 2 clients of join-1 results
Apr 14 20:35:02 mgraid-S000030311-1 crmd: [16635]: debug: finalize_join_for: join-1: ACK'ing join request from mgraid-s000030311-0, state member
Apr 14 20:35:02 mgraid-S000030311-1 crmd: [16635]: debug: finalize_join_for: join-1: ACK'ing join request from mgraid-s000030311-1, state member
Apr 14 20:35:02 mgraid-S000030311-1 cib: [16631]: debug: activateCibXml: Triggering CIB write for cib_modify op
Apr 14 20:35:02 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: - <cib admin_epoch="0" epoch="3" num_updates="1" />
Apr 14 20:35:02 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: + <cib admin_epoch="0" epoch="4" num_updates="1" >
Apr 14 20:35:02 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +   <configuration >
Apr 14 20:35:02 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +     <nodes >
Apr 14 20:35:02 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +       <node id="f4e5e15c-d06b-4e37-89b9-4621af05128f" uname="mgraid-s000030311-0" type="normal" __crm_diff_marker__="added:top" />
Apr 14 20:35:02 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +     </nodes>
Apr 14 20:35:02 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +   </configuration>
Apr 14 20:35:02 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: + </cib>
Apr 14 20:35:02 mgraid-S000030311-1 cib: [16631]: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/17, version=0.4.1): ok (rc=0)
Apr 14 20:35:02 mgraid-S000030311-1 cib: [16631]: debug: activateCibXml: Triggering CIB write for cib_modify op
Apr 14 20:35:02 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: - <cib admin_epoch="0" epoch="4" num_updates="1" />
Apr 14 20:35:02 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: + <cib admin_epoch="0" epoch="5" num_updates="1" >
Apr 14 20:35:02 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +   <configuration >
Apr 14 20:35:02 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +     <nodes >
Apr 14 20:35:02 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +       <node id="856c1f72-7cd1-4906-8183-8be87eef96f2" uname="mgraid-s000030311-1" type="normal" __crm_diff_marker__="added:top" />
Apr 14 20:35:02 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +     </nodes>
Apr 14 20:35:02 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +   </configuration>
Apr 14 20:35:02 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: + </cib>
Apr 14 20:35:02 mgraid-S000030311-1 cib: [16631]: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/18, version=0.5.1): ok (rc=0)
Apr 14 20:35:02 mgraid-S000030311-1 cib: [16631]: debug: Forking temp process write_cib_contents
Apr 14 20:35:02 mgraid-S000030311-1 cib: [17020]: info: write_cib_contents: Archived previous version as /var/lib/heartbeat/crm/cib-2.raw
Apr 14 20:35:02 mgraid-S000030311-1 cib: [17020]: info: write_cib_contents: Wrote version 0.5.0 of the CIB to disk (digest: acbb7d1ff23a6b30c948d1ea5d258314)
Apr 14 20:35:02 mgraid-S000030311-1 cib: [17020]: info: retrieveCib: Reading cluster configuration from: /var/lib/heartbeat/crm/cib.KZFrPF (digest: /var/lib/heartbeat/crm/cib.CFsE34)
Apr 14 20:35:02 mgraid-S000030311-1 cib: [16631]: info: Managed write_cib_contents process 17020 exited with return code 0.
Apr 14 20:35:02 mgraid-S000030311-1 crmd: [16635]: debug: handle_request: Raising I_JOIN_RESULT: join-1
Apr 14 20:35:02 mgraid-S000030311-1 crmd: [16635]: debug: s_crmd_fsa: Processing I_JOIN_RESULT: [ state=S_FINALIZE_JOIN cause=C_HA_MESSAGE origin=route_message ]
Apr 14 20:35:02 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_CL_JOIN_RESULT
Apr 14 20:35:02 mgraid-S000030311-1 crmd: [16635]: debug: do_cl_join_finalize_respond: Confirming join join-1: join_ack_nack
Apr 14 20:35:02 mgraid-S000030311-1 crmd: [16635]: debug: do_cl_join_finalize_respond: join-1: Join complete.  Sending local LRM status to mgraid-s000030311-1
Apr 14 20:35:02 mgraid-S000030311-1 crmd: [16635]: info: update_attrd: Connecting to attrd...
Apr 14 20:35:02 mgraid-S000030311-1 crmd: [16635]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/attrd
Apr 14 20:35:02 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: //node_state[@uname='mgraid-s000030311-1']/transient_attributes was already removed
Apr 14 20:35:02 mgraid-S000030311-1 crmd: [16635]: debug: attrd_update: Sent update: terminate=(null) for mgraid-s000030311-1
Apr 14 20:35:02 mgraid-S000030311-1 crmd: [16635]: debug: attrd_update: Sent update: shutdown=(null) for mgraid-s000030311-1
Apr 14 20:35:02 mgraid-S000030311-1 attrd: [16634]: debug: attrd_local_callback: update message from crmd: terminate=<null>
Apr 14 20:35:02 mgraid-S000030311-1 attrd: [16634]: info: find_hash_entry: Creating hash entry for terminate
Apr 14 20:35:02 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_DC_JOIN_PROCESS_ACK
Apr 14 20:35:02 mgraid-S000030311-1 crmd: [16635]: debug: do_dc_join_ack: Ignoring op=join_ack_nack message from mgraid-s000030311-1
Apr 14 20:35:02 mgraid-S000030311-1 attrd: [16634]: debug: attrd_local_callback: Supplied: (null), Current: (null), Stored: (null)
Apr 14 20:35:02 mgraid-S000030311-1 attrd: [16634]: debug: attrd_local_callback: update message from crmd: shutdown=<null>
Apr 14 20:35:02 mgraid-S000030311-1 attrd: [16634]: info: find_hash_entry: Creating hash entry for shutdown
Apr 14 20:35:02 mgraid-S000030311-1 attrd: [16634]: debug: attrd_local_callback: Supplied: (null), Current: (null), Stored: (null)
Apr 14 20:35:02 mgraid-S000030311-1 cib: [16631]: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='mgraid-s000030311-1']/transient_attributes (origin=local/crmd/19, version=0.5.1): ok (rc=0)
Apr 14 20:35:02 mgraid-S000030311-1 crmd: [16635]: info: erase_xpath_callback: Deletion of "//node_state[@uname='mgraid-s000030311-1']/transient_attributes": ok (rc=0)
Apr 14 20:35:03 mgraid-S000030311-1 crmd: [16635]: debug: s_crmd_fsa: Processing I_JOIN_RESULT: [ state=S_FINALIZE_JOIN cause=C_HA_MESSAGE origin=route_message ]
Apr 14 20:35:03 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_CL_JOIN_RESULT
Apr 14 20:35:03 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_DC_JOIN_PROCESS_ACK
Apr 14 20:35:03 mgraid-S000030311-1 crmd: [16635]: info: do_dc_join_ack: join-1: Updating node state to member for mgraid-s000030311-0
Apr 14 20:35:03 mgraid-S000030311-1 crmd: [16635]: debug: do_dc_join_ack: join-1: Registered callback for LRM update 21
Apr 14 20:35:03 mgraid-S000030311-1 crmd: [16635]: debug: s_crmd_fsa: Processing I_JOIN_RESULT: [ state=S_FINALIZE_JOIN cause=C_HA_MESSAGE origin=route_message ]
Apr 14 20:35:03 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_CL_JOIN_RESULT
Apr 14 20:35:03 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_DC_JOIN_PROCESS_ACK
Apr 14 20:35:03 mgraid-S000030311-1 crmd: [16635]: info: do_dc_join_ack: join-1: Updating node state to member for mgraid-s000030311-1
Apr 14 20:35:03 mgraid-S000030311-1 crmd: [16635]: debug: do_dc_join_ack: join-1: Registered callback for LRM update 23
Apr 14 20:35:03 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: //node_state[@uname='mgraid-s000030311-0']/lrm was already removed
Apr 14 20:35:03 mgraid-S000030311-1 cib: [16631]: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='mgraid-s000030311-0']/lrm (origin=local/crmd/20, version=0.5.1): ok (rc=0)
Apr 14 20:35:03 mgraid-S000030311-1 crmd: [16635]: info: erase_xpath_callback: Deletion of "//node_state[@uname='mgraid-s000030311-0']/lrm": ok (rc=0)
Apr 14 20:35:03 mgraid-S000030311-1 crmd: [16635]: debug: join_update_complete_callback: Join update 21 complete
Apr 14 20:35:03 mgraid-S000030311-1 crmd: [16635]: debug: check_join_state: Invoked by join_update_complete_callback in state: S_FINALIZE_JOIN
Apr 14 20:35:03 mgraid-S000030311-1 crmd: [16635]: debug: check_join_state: join-1 complete: join_update_complete_callback
Apr 14 20:35:03 mgraid-S000030311-1 crmd: [16635]: debug: s_crmd_fsa: Processing I_FINALIZED: [ state=S_FINALIZE_JOIN cause=C_FSA_INTERNAL origin=check_join_state ]
Apr 14 20:35:03 mgraid-S000030311-1 crmd: [16635]: info: do_state_transition: State transition S_FINALIZE_JOIN -> S_POLICY_ENGINE [ input=I_FINALIZED cause=C_FSA_INTERNAL origin=check_join_state ]
Apr 14 20:35:03 mgraid-S000030311-1 crmd: [16635]: info: populate_cib_nodes_ha: Requesting the list of configured nodes
Apr 14 20:35:03 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: //node_state[@uname='mgraid-s000030311-1']/lrm was already removed
Apr 14 20:35:03 mgraid-S000030311-1 cib: [16631]: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='mgraid-s000030311-1']/lrm (origin=local/crmd/22, version=0.5.2): ok (rc=0)
Apr 14 20:35:03 mgraid-S000030311-1 attrd: [16634]: info: attrd_ha_callback: flush message from mgraid-s000030311-0
Apr 14 20:35:03 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: //node_state[@uname='mgraid-s000030311-0']/transient_attributes was already removed
Apr 14 20:35:03 mgraid-S000030311-1 crmd: [16635]: debug: populate_cib_nodes_ha: Node: mgraid-s000030311-1 (uuid: 856c1f72-7cd1-4906-8183-8be87eef96f2)
Apr 14 20:35:03 mgraid-S000030311-1 crmd: [16635]: debug: populate_cib_nodes_ha: Node: mgraid-s000030311-0 (uuid: f4e5e15c-d06b-4e37-89b9-4621af05128f)
Apr 14 20:35:03 mgraid-S000030311-1 crmd: [16635]: debug: ghash_update_cib_node: Updating mgraid-s000030311-0: true (overwrite=true) hash_size=2
Apr 14 20:35:03 mgraid-S000030311-1 crmd: [16635]: debug: ghash_update_cib_node: Updating mgraid-s000030311-1: true (overwrite=true) hash_size=2
Apr 14 20:35:03 mgraid-S000030311-1 crmd: [16635]: info: do_state_transition: All 2 cluster nodes are eligible to run resources.
Apr 14 20:35:03 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
Apr 14 20:35:03 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_INTEGRATE_TIMER_STOP
Apr 14 20:35:03 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_FINALIZE_TIMER_STOP
Apr 14 20:35:03 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_DC_JOIN_FINAL
Apr 14 20:35:03 mgraid-S000030311-1 crmd: [16635]: info: do_dc_join_final: Ensuring DC, quorum and node attributes are up-to-date
Apr 14 20:35:03 mgraid-S000030311-1 crmd: [16635]: debug: attrd_update: Sent update: (null)=(null) for localhost
Apr 14 20:35:03 mgraid-S000030311-1 crmd: [16635]: info: crm_update_quorum: Updating quorum status to true (call=26)
Apr 14 20:35:03 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_TE_CANCEL
Apr 14 20:35:03 mgraid-S000030311-1 crmd: [16635]: debug: do_te_invoke: Cancelling the transition: inactive
Apr 14 20:35:03 mgraid-S000030311-1 crmd: [16635]: info: abort_transition_graph: do_te_invoke:191 - Triggered transition abort (complete=1) : Peer Cancelled
Apr 14 20:35:03 mgraid-S000030311-1 crmd: [16635]: debug: s_crmd_fsa: Processing I_PE_CALC: [ state=S_POLICY_ENGINE cause=C_FSA_INTERNAL origin=abort_transition_graph ]
Apr 14 20:35:03 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_PE_INVOKE
Apr 14 20:35:03 mgraid-S000030311-1 crmd: [16635]: info: do_pe_invoke: Query 27: Requesting the current CIB: S_POLICY_ENGINE
Apr 14 20:35:03 mgraid-S000030311-1 crmd: [16635]: info: erase_xpath_callback: Deletion of "//node_state[@uname='mgraid-s000030311-1']/lrm": ok (rc=0)
Apr 14 20:35:03 mgraid-S000030311-1 crmd: [16635]: debug: te_update_diff: Processing diff (cib_modify): 0.5.2 -> 0.5.3 (S_POLICY_ENGINE)
Apr 14 20:35:03 mgraid-S000030311-1 crmd: [16635]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify/cib_update_result/diff
Apr 14 20:35:03 mgraid-S000030311-1 crmd: [16635]: debug: join_update_complete_callback: Join update 23 complete
Apr 14 20:35:03 mgraid-S000030311-1 crmd: [16635]: debug: check_join_state: Invoked by join_update_complete_callback in state: S_POLICY_ENGINE
Apr 14 20:35:03 mgraid-S000030311-1 cib: [16631]: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='mgraid-s000030311-0']/transient_attributes (origin=mgraid-s000030311-0/crmd/9, version=0.5.3): ok (rc=0)
Apr 14 20:35:03 mgraid-S000030311-1 cib: [16631]: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/24, version=0.5.3): ok (rc=0)
Apr 14 20:35:03 mgraid-S000030311-1 cib: [16631]: debug: activateCibXml: Triggering CIB write for cib_modify op
Apr 14 20:35:03 mgraid-S000030311-1 crmd: [16635]: debug: te_update_diff: Processing diff (cib_modify): 0.5.3 -> 0.6.1 (S_POLICY_ENGINE)
Apr 14 20:35:03 mgraid-S000030311-1 crmd: [16635]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify/cib_update_result/diff
Apr 14 20:35:03 mgraid-S000030311-1 crmd: [16635]: info: abort_transition_graph: need_abort:59 - Triggered transition abort (complete=1) : Non-status change
Apr 14 20:35:03 mgraid-S000030311-1 crmd: [16635]: info: need_abort: Aborting on change to have-quorum
Apr 14 20:35:03 mgraid-S000030311-1 crmd: [16635]: debug: s_crmd_fsa: Processing I_PE_CALC: [ state=S_POLICY_ENGINE cause=C_FSA_INTERNAL origin=abort_transition_graph ]
Apr 14 20:35:03 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_PE_INVOKE
Apr 14 20:35:03 mgraid-S000030311-1 crmd: [16635]: info: do_pe_invoke: Query 28: Requesting the current CIB: S_POLICY_ENGINE
Apr 14 20:35:03 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: - <cib admin_epoch="0" epoch="5" num_updates="3" />
Apr 14 20:35:03 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: + <cib have-quorum="1" dc-uuid="856c1f72-7cd1-4906-8183-8be87eef96f2" admin_epoch="0" epoch="6" num_updates="1" />
Apr 14 20:35:03 mgraid-S000030311-1 cib: [16631]: info: cib_process_request: Operation complete: op cib_modify for section cib (origin=local/crmd/26, version=0.6.1): ok (rc=0)
Apr 14 20:35:03 mgraid-S000030311-1 crmd: [16635]: info: do_pe_invoke_callback: Invoking the PE: query=28, ref=pe_calc-dc-1302838503-10, seq=2, quorate=1
Apr 14 20:35:03 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='shutdown'] does not exist
Apr 14 20:35:03 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for shutdown=(null) passed
Apr 14 20:35:03 mgraid-S000030311-1 attrd: [16634]: info: attrd_ha_callback: flush message from mgraid-s000030311-0
Apr 14 20:35:03 mgraid-S000030311-1 pengine: [16986]: debug: cluster_option: Using default value 'stop' for cluster option 'no-quorum-policy'
Apr 14 20:35:03 mgraid-S000030311-1 pengine: [16986]: debug: cluster_option: Using default value 'true' for cluster option 'symmetric-cluster'
Apr 14 20:35:03 mgraid-S000030311-1 pengine: [16986]: debug: cluster_option: Using default value '0' for cluster option 'default-resource-stickiness'
Apr 14 20:35:03 mgraid-S000030311-1 pengine: [16986]: debug: cluster_option: Using default value 'true' for cluster option 'is-managed-default'
Apr 14 20:35:03 mgraid-S000030311-1 pengine: [16986]: debug: cluster_option: Using default value 'false' for cluster option 'maintenance-mode'
Apr 14 20:35:03 mgraid-S000030311-1 pengine: [16986]: debug: cluster_option: Using default value 'true' for cluster option 'start-failure-is-fatal'
Apr 14 20:35:03 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='terminate'] does not exist
Apr 14 20:35:03 mgraid-S000030311-1 pengine: [16986]: debug: cluster_option: Using default value 'true' for cluster option 'stonith-enabled'
Apr 14 20:35:03 mgraid-S000030311-1 pengine: [16986]: debug: cluster_option: Using default value 'reboot' for cluster option 'stonith-action'
Apr 14 20:35:03 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for terminate=(null) passed
Apr 14 20:35:03 mgraid-S000030311-1 pengine: [16986]: debug: cluster_option: Using default value '60s' for cluster option 'stonith-timeout'
Apr 14 20:35:03 mgraid-S000030311-1 pengine: [16986]: debug: cluster_option: Using default value 'true' for cluster option 'startup-fencing'
Apr 14 20:35:03 mgraid-S000030311-1 attrd: [16634]: info: attrd_local_callback: Sending full refresh (origin=crmd)
Apr 14 20:35:03 mgraid-S000030311-1 pengine: [16986]: debug: cluster_option: Using default value '60s' for cluster option 'cluster-delay'
Apr 14 20:35:03 mgraid-S000030311-1 attrd: [16634]: info: attrd_trigger_update: Sending flush op to all hosts for: shutdown (<null>)
Apr 14 20:35:03 mgraid-S000030311-1 pengine: [16986]: debug: cluster_option: Using default value '30' for cluster option 'batch-limit'
Apr 14 20:35:03 mgraid-S000030311-1 cib: [16631]: debug: Forking temp process write_cib_contents
Apr 14 20:35:03 mgraid-S000030311-1 pengine: [16986]: debug: cluster_option: Using default value '20s' for cluster option 'default-action-timeout'
Apr 14 20:35:03 mgraid-S000030311-1 pengine: [16986]: debug: cluster_option: Using default value 'false' for cluster option 'stop-all-resources'
Apr 14 20:35:03 mgraid-S000030311-1 pengine: [16986]: debug: cluster_option: Using default value 'true' for cluster option 'stop-orphan-resources'
Apr 14 20:35:03 mgraid-S000030311-1 pengine: [16986]: debug: cluster_option: Using default value 'true' for cluster option 'stop-orphan-actions'
Apr 14 20:35:03 mgraid-S000030311-1 pengine: [16986]: debug: cluster_option: Using default value 'false' for cluster option 'remove-after-stop'
Apr 14 20:35:03 mgraid-S000030311-1 pengine: [16986]: debug: cluster_option: Using default value '-1' for cluster option 'pe-error-series-max'
Apr 14 20:35:03 mgraid-S000030311-1 pengine: [16986]: debug: cluster_option: Using default value '-1' for cluster option 'pe-warn-series-max'
Apr 14 20:35:03 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='shutdown'] does not exist
Apr 14 20:35:03 mgraid-S000030311-1 pengine: [16986]: debug: cluster_option: Using default value '-1' for cluster option 'pe-input-series-max'
Apr 14 20:35:03 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for shutdown=(null) passed
Apr 14 20:35:03 mgraid-S000030311-1 pengine: [16986]: debug: cluster_option: Using default value 'none' for cluster option 'node-health-strategy'
Apr 14 20:35:03 mgraid-S000030311-1 attrd: [16634]: info: attrd_trigger_update: Sending flush op to all hosts for: terminate (<null>)
Apr 14 20:35:03 mgraid-S000030311-1 pengine: [16986]: debug: cluster_option: Using default value '0' for cluster option 'node-health-green'
Apr 14 20:35:03 mgraid-S000030311-1 pengine: [16986]: debug: cluster_option: Using default value '0' for cluster option 'node-health-yellow'
Apr 14 20:35:03 mgraid-S000030311-1 cib: [17026]: info: write_cib_contents: Archived previous version as /var/lib/heartbeat/crm/cib-3.raw
Apr 14 20:35:03 mgraid-S000030311-1 pengine: [16986]: debug: cluster_option: Using default value '-INFINITY' for cluster option 'node-health-red'
Apr 14 20:35:03 mgraid-S000030311-1 pengine: [16986]: debug: unpack_config: STONITH timeout: 60000
Apr 14 20:35:03 mgraid-S000030311-1 pengine: [16986]: debug: unpack_config: STONITH of failed nodes is enabled
Apr 14 20:35:03 mgraid-S000030311-1 pengine: [16986]: debug: unpack_config: Stop all active resources: false
Apr 14 20:35:03 mgraid-S000030311-1 pengine: [16986]: debug: unpack_config: Cluster is symmetric - resources can run anywhere by default
Apr 14 20:35:03 mgraid-S000030311-1 pengine: [16986]: debug: unpack_config: Default stickiness: 0
Apr 14 20:35:03 mgraid-S000030311-1 pengine: [16986]: debug: unpack_config: On loss of CCM Quorum: Stop ALL resources
Apr 14 20:35:03 mgraid-S000030311-1 pengine: [16986]: info: unpack_config: Node scores: 'red' = -INFINITY, 'yellow' = 0, 'green' = 0
Apr 14 20:35:03 mgraid-S000030311-1 cib: [17026]: info: write_cib_contents: Wrote version 0.6.0 of the CIB to disk (digest: d443d4a9c689152287169db286c0adfc)
Apr 14 20:35:03 mgraid-S000030311-1 pengine: [16986]: ERROR: unpack_resources: Resource start-up disabled since no STONITH resources have been defined
Apr 14 20:35:03 mgraid-S000030311-1 pengine: [16986]: ERROR: unpack_resources: Either configure some or disable STONITH with the stonith-enabled option
Apr 14 20:35:03 mgraid-S000030311-1 pengine: [16986]: ERROR: unpack_resources: NOTE: Clusters with shared data need STONITH to ensure data integrity
Apr 14 20:35:03 mgraid-S000030311-1 cib: [17026]: info: retrieveCib: Reading cluster configuration from: /var/lib/heartbeat/crm/cib.n0bWwM (digest: /var/lib/heartbeat/crm/cib.fxPLre)
Apr 14 20:35:03 mgraid-S000030311-1 pengine: [16986]: info: determine_online_status: Node mgraid-s000030311-0 is online
Apr 14 20:35:03 mgraid-S000030311-1 pengine: [16986]: info: determine_online_status: Node mgraid-s000030311-1 is online
Apr 14 20:35:03 mgraid-S000030311-1 pengine: [16986]: info: stage6: Delaying fencing operations until there are resources to manage
Apr 14 20:35:03 mgraid-S000030311-1 crmd: [16635]: debug: s_crmd_fsa: Processing I_PE_SUCCESS: [ state=S_POLICY_ENGINE cause=C_IPC_MESSAGE origin=handle_response ]
Apr 14 20:35:03 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_LOG   
Apr 14 20:35:03 mgraid-S000030311-1 crmd: [16635]: info: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
Apr 14 20:35:03 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
Apr 14 20:35:03 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_INTEGRATE_TIMER_STOP
Apr 14 20:35:03 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_FINALIZE_TIMER_STOP
Apr 14 20:35:04 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_TE_INVOKE
Apr 14 20:35:04 mgraid-S000030311-1 crmd: [16635]: info: unpack_graph: Unpacked transition 0: 2 actions in 2 synapses
Apr 14 20:35:04 mgraid-S000030311-1 crmd: [16635]: info: do_te_invoke: Processing graph 0 (ref=pe_calc-dc-1302838503-10) derived from /var/lib/pengine/pe-input-3.bz2
Apr 14 20:35:04 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='terminate'] does not exist
Apr 14 20:35:04 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for terminate=(null) passed
Apr 14 20:35:04 mgraid-S000030311-1 crmd: [16635]: info: te_rsc_command: Initiating action 2: probe_complete probe_complete on mgraid-s000030311-0 - no waiting
Apr 14 20:35:04 mgraid-S000030311-1 cib: [16631]: info: Managed write_cib_contents process 17026 exited with return code 0.
Apr 14 20:35:04 mgraid-S000030311-1 crmd: [16635]: info: te_rsc_command: Initiating action 3: probe_complete probe_complete on mgraid-s000030311-1 (local) - no waiting
Apr 14 20:35:04 mgraid-S000030311-1 crmd: [16635]: debug: attrd_update: Sent update: probe_complete=true for localhost
Apr 14 20:35:04 mgraid-S000030311-1 attrd: [16634]: debug: attrd_local_callback: update message from crmd: probe_complete=true
Apr 14 20:35:04 mgraid-S000030311-1 crmd: [16635]: debug: run_graph: Transition 0 (Complete=0, Pending=0, Fired=2, Skipped=0, Incomplete=0, Source=/var/lib/pengine/pe-input-3.bz2): In-progress
Apr 14 20:35:04 mgraid-S000030311-1 attrd: [16634]: info: find_hash_entry: Creating hash entry for probe_complete
Apr 14 20:35:04 mgraid-S000030311-1 crmd: [16635]: info: run_graph: ====================================================
Apr 14 20:35:04 mgraid-S000030311-1 attrd: [16634]: debug: attrd_local_callback: Supplied: true, Current: (null), Stored: (null)
Apr 14 20:35:04 mgraid-S000030311-1 crmd: [16635]: notice: run_graph: Transition 0 (Complete=2, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pengine/pe-input-3.bz2): Complete
Apr 14 20:35:04 mgraid-S000030311-1 crmd: [16635]: info: te_graph_trigger: Transition 0 is now complete
Apr 14 20:35:04 mgraid-S000030311-1 attrd: [16634]: debug: attrd_local_callback: New value of probe_complete is true
Apr 14 20:35:04 mgraid-S000030311-1 attrd: [16634]: info: attrd_trigger_update: Sending flush op to all hosts for: probe_complete (true)
Apr 14 20:35:04 mgraid-S000030311-1 crmd: [16635]: debug: notify_crmd: Processing transition completion in state S_TRANSITION_ENGINE
Apr 14 20:35:04 mgraid-S000030311-1 crmd: [16635]: info: notify_crmd: Transition 0 status: done - <null>
Apr 14 20:35:04 mgraid-S000030311-1 crmd: [16635]: debug: s_crmd_fsa: Processing I_TE_SUCCESS: [ state=S_TRANSITION_ENGINE cause=C_FSA_INTERNAL origin=notify_crmd ]
Apr 14 20:35:04 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_LOG   
Apr 14 20:35:04 mgraid-S000030311-1 crmd: [16635]: info: do_state_transition: State transition S_TRANSITION_ENGINE -> S_IDLE [ input=I_TE_SUCCESS cause=C_FSA_INTERNAL origin=notify_crmd ]
Apr 14 20:35:04 mgraid-S000030311-1 crmd: [16635]: info: do_state_transition: Starting PEngine Recheck Timer
Apr 14 20:35:04 mgraid-S000030311-1 crmd: [16635]: debug: crm_timer_start: Started PEngine Recheck Timer (I_PE_CALC:900000ms), src=36
Apr 14 20:35:04 mgraid-S000030311-1 pengine: [16986]: info: process_pe_message: Transition 0: PEngine Input stored in: /var/lib/pengine/pe-input-3.bz2
Apr 14 20:35:04 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
Apr 14 20:35:04 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_INTEGRATE_TIMER_STOP
Apr 14 20:35:04 mgraid-S000030311-1 pengine: [16986]: info: process_pe_message: Configuration ERRORs found during PE processing.  Please run "crm_verify -L" to identify issues.
Apr 14 20:35:04 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_FINALIZE_TIMER_STOP
Apr 14 20:35:04 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='probe_complete'] does not exist
Apr 14 20:35:04 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: Processing cib_query op for /cib (/cib)
Apr 14 20:35:04 mgraid-S000030311-1 attrd: [16634]: info: attrd_perform_update: Sent update 8: probe_complete=true
Apr 14 20:35:04 mgraid-S000030311-1 crmd: [16635]: debug: te_update_diff: Processing diff (cib_modify): 0.6.1 -> 0.6.2 (S_IDLE)
Apr 14 20:35:04 mgraid-S000030311-1 crmd: [16635]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify/cib_update_result/diff
Apr 14 20:35:04 mgraid-S000030311-1 crmd: [16635]: info: abort_transition_graph: te_update_diff:146 - Triggered transition abort (complete=1, tag=transient_attributes, id=856c1f72-7cd1-4906-8183-8be87eef96f2, magic=NA, cib=0.6.2) : Transient attribute: update
Apr 14 20:35:04 mgraid-S000030311-1 crmd: [16635]: debug: log_data_element: abort_transition_graph: Cause <transient_attributes id="856c1f72-7cd1-4906-8183-8be87eef96f2" __crm_diff_marker__="added:top" >
Apr 14 20:35:04 mgraid-S000030311-1 crmd: [16635]: debug: log_data_element: abort_transition_graph: Cause   <instance_attributes id="status-856c1f72-7cd1-4906-8183-8be87eef96f2" >
Apr 14 20:35:04 mgraid-S000030311-1 crmd: [16635]: debug: log_data_element: abort_transition_graph: Cause     <nvpair id="status-856c1f72-7cd1-4906-8183-8be87eef96f2-probe_complete" name="probe_complete" value="true" />
Apr 14 20:35:04 mgraid-S000030311-1 crmd: [16635]: debug: log_data_element: abort_transition_graph: Cause   </instance_attributes>
Apr 14 20:35:04 mgraid-S000030311-1 crmd: [16635]: debug: log_data_element: abort_transition_graph: Cause </transient_attributes>
Apr 14 20:35:04 mgraid-S000030311-1 crmd: [16635]: debug: s_crmd_fsa: Processing I_PE_CALC: [ state=S_IDLE cause=C_FSA_INTERNAL origin=abort_transition_graph ]
Apr 14 20:35:04 mgraid-S000030311-1 crmd: [16635]: info: do_state_transition: State transition S_IDLE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=abort_transition_graph ]
Apr 14 20:35:04 mgraid-S000030311-1 crmd: [16635]: info: do_state_transition: All 2 cluster nodes are eligible to run resources.
Apr 14 20:35:04 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
Apr 14 20:35:04 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_INTEGRATE_TIMER_STOP
Apr 14 20:35:04 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_FINALIZE_TIMER_STOP
Apr 14 20:35:04 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_PE_INVOKE
Apr 14 20:35:04 mgraid-S000030311-1 crmd: [16635]: info: do_pe_invoke: Query 29: Requesting the current CIB: S_POLICY_ENGINE
Apr 14 20:35:04 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update 8 for probe_complete=true passed
Apr 14 20:35:04 mgraid-S000030311-1 crmd: [16635]: info: do_pe_invoke_callback: Invoking the PE: query=29, ref=pe_calc-dc-1302838504-13, seq=2, quorate=1
Apr 14 20:35:04 mgraid-S000030311-1 pengine: [16986]: debug: cluster_option: Using default value 'stop' for cluster option 'no-quorum-policy'
Apr 14 20:35:04 mgraid-S000030311-1 pengine: [16986]: debug: cluster_option: Using default value 'true' for cluster option 'symmetric-cluster'
Apr 14 20:35:04 mgraid-S000030311-1 pengine: [16986]: debug: cluster_option: Using default value '0' for cluster option 'default-resource-stickiness'
Apr 14 20:35:04 mgraid-S000030311-1 pengine: [16986]: debug: cluster_option: Using default value 'true' for cluster option 'is-managed-default'
Apr 14 20:35:04 mgraid-S000030311-1 pengine: [16986]: debug: cluster_option: Using default value 'false' for cluster option 'maintenance-mode'
Apr 14 20:35:04 mgraid-S000030311-1 pengine: [16986]: debug: cluster_option: Using default value 'true' for cluster option 'start-failure-is-fatal'
Apr 14 20:35:04 mgraid-S000030311-1 pengine: [16986]: debug: cluster_option: Using default value 'true' for cluster option 'stonith-enabled'
Apr 14 20:35:04 mgraid-S000030311-1 pengine: [16986]: debug: cluster_option: Using default value 'reboot' for cluster option 'stonith-action'
Apr 14 20:35:04 mgraid-S000030311-1 pengine: [16986]: debug: cluster_option: Using default value '60s' for cluster option 'stonith-timeout'
Apr 14 20:35:04 mgraid-S000030311-1 pengine: [16986]: debug: cluster_option: Using default value 'true' for cluster option 'startup-fencing'
Apr 14 20:35:04 mgraid-S000030311-1 pengine: [16986]: debug: cluster_option: Using default value '60s' for cluster option 'cluster-delay'
Apr 14 20:35:04 mgraid-S000030311-1 pengine: [16986]: debug: cluster_option: Using default value '30' for cluster option 'batch-limit'
Apr 14 20:35:04 mgraid-S000030311-1 pengine: [16986]: debug: cluster_option: Using default value '20s' for cluster option 'default-action-timeout'
Apr 14 20:35:04 mgraid-S000030311-1 pengine: [16986]: debug: cluster_option: Using default value 'false' for cluster option 'stop-all-resources'
Apr 14 20:35:04 mgraid-S000030311-1 pengine: [16986]: debug: cluster_option: Using default value 'true' for cluster option 'stop-orphan-resources'
Apr 14 20:35:04 mgraid-S000030311-1 pengine: [16986]: debug: cluster_option: Using default value 'true' for cluster option 'stop-orphan-actions'
Apr 14 20:35:04 mgraid-S000030311-1 pengine: [16986]: debug: cluster_option: Using default value 'false' for cluster option 'remove-after-stop'
Apr 14 20:35:04 mgraid-S000030311-1 pengine: [16986]: debug: cluster_option: Using default value '-1' for cluster option 'pe-error-series-max'
Apr 14 20:35:04 mgraid-S000030311-1 pengine: [16986]: debug: cluster_option: Using default value '-1' for cluster option 'pe-warn-series-max'
Apr 14 20:35:04 mgraid-S000030311-1 pengine: [16986]: debug: cluster_option: Using default value '-1' for cluster option 'pe-input-series-max'
Apr 14 20:35:04 mgraid-S000030311-1 pengine: [16986]: debug: cluster_option: Using default value 'none' for cluster option 'node-health-strategy'
Apr 14 20:35:04 mgraid-S000030311-1 pengine: [16986]: debug: cluster_option: Using default value '0' for cluster option 'node-health-green'
Apr 14 20:35:04 mgraid-S000030311-1 pengine: [16986]: debug: cluster_option: Using default value '0' for cluster option 'node-health-yellow'
Apr 14 20:35:04 mgraid-S000030311-1 pengine: [16986]: debug: cluster_option: Using default value '-INFINITY' for cluster option 'node-health-red'
Apr 14 20:35:04 mgraid-S000030311-1 pengine: [16986]: debug: unpack_config: STONITH timeout: 60000
Apr 14 20:35:04 mgraid-S000030311-1 pengine: [16986]: debug: unpack_config: STONITH of failed nodes is enabled
Apr 14 20:35:04 mgraid-S000030311-1 pengine: [16986]: debug: unpack_config: Stop all active resources: false
Apr 14 20:35:04 mgraid-S000030311-1 pengine: [16986]: debug: unpack_config: Cluster is symmetric - resources can run anywhere by default
Apr 14 20:35:04 mgraid-S000030311-1 pengine: [16986]: debug: unpack_config: Default stickiness: 0
Apr 14 20:35:04 mgraid-S000030311-1 pengine: [16986]: debug: unpack_config: On loss of CCM Quorum: Stop ALL resources
Apr 14 20:35:04 mgraid-S000030311-1 pengine: [16986]: info: unpack_config: Node scores: 'red' = -INFINITY, 'yellow' = 0, 'green' = 0
Apr 14 20:35:04 mgraid-S000030311-1 pengine: [16986]: ERROR: unpack_resources: Resource start-up disabled since no STONITH resources have been defined
Apr 14 20:35:04 mgraid-S000030311-1 pengine: [16986]: ERROR: unpack_resources: Either configure some or disable STONITH with the stonith-enabled option
Apr 14 20:35:04 mgraid-S000030311-1 pengine: [16986]: ERROR: unpack_resources: NOTE: Clusters with shared data need STONITH to ensure data integrity
Apr 14 20:35:04 mgraid-S000030311-1 pengine: [16986]: info: determine_online_status: Node mgraid-s000030311-0 is online
Apr 14 20:35:04 mgraid-S000030311-1 pengine: [16986]: info: determine_online_status: Node mgraid-s000030311-1 is online
Apr 14 20:35:04 mgraid-S000030311-1 pengine: [16986]: info: stage6: Delaying fencing operations until there are resources to manage
Apr 14 20:35:04 mgraid-S000030311-1 crmd: [16635]: debug: s_crmd_fsa: Processing I_PE_SUCCESS: [ state=S_POLICY_ENGINE cause=C_IPC_MESSAGE origin=handle_response ]
Apr 14 20:35:04 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_LOG   
Apr 14 20:35:04 mgraid-S000030311-1 crmd: [16635]: info: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
Apr 14 20:35:04 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
Apr 14 20:35:04 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_INTEGRATE_TIMER_STOP
Apr 14 20:35:04 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_FINALIZE_TIMER_STOP
Apr 14 20:35:04 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_TE_INVOKE
Apr 14 20:35:04 mgraid-S000030311-1 crmd: [16635]: info: unpack_graph: Unpacked transition 1: 1 actions in 1 synapses
Apr 14 20:35:04 mgraid-S000030311-1 crmd: [16635]: info: do_te_invoke: Processing graph 1 (ref=pe_calc-dc-1302838504-13) derived from /var/lib/pengine/pe-input-4.bz2
Apr 14 20:35:04 mgraid-S000030311-1 crmd: [16635]: info: te_rsc_command: Initiating action 2: probe_complete probe_complete on mgraid-s000030311-0 - no waiting
Apr 14 20:35:04 mgraid-S000030311-1 pengine: [16986]: info: process_pe_message: Transition 1: PEngine Input stored in: /var/lib/pengine/pe-input-4.bz2
Apr 14 20:35:04 mgraid-S000030311-1 pengine: [16986]: info: process_pe_message: Configuration ERRORs found during PE processing.  Please run "crm_verify -L" to identify issues.
Apr 14 20:35:04 mgraid-S000030311-1 crmd: [16635]: debug: run_graph: Transition 1 (Complete=0, Pending=0, Fired=1, Skipped=0, Incomplete=0, Source=/var/lib/pengine/pe-input-4.bz2): In-progress
Apr 14 20:35:04 mgraid-S000030311-1 crmd: [16635]: info: run_graph: ====================================================
Apr 14 20:35:04 mgraid-S000030311-1 crmd: [16635]: notice: run_graph: Transition 1 (Complete=1, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pengine/pe-input-4.bz2): Complete
Apr 14 20:35:04 mgraid-S000030311-1 crmd: [16635]: info: te_graph_trigger: Transition 1 is now complete
Apr 14 20:35:04 mgraid-S000030311-1 crmd: [16635]: debug: notify_crmd: Processing transition completion in state S_TRANSITION_ENGINE
Apr 14 20:35:04 mgraid-S000030311-1 crmd: [16635]: info: notify_crmd: Transition 1 status: done - <null>
Apr 14 20:35:04 mgraid-S000030311-1 crmd: [16635]: debug: s_crmd_fsa: Processing I_TE_SUCCESS: [ state=S_TRANSITION_ENGINE cause=C_FSA_INTERNAL origin=notify_crmd ]
Apr 14 20:35:04 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_LOG   
Apr 14 20:35:04 mgraid-S000030311-1 crmd: [16635]: info: do_state_transition: State transition S_TRANSITION_ENGINE -> S_IDLE [ input=I_TE_SUCCESS cause=C_FSA_INTERNAL origin=notify_crmd ]
Apr 14 20:35:04 mgraid-S000030311-1 crmd: [16635]: info: do_state_transition: Starting PEngine Recheck Timer
Apr 14 20:35:04 mgraid-S000030311-1 crmd: [16635]: debug: crm_timer_start: Started PEngine Recheck Timer (I_PE_CALC:900000ms), src=38
Apr 14 20:35:04 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
Apr 14 20:35:04 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_INTEGRATE_TIMER_STOP
Apr 14 20:35:04 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_FINALIZE_TIMER_STOP
Apr 14 20:35:04 mgraid-S000030311-1 cib: [16631]: debug: activateCibXml: Triggering CIB write for cib_replace op
Apr 14 20:35:04 mgraid-S000030311-1 cib: [16631]: info: cib_replace_notify: Replaced: 0.6.2 -> 0.8.1 from <null>
Apr 14 20:35:04 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: - <cib epoch="6" num_updates="2" admin_epoch="0" />
Apr 14 20:35:04 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: + <cib epoch="8" num_updates="1" admin_epoch="0" >
Apr 14 20:35:04 mgraid-S000030311-1 crmd: [16635]: debug: te_update_diff: Processing diff (cib_replace): 0.6.2 -> 0.8.1 (S_IDLE)
Apr 14 20:35:04 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +   <configuration >
Apr 14 20:35:04 mgraid-S000030311-1 crmd: [16635]: info: abort_transition_graph: need_abort:59 - Triggered transition abort (complete=1) : Non-status change
Apr 14 20:35:04 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +     <crm_config >
Apr 14 20:35:04 mgraid-S000030311-1 crmd: [16635]: info: need_abort: Aborting on change to epoch
Apr 14 20:35:04 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +       <cluster_property_set id="cib-bootstrap-options" >
Apr 14 20:35:04 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +         <nvpair id="cib-bootstrap-options-dc-deadtime" name="dc-deadtime" value="5s" __crm_diff_marker__="added:top" />
Apr 14 20:35:04 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +       </cluster_property_set>
Apr 14 20:35:04 mgraid-S000030311-1 attrd: [16634]: info: do_cib_replaced: Sending full refresh
Apr 14 20:35:04 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +     </crm_config>
Apr 14 20:35:04 mgraid-S000030311-1 attrd: [16634]: info: attrd_trigger_update: Sending flush op to all hosts for: shutdown (<null>)
Apr 14 20:35:04 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +   </configuration>
Apr 14 20:35:04 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: + </cib>
Apr 14 20:35:04 mgraid-S000030311-1 crmd: [16635]: debug: do_cib_replaced: Updating the CIB after a replace: DC=true
Apr 14 20:35:04 mgraid-S000030311-1 crmd: [16635]: info: populate_cib_nodes_ha: Requesting the list of configured nodes
Apr 14 20:35:04 mgraid-S000030311-1 cib: [16631]: info: cib_process_request: Operation complete: op cib_replace for section 'all' (origin=local/crm_shadow/2, version=0.8.1): ok (rc=0)
Apr 14 20:35:04 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='shutdown'] does not exist
Apr 14 20:35:04 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for shutdown=(null) passed
Apr 14 20:35:04 mgraid-S000030311-1 cib: [16631]: debug: Forking temp process write_cib_contents
Apr 14 20:35:04 mgraid-S000030311-1 attrd: [16634]: info: attrd_trigger_update: Sending flush op to all hosts for: terminate (<null>)
Apr 14 20:35:04 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='terminate'] does not exist
Apr 14 20:35:04 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for terminate=(null) passed
Apr 14 20:35:04 mgraid-S000030311-1 attrd: [16634]: info: attrd_trigger_update: Sending flush op to all hosts for: probe_complete (true)
Apr 14 20:35:04 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='probe_complete'] (/cib/status/node_state[2]/transient_attributes/instance_attributes/nvpair)
Apr 14 20:35:04 mgraid-S000030311-1 cib: [17053]: info: write_cib_contents: Archived previous version as /var/lib/heartbeat/crm/cib-4.raw
Apr 14 20:35:04 mgraid-S000030311-1 attrd: [16634]: debug: log_data_element: find_nvpair_attr: Match <nvpair id="status-856c1f72-7cd1-4906-8183-8be87eef96f2-probe_complete" name="probe_complete" value="true" />
Apr 14 20:35:04 mgraid-S000030311-1 cib: [17053]: info: write_cib_contents: Wrote version 0.8.0 of the CIB to disk (digest: 5a8529141c29616ed909066d0d83c6d4)
Apr 14 20:35:04 mgraid-S000030311-1 cib: [17053]: info: retrieveCib: Reading cluster configuration from: /var/lib/heartbeat/crm/cib.nO6dx6 (digest: /var/lib/heartbeat/crm/cib.qXOLuz)
Apr 14 20:35:04 mgraid-S000030311-1 cib: [16631]: info: Managed write_cib_contents process 17053 exited with return code 0.
Apr 14 20:35:04 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update 12 for probe_complete=true passed
Apr 14 20:35:04 mgraid-S000030311-1 attrd: [16634]: info: attrd_ha_callback: flush message from mgraid-s000030311-1
Apr 14 20:35:04 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='shutdown'] does not exist
Apr 14 20:35:04 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for shutdown=(null) passed
Apr 14 20:35:04 mgraid-S000030311-1 attrd: [16634]: info: attrd_ha_callback: flush message from mgraid-s000030311-1
Apr 14 20:35:04 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='terminate'] does not exist
Apr 14 20:35:04 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for terminate=(null) passed
Apr 14 20:35:04 mgraid-S000030311-1 crmd: [16635]: debug: populate_cib_nodes_ha: Node: mgraid-s000030311-1 (uuid: 856c1f72-7cd1-4906-8183-8be87eef96f2)
Apr 14 20:35:04 mgraid-S000030311-1 attrd: [16634]: info: attrd_ha_callback: flush message from mgraid-s000030311-1
Apr 14 20:35:04 mgraid-S000030311-1 crmd: [16635]: debug: populate_cib_nodes_ha: Node: mgraid-s000030311-0 (uuid: f4e5e15c-d06b-4e37-89b9-4621af05128f)
Apr 14 20:35:04 mgraid-S000030311-1 crmd: [16635]: debug: ghash_update_cib_node: Updating mgraid-s000030311-0: true (overwrite=true) hash_size=2
Apr 14 20:35:04 mgraid-S000030311-1 crmd: [16635]: debug: ghash_update_cib_node: Updating mgraid-s000030311-1: true (overwrite=true) hash_size=2
Apr 14 20:35:04 mgraid-S000030311-1 crmd: [16635]: debug: s_crmd_fsa: Processing I_PE_CALC: [ state=S_IDLE cause=C_FSA_INTERNAL origin=abort_transition_graph ]
Apr 14 20:35:04 mgraid-S000030311-1 crmd: [16635]: info: do_state_transition: State transition S_IDLE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=abort_transition_graph ]
Apr 14 20:35:04 mgraid-S000030311-1 crmd: [16635]: info: do_state_transition: All 2 cluster nodes are eligible to run resources.
Apr 14 20:35:04 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
Apr 14 20:35:04 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_INTEGRATE_TIMER_STOP
Apr 14 20:35:04 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_FINALIZE_TIMER_STOP
Apr 14 20:35:04 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_PE_INVOKE
Apr 14 20:35:04 mgraid-S000030311-1 crmd: [16635]: info: do_pe_invoke: Query 32: Requesting the current CIB: S_POLICY_ENGINE
Apr 14 20:35:04 mgraid-S000030311-1 crmd: [16635]: debug: s_crmd_fsa: Processing I_ELECTION: [ state=S_POLICY_ENGINE cause=C_FSA_INTERNAL origin=do_cib_replaced ]
Apr 14 20:35:04 mgraid-S000030311-1 crmd: [16635]: info: do_state_transition: State transition S_POLICY_ENGINE -> S_ELECTION [ input=I_ELECTION cause=C_FSA_INTERNAL origin=do_cib_replaced ]
Apr 14 20:35:04 mgraid-S000030311-1 crmd: [16635]: info: update_dc: Unset DC mgraid-s000030311-1
Apr 14 20:35:04 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
Apr 14 20:35:04 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_INTEGRATE_TIMER_STOP
Apr 14 20:35:04 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_FINALIZE_TIMER_STOP
Apr 14 20:35:04 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_VOTE
Apr 14 20:35:04 mgraid-S000030311-1 crmd: [16635]: debug: do_election_vote: Started election 4
Apr 14 20:35:04 mgraid-S000030311-1 crmd: [16635]: debug: crm_timer_start: Started Election Timeout (I_ELECTION_DC:120000ms), src=42
Apr 14 20:35:04 mgraid-S000030311-1 cib: [16631]: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/30, version=0.8.1): ok (rc=0)
Apr 14 20:35:04 mgraid-S000030311-1 crmd: [16635]: debug: do_pe_invoke_callback: Discarding PE request in state: S_ELECTION
Apr 14 20:35:04 mgraid-S000030311-1 crmd: [16635]: debug: config_query_callback: Call 33 : Parsing CIB options
Apr 14 20:35:04 mgraid-S000030311-1 crmd: [16635]: debug: cluster_option: Using default value '15min' for cluster option 'cluster-recheck-interval'
Apr 14 20:35:04 mgraid-S000030311-1 crmd: [16635]: debug: cluster_option: Using default value '2min' for cluster option 'election-timeout'
Apr 14 20:35:04 mgraid-S000030311-1 crmd: [16635]: debug: cluster_option: Using default value '20min' for cluster option 'shutdown-escalation'
Apr 14 20:35:04 mgraid-S000030311-1 crmd: [16635]: debug: cluster_option: Using default value '3min' for cluster option 'crmd-integration-timeout'
Apr 14 20:35:04 mgraid-S000030311-1 crmd: [16635]: debug: cluster_option: Using default value '30min' for cluster option 'crmd-finalization-timeout'
Apr 14 20:35:04 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='shutdown'] does not exist
Apr 14 20:35:04 mgraid-S000030311-1 crmd: [16635]: debug: cluster_option: Using default value '2' for cluster option 'expected-quorum-votes'
Apr 14 20:35:04 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for shutdown=(null) passed
Apr 14 20:35:04 mgraid-S000030311-1 crmd: [16635]: info: config_query_callback: Checking for expired actions every 900000ms
Apr 14 20:35:04 mgraid-S000030311-1 attrd: [16634]: info: attrd_ha_callback: flush message from mgraid-s000030311-1
Apr 14 20:35:04 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='terminate'] does not exist
Apr 14 20:35:04 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for terminate=(null) passed
Apr 14 20:35:04 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_COUNT
Apr 14 20:35:04 mgraid-S000030311-1 crmd: [16635]: debug: do_election_count_vote: Created voted hash
Apr 14 20:35:04 mgraid-S000030311-1 crmd: [16635]: debug: do_election_count_vote: Election 4 (current: 4, owner: 856c1f72-7cd1-4906-8183-8be87eef96f2): Processed vote from mgraid-s000030311-1 (Recorded)
Apr 14 20:35:04 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_CHECK
Apr 14 20:35:04 mgraid-S000030311-1 crmd: [16635]: debug: do_election_check: Still waiting on 1 non-votes (2 total)
Apr 14 20:35:04 mgraid-S000030311-1 lrmd: [16632]: debug: on_msg_register:client lrmadmin [17083] registered
Apr 14 20:35:04 mgraid-S000030311-1 lrmd: [16632]: debug: on_receive_cmd: the IPC to client [pid:17083] disconnected.
Apr 14 20:35:04 mgraid-S000030311-1 lrmd: [16632]: debug: unregister_client: client lrmadmin [pid:17083] is unregistered
Apr 14 20:35:04 mgraid-S000030311-1 lrmd: [16632]: debug: on_msg_register:client lrmadmin [17084] registered
Apr 14 20:35:04 mgraid-S000030311-1 lrmd: [16632]: debug: stonithRA plugin: provider attribute is not needed and will be ignored.
Apr 14 20:35:04 mgraid-S000030311-1 lrmd: [16632]: debug: on_receive_cmd: the IPC to client [pid:17084] disconnected.
Apr 14 20:35:04 mgraid-S000030311-1 lrmd: [16632]: debug: unregister_client: client lrmadmin [pid:17084] is unregistered
Apr 14 20:35:04 mgraid-S000030311-1 cib: [16631]: debug: activateCibXml: Triggering CIB write for cib_replace op
Apr 14 20:35:04 mgraid-S000030311-1 cib: [16631]: info: cib_replace_notify: Local-only Replace: 0.10.1 from <null>
Apr 14 20:35:04 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: - <cib epoch="8" admin_epoch="0" num_updates="1" />
Apr 14 20:35:04 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: + <cib epoch="10" admin_epoch="0" num_updates="1" >
Apr 14 20:35:04 mgraid-S000030311-1 attrd: [16634]: info: do_cib_replaced: Sending full refresh
Apr 14 20:35:04 mgraid-S000030311-1 crmd: [16635]: debug: do_cib_replaced: Updating the CIB after a replace: DC=true
Apr 14 20:35:04 mgraid-S000030311-1 attrd: [16634]: info: attrd_trigger_update: Sending flush op to all hosts for: shutdown (<null>)
Apr 14 20:35:04 mgraid-S000030311-1 crmd: [16635]: info: populate_cib_nodes_ha: Requesting the list of configured nodes
Apr 14 20:35:04 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +   <configuration >
Apr 14 20:35:04 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +     <resources >
Apr 14 20:35:04 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +       <primitive class="stonith" id="mgraid-stonith" type="external/mgpstonith" __crm_diff_marker__="added:top" >
Apr 14 20:35:04 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +         <instance_attributes id="mgraid-stonith-instance_attributes" >
Apr 14 20:35:04 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +           <nvpair id="mgraid-stonith-instance_attributes-hostlist" name="hostlist" value="mgraid-canister" />
Apr 14 20:35:04 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +         </instance_attributes>
Apr 14 20:35:04 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +         <operations >
Apr 14 20:35:04 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +           <op id="mgraid-stonith-monitor-0" interval="0" name="monitor" timeout="20s" />
Apr 14 20:35:04 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +         </operations>
Apr 14 20:35:04 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +       </primitive>
Apr 14 20:35:04 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +     </resources>
Apr 14 20:35:04 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +     <rsc_defaults __crm_diff_marker__="added:top" />
Apr 14 20:35:04 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +     <op_defaults __crm_diff_marker__="added:top" />
Apr 14 20:35:04 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +   </configuration>
Apr 14 20:35:04 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: + </cib>
Apr 14 20:35:04 mgraid-S000030311-1 cib: [16631]: info: cib_process_request: Operation complete: op cib_replace for section 'all' (origin=local/crm_shadow/2, version=0.10.1): ok (rc=0)
Apr 14 20:35:04 mgraid-S000030311-1 cib: [16631]: debug: xmlfromIPC: Peer disconnected
Apr 14 20:35:04 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='shutdown'] does not exist
Apr 14 20:35:04 mgraid-S000030311-1 cib: [16631]: debug: Forking temp process write_cib_contents
Apr 14 20:35:04 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for shutdown=(null) passed
Apr 14 20:35:04 mgraid-S000030311-1 attrd: [16634]: info: attrd_trigger_update: Sending flush op to all hosts for: terminate (<null>)
Apr 14 20:35:04 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='terminate'] does not exist
Apr 14 20:35:04 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for terminate=(null) passed
Apr 14 20:35:04 mgraid-S000030311-1 attrd: [16634]: info: attrd_trigger_update: Sending flush op to all hosts for: probe_complete (true)
Apr 14 20:35:04 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='probe_complete'] (/cib/status/node_state[2]/transient_attributes/instance_attributes/nvpair)
Apr 14 20:35:04 mgraid-S000030311-1 cib: [17100]: info: write_cib_contents: Archived previous version as /var/lib/heartbeat/crm/cib-5.raw
Apr 14 20:35:04 mgraid-S000030311-1 attrd: [16634]: debug: log_data_element: find_nvpair_attr: Match <nvpair id="status-856c1f72-7cd1-4906-8183-8be87eef96f2-probe_complete" name="probe_complete" value="true" />
Apr 14 20:35:04 mgraid-S000030311-1 cib: [17100]: info: write_cib_contents: Wrote version 0.10.0 of the CIB to disk (digest: 34e213edd713de2ee9d4dce327a3b5a8)
Apr 14 20:35:04 mgraid-S000030311-1 cib: [17100]: info: retrieveCib: Reading cluster configuration from: /var/lib/heartbeat/crm/cib.G0ZGht (digest: /var/lib/heartbeat/crm/cib.Q3z32W)
Apr 14 20:35:04 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update 20 for probe_complete=true passed
Apr 14 20:35:04 mgraid-S000030311-1 cib: [16631]: info: Managed write_cib_contents process 17100 exited with return code 0.
Apr 14 20:35:04 mgraid-S000030311-1 cib: [16631]: debug: activateCibXml: Triggering CIB write for cib_replace op
Apr 14 20:35:04 mgraid-S000030311-1 cib: [16631]: info: cib_replace_notify: Local-only Replace: 0.13.1 from <null>
Apr 14 20:35:04 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: - <cib epoch="10" admin_epoch="0" num_updates="1" >
Apr 14 20:35:04 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: -   <configuration >
Apr 14 20:35:04 mgraid-S000030311-1 attrd: [16634]: info: do_cib_replaced: Sending full refresh
Apr 14 20:35:04 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: -     <resources >
Apr 14 20:35:04 mgraid-S000030311-1 attrd: [16634]: info: attrd_trigger_update: Sending flush op to all hosts for: shutdown (<null>)
Apr 14 20:35:04 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: -       <primitive class="stonith" id="mgraid-stonith" type="external/mgpstonith" __crm_diff_marker__="removed:top" >
Apr 14 20:35:04 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: -         <instance_attributes id="mgraid-stonith-instance_attributes" >
Apr 14 20:35:04 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: -           <nvpair id="mgraid-stonith-instance_attributes-hostlist" name="hostlist" value="mgraid-canister" />
Apr 14 20:35:04 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: -         </instance_attributes>
Apr 14 20:35:04 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: -         <operations >
Apr 14 20:35:04 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: -           <op id="mgraid-stonith-monitor-0" interval="0" name="monitor" timeout="20s" />
Apr 14 20:35:04 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: -         </operations>
Apr 14 20:35:04 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: -       </primitive>
Apr 14 20:35:04 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: -     </resources>
Apr 14 20:35:04 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: -   </configuration>
Apr 14 20:35:04 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: - </cib>
Apr 14 20:35:04 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: + <cib epoch="13" admin_epoch="0" num_updates="1" >
Apr 14 20:35:04 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +   <configuration >
Apr 14 20:35:04 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +     <resources >
Apr 14 20:35:04 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +       <clone id="Fencing" __crm_diff_marker__="added:top" >
Apr 14 20:35:04 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +         <primitive class="stonith" id="mgraid-stonith" type="external/mgpstonith" >
Apr 14 20:35:04 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +           <instance_attributes id="mgraid-stonith-instance_attributes" >
Apr 14 20:35:04 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +             <nvpair id="mgraid-stonith-instance_attributes-hostlist" name="hostlist" value="mgraid-canister" />
Apr 14 20:35:04 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +           </instance_attributes>
Apr 14 20:35:04 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +           <operations >
Apr 14 20:35:04 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +             <op id="mgraid-stonith-monitor-0" interval="0" name="monitor" timeout="20s" />
Apr 14 20:35:04 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +           </operations>
Apr 14 20:35:04 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +         </primitive>
Apr 14 20:35:04 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +       </clone>
Apr 14 20:35:04 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +     </resources>
Apr 14 20:35:04 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +   </configuration>
Apr 14 20:35:04 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: + </cib>
Apr 14 20:35:04 mgraid-S000030311-1 cib: [16631]: info: cib_process_request: Operation complete: op cib_replace for section 'all' (origin=local/crm_shadow/2, version=0.13.1): ok (rc=0)
Apr 14 20:35:04 mgraid-S000030311-1 cib: [16631]: debug: xmlfromIPC: Peer disconnected
Apr 14 20:35:04 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='shutdown'] does not exist
Apr 14 20:35:04 mgraid-S000030311-1 cib: [16631]: debug: Forking temp process write_cib_contents
Apr 14 20:35:04 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for shutdown=(null) passed
Apr 14 20:35:04 mgraid-S000030311-1 attrd: [16634]: info: attrd_trigger_update: Sending flush op to all hosts for: terminate (<null>)
Apr 14 20:35:04 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='terminate'] does not exist
Apr 14 20:35:04 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for terminate=(null) passed
Apr 14 20:35:04 mgraid-S000030311-1 attrd: [16634]: info: attrd_trigger_update: Sending flush op to all hosts for: probe_complete (true)
Apr 14 20:35:04 mgraid-S000030311-1 cib: [17130]: info: write_cib_contents: Archived previous version as /var/lib/heartbeat/crm/cib-6.raw
Apr 14 20:35:04 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='probe_complete'] (/cib/status/node_state[2]/transient_attributes/instance_attributes/nvpair)
Apr 14 20:35:04 mgraid-S000030311-1 attrd: [16634]: debug: log_data_element: find_nvpair_attr: Match <nvpair id="status-856c1f72-7cd1-4906-8183-8be87eef96f2-probe_complete" name="probe_complete" value="true" />
Apr 14 20:35:04 mgraid-S000030311-1 cib: [17130]: info: write_cib_contents: Wrote version 0.13.0 of the CIB to disk (digest: f279adc7903d16c3b76ca2cd0ed34ed9)
Apr 14 20:35:04 mgraid-S000030311-1 cib: [17130]: info: retrieveCib: Reading cluster configuration from: /var/lib/heartbeat/crm/cib.OH4t9J (digest: /var/lib/heartbeat/crm/cib.iQD2ue)
Apr 14 20:35:04 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update 24 for probe_complete=true passed
Apr 14 20:35:04 mgraid-S000030311-1 cib: [16631]: info: Managed write_cib_contents process 17130 exited with return code 0.
Apr 14 20:35:04 mgraid-S000030311-1 attrd: [16634]: info: attrd_ha_callback: flush message from mgraid-s000030311-1
Apr 14 20:35:04 mgraid-S000030311-1 crmd: [16635]: debug: populate_cib_nodes_ha: Node: mgraid-s000030311-1 (uuid: 856c1f72-7cd1-4906-8183-8be87eef96f2)
Apr 14 20:35:04 mgraid-S000030311-1 crmd: [16635]: debug: populate_cib_nodes_ha: Node: mgraid-s000030311-0 (uuid: f4e5e15c-d06b-4e37-89b9-4621af05128f)
Apr 14 20:35:04 mgraid-S000030311-1 crmd: [16635]: debug: ghash_update_cib_node: Updating mgraid-s000030311-0: true (overwrite=true) hash_size=2
Apr 14 20:35:04 mgraid-S000030311-1 crmd: [16635]: debug: ghash_update_cib_node: Updating mgraid-s000030311-1: true (overwrite=true) hash_size=2
Apr 14 20:35:04 mgraid-S000030311-1 crmd: [16635]: debug: do_cib_replaced: Updating the CIB after a replace: DC=true
Apr 14 20:35:04 mgraid-S000030311-1 crmd: [16635]: info: populate_cib_nodes_ha: Requesting the list of configured nodes
Apr 14 20:35:04 mgraid-S000030311-1 cib: [16631]: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/34, version=0.13.1): ok (rc=0)
Apr 14 20:35:04 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='shutdown'] does not exist
Apr 14 20:35:04 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for shutdown=(null) passed
Apr 14 20:35:04 mgraid-S000030311-1 attrd: [16634]: info: attrd_ha_callback: flush message from mgraid-s000030311-1
Apr 14 20:35:04 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='terminate'] does not exist
Apr 14 20:35:04 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for terminate=(null) passed
Apr 14 20:35:04 mgraid-S000030311-1 attrd: [16634]: info: attrd_ha_callback: flush message from mgraid-s000030311-1
Apr 14 20:35:04 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='shutdown'] does not exist
Apr 14 20:35:04 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for shutdown=(null) passed
Apr 14 20:35:04 mgraid-S000030311-1 attrd: [16634]: info: attrd_ha_callback: flush message from mgraid-s000030311-1
Apr 14 20:35:04 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='terminate'] does not exist
Apr 14 20:35:04 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for terminate=(null) passed
Apr 14 20:35:05 mgraid-S000030311-1 cib: [16631]: debug: activateCibXml: Triggering CIB write for cib_replace op
Apr 14 20:35:05 mgraid-S000030311-1 cib: [16631]: info: cib_replace_notify: Local-only Replace: 0.15.1 from <null>
Apr 14 20:35:05 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: - <cib epoch="13" admin_epoch="0" num_updates="1" />
Apr 14 20:35:05 mgraid-S000030311-1 attrd: [16634]: info: do_cib_replaced: Sending full refresh
Apr 14 20:35:05 mgraid-S000030311-1 attrd: [16634]: info: attrd_trigger_update: Sending flush op to all hosts for: shutdown (<null>)
Apr 14 20:35:05 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: + <cib epoch="15" admin_epoch="0" num_updates="1" >
Apr 14 20:35:05 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +   <configuration >
Apr 14 20:35:05 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +     <crm_config >
Apr 14 20:35:05 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +       <cluster_property_set id="cib-bootstrap-options" >
Apr 14 20:35:05 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +         <nvpair id="cib-bootstrap-options-stonith-enabled" name="stonith-enabled" value="true" __crm_diff_marker__="added:top" />
Apr 14 20:35:05 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +       </cluster_property_set>
Apr 14 20:35:05 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +     </crm_config>
Apr 14 20:35:05 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +   </configuration>
Apr 14 20:35:05 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: + </cib>
Apr 14 20:35:05 mgraid-S000030311-1 cib: [16631]: info: cib_process_request: Operation complete: op cib_replace for section 'all' (origin=local/crm_shadow/2, version=0.15.1): ok (rc=0)
Apr 14 20:35:05 mgraid-S000030311-1 cib: [16631]: debug: xmlfromIPC: Peer disconnected
Apr 14 20:35:05 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='shutdown'] does not exist
Apr 14 20:35:05 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for shutdown=(null) passed
Apr 14 20:35:05 mgraid-S000030311-1 attrd: [16634]: info: attrd_trigger_update: Sending flush op to all hosts for: terminate (<null>)
Apr 14 20:35:05 mgraid-S000030311-1 cib: [16631]: debug: Forking temp process write_cib_contents
Apr 14 20:35:05 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='terminate'] does not exist
Apr 14 20:35:05 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for terminate=(null) passed
Apr 14 20:35:05 mgraid-S000030311-1 attrd: [16634]: info: attrd_trigger_update: Sending flush op to all hosts for: probe_complete (true)
Apr 14 20:35:05 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='probe_complete'] (/cib/status/node_state[2]/transient_attributes/instance_attributes/nvpair)
Apr 14 20:35:05 mgraid-S000030311-1 cib: [17165]: info: write_cib_contents: Archived previous version as /var/lib/heartbeat/crm/cib-7.raw
Apr 14 20:35:05 mgraid-S000030311-1 attrd: [16634]: debug: log_data_element: find_nvpair_attr: Match <nvpair id="status-856c1f72-7cd1-4906-8183-8be87eef96f2-probe_complete" name="probe_complete" value="true" />
Apr 14 20:35:05 mgraid-S000030311-1 cib: [17165]: info: write_cib_contents: Wrote version 0.15.0 of the CIB to disk (digest: d94840b38f48909724a2b87aaccc4c57)
Apr 14 20:35:05 mgraid-S000030311-1 cib: [17165]: info: retrieveCib: Reading cluster configuration from: /var/lib/heartbeat/crm/cib.uaoQDQ (digest: /var/lib/heartbeat/crm/cib.j3wdzl)
Apr 14 20:35:05 mgraid-S000030311-1 cib: [16631]: info: Managed write_cib_contents process 17165 exited with return code 0.
Apr 14 20:35:05 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update 32 for probe_complete=true passed
Apr 14 20:35:05 mgraid-S000030311-1 lrmd: [16632]: debug: on_msg_register:client lrmadmin [17194] registered
Apr 14 20:35:05 mgraid-S000030311-1 lrmd: [16632]: debug: on_receive_cmd: the IPC to client [pid:17194] disconnected.
Apr 14 20:35:05 mgraid-S000030311-1 lrmd: [16632]: debug: unregister_client: client lrmadmin [pid:17194] is unregistered
Apr 14 20:35:05 mgraid-S000030311-1 lrmd: [16632]: debug: on_msg_register:client lrmadmin [17195] registered
Apr 14 20:35:05 mgraid-S000030311-1 lrmd: [16632]: debug: on_receive_cmd: the IPC to client [pid:17195] disconnected.
Apr 14 20:35:05 mgraid-S000030311-1 lrmd: [16632]: debug: unregister_client: client lrmadmin [pid:17195] is unregistered
Apr 14 20:35:05 mgraid-S000030311-1 cib: [16631]: debug: activateCibXml: Triggering CIB write for cib_replace op
Apr 14 20:35:05 mgraid-S000030311-1 cib: [16631]: info: cib_replace_notify: Local-only Replace: 0.17.1 from <null>
Apr 14 20:35:05 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: - <cib epoch="15" admin_epoch="0" num_updates="1" />
Apr 14 20:35:05 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: + <cib epoch="17" admin_epoch="0" num_updates="1" >
Apr 14 20:35:05 mgraid-S000030311-1 attrd: [16634]: info: do_cib_replaced: Sending full refresh
Apr 14 20:35:05 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +   <configuration >
Apr 14 20:35:05 mgraid-S000030311-1 attrd: [16634]: info: attrd_trigger_update: Sending flush op to all hosts for: shutdown (<null>)
Apr 14 20:35:05 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +     <resources >
Apr 14 20:35:05 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +       <primitive class="lsb" id="icms" type="S53icms" __crm_diff_marker__="added:top" >
Apr 14 20:35:05 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +         <operations >
Apr 14 20:35:05 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +           <op id="icms-monitor-5s" interval="5s" name="monitor" timeout="7" />
Apr 14 20:35:05 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +           <op id="icms-start-0" interval="0" name="start" timeout="5" />
Apr 14 20:35:05 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +         </operations>
Apr 14 20:35:05 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +       </primitive>
Apr 14 20:35:05 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +     </resources>
Apr 14 20:35:05 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +   </configuration>
Apr 14 20:35:05 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: + </cib>
Apr 14 20:35:05 mgraid-S000030311-1 cib: [16631]: info: cib_process_request: Operation complete: op cib_replace for section 'all' (origin=local/crm_shadow/2, version=0.17.1): ok (rc=0)
Apr 14 20:35:05 mgraid-S000030311-1 cib: [16631]: debug: xmlfromIPC: Peer disconnected
Apr 14 20:35:05 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='shutdown'] does not exist
Apr 14 20:35:05 mgraid-S000030311-1 cib: [16631]: debug: Forking temp process write_cib_contents
Apr 14 20:35:05 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for shutdown=(null) passed
Apr 14 20:35:05 mgraid-S000030311-1 attrd: [16634]: info: attrd_trigger_update: Sending flush op to all hosts for: terminate (<null>)
Apr 14 20:35:05 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='terminate'] does not exist
Apr 14 20:35:05 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for terminate=(null) passed
Apr 14 20:35:05 mgraid-S000030311-1 attrd: [16634]: info: attrd_trigger_update: Sending flush op to all hosts for: probe_complete (true)
Apr 14 20:35:05 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='probe_complete'] (/cib/status/node_state[2]/transient_attributes/instance_attributes/nvpair)
Apr 14 20:35:05 mgraid-S000030311-1 cib: [17205]: info: write_cib_contents: Archived previous version as /var/lib/heartbeat/crm/cib-8.raw
Apr 14 20:35:05 mgraid-S000030311-1 attrd: [16634]: debug: log_data_element: find_nvpair_attr: Match <nvpair id="status-856c1f72-7cd1-4906-8183-8be87eef96f2-probe_complete" name="probe_complete" value="true" />
Apr 14 20:35:05 mgraid-S000030311-1 cib: [17205]: info: write_cib_contents: Wrote version 0.17.0 of the CIB to disk (digest: ec1828a0fcdb4ad12c9f67385c7e2e3d)
Apr 14 20:35:05 mgraid-S000030311-1 cib: [17205]: info: retrieveCib: Reading cluster configuration from: /var/lib/heartbeat/crm/cib.8OHSP5 (digest: /var/lib/heartbeat/crm/cib.Jaj2nB)
Apr 14 20:35:05 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update 36 for probe_complete=true passed
Apr 14 20:35:05 mgraid-S000030311-1 cib: [16631]: info: Managed write_cib_contents process 17205 exited with return code 0.
Apr 14 20:35:05 mgraid-S000030311-1 attrd: [16634]: info: attrd_ha_callback: flush message from mgraid-s000030311-0
Apr 14 20:35:05 mgraid-S000030311-1 crmd: [16635]: debug: populate_cib_nodes_ha: Node: mgraid-s000030311-1 (uuid: 856c1f72-7cd1-4906-8183-8be87eef96f2)
Apr 14 20:35:05 mgraid-S000030311-1 crmd: [16635]: debug: populate_cib_nodes_ha: Node: mgraid-s000030311-0 (uuid: f4e5e15c-d06b-4e37-89b9-4621af05128f)
Apr 14 20:35:05 mgraid-S000030311-1 crmd: [16635]: debug: ghash_update_cib_node: Updating mgraid-s000030311-0: true (overwrite=true) hash_size=2
Apr 14 20:35:05 mgraid-S000030311-1 crmd: [16635]: debug: ghash_update_cib_node: Updating mgraid-s000030311-1: true (overwrite=true) hash_size=2
Apr 14 20:35:05 mgraid-S000030311-1 crmd: [16635]: debug: do_cib_replaced: Updating the CIB after a replace: DC=true
Apr 14 20:35:05 mgraid-S000030311-1 crmd: [16635]: info: populate_cib_nodes_ha: Requesting the list of configured nodes
Apr 14 20:35:05 mgraid-S000030311-1 cib: [16631]: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/36, version=0.17.2): ok (rc=0)
Apr 14 20:35:05 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='probe_complete'] (/cib/status/node_state[2]/transient_attributes/instance_attributes/nvpair)
Apr 14 20:35:05 mgraid-S000030311-1 attrd: [16634]: debug: log_data_element: find_nvpair_attr: Match <nvpair id="status-856c1f72-7cd1-4906-8183-8be87eef96f2-probe_complete" name="probe_complete" value="true" />
Apr 14 20:35:05 mgraid-S000030311-1 attrd: [16634]: info: attrd_ha_callback: flush message from mgraid-s000030311-0
Apr 14 20:35:05 mgraid-S000030311-1 crmd: [16635]: debug: populate_cib_nodes_ha: Node: mgraid-s000030311-1 (uuid: 856c1f72-7cd1-4906-8183-8be87eef96f2)
Apr 14 20:35:05 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='probe_complete'] (/cib/status/node_state[2]/transient_attributes/instance_attributes/nvpair)
Apr 14 20:35:05 mgraid-S000030311-1 attrd: [16634]: debug: log_data_element: find_nvpair_attr: Match <nvpair id="status-856c1f72-7cd1-4906-8183-8be87eef96f2-probe_complete" name="probe_complete" value="true" />
Apr 14 20:35:05 mgraid-S000030311-1 attrd: [16634]: info: attrd_ha_callback: flush message from mgraid-s000030311-1
Apr 14 20:35:05 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='shutdown'] does not exist
Apr 14 20:35:05 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for shutdown=(null) passed
Apr 14 20:35:05 mgraid-S000030311-1 attrd: [16634]: info: attrd_ha_callback: flush message from mgraid-s000030311-1
Apr 14 20:35:05 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='terminate'] does not exist
Apr 14 20:35:05 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for terminate=(null) passed
Apr 14 20:35:05 mgraid-S000030311-1 attrd: [16634]: info: attrd_ha_callback: flush message from mgraid-s000030311-1
Apr 14 20:35:05 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='shutdown'] does not exist
Apr 14 20:35:05 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for shutdown=(null) passed
Apr 14 20:35:05 mgraid-S000030311-1 attrd: [16634]: info: attrd_ha_callback: flush message from mgraid-s000030311-1
Apr 14 20:35:05 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='terminate'] does not exist
Apr 14 20:35:05 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for terminate=(null) passed
Apr 14 20:35:05 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update 38 for probe_complete=true passed
Apr 14 20:35:05 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update 40 for probe_complete=true passed
Apr 14 20:35:05 mgraid-S000030311-1 cib: [16631]: debug: activateCibXml: Triggering CIB write for cib_replace op
Apr 14 20:35:05 mgraid-S000030311-1 cib: [16631]: info: cib_replace_notify: Replaced: 0.17.2 -> 0.20.1 from <null>
Apr 14 20:35:05 mgraid-S000030311-1 attrd: [16634]: info: do_cib_replaced: Sending full refresh
Apr 14 20:35:05 mgraid-S000030311-1 attrd: [16634]: info: attrd_trigger_update: Sending flush op to all hosts for: shutdown (<null>)
Apr 14 20:35:05 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: - <cib epoch="17" num_updates="2" admin_epoch="0" >
Apr 14 20:35:05 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: -   <configuration >
Apr 14 20:35:05 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: -     <resources >
Apr 14 20:35:05 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: -       <primitive class="lsb" id="icms" type="S53icms" __crm_diff_marker__="removed:top" >
Apr 14 20:35:05 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: -         <operations >
Apr 14 20:35:05 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: -           <op id="icms-monitor-5s" interval="5s" name="monitor" timeout="7" />
Apr 14 20:35:05 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: -           <op id="icms-start-0" interval="0" name="start" timeout="5" />
Apr 14 20:35:05 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: -         </operations>
Apr 14 20:35:05 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: -       </primitive>
Apr 14 20:35:05 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: -     </resources>
Apr 14 20:35:05 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: -   </configuration>
Apr 14 20:35:05 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: - </cib>
Apr 14 20:35:05 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: + <cib epoch="20" num_updates="1" admin_epoch="0" >
Apr 14 20:35:05 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +   <configuration >
Apr 14 20:35:05 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +     <resources >
Apr 14 20:35:05 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +       <clone id="cloneIcms" __crm_diff_marker__="added:top" >
Apr 14 20:35:05 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +         <primitive class="lsb" id="icms" type="S53icms" >
Apr 14 20:35:05 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +           <operations >
Apr 14 20:35:05 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +             <op id="icms-monitor-5s" interval="5s" name="monitor" timeout="7" />
Apr 14 20:35:05 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +             <op id="icms-start-0" interval="0" name="start" timeout="5" />
Apr 14 20:35:05 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +           </operations>
Apr 14 20:35:05 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +         </primitive>
Apr 14 20:35:05 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +       </clone>
Apr 14 20:35:05 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +     </resources>
Apr 14 20:35:05 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +   </configuration>
Apr 14 20:35:05 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: + </cib>
Apr 14 20:35:05 mgraid-S000030311-1 cib: [16631]: info: cib_process_request: Operation complete: op cib_replace for section 'all' (origin=local/crm_shadow/2, version=0.20.1): ok (rc=0)
Apr 14 20:35:05 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='shutdown'] does not exist
Apr 14 20:35:05 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for shutdown=(null) passed
Apr 14 20:35:05 mgraid-S000030311-1 attrd: [16634]: info: attrd_trigger_update: Sending flush op to all hosts for: terminate (<null>)
Apr 14 20:35:05 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='terminate'] does not exist
Apr 14 20:35:05 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for terminate=(null) passed
Apr 14 20:35:05 mgraid-S000030311-1 attrd: [16634]: info: attrd_trigger_update: Sending flush op to all hosts for: probe_complete (true)
Apr 14 20:35:05 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='probe_complete'] (/cib/status/node_state[2]/transient_attributes/instance_attributes/nvpair)
Apr 14 20:35:05 mgraid-S000030311-1 attrd: [16634]: debug: log_data_element: find_nvpair_attr: Match <nvpair id="status-856c1f72-7cd1-4906-8183-8be87eef96f2-probe_complete" name="probe_complete" value="true" />
Apr 14 20:35:05 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update 48 for probe_complete=true passed
Apr 14 20:35:05 mgraid-S000030311-1 cib: [16631]: debug: Forking temp process write_cib_contents
Apr 14 20:35:05 mgraid-S000030311-1 cib: [17237]: info: write_cib_contents: Archived previous version as /var/lib/heartbeat/crm/cib-9.raw
Apr 14 20:35:05 mgraid-S000030311-1 cib: [17237]: info: write_cib_contents: Wrote version 0.20.0 of the CIB to disk (digest: 8998da9a37b0beba676cc8a3cccc506d)
Apr 14 20:35:05 mgraid-S000030311-1 cib: [17237]: info: retrieveCib: Reading cluster configuration from: /var/lib/heartbeat/crm/cib.2xmA1p (digest: /var/lib/heartbeat/crm/cib.BakjhW)
Apr 14 20:35:05 mgraid-S000030311-1 cib: [16631]: info: Managed write_cib_contents process 17237 exited with return code 0.
Apr 14 20:35:05 mgraid-S000030311-1 lrmd: [16632]: debug: on_msg_register:client lrmadmin [17264] registered
Apr 14 20:35:05 mgraid-S000030311-1 lrmd: [16632]: debug: on_receive_cmd: the IPC to client [pid:17264] disconnected.
Apr 14 20:35:05 mgraid-S000030311-1 lrmd: [16632]: debug: unregister_client: client lrmadmin [pid:17264] is unregistered
Apr 14 20:35:05 mgraid-S000030311-1 lrmd: [16632]: debug: on_msg_register:client lrmadmin [17265] registered
Apr 14 20:35:05 mgraid-S000030311-1 lrmd: [16632]: debug: on_receive_cmd: the IPC to client [pid:17265] disconnected.
Apr 14 20:35:05 mgraid-S000030311-1 lrmd: [16632]: debug: unregister_client: client lrmadmin [pid:17265] is unregistered
Apr 14 20:35:05 mgraid-S000030311-1 cib: [16631]: debug: activateCibXml: Triggering CIB write for cib_replace op
Apr 14 20:35:05 mgraid-S000030311-1 cib: [16631]: info: cib_replace_notify: Local-only Replace: 0.22.1 from <null>
Apr 14 20:35:05 mgraid-S000030311-1 attrd: [16634]: info: do_cib_replaced: Sending full refresh
Apr 14 20:35:05 mgraid-S000030311-1 attrd: [16634]: info: attrd_trigger_update: Sending flush op to all hosts for: shutdown (<null>)
Apr 14 20:35:05 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: - <cib epoch="20" admin_epoch="0" num_updates="1" />
Apr 14 20:35:05 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: + <cib epoch="22" admin_epoch="0" num_updates="1" >
Apr 14 20:35:05 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +   <configuration >
Apr 14 20:35:05 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +     <resources >
Apr 14 20:35:05 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +       <primitive class="lsb" id="omserver" type="S49omserver" __crm_diff_marker__="added:top" >
Apr 14 20:35:05 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +         <operations >
Apr 14 20:35:05 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +           <op id="omserver-monitor-5s" interval="5s" name="monitor" timeout="7" />
Apr 14 20:35:05 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +           <op id="omserver-start-0" interval="0" name="start" timeout="5" />
Apr 14 20:35:05 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +         </operations>
Apr 14 20:35:05 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +       </primitive>
Apr 14 20:35:05 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +     </resources>
Apr 14 20:35:05 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +   </configuration>
Apr 14 20:35:05 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: + </cib>
Apr 14 20:35:05 mgraid-S000030311-1 cib: [16631]: info: cib_process_request: Operation complete: op cib_replace for section 'all' (origin=local/crm_shadow/2, version=0.22.1): ok (rc=0)
Apr 14 20:35:05 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='shutdown'] does not exist
Apr 14 20:35:05 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for shutdown=(null) passed
Apr 14 20:35:05 mgraid-S000030311-1 attrd: [16634]: info: attrd_trigger_update: Sending flush op to all hosts for: terminate (<null>)
Apr 14 20:35:05 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='terminate'] does not exist
Apr 14 20:35:05 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for terminate=(null) passed
Apr 14 20:35:05 mgraid-S000030311-1 attrd: [16634]: info: attrd_trigger_update: Sending flush op to all hosts for: probe_complete (true)
Apr 14 20:35:05 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='probe_complete'] (/cib/status/node_state[2]/transient_attributes/instance_attributes/nvpair)
Apr 14 20:35:05 mgraid-S000030311-1 attrd: [16634]: debug: log_data_element: find_nvpair_attr: Match <nvpair id="status-856c1f72-7cd1-4906-8183-8be87eef96f2-probe_complete" name="probe_complete" value="true" />
Apr 14 20:35:05 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update 52 for probe_complete=true passed
Apr 14 20:35:05 mgraid-S000030311-1 cib: [16631]: debug: Forking temp process write_cib_contents
Apr 14 20:35:05 mgraid-S000030311-1 cib: [17277]: info: write_cib_contents: Archived previous version as /var/lib/heartbeat/crm/cib-10.raw
Apr 14 20:35:05 mgraid-S000030311-1 cib: [17277]: info: write_cib_contents: Wrote version 0.22.0 of the CIB to disk (digest: 32f85e296832e7248ad1619f2b3b6c21)
Apr 14 20:35:05 mgraid-S000030311-1 cib: [17277]: info: retrieveCib: Reading cluster configuration from: /var/lib/heartbeat/crm/cib.ggj7yI (digest: /var/lib/heartbeat/crm/cib.vrnntf)
Apr 14 20:35:05 mgraid-S000030311-1 cib: [16631]: info: Managed write_cib_contents process 17277 exited with return code 0.
Apr 14 20:35:05 mgraid-S000030311-1 attrd: [16634]: info: attrd_ha_callback: flush message from mgraid-s000030311-1
Apr 14 20:35:05 mgraid-S000030311-1 crmd: [16635]: debug: populate_cib_nodes_ha: Node: mgraid-s000030311-0 (uuid: f4e5e15c-d06b-4e37-89b9-4621af05128f)
Apr 14 20:35:05 mgraid-S000030311-1 crmd: [16635]: debug: ghash_update_cib_node: Updating mgraid-s000030311-0: true (overwrite=true) hash_size=2
Apr 14 20:35:05 mgraid-S000030311-1 crmd: [16635]: debug: ghash_update_cib_node: Updating mgraid-s000030311-1: true (overwrite=true) hash_size=2
Apr 14 20:35:05 mgraid-S000030311-1 crmd: [16635]: debug: do_cib_replaced: Updating the CIB after a replace: DC=true
Apr 14 20:35:05 mgraid-S000030311-1 crmd: [16635]: info: populate_cib_nodes_ha: Requesting the list of configured nodes
Apr 14 20:35:05 mgraid-S000030311-1 crmd: [16635]: debug: populate_cib_nodes_ha: Node: mgraid-s000030311-1 (uuid: 856c1f72-7cd1-4906-8183-8be87eef96f2)
Apr 14 20:35:05 mgraid-S000030311-1 cib: [16631]: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/38, version=0.22.1): ok (rc=0)
Apr 14 20:35:05 mgraid-S000030311-1 crmd: [16635]: debug: populate_cib_nodes_ha: Node: mgraid-s000030311-0 (uuid: f4e5e15c-d06b-4e37-89b9-4621af05128f)
Apr 14 20:35:05 mgraid-S000030311-1 crmd: [16635]: debug: ghash_update_cib_node: Updating mgraid-s000030311-0: true (overwrite=true) hash_size=2
Apr 14 20:35:05 mgraid-S000030311-1 crmd: [16635]: debug: ghash_update_cib_node: Updating mgraid-s000030311-1: true (overwrite=true) hash_size=2
Apr 14 20:35:05 mgraid-S000030311-1 crmd: [16635]: debug: do_cib_replaced: Updating the CIB after a replace: DC=true
Apr 14 20:35:05 mgraid-S000030311-1 crmd: [16635]: info: populate_cib_nodes_ha: Requesting the list of configured nodes
Apr 14 20:35:05 mgraid-S000030311-1 cib: [16631]: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/40, version=0.22.1): ok (rc=0)
Apr 14 20:35:06 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='shutdown'] does not exist
Apr 14 20:35:06 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for shutdown=(null) passed
Apr 14 20:35:06 mgraid-S000030311-1 attrd: [16634]: info: attrd_ha_callback: flush message from mgraid-s000030311-1
Apr 14 20:35:06 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='terminate'] does not exist
Apr 14 20:35:06 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for terminate=(null) passed
Apr 14 20:35:06 mgraid-S000030311-1 attrd: [16634]: info: attrd_ha_callback: flush message from mgraid-s000030311-1
Apr 14 20:35:06 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='shutdown'] does not exist
Apr 14 20:35:06 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for shutdown=(null) passed
Apr 14 20:35:06 mgraid-S000030311-1 attrd: [16634]: info: attrd_ha_callback: flush message from mgraid-s000030311-1
Apr 14 20:35:06 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='terminate'] does not exist
Apr 14 20:35:06 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for terminate=(null) passed
Apr 14 20:35:06 mgraid-S000030311-1 cib: [16631]: debug: activateCibXml: Triggering CIB write for cib_replace op
Apr 14 20:35:06 mgraid-S000030311-1 cib: [16631]: info: cib_replace_notify: Local-only Replace: 0.25.1 from <null>
Apr 14 20:35:06 mgraid-S000030311-1 attrd: [16634]: info: do_cib_replaced: Sending full refresh
Apr 14 20:35:06 mgraid-S000030311-1 attrd: [16634]: info: attrd_trigger_update: Sending flush op to all hosts for: shutdown (<null>)
Apr 14 20:35:06 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: - <cib epoch="22" admin_epoch="0" num_updates="1" >
Apr 14 20:35:06 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: -   <configuration >
Apr 14 20:35:06 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: -     <resources >
Apr 14 20:35:06 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: -       <primitive class="lsb" id="omserver" type="S49omserver" __crm_diff_marker__="removed:top" >
Apr 14 20:35:06 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: -         <operations >
Apr 14 20:35:06 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: -           <op id="omserver-monitor-5s" interval="5s" name="monitor" timeout="7" />
Apr 14 20:35:06 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: -           <op id="omserver-start-0" interval="0" name="start" timeout="5" />
Apr 14 20:35:06 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: -         </operations>
Apr 14 20:35:06 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: -       </primitive>
Apr 14 20:35:06 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: -     </resources>
Apr 14 20:35:06 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: -   </configuration>
Apr 14 20:35:06 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: - </cib>
Apr 14 20:35:06 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: + <cib epoch="25" admin_epoch="0" num_updates="1" >
Apr 14 20:35:06 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +   <configuration >
Apr 14 20:35:06 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +     <resources >
Apr 14 20:35:06 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +       <clone id="cloneOmserver" __crm_diff_marker__="added:top" >
Apr 14 20:35:06 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +         <primitive class="lsb" id="omserver" type="S49omserver" >
Apr 14 20:35:06 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +           <operations >
Apr 14 20:35:06 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +             <op id="omserver-monitor-5s" interval="5s" name="monitor" timeout="7" />
Apr 14 20:35:06 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +             <op id="omserver-start-0" interval="0" name="start" timeout="5" />
Apr 14 20:35:06 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +           </operations>
Apr 14 20:35:06 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +         </primitive>
Apr 14 20:35:06 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +       </clone>
Apr 14 20:35:06 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +     </resources>
Apr 14 20:35:06 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +   </configuration>
Apr 14 20:35:06 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: + </cib>
Apr 14 20:35:06 mgraid-S000030311-1 cib: [16631]: info: cib_process_request: Operation complete: op cib_replace for section 'all' (origin=local/crm_shadow/2, version=0.25.1): ok (rc=0)
Apr 14 20:35:06 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='shutdown'] does not exist
Apr 14 20:35:06 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for shutdown=(null) passed
Apr 14 20:35:06 mgraid-S000030311-1 attrd: [16634]: info: attrd_trigger_update: Sending flush op to all hosts for: terminate (<null>)
Apr 14 20:35:06 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='terminate'] does not exist
Apr 14 20:35:06 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for terminate=(null) passed
Apr 14 20:35:06 mgraid-S000030311-1 attrd: [16634]: info: attrd_trigger_update: Sending flush op to all hosts for: probe_complete (true)
Apr 14 20:35:06 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='probe_complete'] (/cib/status/node_state[2]/transient_attributes/instance_attributes/nvpair)
Apr 14 20:35:06 mgraid-S000030311-1 attrd: [16634]: debug: log_data_element: find_nvpair_attr: Match <nvpair id="status-856c1f72-7cd1-4906-8183-8be87eef96f2-probe_complete" name="probe_complete" value="true" />
Apr 14 20:35:06 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update 60 for probe_complete=true passed
Apr 14 20:35:06 mgraid-S000030311-1 cib: [16631]: debug: Forking temp process write_cib_contents
Apr 14 20:35:06 mgraid-S000030311-1 cib: [17305]: info: write_cib_contents: Archived previous version as /var/lib/heartbeat/crm/cib-11.raw
Apr 14 20:35:06 mgraid-S000030311-1 cib: [17305]: info: write_cib_contents: Wrote version 0.25.0 of the CIB to disk (digest: c930f9e5de44f8876aa1352220216c9e)
Apr 14 20:35:06 mgraid-S000030311-1 cib: [17305]: info: retrieveCib: Reading cluster configuration from: /var/lib/heartbeat/crm/cib.R09YSR (digest: /var/lib/heartbeat/crm/cib.iCLCsp)
Apr 14 20:35:06 mgraid-S000030311-1 cib: [16631]: info: Managed write_cib_contents process 17305 exited with return code 0.
Apr 14 20:35:06 mgraid-S000030311-1 lrmd: [16632]: debug: on_msg_register:client lrmadmin [17308] registered
Apr 14 20:35:06 mgraid-S000030311-1 lrmd: [16632]: info: setting max-children to 8
Apr 14 20:35:06 mgraid-S000030311-1 lrmd: [16632]: debug: on_receive_cmd: the IPC to client [pid:17308] disconnected.
Apr 14 20:35:06 mgraid-S000030311-1 lrmd: [16632]: debug: unregister_client: client lrmadmin [pid:17308] is unregistered
Apr 14 20:35:06 mgraid-S000030311-1 lrmd: [16632]: debug: on_msg_register:client lrmadmin [17382] registered
Apr 14 20:35:06 mgraid-S000030311-1 lrmd: [16632]: debug: on_receive_cmd: the IPC to client [pid:17382] disconnected.
Apr 14 20:35:06 mgraid-S000030311-1 lrmd: [16632]: debug: unregister_client: client lrmadmin [pid:17382] is unregistered
Apr 14 20:35:06 mgraid-S000030311-1 lrmd: [16632]: debug: on_msg_register:client lrmadmin [17383] registered
Apr 14 20:35:06 mgraid-S000030311-1 lrmd: [16632]: debug: on_receive_cmd: the IPC to client [pid:17383] disconnected.
Apr 14 20:35:06 mgraid-S000030311-1 lrmd: [16632]: debug: unregister_client: client lrmadmin [pid:17383] is unregistered
Apr 14 20:35:06 mgraid-S000030311-1 attrd: [16634]: info: attrd_ha_callback: flush message from mgraid-s000030311-1
Apr 14 20:35:06 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='shutdown'] does not exist
Apr 14 20:35:06 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for shutdown=(null) passed
Apr 14 20:35:06 mgraid-S000030311-1 attrd: [16634]: info: attrd_ha_callback: flush message from mgraid-s000030311-1
Apr 14 20:35:06 mgraid-S000030311-1 crmd: [16635]: debug: populate_cib_nodes_ha: Node: mgraid-s000030311-1 (uuid: 856c1f72-7cd1-4906-8183-8be87eef96f2)
Apr 14 20:35:06 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='terminate'] does not exist
Apr 14 20:35:06 mgraid-S000030311-1 crmd: [16635]: debug: populate_cib_nodes_ha: Node: mgraid-s000030311-0 (uuid: f4e5e15c-d06b-4e37-89b9-4621af05128f)
Apr 14 20:35:06 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for terminate=(null) passed
Apr 14 20:35:06 mgraid-S000030311-1 crmd: [16635]: debug: ghash_update_cib_node: Updating mgraid-s000030311-0: true (overwrite=true) hash_size=2
Apr 14 20:35:06 mgraid-S000030311-1 crmd: [16635]: debug: ghash_update_cib_node: Updating mgraid-s000030311-1: true (overwrite=true) hash_size=2
Apr 14 20:35:06 mgraid-S000030311-1 crmd: [16635]: debug: do_cib_replaced: Updating the CIB after a replace: DC=true
Apr 14 20:35:06 mgraid-S000030311-1 crmd: [16635]: info: populate_cib_nodes_ha: Requesting the list of configured nodes
Apr 14 20:35:06 mgraid-S000030311-1 cib: [16631]: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/42, version=0.25.1): ok (rc=0)
Apr 14 20:35:06 mgraid-S000030311-1 cib: [16631]: debug: activateCibXml: Triggering CIB write for cib_replace op
Apr 14 20:35:06 mgraid-S000030311-1 cib: [16631]: info: cib_replace_notify: Local-only Replace: 0.27.1 from <null>
Apr 14 20:35:06 mgraid-S000030311-1 attrd: [16634]: info: do_cib_replaced: Sending full refresh
Apr 14 20:35:06 mgraid-S000030311-1 attrd: [16634]: info: attrd_trigger_update: Sending flush op to all hosts for: shutdown (<null>)
Apr 14 20:35:06 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: - <cib epoch="25" admin_epoch="0" num_updates="1" />
Apr 14 20:35:06 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: + <cib epoch="27" admin_epoch="0" num_updates="1" >
Apr 14 20:35:06 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +   <configuration >
Apr 14 20:35:06 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +     <resources >
Apr 14 20:35:06 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +       <primitive class="ocf" id="SSS000030311" provider="omneon" type="ss" __crm_diff_marker__="added:top" >
Apr 14 20:35:06 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +         <instance_attributes id="SSS000030311-instance_attributes" >
Apr 14 20:35:06 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +           <nvpair id="SSS000030311-instance_attributes-ss_resource" name="ss_resource" value="SSS000030311" />
Apr 14 20:35:06 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +           <nvpair id="SSS000030311-instance_attributes-ssconf" name="ssconf" value="/var/omneon/config/config.S000030311" />
Apr 14 20:35:06 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +         </instance_attributes>
Apr 14 20:35:06 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +         <operations >
Apr 14 20:35:06 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +           <op id="SSS000030311-monitor-3s" interval="3s" name="monitor" role="Master" timeout="7s" />
Apr 14 20:35:06 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +           <op id="SSS000030311-monitor-10s" interval="10s" name="monitor" role="Slave" timeout="7" />
Apr 14 20:35:06 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +           <op id="SSS000030311-stop-0" interval="0" name="stop" timeout="20" />
Apr 14 20:35:06 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +           <op id="SSS000030311-start-0" interval="0" name="start" timeout="300" />
Apr 14 20:35:06 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +         </operations>
Apr 14 20:35:06 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +       </primitive>
Apr 14 20:35:06 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +     </resources>
Apr 14 20:35:06 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +   </configuration>
Apr 14 20:35:06 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: + </cib>
Apr 14 20:35:06 mgraid-S000030311-1 cib: [16631]: info: cib_process_request: Operation complete: op cib_replace for section 'all' (origin=local/crm_shadow/2, version=0.27.1): ok (rc=0)
Apr 14 20:35:06 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='shutdown'] does not exist
Apr 14 20:35:06 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for shutdown=(null) passed
Apr 14 20:35:06 mgraid-S000030311-1 attrd: [16634]: info: attrd_trigger_update: Sending flush op to all hosts for: terminate (<null>)
Apr 14 20:35:06 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='terminate'] does not exist
Apr 14 20:35:06 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for terminate=(null) passed
Apr 14 20:35:06 mgraid-S000030311-1 attrd: [16634]: info: attrd_trigger_update: Sending flush op to all hosts for: probe_complete (true)
Apr 14 20:35:06 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='probe_complete'] (/cib/status/node_state[2]/transient_attributes/instance_attributes/nvpair)
Apr 14 20:35:06 mgraid-S000030311-1 attrd: [16634]: debug: log_data_element: find_nvpair_attr: Match <nvpair id="status-856c1f72-7cd1-4906-8183-8be87eef96f2-probe_complete" name="probe_complete" value="true" />
Apr 14 20:35:06 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update 66 for probe_complete=true passed
Apr 14 20:35:06 mgraid-S000030311-1 cib: [16631]: debug: Forking temp process write_cib_contents
Apr 14 20:35:06 mgraid-S000030311-1 cib: [17401]: info: write_cib_contents: Archived previous version as /var/lib/heartbeat/crm/cib-12.raw
Apr 14 20:35:06 mgraid-S000030311-1 cib: [17401]: info: write_cib_contents: Wrote version 0.27.0 of the CIB to disk (digest: 1b496e9376b9ab9932d8ee965f1d8a08)
Apr 14 20:35:06 mgraid-S000030311-1 cib: [17401]: info: retrieveCib: Reading cluster configuration from: /var/lib/heartbeat/crm/cib.jjgS2l (digest: /var/lib/heartbeat/crm/cib.Wqj3BU)
Apr 14 20:35:06 mgraid-S000030311-1 cib: [16631]: info: Managed write_cib_contents process 17401 exited with return code 0.
Apr 14 20:35:06 mgraid-S000030311-1 cib: [16631]: debug: activateCibXml: Triggering CIB write for cib_replace op
Apr 14 20:35:06 mgraid-S000030311-1 cib: [16631]: info: cib_replace_notify: Local-only Replace: 0.30.1 from <null>
Apr 14 20:35:06 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: - <cib epoch="27" admin_epoch="0" num_updates="1" >
Apr 14 20:35:06 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: -   <configuration >
Apr 14 20:35:06 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: -     <resources >
Apr 14 20:35:06 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: -       <primitive class="ocf" id="SSS000030311" provider="omneon" type="ss" __crm_diff_marker__="removed:top" >
Apr 14 20:35:06 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: -         <instance_attributes id="SSS000030311-instance_attributes" >
Apr 14 20:35:06 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: -           <nvpair id="SSS000030311-instance_attributes-ss_resource" name="ss_resource" value="SSS000030311" />
Apr 14 20:35:06 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: -           <nvpair id="SSS000030311-instance_attributes-ssconf" name="ssconf" value="/var/omneon/config/config.S000030311" />
Apr 14 20:35:06 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: -         </instance_attributes>
Apr 14 20:35:06 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: -         <operations >
Apr 14 20:35:06 mgraid-S000030311-1 attrd: [16634]: info: do_cib_replaced: Sending full refresh
Apr 14 20:35:06 mgraid-S000030311-1 attrd: [16634]: info: attrd_trigger_update: Sending flush op to all hosts for: shutdown (<null>)
Apr 14 20:35:06 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: -           <op id="SSS000030311-monitor-3s" interval="3s" name="monitor" role="Master" timeout="7s" />
Apr 14 20:35:06 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: -           <op id="SSS000030311-monitor-10s" interval="10s" name="monitor" role="Slave" timeout="7" />
Apr 14 20:35:06 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: -           <op id="SSS000030311-stop-0" interval="0" name="stop" timeout="20" />
Apr 14 20:35:06 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: -           <op id="SSS000030311-start-0" interval="0" name="start" timeout="300" />
Apr 14 20:35:06 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: -         </operations>
Apr 14 20:35:06 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: -       </primitive>
Apr 14 20:35:06 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: -     </resources>
Apr 14 20:35:06 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: -   </configuration>
Apr 14 20:35:06 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: - </cib>
Apr 14 20:35:06 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: + <cib epoch="30" admin_epoch="0" num_updates="1" >
Apr 14 20:35:06 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +   <configuration >
Apr 14 20:35:06 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +     <resources >
Apr 14 20:35:06 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +       <master id="ms-SSS000030311" __crm_diff_marker__="added:top" >
Apr 14 20:35:06 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +         <meta_attributes id="ms-SSS000030311-meta_attributes" >
Apr 14 20:35:06 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +           <nvpair id="ms-SSS000030311-meta_attributes-clone-max" name="clone-max" value="2" />
Apr 14 20:35:06 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +           <nvpair id="ms-SSS000030311-meta_attributes-notify" name="notify" value="true" />
Apr 14 20:35:06 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +           <nvpair id="ms-SSS000030311-meta_attributes-globally-unique" name="globally-unique" value="false" />
Apr 14 20:35:06 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +           <nvpair id="ms-SSS000030311-meta_attributes-target-role" name="target-role" value="Started" />
Apr 14 20:35:06 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +         </meta_attributes>
Apr 14 20:35:06 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +         <primitive class="ocf" id="SSS000030311" provider="omneon" type="ss" >
Apr 14 20:35:06 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +           <instance_attributes id="SSS000030311-instance_attributes" >
Apr 14 20:35:06 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +             <nvpair id="SSS000030311-instance_attributes-ss_resource" name="ss_resource" value="SSS000030311" />
Apr 14 20:35:06 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +             <nvpair id="SSS000030311-instance_attributes-ssconf" name="ssconf" value="/var/omneon/config/config.S000030311" />
Apr 14 20:35:06 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +           </instance_attributes>
Apr 14 20:35:06 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +           <operations >
Apr 14 20:35:06 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +             <op id="SSS000030311-monitor-3s" interval="3s" name="monitor" role="Master" timeout="7s" />
Apr 14 20:35:06 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +             <op id="SSS000030311-monitor-10s" interval="10s" name="monitor" role="Slave" timeout="7" />
Apr 14 20:35:06 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +             <op id="SSS000030311-stop-0" interval="0" name="stop" timeout="20" />
Apr 14 20:35:06 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +             <op id="SSS000030311-start-0" interval="0" name="start" timeout="300" />
Apr 14 20:35:06 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +           </operations>
Apr 14 20:35:06 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +         </primitive>
Apr 14 20:35:06 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +       </master>
Apr 14 20:35:06 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +     </resources>
Apr 14 20:35:06 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +   </configuration>
Apr 14 20:35:06 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: + </cib>
Apr 14 20:35:06 mgraid-S000030311-1 cib: [16631]: info: cib_process_request: Operation complete: op cib_replace for section 'all' (origin=local/crm_shadow/2, version=0.30.1): ok (rc=0)
Apr 14 20:35:06 mgraid-S000030311-1 cib: [16631]: debug: xmlfromIPC: Peer disconnected
Apr 14 20:35:06 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='shutdown'] does not exist
Apr 14 20:35:06 mgraid-S000030311-1 cib: [16631]: debug: Forking temp process write_cib_contents
Apr 14 20:35:06 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for shutdown=(null) passed
Apr 14 20:35:06 mgraid-S000030311-1 attrd: [16634]: info: attrd_trigger_update: Sending flush op to all hosts for: terminate (<null>)
Apr 14 20:35:06 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='terminate'] does not exist
Apr 14 20:35:06 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for terminate=(null) passed
Apr 14 20:35:06 mgraid-S000030311-1 attrd: [16634]: info: attrd_trigger_update: Sending flush op to all hosts for: probe_complete (true)
Apr 14 20:35:06 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='probe_complete'] (/cib/status/node_state[2]/transient_attributes/instance_attributes/nvpair)
Apr 14 20:35:06 mgraid-S000030311-1 cib: [17429]: info: write_cib_contents: Archived previous version as /var/lib/heartbeat/crm/cib-13.raw
Apr 14 20:35:06 mgraid-S000030311-1 attrd: [16634]: debug: log_data_element: find_nvpair_attr: Match <nvpair id="status-856c1f72-7cd1-4906-8183-8be87eef96f2-probe_complete" name="probe_complete" value="true" />
Apr 14 20:35:06 mgraid-S000030311-1 cib: [17429]: info: write_cib_contents: Wrote version 0.30.0 of the CIB to disk (digest: f3e3aac7c89f87102cec30809240785b)
Apr 14 20:35:06 mgraid-S000030311-1 cib: [17429]: info: retrieveCib: Reading cluster configuration from: /var/lib/heartbeat/crm/cib.howZkF (digest: /var/lib/heartbeat/crm/cib.61cHze)
Apr 14 20:35:06 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update 70 for probe_complete=true passed
Apr 14 20:35:06 mgraid-S000030311-1 cib: [16631]: info: Managed write_cib_contents process 17429 exited with return code 0.
Apr 14 20:35:06 mgraid-S000030311-1 heartbeat: [16608]: WARN: 1 lost packet(s) for [mgraid-s000030311-0] [195:197]
Apr 14 20:35:06 mgraid-S000030311-1 heartbeat: [16608]: info: No pkts missing from mgraid-s000030311-0!
Apr 14 20:35:06 mgraid-S000030311-1 attrd: [16634]: info: attrd_ha_callback: flush message from mgraid-s000030311-1
Apr 14 20:35:06 mgraid-S000030311-1 crmd: [16635]: debug: populate_cib_nodes_ha: Node: mgraid-s000030311-1 (uuid: 856c1f72-7cd1-4906-8183-8be87eef96f2)
Apr 14 20:35:06 mgraid-S000030311-1 crmd: [16635]: debug: populate_cib_nodes_ha: Node: mgraid-s000030311-0 (uuid: f4e5e15c-d06b-4e37-89b9-4621af05128f)
Apr 14 20:35:06 mgraid-S000030311-1 crmd: [16635]: debug: ghash_update_cib_node: Updating mgraid-s000030311-0: true (overwrite=true) hash_size=2
Apr 14 20:35:06 mgraid-S000030311-1 crmd: [16635]: debug: ghash_update_cib_node: Updating mgraid-s000030311-1: true (overwrite=true) hash_size=2
Apr 14 20:35:06 mgraid-S000030311-1 crmd: [16635]: debug: do_cib_replaced: Updating the CIB after a replace: DC=true
Apr 14 20:35:06 mgraid-S000030311-1 crmd: [16635]: info: populate_cib_nodes_ha: Requesting the list of configured nodes
Apr 14 20:35:06 mgraid-S000030311-1 cib: [16631]: WARN: cib_process_replace: Replacement 0.18.1 not applied to 0.30.1: current epoch is greater than the replacement
Apr 14 20:35:06 mgraid-S000030311-1 cib: [16631]: WARN: cib_process_request: Operation complete: op cib_replace for section 'all' (origin=mgraid-s000030311-0/crm_shadow/2, version=0.30.1): Update was older than existing configuration (rc=-45)
Apr 14 20:35:06 mgraid-S000030311-1 crmd: [16635]: debug: populate_cib_nodes_ha: Node: mgraid-s000030311-1 (uuid: 856c1f72-7cd1-4906-8183-8be87eef96f2)
Apr 14 20:35:06 mgraid-S000030311-1 crmd: [16635]: debug: populate_cib_nodes_ha: Node: mgraid-s000030311-0 (uuid: f4e5e15c-d06b-4e37-89b9-4621af05128f)
Apr 14 20:35:06 mgraid-S000030311-1 crmd: [16635]: debug: ghash_update_cib_node: Updating mgraid-s000030311-0: true (overwrite=true) hash_size=2
Apr 14 20:35:06 mgraid-S000030311-1 crmd: [16635]: debug: ghash_update_cib_node: Updating mgraid-s000030311-1: true (overwrite=true) hash_size=2
Apr 14 20:35:06 mgraid-S000030311-1 crmd: [16635]: debug: do_cib_replaced: Updating the CIB after a replace: DC=true
Apr 14 20:35:06 mgraid-S000030311-1 crmd: [16635]: info: populate_cib_nodes_ha: Requesting the list of configured nodes
Apr 14 20:35:07 mgraid-S000030311-1 cib: [16631]: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/44, version=0.30.1): ok (rc=0)
Apr 14 20:35:07 mgraid-S000030311-1 cib: [16631]: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/46, version=0.30.1): ok (rc=0)
Apr 14 20:35:07 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='shutdown'] does not exist
Apr 14 20:35:07 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for shutdown=(null) passed
Apr 14 20:35:07 mgraid-S000030311-1 attrd: [16634]: info: attrd_ha_callback: flush message from mgraid-s000030311-1
Apr 14 20:35:07 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='terminate'] does not exist
Apr 14 20:35:07 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for terminate=(null) passed
Apr 14 20:35:07 mgraid-S000030311-1 attrd: [16634]: info: attrd_ha_callback: flush message from mgraid-s000030311-1
Apr 14 20:35:07 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='shutdown'] does not exist
Apr 14 20:35:07 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for shutdown=(null) passed
Apr 14 20:35:07 mgraid-S000030311-1 attrd: [16634]: info: attrd_ha_callback: flush message from mgraid-s000030311-1
Apr 14 20:35:07 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='terminate'] does not exist
Apr 14 20:35:07 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for terminate=(null) passed
Apr 14 20:35:07 mgraid-S000030311-1 cib: [16631]: debug: activateCibXml: Triggering CIB write for cib_replace op
Apr 14 20:35:07 mgraid-S000030311-1 cib: [16631]: info: cib_replace_notify: Local-only Replace: 0.32.1 from <null>
Apr 14 20:35:07 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: - <cib epoch="30" admin_epoch="0" num_updates="1" />
Apr 14 20:35:07 mgraid-S000030311-1 attrd: [16634]: info: do_cib_replaced: Sending full refresh
Apr 14 20:35:07 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: + <cib epoch="32" admin_epoch="0" num_updates="1" >
Apr 14 20:35:07 mgraid-S000030311-1 attrd: [16634]: info: attrd_trigger_update: Sending flush op to all hosts for: shutdown (<null>)
Apr 14 20:35:07 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +   <configuration >
Apr 14 20:35:07 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +     <constraints >
Apr 14 20:35:07 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +       <rsc_order first="cloneIcms" id="orderms-SSS000030311" score="0" then="ms-SSS000030311" __crm_diff_marker__="added:top" />
Apr 14 20:35:07 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +     </constraints>
Apr 14 20:35:07 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +   </configuration>
Apr 14 20:35:07 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: + </cib>
Apr 14 20:35:07 mgraid-S000030311-1 cib: [16631]: info: cib_process_request: Operation complete: op cib_replace for section 'all' (origin=local/crm_shadow/2, version=0.32.1): ok (rc=0)
Apr 14 20:35:07 mgraid-S000030311-1 cib: [16631]: debug: xmlfromIPC: Peer disconnected
Apr 14 20:35:07 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='shutdown'] does not exist
Apr 14 20:35:07 mgraid-S000030311-1 cib: [16631]: debug: Forking temp process write_cib_contents
Apr 14 20:35:07 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for shutdown=(null) passed
Apr 14 20:35:07 mgraid-S000030311-1 attrd: [16634]: info: attrd_trigger_update: Sending flush op to all hosts for: terminate (<null>)
Apr 14 20:35:07 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='terminate'] does not exist
Apr 14 20:35:07 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for terminate=(null) passed
Apr 14 20:35:07 mgraid-S000030311-1 attrd: [16634]: info: attrd_trigger_update: Sending flush op to all hosts for: probe_complete (true)
Apr 14 20:35:07 mgraid-S000030311-1 cib: [17458]: info: write_cib_contents: Archived previous version as /var/lib/heartbeat/crm/cib-14.raw
Apr 14 20:35:07 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='probe_complete'] (/cib/status/node_state[2]/transient_attributes/instance_attributes/nvpair)
Apr 14 20:35:07 mgraid-S000030311-1 attrd: [16634]: debug: log_data_element: find_nvpair_attr: Match <nvpair id="status-856c1f72-7cd1-4906-8183-8be87eef96f2-probe_complete" name="probe_complete" value="true" />
Apr 14 20:35:07 mgraid-S000030311-1 cib: [17458]: info: write_cib_contents: Wrote version 0.32.0 of the CIB to disk (digest: dcc2c8d3875c1015fe929abf41d84216)
Apr 14 20:35:07 mgraid-S000030311-1 cib: [17458]: info: retrieveCib: Reading cluster configuration from: /var/lib/heartbeat/crm/cib.zXeYgS (digest: /var/lib/heartbeat/crm/cib.vHYsos)
Apr 14 20:35:07 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update 78 for probe_complete=true passed
Apr 14 20:35:07 mgraid-S000030311-1 cib: [16631]: info: Managed write_cib_contents process 17458 exited with return code 0.
Apr 14 20:35:07 mgraid-S000030311-1 crmd: [16635]: debug: populate_cib_nodes_ha: Node: mgraid-s000030311-1 (uuid: 856c1f72-7cd1-4906-8183-8be87eef96f2)
Apr 14 20:35:07 mgraid-S000030311-1 attrd: [16634]: info: attrd_ha_callback: flush message from mgraid-s000030311-1
Apr 14 20:35:07 mgraid-S000030311-1 crmd: [16635]: debug: populate_cib_nodes_ha: Node: mgraid-s000030311-0 (uuid: f4e5e15c-d06b-4e37-89b9-4621af05128f)
Apr 14 20:35:07 mgraid-S000030311-1 crmd: [16635]: debug: ghash_update_cib_node: Updating mgraid-s000030311-0: true (overwrite=true) hash_size=2
Apr 14 20:35:07 mgraid-S000030311-1 crmd: [16635]: debug: ghash_update_cib_node: Updating mgraid-s000030311-1: true (overwrite=true) hash_size=2
Apr 14 20:35:07 mgraid-S000030311-1 crmd: [16635]: debug: do_cib_replaced: Updating the CIB after a replace: DC=true
Apr 14 20:35:07 mgraid-S000030311-1 crmd: [16635]: info: populate_cib_nodes_ha: Requesting the list of configured nodes
Apr 14 20:35:07 mgraid-S000030311-1 cib: [16631]: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/48, version=0.32.1): ok (rc=0)
Apr 14 20:35:07 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='shutdown'] does not exist
Apr 14 20:35:07 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for shutdown=(null) passed
Apr 14 20:35:07 mgraid-S000030311-1 attrd: [16634]: info: attrd_ha_callback: flush message from mgraid-s000030311-1
Apr 14 20:35:07 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='terminate'] does not exist
Apr 14 20:35:07 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for terminate=(null) passed
Apr 14 20:35:07 mgraid-S000030311-1 crmd: [16635]: debug: populate_cib_nodes_ha: Node: mgraid-s000030311-1 (uuid: 856c1f72-7cd1-4906-8183-8be87eef96f2)
Apr 14 20:35:07 mgraid-S000030311-1 crmd: [16635]: debug: populate_cib_nodes_ha: Node: mgraid-s000030311-0 (uuid: f4e5e15c-d06b-4e37-89b9-4621af05128f)
Apr 14 20:35:07 mgraid-S000030311-1 crmd: [16635]: debug: ghash_update_cib_node: Updating mgraid-s000030311-0: true (overwrite=true) hash_size=2
Apr 14 20:35:07 mgraid-S000030311-1 crmd: [16635]: debug: ghash_update_cib_node: Updating mgraid-s000030311-1: true (overwrite=true) hash_size=2
Apr 14 20:35:07 mgraid-S000030311-1 crmd: [16635]: debug: do_cib_replaced: Updating the CIB after a replace: DC=true
Apr 14 20:35:07 mgraid-S000030311-1 crmd: [16635]: info: populate_cib_nodes_ha: Requesting the list of configured nodes
Apr 14 20:35:07 mgraid-S000030311-1 cib: [16631]: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/50, version=0.32.1): ok (rc=0)
Apr 14 20:35:08 mgraid-S000030311-1 ccm: [16630]: info: client (pid=17480) removed from ccm
Apr 14 20:35:08 mgraid-S000030311-1 crmd: [16635]: debug: populate_cib_nodes_ha: Node: mgraid-s000030311-1 (uuid: 856c1f72-7cd1-4906-8183-8be87eef96f2)
Apr 14 20:35:08 mgraid-S000030311-1 crmd: [16635]: debug: populate_cib_nodes_ha: Node: mgraid-s000030311-0 (uuid: f4e5e15c-d06b-4e37-89b9-4621af05128f)
Apr 14 20:35:08 mgraid-S000030311-1 crmd: [16635]: debug: ghash_update_cib_node: Updating mgraid-s000030311-0: true (overwrite=true) hash_size=2
Apr 14 20:35:08 mgraid-S000030311-1 crmd: [16635]: debug: ghash_update_cib_node: Updating mgraid-s000030311-1: true (overwrite=true) hash_size=2
Apr 14 20:35:08 mgraid-S000030311-1 crmd: [16635]: debug: s_crmd_fsa: Processing I_ELECTION: [ state=S_ELECTION cause=C_FSA_INTERNAL origin=do_cib_replaced ]
Apr 14 20:35:08 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_VOTE
Apr 14 20:35:08 mgraid-S000030311-1 crmd: [16635]: debug: do_election_vote: Started election 5
Apr 14 20:35:08 mgraid-S000030311-1 crmd: [16635]: debug: crm_timer_start: Election Timeout (I_ELECTION_DC:120000ms) already running: src=42
Apr 14 20:35:08 mgraid-S000030311-1 crmd: [16635]: debug: s_crmd_fsa: Processing I_ELECTION: [ state=S_ELECTION cause=C_FSA_INTERNAL origin=do_cib_replaced ]
Apr 14 20:35:08 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_VOTE
Apr 14 20:35:08 mgraid-S000030311-1 crmd: [16635]: debug: do_election_vote: Started election 6
Apr 14 20:35:08 mgraid-S000030311-1 crmd: [16635]: debug: crm_timer_start: Election Timeout (I_ELECTION_DC:120000ms) already running: src=42
Apr 14 20:35:08 mgraid-S000030311-1 crmd: [16635]: debug: s_crmd_fsa: Processing I_ELECTION: [ state=S_ELECTION cause=C_FSA_INTERNAL origin=do_cib_replaced ]
Apr 14 20:35:08 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_VOTE
Apr 14 20:35:08 mgraid-S000030311-1 crmd: [16635]: debug: do_election_vote: Started election 7
Apr 14 20:35:08 mgraid-S000030311-1 crmd: [16635]: debug: crm_timer_start: Election Timeout (I_ELECTION_DC:120000ms) already running: src=42
Apr 14 20:35:08 mgraid-S000030311-1 crmd: [16635]: debug: s_crmd_fsa: Processing I_ELECTION: [ state=S_ELECTION cause=C_FSA_INTERNAL origin=do_cib_replaced ]
Apr 14 20:35:08 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_VOTE
Apr 14 20:35:08 mgraid-S000030311-1 crmd: [16635]: debug: do_election_vote: Started election 8
Apr 14 20:35:08 mgraid-S000030311-1 crmd: [16635]: debug: crm_timer_start: Election Timeout (I_ELECTION_DC:120000ms) already running: src=42
Apr 14 20:35:08 mgraid-S000030311-1 crmd: [16635]: debug: s_crmd_fsa: Processing I_ELECTION: [ state=S_ELECTION cause=C_FSA_INTERNAL origin=do_cib_replaced ]
Apr 14 20:35:08 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_VOTE
Apr 14 20:35:08 mgraid-S000030311-1 crmd: [16635]: debug: do_election_vote: Started election 9
Apr 14 20:35:08 mgraid-S000030311-1 crmd: [16635]: debug: crm_timer_start: Election Timeout (I_ELECTION_DC:120000ms) already running: src=42
Apr 14 20:35:08 mgraid-S000030311-1 crmd: [16635]: debug: s_crmd_fsa: Processing I_ELECTION: [ state=S_ELECTION cause=C_FSA_INTERNAL origin=do_cib_replaced ]
Apr 14 20:35:08 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_VOTE
Apr 14 20:35:08 mgraid-S000030311-1 crmd: [16635]: debug: do_election_vote: Started election 10
Apr 14 20:35:08 mgraid-S000030311-1 crmd: [16635]: debug: crm_timer_start: Election Timeout (I_ELECTION_DC:120000ms) already running: src=42
Apr 14 20:35:08 mgraid-S000030311-1 crmd: [16635]: debug: s_crmd_fsa: Processing I_ELECTION: [ state=S_ELECTION cause=C_FSA_INTERNAL origin=do_cib_replaced ]
Apr 14 20:35:08 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_VOTE
Apr 14 20:35:08 mgraid-S000030311-1 crmd: [16635]: debug: do_election_vote: Started election 11
Apr 14 20:35:08 mgraid-S000030311-1 crmd: [16635]: debug: crm_timer_start: Election Timeout (I_ELECTION_DC:120000ms) already running: src=42
Apr 14 20:35:08 mgraid-S000030311-1 crmd: [16635]: debug: s_crmd_fsa: Processing I_ELECTION: [ state=S_ELECTION cause=C_FSA_INTERNAL origin=do_cib_replaced ]
Apr 14 20:35:08 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_VOTE
Apr 14 20:35:08 mgraid-S000030311-1 crmd: [16635]: debug: do_election_vote: Started election 12
Apr 14 20:35:08 mgraid-S000030311-1 crmd: [16635]: debug: crm_timer_start: Election Timeout (I_ELECTION_DC:120000ms) already running: src=42
Apr 14 20:35:08 mgraid-S000030311-1 crmd: [16635]: debug: s_crmd_fsa: Processing I_ELECTION: [ state=S_ELECTION cause=C_FSA_INTERNAL origin=do_cib_replaced ]
Apr 14 20:35:08 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_VOTE
Apr 14 20:35:08 mgraid-S000030311-1 crmd: [16635]: debug: do_election_vote: Started election 13
Apr 14 20:35:08 mgraid-S000030311-1 crmd: [16635]: debug: crm_timer_start: Election Timeout (I_ELECTION_DC:120000ms) already running: src=42
Apr 14 20:35:08 mgraid-S000030311-1 crmd: [16635]: debug: s_crmd_fsa: Processing I_ELECTION: [ state=S_ELECTION cause=C_FSA_INTERNAL origin=do_cib_replaced ]
Apr 14 20:35:08 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_VOTE
Apr 14 20:35:08 mgraid-S000030311-1 crmd: [16635]: debug: do_election_vote: Started election 14
Apr 14 20:35:08 mgraid-S000030311-1 crmd: [16635]: debug: crm_timer_start: Election Timeout (I_ELECTION_DC:120000ms) already running: src=42
Apr 14 20:35:08 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_COUNT
Apr 14 20:35:08 mgraid-S000030311-1 crmd: [16635]: debug: do_election_count_vote: Created voted hash
Apr 14 20:35:08 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_CHECK
Apr 14 20:35:08 mgraid-S000030311-1 crmd: [16635]: debug: do_election_check: Still waiting on 2 non-votes (2 total)
Apr 14 20:35:08 mgraid-S000030311-1 cib: [16631]: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/52, version=0.32.1): ok (rc=0)
Apr 14 20:35:08 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_COUNT
Apr 14 20:35:08 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_CHECK
Apr 14 20:35:08 mgraid-S000030311-1 crmd: [16635]: debug: do_election_check: Still waiting on 2 non-votes (2 total)
Apr 14 20:35:08 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_COUNT
Apr 14 20:35:08 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_CHECK
Apr 14 20:35:08 mgraid-S000030311-1 crmd: [16635]: debug: do_election_check: Still waiting on 2 non-votes (2 total)
Apr 14 20:35:08 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_COUNT
Apr 14 20:35:08 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_CHECK
Apr 14 20:35:08 mgraid-S000030311-1 crmd: [16635]: debug: do_election_check: Still waiting on 2 non-votes (2 total)
Apr 14 20:35:08 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_COUNT
Apr 14 20:35:08 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_CHECK
Apr 14 20:35:08 mgraid-S000030311-1 crmd: [16635]: debug: do_election_check: Still waiting on 2 non-votes (2 total)
Apr 14 20:35:08 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_COUNT
Apr 14 20:35:08 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_CHECK
Apr 14 20:35:08 mgraid-S000030311-1 crmd: [16635]: debug: do_election_check: Still waiting on 2 non-votes (2 total)
Apr 14 20:35:08 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_COUNT
Apr 14 20:35:08 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_CHECK
Apr 14 20:35:08 mgraid-S000030311-1 crmd: [16635]: debug: do_election_check: Still waiting on 2 non-votes (2 total)
Apr 14 20:35:08 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_COUNT
Apr 14 20:35:08 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_CHECK
Apr 14 20:35:08 mgraid-S000030311-1 crmd: [16635]: debug: do_election_check: Still waiting on 2 non-votes (2 total)
Apr 14 20:35:08 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_COUNT
Apr 14 20:35:08 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_CHECK
Apr 14 20:35:08 mgraid-S000030311-1 crmd: [16635]: debug: do_election_check: Still waiting on 2 non-votes (2 total)
Apr 14 20:35:08 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_COUNT
Apr 14 20:35:08 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_CHECK
Apr 14 20:35:08 mgraid-S000030311-1 crmd: [16635]: debug: do_election_check: Still waiting on 2 non-votes (2 total)
Apr 14 20:35:08 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_COUNT
Apr 14 20:35:08 mgraid-S000030311-1 crmd: [16635]: debug: do_election_count_vote: Election 14 (current: 14, owner: 856c1f72-7cd1-4906-8183-8be87eef96f2): Processed vote from mgraid-s000030311-1 (Recorded)
Apr 14 20:35:08 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_CHECK
Apr 14 20:35:08 mgraid-S000030311-1 crmd: [16635]: debug: do_election_check: Still waiting on 1 non-votes (2 total)
Apr 14 20:35:08 mgraid-S000030311-1 cib: [16631]: info: cib_process_request: Operation complete: op cib_replace for section 'all' (origin=mgraid-s000030311-0/crm_shadow/2, version=0.32.1): ok (rc=0)
Apr 14 20:35:09 mgraid-S000030311-1 ccm: [16630]: info: client (pid=17482) removed from ccm
Apr 14 20:35:09 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_COUNT
Apr 14 20:35:09 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_CHECK
Apr 14 20:35:09 mgraid-S000030311-1 crmd: [16635]: debug: do_election_check: Still waiting on 1 non-votes (2 total)
Apr 14 20:35:09 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_COUNT
Apr 14 20:35:09 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_CHECK
Apr 14 20:35:09 mgraid-S000030311-1 crmd: [16635]: debug: do_election_check: Still waiting on 1 non-votes (2 total)
Apr 14 20:35:09 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_COUNT
Apr 14 20:35:09 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_CHECK
Apr 14 20:35:09 mgraid-S000030311-1 crmd: [16635]: debug: do_election_check: Still waiting on 1 non-votes (2 total)
Apr 14 20:35:09 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_COUNT
Apr 14 20:35:09 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_CHECK
Apr 14 20:35:09 mgraid-S000030311-1 crmd: [16635]: debug: do_election_check: Still waiting on 1 non-votes (2 total)
Apr 14 20:35:09 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_COUNT
Apr 14 20:35:09 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_CHECK
Apr 14 20:35:09 mgraid-S000030311-1 crmd: [16635]: debug: do_election_check: Still waiting on 1 non-votes (2 total)
Apr 14 20:35:09 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_COUNT
Apr 14 20:35:09 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_CHECK
Apr 14 20:35:09 mgraid-S000030311-1 crmd: [16635]: debug: do_election_check: Still waiting on 1 non-votes (2 total)
Apr 14 20:35:09 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_COUNT
Apr 14 20:35:09 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_CHECK
Apr 14 20:35:09 mgraid-S000030311-1 crmd: [16635]: debug: do_election_check: Still waiting on 1 non-votes (2 total)
Apr 14 20:35:09 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_COUNT
Apr 14 20:35:09 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_CHECK
Apr 14 20:35:09 mgraid-S000030311-1 crmd: [16635]: debug: do_election_check: Still waiting on 1 non-votes (2 total)
Apr 14 20:35:09 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_COUNT
Apr 14 20:35:09 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_CHECK
Apr 14 20:35:09 mgraid-S000030311-1 crmd: [16635]: debug: do_election_check: Still waiting on 1 non-votes (2 total)
Apr 14 20:35:09 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_COUNT
Apr 14 20:35:09 mgraid-S000030311-1 crmd: [16635]: debug: do_election_count_vote: Election 14 (current: 14, owner: 856c1f72-7cd1-4906-8183-8be87eef96f2): Processed no-vote from mgraid-s000030311-0 (Recorded)
Apr 14 20:35:09 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_CHECK
Apr 14 20:35:09 mgraid-S000030311-1 crmd: [16635]: debug: do_election_check: Destroying voted hash
Apr 14 20:35:09 mgraid-S000030311-1 crmd: [16635]: debug: s_crmd_fsa: Processing I_ELECTION_DC: [ state=S_ELECTION cause=C_FSA_INTERNAL origin=do_election_check ]
Apr 14 20:35:09 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_LOG   
Apr 14 20:35:09 mgraid-S000030311-1 crmd: [16635]: info: do_state_transition: State transition S_ELECTION -> S_INTEGRATION [ input=I_ELECTION_DC cause=C_FSA_INTERNAL origin=do_election_check ]
Apr 14 20:35:09 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_TE_START
Apr 14 20:35:09 mgraid-S000030311-1 crmd: [16635]: debug: do_te_control: The transitioner is already active
Apr 14 20:35:09 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_PE_START
Apr 14 20:35:09 mgraid-S000030311-1 crmd: [16635]: info: start_subsystem: Starting sub-system "pengine"
Apr 14 20:35:09 mgraid-S000030311-1 crmd: [16635]: WARN: start_subsystem: Client pengine already running as pid 16986
Apr 14 20:35:10 mgraid-S000030311-1 ccm: [16630]: info: client (pid=17495) removed from ccm
Apr 14 20:35:11 mgraid-S000030311-1 ccm: [16630]: info: client (pid=17497) removed from ccm
Apr 14 20:35:11 mgraid-S000030311-1 cib: [16631]: debug: activateCibXml: Triggering CIB write for cib_replace op
Apr 14 20:35:11 mgraid-S000030311-1 cib: [16631]: info: cib_replace_notify: Local-only Replace: 0.34.1 from <null>
Apr 14 20:35:11 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: - <cib epoch="32" admin_epoch="0" num_updates="1" />
Apr 14 20:35:11 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: + <cib epoch="34" admin_epoch="0" num_updates="1" >
Apr 14 20:35:11 mgraid-S000030311-1 attrd: [16634]: info: do_cib_replaced: Sending full refresh
Apr 14 20:35:11 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +   <configuration >
Apr 14 20:35:11 mgraid-S000030311-1 attrd: [16634]: info: attrd_trigger_update: Sending flush op to all hosts for: shutdown (<null>)
Apr 14 20:35:11 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +     <constraints >
Apr 14 20:35:11 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +       <rsc_location id="ms-SSS000030311-master-w1" rsc="ms-SSS000030311" __crm_diff_marker__="added:top" >
Apr 14 20:35:11 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +         <rule id="ms-SSS000030311-master-w1-rule" role="master" score="100" >
Apr 14 20:35:11 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +           <expression attribute="#uname" id="ms-SSS000030311-master-w1-expression" operation="eq" value="mgraid-s000030311-0" />
Apr 14 20:35:11 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +         </rule>
Apr 14 20:35:11 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +       </rsc_location>
Apr 14 20:35:11 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +     </constraints>
Apr 14 20:35:11 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +   </configuration>
Apr 14 20:35:11 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: + </cib>
Apr 14 20:35:11 mgraid-S000030311-1 cib: [16631]: info: cib_process_request: Operation complete: op cib_replace for section 'all' (origin=local/crm_shadow/2, version=0.34.1): ok (rc=0)
Apr 14 20:35:11 mgraid-S000030311-1 cib: [16631]: debug: xmlfromIPC: Peer disconnected
Apr 14 20:35:11 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='shutdown'] does not exist
Apr 14 20:35:11 mgraid-S000030311-1 cib: [16631]: debug: Forking temp process write_cib_contents
Apr 14 20:35:11 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for shutdown=(null) passed
Apr 14 20:35:11 mgraid-S000030311-1 attrd: [16634]: info: attrd_trigger_update: Sending flush op to all hosts for: terminate (<null>)
Apr 14 20:35:11 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='terminate'] does not exist
Apr 14 20:35:11 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for terminate=(null) passed
Apr 14 20:35:11 mgraid-S000030311-1 attrd: [16634]: info: attrd_trigger_update: Sending flush op to all hosts for: probe_complete (true)
Apr 14 20:35:11 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='probe_complete'] (/cib/status/node_state[2]/transient_attributes/instance_attributes/nvpair)
Apr 14 20:35:11 mgraid-S000030311-1 attrd: [16634]: debug: log_data_element: find_nvpair_attr: Match <nvpair id="status-856c1f72-7cd1-4906-8183-8be87eef96f2-probe_complete" name="probe_complete" value="true" />
Apr 14 20:35:11 mgraid-S000030311-1 cib: [17525]: info: write_cib_contents: Archived previous version as /var/lib/heartbeat/crm/cib-15.raw
Apr 14 20:35:11 mgraid-S000030311-1 cib: [17525]: info: write_cib_contents: Wrote version 0.34.0 of the CIB to disk (digest: 65fb6c27f884b422a5f365496ab5b337)
Apr 14 20:35:11 mgraid-S000030311-1 cib: [17525]: info: retrieveCib: Reading cluster configuration from: /var/lib/heartbeat/crm/cib.CQC0fr (digest: /var/lib/heartbeat/crm/cib.ry1ZDc)
Apr 14 20:35:11 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update 84 for probe_complete=true passed
Apr 14 20:35:11 mgraid-S000030311-1 cib: [16631]: info: Managed write_cib_contents process 17525 exited with return code 0.
Apr 14 20:35:11 mgraid-S000030311-1 attrd: [16634]: info: attrd_ha_callback: flush message from mgraid-s000030311-1
Apr 14 20:35:11 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='shutdown'] does not exist
Apr 14 20:35:11 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for shutdown=(null) passed
Apr 14 20:35:11 mgraid-S000030311-1 attrd: [16634]: info: attrd_ha_callback: flush message from mgraid-s000030311-1
Apr 14 20:35:11 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='terminate'] does not exist
Apr 14 20:35:11 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for terminate=(null) passed
Apr 14 20:35:11 mgraid-S000030311-1 lrmd: [16632]: debug: on_msg_register:client lrmadmin [17627] registered
Apr 14 20:35:11 mgraid-S000030311-1 lrmd: [16632]: debug: on_receive_cmd: the IPC to client [pid:17627] disconnected.
Apr 14 20:35:11 mgraid-S000030311-1 lrmd: [16632]: debug: unregister_client: client lrmadmin [pid:17627] is unregistered
Apr 14 20:35:11 mgraid-S000030311-1 lrmd: [16632]: debug: on_msg_register:client lrmadmin [17628] registered
Apr 14 20:35:11 mgraid-S000030311-1 lrmd: [16632]: debug: on_receive_cmd: the IPC to client [pid:17628] disconnected.
Apr 14 20:35:11 mgraid-S000030311-1 lrmd: [16632]: debug: unregister_client: client lrmadmin [pid:17628] is unregistered
Apr 14 20:35:11 mgraid-S000030311-1 cib: [16631]: debug: activateCibXml: Triggering CIB write for cib_replace op
Apr 14 20:35:11 mgraid-S000030311-1 cib: [16631]: info: cib_replace_notify: Local-only Replace: 0.36.1 from <null>
Apr 14 20:35:11 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: - <cib epoch="34" admin_epoch="0" num_updates="1" />
Apr 14 20:35:11 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: + <cib epoch="36" admin_epoch="0" num_updates="1" >
Apr 14 20:35:11 mgraid-S000030311-1 attrd: [16634]: info: do_cib_replaced: Sending full refresh
Apr 14 20:35:11 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +   <configuration >
Apr 14 20:35:11 mgraid-S000030311-1 attrd: [16634]: info: attrd_trigger_update: Sending flush op to all hosts for: shutdown (<null>)
Apr 14 20:35:11 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +     <resources >
Apr 14 20:35:11 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +       <primitive class="ocf" id="SSJ000030313" provider="omneon" type="ss" __crm_diff_marker__="added:top" >
Apr 14 20:35:11 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +         <instance_attributes id="SSJ000030313-instance_attributes" >
Apr 14 20:35:11 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +           <nvpair id="SSJ000030313-instance_attributes-ss_resource" name="ss_resource" value="SSJ000030313" />
Apr 14 20:35:11 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +           <nvpair id="SSJ000030313-instance_attributes-ssconf" name="ssconf" value="/var/omneon/config/config.J000030313" />
Apr 14 20:35:11 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +         </instance_attributes>
Apr 14 20:35:11 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +         <operations >
Apr 14 20:35:11 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +           <op id="SSJ000030313-monitor-3s" interval="3s" name="monitor" role="Master" timeout="7s" />
Apr 14 20:35:11 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +           <op id="SSJ000030313-monitor-10s" interval="10s" name="monitor" role="Slave" timeout="7" />
Apr 14 20:35:11 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +           <op id="SSJ000030313-stop-0" interval="0" name="stop" timeout="20" />
Apr 14 20:35:11 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +           <op id="SSJ000030313-start-0" interval="0" name="start" timeout="300" />
Apr 14 20:35:11 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +         </operations>
Apr 14 20:35:11 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +       </primitive>
Apr 14 20:35:11 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +     </resources>
Apr 14 20:35:11 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +   </configuration>
Apr 14 20:35:11 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: + </cib>
Apr 14 20:35:11 mgraid-S000030311-1 cib: [16631]: info: cib_process_request: Operation complete: op cib_replace for section 'all' (origin=local/crm_shadow/2, version=0.36.1): ok (rc=0)
Apr 14 20:35:11 mgraid-S000030311-1 cib: [16631]: debug: xmlfromIPC: Peer disconnected
Apr 14 20:35:11 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='shutdown'] does not exist
Apr 14 20:35:11 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for shutdown=(null) passed
Apr 14 20:35:11 mgraid-S000030311-1 cib: [16631]: debug: Forking temp process write_cib_contents
Apr 14 20:35:11 mgraid-S000030311-1 attrd: [16634]: info: attrd_trigger_update: Sending flush op to all hosts for: terminate (<null>)
Apr 14 20:35:11 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='terminate'] does not exist
Apr 14 20:35:11 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for terminate=(null) passed
Apr 14 20:35:11 mgraid-S000030311-1 attrd: [16634]: info: attrd_trigger_update: Sending flush op to all hosts for: probe_complete (true)
Apr 14 20:35:11 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='probe_complete'] (/cib/status/node_state[2]/transient_attributes/instance_attributes/nvpair)
Apr 14 20:35:11 mgraid-S000030311-1 cib: [17644]: info: write_cib_contents: Archived previous version as /var/lib/heartbeat/crm/cib-16.raw
Apr 14 20:35:11 mgraid-S000030311-1 attrd: [16634]: debug: log_data_element: find_nvpair_attr: Match <nvpair id="status-856c1f72-7cd1-4906-8183-8be87eef96f2-probe_complete" name="probe_complete" value="true" />
Apr 14 20:35:11 mgraid-S000030311-1 cib: [17644]: info: write_cib_contents: Wrote version 0.36.0 of the CIB to disk (digest: 76cd5978b657fa8c87a046827728270c)
Apr 14 20:35:11 mgraid-S000030311-1 cib: [17644]: info: retrieveCib: Reading cluster configuration from: /var/lib/heartbeat/crm/cib.n6Xm2Z (digest: /var/lib/heartbeat/crm/cib.hopFDM)
Apr 14 20:35:11 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update 90 for probe_complete=true passed
Apr 14 20:35:11 mgraid-S000030311-1 cib: [16631]: info: Managed write_cib_contents process 17644 exited with return code 0.
Apr 14 20:35:11 mgraid-S000030311-1 attrd: [16634]: info: attrd_ha_callback: flush message from mgraid-s000030311-1
Apr 14 20:35:11 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='shutdown'] does not exist
Apr 14 20:35:11 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for shutdown=(null) passed
Apr 14 20:35:11 mgraid-S000030311-1 attrd: [16634]: info: attrd_ha_callback: flush message from mgraid-s000030311-1
Apr 14 20:35:11 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='terminate'] does not exist
Apr 14 20:35:11 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for terminate=(null) passed
Apr 14 20:35:12 mgraid-S000030311-1 cib: [16631]: debug: activateCibXml: Triggering CIB write for cib_replace op
Apr 14 20:35:12 mgraid-S000030311-1 cib: [16631]: info: cib_replace_notify: Local-only Replace: 0.39.1 from <null>
Apr 14 20:35:12 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: - <cib epoch="36" admin_epoch="0" num_updates="1" >
Apr 14 20:35:12 mgraid-S000030311-1 attrd: [16634]: info: do_cib_replaced: Sending full refresh
Apr 14 20:35:12 mgraid-S000030311-1 attrd: [16634]: info: attrd_trigger_update: Sending flush op to all hosts for: shutdown (<null>)
Apr 14 20:35:12 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: -   <configuration >
Apr 14 20:35:12 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: -     <resources >
Apr 14 20:35:12 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: -       <primitive class="ocf" id="SSJ000030313" provider="omneon" type="ss" __crm_diff_marker__="removed:top" >
Apr 14 20:35:12 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: -         <instance_attributes id="SSJ000030313-instance_attributes" >
Apr 14 20:35:12 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: -           <nvpair id="SSJ000030313-instance_attributes-ss_resource" name="ss_resource" value="SSJ000030313" />
Apr 14 20:35:12 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: -           <nvpair id="SSJ000030313-instance_attributes-ssconf" name="ssconf" value="/var/omneon/config/config.J000030313" />
Apr 14 20:35:12 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: -         </instance_attributes>
Apr 14 20:35:12 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: -         <operations >
Apr 14 20:35:12 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: -           <op id="SSJ000030313-monitor-3s" interval="3s" name="monitor" role="Master" timeout="7s" />
Apr 14 20:35:12 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: -           <op id="SSJ000030313-monitor-10s" interval="10s" name="monitor" role="Slave" timeout="7" />
Apr 14 20:35:12 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: -           <op id="SSJ000030313-stop-0" interval="0" name="stop" timeout="20" />
Apr 14 20:35:12 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: -           <op id="SSJ000030313-start-0" interval="0" name="start" timeout="300" />
Apr 14 20:35:12 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: -         </operations>
Apr 14 20:35:12 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: -       </primitive>
Apr 14 20:35:12 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: -     </resources>
Apr 14 20:35:12 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: -   </configuration>
Apr 14 20:35:12 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: - </cib>
Apr 14 20:35:12 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: + <cib epoch="39" admin_epoch="0" num_updates="1" >
Apr 14 20:35:12 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +   <configuration >
Apr 14 20:35:12 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +     <resources >
Apr 14 20:35:12 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +       <master id="ms-SSJ000030313" __crm_diff_marker__="added:top" >
Apr 14 20:35:12 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +         <meta_attributes id="ms-SSJ000030313-meta_attributes" >
Apr 14 20:35:12 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +           <nvpair id="ms-SSJ000030313-meta_attributes-clone-max" name="clone-max" value="2" />
Apr 14 20:35:12 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +           <nvpair id="ms-SSJ000030313-meta_attributes-notify" name="notify" value="true" />
Apr 14 20:35:12 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +           <nvpair id="ms-SSJ000030313-meta_attributes-globally-unique" name="globally-unique" value="false" />
Apr 14 20:35:12 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +           <nvpair id="ms-SSJ000030313-meta_attributes-target-role" name="target-role" value="Started" />
Apr 14 20:35:12 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +         </meta_attributes>
Apr 14 20:35:12 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +         <primitive class="ocf" id="SSJ000030313" provider="omneon" type="ss" >
Apr 14 20:35:12 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +           <instance_attributes id="SSJ000030313-instance_attributes" >
Apr 14 20:35:12 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +             <nvpair id="SSJ000030313-instance_attributes-ss_resource" name="ss_resource" value="SSJ000030313" />
Apr 14 20:35:12 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +             <nvpair id="SSJ000030313-instance_attributes-ssconf" name="ssconf" value="/var/omneon/config/config.J000030313" />
Apr 14 20:35:12 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +           </instance_attributes>
Apr 14 20:35:12 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +           <operations >
Apr 14 20:35:12 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +             <op id="SSJ000030313-monitor-3s" interval="3s" name="monitor" role="Master" timeout="7s" />
Apr 14 20:35:12 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +             <op id="SSJ000030313-monitor-10s" interval="10s" name="monitor" role="Slave" timeout="7" />
Apr 14 20:35:12 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +             <op id="SSJ000030313-stop-0" interval="0" name="stop" timeout="20" />
Apr 14 20:35:12 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +             <op id="SSJ000030313-start-0" interval="0" name="start" timeout="300" />
Apr 14 20:35:12 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +           </operations>
Apr 14 20:35:12 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +         </primitive>
Apr 14 20:35:12 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +       </master>
Apr 14 20:35:12 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +     </resources>
Apr 14 20:35:12 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +   </configuration>
Apr 14 20:35:12 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: + </cib>
Apr 14 20:35:12 mgraid-S000030311-1 cib: [16631]: info: cib_process_request: Operation complete: op cib_replace for section 'all' (origin=local/crm_shadow/2, version=0.39.1): ok (rc=0)
Apr 14 20:35:12 mgraid-S000030311-1 cib: [16631]: debug: xmlfromIPC: Peer disconnected
Apr 14 20:35:12 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='shutdown'] does not exist
Apr 14 20:35:12 mgraid-S000030311-1 cib: [16631]: debug: Forking temp process write_cib_contents
Apr 14 20:35:12 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for shutdown=(null) passed
Apr 14 20:35:12 mgraid-S000030311-1 attrd: [16634]: info: attrd_trigger_update: Sending flush op to all hosts for: terminate (<null>)
Apr 14 20:35:12 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='terminate'] does not exist
Apr 14 20:35:12 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for terminate=(null) passed
Apr 14 20:35:12 mgraid-S000030311-1 attrd: [16634]: info: attrd_trigger_update: Sending flush op to all hosts for: probe_complete (true)
Apr 14 20:35:12 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='probe_complete'] (/cib/status/node_state[2]/transient_attributes/instance_attributes/nvpair)
Apr 14 20:35:12 mgraid-S000030311-1 cib: [17674]: info: write_cib_contents: Archived previous version as /var/lib/heartbeat/crm/cib-17.raw
Apr 14 20:35:12 mgraid-S000030311-1 attrd: [16634]: debug: log_data_element: find_nvpair_attr: Match <nvpair id="status-856c1f72-7cd1-4906-8183-8be87eef96f2-probe_complete" name="probe_complete" value="true" />
Apr 14 20:35:12 mgraid-S000030311-1 cib: [17674]: info: write_cib_contents: Wrote version 0.39.0 of the CIB to disk (digest: da876f7f86976d44771d668591122ec8)
Apr 14 20:35:12 mgraid-S000030311-1 cib: [17674]: info: retrieveCib: Reading cluster configuration from: /var/lib/heartbeat/crm/cib.MJMvV9 (digest: /var/lib/heartbeat/crm/cib.wxE7iX)
Apr 14 20:35:12 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update 96 for probe_complete=true passed
Apr 14 20:35:12 mgraid-S000030311-1 cib: [16631]: info: Managed write_cib_contents process 17674 exited with return code 0.
Apr 14 20:35:12 mgraid-S000030311-1 attrd: [16634]: info: attrd_ha_callback: flush message from mgraid-s000030311-1
Apr 14 20:35:12 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='shutdown'] does not exist
Apr 14 20:35:12 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for shutdown=(null) passed
Apr 14 20:35:12 mgraid-S000030311-1 attrd: [16634]: info: attrd_ha_callback: flush message from mgraid-s000030311-1
Apr 14 20:35:12 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='terminate'] does not exist
Apr 14 20:35:12 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for terminate=(null) passed
Apr 14 20:35:12 mgraid-S000030311-1 cib: [16631]: debug: activateCibXml: Triggering CIB write for cib_replace op
Apr 14 20:35:12 mgraid-S000030311-1 cib: [16631]: info: cib_replace_notify: Local-only Replace: 0.41.1 from <null>
Apr 14 20:35:12 mgraid-S000030311-1 attrd: [16634]: info: do_cib_replaced: Sending full refresh
Apr 14 20:35:12 mgraid-S000030311-1 attrd: [16634]: info: attrd_trigger_update: Sending flush op to all hosts for: shutdown (<null>)
Apr 14 20:35:12 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: - <cib epoch="39" admin_epoch="0" num_updates="1" />
Apr 14 20:35:12 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: + <cib epoch="41" admin_epoch="0" num_updates="1" >
Apr 14 20:35:12 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +   <configuration >
Apr 14 20:35:12 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +     <constraints >
Apr 14 20:35:12 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +       <rsc_order first="cloneIcms" id="orderms-SSJ000030313" score="0" then="ms-SSJ000030313" __crm_diff_marker__="added:top" />
Apr 14 20:35:12 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +     </constraints>
Apr 14 20:35:12 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +   </configuration>
Apr 14 20:35:12 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: + </cib>
Apr 14 20:35:12 mgraid-S000030311-1 cib: [16631]: info: cib_process_request: Operation complete: op cib_replace for section 'all' (origin=local/crm_shadow/2, version=0.41.1): ok (rc=0)
Apr 14 20:35:12 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='shutdown'] does not exist
Apr 14 20:35:12 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for shutdown=(null) passed
Apr 14 20:35:12 mgraid-S000030311-1 attrd: [16634]: info: attrd_trigger_update: Sending flush op to all hosts for: terminate (<null>)
Apr 14 20:35:12 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='terminate'] does not exist
Apr 14 20:35:12 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for terminate=(null) passed
Apr 14 20:35:12 mgraid-S000030311-1 attrd: [16634]: info: attrd_trigger_update: Sending flush op to all hosts for: probe_complete (true)
Apr 14 20:35:12 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='probe_complete'] (/cib/status/node_state[2]/transient_attributes/instance_attributes/nvpair)
Apr 14 20:35:12 mgraid-S000030311-1 attrd: [16634]: debug: log_data_element: find_nvpair_attr: Match <nvpair id="status-856c1f72-7cd1-4906-8183-8be87eef96f2-probe_complete" name="probe_complete" value="true" />
Apr 14 20:35:12 mgraid-S000030311-1 attrd: [16634]: info: attrd_ha_callback: flush message from mgraid-s000030311-1
Apr 14 20:35:12 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='shutdown'] does not exist
Apr 14 20:35:12 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for shutdown=(null) passed
Apr 14 20:35:12 mgraid-S000030311-1 attrd: [16634]: info: attrd_ha_callback: flush message from mgraid-s000030311-1
Apr 14 20:35:12 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='terminate'] does not exist
Apr 14 20:35:12 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for terminate=(null) passed
Apr 14 20:35:12 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update 102 for probe_complete=true passed
Apr 14 20:35:12 mgraid-S000030311-1 cib: [16631]: debug: Forking temp process write_cib_contents
Apr 14 20:35:12 mgraid-S000030311-1 cib: [17705]: info: write_cib_contents: Archived previous version as /var/lib/heartbeat/crm/cib-18.raw
Apr 14 20:35:12 mgraid-S000030311-1 cib: [17705]: info: write_cib_contents: Wrote version 0.41.0 of the CIB to disk (digest: ba500d4d99bb6fe672336b52d4651d7c)
Apr 14 20:35:12 mgraid-S000030311-1 cib: [17705]: info: retrieveCib: Reading cluster configuration from: /var/lib/heartbeat/crm/cib.56i17v (digest: /var/lib/heartbeat/crm/cib.C5s9hk)
Apr 14 20:35:12 mgraid-S000030311-1 cib: [16631]: info: Managed write_cib_contents process 17705 exited with return code 0.
Apr 14 20:35:12 mgraid-S000030311-1 crmd: [16635]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/pengine
Apr 14 20:35:12 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
Apr 14 20:35:12 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_INTEGRATE_TIMER_START
Apr 14 20:35:12 mgraid-S000030311-1 crmd: [16635]: debug: crm_timer_start: Started Integration Timer (I_INTEGRATED:180000ms), src=65
Apr 14 20:35:12 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_FINALIZE_TIMER_STOP
Apr 14 20:35:12 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_DC_TAKEOVER
Apr 14 20:35:12 mgraid-S000030311-1 crmd: [16635]: info: do_dc_takeover: Taking over DC status for this partition
Apr 14 20:35:12 mgraid-S000030311-1 cib: [16631]: info: cib_process_readwrite: We are now in R/O mode
Apr 14 20:35:12 mgraid-S000030311-1 cib: [16631]: info: cib_process_request: Operation complete: op cib_slave_all for section 'all' (origin=local/crmd/54, version=0.41.1): ok (rc=0)
Apr 14 20:35:12 mgraid-S000030311-1 cib: [16631]: info: cib_process_readwrite: We are now in R/W mode
Apr 14 20:35:12 mgraid-S000030311-1 cib: [16631]: info: cib_process_request: Operation complete: op cib_master for section 'all' (origin=local/crmd/55, version=0.41.1): ok (rc=0)
Apr 14 20:35:12 mgraid-S000030311-1 cib: [16631]: info: cib_process_request: Operation complete: op cib_modify for section cib (origin=local/crmd/56, version=0.41.1): ok (rc=0)
Apr 14 20:35:12 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: Processing cib_query op for //cib/configuration/crm_config//nvpair[@name='dc-version'] (/cib/configuration/crm_config/cluster_property_set/nvpair[1])
Apr 14 20:35:12 mgraid-S000030311-1 crmd: [16635]: debug: log_data_element: find_nvpair_attr: Match <nvpair id="cib-bootstrap-options-dc-version" name="dc-version" value="1.0.9-89bd754939df5150de7cd76835f98fe90851b677" />
Apr 14 20:35:12 mgraid-S000030311-1 cib: [16631]: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/58, version=0.41.1): ok (rc=0)
Apr 14 20:35:12 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: Processing cib_query op for //cib/configuration/crm_config//nvpair[@name='cluster-infrastructure'] (/cib/configuration/crm_config/cluster_property_set/nvpair[2])
Apr 14 20:35:12 mgraid-S000030311-1 crmd: [16635]: debug: log_data_element: find_nvpair_attr: Match <nvpair id="cib-bootstrap-options-cluster-infrastructure" name="cluster-infrastructure" value="Heartbeat" />
Apr 14 20:35:12 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_DC_JOIN_OFFER_ALL
Apr 14 20:35:12 mgraid-S000030311-1 crmd: [16635]: debug: initialize_join: join-2: Initializing join data (flag=true)
Apr 14 20:35:12 mgraid-S000030311-1 crmd: [16635]: debug: join_make_offer: join-2: Sending offer to mgraid-s000030311-0
Apr 14 20:35:12 mgraid-S000030311-1 crmd: [16635]: debug: join_make_offer: join-2: Sending offer to mgraid-s000030311-1
Apr 14 20:35:12 mgraid-S000030311-1 crmd: [16635]: info: do_dc_join_offer_all: join-2: Waiting on 2 outstanding join acks
Apr 14 20:35:12 mgraid-S000030311-1 crmd: [16635]: debug: fsa_dump_inputs: Removed input: 0000000000020000 (R_HAVE_CIB)
Apr 14 20:35:12 mgraid-S000030311-1 crmd: [16635]: debug: do_cib_replaced: Updating the CIB after a replace: DC=true
Apr 14 20:35:12 mgraid-S000030311-1 crmd: [16635]: info: populate_cib_nodes_ha: Requesting the list of configured nodes
Apr 14 20:35:13 mgraid-S000030311-1 cib: [16631]: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/60, version=0.41.1): ok (rc=0)
Apr 14 20:35:13 mgraid-S000030311-1 crmd: [16635]: debug: populate_cib_nodes_ha: Node: mgraid-s000030311-1 (uuid: 856c1f72-7cd1-4906-8183-8be87eef96f2)
Apr 14 20:35:13 mgraid-S000030311-1 ccm: [16630]: info: client (pid=17725) removed from ccm
Apr 14 20:35:13 mgraid-S000030311-1 crmd: [16635]: debug: populate_cib_nodes_ha: Node: mgraid-s000030311-0 (uuid: f4e5e15c-d06b-4e37-89b9-4621af05128f)
Apr 14 20:35:13 mgraid-S000030311-1 crmd: [16635]: debug: ghash_update_cib_node: Updating mgraid-s000030311-0: true (overwrite=true) hash_size=0
Apr 14 20:35:13 mgraid-S000030311-1 crmd: [16635]: debug: ghash_update_cib_node: Updating mgraid-s000030311-1: true (overwrite=true) hash_size=0
Apr 14 20:35:13 mgraid-S000030311-1 crmd: [16635]: debug: do_cib_replaced: Updating the CIB after a replace: DC=true
Apr 14 20:35:13 mgraid-S000030311-1 crmd: [16635]: info: populate_cib_nodes_ha: Requesting the list of configured nodes
Apr 14 20:35:13 mgraid-S000030311-1 cib: [16631]: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/62, version=0.41.1): ok (rc=0)
Apr 14 20:35:14 mgraid-S000030311-1 crmd: [16635]: debug: populate_cib_nodes_ha: Node: mgraid-s000030311-1 (uuid: 856c1f72-7cd1-4906-8183-8be87eef96f2)
Apr 14 20:35:14 mgraid-S000030311-1 crmd: [16635]: debug: populate_cib_nodes_ha: Node: mgraid-s000030311-0 (uuid: f4e5e15c-d06b-4e37-89b9-4621af05128f)
Apr 14 20:35:14 mgraid-S000030311-1 crmd: [16635]: debug: ghash_update_cib_node: Updating mgraid-s000030311-0: true (overwrite=true) hash_size=0
Apr 14 20:35:14 mgraid-S000030311-1 crmd: [16635]: debug: ghash_update_cib_node: Updating mgraid-s000030311-1: true (overwrite=true) hash_size=0
Apr 14 20:35:14 mgraid-S000030311-1 crmd: [16635]: debug: do_cib_replaced: Updating the CIB after a replace: DC=true
Apr 14 20:35:14 mgraid-S000030311-1 crmd: [16635]: info: populate_cib_nodes_ha: Requesting the list of configured nodes
Apr 14 20:35:14 mgraid-S000030311-1 cib: [16631]: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/64, version=0.41.2): ok (rc=0)
Apr 14 20:35:14 mgraid-S000030311-1 ccm: [16630]: info: client (pid=17732) removed from ccm
Apr 14 20:35:14 mgraid-S000030311-1 crmd: [16635]: debug: populate_cib_nodes_ha: Node: mgraid-s000030311-1 (uuid: 856c1f72-7cd1-4906-8183-8be87eef96f2)
Apr 14 20:35:15 mgraid-S000030311-1 crmd: [16635]: debug: populate_cib_nodes_ha: Node: mgraid-s000030311-0 (uuid: f4e5e15c-d06b-4e37-89b9-4621af05128f)
Apr 14 20:35:15 mgraid-S000030311-1 crmd: [16635]: debug: ghash_update_cib_node: Updating mgraid-s000030311-0: true (overwrite=true) hash_size=0
Apr 14 20:35:15 mgraid-S000030311-1 crmd: [16635]: debug: ghash_update_cib_node: Updating mgraid-s000030311-1: true (overwrite=true) hash_size=0
Apr 14 20:35:15 mgraid-S000030311-1 crmd: [16635]: debug: do_cib_replaced: Updating the CIB after a replace: DC=true
Apr 14 20:35:15 mgraid-S000030311-1 crmd: [16635]: info: populate_cib_nodes_ha: Requesting the list of configured nodes
Apr 14 20:35:15 mgraid-S000030311-1 cib: [16631]: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/66, version=0.41.2): ok (rc=0)
Apr 14 20:35:15 mgraid-S000030311-1 ccm: [16630]: info: client (pid=17734) removed from ccm
Apr 14 20:35:15 mgraid-S000030311-1 crmd: [16635]: debug: populate_cib_nodes_ha: Node: mgraid-s000030311-1 (uuid: 856c1f72-7cd1-4906-8183-8be87eef96f2)
Apr 14 20:35:15 mgraid-S000030311-1 cib: [16631]: debug: activateCibXml: Triggering CIB write for cib_replace op
Apr 14 20:35:15 mgraid-S000030311-1 cib: [16631]: info: cib_replace_notify: Replaced: 0.41.2 -> 0.43.1 from mgraid-s000030311-0
Apr 14 20:35:15 mgraid-S000030311-1 attrd: [16634]: info: do_cib_replaced: Sending full refresh
Apr 14 20:35:15 mgraid-S000030311-1 attrd: [16634]: info: attrd_trigger_update: Sending flush op to all hosts for: shutdown (<null>)
Apr 14 20:35:15 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: - <cib epoch="41" num_updates="2" admin_epoch="0" />
Apr 14 20:35:15 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: + <cib epoch="43" num_updates="1" admin_epoch="0" >
Apr 14 20:35:15 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +   <configuration >
Apr 14 20:35:15 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +     <resources >
Apr 14 20:35:15 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +       <primitive class="ocf" id="SSJ000030316" provider="omneon" type="ss" __crm_diff_marker__="added:top" >
Apr 14 20:35:15 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +         <instance_attributes id="SSJ000030316-instance_attributes" >
Apr 14 20:35:15 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +           <nvpair id="SSJ000030316-instance_attributes-ss_resource" name="ss_resource" value="SSJ000030316" />
Apr 14 20:35:15 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +           <nvpair id="SSJ000030316-instance_attributes-ssconf" name="ssconf" value="/var/omneon/config/config.J000030316" />
Apr 14 20:35:15 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +         </instance_attributes>
Apr 14 20:35:15 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +         <operations >
Apr 14 20:35:15 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +           <op id="SSJ000030316-monitor-3s" interval="3s" name="monitor" role="Master" timeout="7s" />
Apr 14 20:35:15 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +           <op id="SSJ000030316-monitor-10s" interval="10s" name="monitor" role="Slave" timeout="7" />
Apr 14 20:35:15 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +           <op id="SSJ000030316-stop-0" interval="0" name="stop" timeout="20" />
Apr 14 20:35:15 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +           <op id="SSJ000030316-start-0" interval="0" name="start" timeout="300" />
Apr 14 20:35:15 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +         </operations>
Apr 14 20:35:15 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +       </primitive>
Apr 14 20:35:15 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +     </resources>
Apr 14 20:35:15 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +   </configuration>
Apr 14 20:35:15 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: + </cib>
Apr 14 20:35:15 mgraid-S000030311-1 cib: [16631]: info: cib_process_request: Operation complete: op cib_replace for section 'all' (origin=mgraid-s000030311-0/crm_shadow/2, version=0.43.1): ok (rc=0)
Apr 14 20:35:15 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='shutdown'] does not exist
Apr 14 20:35:15 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for shutdown=(null) passed
Apr 14 20:35:15 mgraid-S000030311-1 attrd: [16634]: info: attrd_trigger_update: Sending flush op to all hosts for: terminate (<null>)
Apr 14 20:35:15 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='terminate'] does not exist
Apr 14 20:35:15 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for terminate=(null) passed
Apr 14 20:35:15 mgraid-S000030311-1 attrd: [16634]: info: attrd_trigger_update: Sending flush op to all hosts for: probe_complete (true)
Apr 14 20:35:15 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='probe_complete'] (/cib/status/node_state[2]/transient_attributes/instance_attributes/nvpair)
Apr 14 20:35:15 mgraid-S000030311-1 attrd: [16634]: debug: log_data_element: find_nvpair_attr: Match <nvpair id="status-856c1f72-7cd1-4906-8183-8be87eef96f2-probe_complete" name="probe_complete" value="true" />
Apr 14 20:35:15 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update 108 for probe_complete=true passed
Apr 14 20:35:15 mgraid-S000030311-1 cib: [16631]: debug: Forking temp process write_cib_contents
Apr 14 20:35:15 mgraid-S000030311-1 cib: [17748]: info: write_cib_contents: Archived previous version as /var/lib/heartbeat/crm/cib-19.raw
Apr 14 20:35:15 mgraid-S000030311-1 cib: [17748]: info: write_cib_contents: Wrote version 0.43.0 of the CIB to disk (digest: 716d63d152f83d8f49aacd573cf04a4a)
Apr 14 20:35:15 mgraid-S000030311-1 cib: [17748]: info: retrieveCib: Reading cluster configuration from: /var/lib/heartbeat/crm/cib.pcddVd (digest: /var/lib/heartbeat/crm/cib.N424cb)
Apr 14 20:35:15 mgraid-S000030311-1 cib: [16631]: info: Managed write_cib_contents process 17748 exited with return code 0.
Apr 14 20:35:15 mgraid-S000030311-1 attrd: [16634]: info: attrd_ha_callback: flush message from mgraid-s000030311-1
Apr 14 20:35:15 mgraid-S000030311-1 crmd: [16635]: debug: populate_cib_nodes_ha: Node: mgraid-s000030311-0 (uuid: f4e5e15c-d06b-4e37-89b9-4621af05128f)
Apr 14 20:35:15 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='shutdown'] does not exist
Apr 14 20:35:15 mgraid-S000030311-1 crmd: [16635]: debug: ghash_update_cib_node: Updating mgraid-s000030311-0: true (overwrite=true) hash_size=0
Apr 14 20:35:15 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for shutdown=(null) passed
Apr 14 20:35:15 mgraid-S000030311-1 crmd: [16635]: debug: ghash_update_cib_node: Updating mgraid-s000030311-1: true (overwrite=true) hash_size=0
Apr 14 20:35:15 mgraid-S000030311-1 attrd: [16634]: info: attrd_ha_callback: flush message from mgraid-s000030311-1
Apr 14 20:35:15 mgraid-S000030311-1 crmd: [16635]: debug: config_query_callback: Call 61 : Parsing CIB options
Apr 14 20:35:15 mgraid-S000030311-1 crmd: [16635]: debug: cluster_option: Using default value '15min' for cluster option 'cluster-recheck-interval'
Apr 14 20:35:15 mgraid-S000030311-1 crmd: [16635]: debug: cluster_option: Using default value '2min' for cluster option 'election-timeout'
Apr 14 20:35:15 mgraid-S000030311-1 crmd: [16635]: debug: cluster_option: Using default value '20min' for cluster option 'shutdown-escalation'
Apr 14 20:35:15 mgraid-S000030311-1 crmd: [16635]: debug: cluster_option: Using default value '3min' for cluster option 'crmd-integration-timeout'
Apr 14 20:35:15 mgraid-S000030311-1 crmd: [16635]: debug: cluster_option: Using default value '30min' for cluster option 'crmd-finalization-timeout'
Apr 14 20:35:15 mgraid-S000030311-1 crmd: [16635]: debug: cluster_option: Using default value '2' for cluster option 'expected-quorum-votes'
Apr 14 20:35:15 mgraid-S000030311-1 crmd: [16635]: info: config_query_callback: Checking for expired actions every 900000ms
Apr 14 20:35:15 mgraid-S000030311-1 crmd: [16635]: debug: do_cib_replaced: Updating the CIB after a replace: DC=true
Apr 14 20:35:15 mgraid-S000030311-1 crmd: [16635]: info: populate_cib_nodes_ha: Requesting the list of configured nodes
Apr 14 20:35:15 mgraid-S000030311-1 cib: [16631]: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/68, version=0.43.1): ok (rc=0)
Apr 14 20:35:16 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='terminate'] does not exist
Apr 14 20:35:16 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for terminate=(null) passed
Apr 14 20:35:16 mgraid-S000030311-1 crmd: [16635]: debug: populate_cib_nodes_ha: Node: mgraid-s000030311-1 (uuid: 856c1f72-7cd1-4906-8183-8be87eef96f2)
Apr 14 20:35:16 mgraid-S000030311-1 ccm: [16630]: info: client (pid=17747) removed from ccm
Apr 14 20:35:16 mgraid-S000030311-1 crmd: [16635]: debug: populate_cib_nodes_ha: Node: mgraid-s000030311-0 (uuid: f4e5e15c-d06b-4e37-89b9-4621af05128f)
Apr 14 20:35:16 mgraid-S000030311-1 crmd: [16635]: debug: ghash_update_cib_node: Updating mgraid-s000030311-0: true (overwrite=true) hash_size=0
Apr 14 20:35:16 mgraid-S000030311-1 crmd: [16635]: debug: ghash_update_cib_node: Updating mgraid-s000030311-1: true (overwrite=true) hash_size=0
Apr 14 20:35:16 mgraid-S000030311-1 crmd: [16635]: debug: s_crmd_fsa: Processing I_ELECTION: [ state=S_INTEGRATION cause=C_FSA_INTERNAL origin=do_cib_replaced ]
Apr 14 20:35:16 mgraid-S000030311-1 crmd: [16635]: info: do_state_transition: State transition S_INTEGRATION -> S_ELECTION [ input=I_ELECTION cause=C_FSA_INTERNAL origin=do_cib_replaced ]
Apr 14 20:35:16 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
Apr 14 20:35:16 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_INTEGRATE_TIMER_STOP
Apr 14 20:35:16 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_FINALIZE_TIMER_STOP
Apr 14 20:35:16 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_VOTE
Apr 14 20:35:16 mgraid-S000030311-1 crmd: [16635]: debug: do_election_vote: Started election 15
Apr 14 20:35:16 mgraid-S000030311-1 crmd: [16635]: debug: crm_timer_start: Started Election Timeout (I_ELECTION_DC:120000ms), src=78
Apr 14 20:35:16 mgraid-S000030311-1 crmd: [16635]: debug: s_crmd_fsa: Processing I_ELECTION: [ state=S_ELECTION cause=C_FSA_INTERNAL origin=do_cib_replaced ]
Apr 14 20:35:16 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_VOTE
Apr 14 20:35:16 mgraid-S000030311-1 crmd: [16635]: debug: do_election_vote: Started election 16
Apr 14 20:35:16 mgraid-S000030311-1 crmd: [16635]: debug: crm_timer_start: Election Timeout (I_ELECTION_DC:120000ms) already running: src=78
Apr 14 20:35:16 mgraid-S000030311-1 crmd: [16635]: debug: s_crmd_fsa: Processing I_ELECTION: [ state=S_ELECTION cause=C_FSA_INTERNAL origin=do_cib_replaced ]
Apr 14 20:35:16 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_VOTE
Apr 14 20:35:16 mgraid-S000030311-1 crmd: [16635]: debug: do_election_vote: Started election 17
Apr 14 20:35:16 mgraid-S000030311-1 crmd: [16635]: debug: crm_timer_start: Election Timeout (I_ELECTION_DC:120000ms) already running: src=78
Apr 14 20:35:16 mgraid-S000030311-1 crmd: [16635]: debug: s_crmd_fsa: Processing I_ELECTION: [ state=S_ELECTION cause=C_FSA_INTERNAL origin=do_cib_replaced ]
Apr 14 20:35:16 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_VOTE
Apr 14 20:35:16 mgraid-S000030311-1 crmd: [16635]: debug: do_election_vote: Started election 18
Apr 14 20:35:16 mgraid-S000030311-1 crmd: [16635]: debug: crm_timer_start: Election Timeout (I_ELECTION_DC:120000ms) already running: src=78
Apr 14 20:35:16 mgraid-S000030311-1 crmd: [16635]: debug: s_crmd_fsa: Processing I_ELECTION: [ state=S_ELECTION cause=C_FSA_INTERNAL origin=do_cib_replaced ]
Apr 14 20:35:16 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_VOTE
Apr 14 20:35:16 mgraid-S000030311-1 crmd: [16635]: debug: do_election_vote: Started election 19
Apr 14 20:35:16 mgraid-S000030311-1 crmd: [16635]: debug: crm_timer_start: Election Timeout (I_ELECTION_DC:120000ms) already running: src=78
Apr 14 20:35:16 mgraid-S000030311-1 crmd: [16635]: debug: handle_request: Raising I_JOIN_OFFER: join-2
Apr 14 20:35:16 mgraid-S000030311-1 crmd: [16635]: debug: s_crmd_fsa: Processing I_JOIN_OFFER: [ state=S_ELECTION cause=C_HA_MESSAGE origin=route_message ]
Apr 14 20:35:16 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_WARN  
Apr 14 20:35:16 mgraid-S000030311-1 crmd: [16635]: WARN: do_log: FSA: Input I_JOIN_OFFER from route_message() received in state S_ELECTION
Apr 14 20:35:16 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_VOTE
Apr 14 20:35:16 mgraid-S000030311-1 crmd: [16635]: debug: do_election_vote: Started election 20
Apr 14 20:35:16 mgraid-S000030311-1 crmd: [16635]: debug: crm_timer_start: Election Timeout (I_ELECTION_DC:120000ms) already running: src=78
Apr 14 20:35:16 mgraid-S000030311-1 crmd: [16635]: debug: s_crmd_fsa: Processing I_JOIN_REQUEST: [ state=S_ELECTION cause=C_HA_MESSAGE origin=route_message ]
Apr 14 20:35:16 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_WARN  
Apr 14 20:35:16 mgraid-S000030311-1 crmd: [16635]: WARN: do_log: FSA: Input I_JOIN_REQUEST from route_message() received in state S_ELECTION
Apr 14 20:35:16 mgraid-S000030311-1 cib: [16631]: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/70, version=0.43.1): ok (rc=0)
Apr 14 20:35:16 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_COUNT
Apr 14 20:35:16 mgraid-S000030311-1 crmd: [16635]: debug: do_election_count_vote: Created voted hash
Apr 14 20:35:16 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_CHECK
Apr 14 20:35:16 mgraid-S000030311-1 crmd: [16635]: debug: do_election_check: Still waiting on 2 non-votes (2 total)
Apr 14 20:35:16 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_COUNT
Apr 14 20:35:16 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_CHECK
Apr 14 20:35:16 mgraid-S000030311-1 crmd: [16635]: debug: do_election_check: Still waiting on 2 non-votes (2 total)
Apr 14 20:35:16 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_COUNT
Apr 14 20:35:16 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_CHECK
Apr 14 20:35:16 mgraid-S000030311-1 crmd: [16635]: debug: do_election_check: Still waiting on 2 non-votes (2 total)
Apr 14 20:35:16 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_COUNT
Apr 14 20:35:16 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_CHECK
Apr 14 20:35:16 mgraid-S000030311-1 crmd: [16635]: debug: do_election_check: Still waiting on 2 non-votes (2 total)
Apr 14 20:35:16 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_COUNT
Apr 14 20:35:16 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_CHECK
Apr 14 20:35:16 mgraid-S000030311-1 crmd: [16635]: debug: do_election_check: Still waiting on 2 non-votes (2 total)
Apr 14 20:35:16 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_COUNT
Apr 14 20:35:16 mgraid-S000030311-1 crmd: [16635]: debug: do_election_count_vote: Election 20 (current: 20, owner: 856c1f72-7cd1-4906-8183-8be87eef96f2): Processed vote from mgraid-s000030311-1 (Recorded)
Apr 14 20:35:16 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_CHECK
Apr 14 20:35:16 mgraid-S000030311-1 crmd: [16635]: debug: do_election_check: Still waiting on 1 non-votes (2 total)
Apr 14 20:35:17 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_COUNT
Apr 14 20:35:17 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_CHECK
Apr 14 20:35:17 mgraid-S000030311-1 crmd: [16635]: debug: do_election_check: Still waiting on 1 non-votes (2 total)
Apr 14 20:35:17 mgraid-S000030311-1 ccm: [16630]: info: client (pid=17757) removed from ccm
Apr 14 20:35:17 mgraid-S000030311-1 heartbeat: [16608]: WARN: 1 lost packet(s) for [mgraid-s000030311-0] [246:248]
Apr 14 20:35:17 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_COUNT
Apr 14 20:35:17 mgraid-S000030311-1 heartbeat: [16608]: info: No pkts missing from mgraid-s000030311-0!
Apr 14 20:35:17 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_CHECK
Apr 14 20:35:17 mgraid-S000030311-1 crmd: [16635]: debug: do_election_check: Still waiting on 1 non-votes (2 total)
Apr 14 20:35:17 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_COUNT
Apr 14 20:35:17 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_CHECK
Apr 14 20:35:17 mgraid-S000030311-1 crmd: [16635]: debug: do_election_check: Still waiting on 1 non-votes (2 total)
Apr 14 20:35:17 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_COUNT
Apr 14 20:35:17 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_CHECK
Apr 14 20:35:17 mgraid-S000030311-1 crmd: [16635]: debug: do_election_check: Still waiting on 1 non-votes (2 total)
Apr 14 20:35:17 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_COUNT
Apr 14 20:35:17 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_CHECK
Apr 14 20:35:17 mgraid-S000030311-1 crmd: [16635]: debug: do_election_check: Still waiting on 1 non-votes (2 total)
Apr 14 20:35:17 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_COUNT
Apr 14 20:35:17 mgraid-S000030311-1 crmd: [16635]: debug: do_election_count_vote: Election 20 (current: 20, owner: 856c1f72-7cd1-4906-8183-8be87eef96f2): Processed no-vote from mgraid-s000030311-0 (Recorded)
Apr 14 20:35:17 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_CHECK
Apr 14 20:35:17 mgraid-S000030311-1 crmd: [16635]: debug: do_election_check: Destroying voted hash
Apr 14 20:35:17 mgraid-S000030311-1 crmd: [16635]: debug: s_crmd_fsa: Processing I_ELECTION_DC: [ state=S_ELECTION cause=C_FSA_INTERNAL origin=do_election_check ]
Apr 14 20:35:17 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_LOG   
Apr 14 20:35:17 mgraid-S000030311-1 crmd: [16635]: info: do_state_transition: State transition S_ELECTION -> S_INTEGRATION [ input=I_ELECTION_DC cause=C_FSA_INTERNAL origin=do_election_check ]
Apr 14 20:35:17 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_TE_START
Apr 14 20:35:17 mgraid-S000030311-1 crmd: [16635]: debug: do_te_control: The transitioner is already active
Apr 14 20:35:17 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_PE_START
Apr 14 20:35:17 mgraid-S000030311-1 crmd: [16635]: info: start_subsystem: Starting sub-system "pengine"
Apr 14 20:35:17 mgraid-S000030311-1 crmd: [16635]: WARN: start_subsystem: Client pengine already running as pid 16986
Apr 14 20:35:17 mgraid-S000030311-1 cib: [16631]: debug: activateCibXml: Triggering CIB write for cib_replace op
Apr 14 20:35:18 mgraid-S000030311-1 cib: [16631]: info: cib_replace_notify: Local-only Replace: 0.46.1 from mgraid-s000030311-0
Apr 14 20:35:18 mgraid-S000030311-1 attrd: [16634]: info: do_cib_replaced: Sending full refresh
Apr 14 20:35:18 mgraid-S000030311-1 attrd: [16634]: info: attrd_trigger_update: Sending flush op to all hosts for: shutdown (<null>)
Apr 14 20:35:18 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: - <cib epoch="43" admin_epoch="0" num_updates="1" >
Apr 14 20:35:18 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: -   <configuration >
Apr 14 20:35:18 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: -     <resources >
Apr 14 20:35:18 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: -       <primitive class="ocf" id="SSJ000030316" provider="omneon" type="ss" __crm_diff_marker__="removed:top" >
Apr 14 20:35:18 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: -         <instance_attributes id="SSJ000030316-instance_attributes" >
Apr 14 20:35:18 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: -           <nvpair id="SSJ000030316-instance_attributes-ss_resource" name="ss_resource" value="SSJ000030316" />
Apr 14 20:35:18 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: -           <nvpair id="SSJ000030316-instance_attributes-ssconf" name="ssconf" value="/var/omneon/config/config.J000030316" />
Apr 14 20:35:18 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: -         </instance_attributes>
Apr 14 20:35:18 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: -         <operations >
Apr 14 20:35:18 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: -           <op id="SSJ000030316-monitor-3s" interval="3s" name="monitor" role="Master" timeout="7s" />
Apr 14 20:35:18 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: -           <op id="SSJ000030316-monitor-10s" interval="10s" name="monitor" role="Slave" timeout="7" />
Apr 14 20:35:18 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: -           <op id="SSJ000030316-stop-0" interval="0" name="stop" timeout="20" />
Apr 14 20:35:18 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: -           <op id="SSJ000030316-start-0" interval="0" name="start" timeout="300" />
Apr 14 20:35:18 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: -         </operations>
Apr 14 20:35:18 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: -       </primitive>
Apr 14 20:35:18 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: -     </resources>
Apr 14 20:35:18 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: -   </configuration>
Apr 14 20:35:18 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: - </cib>
Apr 14 20:35:18 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: + <cib epoch="46" admin_epoch="0" num_updates="1" >
Apr 14 20:35:18 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +   <configuration >
Apr 14 20:35:18 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +     <resources >
Apr 14 20:35:18 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +       <master id="ms-SSJ000030316" __crm_diff_marker__="added:top" >
Apr 14 20:35:18 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +         <meta_attributes id="ms-SSJ000030316-meta_attributes" >
Apr 14 20:35:18 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +           <nvpair id="ms-SSJ000030316-meta_attributes-clone-max" name="clone-max" value="2" />
Apr 14 20:35:18 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +           <nvpair id="ms-SSJ000030316-meta_attributes-notify" name="notify" value="true" />
Apr 14 20:35:18 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +           <nvpair id="ms-SSJ000030316-meta_attributes-globally-unique" name="globally-unique" value="false" />
Apr 14 20:35:18 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +           <nvpair id="ms-SSJ000030316-meta_attributes-target-role" name="target-role" value="Started" />
Apr 14 20:35:18 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +         </meta_attributes>
Apr 14 20:35:18 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +         <primitive class="ocf" id="SSJ000030316" provider="omneon" type="ss" >
Apr 14 20:35:18 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +           <instance_attributes id="SSJ000030316-instance_attributes" >
Apr 14 20:35:18 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +             <nvpair id="SSJ000030316-instance_attributes-ss_resource" name="ss_resource" value="SSJ000030316" />
Apr 14 20:35:18 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +             <nvpair id="SSJ000030316-instance_attributes-ssconf" name="ssconf" value="/var/omneon/config/config.J000030316" />
Apr 14 20:35:18 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +           </instance_attributes>
Apr 14 20:35:18 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +           <operations >
Apr 14 20:35:18 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +             <op id="SSJ000030316-monitor-3s" interval="3s" name="monitor" role="Master" timeout="7s" />
Apr 14 20:35:18 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +             <op id="SSJ000030316-monitor-10s" interval="10s" name="monitor" role="Slave" timeout="7" />
Apr 14 20:35:18 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +             <op id="SSJ000030316-stop-0" interval="0" name="stop" timeout="20" />
Apr 14 20:35:18 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +             <op id="SSJ000030316-start-0" interval="0" name="start" timeout="300" />
Apr 14 20:35:18 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +           </operations>
Apr 14 20:35:18 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +         </primitive>
Apr 14 20:35:18 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +       </master>
Apr 14 20:35:18 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +     </resources>
Apr 14 20:35:18 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +   </configuration>
Apr 14 20:35:18 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: + </cib>
Apr 14 20:35:18 mgraid-S000030311-1 cib: [16631]: info: cib_process_request: Operation complete: op cib_replace for section 'all' (origin=mgraid-s000030311-0/crm_shadow/2, version=0.46.1): ok (rc=0)
Apr 14 20:35:18 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='shutdown'] does not exist
Apr 14 20:35:18 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for shutdown=(null) passed
Apr 14 20:35:18 mgraid-S000030311-1 attrd: [16634]: info: attrd_trigger_update: Sending flush op to all hosts for: terminate (<null>)
Apr 14 20:35:18 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='terminate'] does not exist
Apr 14 20:35:18 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for terminate=(null) passed
Apr 14 20:35:18 mgraid-S000030311-1 attrd: [16634]: info: attrd_trigger_update: Sending flush op to all hosts for: probe_complete (true)
Apr 14 20:35:18 mgraid-S000030311-1 cib: [16631]: debug: Forking temp process write_cib_contents
Apr 14 20:35:18 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='probe_complete'] (/cib/status/node_state[2]/transient_attributes/instance_attributes/nvpair)
Apr 14 20:35:18 mgraid-S000030311-1 attrd: [16634]: debug: log_data_element: find_nvpair_attr: Match <nvpair id="status-856c1f72-7cd1-4906-8183-8be87eef96f2-probe_complete" name="probe_complete" value="true" />
Apr 14 20:35:18 mgraid-S000030311-1 cib: [17760]: info: write_cib_contents: Archived previous version as /var/lib/heartbeat/crm/cib-20.raw
Apr 14 20:35:18 mgraid-S000030311-1 cib: [17760]: info: write_cib_contents: Wrote version 0.46.0 of the CIB to disk (digest: b8b5618e8a8340a615042963706e9f5e)
Apr 14 20:35:18 mgraid-S000030311-1 cib: [17760]: info: retrieveCib: Reading cluster configuration from: /var/lib/heartbeat/crm/cib.Qy3YJe (digest: /var/lib/heartbeat/crm/cib.GrJMmh)
Apr 14 20:35:18 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update 114 for probe_complete=true passed
Apr 14 20:35:18 mgraid-S000030311-1 cib: [16631]: info: Managed write_cib_contents process 17760 exited with return code 0.
Apr 14 20:35:18 mgraid-S000030311-1 attrd: [16634]: info: attrd_ha_callback: flush message from mgraid-s000030311-1
Apr 14 20:35:18 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='shutdown'] does not exist
Apr 14 20:35:18 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for shutdown=(null) passed
Apr 14 20:35:18 mgraid-S000030311-1 attrd: [16634]: info: attrd_ha_callback: flush message from mgraid-s000030311-1
Apr 14 20:35:18 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='terminate'] does not exist
Apr 14 20:35:18 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for terminate=(null) passed
Apr 14 20:35:18 mgraid-S000030311-1 ccm: [16630]: info: client (pid=17759) removed from ccm
Apr 14 20:35:18 mgraid-S000030311-1 cib: [16631]: debug: activateCibXml: Triggering CIB write for cib_replace op
Apr 14 20:35:18 mgraid-S000030311-1 cib: [16631]: info: cib_replace_notify: Local-only Replace: 0.48.1 from <null>
Apr 14 20:35:18 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: - <cib epoch="46" admin_epoch="0" num_updates="1" />
Apr 14 20:35:18 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: + <cib epoch="48" admin_epoch="0" num_updates="1" >
Apr 14 20:35:18 mgraid-S000030311-1 attrd: [16634]: info: do_cib_replaced: Sending full refresh
Apr 14 20:35:18 mgraid-S000030311-1 attrd: [16634]: info: attrd_trigger_update: Sending flush op to all hosts for: shutdown (<null>)
Apr 14 20:35:18 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +   <configuration >
Apr 14 20:35:18 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +     <constraints >
Apr 14 20:35:18 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +       <rsc_location id="ms-SSJ000030313-master-w1" rsc="ms-SSJ000030313" __crm_diff_marker__="added:top" >
Apr 14 20:35:18 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +         <rule id="ms-SSJ000030313-master-w1-rule" role="master" score="100" >
Apr 14 20:35:18 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +           <expression attribute="#uname" id="ms-SSJ000030313-master-w1-expression" operation="eq" value="mgraid-s000030311-1" />
Apr 14 20:35:18 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +         </rule>
Apr 14 20:35:18 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +       </rsc_location>
Apr 14 20:35:18 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +     </constraints>
Apr 14 20:35:18 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +   </configuration>
Apr 14 20:35:18 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: + </cib>
Apr 14 20:35:18 mgraid-S000030311-1 cib: [16631]: info: cib_process_request: Operation complete: op cib_replace for section 'all' (origin=local/crm_shadow/2, version=0.48.1): ok (rc=0)
Apr 14 20:35:18 mgraid-S000030311-1 cib: [16631]: debug: xmlfromIPC: Peer disconnected
Apr 14 20:35:18 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='shutdown'] does not exist
Apr 14 20:35:18 mgraid-S000030311-1 cib: [16631]: debug: Forking temp process write_cib_contents
Apr 14 20:35:18 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for shutdown=(null) passed
Apr 14 20:35:18 mgraid-S000030311-1 attrd: [16634]: info: attrd_trigger_update: Sending flush op to all hosts for: terminate (<null>)
Apr 14 20:35:18 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='terminate'] does not exist
Apr 14 20:35:18 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for terminate=(null) passed
Apr 14 20:35:18 mgraid-S000030311-1 attrd: [16634]: info: attrd_trigger_update: Sending flush op to all hosts for: probe_complete (true)
Apr 14 20:35:18 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='probe_complete'] (/cib/status/node_state[2]/transient_attributes/instance_attributes/nvpair)
Apr 14 20:35:18 mgraid-S000030311-1 attrd: [16634]: debug: log_data_element: find_nvpair_attr: Match <nvpair id="status-856c1f72-7cd1-4906-8183-8be87eef96f2-probe_complete" name="probe_complete" value="true" />
Apr 14 20:35:18 mgraid-S000030311-1 cib: [17789]: info: write_cib_contents: Archived previous version as /var/lib/heartbeat/crm/cib-21.raw
Apr 14 20:35:18 mgraid-S000030311-1 cib: [17789]: info: write_cib_contents: Wrote version 0.48.0 of the CIB to disk (digest: 3d08c53ed8d645f31ce60cbbd36cb9a6)
Apr 14 20:35:18 mgraid-S000030311-1 cib: [17789]: info: retrieveCib: Reading cluster configuration from: /var/lib/heartbeat/crm/cib.nn1Ysf (digest: /var/lib/heartbeat/crm/cib.SGRQjk)
Apr 14 20:35:18 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update 120 for probe_complete=true passed
Apr 14 20:35:18 mgraid-S000030311-1 cib: [16631]: info: Managed write_cib_contents process 17789 exited with return code 0.
Apr 14 20:35:18 mgraid-S000030311-1 attrd: [16634]: info: attrd_ha_callback: flush message from mgraid-s000030311-1
Apr 14 20:35:18 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='shutdown'] does not exist
Apr 14 20:35:18 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for shutdown=(null) passed
Apr 14 20:35:18 mgraid-S000030311-1 attrd: [16634]: info: attrd_ha_callback: flush message from mgraid-s000030311-1
Apr 14 20:35:18 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='terminate'] does not exist
Apr 14 20:35:18 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for terminate=(null) passed
Apr 14 20:35:19 mgraid-S000030311-1 lrmd: [16632]: debug: on_msg_register:client lrmadmin [17890] registered
Apr 14 20:35:19 mgraid-S000030311-1 lrmd: [16632]: debug: on_receive_cmd: the IPC to client [pid:17890] disconnected.
Apr 14 20:35:19 mgraid-S000030311-1 lrmd: [16632]: debug: unregister_client: client lrmadmin [pid:17890] is unregistered
Apr 14 20:35:19 mgraid-S000030311-1 lrmd: [16632]: debug: on_msg_register:client lrmadmin [17891] registered
Apr 14 20:35:19 mgraid-S000030311-1 lrmd: [16632]: debug: on_receive_cmd: the IPC to client [pid:17891] disconnected.
Apr 14 20:35:19 mgraid-S000030311-1 lrmd: [16632]: debug: unregister_client: client lrmadmin [pid:17891] is unregistered
Apr 14 20:35:19 mgraid-S000030311-1 cib: [16631]: debug: activateCibXml: Triggering CIB write for cib_replace op
Apr 14 20:35:19 mgraid-S000030311-1 cib: [16631]: info: cib_replace_notify: Local-only Replace: 0.50.1 from <null>
Apr 14 20:35:19 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: - <cib epoch="48" admin_epoch="0" num_updates="1" />
Apr 14 20:35:19 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: + <cib epoch="50" admin_epoch="0" num_updates="1" >
Apr 14 20:35:19 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +   <configuration >
Apr 14 20:35:19 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +     <resources >
Apr 14 20:35:19 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +       <primitive class="ocf" id="SSJ000030312" provider="omneon" type="ss" __crm_diff_marker__="added:top" >
Apr 14 20:35:19 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +         <instance_attributes id="SSJ000030312-instance_attributes" >
Apr 14 20:35:19 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +           <nvpair id="SSJ000030312-instance_attributes-ss_resource" name="ss_resource" value="SSJ000030312" />
Apr 14 20:35:19 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +           <nvpair id="SSJ000030312-instance_attributes-ssconf" name="ssconf" value="/var/omneon/config/config.J000030312" />
Apr 14 20:35:19 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +         </instance_attributes>
Apr 14 20:35:19 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +         <operations >
Apr 14 20:35:19 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +           <op id="SSJ000030312-monitor-3s" interval="3s" name="monitor" role="Master" timeout="7s" />
Apr 14 20:35:19 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +           <op id="SSJ000030312-monitor-10s" interval="10s" name="monitor" role="Slave" timeout="7" />
Apr 14 20:35:19 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +           <op id="SSJ000030312-stop-0" interval="0" name="stop" timeout="20" />
Apr 14 20:35:19 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +           <op id="SSJ000030312-start-0" interval="0" name="start" timeout="300" />
Apr 14 20:35:19 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +         </operations>
Apr 14 20:35:19 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +       </primitive>
Apr 14 20:35:19 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +     </resources>
Apr 14 20:35:19 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +   </configuration>
Apr 14 20:35:19 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: + </cib>
Apr 14 20:35:19 mgraid-S000030311-1 cib: [16631]: info: cib_process_request: Operation complete: op cib_replace for section 'all' (origin=local/crm_shadow/2, version=0.50.1): ok (rc=0)
Apr 14 20:35:19 mgraid-S000030311-1 cib: [16631]: debug: xmlfromIPC: Peer disconnected
Apr 14 20:35:19 mgraid-S000030311-1 cib: [16631]: debug: Forking temp process write_cib_contents
Apr 14 20:35:19 mgraid-S000030311-1 attrd: [16634]: info: do_cib_replaced: Sending full refresh
Apr 14 20:35:19 mgraid-S000030311-1 attrd: [16634]: info: attrd_trigger_update: Sending flush op to all hosts for: shutdown (<null>)
Apr 14 20:35:19 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='shutdown'] does not exist
Apr 14 20:35:19 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for shutdown=(null) passed
Apr 14 20:35:19 mgraid-S000030311-1 attrd: [16634]: info: attrd_trigger_update: Sending flush op to all hosts for: terminate (<null>)
Apr 14 20:35:19 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='terminate'] does not exist
Apr 14 20:35:19 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for terminate=(null) passed
Apr 14 20:35:19 mgraid-S000030311-1 attrd: [16634]: info: attrd_trigger_update: Sending flush op to all hosts for: probe_complete (true)
Apr 14 20:35:19 mgraid-S000030311-1 cib: [17907]: info: write_cib_contents: Archived previous version as /var/lib/heartbeat/crm/cib-22.raw
Apr 14 20:35:19 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='probe_complete'] (/cib/status/node_state[2]/transient_attributes/instance_attributes/nvpair)
Apr 14 20:35:19 mgraid-S000030311-1 attrd: [16634]: debug: log_data_element: find_nvpair_attr: Match <nvpair id="status-856c1f72-7cd1-4906-8183-8be87eef96f2-probe_complete" name="probe_complete" value="true" />
Apr 14 20:35:19 mgraid-S000030311-1 cib: [17907]: info: write_cib_contents: Wrote version 0.50.0 of the CIB to disk (digest: 8756f37f5f23804c4a338166d44106d3)
Apr 14 20:35:19 mgraid-S000030311-1 cib: [17907]: info: retrieveCib: Reading cluster configuration from: /var/lib/heartbeat/crm/cib.Ic4mrJ (digest: /var/lib/heartbeat/crm/cib.vsfQKP)
Apr 14 20:35:19 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update 126 for probe_complete=true passed
Apr 14 20:35:19 mgraid-S000030311-1 cib: [16631]: info: Managed write_cib_contents process 17907 exited with return code 0.
Apr 14 20:35:19 mgraid-S000030311-1 attrd: [16634]: info: attrd_ha_callback: flush message from mgraid-s000030311-1
Apr 14 20:35:19 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='shutdown'] does not exist
Apr 14 20:35:19 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for shutdown=(null) passed
Apr 14 20:35:19 mgraid-S000030311-1 attrd: [16634]: info: attrd_ha_callback: flush message from mgraid-s000030311-1
Apr 14 20:35:19 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='terminate'] does not exist
Apr 14 20:35:19 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for terminate=(null) passed
Apr 14 20:35:19 mgraid-S000030311-1 cib: [16631]: debug: activateCibXml: Triggering CIB write for cib_replace op
Apr 14 20:35:19 mgraid-S000030311-1 cib: [16631]: info: cib_replace_notify: Local-only Replace: 0.53.1 from <null>
Apr 14 20:35:19 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: - <cib epoch="50" admin_epoch="0" num_updates="1" >
Apr 14 20:35:19 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: -   <configuration >
Apr 14 20:35:19 mgraid-S000030311-1 attrd: [16634]: info: do_cib_replaced: Sending full refresh
Apr 14 20:35:19 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: -     <resources >
Apr 14 20:35:19 mgraid-S000030311-1 attrd: [16634]: info: attrd_trigger_update: Sending flush op to all hosts for: shutdown (<null>)
Apr 14 20:35:19 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: -       <primitive class="ocf" id="SSJ000030312" provider="omneon" type="ss" __crm_diff_marker__="removed:top" >
Apr 14 20:35:19 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: -         <instance_attributes id="SSJ000030312-instance_attributes" >
Apr 14 20:35:19 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: -           <nvpair id="SSJ000030312-instance_attributes-ss_resource" name="ss_resource" value="SSJ000030312" />
Apr 14 20:35:19 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: -           <nvpair id="SSJ000030312-instance_attributes-ssconf" name="ssconf" value="/var/omneon/config/config.J000030312" />
Apr 14 20:35:19 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: -         </instance_attributes>
Apr 14 20:35:19 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: -         <operations >
Apr 14 20:35:19 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: -           <op id="SSJ000030312-monitor-3s" interval="3s" name="monitor" role="Master" timeout="7s" />
Apr 14 20:35:19 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: -           <op id="SSJ000030312-monitor-10s" interval="10s" name="monitor" role="Slave" timeout="7" />
Apr 14 20:35:19 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: -           <op id="SSJ000030312-stop-0" interval="0" name="stop" timeout="20" />
Apr 14 20:35:19 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: -           <op id="SSJ000030312-start-0" interval="0" name="start" timeout="300" />
Apr 14 20:35:19 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: -         </operations>
Apr 14 20:35:19 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: -       </primitive>
Apr 14 20:35:19 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: -     </resources>
Apr 14 20:35:19 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: -   </configuration>
Apr 14 20:35:19 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: - </cib>
Apr 14 20:35:19 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: + <cib epoch="53" admin_epoch="0" num_updates="1" >
Apr 14 20:35:19 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +   <configuration >
Apr 14 20:35:19 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +     <resources >
Apr 14 20:35:19 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +       <master id="ms-SSJ000030312" __crm_diff_marker__="added:top" >
Apr 14 20:35:19 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +         <meta_attributes id="ms-SSJ000030312-meta_attributes" >
Apr 14 20:35:19 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +           <nvpair id="ms-SSJ000030312-meta_attributes-clone-max" name="clone-max" value="2" />
Apr 14 20:35:19 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +           <nvpair id="ms-SSJ000030312-meta_attributes-notify" name="notify" value="true" />
Apr 14 20:35:19 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +           <nvpair id="ms-SSJ000030312-meta_attributes-globally-unique" name="globally-unique" value="false" />
Apr 14 20:35:19 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +           <nvpair id="ms-SSJ000030312-meta_attributes-target-role" name="target-role" value="Started" />
Apr 14 20:35:19 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +         </meta_attributes>
Apr 14 20:35:19 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +         <primitive class="ocf" id="SSJ000030312" provider="omneon" type="ss" >
Apr 14 20:35:19 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +           <instance_attributes id="SSJ000030312-instance_attributes" >
Apr 14 20:35:19 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +             <nvpair id="SSJ000030312-instance_attributes-ss_resource" name="ss_resource" value="SSJ000030312" />
Apr 14 20:35:19 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +             <nvpair id="SSJ000030312-instance_attributes-ssconf" name="ssconf" value="/var/omneon/config/config.J000030312" />
Apr 14 20:35:19 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +           </instance_attributes>
Apr 14 20:35:19 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +           <operations >
Apr 14 20:35:19 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +             <op id="SSJ000030312-monitor-3s" interval="3s" name="monitor" role="Master" timeout="7s" />
Apr 14 20:35:19 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +             <op id="SSJ000030312-monitor-10s" interval="10s" name="monitor" role="Slave" timeout="7" />
Apr 14 20:35:19 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +             <op id="SSJ000030312-stop-0" interval="0" name="stop" timeout="20" />
Apr 14 20:35:19 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +             <op id="SSJ000030312-start-0" interval="0" name="start" timeout="300" />
Apr 14 20:35:19 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +           </operations>
Apr 14 20:35:19 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +         </primitive>
Apr 14 20:35:19 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +       </master>
Apr 14 20:35:19 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +     </resources>
Apr 14 20:35:19 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +   </configuration>
Apr 14 20:35:19 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: + </cib>
Apr 14 20:35:19 mgraid-S000030311-1 cib: [16631]: info: cib_process_request: Operation complete: op cib_replace for section 'all' (origin=local/crm_shadow/2, version=0.53.1): ok (rc=0)
Apr 14 20:35:19 mgraid-S000030311-1 cib: [16631]: debug: xmlfromIPC: Peer disconnected
Apr 14 20:35:19 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='shutdown'] does not exist
Apr 14 20:35:19 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for shutdown=(null) passed
Apr 14 20:35:19 mgraid-S000030311-1 cib: [16631]: debug: Forking temp process write_cib_contents
Apr 14 20:35:19 mgraid-S000030311-1 attrd: [16634]: info: attrd_trigger_update: Sending flush op to all hosts for: terminate (<null>)
Apr 14 20:35:19 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='terminate'] does not exist
Apr 14 20:35:19 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for terminate=(null) passed
Apr 14 20:35:19 mgraid-S000030311-1 attrd: [16634]: info: attrd_trigger_update: Sending flush op to all hosts for: probe_complete (true)
Apr 14 20:35:19 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='probe_complete'] (/cib/status/node_state[2]/transient_attributes/instance_attributes/nvpair)
Apr 14 20:35:19 mgraid-S000030311-1 attrd: [16634]: debug: log_data_element: find_nvpair_attr: Match <nvpair id="status-856c1f72-7cd1-4906-8183-8be87eef96f2-probe_complete" name="probe_complete" value="true" />
Apr 14 20:35:19 mgraid-S000030311-1 cib: [17938]: info: write_cib_contents: Archived previous version as /var/lib/heartbeat/crm/cib-23.raw
Apr 14 20:35:19 mgraid-S000030311-1 cib: [17938]: info: write_cib_contents: Wrote version 0.53.0 of the CIB to disk (digest: 9059847675893fda6d2191659f38f215)
Apr 14 20:35:19 mgraid-S000030311-1 cib: [17938]: info: retrieveCib: Reading cluster configuration from: /var/lib/heartbeat/crm/cib.tCRDta (digest: /var/lib/heartbeat/crm/cib.hOHkKh)
Apr 14 20:35:19 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update 132 for probe_complete=true passed
Apr 14 20:35:19 mgraid-S000030311-1 cib: [16631]: info: Managed write_cib_contents process 17938 exited with return code 0.
Apr 14 20:35:19 mgraid-S000030311-1 attrd: [16634]: info: attrd_ha_callback: flush message from mgraid-s000030311-1
Apr 14 20:35:19 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='shutdown'] does not exist
Apr 14 20:35:19 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for shutdown=(null) passed
Apr 14 20:35:19 mgraid-S000030311-1 attrd: [16634]: info: attrd_ha_callback: flush message from mgraid-s000030311-1
Apr 14 20:35:19 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='terminate'] does not exist
Apr 14 20:35:19 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for terminate=(null) passed
Apr 14 20:35:20 mgraid-S000030311-1 cib: [16631]: debug: activateCibXml: Triggering CIB write for cib_replace op
Apr 14 20:35:20 mgraid-S000030311-1 cib: [16631]: info: cib_replace_notify: Local-only Replace: 0.55.1 from <null>
Apr 14 20:35:20 mgraid-S000030311-1 attrd: [16634]: info: do_cib_replaced: Sending full refresh
Apr 14 20:35:20 mgraid-S000030311-1 attrd: [16634]: info: attrd_trigger_update: Sending flush op to all hosts for: shutdown (<null>)
Apr 14 20:35:20 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: - <cib epoch="53" admin_epoch="0" num_updates="1" />
Apr 14 20:35:20 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: + <cib epoch="55" admin_epoch="0" num_updates="1" >
Apr 14 20:35:20 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +   <configuration >
Apr 14 20:35:20 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +     <constraints >
Apr 14 20:35:20 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +       <rsc_order first="cloneIcms" id="orderms-SSJ000030312" score="0" then="ms-SSJ000030312" __crm_diff_marker__="added:top" />
Apr 14 20:35:20 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +     </constraints>
Apr 14 20:35:20 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +   </configuration>
Apr 14 20:35:20 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: + </cib>
Apr 14 20:35:20 mgraid-S000030311-1 cib: [16631]: info: cib_process_request: Operation complete: op cib_replace for section 'all' (origin=local/crm_shadow/2, version=0.55.1): ok (rc=0)
Apr 14 20:35:20 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='shutdown'] does not exist
Apr 14 20:35:20 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for shutdown=(null) passed
Apr 14 20:35:20 mgraid-S000030311-1 attrd: [16634]: info: attrd_trigger_update: Sending flush op to all hosts for: terminate (<null>)
Apr 14 20:35:20 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='terminate'] does not exist
Apr 14 20:35:20 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for terminate=(null) passed
Apr 14 20:35:20 mgraid-S000030311-1 attrd: [16634]: info: attrd_trigger_update: Sending flush op to all hosts for: probe_complete (true)
Apr 14 20:35:20 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='probe_complete'] (/cib/status/node_state[2]/transient_attributes/instance_attributes/nvpair)
Apr 14 20:35:20 mgraid-S000030311-1 attrd: [16634]: debug: log_data_element: find_nvpair_attr: Match <nvpair id="status-856c1f72-7cd1-4906-8183-8be87eef96f2-probe_complete" name="probe_complete" value="true" />
Apr 14 20:35:20 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update 138 for probe_complete=true passed
Apr 14 20:35:20 mgraid-S000030311-1 cib: [16631]: debug: Forking temp process write_cib_contents
Apr 14 20:35:20 mgraid-S000030311-1 cib: [17968]: info: write_cib_contents: Archived previous version as /var/lib/heartbeat/crm/cib-24.raw
Apr 14 20:35:20 mgraid-S000030311-1 cib: [17968]: info: write_cib_contents: Wrote version 0.55.0 of the CIB to disk (digest: 448867dcc31e411d4b030ed70e7252fe)
Apr 14 20:35:20 mgraid-S000030311-1 cib: [17968]: info: retrieveCib: Reading cluster configuration from: /var/lib/heartbeat/crm/cib.sPIVvq (digest: /var/lib/heartbeat/crm/cib.W8iIGy)
Apr 14 20:35:20 mgraid-S000030311-1 cib: [16631]: info: Managed write_cib_contents process 17968 exited with return code 0.
Apr 14 20:35:20 mgraid-S000030311-1 attrd: [16634]: info: attrd_ha_callback: flush message from mgraid-s000030311-1
Apr 14 20:35:20 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='shutdown'] does not exist
Apr 14 20:35:20 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for shutdown=(null) passed
Apr 14 20:35:20 mgraid-S000030311-1 attrd: [16634]: info: attrd_ha_callback: flush message from mgraid-s000030311-1
Apr 14 20:35:20 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='terminate'] does not exist
Apr 14 20:35:20 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for terminate=(null) passed
Apr 14 20:35:20 mgraid-S000030311-1 cib: [16631]: WARN: cib_process_replace: Replacement 0.51.1 not applied to 0.55.1: current epoch is greater than the replacement
Apr 14 20:35:20 mgraid-S000030311-1 cib: [16631]: WARN: cib_process_request: Operation complete: op cib_replace for section 'all' (origin=mgraid-s000030311-0/crm_shadow/2, version=0.55.1): Update was older than existing configuration (rc=-45)
Apr 14 20:35:20 mgraid-S000030311-1 crmd: [16635]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/pengine
Apr 14 20:35:20 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
Apr 14 20:35:20 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_INTEGRATE_TIMER_START
Apr 14 20:35:20 mgraid-S000030311-1 crmd: [16635]: debug: crm_timer_start: Started Integration Timer (I_INTEGRATED:180000ms), src=80
Apr 14 20:35:20 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_FINALIZE_TIMER_STOP
Apr 14 20:35:20 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_DC_TAKEOVER
Apr 14 20:35:20 mgraid-S000030311-1 crmd: [16635]: info: do_dc_takeover: Taking over DC status for this partition
Apr 14 20:35:20 mgraid-S000030311-1 cib: [16631]: info: cib_process_readwrite: We are now in R/O mode
Apr 14 20:35:20 mgraid-S000030311-1 cib: [16631]: info: cib_process_request: Operation complete: op cib_slave_all for section 'all' (origin=local/crmd/72, version=0.55.1): ok (rc=0)
Apr 14 20:35:20 mgraid-S000030311-1 cib: [16631]: info: cib_process_readwrite: We are now in R/W mode
Apr 14 20:35:21 mgraid-S000030311-1 cib: [16631]: info: cib_process_request: Operation complete: op cib_master for section 'all' (origin=local/crmd/73, version=0.55.1): ok (rc=0)
Apr 14 20:35:21 mgraid-S000030311-1 cib: [16631]: info: cib_process_request: Operation complete: op cib_modify for section cib (origin=local/crmd/74, version=0.55.1): ok (rc=0)
Apr 14 20:35:21 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: Processing cib_query op for //cib/configuration/crm_config//nvpair[@name='dc-version'] (/cib/configuration/crm_config/cluster_property_set/nvpair[1])
Apr 14 20:35:21 mgraid-S000030311-1 crmd: [16635]: debug: log_data_element: find_nvpair_attr: Match <nvpair id="cib-bootstrap-options-dc-version" name="dc-version" value="1.0.9-89bd754939df5150de7cd76835f98fe90851b677" />
Apr 14 20:35:21 mgraid-S000030311-1 cib: [16631]: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/76, version=0.55.1): ok (rc=0)
Apr 14 20:35:21 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: Processing cib_query op for //cib/configuration/crm_config//nvpair[@name='cluster-infrastructure'] (/cib/configuration/crm_config/cluster_property_set/nvpair[2])
Apr 14 20:35:21 mgraid-S000030311-1 crmd: [16635]: debug: log_data_element: find_nvpair_attr: Match <nvpair id="cib-bootstrap-options-cluster-infrastructure" name="cluster-infrastructure" value="Heartbeat" />
Apr 14 20:35:21 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_DC_JOIN_OFFER_ALL
Apr 14 20:35:21 mgraid-S000030311-1 crmd: [16635]: debug: initialize_join: join-3: Initializing join data (flag=true)
Apr 14 20:35:21 mgraid-S000030311-1 crmd: [16635]: debug: join_make_offer: join-3: Sending offer to mgraid-s000030311-0
Apr 14 20:35:21 mgraid-S000030311-1 crmd: [16635]: debug: join_make_offer: join-3: Sending offer to mgraid-s000030311-1
Apr 14 20:35:21 mgraid-S000030311-1 crmd: [16635]: info: do_dc_join_offer_all: join-3: Waiting on 2 outstanding join acks
Apr 14 20:35:21 mgraid-S000030311-1 crmd: [16635]: debug: do_cib_replaced: Updating the CIB after a replace: DC=true
Apr 14 20:35:21 mgraid-S000030311-1 crmd: [16635]: info: populate_cib_nodes_ha: Requesting the list of configured nodes
Apr 14 20:35:21 mgraid-S000030311-1 cib: [16631]: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/78, version=0.55.1): ok (rc=0)
Apr 14 20:35:21 mgraid-S000030311-1 ccm: [16630]: info: client (pid=17988) removed from ccm
Apr 14 20:35:21 mgraid-S000030311-1 crmd: [16635]: debug: populate_cib_nodes_ha: Node: mgraid-s000030311-1 (uuid: 856c1f72-7cd1-4906-8183-8be87eef96f2)
Apr 14 20:35:21 mgraid-S000030311-1 crmd: [16635]: debug: populate_cib_nodes_ha: Node: mgraid-s000030311-0 (uuid: f4e5e15c-d06b-4e37-89b9-4621af05128f)
Apr 14 20:35:21 mgraid-S000030311-1 crmd: [16635]: debug: ghash_update_cib_node: Updating mgraid-s000030311-0: true (overwrite=true) hash_size=0
Apr 14 20:35:21 mgraid-S000030311-1 crmd: [16635]: debug: ghash_update_cib_node: Updating mgraid-s000030311-1: true (overwrite=true) hash_size=0
Apr 14 20:35:21 mgraid-S000030311-1 crmd: [16635]: debug: do_cib_replaced: Updating the CIB after a replace: DC=true
Apr 14 20:35:21 mgraid-S000030311-1 crmd: [16635]: info: populate_cib_nodes_ha: Requesting the list of configured nodes
Apr 14 20:35:21 mgraid-S000030311-1 cib: [16631]: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/80, version=0.55.1): ok (rc=0)
Apr 14 20:35:22 mgraid-S000030311-1 ccm: [16630]: info: client (pid=17995) removed from ccm
Apr 14 20:35:22 mgraid-S000030311-1 crmd: [16635]: debug: populate_cib_nodes_ha: Node: mgraid-s000030311-1 (uuid: 856c1f72-7cd1-4906-8183-8be87eef96f2)
Apr 14 20:35:22 mgraid-S000030311-1 crmd: [16635]: debug: populate_cib_nodes_ha: Node: mgraid-s000030311-0 (uuid: f4e5e15c-d06b-4e37-89b9-4621af05128f)
Apr 14 20:35:22 mgraid-S000030311-1 crmd: [16635]: debug: ghash_update_cib_node: Updating mgraid-s000030311-0: true (overwrite=true) hash_size=0
Apr 14 20:35:22 mgraid-S000030311-1 crmd: [16635]: debug: ghash_update_cib_node: Updating mgraid-s000030311-1: true (overwrite=true) hash_size=0
Apr 14 20:35:22 mgraid-S000030311-1 crmd: [16635]: debug: do_cib_replaced: Updating the CIB after a replace: DC=true
Apr 14 20:35:22 mgraid-S000030311-1 crmd: [16635]: info: populate_cib_nodes_ha: Requesting the list of configured nodes
Apr 14 20:35:22 mgraid-S000030311-1 cib: [16631]: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/82, version=0.55.1): ok (rc=0)
Apr 14 20:35:22 mgraid-S000030311-1 crmd: [16635]: debug: populate_cib_nodes_ha: Node: mgraid-s000030311-1 (uuid: 856c1f72-7cd1-4906-8183-8be87eef96f2)
Apr 14 20:35:23 mgraid-S000030311-1 ccm: [16630]: info: client (pid=18002) removed from ccm
Apr 14 20:35:23 mgraid-S000030311-1 crmd: [16635]: debug: populate_cib_nodes_ha: Node: mgraid-s000030311-0 (uuid: f4e5e15c-d06b-4e37-89b9-4621af05128f)
Apr 14 20:35:23 mgraid-S000030311-1 crmd: [16635]: debug: ghash_update_cib_node: Updating mgraid-s000030311-0: true (overwrite=true) hash_size=0
Apr 14 20:35:23 mgraid-S000030311-1 crmd: [16635]: debug: ghash_update_cib_node: Updating mgraid-s000030311-1: true (overwrite=true) hash_size=0
Apr 14 20:35:23 mgraid-S000030311-1 crmd: [16635]: debug: do_cib_replaced: Updating the CIB after a replace: DC=true
Apr 14 20:35:23 mgraid-S000030311-1 crmd: [16635]: info: populate_cib_nodes_ha: Requesting the list of configured nodes
Apr 14 20:35:23 mgraid-S000030311-1 cib: [16631]: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/84, version=0.55.1): ok (rc=0)
Apr 14 20:35:23 mgraid-S000030311-1 crmd: [16635]: debug: populate_cib_nodes_ha: Node: mgraid-s000030311-1 (uuid: 856c1f72-7cd1-4906-8183-8be87eef96f2)
Apr 14 20:35:23 mgraid-S000030311-1 crmd: [16635]: debug: populate_cib_nodes_ha: Node: mgraid-s000030311-0 (uuid: f4e5e15c-d06b-4e37-89b9-4621af05128f)
Apr 14 20:35:23 mgraid-S000030311-1 crmd: [16635]: debug: ghash_update_cib_node: Updating mgraid-s000030311-0: true (overwrite=true) hash_size=0
Apr 14 20:35:23 mgraid-S000030311-1 crmd: [16635]: debug: ghash_update_cib_node: Updating mgraid-s000030311-1: true (overwrite=true) hash_size=0
Apr 14 20:35:23 mgraid-S000030311-1 crmd: [16635]: debug: do_cib_replaced: Updating the CIB after a replace: DC=true
Apr 14 20:35:23 mgraid-S000030311-1 crmd: [16635]: info: populate_cib_nodes_ha: Requesting the list of configured nodes
Apr 14 20:35:23 mgraid-S000030311-1 cib: [16631]: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/86, version=0.55.1): ok (rc=0)
Apr 14 20:35:24 mgraid-S000030311-1 ccm: [16630]: info: client (pid=18009) removed from ccm
Apr 14 20:35:24 mgraid-S000030311-1 crmd: [16635]: debug: populate_cib_nodes_ha: Node: mgraid-s000030311-1 (uuid: 856c1f72-7cd1-4906-8183-8be87eef96f2)
Apr 14 20:35:24 mgraid-S000030311-1 crmd: [16635]: debug: populate_cib_nodes_ha: Node: mgraid-s000030311-0 (uuid: f4e5e15c-d06b-4e37-89b9-4621af05128f)
Apr 14 20:35:24 mgraid-S000030311-1 crmd: [16635]: debug: ghash_update_cib_node: Updating mgraid-s000030311-0: true (overwrite=true) hash_size=0
Apr 14 20:35:24 mgraid-S000030311-1 crmd: [16635]: debug: ghash_update_cib_node: Updating mgraid-s000030311-1: true (overwrite=true) hash_size=0
Apr 14 20:35:24 mgraid-S000030311-1 crmd: [16635]: debug: config_query_callback: Call 79 : Parsing CIB options
Apr 14 20:35:24 mgraid-S000030311-1 crmd: [16635]: debug: cluster_option: Using default value '15min' for cluster option 'cluster-recheck-interval'
Apr 14 20:35:24 mgraid-S000030311-1 crmd: [16635]: debug: cluster_option: Using default value '2min' for cluster option 'election-timeout'
Apr 14 20:35:24 mgraid-S000030311-1 crmd: [16635]: debug: cluster_option: Using default value '20min' for cluster option 'shutdown-escalation'
Apr 14 20:35:24 mgraid-S000030311-1 crmd: [16635]: debug: cluster_option: Using default value '3min' for cluster option 'crmd-integration-timeout'
Apr 14 20:35:24 mgraid-S000030311-1 crmd: [16635]: debug: cluster_option: Using default value '30min' for cluster option 'crmd-finalization-timeout'
Apr 14 20:35:24 mgraid-S000030311-1 crmd: [16635]: debug: cluster_option: Using default value '2' for cluster option 'expected-quorum-votes'
Apr 14 20:35:24 mgraid-S000030311-1 crmd: [16635]: info: config_query_callback: Checking for expired actions every 900000ms
Apr 14 20:35:24 mgraid-S000030311-1 crmd: [16635]: debug: s_crmd_fsa: Processing I_ELECTION: [ state=S_INTEGRATION cause=C_FSA_INTERNAL origin=do_cib_replaced ]
Apr 14 20:35:24 mgraid-S000030311-1 crmd: [16635]: info: do_state_transition: State transition S_INTEGRATION -> S_ELECTION [ input=I_ELECTION cause=C_FSA_INTERNAL origin=do_cib_replaced ]
Apr 14 20:35:24 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
Apr 14 20:35:24 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_INTEGRATE_TIMER_STOP
Apr 14 20:35:24 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_FINALIZE_TIMER_STOP
Apr 14 20:35:24 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_VOTE
Apr 14 20:35:24 mgraid-S000030311-1 crmd: [16635]: debug: do_election_vote: Started election 21
Apr 14 20:35:24 mgraid-S000030311-1 crmd: [16635]: debug: crm_timer_start: Started Election Timeout (I_ELECTION_DC:120000ms), src=93
Apr 14 20:35:24 mgraid-S000030311-1 crmd: [16635]: debug: s_crmd_fsa: Processing I_ELECTION: [ state=S_ELECTION cause=C_FSA_INTERNAL origin=do_cib_replaced ]
Apr 14 20:35:24 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_VOTE
Apr 14 20:35:24 mgraid-S000030311-1 crmd: [16635]: debug: do_election_vote: Started election 22
Apr 14 20:35:24 mgraid-S000030311-1 crmd: [16635]: debug: crm_timer_start: Election Timeout (I_ELECTION_DC:120000ms) already running: src=93
Apr 14 20:35:24 mgraid-S000030311-1 crmd: [16635]: debug: s_crmd_fsa: Processing I_ELECTION: [ state=S_ELECTION cause=C_FSA_INTERNAL origin=do_cib_replaced ]
Apr 14 20:35:24 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_VOTE
Apr 14 20:35:24 mgraid-S000030311-1 crmd: [16635]: debug: do_election_vote: Started election 23
Apr 14 20:35:24 mgraid-S000030311-1 crmd: [16635]: debug: crm_timer_start: Election Timeout (I_ELECTION_DC:120000ms) already running: src=93
Apr 14 20:35:24 mgraid-S000030311-1 crmd: [16635]: debug: s_crmd_fsa: Processing I_ELECTION: [ state=S_ELECTION cause=C_FSA_INTERNAL origin=do_cib_replaced ]
Apr 14 20:35:24 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_VOTE
Apr 14 20:35:24 mgraid-S000030311-1 crmd: [16635]: debug: do_election_vote: Started election 24
Apr 14 20:35:24 mgraid-S000030311-1 crmd: [16635]: debug: crm_timer_start: Election Timeout (I_ELECTION_DC:120000ms) already running: src=93
Apr 14 20:35:24 mgraid-S000030311-1 crmd: [16635]: debug: s_crmd_fsa: Processing I_ELECTION: [ state=S_ELECTION cause=C_FSA_INTERNAL origin=do_cib_replaced ]
Apr 14 20:35:24 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_VOTE
Apr 14 20:35:24 mgraid-S000030311-1 crmd: [16635]: debug: do_election_vote: Started election 25
Apr 14 20:35:24 mgraid-S000030311-1 crmd: [16635]: debug: crm_timer_start: Election Timeout (I_ELECTION_DC:120000ms) already running: src=93
Apr 14 20:35:24 mgraid-S000030311-1 crmd: [16635]: debug: handle_request: Raising I_JOIN_OFFER: join-3
Apr 14 20:35:24 mgraid-S000030311-1 crmd: [16635]: debug: s_crmd_fsa: Processing I_JOIN_OFFER: [ state=S_ELECTION cause=C_HA_MESSAGE origin=route_message ]
Apr 14 20:35:24 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_WARN  
Apr 14 20:35:24 mgraid-S000030311-1 crmd: [16635]: WARN: do_log: FSA: Input I_JOIN_OFFER from route_message() received in state S_ELECTION
Apr 14 20:35:24 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_VOTE
Apr 14 20:35:24 mgraid-S000030311-1 crmd: [16635]: debug: do_election_vote: Started election 26
Apr 14 20:35:24 mgraid-S000030311-1 crmd: [16635]: debug: crm_timer_start: Election Timeout (I_ELECTION_DC:120000ms) already running: src=93
Apr 14 20:35:24 mgraid-S000030311-1 crmd: [16635]: debug: s_crmd_fsa: Processing I_JOIN_REQUEST: [ state=S_ELECTION cause=C_HA_MESSAGE origin=route_message ]
Apr 14 20:35:24 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_WARN  
Apr 14 20:35:24 mgraid-S000030311-1 crmd: [16635]: WARN: do_log: FSA: Input I_JOIN_REQUEST from route_message() received in state S_ELECTION
Apr 14 20:35:24 mgraid-S000030311-1 cib: [16631]: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/88, version=0.55.1): ok (rc=0)
Apr 14 20:35:24 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_COUNT
Apr 14 20:35:24 mgraid-S000030311-1 crmd: [16635]: debug: do_election_count_vote: Created voted hash
Apr 14 20:35:24 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_CHECK
Apr 14 20:35:24 mgraid-S000030311-1 crmd: [16635]: debug: do_election_check: Still waiting on 2 non-votes (2 total)
Apr 14 20:35:24 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_COUNT
Apr 14 20:35:24 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_CHECK
Apr 14 20:35:24 mgraid-S000030311-1 crmd: [16635]: debug: do_election_check: Still waiting on 2 non-votes (2 total)
Apr 14 20:35:24 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_COUNT
Apr 14 20:35:24 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_CHECK
Apr 14 20:35:24 mgraid-S000030311-1 crmd: [16635]: debug: do_election_check: Still waiting on 2 non-votes (2 total)
Apr 14 20:35:24 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_COUNT
Apr 14 20:35:24 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_CHECK
Apr 14 20:35:24 mgraid-S000030311-1 crmd: [16635]: debug: do_election_check: Still waiting on 2 non-votes (2 total)
Apr 14 20:35:24 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_COUNT
Apr 14 20:35:24 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_CHECK
Apr 14 20:35:24 mgraid-S000030311-1 crmd: [16635]: debug: do_election_check: Still waiting on 2 non-votes (2 total)
Apr 14 20:35:24 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_COUNT
Apr 14 20:35:24 mgraid-S000030311-1 crmd: [16635]: debug: do_election_count_vote: Election 26 (current: 26, owner: 856c1f72-7cd1-4906-8183-8be87eef96f2): Processed vote from mgraid-s000030311-1 (Recorded)
Apr 14 20:35:24 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_CHECK
Apr 14 20:35:24 mgraid-S000030311-1 crmd: [16635]: debug: do_election_check: Still waiting on 1 non-votes (2 total)
Apr 14 20:35:25 mgraid-S000030311-1 ccm: [16630]: info: client (pid=18011) removed from ccm
Apr 14 20:35:25 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_COUNT
Apr 14 20:35:25 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_CHECK
Apr 14 20:35:25 mgraid-S000030311-1 crmd: [16635]: debug: do_election_check: Still waiting on 1 non-votes (2 total)
Apr 14 20:35:25 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_COUNT
Apr 14 20:35:25 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_CHECK
Apr 14 20:35:25 mgraid-S000030311-1 crmd: [16635]: debug: do_election_check: Still waiting on 1 non-votes (2 total)
Apr 14 20:35:25 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_COUNT
Apr 14 20:35:25 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_CHECK
Apr 14 20:35:25 mgraid-S000030311-1 crmd: [16635]: debug: do_election_check: Still waiting on 1 non-votes (2 total)
Apr 14 20:35:25 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_COUNT
Apr 14 20:35:25 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_CHECK
Apr 14 20:35:25 mgraid-S000030311-1 crmd: [16635]: debug: do_election_check: Still waiting on 1 non-votes (2 total)
Apr 14 20:35:25 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_COUNT
Apr 14 20:35:25 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_CHECK
Apr 14 20:35:25 mgraid-S000030311-1 crmd: [16635]: debug: do_election_check: Still waiting on 1 non-votes (2 total)
Apr 14 20:35:25 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_COUNT
Apr 14 20:35:25 mgraid-S000030311-1 crmd: [16635]: debug: do_election_count_vote: Election 26 (current: 26, owner: 856c1f72-7cd1-4906-8183-8be87eef96f2): Processed no-vote from mgraid-s000030311-0 (Recorded)
Apr 14 20:35:25 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_CHECK
Apr 14 20:35:25 mgraid-S000030311-1 crmd: [16635]: debug: do_election_check: Destroying voted hash
Apr 14 20:35:25 mgraid-S000030311-1 crmd: [16635]: debug: s_crmd_fsa: Processing I_ELECTION_DC: [ state=S_ELECTION cause=C_FSA_INTERNAL origin=do_election_check ]
Apr 14 20:35:25 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_LOG   
Apr 14 20:35:25 mgraid-S000030311-1 crmd: [16635]: info: do_state_transition: State transition S_ELECTION -> S_INTEGRATION [ input=I_ELECTION_DC cause=C_FSA_INTERNAL origin=do_election_check ]
Apr 14 20:35:25 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_TE_START
Apr 14 20:35:25 mgraid-S000030311-1 crmd: [16635]: debug: do_te_control: The transitioner is already active
Apr 14 20:35:25 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_PE_START
Apr 14 20:35:25 mgraid-S000030311-1 crmd: [16635]: info: start_subsystem: Starting sub-system "pengine"
Apr 14 20:35:25 mgraid-S000030311-1 crmd: [16635]: WARN: start_subsystem: Client pengine already running as pid 16986
Apr 14 20:35:26 mgraid-S000030311-1 ccm: [16630]: info: client (pid=18024) removed from ccm
Apr 14 20:35:27 mgraid-S000030311-1 ccm: [16630]: info: client (pid=18033) removed from ccm
Apr 14 20:35:28 mgraid-S000030311-1 ccm: [16630]: info: client (pid=18042) removed from ccm
Apr 14 20:35:28 mgraid-S000030311-1 crmd: [16635]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/pengine
Apr 14 20:35:28 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
Apr 14 20:35:28 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_INTEGRATE_TIMER_START
Apr 14 20:35:28 mgraid-S000030311-1 crmd: [16635]: debug: crm_timer_start: Started Integration Timer (I_INTEGRATED:180000ms), src=95
Apr 14 20:35:28 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_FINALIZE_TIMER_STOP
Apr 14 20:35:28 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_DC_TAKEOVER
Apr 14 20:35:28 mgraid-S000030311-1 crmd: [16635]: info: do_dc_takeover: Taking over DC status for this partition
Apr 14 20:35:28 mgraid-S000030311-1 cib: [16631]: info: cib_process_readwrite: We are now in R/O mode
Apr 14 20:35:28 mgraid-S000030311-1 cib: [16631]: info: cib_process_request: Operation complete: op cib_slave_all for section 'all' (origin=local/crmd/90, version=0.55.1): ok (rc=0)
Apr 14 20:35:28 mgraid-S000030311-1 cib: [16631]: info: cib_process_readwrite: We are now in R/W mode
Apr 14 20:35:28 mgraid-S000030311-1 cib: [16631]: info: cib_process_request: Operation complete: op cib_master for section 'all' (origin=local/crmd/91, version=0.55.1): ok (rc=0)
Apr 14 20:35:28 mgraid-S000030311-1 cib: [16631]: info: cib_process_request: Operation complete: op cib_modify for section cib (origin=local/crmd/92, version=0.55.1): ok (rc=0)
Apr 14 20:35:28 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: Processing cib_query op for //cib/configuration/crm_config//nvpair[@name='dc-version'] (/cib/configuration/crm_config/cluster_property_set/nvpair[1])
Apr 14 20:35:28 mgraid-S000030311-1 crmd: [16635]: debug: log_data_element: find_nvpair_attr: Match <nvpair id="cib-bootstrap-options-dc-version" name="dc-version" value="1.0.9-89bd754939df5150de7cd76835f98fe90851b677" />
Apr 14 20:35:28 mgraid-S000030311-1 cib: [16631]: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/94, version=0.55.1): ok (rc=0)
Apr 14 20:35:28 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: Processing cib_query op for //cib/configuration/crm_config//nvpair[@name='cluster-infrastructure'] (/cib/configuration/crm_config/cluster_property_set/nvpair[2])
Apr 14 20:35:28 mgraid-S000030311-1 crmd: [16635]: debug: log_data_element: find_nvpair_attr: Match <nvpair id="cib-bootstrap-options-cluster-infrastructure" name="cluster-infrastructure" value="Heartbeat" />
Apr 14 20:35:28 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_DC_JOIN_OFFER_ALL
Apr 14 20:35:28 mgraid-S000030311-1 crmd: [16635]: debug: initialize_join: join-4: Initializing join data (flag=true)
Apr 14 20:35:28 mgraid-S000030311-1 crmd: [16635]: debug: join_make_offer: join-4: Sending offer to mgraid-s000030311-0
Apr 14 20:35:28 mgraid-S000030311-1 crmd: [16635]: debug: join_make_offer: join-4: Sending offer to mgraid-s000030311-1
Apr 14 20:35:28 mgraid-S000030311-1 crmd: [16635]: info: do_dc_join_offer_all: join-4: Waiting on 2 outstanding join acks
Apr 14 20:35:28 mgraid-S000030311-1 cib: [16631]: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/96, version=0.55.1): ok (rc=0)
Apr 14 20:35:28 mgraid-S000030311-1 crmd: [16635]: debug: config_query_callback: Call 97 : Parsing CIB options
Apr 14 20:35:28 mgraid-S000030311-1 crmd: [16635]: debug: cluster_option: Using default value '15min' for cluster option 'cluster-recheck-interval'
Apr 14 20:35:28 mgraid-S000030311-1 crmd: [16635]: debug: cluster_option: Using default value '2min' for cluster option 'election-timeout'
Apr 14 20:35:28 mgraid-S000030311-1 crmd: [16635]: debug: cluster_option: Using default value '20min' for cluster option 'shutdown-escalation'
Apr 14 20:35:28 mgraid-S000030311-1 crmd: [16635]: debug: cluster_option: Using default value '3min' for cluster option 'crmd-integration-timeout'
Apr 14 20:35:28 mgraid-S000030311-1 crmd: [16635]: debug: cluster_option: Using default value '30min' for cluster option 'crmd-finalization-timeout'
Apr 14 20:35:28 mgraid-S000030311-1 crmd: [16635]: debug: cluster_option: Using default value '2' for cluster option 'expected-quorum-votes'
Apr 14 20:35:28 mgraid-S000030311-1 crmd: [16635]: info: config_query_callback: Checking for expired actions every 900000ms
Apr 14 20:35:28 mgraid-S000030311-1 crmd: [16635]: debug: handle_request: Raising I_JOIN_OFFER: join-4
Apr 14 20:35:28 mgraid-S000030311-1 crmd: [16635]: debug: s_crmd_fsa: Processing I_JOIN_OFFER: [ state=S_INTEGRATION cause=C_HA_MESSAGE origin=route_message ]
Apr 14 20:35:28 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_CL_JOIN_REQUEST
Apr 14 20:35:28 mgraid-S000030311-1 crmd: [16635]: info: update_dc: Set DC to mgraid-s000030311-1 (3.0.1)
Apr 14 20:35:28 mgraid-S000030311-1 crmd: [16635]: debug: do_cl_join_offer_respond: do_cl_join_offer_respond added action A_DC_TIMER_STOP to the FSA
Apr 14 20:35:28 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
Apr 14 20:35:28 mgraid-S000030311-1 crmd: [16635]: debug: join_query_callback: Respond to join offer join-4
Apr 14 20:35:28 mgraid-S000030311-1 crmd: [16635]: debug: join_query_callback: Acknowledging mgraid-s000030311-1 as our DC
Apr 14 20:35:29 mgraid-S000030311-1 ccm: [16630]: info: client (pid=18050) removed from ccm
Apr 14 20:35:29 mgraid-S000030311-1 crmd: [16635]: debug: s_crmd_fsa: Processing I_JOIN_REQUEST: [ state=S_INTEGRATION cause=C_HA_MESSAGE origin=route_message ]
Apr 14 20:35:29 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_DC_JOIN_PROCESS_REQ
Apr 14 20:35:29 mgraid-S000030311-1 crmd: [16635]: debug: do_dc_join_filter_offer: Processing req from mgraid-s000030311-0
Apr 14 20:35:29 mgraid-S000030311-1 crmd: [16635]: debug: do_dc_join_filter_offer: join-4: Welcoming node mgraid-s000030311-0 (ref join_request-crmd-1302838529-32)
Apr 14 20:35:29 mgraid-S000030311-1 crmd: [16635]: debug: do_dc_join_filter_offer: 1 nodes have been integrated into join-4
Apr 14 20:35:29 mgraid-S000030311-1 crmd: [16635]: debug: check_join_state: Invoked by do_dc_join_filter_offer in state: S_INTEGRATION
Apr 14 20:35:29 mgraid-S000030311-1 crmd: [16635]: debug: do_dc_join_filter_offer: join-4: Still waiting on 1 outstanding offers
Apr 14 20:35:29 mgraid-S000030311-1 crmd: [16635]: debug: s_crmd_fsa: Processing I_JOIN_REQUEST: [ state=S_INTEGRATION cause=C_HA_MESSAGE origin=route_message ]
Apr 14 20:35:29 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_DC_JOIN_PROCESS_REQ
Apr 14 20:35:29 mgraid-S000030311-1 crmd: [16635]: debug: do_dc_join_filter_offer: Processing req from mgraid-s000030311-1
Apr 14 20:35:29 mgraid-S000030311-1 crmd: [16635]: debug: do_dc_join_filter_offer: mgraid-s000030311-1 has a better generation number than the current max mgraid-s000030311-0
Apr 14 20:35:29 mgraid-S000030311-1 crmd: [16635]: debug: log_data_element: do_dc_join_filter_offer: Max generation <generation_tuple validate-with="pacemaker-1.0" crm_feature_set="3.0.1" have-quorum="1" dc-uuid="856c1f72-7cd1-4906-8183-8be87eef96f2" epoch="55" admin_epoch="0" num_updates="1" />
Apr 14 20:35:29 mgraid-S000030311-1 crmd: [16635]: debug: log_data_element: do_dc_join_filter_offer: Their generation <generation_tuple epoch="55" num_updates="1" admin_epoch="0" validate-with="pacemaker-1.0" crm_feature_set="3.0.1" have-quorum="1" dc-uuid="856c1f72-7cd1-4906-8183-8be87eef96f2" cib-last-written="Thu Apr 14 20:35:20 2011" />
Apr 14 20:35:29 mgraid-S000030311-1 crmd: [16635]: debug: do_dc_join_filter_offer: join-4: Welcoming node mgraid-s000030311-1 (ref join_request-crmd-1302838528-44)
Apr 14 20:35:29 mgraid-S000030311-1 crmd: [16635]: debug: do_dc_join_filter_offer: 2 nodes have been integrated into join-4
Apr 14 20:35:29 mgraid-S000030311-1 crmd: [16635]: debug: check_join_state: Invoked by do_dc_join_filter_offer in state: S_INTEGRATION
Apr 14 20:35:29 mgraid-S000030311-1 crmd: [16635]: debug: check_join_state: join-4: Integration of 2 peers complete: do_dc_join_filter_offer
Apr 14 20:35:29 mgraid-S000030311-1 crmd: [16635]: debug: s_crmd_fsa: Processing I_INTEGRATED: [ state=S_INTEGRATION cause=C_FSA_INTERNAL origin=check_join_state ]
Apr 14 20:35:29 mgraid-S000030311-1 crmd: [16635]: info: do_state_transition: State transition S_INTEGRATION -> S_FINALIZE_JOIN [ input=I_INTEGRATED cause=C_FSA_INTERNAL origin=check_join_state ]
Apr 14 20:35:29 mgraid-S000030311-1 crmd: [16635]: info: do_state_transition: All 2 cluster nodes responded to the join offer.
Apr 14 20:35:29 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
Apr 14 20:35:29 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_INTEGRATE_TIMER_STOP
Apr 14 20:35:29 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_FINALIZE_TIMER_START
Apr 14 20:35:29 mgraid-S000030311-1 crmd: [16635]: debug: crm_timer_start: Started Finalization Timer (I_ELECTION:1800000ms), src=99
Apr 14 20:35:29 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_DC_JOIN_FINALIZE
Apr 14 20:35:29 mgraid-S000030311-1 crmd: [16635]: debug: do_dc_join_finalize: Finializing join-4 for 2 clients
Apr 14 20:35:29 mgraid-S000030311-1 crmd: [16635]: info: do_dc_join_finalize: join-4: Syncing the CIB from mgraid-s000030311-1 to the rest of the cluster
Apr 14 20:35:29 mgraid-S000030311-1 crmd: [16635]: debug: fsa_dump_inputs: Added input: 0000000000020000 (R_HAVE_CIB)
Apr 14 20:35:29 mgraid-S000030311-1 cib: [16631]: debug: sync_our_cib: Syncing CIB to all peers
Apr 14 20:35:29 mgraid-S000030311-1 cib: [16631]: info: cib_process_request: Operation complete: op cib_sync for section 'all' (origin=local/crmd/99, version=0.55.1): ok (rc=0)
Apr 14 20:35:29 mgraid-S000030311-1 crmd: [16635]: debug: check_join_state: Invoked by finalize_sync_callback in state: S_FINALIZE_JOIN
Apr 14 20:35:29 mgraid-S000030311-1 crmd: [16635]: debug: check_join_state: join-4: Still waiting on 2 integrated nodes
Apr 14 20:35:29 mgraid-S000030311-1 crmd: [16635]: debug: finalize_sync_callback: Notifying 2 clients of join-4 results
Apr 14 20:35:29 mgraid-S000030311-1 crmd: [16635]: debug: finalize_join_for: join-4: ACK'ing join request from mgraid-s000030311-0, state member
Apr 14 20:35:29 mgraid-S000030311-1 crmd: [16635]: debug: finalize_join_for: join-4: ACK'ing join request from mgraid-s000030311-1, state member
Apr 14 20:35:29 mgraid-S000030311-1 cib: [16631]: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/100, version=0.55.1): ok (rc=0)
Apr 14 20:35:29 mgraid-S000030311-1 cib: [16631]: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/101, version=0.55.1): ok (rc=0)
Apr 14 20:35:29 mgraid-S000030311-1 crmd: [16635]: debug: handle_request: Raising I_JOIN_RESULT: join-4
Apr 14 20:35:29 mgraid-S000030311-1 crmd: [16635]: debug: s_crmd_fsa: Processing I_JOIN_RESULT: [ state=S_FINALIZE_JOIN cause=C_HA_MESSAGE origin=route_message ]
Apr 14 20:35:29 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_CL_JOIN_RESULT
Apr 14 20:35:29 mgraid-S000030311-1 crmd: [16635]: debug: do_cl_join_finalize_respond: Confirming join join-4: join_ack_nack
Apr 14 20:35:29 mgraid-S000030311-1 crmd: [16635]: debug: do_cl_join_finalize_respond: join-4: Join complete.  Sending local LRM status to mgraid-s000030311-1
Apr 14 20:35:29 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_DC_JOIN_PROCESS_ACK
Apr 14 20:35:29 mgraid-S000030311-1 crmd: [16635]: debug: do_dc_join_ack: Ignoring op=join_ack_nack message from mgraid-s000030311-1
Apr 14 20:35:29 mgraid-S000030311-1 attrd: [16634]: info: attrd_ha_callback: flush message from mgraid-s000030311-0
Apr 14 20:35:29 mgraid-S000030311-1 crmd: [16635]: debug: s_crmd_fsa: Processing I_JOIN_RESULT: [ state=S_FINALIZE_JOIN cause=C_HA_MESSAGE origin=route_message ]
Apr 14 20:35:29 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_CL_JOIN_RESULT
Apr 14 20:35:29 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_DC_JOIN_PROCESS_ACK
Apr 14 20:35:29 mgraid-S000030311-1 crmd: [16635]: info: do_dc_join_ack: join-4: Updating node state to member for mgraid-s000030311-0
Apr 14 20:35:29 mgraid-S000030311-1 crmd: [16635]: debug: do_dc_join_ack: join-4: Registered callback for LRM update 103
Apr 14 20:35:29 mgraid-S000030311-1 crmd: [16635]: debug: s_crmd_fsa: Processing I_JOIN_RESULT: [ state=S_FINALIZE_JOIN cause=C_HA_MESSAGE origin=route_message ]
Apr 14 20:35:29 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_CL_JOIN_RESULT
Apr 14 20:35:29 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_DC_JOIN_PROCESS_ACK
Apr 14 20:35:29 mgraid-S000030311-1 crmd: [16635]: info: do_dc_join_ack: join-4: Updating node state to member for mgraid-s000030311-1
Apr 14 20:35:29 mgraid-S000030311-1 crmd: [16635]: debug: do_dc_join_ack: join-4: Registered callback for LRM update 105
Apr 14 20:35:29 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: Processing cib_delete op for //node_state[@uname='mgraid-s000030311-0']/lrm (/cib/status/node_state[1]/lrm)
Apr 14 20:35:29 mgraid-S000030311-1 cib: [16631]: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='mgraid-s000030311-0']/lrm (origin=local/crmd/102, version=0.55.2): ok (rc=0)
Apr 14 20:35:29 mgraid-S000030311-1 crmd: [16635]: info: erase_xpath_callback: Deletion of "//node_state[@uname='mgraid-s000030311-0']/lrm": ok (rc=0)
Apr 14 20:35:29 mgraid-S000030311-1 crmd: [16635]: debug: join_update_complete_callback: Join update 103 complete
Apr 14 20:35:29 mgraid-S000030311-1 crmd: [16635]: debug: check_join_state: Invoked by join_update_complete_callback in state: S_FINALIZE_JOIN
Apr 14 20:35:29 mgraid-S000030311-1 crmd: [16635]: debug: check_join_state: join-4 complete: join_update_complete_callback
Apr 14 20:35:29 mgraid-S000030311-1 crmd: [16635]: debug: s_crmd_fsa: Processing I_FINALIZED: [ state=S_FINALIZE_JOIN cause=C_FSA_INTERNAL origin=check_join_state ]
Apr 14 20:35:29 mgraid-S000030311-1 crmd: [16635]: info: do_state_transition: State transition S_FINALIZE_JOIN -> S_POLICY_ENGINE [ input=I_FINALIZED cause=C_FSA_INTERNAL origin=check_join_state ]
Apr 14 20:35:29 mgraid-S000030311-1 crmd: [16635]: info: populate_cib_nodes_ha: Requesting the list of configured nodes
Apr 14 20:35:29 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: Processing cib_delete op for //node_state[@uname='mgraid-s000030311-1']/lrm (/cib/status/node_state[2]/lrm)
Apr 14 20:35:29 mgraid-S000030311-1 cib: [16631]: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='mgraid-s000030311-1']/lrm (origin=local/crmd/104, version=0.55.4): ok (rc=0)
Apr 14 20:35:29 mgraid-S000030311-1 crmd: [16635]: debug: populate_cib_nodes_ha: Node: mgraid-s000030311-1 (uuid: 856c1f72-7cd1-4906-8183-8be87eef96f2)
Apr 14 20:35:29 mgraid-S000030311-1 crmd: [16635]: debug: populate_cib_nodes_ha: Node: mgraid-s000030311-0 (uuid: f4e5e15c-d06b-4e37-89b9-4621af05128f)
Apr 14 20:35:29 mgraid-S000030311-1 crmd: [16635]: debug: ghash_update_cib_node: Updating mgraid-s000030311-0: true (overwrite=true) hash_size=2
Apr 14 20:35:29 mgraid-S000030311-1 crmd: [16635]: debug: ghash_update_cib_node: Updating mgraid-s000030311-1: true (overwrite=true) hash_size=2
Apr 14 20:35:29 mgraid-S000030311-1 crmd: [16635]: info: do_state_transition: All 2 cluster nodes are eligible to run resources.
Apr 14 20:35:29 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
Apr 14 20:35:29 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_INTEGRATE_TIMER_STOP
Apr 14 20:35:29 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_FINALIZE_TIMER_STOP
Apr 14 20:35:29 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_DC_JOIN_FINAL
Apr 14 20:35:29 mgraid-S000030311-1 crmd: [16635]: info: do_dc_join_final: Ensuring DC, quorum and node attributes are up-to-date
Apr 14 20:35:29 mgraid-S000030311-1 crmd: [16635]: debug: attrd_update: Sent update: (null)=(null) for localhost
Apr 14 20:35:29 mgraid-S000030311-1 crmd: [16635]: info: crm_update_quorum: Updating quorum status to true (call=108)
Apr 14 20:35:29 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_TE_CANCEL
Apr 14 20:35:29 mgraid-S000030311-1 crmd: [16635]: debug: do_te_invoke: Cancelling the transition: inactive
Apr 14 20:35:29 mgraid-S000030311-1 crmd: [16635]: info: abort_transition_graph: do_te_invoke:191 - Triggered transition abort (complete=1) : Peer Cancelled
Apr 14 20:35:29 mgraid-S000030311-1 crmd: [16635]: debug: s_crmd_fsa: Processing I_PE_CALC: [ state=S_POLICY_ENGINE cause=C_FSA_INTERNAL origin=abort_transition_graph ]
Apr 14 20:35:29 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_PE_INVOKE
Apr 14 20:35:29 mgraid-S000030311-1 crmd: [16635]: info: do_pe_invoke: Query 109: Requesting the current CIB: S_POLICY_ENGINE
Apr 14 20:35:29 mgraid-S000030311-1 crmd: [16635]: debug: te_update_diff: Processing diff (cib_delete): 0.55.3 -> 0.55.4 (S_POLICY_ENGINE)
Apr 14 20:35:29 mgraid-S000030311-1 crmd: [16635]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify/cib_update_result/diff
Apr 14 20:35:29 mgraid-S000030311-1 crmd: [16635]: info: erase_xpath_callback: Deletion of "//node_state[@uname='mgraid-s000030311-1']/lrm": ok (rc=0)
Apr 14 20:35:29 mgraid-S000030311-1 crmd: [16635]: debug: te_update_diff: Processing diff (cib_modify): 0.55.4 -> 0.55.5 (S_POLICY_ENGINE)
Apr 14 20:35:29 mgraid-S000030311-1 crmd: [16635]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify/cib_update_result/diff
Apr 14 20:35:29 mgraid-S000030311-1 crmd: [16635]: debug: join_update_complete_callback: Join update 105 complete
Apr 14 20:35:29 mgraid-S000030311-1 crmd: [16635]: debug: check_join_state: Invoked by join_update_complete_callback in state: S_POLICY_ENGINE
Apr 14 20:35:29 mgraid-S000030311-1 cib: [16631]: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/106, version=0.55.5): ok (rc=0)
Apr 14 20:35:29 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='shutdown'] does not exist
Apr 14 20:35:29 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for shutdown=(null) passed
Apr 14 20:35:29 mgraid-S000030311-1 attrd: [16634]: info: attrd_ha_callback: flush message from mgraid-s000030311-0
Apr 14 20:35:29 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='terminate'] does not exist
Apr 14 20:35:30 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for terminate=(null) passed
Apr 14 20:35:30 mgraid-S000030311-1 attrd: [16634]: info: attrd_ha_callback: flush message from mgraid-s000030311-0
Apr 14 20:35:30 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='probe_complete'] (/cib/status/node_state[2]/transient_attributes/instance_attributes/nvpair)
Apr 14 20:35:30 mgraid-S000030311-1 attrd: [16634]: debug: log_data_element: find_nvpair_attr: Match <nvpair id="status-856c1f72-7cd1-4906-8183-8be87eef96f2-probe_complete" name="probe_complete" value="true" />
Apr 14 20:35:30 mgraid-S000030311-1 attrd: [16634]: info: attrd_local_callback: Sending full refresh (origin=crmd)
Apr 14 20:35:30 mgraid-S000030311-1 attrd: [16634]: info: attrd_trigger_update: Sending flush op to all hosts for: shutdown (<null>)
Apr 14 20:35:30 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='shutdown'] does not exist
Apr 14 20:35:30 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for shutdown=(null) passed
Apr 14 20:35:30 mgraid-S000030311-1 attrd: [16634]: info: attrd_trigger_update: Sending flush op to all hosts for: terminate (<null>)
Apr 14 20:35:30 mgraid-S000030311-1 cib: [16631]: info: cib_process_request: Operation complete: op cib_modify for section cib (origin=local/crmd/108, version=0.55.5): ok (rc=0)
Apr 14 20:35:30 mgraid-S000030311-1 crmd: [16635]: info: do_pe_invoke_callback: Invoking the PE: query=109, ref=pe_calc-dc-1302838530-48, seq=2, quorate=1
Apr 14 20:35:30 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='terminate'] does not exist
Apr 14 20:35:30 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for terminate=(null) passed
Apr 14 20:35:30 mgraid-S000030311-1 attrd: [16634]: info: attrd_trigger_update: Sending flush op to all hosts for: probe_complete (true)
Apr 14 20:35:30 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='probe_complete'] (/cib/status/node_state[2]/transient_attributes/instance_attributes/nvpair)
Apr 14 20:35:30 mgraid-S000030311-1 attrd: [16634]: debug: log_data_element: find_nvpair_attr: Match <nvpair id="status-856c1f72-7cd1-4906-8183-8be87eef96f2-probe_complete" name="probe_complete" value="true" />
Apr 14 20:35:30 mgraid-S000030311-1 pengine: [16986]: debug: cluster_option: Using default value 'stop' for cluster option 'no-quorum-policy'
Apr 14 20:35:30 mgraid-S000030311-1 pengine: [16986]: debug: cluster_option: Using default value 'true' for cluster option 'symmetric-cluster'
Apr 14 20:35:30 mgraid-S000030311-1 pengine: [16986]: debug: cluster_option: Using default value '0' for cluster option 'default-resource-stickiness'
Apr 14 20:35:30 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update 144 for probe_complete=true passed
Apr 14 20:35:30 mgraid-S000030311-1 pengine: [16986]: debug: cluster_option: Using default value 'true' for cluster option 'is-managed-default'
Apr 14 20:35:30 mgraid-S000030311-1 pengine: [16986]: debug: cluster_option: Using default value 'false' for cluster option 'maintenance-mode'
Apr 14 20:35:30 mgraid-S000030311-1 pengine: [16986]: debug: cluster_option: Using default value 'true' for cluster option 'start-failure-is-fatal'
Apr 14 20:35:30 mgraid-S000030311-1 pengine: [16986]: debug: cluster_option: Using default value 'reboot' for cluster option 'stonith-action'
Apr 14 20:35:30 mgraid-S000030311-1 pengine: [16986]: debug: cluster_option: Using default value '60s' for cluster option 'stonith-timeout'
Apr 14 20:35:30 mgraid-S000030311-1 pengine: [16986]: debug: cluster_option: Using default value 'true' for cluster option 'startup-fencing'
Apr 14 20:35:30 mgraid-S000030311-1 pengine: [16986]: debug: cluster_option: Using default value '60s' for cluster option 'cluster-delay'
Apr 14 20:35:30 mgraid-S000030311-1 pengine: [16986]: debug: cluster_option: Using default value '30' for cluster option 'batch-limit'
Apr 14 20:35:30 mgraid-S000030311-1 pengine: [16986]: debug: cluster_option: Using default value '20s' for cluster option 'default-action-timeout'
Apr 14 20:35:30 mgraid-S000030311-1 pengine: [16986]: debug: cluster_option: Using default value 'false' for cluster option 'stop-all-resources'
Apr 14 20:35:30 mgraid-S000030311-1 pengine: [16986]: debug: cluster_option: Using default value 'true' for cluster option 'stop-orphan-resources'
Apr 14 20:35:30 mgraid-S000030311-1 pengine: [16986]: debug: cluster_option: Using default value 'true' for cluster option 'stop-orphan-actions'
Apr 14 20:35:30 mgraid-S000030311-1 pengine: [16986]: debug: cluster_option: Using default value 'false' for cluster option 'remove-after-stop'
Apr 14 20:35:30 mgraid-S000030311-1 pengine: [16986]: debug: cluster_option: Using default value '-1' for cluster option 'pe-error-series-max'
Apr 14 20:35:30 mgraid-S000030311-1 pengine: [16986]: debug: cluster_option: Using default value '-1' for cluster option 'pe-warn-series-max'
Apr 14 20:35:30 mgraid-S000030311-1 pengine: [16986]: debug: cluster_option: Using default value '-1' for cluster option 'pe-input-series-max'
Apr 14 20:35:30 mgraid-S000030311-1 pengine: [16986]: debug: cluster_option: Using default value 'none' for cluster option 'node-health-strategy'
Apr 14 20:35:30 mgraid-S000030311-1 pengine: [16986]: debug: cluster_option: Using default value '0' for cluster option 'node-health-green'
Apr 14 20:35:30 mgraid-S000030311-1 pengine: [16986]: debug: cluster_option: Using default value '0' for cluster option 'node-health-yellow'
Apr 14 20:35:30 mgraid-S000030311-1 pengine: [16986]: debug: cluster_option: Using default value '-INFINITY' for cluster option 'node-health-red'
Apr 14 20:35:30 mgraid-S000030311-1 pengine: [16986]: debug: unpack_config: STONITH timeout: 60000
Apr 14 20:35:30 mgraid-S000030311-1 pengine: [16986]: debug: unpack_config: STONITH of failed nodes is enabled
Apr 14 20:35:30 mgraid-S000030311-1 pengine: [16986]: debug: unpack_config: Stop all active resources: false
Apr 14 20:35:30 mgraid-S000030311-1 pengine: [16986]: debug: unpack_config: Cluster is symmetric - resources can run anywhere by default
Apr 14 20:35:30 mgraid-S000030311-1 pengine: [16986]: debug: unpack_config: Default stickiness: 0
Apr 14 20:35:30 mgraid-S000030311-1 pengine: [16986]: debug: unpack_config: On loss of CCM Quorum: Stop ALL resources
Apr 14 20:35:30 mgraid-S000030311-1 pengine: [16986]: info: unpack_config: Node scores: 'red' = -INFINITY, 'yellow' = 0, 'green' = 0
Apr 14 20:35:30 mgraid-S000030311-1 pengine: [16986]: info: determine_online_status: Node mgraid-s000030311-0 is online
Apr 14 20:35:30 mgraid-S000030311-1 pengine: [16986]: info: determine_online_status: Node mgraid-s000030311-1 is online
Apr 14 20:35:30 mgraid-S000030311-1 pengine: [16986]: notice: clone_print:  Clone Set: Fencing
Apr 14 20:35:30 mgraid-S000030311-1 pengine: [16986]: notice: short_print:      Stopped: [ mgraid-stonith:0 mgraid-stonith:1 ]
Apr 14 20:35:30 mgraid-S000030311-1 pengine: [16986]: notice: clone_print:  Clone Set: cloneIcms
Apr 14 20:35:30 mgraid-S000030311-1 pengine: [16986]: notice: short_print:      Stopped: [ icms:0 icms:1 ]
Apr 14 20:35:30 mgraid-S000030311-1 pengine: [16986]: notice: clone_print:  Clone Set: cloneOmserver
Apr 14 20:35:30 mgraid-S000030311-1 pengine: [16986]: notice: short_print:      Stopped: [ omserver:0 omserver:1 ]
Apr 14 20:35:30 mgraid-S000030311-1 pengine: [16986]: notice: clone_print:  Master/Slave Set: ms-SSS000030311
Apr 14 20:35:30 mgraid-S000030311-1 pengine: [16986]: notice: short_print:      Stopped: [ SSS000030311:0 SSS000030311:1 ]
Apr 14 20:35:30 mgraid-S000030311-1 pengine: [16986]: notice: clone_print:  Master/Slave Set: ms-SSJ000030313
Apr 14 20:35:30 mgraid-S000030311-1 pengine: [16986]: notice: short_print:      Stopped: [ SSJ000030313:0 SSJ000030313:1 ]
Apr 14 20:35:30 mgraid-S000030311-1 pengine: [16986]: notice: clone_print:  Master/Slave Set: ms-SSJ000030316
Apr 14 20:35:30 mgraid-S000030311-1 pengine: [16986]: notice: short_print:      Stopped: [ SSJ000030316:0 SSJ000030316:1 ]
Apr 14 20:35:30 mgraid-S000030311-1 pengine: [16986]: notice: clone_print:  Master/Slave Set: ms-SSJ000030312
Apr 14 20:35:30 mgraid-S000030311-1 pengine: [16986]: notice: short_print:      Stopped: [ SSJ000030312:0 SSJ000030312:1 ]
Apr 14 20:35:30 mgraid-S000030311-1 pengine: [16986]: debug: native_rsc_location: Constraint (ms-SSS000030311-master-w1-rule) is not active (role : Master)
Apr 14 20:35:30 mgraid-S000030311-1 pengine: [16986]: debug: native_rsc_location: Constraint (ms-SSS000030311-master-w1-rule) is not active (role : Master)
Apr 14 20:35:30 mgraid-S000030311-1 pengine: [16986]: debug: native_rsc_location: Constraint (ms-SSS000030311-master-w1-rule) is not active (role : Master)
Apr 14 20:35:30 mgraid-S000030311-1 pengine: [16986]: debug: native_rsc_location: Constraint (ms-SSJ000030313-master-w1-rule) is not active (role : Master)
Apr 14 20:35:30 mgraid-S000030311-1 pengine: [16986]: debug: native_rsc_location: Constraint (ms-SSJ000030313-master-w1-rule) is not active (role : Master)
Apr 14 20:35:30 mgraid-S000030311-1 pengine: [16986]: debug: native_rsc_location: Constraint (ms-SSJ000030313-master-w1-rule) is not active (role : Master)
Apr 14 20:35:30 mgraid-S000030311-1 pengine: [16986]: debug: native_assign_node: Assigning mgraid-s000030311-0 to mgraid-stonith:0
Apr 14 20:35:30 mgraid-S000030311-1 pengine: [16986]: debug: native_assign_node: Assigning mgraid-s000030311-1 to mgraid-stonith:1
Apr 14 20:35:30 mgraid-S000030311-1 pengine: [16986]: debug: clone_color: Allocated 2 Fencing instances of a possible 2
Apr 14 20:35:30 mgraid-S000030311-1 pengine: [16986]: debug: native_assign_node: Assigning mgraid-s000030311-0 to icms:0
Apr 14 20:35:30 mgraid-S000030311-1 pengine: [16986]: debug: native_assign_node: Assigning mgraid-s000030311-1 to icms:1
Apr 14 20:35:30 mgraid-S000030311-1 pengine: [16986]: debug: clone_color: Allocated 2 cloneIcms instances of a possible 2
Apr 14 20:35:30 mgraid-S000030311-1 pengine: [16986]: debug: native_assign_node: Assigning mgraid-s000030311-0 to omserver:0
Apr 14 20:35:30 mgraid-S000030311-1 pengine: [16986]: debug: native_assign_node: Assigning mgraid-s000030311-1 to omserver:1
Apr 14 20:35:30 mgraid-S000030311-1 pengine: [16986]: debug: clone_color: Allocated 2 cloneOmserver instances of a possible 2
Apr 14 20:35:30 mgraid-S000030311-1 pengine: [16986]: debug: native_assign_node: Assigning mgraid-s000030311-0 to SSS000030311:0
Apr 14 20:35:30 mgraid-S000030311-1 pengine: [16986]: debug: native_assign_node: Assigning mgraid-s000030311-1 to SSS000030311:1
Apr 14 20:35:30 mgraid-S000030311-1 pengine: [16986]: debug: clone_color: Allocated 2 ms-SSS000030311 instances of a possible 2
Apr 14 20:35:30 mgraid-S000030311-1 pengine: [16986]: debug: master_color: SSS000030311:0 master score: 99
Apr 14 20:35:30 mgraid-S000030311-1 pengine: [16986]: info: master_color: Promoting SSS000030311:0 (Stopped mgraid-s000030311-0)
Apr 14 20:35:30 mgraid-S000030311-1 pengine: [16986]: debug: master_color: SSS000030311:1 master score: -1
Apr 14 20:35:30 mgraid-S000030311-1 pengine: [16986]: info: master_color: ms-SSS000030311: Promoted 1 instances of a possible 1 to master
Apr 14 20:35:30 mgraid-S000030311-1 pengine: [16986]: debug: native_assign_node: Assigning mgraid-s000030311-0 to SSJ000030313:0
Apr 14 20:35:30 mgraid-S000030311-1 pengine: [16986]: debug: native_assign_node: Assigning mgraid-s000030311-1 to SSJ000030313:1
Apr 14 20:35:30 mgraid-S000030311-1 pengine: [16986]: debug: clone_color: Allocated 2 ms-SSJ000030313 instances of a possible 2
Apr 14 20:35:30 mgraid-S000030311-1 pengine: [16986]: debug: master_color: SSJ000030313:1 master score: 99
Apr 14 20:35:30 mgraid-S000030311-1 pengine: [16986]: info: master_color: Promoting SSJ000030313:1 (Stopped mgraid-s000030311-1)
Apr 14 20:35:30 mgraid-S000030311-1 pengine: [16986]: debug: master_color: SSJ000030313:0 master score: -1
Apr 14 20:35:30 mgraid-S000030311-1 pengine: [16986]: info: master_color: ms-SSJ000030313: Promoted 1 instances of a possible 1 to master
Apr 14 20:35:30 mgraid-S000030311-1 pengine: [16986]: debug: native_assign_node: Assigning mgraid-s000030311-0 to SSJ000030316:0
Apr 14 20:35:30 mgraid-S000030311-1 pengine: [16986]: debug: native_assign_node: Assigning mgraid-s000030311-1 to SSJ000030316:1
Apr 14 20:35:30 mgraid-S000030311-1 pengine: [16986]: debug: clone_color: Allocated 2 ms-SSJ000030316 instances of a possible 2
Apr 14 20:35:30 mgraid-S000030311-1 pengine: [16986]: debug: master_color: SSJ000030316:0 master score: -1
Apr 14 20:35:30 mgraid-S000030311-1 pengine: [16986]: debug: master_color: SSJ000030316:1 master score: -1
Apr 14 20:35:30 mgraid-S000030311-1 pengine: [16986]: info: master_color: ms-SSJ000030316: Promoted 0 instances of a possible 1 to master
Apr 14 20:35:30 mgraid-S000030311-1 pengine: [16986]: debug: native_assign_node: Assigning mgraid-s000030311-0 to SSJ000030312:0
Apr 14 20:35:30 mgraid-S000030311-1 pengine: [16986]: debug: native_assign_node: Assigning mgraid-s000030311-1 to SSJ000030312:1
Apr 14 20:35:30 mgraid-S000030311-1 pengine: [16986]: debug: clone_color: Allocated 2 ms-SSJ000030312 instances of a possible 2
Apr 14 20:35:30 mgraid-S000030311-1 pengine: [16986]: debug: master_color: SSJ000030312:0 master score: -1
Apr 14 20:35:30 mgraid-S000030311-1 pengine: [16986]: debug: master_color: SSJ000030312:1 master score: -1
Apr 14 20:35:30 mgraid-S000030311-1 pengine: [16986]: info: master_color: ms-SSJ000030312: Promoted 0 instances of a possible 1 to master
Apr 14 20:35:30 mgraid-S000030311-1 pengine: [16986]: debug: native_create_probe: Probing mgraid-stonith:0 on mgraid-s000030311-0 (Stopped)
Apr 14 20:35:30 mgraid-S000030311-1 pengine: [16986]: debug: native_create_probe: Probing icms:0 on mgraid-s000030311-0 (Stopped)
Apr 14 20:35:30 mgraid-S000030311-1 pengine: [16986]: debug: native_create_probe: Probing omserver:0 on mgraid-s000030311-0 (Stopped)
Apr 14 20:35:30 mgraid-S000030311-1 pengine: [16986]: debug: native_create_probe: Probing SSS000030311:0 on mgraid-s000030311-0 (Stopped)
Apr 14 20:35:30 mgraid-S000030311-1 pengine: [16986]: debug: native_create_probe: Probing SSJ000030313:0 on mgraid-s000030311-0 (Stopped)
Apr 14 20:35:30 mgraid-S000030311-1 pengine: [16986]: debug: native_create_probe: Probing SSJ000030316:0 on mgraid-s000030311-0 (Stopped)
Apr 14 20:35:30 mgraid-S000030311-1 pengine: [16986]: debug: native_create_probe: Probing SSJ000030312:0 on mgraid-s000030311-0 (Stopped)
Apr 14 20:35:30 mgraid-S000030311-1 pengine: [16986]: debug: native_create_probe: Probing mgraid-stonith:1 on mgraid-s000030311-1 (Stopped)
Apr 14 20:35:30 mgraid-S000030311-1 pengine: [16986]: debug: native_create_probe: Probing icms:1 on mgraid-s000030311-1 (Stopped)
Apr 14 20:35:30 mgraid-S000030311-1 pengine: [16986]: debug: native_create_probe: Probing omserver:1 on mgraid-s000030311-1 (Stopped)
Apr 14 20:35:30 mgraid-S000030311-1 pengine: [16986]: debug: native_create_probe: Probing SSS000030311:1 on mgraid-s000030311-1 (Stopped)
Apr 14 20:35:30 mgraid-S000030311-1 pengine: [16986]: debug: native_create_probe: Probing SSJ000030313:1 on mgraid-s000030311-1 (Stopped)
Apr 14 20:35:30 mgraid-S000030311-1 pengine: [16986]: debug: native_create_probe: Probing SSJ000030316:1 on mgraid-s000030311-1 (Stopped)
Apr 14 20:35:30 mgraid-S000030311-1 pengine: [16986]: debug: native_create_probe: Probing SSJ000030312:1 on mgraid-s000030311-1 (Stopped)
Apr 14 20:35:30 mgraid-S000030311-1 pengine: [16986]: debug: child_stopping_constraints: Fencing has no active children
Apr 14 20:35:30 mgraid-S000030311-1 pengine: [16986]: notice: RecurringOp:  Start recurring monitor (5s) for icms:0 on mgraid-s000030311-0
Apr 14 20:35:30 mgraid-S000030311-1 pengine: [16986]: notice: RecurringOp:  Start recurring monitor (5s) for icms:1 on mgraid-s000030311-1
Apr 14 20:35:30 mgraid-S000030311-1 pengine: [16986]: debug: child_stopping_constraints: cloneIcms has no active children
Apr 14 20:35:30 mgraid-S000030311-1 pengine: [16986]: notice: RecurringOp:  Start recurring monitor (5s) for omserver:0 on mgraid-s000030311-0
Apr 14 20:35:30 mgraid-S000030311-1 pengine: [16986]: notice: RecurringOp:  Start recurring monitor (5s) for omserver:1 on mgraid-s000030311-1
Apr 14 20:35:30 mgraid-S000030311-1 pengine: [16986]: debug: child_stopping_constraints: cloneOmserver has no active children
Apr 14 20:35:30 mgraid-S000030311-1 pengine: [16986]: debug: master_create_actions: Creating actions for ms-SSS000030311
Apr 14 20:35:30 mgraid-S000030311-1 pengine: [16986]: notice: RecurringOp:  Start recurring monitor (3s) for SSS000030311:0 on mgraid-s000030311-0
Apr 14 20:35:30 mgraid-S000030311-1 pengine: [16986]: notice: RecurringOp:  Start recurring monitor (10s) for SSS000030311:1 on mgraid-s000030311-1
Apr 14 20:35:30 mgraid-S000030311-1 pengine: [16986]: debug: child_stopping_constraints: ms-SSS000030311 has no active children
Apr 14 20:35:30 mgraid-S000030311-1 pengine: [16986]: notice: RecurringOp:  Start recurring monitor (3s) for SSS000030311:0 on mgraid-s000030311-0
Apr 14 20:35:30 mgraid-S000030311-1 pengine: [16986]: notice: RecurringOp:  Start recurring monitor (10s) for SSS000030311:1 on mgraid-s000030311-1
Apr 14 20:35:30 mgraid-S000030311-1 pengine: [16986]: debug: master_create_actions: Creating actions for ms-SSJ000030313
Apr 14 20:35:30 mgraid-S000030311-1 pengine: [16986]: notice: RecurringOp:  Start recurring monitor (10s) for SSJ000030313:0 on mgraid-s000030311-0
Apr 14 20:35:30 mgraid-S000030311-1 pengine: [16986]: notice: RecurringOp:  Start recurring monitor (3s) for SSJ000030313:1 on mgraid-s000030311-1
Apr 14 20:35:30 mgraid-S000030311-1 pengine: [16986]: debug: child_stopping_constraints: ms-SSJ000030313 has no active children
Apr 14 20:35:30 mgraid-S000030311-1 pengine: [16986]: notice: RecurringOp:  Start recurring monitor (10s) for SSJ000030313:0 on mgraid-s000030311-0
Apr 14 20:35:30 mgraid-S000030311-1 pengine: [16986]: notice: RecurringOp:  Start recurring monitor (3s) for SSJ000030313:1 on mgraid-s000030311-1
Apr 14 20:35:30 mgraid-S000030311-1 pengine: [16986]: debug: master_create_actions: Creating actions for ms-SSJ000030316
Apr 14 20:35:30 mgraid-S000030311-1 pengine: [16986]: notice: RecurringOp:  Start recurring monitor (10s) for SSJ000030316:0 on mgraid-s000030311-0
Apr 14 20:35:30 mgraid-S000030311-1 pengine: [16986]: notice: RecurringOp:  Start recurring monitor (10s) for SSJ000030316:1 on mgraid-s000030311-1
Apr 14 20:35:30 mgraid-S000030311-1 pengine: [16986]: debug: child_stopping_constraints: ms-SSJ000030316 has no active children
Apr 14 20:35:30 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update 148 for probe_complete=true passed
Apr 14 20:35:30 mgraid-S000030311-1 pengine: [16986]: notice: RecurringOp:  Start recurring monitor (10s) for SSJ000030316:0 on mgraid-s000030311-0
Apr 14 20:35:30 mgraid-S000030311-1 pengine: [16986]: notice: RecurringOp:  Start recurring monitor (10s) for SSJ000030316:1 on mgraid-s000030311-1
Apr 14 20:35:30 mgraid-S000030311-1 pengine: [16986]: debug: master_create_actions: Creating actions for ms-SSJ000030312
Apr 14 20:35:30 mgraid-S000030311-1 pengine: [16986]: notice: RecurringOp:  Start recurring monitor (10s) for SSJ000030312:0 on mgraid-s000030311-0
Apr 14 20:35:30 mgraid-S000030311-1 pengine: [16986]: notice: RecurringOp:  Start recurring monitor (10s) for SSJ000030312:1 on mgraid-s000030311-1
Apr 14 20:35:30 mgraid-S000030311-1 pengine: [16986]: debug: child_stopping_constraints: ms-SSJ000030312 has no active children
Apr 14 20:35:30 mgraid-S000030311-1 pengine: [16986]: notice: RecurringOp:  Start recurring monitor (10s) for SSJ000030312:0 on mgraid-s000030311-0
Apr 14 20:35:30 mgraid-S000030311-1 pengine: [16986]: notice: RecurringOp:  Start recurring monitor (10s) for SSJ000030312:1 on mgraid-s000030311-1
Apr 14 20:35:30 mgraid-S000030311-1 pengine: [16986]: notice: LogActions: Start mgraid-stonith:0	(mgraid-s000030311-0)
Apr 14 20:35:30 mgraid-S000030311-1 pengine: [16986]: notice: LogActions: Start mgraid-stonith:1	(mgraid-s000030311-1)
Apr 14 20:35:30 mgraid-S000030311-1 pengine: [16986]: notice: LogActions: Start icms:0	(mgraid-s000030311-0)
Apr 14 20:35:30 mgraid-S000030311-1 pengine: [16986]: notice: LogActions: Start icms:1	(mgraid-s000030311-1)
Apr 14 20:35:30 mgraid-S000030311-1 pengine: [16986]: notice: LogActions: Start omserver:0	(mgraid-s000030311-0)
Apr 14 20:35:30 mgraid-S000030311-1 pengine: [16986]: notice: LogActions: Start omserver:1	(mgraid-s000030311-1)
Apr 14 20:35:30 mgraid-S000030311-1 pengine: [16986]: notice: LogActions: Start SSS000030311:0	(mgraid-s000030311-0)
Apr 14 20:35:30 mgraid-S000030311-1 pengine: [16986]: notice: LogActions: Promote SSS000030311:0	(Stopped -> Master mgraid-s000030311-0)
Apr 14 20:35:30 mgraid-S000030311-1 pengine: [16986]: notice: LogActions: Start SSS000030311:1	(mgraid-s000030311-1)
Apr 14 20:35:30 mgraid-S000030311-1 pengine: [16986]: notice: LogActions: Start SSJ000030313:0	(mgraid-s000030311-0)
Apr 14 20:35:30 mgraid-S000030311-1 pengine: [16986]: notice: LogActions: Start SSJ000030313:1	(mgraid-s000030311-1)
Apr 14 20:35:30 mgraid-S000030311-1 pengine: [16986]: notice: LogActions: Promote SSJ000030313:1	(Stopped -> Master mgraid-s000030311-1)
Apr 14 20:35:30 mgraid-S000030311-1 pengine: [16986]: notice: LogActions: Start SSJ000030316:0	(mgraid-s000030311-0)
Apr 14 20:35:30 mgraid-S000030311-1 pengine: [16986]: notice: LogActions: Start SSJ000030316:1	(mgraid-s000030311-1)
Apr 14 20:35:30 mgraid-S000030311-1 pengine: [16986]: notice: LogActions: Start SSJ000030312:0	(mgraid-s000030311-0)
Apr 14 20:35:30 mgraid-S000030311-1 pengine: [16986]: notice: LogActions: Start SSJ000030312:1	(mgraid-s000030311-1)
Apr 14 20:35:30 mgraid-S000030311-1 pengine: [16986]: info: process_pe_message: Transition 2: PEngine Input stored in: /var/lib/pengine/pe-input-5.bz2
Apr 14 20:35:30 mgraid-S000030311-1 crmd: [16635]: debug: s_crmd_fsa: Processing I_PE_SUCCESS: [ state=S_POLICY_ENGINE cause=C_IPC_MESSAGE origin=handle_response ]
Apr 14 20:35:30 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_LOG   
Apr 14 20:35:30 mgraid-S000030311-1 crmd: [16635]: info: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
Apr 14 20:35:30 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
Apr 14 20:35:30 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_INTEGRATE_TIMER_STOP
Apr 14 20:35:30 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_FINALIZE_TIMER_STOP
Apr 14 20:35:30 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_TE_INVOKE
Apr 14 20:35:30 mgraid-S000030311-1 crmd: [16635]: info: unpack_graph: Unpacked transition 2: 110 actions in 110 synapses
Apr 14 20:35:30 mgraid-S000030311-1 crmd: [16635]: info: do_te_invoke: Processing graph 2 (ref=pe_calc-dc-1302838530-48) derived from /var/lib/pengine/pe-input-5.bz2
Apr 14 20:35:30 mgraid-S000030311-1 crmd: [16635]: info: te_rsc_command: Initiating action 4: monitor mgraid-stonith:0_monitor_0 on mgraid-s000030311-0
Apr 14 20:35:30 mgraid-S000030311-1 crmd: [16635]: info: te_rsc_command: Initiating action 12: monitor mgraid-stonith:1_monitor_0 on mgraid-s000030311-1 (local)
Apr 14 20:35:30 mgraid-S000030311-1 lrmd: [16632]: debug: on_msg_add_rsc:client [16635] adds resource mgraid-stonith:1
Apr 14 20:35:30 mgraid-S000030311-1 lrmd: [16632]: notice: lrmd_rsc_new(): No lrm_rprovider field in message
Apr 14 20:35:30 mgraid-S000030311-1 crmd: [16635]: info: do_lrm_rsc_op: Performing key=12:2:7:469a0e5c-f535-4c85-84b2-fd971ee76592 op=mgraid-stonith:1_monitor_0 )
Apr 14 20:35:30 mgraid-S000030311-1 lrmd: [16632]: debug: on_msg_perform_op:2359: copying parameters for rsc mgraid-stonith:1
Apr 14 20:35:30 mgraid-S000030311-1 lrmd: [16632]: debug: on_msg_perform_op: add an operation operation monitor[2] on stonith::external/mgpstonith::mgraid-stonith:1 for client 16635, its parameters: CRM_meta_clone=[1] CRM_meta_clone_node_max=[1] CRM_meta_clone_max=[2] CRM_meta_notify=[false] crm_feature_set=[3.0.1] CRM_meta_globally_unique=[false] hostlist=[mgraid-canister] CRM_meta_name=[monitor] CRM_meta_timeout=[20000]  to the operation list.
Apr 14 20:35:30 mgraid-S000030311-1 lrmd: [16632]: info: rsc:mgraid-stonith:1:2: monitor
Apr 14 20:35:30 mgraid-S000030311-1 crmd: [16635]: info: te_rsc_command: Initiating action 5: monitor icms:0_monitor_0 on mgraid-s000030311-0
Apr 14 20:35:30 mgraid-S000030311-1 lrmd: [18057]: debug: stonithd_signon: creating connection
Apr 14 20:35:30 mgraid-S000030311-1 crmd: [16635]: info: te_rsc_command: Initiating action 13: monitor icms:1_monitor_0 on mgraid-s000030311-1 (local)
Apr 14 20:35:30 mgraid-S000030311-1 lrmd: [18057]: debug: sending out the signon msg.
Apr 14 20:35:30 mgraid-S000030311-1 lrmd: [16632]: debug: on_msg_add_rsc:client [16635] adds resource icms:1
Apr 14 20:35:30 mgraid-S000030311-1 lrmd: [16632]: notice: lrmd_rsc_new(): No lrm_rprovider field in message
Apr 14 20:35:30 mgraid-S000030311-1 stonithd: [16633]: debug: client STONITH_RA_EXEC_18057 (pid=18057) succeeded to signon to stonithd.
Apr 14 20:35:30 mgraid-S000030311-1 lrmd: [18057]: debug: signed on to stonithd.
Apr 14 20:35:30 mgraid-S000030311-1 crmd: [16635]: info: do_lrm_rsc_op: Performing key=13:2:7:469a0e5c-f535-4c85-84b2-fd971ee76592 op=icms:1_monitor_0 )
Apr 14 20:35:30 mgraid-S000030311-1 lrmd: [16632]: debug: on_msg_perform_op:2359: copying parameters for rsc icms:1
Apr 14 20:35:30 mgraid-S000030311-1 lrmd: [16632]: debug: on_msg_perform_op: add an operation operation monitor[3] on lsb::S53icms::icms:1 for client 16635, its parameters: CRM_meta_clone_max=[2] crm_feature_set=[3.0.1] CRM_meta_timeout=[20000] CRM_meta_clone_node_max=[1] CRM_meta_notify=[false] CRM_meta_clone=[1] CRM_meta_globally_unique=[false]  to the operation list.
Apr 14 20:35:30 mgraid-S000030311-1 lrmd: [16632]: info: rsc:icms:1:3: monitor
Apr 14 20:35:30 mgraid-S000030311-1 lrmd: [18057]: debug: waiting for the stonithRA reply msg.
Apr 14 20:35:30 mgraid-S000030311-1 stonithd: [16633]: debug: client STONITH_RA_EXEC_18057 [pid: 18057] requests a resource operation monitor on mgraid-stonith:1 (external/mgpstonith)
Apr 14 20:35:30 mgraid-S000030311-1 stonithd: [16633]: debug: stonithRA_monitor: mgraid-stonith:1 is not started.
Apr 14 20:35:30 mgraid-S000030311-1 crmd: [16635]: info: te_rsc_command: Initiating action 6: monitor omserver:0_monitor_0 on mgraid-s000030311-0
Apr 14 20:35:30 mgraid-S000030311-1 lrmd: [18057]: debug: a stonith RA operation queue to run, call_id=18059.
Apr 14 20:35:30 mgraid-S000030311-1 crmd: [16635]: info: te_rsc_command: Initiating action 14: monitor omserver:1_monitor_0 on mgraid-s000030311-1 (local)
Apr 14 20:35:30 mgraid-S000030311-1 lrmd: [18057]: debug: stonithd_receive_ops_result: begin
Apr 14 20:35:30 mgraid-S000030311-1 lrmd: [16632]: debug: on_msg_add_rsc:client [16635] adds resource omserver:1
Apr 14 20:35:30 mgraid-S000030311-1 stonithd: [16633]: debug: Child process unknown_mgraid-stonith:1_monitor [18059] exited, its exit code: 7 when signo=0.
Apr 14 20:35:30 mgraid-S000030311-1 lrmd: [16632]: notice: lrmd_rsc_new(): No lrm_rprovider field in message
Apr 14 20:35:30 mgraid-S000030311-1 stonithd: [16633]: debug: mgraid-stonith:1's (external/mgpstonith) op monitor finished. op_result=7
Apr 14 20:35:30 mgraid-S000030311-1 crmd: [16635]: info: do_lrm_rsc_op: Performing key=14:2:7:469a0e5c-f535-4c85-84b2-fd971ee76592 op=omserver:1_monitor_0 )
Apr 14 20:35:30 mgraid-S000030311-1 lrmd: [16632]: debug: on_msg_perform_op:2359: copying parameters for rsc omserver:1
Apr 14 20:35:30 mgraid-S000030311-1 lrmd: [16632]: debug: on_msg_perform_op: add an operation operation monitor[4] on lsb::S49omserver::omserver:1 for client 16635, its parameters: CRM_meta_clone_max=[2] crm_feature_set=[3.0.1] CRM_meta_timeout=[20000] CRM_meta_clone_node_max=[1] CRM_meta_notify=[false] CRM_meta_clone=[1] CRM_meta_globally_unique=[false]  to the operation list.
Apr 14 20:35:30 mgraid-S000030311-1 lrmd: [16632]: info: rsc:omserver:1:4: monitor
Apr 14 20:35:30 mgraid-S000030311-1 stonithd: [16633]: debug: client STONITH_RA_EXEC_18057 (pid=18057) signed off
Apr 14 20:35:30 mgraid-S000030311-1 lrmd: [16632]: WARN: Managed mgraid-stonith:1:monitor process 18057 exited with return code 7.
Apr 14 20:35:30 mgraid-S000030311-1 crmd: [16635]: info: te_rsc_command: Initiating action 7: monitor SSS000030311:0_monitor_0 on mgraid-s000030311-0
Apr 14 20:35:30 mgraid-S000030311-1 crmd: [16635]: info: te_rsc_command: Initiating action 15: monitor SSS000030311:1_monitor_0 on mgraid-s000030311-1 (local)
Apr 14 20:35:30 mgraid-S000030311-1 lrmd: [16632]: debug: on_msg_add_rsc:client [16635] adds resource SSS000030311:1
Apr 14 20:35:30 mgraid-S000030311-1 crmd: [16635]: info: do_lrm_rsc_op: Performing key=15:2:7:469a0e5c-f535-4c85-84b2-fd971ee76592 op=SSS000030311:1_monitor_0 )
Apr 14 20:35:30 mgraid-S000030311-1 lrmd: [16632]: debug: on_msg_perform_op:2359: copying parameters for rsc SSS000030311:1
Apr 14 20:35:30 mgraid-S000030311-1 lrmd: [16632]: debug: on_msg_perform_op: add an operation operation monitor[5] on ocf::ss::SSS000030311:1 for client 16635, its parameters: CRM_meta_clone=[1] ssconf=[/var/omneon/config/config.S000030311] CRM_meta_master_node_max=[1] CRM_meta_clone_node_max=[1] CRM_meta_clone_max=[2] CRM_meta_notify=[true] CRM_meta_master_max=[1] CRM_meta_globally_unique=[false] crm_feature_set=[3.0.1] ss_resource=[SSS000030311] CRM_meta_timeout=[20000]  to the operation list.
Apr 14 20:35:30 mgraid-S000030311-1 lrmd: [16632]: info: rsc:SSS000030311:1:5: monitor
Apr 14 20:35:30 mgraid-S000030311-1 crmd: [16635]: info: te_pseudo_action: Pseudo action 48 fired and confirmed
Apr 14 20:35:30 mgraid-S000030311-1 crmd: [16635]: info: te_pseudo_action: Pseudo action 49 fired and confirmed
Apr 14 20:35:30 mgraid-S000030311-1 crmd: [16635]: info: te_rsc_command: Initiating action 8: monitor SSJ000030313:0_monitor_0 on mgraid-s000030311-0
Apr 14 20:35:30 mgraid-S000030311-1 crmd: [16635]: info: te_rsc_command: Initiating action 16: monitor SSJ000030313:1_monitor_0 on mgraid-s000030311-1 (local)
Apr 14 20:35:30 mgraid-S000030311-1 lrmd: [16632]: debug: on_msg_add_rsc:client [16635] adds resource SSJ000030313:1
Apr 14 20:35:30 mgraid-S000030311-1 crmd: [16635]: info: do_lrm_rsc_op: Performing key=16:2:7:469a0e5c-f535-4c85-84b2-fd971ee76592 op=SSJ000030313:1_monitor_0 )
Apr 14 20:35:30 mgraid-S000030311-1 lrmd: [16632]: debug: on_msg_perform_op:2359: copying parameters for rsc SSJ000030313:1
Apr 14 20:35:30 mgraid-S000030311-1 lrmd: [16632]: debug: on_msg_perform_op: add an operation operation monitor[6] on ocf::ss::SSJ000030313:1 for client 16635, its parameters: CRM_meta_clone=[1] ssconf=[/var/omneon/config/config.J000030313] CRM_meta_master_node_max=[1] CRM_meta_clone_node_max=[1] CRM_meta_clone_max=[2] CRM_meta_notify=[true] CRM_meta_master_max=[1] CRM_meta_globally_unique=[false] crm_feature_set=[3.0.1] ss_resource=[SSJ000030313] CRM_meta_timeout=[20000]  to the operation list.
Apr 14 20:35:30 mgraid-S000030311-1 lrmd: [16632]: info: rsc:SSJ000030313:1:6: monitor
Apr 14 20:35:30 mgraid-S000030311-1 crmd: [16635]: info: te_pseudo_action: Pseudo action 77 fired and confirmed
Apr 14 20:35:30 mgraid-S000030311-1 crmd: [16635]: info: te_pseudo_action: Pseudo action 78 fired and confirmed
Apr 14 20:35:30 mgraid-S000030311-1 crmd: [16635]: info: te_rsc_command: Initiating action 9: monitor SSJ000030316:0_monitor_0 on mgraid-s000030311-0
Apr 14 20:35:30 mgraid-S000030311-1 crmd: [16635]: info: te_rsc_command: Initiating action 17: monitor SSJ000030316:1_monitor_0 on mgraid-s000030311-1 (local)
Apr 14 20:35:30 mgraid-S000030311-1 lrmd: [16632]: debug: on_msg_add_rsc:client [16635] adds resource SSJ000030316:1
Apr 14 20:35:30 mgraid-S000030311-1 crmd: [16635]: info: do_lrm_rsc_op: Performing key=17:2:7:469a0e5c-f535-4c85-84b2-fd971ee76592 op=SSJ000030316:1_monitor_0 )
Apr 14 20:35:30 mgraid-S000030311-1 lrmd: [16632]: info: Managed icms:1:monitor process 18058 exited with return code 0.
Apr 14 20:35:30 mgraid-S000030311-1 lrmd: [16632]: debug: on_msg_perform_op:2359: copying parameters for rsc SSJ000030316:1
Apr 14 20:35:30 mgraid-S000030311-1 lrmd: [16632]: debug: on_msg_perform_op: add an operation operation monitor[7] on ocf::ss::SSJ000030316:1 for client 16635, its parameters: CRM_meta_clone=[1] ssconf=[/var/omneon/config/config.J000030316] CRM_meta_master_node_max=[1] CRM_meta_clone_node_max=[1] CRM_meta_clone_max=[2] CRM_meta_notify=[true] CRM_meta_master_max=[1] CRM_meta_globally_unique=[false] crm_feature_set=[3.0.1] ss_resource=[SSJ000030316] CRM_meta_timeout=[20000]  to the operation list.
Apr 14 20:35:30 mgraid-S000030311-1 lrmd: [16632]: info: rsc:SSJ000030316:1:7: monitor
Apr 14 20:35:30 mgraid-S000030311-1 crmd: [16635]: info: te_pseudo_action: Pseudo action 105 fired and confirmed
Apr 14 20:35:30 mgraid-S000030311-1 crmd: [16635]: info: te_pseudo_action: Pseudo action 106 fired and confirmed
Apr 14 20:35:30 mgraid-S000030311-1 crmd: [16635]: info: te_rsc_command: Initiating action 10: monitor SSJ000030312:0_monitor_0 on mgraid-s000030311-0
Apr 14 20:35:30 mgraid-S000030311-1 lrmd: [16632]: info: Managed omserver:1:monitor process 18061 exited with return code 0.
Apr 14 20:35:30 mgraid-S000030311-1 crmd: [16635]: info: te_rsc_command: Initiating action 18: monitor SSJ000030312:1_monitor_0 on mgraid-s000030311-1 (local)
Apr 14 20:35:30 mgraid-S000030311-1 lrmd: [16632]: debug: on_msg_add_rsc:client [16635] adds resource SSJ000030312:1
Apr 14 20:35:30 mgraid-S000030311-1 crmd: [16635]: info: do_lrm_rsc_op: Performing key=18:2:7:469a0e5c-f535-4c85-84b2-fd971ee76592 op=SSJ000030312:1_monitor_0 )
Apr 14 20:35:30 mgraid-S000030311-1 lrmd: [16632]: debug: on_msg_perform_op:2359: copying parameters for rsc SSJ000030312:1
Apr 14 20:35:30 mgraid-S000030311-1 lrmd: [16632]: debug: on_msg_perform_op: add an operation operation monitor[8] on ocf::ss::SSJ000030312:1 for client 16635, its parameters: CRM_meta_clone=[1] ssconf=[/var/omneon/config/config.J000030312] CRM_meta_master_node_max=[1] CRM_meta_clone_node_max=[1] CRM_meta_clone_max=[2] CRM_meta_notify=[true] CRM_meta_master_max=[1] CRM_meta_globally_unique=[false] crm_feature_set=[3.0.1] ss_resource=[SSJ000030312] CRM_meta_timeout=[20000]  to the operation list.
Apr 14 20:35:30 mgraid-S000030311-1 lrmd: [16632]: info: rsc:SSJ000030312:1:8: monitor
Apr 14 20:35:30 mgraid-S000030311-1 crmd: [16635]: info: te_pseudo_action: Pseudo action 133 fired and confirmed
Apr 14 20:35:30 mgraid-S000030311-1 crmd: [16635]: info: te_pseudo_action: Pseudo action 134 fired and confirmed
Apr 14 20:35:30 mgraid-S000030311-1 crmd: [16635]: debug: run_graph: Transition 2 (Complete=0, Pending=14, Fired=22, Skipped=0, Incomplete=88, Source=/var/lib/pengine/pe-input-5.bz2): In-progress
Apr 14 20:35:30 mgraid-S000030311-1 crmd: [16635]: info: process_lrm_event: LRM operation mgraid-stonith:1_monitor_0 (call=2, rc=7, cib-update=110, confirmed=true) not running
Apr 14 20:35:30 mgraid-S000030311-1 crmd: [16635]: info: process_lrm_event: LRM operation icms:1_monitor_0 (call=3, rc=0, cib-update=111, confirmed=true) ok
Apr 14 20:35:30 mgraid-S000030311-1 crmd: [16635]: info: process_lrm_event: LRM operation omserver:1_monitor_0 (call=4, rc=0, cib-update=112, confirmed=true) ok
Apr 14 20:35:30 mgraid-S000030311-1 crmd: [16635]: debug: run_graph: Transition 2 (Complete=8, Pending=14, Fired=0, Skipped=0, Incomplete=88, Source=/var/lib/pengine/pe-input-5.bz2): In-progress
Apr 14 20:35:30 mgraid-S000030311-1 crmd: [16635]: debug: te_update_diff: Processing diff (cib_modify): 0.55.5 -> 0.55.6 (S_TRANSITION_ENGINE)
Apr 14 20:35:30 mgraid-S000030311-1 crmd: [16635]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify/cib_update_result/diff
Apr 14 20:35:30 mgraid-S000030311-1 crmd: [16635]: info: match_graph_event: Action mgraid-stonith:1_monitor_0 (12) confirmed on mgraid-s000030311-1 (rc=0)
Apr 14 20:35:30 mgraid-S000030311-1 crmd: [16635]: debug: run_graph: Transition 2 (Complete=9, Pending=13, Fired=0, Skipped=0, Incomplete=88, Source=/var/lib/pengine/pe-input-5.bz2): In-progress
Apr 14 20:35:30 mgraid-S000030311-1 crmd: [16635]: debug: te_update_diff: Processing diff (cib_modify): 0.55.6 -> 0.55.7 (S_TRANSITION_ENGINE)
Apr 14 20:35:30 mgraid-S000030311-1 crmd: [16635]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify/cib_update_result/diff
Apr 14 20:35:30 mgraid-S000030311-1 crmd: [16635]: WARN: status_from_rc: Action 13 (icms:1_monitor_0) on mgraid-s000030311-1 failed (target: 7 vs. rc: 0): Error
ss[18064]:	2011/04/14_20:35:30 DEBUG: ss_status() START SSS000030311
ss[18066]:	2011/04/14_20:35:30 DEBUG: ss_status() START SSJ000030313
ss[18068]:	2011/04/14_20:35:30 DEBUG: ss_status() START SSJ000030316
ss[18064]:	2011/04/14_20:35:30 DEBUG: ss_set_status_variables() START SSS000030311
ss[18066]:	2011/04/14_20:35:30 DEBUG: ss_set_status_variables() START SSJ000030313
ss[18073]:	2011/04/14_20:35:30 DEBUG: ss_status() START SSJ000030312
ss[18068]:	2011/04/14_20:35:30 DEBUG: ss_set_status_variables() START SSJ000030316
ss[18064]:	2011/04/14_20:35:30 DEBUG: ss_set_status_variables() ssadm return is 1
ss[18066]:	2011/04/14_20:35:30 DEBUG: ss_set_status_variables() ssadm return is 1
ss[18073]:	2011/04/14_20:35:30 DEBUG: ss_set_status_variables() START SSJ000030312
ss[18064]:	2011/04/14_20:35:30 DEBUG: SSS000030311: Calling /usr/bin/pgrep -f /opt/omneon/bin/SliceServer -d -c/var/omneon/config/config.S000030311 -i1
ss[18068]:	2011/04/14_20:35:30 DEBUG: ss_set_status_variables() ssadm return is 1
ss[18066]:	2011/04/14_20:35:30 DEBUG: SSJ000030313: Calling /usr/bin/pgrep -f /opt/omneon/bin/SliceServer -d -c/var/omneon/config/config.J000030313 -i1
ss[18068]:	2011/04/14_20:35:30 DEBUG: SSJ000030316: Calling /usr/bin/pgrep -f /opt/omneon/bin/SliceServer -d -c/var/omneon/config/config.J000030316 -i1
ss[18073]:	2011/04/14_20:35:30 DEBUG: ss_set_status_variables() ssadm return is 1
ss[18073]:	2011/04/14_20:35:30 DEBUG: SSJ000030312: Calling /usr/bin/pgrep -f /opt/omneon/bin/SliceServer -d -c/var/omneon/config/config.J000030312 -i1
ss[18064]:	2011/04/14_20:35:30 ERROR: SSS000030311: Called /usr/bin/pgrep -f /opt/omneon/bin/SliceServer -d -c/var/omneon/config/config.S000030311 -i1
ss[18066]:	2011/04/14_20:35:30 ERROR: SSJ000030313: Called /usr/bin/pgrep -f /opt/omneon/bin/SliceServer -d -c/var/omneon/config/config.J000030313 -i1
ss[18064]:	2011/04/14_20:35:30 ERROR: SSS000030311: Exit code 1
ss[18068]:	2011/04/14_20:35:30 ERROR: SSJ000030316: Called /usr/bin/pgrep -f /opt/omneon/bin/SliceServer -d -c/var/omneon/config/config.J000030316 -i1
ss[18073]:	2011/04/14_20:35:30 ERROR: SSJ000030312: Called /usr/bin/pgrep -f /opt/omneon/bin/SliceServer -d -c/var/omneon/config/config.J000030312 -i1
ss[18066]:	2011/04/14_20:35:30 ERROR: SSJ000030313: Exit code 1
ss[18064]:	2011/04/14_20:35:30 ERROR: SSS000030311: Command output: 
Apr 14 20:35:30 mgraid-S000030311-1 lrmd: [16632]: debug: RA output: (SSS000030311:1:monitor:stdout) 

ss[18073]:	2011/04/14_20:35:30 ERROR: SSJ000030312: Exit code 1
ss[18068]:	2011/04/14_20:35:30 ERROR: SSJ000030316: Exit code 1
ss[18066]:	2011/04/14_20:35:30 ERROR: SSJ000030313: Command output: 
Apr 14 20:35:30 mgraid-S000030311-1 lrmd: [16632]: debug: RA output: (SSJ000030313:1:monitor:stdout) 

ss[18064]:	2011/04/14_20:35:30 DEBUG: ss_set_status_variables() END SSS000030311   - Unconfigured
ss[18066]:	2011/04/14_20:35:30 DEBUG: ss_set_status_variables() END SSJ000030313   - Unconfigured
ss[18064]:	2011/04/14_20:35:30 DEBUG: ss_status() returning 7
ss[18073]:	2011/04/14_20:35:30 ERROR: SSJ000030312: Command output: 
Apr 14 20:35:30 mgraid-S000030311-1 lrmd: [16632]: debug: RA output: (SSJ000030312:1:monitor:stdout) 

ss[18068]:	2011/04/14_20:35:30 ERROR: SSJ000030316: Command output: 
Apr 14 20:35:30 mgraid-S000030311-1 lrmd: [16632]: debug: RA output: (SSJ000030316:1:monitor:stdout) 

ss[18064]:	2011/04/14_20:35:30 DEBUG: SSS000030311: Calling //sbin/crm_master -l reboot -D
ss[18066]:	2011/04/14_20:35:30 DEBUG: ss_status() returning 7
ss[18073]:	2011/04/14_20:35:30 DEBUG: ss_set_status_variables() END SSJ000030312   - Unconfigured
ss[18068]:	2011/04/14_20:35:30 DEBUG: ss_set_status_variables() END SSJ000030316   - Unconfigured
ss[18073]:	2011/04/14_20:35:30 DEBUG: ss_status() returning 7
ss[18066]:	2011/04/14_20:35:30 DEBUG: SSJ000030313: Calling //sbin/crm_master -l reboot -D
ss[18068]:	2011/04/14_20:35:30 DEBUG: ss_status() returning 7
ss[18073]:	2011/04/14_20:35:30 DEBUG: SSJ000030312: Calling //sbin/crm_master -l reboot -D
ss[18068]:	2011/04/14_20:35:30 DEBUG: SSJ000030316: Calling //sbin/crm_master -l reboot -D
Apr 14 20:35:30 mgraid-S000030311-1 crm_attribute: [18324]: info: Invoked: crm_attribute -N mgraid-S000030311-1 -n master-SSS000030311:1 -l reboot -D 
Apr 14 20:35:30 mgraid-S000030311-1 crm_attribute: [18324]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/cib_rw
Apr 14 20:35:30 mgraid-S000030311-1 crm_attribute: [18324]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/cib_callback
Apr 14 20:35:30 mgraid-S000030311-1 crm_attribute: [18324]: debug: cib_native_signon_raw: Connection to CIB successful
Apr 14 20:35:30 mgraid-S000030311-1 crm_attribute: [18324]: debug: log_data_element: query_node_uuid: Result section <nodes >
Apr 14 20:35:30 mgraid-S000030311-1 crm_attribute: [18324]: debug: log_data_element: query_node_uuid: Result section   <node id="f4e5e15c-d06b-4e37-89b9-4621af05128f" uname="mgraid-s000030311-0" type="normal" />
Apr 14 20:35:30 mgraid-S000030311-1 crm_attribute: [18324]: debug: log_data_element: query_node_uuid: Result section   <node id="856c1f72-7cd1-4906-8183-8be87eef96f2" uname="mgraid-s000030311-1" type="normal" />
Apr 14 20:35:30 mgraid-S000030311-1 crm_attribute: [18324]: debug: log_data_element: query_node_uuid: Result section </nodes>
Apr 14 20:35:30 mgraid-S000030311-1 crm_attribute: [18324]: info: determine_host: Mapped mgraid-S000030311-1 to 856c1f72-7cd1-4906-8183-8be87eef96f2
Apr 14 20:35:30 mgraid-S000030311-1 crm_attribute: [18324]: info: attrd_lazy_update: Connecting to cluster... 5 retries remaining
Apr 14 20:35:30 mgraid-S000030311-1 crm_attribute: [18324]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/attrd
Apr 14 20:35:30 mgraid-S000030311-1 crm_attribute: [18324]: debug: attrd_update: Sent update: master-SSS000030311:1=(null) for mgraid-S000030311-1
Apr 14 20:35:30 mgraid-S000030311-1 attrd: [16634]: debug: attrd_local_callback: update message from crm_attribute: master-SSS000030311:1=<null>
Apr 14 20:35:30 mgraid-S000030311-1 crm_attribute: [18324]: info: main: Update master-SSS000030311:1=<none> sent via attrd
Apr 14 20:35:30 mgraid-S000030311-1 attrd: [16634]: info: find_hash_entry: Creating hash entry for master-SSS000030311:1
Apr 14 20:35:30 mgraid-S000030311-1 crm_attribute: [18324]: debug: cib_native_signoff: Signing out of the CIB Service
Apr 14 20:35:30 mgraid-S000030311-1 attrd: [16634]: debug: attrd_local_callback: Supplied: (null), Current: (null), Stored: (null)
Apr 14 20:35:30 mgraid-S000030311-1 crm_attribute: [18334]: info: Invoked: crm_attribute -N mgraid-S000030311-1 -n master-SSJ000030313:1 -l reboot -D 
Apr 14 20:35:30 mgraid-S000030311-1 crm_attribute: [18334]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/cib_rw
Apr 14 20:35:30 mgraid-S000030311-1 crm_attribute: [18334]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/cib_callback
Apr 14 20:35:30 mgraid-S000030311-1 crm_attribute: [18334]: debug: cib_native_signon_raw: Connection to CIB successful
Apr 14 20:35:30 mgraid-S000030311-1 crm_attribute: [18334]: debug: log_data_element: query_node_uuid: Result section <nodes >
Apr 14 20:35:30 mgraid-S000030311-1 crm_attribute: [18334]: debug: log_data_element: query_node_uuid: Result section   <node id="f4e5e15c-d06b-4e37-89b9-4621af05128f" uname="mgraid-s000030311-0" type="normal" />
Apr 14 20:35:30 mgraid-S000030311-1 crm_attribute: [18334]: debug: log_data_element: query_node_uuid: Result section   <node id="856c1f72-7cd1-4906-8183-8be87eef96f2" uname="mgraid-s000030311-1" type="normal" />
Apr 14 20:35:30 mgraid-S000030311-1 crm_attribute: [18334]: debug: log_data_element: query_node_uuid: Result section </nodes>
Apr 14 20:35:30 mgraid-S000030311-1 crm_attribute: [18334]: info: determine_host: Mapped mgraid-S000030311-1 to 856c1f72-7cd1-4906-8183-8be87eef96f2
Apr 14 20:35:30 mgraid-S000030311-1 crm_attribute: [18334]: info: attrd_lazy_update: Connecting to cluster... 5 retries remaining
Apr 14 20:35:30 mgraid-S000030311-1 crm_attribute: [18334]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/attrd
Apr 14 20:35:30 mgraid-S000030311-1 attrd: [16634]: debug: attrd_local_callback: update message from crm_attribute: master-SSJ000030313:1=<null>
ss[18064]:	2011/04/14_20:35:30 DEBUG: SSS000030311: Exit code 0
Apr 14 20:35:30 mgraid-S000030311-1 attrd: [16634]: info: find_hash_entry: Creating hash entry for master-SSJ000030313:1
Apr 14 20:35:30 mgraid-S000030311-1 attrd: [16634]: debug: attrd_local_callback: Supplied: (null), Current: (null), Stored: (null)
Apr 14 20:35:30 mgraid-S000030311-1 crm_attribute: [18338]: info: Invoked: crm_attribute -N mgraid-S000030311-1 -n master-SSJ000030316:1 -l reboot -D 
Apr 14 20:35:30 mgraid-S000030311-1 crm_attribute: [18334]: debug: attrd_update: Sent update: master-SSJ000030313:1=(null) for mgraid-S000030311-1
Apr 14 20:35:30 mgraid-S000030311-1 crm_attribute: [18334]: info: main: Update master-SSJ000030313:1=<none> sent via attrd
Apr 14 20:35:30 mgraid-S000030311-1 crm_attribute: [18338]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/cib_rw
Apr 14 20:35:30 mgraid-S000030311-1 crm_attribute: [18334]: debug: cib_native_signoff: Signing out of the CIB Service
Apr 14 20:35:30 mgraid-S000030311-1 crm_attribute: [18338]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/cib_callback
Apr 14 20:35:30 mgraid-S000030311-1 crm_attribute: [18339]: info: Invoked: crm_attribute -N mgraid-S000030311-1 -n master-SSJ000030312:1 -l reboot -D 
Apr 14 20:35:30 mgraid-S000030311-1 crm_attribute: [18338]: debug: cib_native_signon_raw: Connection to CIB successful
Apr 14 20:35:30 mgraid-S000030311-1 crm_attribute: [18339]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/cib_rw
Apr 14 20:35:30 mgraid-S000030311-1 crm_attribute: [18339]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/cib_callback
Apr 14 20:35:30 mgraid-S000030311-1 crm_attribute: [18338]: debug: log_data_element: query_node_uuid: Result section <nodes >
Apr 14 20:35:30 mgraid-S000030311-1 crm_attribute: [18338]: debug: log_data_element: query_node_uuid: Result section   <node id="f4e5e15c-d06b-4e37-89b9-4621af05128f" uname="mgraid-s000030311-0" type="normal" />
Apr 14 20:35:30 mgraid-S000030311-1 crm_attribute: [18338]: debug: log_data_element: query_node_uuid: Result section   <node id="856c1f72-7cd1-4906-8183-8be87eef96f2" uname="mgraid-s000030311-1" type="normal" />
Apr 14 20:35:30 mgraid-S000030311-1 crm_attribute: [18338]: debug: log_data_element: query_node_uuid: Result section </nodes>
Apr 14 20:35:30 mgraid-S000030311-1 crm_attribute: [18338]: info: determine_host: Mapped mgraid-S000030311-1 to 856c1f72-7cd1-4906-8183-8be87eef96f2
Apr 14 20:35:30 mgraid-S000030311-1 crm_attribute: [18339]: debug: cib_native_signon_raw: Connection to CIB successful
Apr 14 20:35:30 mgraid-S000030311-1 crm_attribute: [18338]: info: attrd_lazy_update: Connecting to cluster... 5 retries remaining
Apr 14 20:35:30 mgraid-S000030311-1 crm_attribute: [18338]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/attrd
Apr 14 20:35:30 mgraid-S000030311-1 crm_attribute: [18338]: debug: attrd_update: Sent update: master-SSJ000030316:1=(null) for mgraid-S000030311-1
ss[18064]:	2011/04/14_20:35:30 DEBUG: SSS000030311: Command output: 
Apr 14 20:35:30 mgraid-S000030311-1 attrd: [16634]: debug: attrd_local_callback: update message from crm_attribute: master-SSJ000030316:1=<null>
Apr 14 20:35:30 mgraid-S000030311-1 crm_attribute: [18338]: info: main: Update master-SSJ000030316:1=<none> sent via attrd
Apr 14 20:35:30 mgraid-S000030311-1 crm_attribute: [18339]: debug: log_data_element: query_node_uuid: Result section <nodes >
Apr 14 20:35:30 mgraid-S000030311-1 attrd: [16634]: info: find_hash_entry: Creating hash entry for master-SSJ000030316:1
Apr 14 20:35:30 mgraid-S000030311-1 lrmd: [16632]: debug: RA output: (SSS000030311:1:monitor:stdout) 

Apr 14 20:35:30 mgraid-S000030311-1 crm_attribute: [18338]: debug: cib_native_signoff: Signing out of the CIB Service
Apr 14 20:35:30 mgraid-S000030311-1 crm_attribute: [18339]: debug: log_data_element: query_node_uuid: Result section   <node id="f4e5e15c-d06b-4e37-89b9-4621af05128f" uname="mgraid-s000030311-0" type="normal" />
Apr 14 20:35:30 mgraid-S000030311-1 attrd: [16634]: debug: attrd_local_callback: Supplied: (null), Current: (null), Stored: (null)
Apr 14 20:35:30 mgraid-S000030311-1 crm_attribute: [18339]: debug: log_data_element: query_node_uuid: Result section   <node id="856c1f72-7cd1-4906-8183-8be87eef96f2" uname="mgraid-s000030311-1" type="normal" />
Apr 14 20:35:30 mgraid-S000030311-1 crm_attribute: [18339]: debug: log_data_element: query_node_uuid: Result section </nodes>
Apr 14 20:35:30 mgraid-S000030311-1 crm_attribute: [18339]: info: determine_host: Mapped mgraid-S000030311-1 to 856c1f72-7cd1-4906-8183-8be87eef96f2
Apr 14 20:35:30 mgraid-S000030311-1 crm_attribute: [18339]: info: attrd_lazy_update: Connecting to cluster... 5 retries remaining
Apr 14 20:35:30 mgraid-S000030311-1 crm_attribute: [18339]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/attrd
Apr 14 20:35:30 mgraid-S000030311-1 attrd: [16634]: debug: attrd_local_callback: update message from crm_attribute: master-SSJ000030312:1=<null>
Apr 14 20:35:30 mgraid-S000030311-1 attrd: [16634]: info: find_hash_entry: Creating hash entry for master-SSJ000030312:1
Apr 14 20:35:30 mgraid-S000030311-1 attrd: [16634]: debug: attrd_local_callback: Supplied: (null), Current: (null), Stored: (null)
Apr 14 20:35:30 mgraid-S000030311-1 crm_attribute: [18339]: debug: attrd_update: Sent update: master-SSJ000030312:1=(null) for mgraid-S000030311-1
ss[18066]:	2011/04/14_20:35:30 DEBUG: SSJ000030313: Exit code 0
Apr 14 20:35:30 mgraid-S000030311-1 crm_attribute: [18339]: info: main: Update master-SSJ000030312:1=<none> sent via attrd
Apr 14 20:35:30 mgraid-S000030311-1 crm_attribute: [18339]: debug: cib_native_signoff: Signing out of the CIB Service
ss[18064]:	2011/04/14_20:35:30 DEBUG: ss_monitor() returning 7
Apr 14 20:35:30 mgraid-S000030311-1 lrmd: [16632]: WARN: Managed SSS000030311:1:monitor process 18064 exited with return code 7.
ss[18068]:	2011/04/14_20:35:30 DEBUG: SSJ000030316: Exit code 0
ss[18066]:	2011/04/14_20:35:30 DEBUG: SSJ000030313: Command output: 
Apr 14 20:35:30 mgraid-S000030311-1 lrmd: [16632]: debug: RA output: (SSJ000030313:1:monitor:stdout) 

ss[18073]:	2011/04/14_20:35:30 DEBUG: SSJ000030312: Exit code 0
ss[18068]:	2011/04/14_20:35:30 DEBUG: SSJ000030316: Command output: 
Apr 14 20:35:30 mgraid-S000030311-1 lrmd: [16632]: debug: RA output: (SSJ000030316:1:monitor:stdout) 

ss[18066]:	2011/04/14_20:35:30 DEBUG: ss_monitor() returning 7
Apr 14 20:35:30 mgraid-S000030311-1 lrmd: [16632]: WARN: Managed SSJ000030313:1:monitor process 18066 exited with return code 7.
ss[18073]:	2011/04/14_20:35:30 DEBUG: SSJ000030312: Command output: 
Apr 14 20:35:30 mgraid-S000030311-1 lrmd: [16632]: debug: RA output: (SSJ000030312:1:monitor:stdout) 

ss[18068]:	2011/04/14_20:35:30 DEBUG: ss_monitor() returning 7
Apr 14 20:35:30 mgraid-S000030311-1 lrmd: [16632]: WARN: Managed SSJ000030316:1:monitor process 18068 exited with return code 7.
ss[18073]:	2011/04/14_20:35:30 DEBUG: ss_monitor() returning 7
Apr 14 20:35:30 mgraid-S000030311-1 lrmd: [16632]: WARN: Managed SSJ000030312:1:monitor process 18073 exited with return code 7.
Apr 14 20:35:30 mgraid-S000030311-1 ccm: [16630]: info: client (pid=18056) removed from ccm
Apr 14 20:35:30 mgraid-S000030311-1 attrd: [16634]: info: attrd_ha_callback: flush message from mgraid-s000030311-1
Apr 14 20:35:30 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='shutdown'] does not exist
Apr 14 20:35:30 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for shutdown=(null) passed
Apr 14 20:35:30 mgraid-S000030311-1 attrd: [16634]: info: attrd_ha_callback: flush message from mgraid-s000030311-1
Apr 14 20:35:30 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='terminate'] does not exist
Apr 14 20:35:30 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for terminate=(null) passed
Apr 14 20:35:30 mgraid-S000030311-1 crmd: [16635]: info: abort_transition_graph: match_graph_event:272 - Triggered transition abort (complete=0, tag=lrm_rsc_op, id=icms:1_monitor_0, magic=0:0;13:2:7:469a0e5c-f535-4c85-84b2-fd971ee76592, cib=0.55.7) : Event failed
Apr 14 20:35:30 mgraid-S000030311-1 crmd: [16635]: info: update_abort_priority: Abort priority upgraded from 0 to 1
Apr 14 20:35:30 mgraid-S000030311-1 crmd: [16635]: info: update_abort_priority: Abort action done superceeded by restart
Apr 14 20:35:30 mgraid-S000030311-1 crmd: [16635]: info: match_graph_event: Action icms:1_monitor_0 (13) confirmed on mgraid-s000030311-1 (rc=4)
Apr 14 20:35:30 mgraid-S000030311-1 crmd: [16635]: debug: te_update_diff: Processing diff (cib_modify): 0.55.7 -> 0.55.8 (S_TRANSITION_ENGINE)
Apr 14 20:35:30 mgraid-S000030311-1 crmd: [16635]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify/cib_update_result/diff
Apr 14 20:35:30 mgraid-S000030311-1 crmd: [16635]: WARN: status_from_rc: Action 14 (omserver:1_monitor_0) on mgraid-s000030311-1 failed (target: 7 vs. rc: 0): Error
Apr 14 20:35:30 mgraid-S000030311-1 crmd: [16635]: info: abort_transition_graph: match_graph_event:272 - Triggered transition abort (complete=0, tag=lrm_rsc_op, id=omserver:1_monitor_0, magic=0:0;14:2:7:469a0e5c-f535-4c85-84b2-fd971ee76592, cib=0.55.8) : Event failed
Apr 14 20:35:30 mgraid-S000030311-1 crmd: [16635]: info: match_graph_event: Action omserver:1_monitor_0 (14) confirmed on mgraid-s000030311-1 (rc=4)
Apr 14 20:35:30 mgraid-S000030311-1 crmd: [16635]: info: process_lrm_event: LRM operation SSS000030311:1_monitor_0 (call=5, rc=7, cib-update=113, confirmed=true) not running
Apr 14 20:35:30 mgraid-S000030311-1 crmd: [16635]: info: process_lrm_event: LRM operation SSJ000030313:1_monitor_0 (call=6, rc=7, cib-update=114, confirmed=true) not running
Apr 14 20:35:30 mgraid-S000030311-1 crmd: [16635]: info: process_lrm_event: LRM operation SSJ000030316:1_monitor_0 (call=7, rc=7, cib-update=115, confirmed=true) not running
Apr 14 20:35:30 mgraid-S000030311-1 crmd: [16635]: info: process_lrm_event: LRM operation SSJ000030312:1_monitor_0 (call=8, rc=7, cib-update=116, confirmed=true) not running
Apr 14 20:35:30 mgraid-S000030311-1 crmd: [16635]: debug: run_graph: Transition 2 (Complete=11, Pending=11, Fired=0, Skipped=53, Incomplete=35, Source=/var/lib/pengine/pe-input-5.bz2): In-progress
Apr 14 20:35:30 mgraid-S000030311-1 crmd: [16635]: debug: te_update_diff: Processing diff (cib_modify): 0.55.8 -> 0.55.9 (S_TRANSITION_ENGINE)
Apr 14 20:35:30 mgraid-S000030311-1 crmd: [16635]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify/cib_update_result/diff
Apr 14 20:35:30 mgraid-S000030311-1 crmd: [16635]: info: match_graph_event: Action SSS000030311:1_monitor_0 (15) confirmed on mgraid-s000030311-1 (rc=0)
Apr 14 20:35:30 mgraid-S000030311-1 crmd: [16635]: debug: run_graph: Transition 2 (Complete=12, Pending=10, Fired=0, Skipped=53, Incomplete=35, Source=/var/lib/pengine/pe-input-5.bz2): In-progress
Apr 14 20:35:30 mgraid-S000030311-1 crmd: [16635]: debug: te_update_diff: Processing diff (cib_modify): 0.55.9 -> 0.55.10 (S_TRANSITION_ENGINE)
Apr 14 20:35:30 mgraid-S000030311-1 crmd: [16635]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify/cib_update_result/diff
Apr 14 20:35:30 mgraid-S000030311-1 crmd: [16635]: info: match_graph_event: Action SSJ000030313:1_monitor_0 (16) confirmed on mgraid-s000030311-1 (rc=0)
Apr 14 20:35:30 mgraid-S000030311-1 crmd: [16635]: debug: te_update_diff: Processing diff (cib_modify): 0.55.10 -> 0.55.11 (S_TRANSITION_ENGINE)
Apr 14 20:35:30 mgraid-S000030311-1 crmd: [16635]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify/cib_update_result/diff
Apr 14 20:35:30 mgraid-S000030311-1 crmd: [16635]: info: match_graph_event: Action SSJ000030316:1_monitor_0 (17) confirmed on mgraid-s000030311-1 (rc=0)
Apr 14 20:35:30 mgraid-S000030311-1 crmd: [16635]: debug: run_graph: Transition 2 (Complete=14, Pending=8, Fired=0, Skipped=53, Incomplete=35, Source=/var/lib/pengine/pe-input-5.bz2): In-progress
Apr 14 20:35:30 mgraid-S000030311-1 crmd: [16635]: debug: te_update_diff: Processing diff (cib_modify): 0.55.11 -> 0.55.12 (S_TRANSITION_ENGINE)
Apr 14 20:35:30 mgraid-S000030311-1 crmd: [16635]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify/cib_update_result/diff
Apr 14 20:35:30 mgraid-S000030311-1 crmd: [16635]: info: match_graph_event: Action SSJ000030312:1_monitor_0 (18) confirmed on mgraid-s000030311-1 (rc=0)
Apr 14 20:35:30 mgraid-S000030311-1 crmd: [16635]: info: te_rsc_command: Initiating action 11: probe_complete probe_complete on mgraid-s000030311-1 (local) - no waiting
Apr 14 20:35:30 mgraid-S000030311-1 crmd: [16635]: debug: attrd_update: Sent update: probe_complete=true for localhost
Apr 14 20:35:30 mgraid-S000030311-1 crmd: [16635]: debug: run_graph: Transition 2 (Complete=15, Pending=7, Fired=1, Skipped=53, Incomplete=34, Source=/var/lib/pengine/pe-input-5.bz2): In-progress
Apr 14 20:35:30 mgraid-S000030311-1 crmd: [16635]: debug: run_graph: Transition 2 (Complete=16, Pending=7, Fired=0, Skipped=53, Incomplete=34, Source=/var/lib/pengine/pe-input-5.bz2): In-progress
Apr 14 20:35:30 mgraid-S000030311-1 attrd: [16634]: debug: attrd_local_callback: update message from crmd: probe_complete=true
Apr 14 20:35:30 mgraid-S000030311-1 attrd: [16634]: debug: attrd_local_callback: Supplied: true, Current: true, Stored: true
Apr 14 20:35:30 mgraid-S000030311-1 cib: [16631]: debug: activateCibXml: Triggering CIB write for cib_replace op
Apr 14 20:35:30 mgraid-S000030311-1 cib: [16631]: info: cib_replace_notify: Replaced: 0.55.12 -> 0.57.1 from <null>
Apr 14 20:35:30 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: - <cib epoch="55" num_updates="12" admin_epoch="0" />
Apr 14 20:35:30 mgraid-S000030311-1 attrd: [16634]: info: do_cib_replaced: Sending full refresh
Apr 14 20:35:30 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: + <cib epoch="57" num_updates="1" admin_epoch="0" >
Apr 14 20:35:30 mgraid-S000030311-1 attrd: [16634]: info: attrd_trigger_update: Sending flush op to all hosts for: shutdown (<null>)
Apr 14 20:35:30 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +   <configuration >
Apr 14 20:35:30 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +     <constraints >
Apr 14 20:35:30 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +       <rsc_location id="ms-SSJ000030312-master-w1" rsc="ms-SSJ000030312" __crm_diff_marker__="added:top" >
Apr 14 20:35:30 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +         <rule id="ms-SSJ000030312-master-w1-rule" role="master" score="100" >
Apr 14 20:35:30 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +           <expression attribute="#uname" id="ms-SSJ000030312-master-w1-expression" operation="eq" value="mgraid-s000030311-0" />
Apr 14 20:35:30 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +         </rule>
Apr 14 20:35:30 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +       </rsc_location>
Apr 14 20:35:30 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +     </constraints>
Apr 14 20:35:30 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +   </configuration>
Apr 14 20:35:30 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: + </cib>
Apr 14 20:35:30 mgraid-S000030311-1 cib: [16631]: info: cib_process_request: Operation complete: op cib_replace for section 'all' (origin=local/crm_shadow/2, version=0.57.1): ok (rc=0)
Apr 14 20:35:30 mgraid-S000030311-1 crmd: [16635]: debug: te_update_diff: Processing diff (cib_replace): 0.55.12 -> 0.57.1 (S_TRANSITION_ENGINE)
Apr 14 20:35:30 mgraid-S000030311-1 crmd: [16635]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify/cib_update_result/diff
Apr 14 20:35:30 mgraid-S000030311-1 crmd: [16635]: info: abort_transition_graph: need_abort:59 - Triggered transition abort (complete=0) : Non-status change
Apr 14 20:35:30 mgraid-S000030311-1 crmd: [16635]: info: update_abort_priority: Abort priority upgraded from 1 to 1000000
Apr 14 20:35:30 mgraid-S000030311-1 crmd: [16635]: info: update_abort_priority: 'Event failed' abort superceeded
Apr 14 20:35:30 mgraid-S000030311-1 crmd: [16635]: info: need_abort: Aborting on change to epoch
Apr 14 20:35:30 mgraid-S000030311-1 crmd: [16635]: debug: do_cib_replaced: Updating the CIB after a replace: DC=true
Apr 14 20:35:30 mgraid-S000030311-1 crmd: [16635]: info: populate_cib_nodes_ha: Requesting the list of configured nodes
Apr 14 20:35:30 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='shutdown'] does not exist
Apr 14 20:35:30 mgraid-S000030311-1 cib: [16631]: debug: Forking temp process write_cib_contents
Apr 14 20:35:30 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for shutdown=(null) passed
Apr 14 20:35:30 mgraid-S000030311-1 attrd: [16634]: info: attrd_trigger_update: Sending flush op to all hosts for: master-SSJ000030316:1 (<null>)
Apr 14 20:35:30 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='master-SSJ000030316:1'] does not exist
Apr 14 20:35:30 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for master-SSJ000030316:1=(null) passed
Apr 14 20:35:30 mgraid-S000030311-1 attrd: [16634]: info: attrd_trigger_update: Sending flush op to all hosts for: master-SSJ000030313:1 (<null>)
Apr 14 20:35:30 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='master-SSJ000030313:1'] does not exist
Apr 14 20:35:30 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for master-SSJ000030313:1=(null) passed
Apr 14 20:35:30 mgraid-S000030311-1 attrd: [16634]: info: attrd_trigger_update: Sending flush op to all hosts for: terminate (<null>)
Apr 14 20:35:30 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='terminate'] does not exist
Apr 14 20:35:30 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for terminate=(null) passed
Apr 14 20:35:30 mgraid-S000030311-1 attrd: [16634]: info: attrd_trigger_update: Sending flush op to all hosts for: master-SSS000030311:1 (<null>)
Apr 14 20:35:30 mgraid-S000030311-1 cib: [18403]: info: write_cib_contents: Archived previous version as /var/lib/heartbeat/crm/cib-25.raw
Apr 14 20:35:30 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='master-SSS000030311:1'] does not exist
Apr 14 20:35:30 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for master-SSS000030311:1=(null) passed
Apr 14 20:35:30 mgraid-S000030311-1 attrd: [16634]: info: attrd_trigger_update: Sending flush op to all hosts for: probe_complete (true)
Apr 14 20:35:30 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='probe_complete'] (/cib/status/node_state[2]/transient_attributes/instance_attributes/nvpair)
Apr 14 20:35:30 mgraid-S000030311-1 attrd: [16634]: debug: log_data_element: find_nvpair_attr: Match <nvpair id="status-856c1f72-7cd1-4906-8183-8be87eef96f2-probe_complete" name="probe_complete" value="true" />
Apr 14 20:35:30 mgraid-S000030311-1 cib: [18403]: info: write_cib_contents: Wrote version 0.57.0 of the CIB to disk (digest: 7620ddd9109ce3c643be923051786456)
Apr 14 20:35:30 mgraid-S000030311-1 attrd: [16634]: info: attrd_trigger_update: Sending flush op to all hosts for: master-SSJ000030312:1 (<null>)
Apr 14 20:35:30 mgraid-S000030311-1 cib: [18403]: info: retrieveCib: Reading cluster configuration from: /var/lib/heartbeat/crm/cib.h4KRyt (digest: /var/lib/heartbeat/crm/cib.MNRPe3)
Apr 14 20:35:30 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='master-SSJ000030312:1'] does not exist
Apr 14 20:35:30 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for master-SSJ000030312:1=(null) passed
Apr 14 20:35:30 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update 157 for probe_complete=true passed
Apr 14 20:35:30 mgraid-S000030311-1 cib: [16631]: info: Managed write_cib_contents process 18403 exited with return code 0.
Apr 14 20:35:30 mgraid-S000030311-1 attrd: [16634]: info: attrd_ha_callback: flush message from mgraid-s000030311-1
Apr 14 20:35:30 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='shutdown'] does not exist
Apr 14 20:35:30 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for shutdown=(null) passed
Apr 14 20:35:30 mgraid-S000030311-1 attrd: [16634]: info: attrd_ha_callback: flush message from mgraid-s000030311-1
Apr 14 20:35:30 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='master-SSJ000030316:1'] does not exist
Apr 14 20:35:30 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for master-SSJ000030316:1=(null) passed
Apr 14 20:35:30 mgraid-S000030311-1 attrd: [16634]: info: attrd_ha_callback: flush message from mgraid-s000030311-1
Apr 14 20:35:30 mgraid-S000030311-1 crmd: [16635]: debug: populate_cib_nodes_ha: Node: mgraid-s000030311-1 (uuid: 856c1f72-7cd1-4906-8183-8be87eef96f2)
Apr 14 20:35:30 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='master-SSJ000030313:1'] does not exist
Apr 14 20:35:30 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for master-SSJ000030313:1=(null) passed
Apr 14 20:35:30 mgraid-S000030311-1 attrd: [16634]: info: attrd_ha_callback: flush message from mgraid-s000030311-1
Apr 14 20:35:30 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='terminate'] does not exist
Apr 14 20:35:30 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for terminate=(null) passed
Apr 14 20:35:30 mgraid-S000030311-1 attrd: [16634]: info: attrd_ha_callback: flush message from mgraid-s000030311-1
Apr 14 20:35:30 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='master-SSS000030311:1'] does not exist
Apr 14 20:35:30 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for master-SSS000030311:1=(null) passed
Apr 14 20:35:30 mgraid-S000030311-1 attrd: [16634]: info: attrd_ha_callback: flush message from mgraid-s000030311-1
Apr 14 20:35:30 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='master-SSJ000030312:1'] does not exist
Apr 14 20:35:30 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for master-SSJ000030312:1=(null) passed
Apr 14 20:35:30 mgraid-S000030311-1 crmd: [16635]: debug: populate_cib_nodes_ha: Node: mgraid-s000030311-0 (uuid: f4e5e15c-d06b-4e37-89b9-4621af05128f)
Apr 14 20:35:30 mgraid-S000030311-1 crmd: [16635]: debug: ghash_update_cib_node: Updating mgraid-s000030311-0: true (overwrite=true) hash_size=2
Apr 14 20:35:30 mgraid-S000030311-1 crmd: [16635]: debug: ghash_update_cib_node: Updating mgraid-s000030311-1: true (overwrite=true) hash_size=2
Apr 14 20:35:30 mgraid-S000030311-1 crmd: [16635]: debug: s_crmd_fsa: Processing I_ELECTION: [ state=S_TRANSITION_ENGINE cause=C_FSA_INTERNAL origin=do_cib_replaced ]
Apr 14 20:35:30 mgraid-S000030311-1 crmd: [16635]: info: do_state_transition: State transition S_TRANSITION_ENGINE -> S_ELECTION [ input=I_ELECTION cause=C_FSA_INTERNAL origin=do_cib_replaced ]
Apr 14 20:35:30 mgraid-S000030311-1 crmd: [16635]: info: update_dc: Unset DC mgraid-s000030311-1
Apr 14 20:35:30 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
Apr 14 20:35:30 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_INTEGRATE_TIMER_STOP
Apr 14 20:35:30 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_FINALIZE_TIMER_STOP
Apr 14 20:35:30 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_VOTE
Apr 14 20:35:30 mgraid-S000030311-1 crmd: [16635]: debug: do_election_vote: Started election 27
Apr 14 20:35:30 mgraid-S000030311-1 crmd: [16635]: debug: crm_timer_start: Started Election Timeout (I_ELECTION_DC:120000ms), src=132
Apr 14 20:35:30 mgraid-S000030311-1 crmd: [16635]: debug: run_graph: Transition 2 (Complete=16, Pending=7, Fired=0, Skipped=53, Incomplete=34, Source=/var/lib/pengine/pe-input-5.bz2): In-progress
Apr 14 20:35:31 mgraid-S000030311-1 cib: [16631]: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/117, version=0.57.1): ok (rc=0)
Apr 14 20:35:31 mgraid-S000030311-1 lrmd: [16632]: debug: on_msg_register:client lrmadmin [18505] registered
Apr 14 20:35:31 mgraid-S000030311-1 lrmd: [16632]: debug: on_receive_cmd: the IPC to client [pid:18505] disconnected.
Apr 14 20:35:31 mgraid-S000030311-1 lrmd: [16632]: debug: unregister_client: client lrmadmin [pid:18505] is unregistered
Apr 14 20:35:31 mgraid-S000030311-1 lrmd: [16632]: debug: on_msg_register:client lrmadmin [18506] registered
Apr 14 20:35:31 mgraid-S000030311-1 lrmd: [16632]: debug: on_receive_cmd: the IPC to client [pid:18506] disconnected.
Apr 14 20:35:31 mgraid-S000030311-1 lrmd: [16632]: debug: unregister_client: client lrmadmin [pid:18506] is unregistered
Apr 14 20:35:31 mgraid-S000030311-1 cib: [16631]: debug: activateCibXml: Triggering CIB write for cib_replace op
Apr 14 20:35:31 mgraid-S000030311-1 cib: [16631]: info: cib_replace_notify: Local-only Replace: 0.59.1 from <null>
Apr 14 20:35:31 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: - <cib epoch="57" admin_epoch="0" num_updates="1" />
Apr 14 20:35:31 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: + <cib epoch="59" admin_epoch="0" num_updates="1" >
Apr 14 20:35:31 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +   <configuration >
Apr 14 20:35:31 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +     <resources >
Apr 14 20:35:31 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +       <primitive class="ocf" id="SSJ000030314" provider="omneon" type="ss" __crm_diff_marker__="added:top" >
Apr 14 20:35:31 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +         <instance_attributes id="SSJ000030314-instance_attributes" >
Apr 14 20:35:31 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +           <nvpair id="SSJ000030314-instance_attributes-ss_resource" name="ss_resource" value="SSJ000030314" />
Apr 14 20:35:31 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +           <nvpair id="SSJ000030314-instance_attributes-ssconf" name="ssconf" value="/var/omneon/config/config.J000030314" />
Apr 14 20:35:31 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +         </instance_attributes>
Apr 14 20:35:31 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +         <operations >
Apr 14 20:35:31 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +           <op id="SSJ000030314-monitor-3s" interval="3s" name="monitor" role="Master" timeout="7s" />
Apr 14 20:35:31 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +           <op id="SSJ000030314-monitor-10s" interval="10s" name="monitor" role="Slave" timeout="7" />
Apr 14 20:35:31 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +           <op id="SSJ000030314-stop-0" interval="0" name="stop" timeout="20" />
Apr 14 20:35:31 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +           <op id="SSJ000030314-start-0" interval="0" name="start" timeout="300" />
Apr 14 20:35:31 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +         </operations>
Apr 14 20:35:31 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +       </primitive>
Apr 14 20:35:31 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +     </resources>
Apr 14 20:35:31 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +   </configuration>
Apr 14 20:35:31 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: + </cib>
Apr 14 20:35:31 mgraid-S000030311-1 crmd: [16635]: debug: te_update_diff: Processing diff (cib_replace): 0.57.1 -> 0.59.1 (S_ELECTION)
Apr 14 20:35:31 mgraid-S000030311-1 cib: [16631]: info: cib_process_request: Operation complete: op cib_replace for section 'all' (origin=local/crm_shadow/2, version=0.59.1): ok (rc=0)
Apr 14 20:35:31 mgraid-S000030311-1 crmd: [16635]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify/cib_update_result/diff
Apr 14 20:35:31 mgraid-S000030311-1 crmd: [16635]: info: abort_transition_graph: need_abort:59 - Triggered transition abort (complete=0) : Non-status change
Apr 14 20:35:31 mgraid-S000030311-1 crmd: [16635]: info: need_abort: Aborting on change to epoch
Apr 14 20:35:31 mgraid-S000030311-1 crmd: [16635]: debug: do_cib_replaced: Updating the CIB after a replace: DC=true
Apr 14 20:35:31 mgraid-S000030311-1 crmd: [16635]: info: populate_cib_nodes_ha: Requesting the list of configured nodes
Apr 14 20:35:31 mgraid-S000030311-1 attrd: [16634]: info: do_cib_replaced: Sending full refresh
Apr 14 20:35:31 mgraid-S000030311-1 attrd: [16634]: info: attrd_trigger_update: Sending flush op to all hosts for: shutdown (<null>)
Apr 14 20:35:31 mgraid-S000030311-1 cib: [16631]: debug: xmlfromIPC: Peer disconnected
Apr 14 20:35:31 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='shutdown'] does not exist
Apr 14 20:35:31 mgraid-S000030311-1 cib: [16631]: debug: Forking temp process write_cib_contents
Apr 14 20:35:31 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for shutdown=(null) passed
Apr 14 20:35:31 mgraid-S000030311-1 attrd: [16634]: info: attrd_trigger_update: Sending flush op to all hosts for: master-SSJ000030316:1 (<null>)
Apr 14 20:35:31 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='master-SSJ000030316:1'] does not exist
Apr 14 20:35:31 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for master-SSJ000030316:1=(null) passed
Apr 14 20:35:31 mgraid-S000030311-1 attrd: [16634]: info: attrd_trigger_update: Sending flush op to all hosts for: master-SSJ000030313:1 (<null>)
Apr 14 20:35:31 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='master-SSJ000030313:1'] does not exist
Apr 14 20:35:31 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for master-SSJ000030313:1=(null) passed
Apr 14 20:35:31 mgraid-S000030311-1 attrd: [16634]: info: attrd_trigger_update: Sending flush op to all hosts for: terminate (<null>)
Apr 14 20:35:31 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='terminate'] does not exist
Apr 14 20:35:31 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for terminate=(null) passed
Apr 14 20:35:31 mgraid-S000030311-1 attrd: [16634]: info: attrd_trigger_update: Sending flush op to all hosts for: master-SSS000030311:1 (<null>)
Apr 14 20:35:31 mgraid-S000030311-1 cib: [18523]: info: write_cib_contents: Archived previous version as /var/lib/heartbeat/crm/cib-26.raw
Apr 14 20:35:31 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='master-SSS000030311:1'] does not exist
Apr 14 20:35:31 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for master-SSS000030311:1=(null) passed
Apr 14 20:35:31 mgraid-S000030311-1 attrd: [16634]: info: attrd_trigger_update: Sending flush op to all hosts for: probe_complete (true)
Apr 14 20:35:31 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='probe_complete'] (/cib/status/node_state[2]/transient_attributes/instance_attributes/nvpair)
Apr 14 20:35:31 mgraid-S000030311-1 attrd: [16634]: debug: log_data_element: find_nvpair_attr: Match <nvpair id="status-856c1f72-7cd1-4906-8183-8be87eef96f2-probe_complete" name="probe_complete" value="true" />
Apr 14 20:35:31 mgraid-S000030311-1 attrd: [16634]: info: attrd_trigger_update: Sending flush op to all hosts for: master-SSJ000030312:1 (<null>)
Apr 14 20:35:31 mgraid-S000030311-1 cib: [18523]: info: write_cib_contents: Wrote version 0.59.0 of the CIB to disk (digest: c588cc89d5b466b7062926e105466597)
Apr 14 20:35:31 mgraid-S000030311-1 cib: [18523]: info: retrieveCib: Reading cluster configuration from: /var/lib/heartbeat/crm/cib.6uJYS7 (digest: /var/lib/heartbeat/crm/cib.BjzakJ)
Apr 14 20:35:31 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='master-SSJ000030312:1'] does not exist
Apr 14 20:35:31 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for master-SSJ000030312:1=(null) passed
Apr 14 20:35:31 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update 171 for probe_complete=true passed
Apr 14 20:35:31 mgraid-S000030311-1 cib: [16631]: info: Managed write_cib_contents process 18523 exited with return code 0.
Apr 14 20:35:31 mgraid-S000030311-1 attrd: [16634]: info: attrd_ha_callback: flush message from mgraid-s000030311-1
Apr 14 20:35:31 mgraid-S000030311-1 crmd: [16635]: debug: populate_cib_nodes_ha: Node: mgraid-s000030311-1 (uuid: 856c1f72-7cd1-4906-8183-8be87eef96f2)
Apr 14 20:35:31 mgraid-S000030311-1 crmd: [16635]: debug: populate_cib_nodes_ha: Node: mgraid-s000030311-0 (uuid: f4e5e15c-d06b-4e37-89b9-4621af05128f)
Apr 14 20:35:31 mgraid-S000030311-1 crmd: [16635]: debug: ghash_update_cib_node: Updating mgraid-s000030311-0: true (overwrite=true) hash_size=2
Apr 14 20:35:31 mgraid-S000030311-1 crmd: [16635]: debug: ghash_update_cib_node: Updating mgraid-s000030311-1: true (overwrite=true) hash_size=2
Apr 14 20:35:31 mgraid-S000030311-1 crmd: [16635]: debug: te_update_diff: Processing diff (cib_modify): 0.59.1 -> 0.59.2 (S_ELECTION)
Apr 14 20:35:31 mgraid-S000030311-1 crmd: [16635]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify/cib_update_result/diff
Apr 14 20:35:31 mgraid-S000030311-1 crmd: [16635]: info: match_graph_event: Action mgraid-stonith:0_monitor_0 (4) confirmed on mgraid-s000030311-0 (rc=0)
Apr 14 20:35:31 mgraid-S000030311-1 crmd: [16635]: debug: s_crmd_fsa: Processing I_ELECTION: [ state=S_ELECTION cause=C_FSA_INTERNAL origin=do_cib_replaced ]
Apr 14 20:35:31 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_VOTE
Apr 14 20:35:31 mgraid-S000030311-1 crmd: [16635]: debug: do_election_vote: Started election 28
Apr 14 20:35:31 mgraid-S000030311-1 crmd: [16635]: debug: crm_timer_start: Election Timeout (I_ELECTION_DC:120000ms) already running: src=132
Apr 14 20:35:31 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_COUNT
Apr 14 20:35:31 mgraid-S000030311-1 crmd: [16635]: debug: do_election_count_vote: Created voted hash
Apr 14 20:35:31 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_CHECK
Apr 14 20:35:31 mgraid-S000030311-1 crmd: [16635]: debug: do_election_check: Still waiting on 2 non-votes (2 total)
Apr 14 20:35:31 mgraid-S000030311-1 crmd: [16635]: debug: run_graph: Transition 2 (Complete=17, Pending=6, Fired=0, Skipped=53, Incomplete=34, Source=/var/lib/pengine/pe-input-5.bz2): In-progress
Apr 14 20:35:31 mgraid-S000030311-1 crmd: [16635]: debug: te_update_diff: Processing diff (cib_modify): 0.59.2 -> 0.59.3 (S_ELECTION)
Apr 14 20:35:31 mgraid-S000030311-1 crmd: [16635]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify/cib_update_result/diff
Apr 14 20:35:31 mgraid-S000030311-1 crmd: [16635]: WARN: status_from_rc: Action 5 (icms:0_monitor_0) on mgraid-s000030311-0 failed (target: 7 vs. rc: 0): Error
Apr 14 20:35:31 mgraid-S000030311-1 crmd: [16635]: info: abort_transition_graph: match_graph_event:272 - Triggered transition abort (complete=0, tag=lrm_rsc_op, id=icms:0_monitor_0, magic=0:0;5:2:7:469a0e5c-f535-4c85-84b2-fd971ee76592, cib=0.59.3) : Event failed
Apr 14 20:35:31 mgraid-S000030311-1 crmd: [16635]: info: match_graph_event: Action icms:0_monitor_0 (5) confirmed on mgraid-s000030311-0 (rc=4)
Apr 14 20:35:31 mgraid-S000030311-1 crmd: [16635]: debug: te_update_diff: Processing diff (cib_modify): 0.59.3 -> 0.59.4 (S_ELECTION)
Apr 14 20:35:31 mgraid-S000030311-1 crmd: [16635]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify/cib_update_result/diff
Apr 14 20:35:31 mgraid-S000030311-1 crmd: [16635]: WARN: status_from_rc: Action 6 (omserver:0_monitor_0) on mgraid-s000030311-0 failed (target: 7 vs. rc: 0): Error
Apr 14 20:35:31 mgraid-S000030311-1 crmd: [16635]: info: abort_transition_graph: match_graph_event:272 - Triggered transition abort (complete=0, tag=lrm_rsc_op, id=omserver:0_monitor_0, magic=0:0;6:2:7:469a0e5c-f535-4c85-84b2-fd971ee76592, cib=0.59.4) : Event failed
Apr 14 20:35:31 mgraid-S000030311-1 crmd: [16635]: info: match_graph_event: Action omserver:0_monitor_0 (6) confirmed on mgraid-s000030311-0 (rc=4)
Apr 14 20:35:31 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_COUNT
Apr 14 20:35:31 mgraid-S000030311-1 crmd: [16635]: debug: do_election_count_vote: Election 28 (current: 28, owner: 856c1f72-7cd1-4906-8183-8be87eef96f2): Processed vote from mgraid-s000030311-1 (Recorded)
Apr 14 20:35:31 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_CHECK
Apr 14 20:35:31 mgraid-S000030311-1 crmd: [16635]: debug: do_election_check: Still waiting on 1 non-votes (2 total)
Apr 14 20:35:31 mgraid-S000030311-1 crmd: [16635]: debug: run_graph: Transition 2 (Complete=19, Pending=4, Fired=0, Skipped=53, Incomplete=34, Source=/var/lib/pengine/pe-input-5.bz2): In-progress
Apr 14 20:35:31 mgraid-S000030311-1 crmd: [16635]: debug: te_update_diff: Processing diff (cib_modify): 0.59.4 -> 0.59.5 (S_ELECTION)
Apr 14 20:35:31 mgraid-S000030311-1 crmd: [16635]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify/cib_update_result/diff
Apr 14 20:35:31 mgraid-S000030311-1 crmd: [16635]: info: match_graph_event: Action SSS000030311:0_monitor_0 (7) confirmed on mgraid-s000030311-0 (rc=0)
Apr 14 20:35:31 mgraid-S000030311-1 crmd: [16635]: debug: run_graph: Transition 2 (Complete=20, Pending=3, Fired=0, Skipped=53, Incomplete=34, Source=/var/lib/pengine/pe-input-5.bz2): In-progress
Apr 14 20:35:31 mgraid-S000030311-1 crmd: [16635]: debug: te_update_diff: Processing diff (cib_modify): 0.59.5 -> 0.59.6 (S_ELECTION)
Apr 14 20:35:31 mgraid-S000030311-1 crmd: [16635]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify/cib_update_result/diff
Apr 14 20:35:31 mgraid-S000030311-1 crmd: [16635]: info: match_graph_event: Action SSJ000030316:0_monitor_0 (9) confirmed on mgraid-s000030311-0 (rc=0)
Apr 14 20:35:31 mgraid-S000030311-1 crmd: [16635]: debug: run_graph: Transition 2 (Complete=21, Pending=2, Fired=0, Skipped=53, Incomplete=34, Source=/var/lib/pengine/pe-input-5.bz2): In-progress
Apr 14 20:35:31 mgraid-S000030311-1 crmd: [16635]: debug: te_update_diff: Processing diff (cib_modify): 0.59.6 -> 0.59.7 (S_ELECTION)
Apr 14 20:35:31 mgraid-S000030311-1 crmd: [16635]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify/cib_update_result/diff
Apr 14 20:35:31 mgraid-S000030311-1 crmd: [16635]: info: match_graph_event: Action SSJ000030313:0_monitor_0 (8) confirmed on mgraid-s000030311-0 (rc=0)
Apr 14 20:35:31 mgraid-S000030311-1 crmd: [16635]: debug: run_graph: Transition 2 (Complete=22, Pending=1, Fired=0, Skipped=53, Incomplete=34, Source=/var/lib/pengine/pe-input-5.bz2): In-progress
Apr 14 20:35:31 mgraid-S000030311-1 crmd: [16635]: debug: te_update_diff: Processing diff (cib_modify): 0.59.7 -> 0.59.8 (S_ELECTION)
Apr 14 20:35:31 mgraid-S000030311-1 crmd: [16635]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify/cib_update_result/diff
Apr 14 20:35:31 mgraid-S000030311-1 crmd: [16635]: info: match_graph_event: Action SSJ000030312:0_monitor_0 (10) confirmed on mgraid-s000030311-0 (rc=0)
Apr 14 20:35:31 mgraid-S000030311-1 crmd: [16635]: info: te_rsc_command: Initiating action 3: probe_complete probe_complete on mgraid-s000030311-0 - no waiting
Apr 14 20:35:31 mgraid-S000030311-1 crmd: [16635]: debug: run_graph: Transition 2 (Complete=23, Pending=0, Fired=1, Skipped=53, Incomplete=33, Source=/var/lib/pengine/pe-input-5.bz2): In-progress
Apr 14 20:35:31 mgraid-S000030311-1 crmd: [16635]: info: run_graph: ====================================================
Apr 14 20:35:31 mgraid-S000030311-1 crmd: [16635]: notice: run_graph: Transition 2 (Complete=24, Pending=0, Fired=0, Skipped=53, Incomplete=33, Source=/var/lib/pengine/pe-input-5.bz2): Stopped
Apr 14 20:35:31 mgraid-S000030311-1 crmd: [16635]: info: te_graph_trigger: Transition 2 is now complete
Apr 14 20:35:31 mgraid-S000030311-1 crmd: [16635]: debug: notify_crmd: Processing transition completion in state S_ELECTION
Apr 14 20:35:31 mgraid-S000030311-1 crmd: [16635]: debug: notify_crmd: Transition 2 status: restart - Non-status change
Apr 14 20:35:31 mgraid-S000030311-1 cib: [16631]: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/119, version=0.59.8): ok (rc=0)
Apr 14 20:35:31 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='shutdown'] does not exist
Apr 14 20:35:31 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for shutdown=(null) passed
Apr 14 20:35:31 mgraid-S000030311-1 attrd: [16634]: info: attrd_ha_callback: flush message from mgraid-s000030311-1
Apr 14 20:35:31 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='master-SSJ000030316:1'] does not exist
Apr 14 20:35:31 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for master-SSJ000030316:1=(null) passed
Apr 14 20:35:31 mgraid-S000030311-1 attrd: [16634]: info: attrd_ha_callback: flush message from mgraid-s000030311-1
Apr 14 20:35:31 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='master-SSJ000030313:1'] does not exist
Apr 14 20:35:31 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for master-SSJ000030313:1=(null) passed
Apr 14 20:35:31 mgraid-S000030311-1 attrd: [16634]: info: attrd_ha_callback: flush message from mgraid-s000030311-1
Apr 14 20:35:31 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='terminate'] does not exist
Apr 14 20:35:31 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for terminate=(null) passed
Apr 14 20:35:31 mgraid-S000030311-1 attrd: [16634]: info: attrd_ha_callback: flush message from mgraid-s000030311-1
Apr 14 20:35:31 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='master-SSS000030311:1'] does not exist
Apr 14 20:35:31 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for master-SSS000030311:1=(null) passed
Apr 14 20:35:31 mgraid-S000030311-1 attrd: [16634]: info: attrd_ha_callback: flush message from mgraid-s000030311-1
Apr 14 20:35:31 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='master-SSJ000030312:1'] does not exist
Apr 14 20:35:31 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for master-SSJ000030312:1=(null) passed
Apr 14 20:35:31 mgraid-S000030311-1 cib: [16631]: debug: activateCibXml: Triggering CIB write for cib_replace op
Apr 14 20:35:31 mgraid-S000030311-1 cib: [16631]: info: cib_replace_notify: Replaced: 0.59.8 -> 0.62.1 from <null>
Apr 14 20:35:31 mgraid-S000030311-1 crmd: [16635]: debug: do_cib_replaced: Updating the CIB after a replace: DC=true
Apr 14 20:35:31 mgraid-S000030311-1 crmd: [16635]: info: populate_cib_nodes_ha: Requesting the list of configured nodes
Apr 14 20:35:31 mgraid-S000030311-1 attrd: [16634]: info: do_cib_replaced: Sending full refresh
Apr 14 20:35:31 mgraid-S000030311-1 attrd: [16634]: info: attrd_trigger_update: Sending flush op to all hosts for: shutdown (<null>)
Apr 14 20:35:31 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: - <cib epoch="59" num_updates="8" admin_epoch="0" >
Apr 14 20:35:31 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: -   <configuration >
Apr 14 20:35:31 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: -     <resources >
Apr 14 20:35:31 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: -       <primitive class="ocf" id="SSJ000030314" provider="omneon" type="ss" __crm_diff_marker__="removed:top" >
Apr 14 20:35:31 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: -         <instance_attributes id="SSJ000030314-instance_attributes" >
Apr 14 20:35:31 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: -           <nvpair id="SSJ000030314-instance_attributes-ss_resource" name="ss_resource" value="SSJ000030314" />
Apr 14 20:35:31 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: -           <nvpair id="SSJ000030314-instance_attributes-ssconf" name="ssconf" value="/var/omneon/config/config.J000030314" />
Apr 14 20:35:31 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: -         </instance_attributes>
Apr 14 20:35:31 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: -         <operations >
Apr 14 20:35:31 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: -           <op id="SSJ000030314-monitor-3s" interval="3s" name="monitor" role="Master" timeout="7s" />
Apr 14 20:35:31 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: -           <op id="SSJ000030314-monitor-10s" interval="10s" name="monitor" role="Slave" timeout="7" />
Apr 14 20:35:31 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: -           <op id="SSJ000030314-stop-0" interval="0" name="stop" timeout="20" />
Apr 14 20:35:31 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: -           <op id="SSJ000030314-start-0" interval="0" name="start" timeout="300" />
Apr 14 20:35:31 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: -         </operations>
Apr 14 20:35:31 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: -       </primitive>
Apr 14 20:35:31 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: -     </resources>
Apr 14 20:35:31 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: -   </configuration>
Apr 14 20:35:31 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: - </cib>
Apr 14 20:35:31 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: + <cib epoch="62" num_updates="1" admin_epoch="0" >
Apr 14 20:35:31 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +   <configuration >
Apr 14 20:35:31 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +     <resources >
Apr 14 20:35:31 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +       <master id="ms-SSJ000030314" __crm_diff_marker__="added:top" >
Apr 14 20:35:31 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +         <meta_attributes id="ms-SSJ000030314-meta_attributes" >
Apr 14 20:35:31 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +           <nvpair id="ms-SSJ000030314-meta_attributes-clone-max" name="clone-max" value="2" />
Apr 14 20:35:31 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +           <nvpair id="ms-SSJ000030314-meta_attributes-notify" name="notify" value="true" />
Apr 14 20:35:31 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +           <nvpair id="ms-SSJ000030314-meta_attributes-globally-unique" name="globally-unique" value="false" />
Apr 14 20:35:31 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +           <nvpair id="ms-SSJ000030314-meta_attributes-target-role" name="target-role" value="Started" />
Apr 14 20:35:31 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +         </meta_attributes>
Apr 14 20:35:31 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +         <primitive class="ocf" id="SSJ000030314" provider="omneon" type="ss" >
Apr 14 20:35:31 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +           <instance_attributes id="SSJ000030314-instance_attributes" >
Apr 14 20:35:31 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +             <nvpair id="SSJ000030314-instance_attributes-ss_resource" name="ss_resource" value="SSJ000030314" />
Apr 14 20:35:31 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +             <nvpair id="SSJ000030314-instance_attributes-ssconf" name="ssconf" value="/var/omneon/config/config.J000030314" />
Apr 14 20:35:31 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +           </instance_attributes>
Apr 14 20:35:31 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +           <operations >
Apr 14 20:35:31 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +             <op id="SSJ000030314-monitor-3s" interval="3s" name="monitor" role="Master" timeout="7s" />
Apr 14 20:35:31 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +             <op id="SSJ000030314-monitor-10s" interval="10s" name="monitor" role="Slave" timeout="7" />
Apr 14 20:35:31 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +             <op id="SSJ000030314-stop-0" interval="0" name="stop" timeout="20" />
Apr 14 20:35:31 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +             <op id="SSJ000030314-start-0" interval="0" name="start" timeout="300" />
Apr 14 20:35:31 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +           </operations>
Apr 14 20:35:31 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +         </primitive>
Apr 14 20:35:31 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +       </master>
Apr 14 20:35:31 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +     </resources>
Apr 14 20:35:31 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +   </configuration>
Apr 14 20:35:31 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: + </cib>
Apr 14 20:35:31 mgraid-S000030311-1 cib: [16631]: info: cib_process_request: Operation complete: op cib_replace for section 'all' (origin=local/crm_shadow/2, version=0.62.1): ok (rc=0)
Apr 14 20:35:31 mgraid-S000030311-1 cib: [16631]: debug: xmlfromIPC: Peer disconnected
Apr 14 20:35:31 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='shutdown'] does not exist
Apr 14 20:35:31 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for shutdown=(null) passed
Apr 14 20:35:31 mgraid-S000030311-1 attrd: [16634]: info: attrd_trigger_update: Sending flush op to all hosts for: master-SSJ000030316:1 (<null>)
Apr 14 20:35:31 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='master-SSJ000030316:1'] does not exist
Apr 14 20:35:31 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for master-SSJ000030316:1=(null) passed
Apr 14 20:35:31 mgraid-S000030311-1 attrd: [16634]: info: attrd_trigger_update: Sending flush op to all hosts for: master-SSJ000030313:1 (<null>)
Apr 14 20:35:31 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='master-SSJ000030313:1'] does not exist
Apr 14 20:35:31 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for master-SSJ000030313:1=(null) passed
Apr 14 20:35:31 mgraid-S000030311-1 attrd: [16634]: info: attrd_trigger_update: Sending flush op to all hosts for: terminate (<null>)
Apr 14 20:35:31 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='terminate'] does not exist
Apr 14 20:35:31 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for terminate=(null) passed
Apr 14 20:35:31 mgraid-S000030311-1 attrd: [16634]: info: attrd_trigger_update: Sending flush op to all hosts for: master-SSS000030311:1 (<null>)
Apr 14 20:35:31 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='master-SSS000030311:1'] does not exist
Apr 14 20:35:31 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for master-SSS000030311:1=(null) passed
Apr 14 20:35:31 mgraid-S000030311-1 attrd: [16634]: info: attrd_trigger_update: Sending flush op to all hosts for: probe_complete (true)
Apr 14 20:35:31 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='probe_complete'] (/cib/status/node_state[2]/transient_attributes/instance_attributes/nvpair)
Apr 14 20:35:31 mgraid-S000030311-1 attrd: [16634]: debug: log_data_element: find_nvpair_attr: Match <nvpair id="status-856c1f72-7cd1-4906-8183-8be87eef96f2-probe_complete" name="probe_complete" value="true" />
Apr 14 20:35:31 mgraid-S000030311-1 attrd: [16634]: info: attrd_trigger_update: Sending flush op to all hosts for: master-SSJ000030312:1 (<null>)
Apr 14 20:35:31 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='master-SSJ000030312:1'] does not exist
Apr 14 20:35:31 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for master-SSJ000030312:1=(null) passed
Apr 14 20:35:31 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update 185 for probe_complete=true passed
Apr 14 20:35:31 mgraid-S000030311-1 cib: [16631]: debug: Forking temp process write_cib_contents
Apr 14 20:35:31 mgraid-S000030311-1 cib: [18554]: info: write_cib_contents: Archived previous version as /var/lib/heartbeat/crm/cib-27.raw
Apr 14 20:35:31 mgraid-S000030311-1 cib: [18554]: info: write_cib_contents: Wrote version 0.62.0 of the CIB to disk (digest: 31e4a474c45d690d5b5631b7d7f12b83)
Apr 14 20:35:31 mgraid-S000030311-1 cib: [18554]: info: retrieveCib: Reading cluster configuration from: /var/lib/heartbeat/crm/cib.VdfmuN (digest: /var/lib/heartbeat/crm/cib.J2mIoq)
Apr 14 20:35:31 mgraid-S000030311-1 cib: [16631]: info: Managed write_cib_contents process 18554 exited with return code 0.
Apr 14 20:35:31 mgraid-S000030311-1 attrd: [16634]: info: attrd_ha_callback: flush message from mgraid-s000030311-1
Apr 14 20:35:31 mgraid-S000030311-1 crmd: [16635]: debug: populate_cib_nodes_ha: Node: mgraid-s000030311-1 (uuid: 856c1f72-7cd1-4906-8183-8be87eef96f2)
Apr 14 20:35:31 mgraid-S000030311-1 crmd: [16635]: debug: populate_cib_nodes_ha: Node: mgraid-s000030311-0 (uuid: f4e5e15c-d06b-4e37-89b9-4621af05128f)
Apr 14 20:35:31 mgraid-S000030311-1 crmd: [16635]: debug: ghash_update_cib_node: Updating mgraid-s000030311-0: true (overwrite=true) hash_size=2
Apr 14 20:35:31 mgraid-S000030311-1 crmd: [16635]: debug: ghash_update_cib_node: Updating mgraid-s000030311-1: true (overwrite=true) hash_size=2
Apr 14 20:35:31 mgraid-S000030311-1 crmd: [16635]: debug: s_crmd_fsa: Processing I_ELECTION: [ state=S_ELECTION cause=C_FSA_INTERNAL origin=do_cib_replaced ]
Apr 14 20:35:31 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_VOTE
Apr 14 20:35:31 mgraid-S000030311-1 crmd: [16635]: debug: do_election_vote: Started election 29
Apr 14 20:35:31 mgraid-S000030311-1 crmd: [16635]: debug: crm_timer_start: Election Timeout (I_ELECTION_DC:120000ms) already running: src=132
Apr 14 20:35:31 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_COUNT
Apr 14 20:35:31 mgraid-S000030311-1 crmd: [16635]: debug: do_election_count_vote: Created voted hash
Apr 14 20:35:31 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_CHECK
Apr 14 20:35:31 mgraid-S000030311-1 crmd: [16635]: debug: do_election_check: Still waiting on 2 non-votes (2 total)
Apr 14 20:35:31 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_COUNT
Apr 14 20:35:31 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_CHECK
Apr 14 20:35:31 mgraid-S000030311-1 crmd: [16635]: debug: do_election_check: Still waiting on 2 non-votes (2 total)
Apr 14 20:35:31 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_COUNT
Apr 14 20:35:31 mgraid-S000030311-1 crmd: [16635]: debug: do_election_count_vote: Election 29 (current: 29, owner: 856c1f72-7cd1-4906-8183-8be87eef96f2): Processed vote from mgraid-s000030311-1 (Recorded)
Apr 14 20:35:31 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_CHECK
Apr 14 20:35:31 mgraid-S000030311-1 crmd: [16635]: debug: do_election_check: Still waiting on 1 non-votes (2 total)
Apr 14 20:35:31 mgraid-S000030311-1 cib: [16631]: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/121, version=0.62.1): ok (rc=0)
Apr 14 20:35:32 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='shutdown'] does not exist
Apr 14 20:35:32 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for shutdown=(null) passed
Apr 14 20:35:32 mgraid-S000030311-1 attrd: [16634]: info: attrd_ha_callback: flush message from mgraid-s000030311-1
Apr 14 20:35:32 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='master-SSJ000030316:1'] does not exist
Apr 14 20:35:32 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for master-SSJ000030316:1=(null) passed
Apr 14 20:35:32 mgraid-S000030311-1 attrd: [16634]: info: attrd_ha_callback: flush message from mgraid-s000030311-1
Apr 14 20:35:32 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='master-SSJ000030313:1'] does not exist
Apr 14 20:35:32 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for master-SSJ000030313:1=(null) passed
Apr 14 20:35:32 mgraid-S000030311-1 attrd: [16634]: info: attrd_ha_callback: flush message from mgraid-s000030311-1
Apr 14 20:35:32 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='terminate'] does not exist
Apr 14 20:35:32 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for terminate=(null) passed
Apr 14 20:35:32 mgraid-S000030311-1 attrd: [16634]: info: attrd_ha_callback: flush message from mgraid-s000030311-1
Apr 14 20:35:32 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='master-SSS000030311:1'] does not exist
Apr 14 20:35:32 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for master-SSS000030311:1=(null) passed
Apr 14 20:35:32 mgraid-S000030311-1 attrd: [16634]: info: attrd_ha_callback: flush message from mgraid-s000030311-1
Apr 14 20:35:32 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='master-SSJ000030312:1'] does not exist
Apr 14 20:35:32 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for master-SSJ000030312:1=(null) passed
Apr 14 20:35:32 mgraid-S000030311-1 cib: [16631]: debug: activateCibXml: Triggering CIB write for cib_replace op
Apr 14 20:35:32 mgraid-S000030311-1 cib: [16631]: info: cib_replace_notify: Local-only Replace: 0.64.1 from <null>
Apr 14 20:35:32 mgraid-S000030311-1 attrd: [16634]: info: do_cib_replaced: Sending full refresh
Apr 14 20:35:32 mgraid-S000030311-1 attrd: [16634]: info: attrd_trigger_update: Sending flush op to all hosts for: shutdown (<null>)
Apr 14 20:35:32 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: - <cib epoch="62" admin_epoch="0" num_updates="1" />
Apr 14 20:35:32 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: + <cib epoch="64" admin_epoch="0" num_updates="1" >
Apr 14 20:35:32 mgraid-S000030311-1 crmd: [16635]: debug: do_cib_replaced: Updating the CIB after a replace: DC=true
Apr 14 20:35:32 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +   <configuration >
Apr 14 20:35:32 mgraid-S000030311-1 crmd: [16635]: info: populate_cib_nodes_ha: Requesting the list of configured nodes
Apr 14 20:35:32 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +     <constraints >
Apr 14 20:35:32 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +       <rsc_order first="cloneIcms" id="orderms-SSJ000030314" score="0" then="ms-SSJ000030314" __crm_diff_marker__="added:top" />
Apr 14 20:35:32 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +     </constraints>
Apr 14 20:35:32 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +   </configuration>
Apr 14 20:35:32 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: + </cib>
Apr 14 20:35:32 mgraid-S000030311-1 cib: [16631]: info: cib_process_request: Operation complete: op cib_replace for section 'all' (origin=local/crm_shadow/2, version=0.64.1): ok (rc=0)
Apr 14 20:35:32 mgraid-S000030311-1 cib: [16631]: debug: xmlfromIPC: Peer disconnected
Apr 14 20:35:32 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='shutdown'] does not exist
Apr 14 20:35:32 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for shutdown=(null) passed
Apr 14 20:35:32 mgraid-S000030311-1 attrd: [16634]: info: attrd_trigger_update: Sending flush op to all hosts for: master-SSJ000030316:1 (<null>)
Apr 14 20:35:32 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='master-SSJ000030316:1'] does not exist
Apr 14 20:35:32 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for master-SSJ000030316:1=(null) passed
Apr 14 20:35:32 mgraid-S000030311-1 attrd: [16634]: info: attrd_trigger_update: Sending flush op to all hosts for: master-SSJ000030313:1 (<null>)
Apr 14 20:35:32 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='master-SSJ000030313:1'] does not exist
Apr 14 20:35:32 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for master-SSJ000030313:1=(null) passed
Apr 14 20:35:32 mgraid-S000030311-1 attrd: [16634]: info: attrd_trigger_update: Sending flush op to all hosts for: terminate (<null>)
Apr 14 20:35:32 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='terminate'] does not exist
Apr 14 20:35:32 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for terminate=(null) passed
Apr 14 20:35:32 mgraid-S000030311-1 attrd: [16634]: info: attrd_trigger_update: Sending flush op to all hosts for: master-SSS000030311:1 (<null>)
Apr 14 20:35:32 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='master-SSS000030311:1'] does not exist
Apr 14 20:35:32 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for master-SSS000030311:1=(null) passed
Apr 14 20:35:32 mgraid-S000030311-1 attrd: [16634]: info: attrd_trigger_update: Sending flush op to all hosts for: probe_complete (true)
Apr 14 20:35:32 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='probe_complete'] (/cib/status/node_state[2]/transient_attributes/instance_attributes/nvpair)
Apr 14 20:35:32 mgraid-S000030311-1 attrd: [16634]: debug: log_data_element: find_nvpair_attr: Match <nvpair id="status-856c1f72-7cd1-4906-8183-8be87eef96f2-probe_complete" name="probe_complete" value="true" />
Apr 14 20:35:32 mgraid-S000030311-1 attrd: [16634]: info: attrd_trigger_update: Sending flush op to all hosts for: master-SSJ000030312:1 (<null>)
Apr 14 20:35:32 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='master-SSJ000030312:1'] does not exist
Apr 14 20:35:32 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for master-SSJ000030312:1=(null) passed
Apr 14 20:35:32 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update 199 for probe_complete=true passed
Apr 14 20:35:32 mgraid-S000030311-1 cib: [16631]: debug: Forking temp process write_cib_contents
Apr 14 20:35:32 mgraid-S000030311-1 cib: [18583]: info: write_cib_contents: Archived previous version as /var/lib/heartbeat/crm/cib-28.raw
Apr 14 20:35:32 mgraid-S000030311-1 cib: [18583]: info: write_cib_contents: Wrote version 0.64.0 of the CIB to disk (digest: 398f8544608cb25fbc93a35d947bb971)
Apr 14 20:35:32 mgraid-S000030311-1 cib: [18583]: info: retrieveCib: Reading cluster configuration from: /var/lib/heartbeat/crm/cib.7SoAy7 (digest: /var/lib/heartbeat/crm/cib.ci8PAL)
Apr 14 20:35:32 mgraid-S000030311-1 cib: [16631]: info: Managed write_cib_contents process 18583 exited with return code 0.
Apr 14 20:35:32 mgraid-S000030311-1 attrd: [16634]: info: attrd_ha_callback: flush message from mgraid-s000030311-1
Apr 14 20:35:32 mgraid-S000030311-1 crmd: [16635]: debug: populate_cib_nodes_ha: Node: mgraid-s000030311-1 (uuid: 856c1f72-7cd1-4906-8183-8be87eef96f2)
Apr 14 20:35:32 mgraid-S000030311-1 crmd: [16635]: debug: populate_cib_nodes_ha: Node: mgraid-s000030311-0 (uuid: f4e5e15c-d06b-4e37-89b9-4621af05128f)
Apr 14 20:35:32 mgraid-S000030311-1 crmd: [16635]: debug: ghash_update_cib_node: Updating mgraid-s000030311-0: true (overwrite=true) hash_size=2
Apr 14 20:35:32 mgraid-S000030311-1 crmd: [16635]: debug: ghash_update_cib_node: Updating mgraid-s000030311-1: true (overwrite=true) hash_size=2
Apr 14 20:35:32 mgraid-S000030311-1 crmd: [16635]: debug: s_crmd_fsa: Processing I_ELECTION: [ state=S_ELECTION cause=C_FSA_INTERNAL origin=do_cib_replaced ]
Apr 14 20:35:32 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_VOTE
Apr 14 20:35:32 mgraid-S000030311-1 crmd: [16635]: debug: do_election_vote: Started election 30
Apr 14 20:35:32 mgraid-S000030311-1 crmd: [16635]: debug: crm_timer_start: Election Timeout (I_ELECTION_DC:120000ms) already running: src=132
Apr 14 20:35:32 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_COUNT
Apr 14 20:35:32 mgraid-S000030311-1 crmd: [16635]: debug: do_election_count_vote: Created voted hash
Apr 14 20:35:32 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_CHECK
Apr 14 20:35:32 mgraid-S000030311-1 crmd: [16635]: debug: do_election_check: Still waiting on 2 non-votes (2 total)
Apr 14 20:35:32 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_COUNT
Apr 14 20:35:32 mgraid-S000030311-1 crmd: [16635]: debug: do_election_count_vote: Election 30 (current: 30, owner: 856c1f72-7cd1-4906-8183-8be87eef96f2): Processed vote from mgraid-s000030311-1 (Recorded)
Apr 14 20:35:32 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_CHECK
Apr 14 20:35:32 mgraid-S000030311-1 crmd: [16635]: debug: do_election_check: Still waiting on 1 non-votes (2 total)
Apr 14 20:35:32 mgraid-S000030311-1 cib: [16631]: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/123, version=0.64.1): ok (rc=0)
Apr 14 20:35:32 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='shutdown'] does not exist
Apr 14 20:35:32 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for shutdown=(null) passed
Apr 14 20:35:32 mgraid-S000030311-1 attrd: [16634]: info: attrd_ha_callback: flush message from mgraid-s000030311-1
Apr 14 20:35:32 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='master-SSJ000030316:1'] does not exist
Apr 14 20:35:32 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for master-SSJ000030316:1=(null) passed
Apr 14 20:35:32 mgraid-S000030311-1 attrd: [16634]: info: attrd_ha_callback: flush message from mgraid-s000030311-1
Apr 14 20:35:32 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='master-SSJ000030313:1'] does not exist
Apr 14 20:35:32 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for master-SSJ000030313:1=(null) passed
Apr 14 20:35:32 mgraid-S000030311-1 attrd: [16634]: info: attrd_ha_callback: flush message from mgraid-s000030311-1
Apr 14 20:35:32 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='terminate'] does not exist
Apr 14 20:35:32 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for terminate=(null) passed
Apr 14 20:35:32 mgraid-S000030311-1 attrd: [16634]: info: attrd_ha_callback: flush message from mgraid-s000030311-1
Apr 14 20:35:32 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='master-SSS000030311:1'] does not exist
Apr 14 20:35:32 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for master-SSS000030311:1=(null) passed
Apr 14 20:35:32 mgraid-S000030311-1 attrd: [16634]: info: attrd_ha_callback: flush message from mgraid-s000030311-1
Apr 14 20:35:32 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='master-SSJ000030312:1'] does not exist
Apr 14 20:35:32 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for master-SSJ000030312:1=(null) passed
Apr 14 20:35:32 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_COUNT
Apr 14 20:35:32 mgraid-S000030311-1 crmd: [16635]: debug: do_election_count_vote: Election 30 (current: 30, owner: 856c1f72-7cd1-4906-8183-8be87eef96f2): Processed no-vote from mgraid-s000030311-0 (Recorded)
Apr 14 20:35:32 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_CHECK
Apr 14 20:35:32 mgraid-S000030311-1 crmd: [16635]: debug: do_election_check: Destroying voted hash
Apr 14 20:35:32 mgraid-S000030311-1 crmd: [16635]: debug: s_crmd_fsa: Processing I_ELECTION_DC: [ state=S_ELECTION cause=C_FSA_INTERNAL origin=do_election_check ]
Apr 14 20:35:32 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_LOG   
Apr 14 20:35:32 mgraid-S000030311-1 crmd: [16635]: info: do_state_transition: State transition S_ELECTION -> S_INTEGRATION [ input=I_ELECTION_DC cause=C_FSA_INTERNAL origin=do_election_check ]
Apr 14 20:35:32 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_TE_START
Apr 14 20:35:32 mgraid-S000030311-1 crmd: [16635]: debug: do_te_control: The transitioner is already active
Apr 14 20:35:32 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_PE_START
Apr 14 20:35:32 mgraid-S000030311-1 crmd: [16635]: info: start_subsystem: Starting sub-system "pengine"
Apr 14 20:35:32 mgraid-S000030311-1 crmd: [16635]: WARN: start_subsystem: Client pengine already running as pid 16986
Apr 14 20:35:33 mgraid-S000030311-1 ccm: [16630]: info: client (pid=18603) removed from ccm
Apr 14 20:35:33 mgraid-S000030311-1 cib: [16631]: debug: activateCibXml: Triggering CIB write for cib_replace op
Apr 14 20:35:33 mgraid-S000030311-1 cib: [16631]: info: cib_replace_notify: Local-only Replace: 0.66.1 from mgraid-s000030311-0
Apr 14 20:35:33 mgraid-S000030311-1 attrd: [16634]: info: do_cib_replaced: Sending full refresh
Apr 14 20:35:33 mgraid-S000030311-1 attrd: [16634]: info: attrd_trigger_update: Sending flush op to all hosts for: shutdown (<null>)
Apr 14 20:35:33 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: - <cib epoch="64" admin_epoch="0" num_updates="1" />
Apr 14 20:35:33 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: + <cib epoch="66" admin_epoch="0" num_updates="1" >
Apr 14 20:35:33 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +   <configuration >
Apr 14 20:35:33 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +     <constraints >
Apr 14 20:35:33 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +       <rsc_location id="ms-SSJ000030316-master-w1" rsc="ms-SSJ000030316" __crm_diff_marker__="added:top" >
Apr 14 20:35:33 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +         <rule id="ms-SSJ000030316-master-w1-rule" role="master" score="100" >
Apr 14 20:35:33 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +           <expression attribute="#uname" id="ms-SSJ000030316-master-w1-expression" operation="eq" value="mgraid-s000030311-0" />
Apr 14 20:35:33 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +         </rule>
Apr 14 20:35:33 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +       </rsc_location>
Apr 14 20:35:33 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +     </constraints>
Apr 14 20:35:33 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +   </configuration>
Apr 14 20:35:33 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: + </cib>
Apr 14 20:35:33 mgraid-S000030311-1 cib: [16631]: info: cib_process_request: Operation complete: op cib_replace for section 'all' (origin=mgraid-s000030311-0/crm_shadow/2, version=0.66.1): ok (rc=0)
Apr 14 20:35:33 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='shutdown'] does not exist
Apr 14 20:35:33 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for shutdown=(null) passed
Apr 14 20:35:33 mgraid-S000030311-1 attrd: [16634]: info: attrd_trigger_update: Sending flush op to all hosts for: master-SSJ000030316:1 (<null>)
Apr 14 20:35:33 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='master-SSJ000030316:1'] does not exist
Apr 14 20:35:33 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for master-SSJ000030316:1=(null) passed
Apr 14 20:35:33 mgraid-S000030311-1 attrd: [16634]: info: attrd_trigger_update: Sending flush op to all hosts for: master-SSJ000030313:1 (<null>)
Apr 14 20:35:33 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='master-SSJ000030313:1'] does not exist
Apr 14 20:35:33 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for master-SSJ000030313:1=(null) passed
Apr 14 20:35:33 mgraid-S000030311-1 attrd: [16634]: info: attrd_trigger_update: Sending flush op to all hosts for: terminate (<null>)
Apr 14 20:35:33 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='terminate'] does not exist
Apr 14 20:35:33 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for terminate=(null) passed
Apr 14 20:35:33 mgraid-S000030311-1 attrd: [16634]: info: attrd_trigger_update: Sending flush op to all hosts for: master-SSS000030311:1 (<null>)
Apr 14 20:35:33 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='master-SSS000030311:1'] does not exist
Apr 14 20:35:33 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for master-SSS000030311:1=(null) passed
Apr 14 20:35:33 mgraid-S000030311-1 attrd: [16634]: info: attrd_trigger_update: Sending flush op to all hosts for: probe_complete (true)
Apr 14 20:35:33 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='probe_complete'] (/cib/status/node_state[2]/transient_attributes/instance_attributes/nvpair)
Apr 14 20:35:33 mgraid-S000030311-1 attrd: [16634]: debug: log_data_element: find_nvpair_attr: Match <nvpair id="status-856c1f72-7cd1-4906-8183-8be87eef96f2-probe_complete" name="probe_complete" value="true" />
Apr 14 20:35:33 mgraid-S000030311-1 attrd: [16634]: info: attrd_trigger_update: Sending flush op to all hosts for: master-SSJ000030312:1 (<null>)
Apr 14 20:35:34 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='master-SSJ000030312:1'] does not exist
Apr 14 20:35:34 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for master-SSJ000030312:1=(null) passed
Apr 14 20:35:34 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update 213 for probe_complete=true passed
Apr 14 20:35:34 mgraid-S000030311-1 attrd: [16634]: info: attrd_ha_callback: flush message from mgraid-s000030311-1
Apr 14 20:35:34 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='shutdown'] does not exist
Apr 14 20:35:34 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for shutdown=(null) passed
Apr 14 20:35:34 mgraid-S000030311-1 attrd: [16634]: info: attrd_ha_callback: flush message from mgraid-s000030311-1
Apr 14 20:35:34 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='master-SSJ000030316:1'] does not exist
Apr 14 20:35:34 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for master-SSJ000030316:1=(null) passed
Apr 14 20:35:34 mgraid-S000030311-1 attrd: [16634]: info: attrd_ha_callback: flush message from mgraid-s000030311-1
Apr 14 20:35:34 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='master-SSJ000030313:1'] does not exist
Apr 14 20:35:34 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for master-SSJ000030313:1=(null) passed
Apr 14 20:35:34 mgraid-S000030311-1 cib: [16631]: debug: Forking temp process write_cib_contents
Apr 14 20:35:34 mgraid-S000030311-1 cib: [18611]: info: write_cib_contents: Archived previous version as /var/lib/heartbeat/crm/cib-29.raw
Apr 14 20:35:34 mgraid-S000030311-1 cib: [18611]: info: write_cib_contents: Wrote version 0.66.0 of the CIB to disk (digest: 4798821c04663718825fc65e1f838490)
Apr 14 20:35:34 mgraid-S000030311-1 cib: [18611]: info: retrieveCib: Reading cluster configuration from: /var/lib/heartbeat/crm/cib.bQC3uU (digest: /var/lib/heartbeat/crm/cib.WqJU2C)
Apr 14 20:35:34 mgraid-S000030311-1 cib: [16631]: info: Managed write_cib_contents process 18611 exited with return code 0.
Apr 14 20:35:34 mgraid-S000030311-1 ccm: [16630]: info: client (pid=18610) removed from ccm
Apr 14 20:35:34 mgraid-S000030311-1 attrd: [16634]: info: attrd_ha_callback: flush message from mgraid-s000030311-1
Apr 14 20:35:34 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='terminate'] does not exist
Apr 14 20:35:34 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for terminate=(null) passed
Apr 14 20:35:34 mgraid-S000030311-1 attrd: [16634]: info: attrd_ha_callback: flush message from mgraid-s000030311-1
Apr 14 20:35:34 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='master-SSS000030311:1'] does not exist
Apr 14 20:35:34 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for master-SSS000030311:1=(null) passed
Apr 14 20:35:34 mgraid-S000030311-1 attrd: [16634]: info: attrd_ha_callback: flush message from mgraid-s000030311-1
Apr 14 20:35:34 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='master-SSJ000030312:1'] does not exist
Apr 14 20:35:34 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for master-SSJ000030312:1=(null) passed
Apr 14 20:35:35 mgraid-S000030311-1 ccm: [16630]: info: client (pid=18618) removed from ccm
Apr 14 20:35:35 mgraid-S000030311-1 crmd: [16635]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/pengine
Apr 14 20:35:35 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
Apr 14 20:35:35 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_INTEGRATE_TIMER_START
Apr 14 20:35:35 mgraid-S000030311-1 crmd: [16635]: debug: crm_timer_start: Started Integration Timer (I_INTEGRATED:180000ms), src=140
Apr 14 20:35:35 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_FINALIZE_TIMER_STOP
Apr 14 20:35:35 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_DC_TAKEOVER
Apr 14 20:35:35 mgraid-S000030311-1 crmd: [16635]: info: do_dc_takeover: Taking over DC status for this partition
Apr 14 20:35:35 mgraid-S000030311-1 cib: [16631]: info: cib_process_readwrite: We are now in R/O mode
Apr 14 20:35:35 mgraid-S000030311-1 cib: [16631]: info: cib_process_request: Operation complete: op cib_slave_all for section 'all' (origin=local/crmd/125, version=0.66.1): ok (rc=0)
Apr 14 20:35:35 mgraid-S000030311-1 cib: [16631]: info: cib_process_readwrite: We are now in R/W mode
Apr 14 20:35:35 mgraid-S000030311-1 cib: [16631]: info: cib_process_request: Operation complete: op cib_master for section 'all' (origin=local/crmd/126, version=0.66.1): ok (rc=0)
Apr 14 20:35:35 mgraid-S000030311-1 cib: [16631]: info: cib_process_request: Operation complete: op cib_modify for section cib (origin=local/crmd/127, version=0.66.1): ok (rc=0)
Apr 14 20:35:35 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: Processing cib_query op for //cib/configuration/crm_config//nvpair[@name='dc-version'] (/cib/configuration/crm_config/cluster_property_set/nvpair[1])
Apr 14 20:35:35 mgraid-S000030311-1 crmd: [16635]: debug: log_data_element: find_nvpair_attr: Match <nvpair id="cib-bootstrap-options-dc-version" name="dc-version" value="1.0.9-89bd754939df5150de7cd76835f98fe90851b677" />
Apr 14 20:35:36 mgraid-S000030311-1 cib: [16631]: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/129, version=0.66.1): ok (rc=0)
Apr 14 20:35:36 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: Processing cib_query op for //cib/configuration/crm_config//nvpair[@name='cluster-infrastructure'] (/cib/configuration/crm_config/cluster_property_set/nvpair[2])
Apr 14 20:35:36 mgraid-S000030311-1 crmd: [16635]: debug: log_data_element: find_nvpair_attr: Match <nvpair id="cib-bootstrap-options-cluster-infrastructure" name="cluster-infrastructure" value="Heartbeat" />
Apr 14 20:35:36 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_DC_JOIN_OFFER_ALL
Apr 14 20:35:36 mgraid-S000030311-1 crmd: [16635]: debug: initialize_join: join-5: Initializing join data (flag=true)
Apr 14 20:35:36 mgraid-S000030311-1 crmd: [16635]: debug: join_make_offer: join-5: Sending offer to mgraid-s000030311-0
Apr 14 20:35:36 mgraid-S000030311-1 crmd: [16635]: debug: join_make_offer: join-5: Sending offer to mgraid-s000030311-1
Apr 14 20:35:36 mgraid-S000030311-1 crmd: [16635]: info: do_dc_join_offer_all: join-5: Waiting on 2 outstanding join acks
Apr 14 20:35:36 mgraid-S000030311-1 crmd: [16635]: debug: fsa_dump_inputs: Removed input: 0000000000020000 (R_HAVE_CIB)
Apr 14 20:35:36 mgraid-S000030311-1 crmd: [16635]: debug: do_cib_replaced: Updating the CIB after a replace: DC=true
Apr 14 20:35:36 mgraid-S000030311-1 crmd: [16635]: info: populate_cib_nodes_ha: Requesting the list of configured nodes
Apr 14 20:35:36 mgraid-S000030311-1 cib: [16631]: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/131, version=0.66.1): ok (rc=0)
Apr 14 20:35:36 mgraid-S000030311-1 crmd: [16635]: debug: populate_cib_nodes_ha: Node: mgraid-s000030311-1 (uuid: 856c1f72-7cd1-4906-8183-8be87eef96f2)
Apr 14 20:35:36 mgraid-S000030311-1 ccm: [16630]: info: client (pid=18625) removed from ccm
Apr 14 20:35:36 mgraid-S000030311-1 crmd: [16635]: debug: populate_cib_nodes_ha: Node: mgraid-s000030311-0 (uuid: f4e5e15c-d06b-4e37-89b9-4621af05128f)
Apr 14 20:35:36 mgraid-S000030311-1 crmd: [16635]: debug: ghash_update_cib_node: Updating mgraid-s000030311-0: true (overwrite=true) hash_size=0
Apr 14 20:35:36 mgraid-S000030311-1 crmd: [16635]: debug: ghash_update_cib_node: Updating mgraid-s000030311-1: true (overwrite=true) hash_size=0
Apr 14 20:35:36 mgraid-S000030311-1 crmd: [16635]: debug: config_query_callback: Call 132 : Parsing CIB options
Apr 14 20:35:36 mgraid-S000030311-1 crmd: [16635]: debug: cluster_option: Using default value '15min' for cluster option 'cluster-recheck-interval'
Apr 14 20:35:36 mgraid-S000030311-1 crmd: [16635]: debug: cluster_option: Using default value '2min' for cluster option 'election-timeout'
Apr 14 20:35:36 mgraid-S000030311-1 crmd: [16635]: debug: cluster_option: Using default value '20min' for cluster option 'shutdown-escalation'
Apr 14 20:35:36 mgraid-S000030311-1 crmd: [16635]: debug: cluster_option: Using default value '3min' for cluster option 'crmd-integration-timeout'
Apr 14 20:35:36 mgraid-S000030311-1 crmd: [16635]: debug: cluster_option: Using default value '30min' for cluster option 'crmd-finalization-timeout'
Apr 14 20:35:36 mgraid-S000030311-1 crmd: [16635]: debug: cluster_option: Using default value '2' for cluster option 'expected-quorum-votes'
Apr 14 20:35:36 mgraid-S000030311-1 crmd: [16635]: info: config_query_callback: Checking for expired actions every 900000ms
Apr 14 20:35:36 mgraid-S000030311-1 crmd: [16635]: debug: s_crmd_fsa: Processing I_ELECTION: [ state=S_INTEGRATION cause=C_FSA_INTERNAL origin=do_cib_replaced ]
Apr 14 20:35:36 mgraid-S000030311-1 crmd: [16635]: info: do_state_transition: State transition S_INTEGRATION -> S_ELECTION [ input=I_ELECTION cause=C_FSA_INTERNAL origin=do_cib_replaced ]
Apr 14 20:35:36 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
Apr 14 20:35:36 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_INTEGRATE_TIMER_STOP
Apr 14 20:35:36 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_FINALIZE_TIMER_STOP
Apr 14 20:35:36 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_VOTE
Apr 14 20:35:36 mgraid-S000030311-1 crmd: [16635]: debug: do_election_vote: Started election 31
Apr 14 20:35:36 mgraid-S000030311-1 crmd: [16635]: debug: crm_timer_start: Started Election Timeout (I_ELECTION_DC:120000ms), src=145
Apr 14 20:35:36 mgraid-S000030311-1 crmd: [16635]: debug: handle_request: Raising I_JOIN_OFFER: join-5
Apr 14 20:35:36 mgraid-S000030311-1 crmd: [16635]: debug: s_crmd_fsa: Processing I_JOIN_OFFER: [ state=S_ELECTION cause=C_HA_MESSAGE origin=route_message ]
Apr 14 20:35:36 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_WARN  
Apr 14 20:35:36 mgraid-S000030311-1 crmd: [16635]: WARN: do_log: FSA: Input I_JOIN_OFFER from route_message() received in state S_ELECTION
Apr 14 20:35:36 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_VOTE
Apr 14 20:35:36 mgraid-S000030311-1 crmd: [16635]: debug: do_election_vote: Started election 32
Apr 14 20:35:36 mgraid-S000030311-1 crmd: [16635]: debug: crm_timer_start: Election Timeout (I_ELECTION_DC:120000ms) already running: src=145
Apr 14 20:35:36 mgraid-S000030311-1 crmd: [16635]: debug: s_crmd_fsa: Processing I_JOIN_REQUEST: [ state=S_ELECTION cause=C_HA_MESSAGE origin=route_message ]
Apr 14 20:35:36 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_WARN  
Apr 14 20:35:36 mgraid-S000030311-1 crmd: [16635]: WARN: do_log: FSA: Input I_JOIN_REQUEST from route_message() received in state S_ELECTION
Apr 14 20:35:36 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_COUNT
Apr 14 20:35:36 mgraid-S000030311-1 crmd: [16635]: debug: do_election_count_vote: Created voted hash
Apr 14 20:35:36 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_CHECK
Apr 14 20:35:36 mgraid-S000030311-1 crmd: [16635]: debug: do_election_check: Still waiting on 2 non-votes (2 total)
Apr 14 20:35:36 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_COUNT
Apr 14 20:35:36 mgraid-S000030311-1 crmd: [16635]: debug: do_election_count_vote: Election 32 (current: 32, owner: 856c1f72-7cd1-4906-8183-8be87eef96f2): Processed vote from mgraid-s000030311-1 (Recorded)
Apr 14 20:35:36 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_CHECK
Apr 14 20:35:36 mgraid-S000030311-1 crmd: [16635]: debug: do_election_check: Still waiting on 1 non-votes (2 total)
Apr 14 20:35:36 mgraid-S000030311-1 cib: [16631]: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/133, version=0.66.1): ok (rc=0)
Apr 14 20:35:37 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_COUNT
Apr 14 20:35:37 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_CHECK
Apr 14 20:35:37 mgraid-S000030311-1 crmd: [16635]: debug: do_election_check: Still waiting on 1 non-votes (2 total)
Apr 14 20:35:37 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_COUNT
Apr 14 20:35:37 mgraid-S000030311-1 crmd: [16635]: debug: do_election_count_vote: Election 32 (current: 32, owner: 856c1f72-7cd1-4906-8183-8be87eef96f2): Processed no-vote from mgraid-s000030311-0 (Recorded)
Apr 14 20:35:37 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_CHECK
Apr 14 20:35:37 mgraid-S000030311-1 crmd: [16635]: debug: do_election_check: Destroying voted hash
Apr 14 20:35:37 mgraid-S000030311-1 crmd: [16635]: debug: s_crmd_fsa: Processing I_ELECTION_DC: [ state=S_ELECTION cause=C_FSA_INTERNAL origin=do_election_check ]
Apr 14 20:35:37 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_LOG   
Apr 14 20:35:37 mgraid-S000030311-1 crmd: [16635]: info: do_state_transition: State transition S_ELECTION -> S_INTEGRATION [ input=I_ELECTION_DC cause=C_FSA_INTERNAL origin=do_election_check ]
Apr 14 20:35:37 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_TE_START
Apr 14 20:35:37 mgraid-S000030311-1 crmd: [16635]: debug: do_te_control: The transitioner is already active
Apr 14 20:35:37 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_PE_START
Apr 14 20:35:37 mgraid-S000030311-1 crmd: [16635]: info: start_subsystem: Starting sub-system "pengine"
Apr 14 20:35:37 mgraid-S000030311-1 crmd: [16635]: WARN: start_subsystem: Client pengine already running as pid 16986
Apr 14 20:35:37 mgraid-S000030311-1 ccm: [16630]: info: client (pid=18632) removed from ccm
Apr 14 20:35:38 mgraid-S000030311-1 ccm: [16630]: info: client (pid=18634) removed from ccm
Apr 14 20:35:39 mgraid-S000030311-1 ccm: [16630]: info: client (pid=18647) removed from ccm
Apr 14 20:35:40 mgraid-S000030311-1 crmd: [16635]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/pengine
Apr 14 20:35:40 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
Apr 14 20:35:40 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_INTEGRATE_TIMER_START
Apr 14 20:35:40 mgraid-S000030311-1 crmd: [16635]: debug: crm_timer_start: Started Integration Timer (I_INTEGRATED:180000ms), src=147
Apr 14 20:35:40 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_FINALIZE_TIMER_STOP
Apr 14 20:35:40 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_DC_TAKEOVER
Apr 14 20:35:40 mgraid-S000030311-1 crmd: [16635]: info: do_dc_takeover: Taking over DC status for this partition
Apr 14 20:35:40 mgraid-S000030311-1 cib: [16631]: info: cib_process_readwrite: We are now in R/O mode
Apr 14 20:35:40 mgraid-S000030311-1 cib: [16631]: info: cib_process_request: Operation complete: op cib_slave_all for section 'all' (origin=local/crmd/135, version=0.66.2): ok (rc=0)
Apr 14 20:35:40 mgraid-S000030311-1 cib: [16631]: info: cib_process_readwrite: We are now in R/W mode
Apr 14 20:35:40 mgraid-S000030311-1 cib: [16631]: info: cib_process_request: Operation complete: op cib_master for section 'all' (origin=local/crmd/136, version=0.66.2): ok (rc=0)
Apr 14 20:35:40 mgraid-S000030311-1 cib: [16631]: info: cib_process_request: Operation complete: op cib_modify for section cib (origin=local/crmd/137, version=0.66.2): ok (rc=0)
Apr 14 20:35:40 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: Processing cib_query op for //cib/configuration/crm_config//nvpair[@name='dc-version'] (/cib/configuration/crm_config/cluster_property_set/nvpair[1])
Apr 14 20:35:40 mgraid-S000030311-1 crmd: [16635]: debug: log_data_element: find_nvpair_attr: Match <nvpair id="cib-bootstrap-options-dc-version" name="dc-version" value="1.0.9-89bd754939df5150de7cd76835f98fe90851b677" />
Apr 14 20:35:40 mgraid-S000030311-1 cib: [16631]: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/139, version=0.66.2): ok (rc=0)
Apr 14 20:35:40 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: Processing cib_query op for //cib/configuration/crm_config//nvpair[@name='cluster-infrastructure'] (/cib/configuration/crm_config/cluster_property_set/nvpair[2])
Apr 14 20:35:40 mgraid-S000030311-1 crmd: [16635]: debug: log_data_element: find_nvpair_attr: Match <nvpair id="cib-bootstrap-options-cluster-infrastructure" name="cluster-infrastructure" value="Heartbeat" />
Apr 14 20:35:40 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_DC_JOIN_OFFER_ALL
Apr 14 20:35:40 mgraid-S000030311-1 crmd: [16635]: debug: initialize_join: join-6: Initializing join data (flag=true)
Apr 14 20:35:40 mgraid-S000030311-1 crmd: [16635]: debug: join_make_offer: join-6: Sending offer to mgraid-s000030311-0
Apr 14 20:35:40 mgraid-S000030311-1 crmd: [16635]: debug: join_make_offer: join-6: Sending offer to mgraid-s000030311-1
Apr 14 20:35:40 mgraid-S000030311-1 crmd: [16635]: info: do_dc_join_offer_all: join-6: Waiting on 2 outstanding join acks
Apr 14 20:35:40 mgraid-S000030311-1 cib: [16631]: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/141, version=0.66.2): ok (rc=0)
Apr 14 20:35:40 mgraid-S000030311-1 crmd: [16635]: debug: config_query_callback: Call 142 : Parsing CIB options
Apr 14 20:35:40 mgraid-S000030311-1 crmd: [16635]: debug: cluster_option: Using default value '15min' for cluster option 'cluster-recheck-interval'
Apr 14 20:35:40 mgraid-S000030311-1 crmd: [16635]: debug: cluster_option: Using default value '2min' for cluster option 'election-timeout'
Apr 14 20:35:40 mgraid-S000030311-1 crmd: [16635]: debug: cluster_option: Using default value '20min' for cluster option 'shutdown-escalation'
Apr 14 20:35:40 mgraid-S000030311-1 crmd: [16635]: debug: cluster_option: Using default value '3min' for cluster option 'crmd-integration-timeout'
Apr 14 20:35:40 mgraid-S000030311-1 crmd: [16635]: debug: cluster_option: Using default value '30min' for cluster option 'crmd-finalization-timeout'
Apr 14 20:35:40 mgraid-S000030311-1 crmd: [16635]: debug: cluster_option: Using default value '2' for cluster option 'expected-quorum-votes'
Apr 14 20:35:40 mgraid-S000030311-1 crmd: [16635]: info: config_query_callback: Checking for expired actions every 900000ms
Apr 14 20:35:40 mgraid-S000030311-1 ccm: [16630]: info: client (pid=18656) removed from ccm
Apr 14 20:35:40 mgraid-S000030311-1 crmd: [16635]: debug: handle_request: Raising I_JOIN_OFFER: join-6
Apr 14 20:35:40 mgraid-S000030311-1 crmd: [16635]: debug: s_crmd_fsa: Processing I_JOIN_OFFER: [ state=S_INTEGRATION cause=C_HA_MESSAGE origin=route_message ]
Apr 14 20:35:40 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_CL_JOIN_REQUEST
Apr 14 20:35:40 mgraid-S000030311-1 crmd: [16635]: info: update_dc: Set DC to mgraid-s000030311-1 (3.0.1)
Apr 14 20:35:40 mgraid-S000030311-1 crmd: [16635]: debug: do_cl_join_offer_respond: do_cl_join_offer_respond added action A_DC_TIMER_STOP to the FSA
Apr 14 20:35:40 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
Apr 14 20:35:40 mgraid-S000030311-1 crmd: [16635]: debug: join_query_callback: Respond to join offer join-6
Apr 14 20:35:40 mgraid-S000030311-1 crmd: [16635]: debug: join_query_callback: Acknowledging mgraid-s000030311-1 as our DC
Apr 14 20:35:40 mgraid-S000030311-1 crmd: [16635]: debug: s_crmd_fsa: Processing I_JOIN_REQUEST: [ state=S_INTEGRATION cause=C_HA_MESSAGE origin=route_message ]
Apr 14 20:35:40 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_DC_JOIN_PROCESS_REQ
Apr 14 20:35:40 mgraid-S000030311-1 crmd: [16635]: debug: do_dc_join_filter_offer: Processing req from mgraid-s000030311-1
Apr 14 20:35:40 mgraid-S000030311-1 crmd: [16635]: debug: do_dc_join_filter_offer: join-6: Welcoming node mgraid-s000030311-1 (ref join_request-crmd-1302838540-75)
Apr 14 20:35:40 mgraid-S000030311-1 crmd: [16635]: debug: do_dc_join_filter_offer: 1 nodes have been integrated into join-6
Apr 14 20:35:40 mgraid-S000030311-1 crmd: [16635]: debug: check_join_state: Invoked by do_dc_join_filter_offer in state: S_INTEGRATION
Apr 14 20:35:40 mgraid-S000030311-1 crmd: [16635]: debug: do_dc_join_filter_offer: join-6: Still waiting on 1 outstanding offers
Apr 14 20:35:41 mgraid-S000030311-1 ccm: [16630]: info: client (pid=18665) removed from ccm
Apr 14 20:35:41 mgraid-S000030311-1 crmd: [16635]: debug: s_crmd_fsa: Processing I_JOIN_REQUEST: [ state=S_INTEGRATION cause=C_HA_MESSAGE origin=route_message ]
Apr 14 20:35:41 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_DC_JOIN_PROCESS_REQ
Apr 14 20:35:41 mgraid-S000030311-1 crmd: [16635]: debug: do_dc_join_filter_offer: Processing req from mgraid-s000030311-0
Apr 14 20:35:41 mgraid-S000030311-1 crmd: [16635]: debug: do_dc_join_filter_offer: join-6: Welcoming node mgraid-s000030311-0 (ref join_request-crmd-1302838541-41)
Apr 14 20:35:41 mgraid-S000030311-1 crmd: [16635]: debug: do_dc_join_filter_offer: 2 nodes have been integrated into join-6
Apr 14 20:35:41 mgraid-S000030311-1 crmd: [16635]: debug: check_join_state: Invoked by do_dc_join_filter_offer in state: S_INTEGRATION
Apr 14 20:35:41 mgraid-S000030311-1 crmd: [16635]: debug: check_join_state: join-6: Integration of 2 peers complete: do_dc_join_filter_offer
Apr 14 20:35:41 mgraid-S000030311-1 crmd: [16635]: debug: s_crmd_fsa: Processing I_INTEGRATED: [ state=S_INTEGRATION cause=C_FSA_INTERNAL origin=check_join_state ]
Apr 14 20:35:41 mgraid-S000030311-1 crmd: [16635]: info: do_state_transition: State transition S_INTEGRATION -> S_FINALIZE_JOIN [ input=I_INTEGRATED cause=C_FSA_INTERNAL origin=check_join_state ]
Apr 14 20:35:41 mgraid-S000030311-1 crmd: [16635]: info: do_state_transition: All 2 cluster nodes responded to the join offer.
Apr 14 20:35:41 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
Apr 14 20:35:41 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_INTEGRATE_TIMER_STOP
Apr 14 20:35:41 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_FINALIZE_TIMER_START
Apr 14 20:35:41 mgraid-S000030311-1 crmd: [16635]: debug: crm_timer_start: Started Finalization Timer (I_ELECTION:1800000ms), src=151
Apr 14 20:35:41 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_DC_JOIN_FINALIZE
Apr 14 20:35:41 mgraid-S000030311-1 crmd: [16635]: debug: do_dc_join_finalize: Finializing join-6 for 2 clients
Apr 14 20:35:41 mgraid-S000030311-1 crmd: [16635]: info: do_dc_join_finalize: join-6: Syncing the CIB from mgraid-s000030311-1 to the rest of the cluster
Apr 14 20:35:41 mgraid-S000030311-1 crmd: [16635]: debug: fsa_dump_inputs: Added input: 0000000000020000 (R_HAVE_CIB)
Apr 14 20:35:41 mgraid-S000030311-1 cib: [16631]: debug: sync_our_cib: Syncing CIB to all peers
Apr 14 20:35:41 mgraid-S000030311-1 cib: [16631]: info: cib_process_request: Operation complete: op cib_sync for section 'all' (origin=local/crmd/144, version=0.66.2): ok (rc=0)
Apr 14 20:35:41 mgraid-S000030311-1 crmd: [16635]: debug: check_join_state: Invoked by finalize_sync_callback in state: S_FINALIZE_JOIN
Apr 14 20:35:41 mgraid-S000030311-1 crmd: [16635]: debug: check_join_state: join-6: Still waiting on 2 integrated nodes
Apr 14 20:35:41 mgraid-S000030311-1 crmd: [16635]: debug: finalize_sync_callback: Notifying 2 clients of join-6 results
Apr 14 20:35:41 mgraid-S000030311-1 crmd: [16635]: debug: finalize_join_for: join-6: ACK'ing join request from mgraid-s000030311-0, state member
Apr 14 20:35:41 mgraid-S000030311-1 crmd: [16635]: debug: finalize_join_for: join-6: ACK'ing join request from mgraid-s000030311-1, state member
Apr 14 20:35:41 mgraid-S000030311-1 cib: [16631]: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/145, version=0.66.2): ok (rc=0)
Apr 14 20:35:41 mgraid-S000030311-1 crmd: [16635]: debug: handle_request: Raising I_JOIN_RESULT: join-6
Apr 14 20:35:41 mgraid-S000030311-1 crmd: [16635]: debug: s_crmd_fsa: Processing I_JOIN_RESULT: [ state=S_FINALIZE_JOIN cause=C_HA_MESSAGE origin=route_message ]
Apr 14 20:35:41 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_CL_JOIN_RESULT
Apr 14 20:35:41 mgraid-S000030311-1 crmd: [16635]: debug: do_cl_join_finalize_respond: Confirming join join-6: join_ack_nack
Apr 14 20:35:41 mgraid-S000030311-1 lrmd: [16632]: debug: on_msg_get_state:state of rsc omserver:1 is LRM_RSC_IDLE
Apr 14 20:35:41 mgraid-S000030311-1 lrmd: [16632]: debug: on_msg_get_state:state of rsc SSJ000030313:1 is LRM_RSC_IDLE
Apr 14 20:35:41 mgraid-S000030311-1 lrmd: [16632]: debug: on_msg_get_state:state of rsc SSS000030311:1 is LRM_RSC_IDLE
Apr 14 20:35:41 mgraid-S000030311-1 lrmd: [16632]: debug: on_msg_get_state:state of rsc SSJ000030312:1 is LRM_RSC_IDLE
Apr 14 20:35:41 mgraid-S000030311-1 lrmd: [16632]: debug: on_msg_get_state:state of rsc icms:1 is LRM_RSC_IDLE
Apr 14 20:35:41 mgraid-S000030311-1 lrmd: [16632]: debug: on_msg_get_state:state of rsc mgraid-stonith:1 is LRM_RSC_IDLE
Apr 14 20:35:41 mgraid-S000030311-1 lrmd: [16632]: debug: on_msg_get_state:state of rsc SSJ000030316:1 is LRM_RSC_IDLE
Apr 14 20:35:41 mgraid-S000030311-1 crmd: [16635]: debug: do_cl_join_finalize_respond: join-6: Join complete.  Sending local LRM status to mgraid-s000030311-1
Apr 14 20:35:41 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_DC_JOIN_PROCESS_ACK
Apr 14 20:35:41 mgraid-S000030311-1 crmd: [16635]: debug: do_dc_join_ack: Ignoring op=join_ack_nack message from mgraid-s000030311-1
Apr 14 20:35:41 mgraid-S000030311-1 cib: [16631]: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/146, version=0.66.2): ok (rc=0)
Apr 14 20:35:42 mgraid-S000030311-1 attrd: [16634]: info: attrd_ha_callback: flush message from mgraid-s000030311-0
Apr 14 20:35:42 mgraid-S000030311-1 attrd: [16634]: info: find_hash_entry: Creating hash entry for master-SSJ000030316:0
Apr 14 20:35:42 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='master-SSJ000030316:0'] does not exist
Apr 14 20:35:42 mgraid-S000030311-1 crmd: [16635]: debug: s_crmd_fsa: Processing I_JOIN_RESULT: [ state=S_FINALIZE_JOIN cause=C_HA_MESSAGE origin=route_message ]
Apr 14 20:35:42 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_CL_JOIN_RESULT
Apr 14 20:35:42 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for master-SSJ000030316:0=(null) passed
Apr 14 20:35:42 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_DC_JOIN_PROCESS_ACK
Apr 14 20:35:42 mgraid-S000030311-1 crmd: [16635]: info: do_dc_join_ack: join-6: Updating node state to member for mgraid-s000030311-0
Apr 14 20:35:42 mgraid-S000030311-1 attrd: [16634]: info: attrd_ha_callback: flush message from mgraid-s000030311-0
Apr 14 20:35:42 mgraid-S000030311-1 attrd: [16634]: info: find_hash_entry: Creating hash entry for master-SSJ000030313:0
Apr 14 20:35:42 mgraid-S000030311-1 crmd: [16635]: debug: do_dc_join_ack: join-6: Registered callback for LRM update 148
Apr 14 20:35:42 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='master-SSJ000030313:0'] does not exist
Apr 14 20:35:42 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for master-SSJ000030313:0=(null) passed
Apr 14 20:35:42 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: Processing cib_delete op for //node_state[@uname='mgraid-s000030311-0']/lrm (/cib/status/node_state[1]/lrm)
Apr 14 20:35:42 mgraid-S000030311-1 cib: [16631]: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='mgraid-s000030311-0']/lrm (origin=local/crmd/147, version=0.66.3): ok (rc=0)
Apr 14 20:35:42 mgraid-S000030311-1 crmd: [16635]: info: erase_xpath_callback: Deletion of "//node_state[@uname='mgraid-s000030311-0']/lrm": ok (rc=0)
Apr 14 20:35:42 mgraid-S000030311-1 crmd: [16635]: debug: join_update_complete_callback: Join update 148 complete
Apr 14 20:35:42 mgraid-S000030311-1 crmd: [16635]: debug: check_join_state: Invoked by join_update_complete_callback in state: S_FINALIZE_JOIN
Apr 14 20:35:42 mgraid-S000030311-1 crmd: [16635]: debug: check_join_state: join-6: Still waiting on 1 finalized nodes
Apr 14 20:35:42 mgraid-S000030311-1 crmd: [16635]: debug: s_crmd_fsa: Processing I_JOIN_RESULT: [ state=S_FINALIZE_JOIN cause=C_HA_MESSAGE origin=route_message ]
Apr 14 20:35:42 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_CL_JOIN_RESULT
Apr 14 20:35:42 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_DC_JOIN_PROCESS_ACK
Apr 14 20:35:42 mgraid-S000030311-1 crmd: [16635]: info: do_dc_join_ack: join-6: Updating node state to member for mgraid-s000030311-1
Apr 14 20:35:42 mgraid-S000030311-1 crmd: [16635]: debug: do_dc_join_ack: join-6: Registered callback for LRM update 150
Apr 14 20:35:42 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: Processing cib_delete op for //node_state[@uname='mgraid-s000030311-1']/lrm (/cib/status/node_state[2]/lrm)
Apr 14 20:35:42 mgraid-S000030311-1 cib: [16631]: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='mgraid-s000030311-1']/lrm (origin=local/crmd/149, version=0.66.5): ok (rc=0)
Apr 14 20:35:42 mgraid-S000030311-1 crmd: [16635]: info: erase_xpath_callback: Deletion of "//node_state[@uname='mgraid-s000030311-1']/lrm": ok (rc=0)
Apr 14 20:35:42 mgraid-S000030311-1 crmd: [16635]: debug: join_update_complete_callback: Join update 150 complete
Apr 14 20:35:42 mgraid-S000030311-1 crmd: [16635]: debug: check_join_state: Invoked by join_update_complete_callback in state: S_FINALIZE_JOIN
Apr 14 20:35:42 mgraid-S000030311-1 crmd: [16635]: debug: check_join_state: join-6 complete: join_update_complete_callback
Apr 14 20:35:42 mgraid-S000030311-1 crmd: [16635]: debug: s_crmd_fsa: Processing I_FINALIZED: [ state=S_FINALIZE_JOIN cause=C_FSA_INTERNAL origin=check_join_state ]
Apr 14 20:35:42 mgraid-S000030311-1 crmd: [16635]: info: do_state_transition: State transition S_FINALIZE_JOIN -> S_POLICY_ENGINE [ input=I_FINALIZED cause=C_FSA_INTERNAL origin=check_join_state ]
Apr 14 20:35:42 mgraid-S000030311-1 crmd: [16635]: info: populate_cib_nodes_ha: Requesting the list of configured nodes
Apr 14 20:35:42 mgraid-S000030311-1 ccm: [16630]: info: client (pid=18674) removed from ccm
Apr 14 20:35:42 mgraid-S000030311-1 attrd: [16634]: info: attrd_ha_callback: flush message from mgraid-s000030311-0
Apr 14 20:35:42 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='master-SSJ000030316:1'] does not exist
Apr 14 20:35:42 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for master-SSJ000030316:1=(null) passed
Apr 14 20:35:42 mgraid-S000030311-1 attrd: [16634]: info: attrd_ha_callback: flush message from mgraid-s000030311-0
Apr 14 20:35:42 mgraid-S000030311-1 attrd: [16634]: info: find_hash_entry: Creating hash entry for master-SSS000030311:0
Apr 14 20:35:42 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='master-SSS000030311:0'] does not exist
Apr 14 20:35:42 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for master-SSS000030311:0=(null) passed
Apr 14 20:35:42 mgraid-S000030311-1 attrd: [16634]: info: attrd_ha_callback: flush message from mgraid-s000030311-0
Apr 14 20:35:42 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='master-SSJ000030313:1'] does not exist
Apr 14 20:35:42 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for master-SSJ000030313:1=(null) passed
Apr 14 20:35:42 mgraid-S000030311-1 attrd: [16634]: info: attrd_ha_callback: flush message from mgraid-s000030311-0
Apr 14 20:35:42 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='terminate'] does not exist
Apr 14 20:35:42 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for terminate=(null) passed
Apr 14 20:35:42 mgraid-S000030311-1 attrd: [16634]: info: attrd_ha_callback: flush message from mgraid-s000030311-0
Apr 14 20:35:42 mgraid-S000030311-1 attrd: [16634]: info: find_hash_entry: Creating hash entry for master-SSJ000030312:0
Apr 14 20:35:42 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='master-SSJ000030312:0'] does not exist
Apr 14 20:35:42 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for master-SSJ000030312:0=(null) passed
Apr 14 20:35:42 mgraid-S000030311-1 attrd: [16634]: info: attrd_ha_callback: flush message from mgraid-s000030311-0
Apr 14 20:35:42 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='master-SSS000030311:1'] does not exist
Apr 14 20:35:42 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for master-SSS000030311:1=(null) passed
Apr 14 20:35:42 mgraid-S000030311-1 attrd: [16634]: info: attrd_ha_callback: flush message from mgraid-s000030311-0
Apr 14 20:35:42 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='master-SSJ000030312:1'] does not exist
Apr 14 20:35:42 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for master-SSJ000030312:1=(null) passed
Apr 14 20:35:42 mgraid-S000030311-1 attrd: [16634]: info: attrd_ha_callback: flush message from mgraid-s000030311-0
Apr 14 20:35:42 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='shutdown'] does not exist
Apr 14 20:35:42 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for shutdown=(null) passed
Apr 14 20:35:42 mgraid-S000030311-1 attrd: [16634]: info: attrd_ha_callback: flush message from mgraid-s000030311-0
Apr 14 20:35:42 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='probe_complete'] (/cib/status/node_state[2]/transient_attributes/instance_attributes/nvpair)
Apr 14 20:35:42 mgraid-S000030311-1 attrd: [16634]: debug: log_data_element: find_nvpair_attr: Match <nvpair id="status-856c1f72-7cd1-4906-8183-8be87eef96f2-probe_complete" name="probe_complete" value="true" />
Apr 14 20:35:42 mgraid-S000030311-1 crmd: [16635]: debug: populate_cib_nodes_ha: Node: mgraid-s000030311-1 (uuid: 856c1f72-7cd1-4906-8183-8be87eef96f2)
Apr 14 20:35:42 mgraid-S000030311-1 crmd: [16635]: debug: populate_cib_nodes_ha: Node: mgraid-s000030311-0 (uuid: f4e5e15c-d06b-4e37-89b9-4621af05128f)
Apr 14 20:35:42 mgraid-S000030311-1 crmd: [16635]: debug: ghash_update_cib_node: Updating mgraid-s000030311-0: true (overwrite=true) hash_size=2
Apr 14 20:35:42 mgraid-S000030311-1 crmd: [16635]: debug: ghash_update_cib_node: Updating mgraid-s000030311-1: true (overwrite=true) hash_size=2
Apr 14 20:35:42 mgraid-S000030311-1 crmd: [16635]: info: do_state_transition: All 2 cluster nodes are eligible to run resources.
Apr 14 20:35:42 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
Apr 14 20:35:42 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_INTEGRATE_TIMER_STOP
Apr 14 20:35:42 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_FINALIZE_TIMER_STOP
Apr 14 20:35:42 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_DC_JOIN_FINAL
Apr 14 20:35:42 mgraid-S000030311-1 crmd: [16635]: info: do_dc_join_final: Ensuring DC, quorum and node attributes are up-to-date
Apr 14 20:35:42 mgraid-S000030311-1 crmd: [16635]: debug: attrd_update: Sent update: (null)=(null) for localhost
Apr 14 20:35:42 mgraid-S000030311-1 attrd: [16634]: info: attrd_local_callback: Sending full refresh (origin=crmd)
Apr 14 20:35:42 mgraid-S000030311-1 crmd: [16635]: info: crm_update_quorum: Updating quorum status to true (call=153)
Apr 14 20:35:42 mgraid-S000030311-1 attrd: [16634]: info: attrd_trigger_update: Sending flush op to all hosts for: master-SSJ000030316:0 (<null>)
Apr 14 20:35:42 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_TE_CANCEL
Apr 14 20:35:42 mgraid-S000030311-1 crmd: [16635]: debug: do_te_invoke: Cancelling the transition: inactive
Apr 14 20:35:42 mgraid-S000030311-1 crmd: [16635]: info: abort_transition_graph: do_te_invoke:191 - Triggered transition abort (complete=1) : Peer Cancelled
Apr 14 20:35:42 mgraid-S000030311-1 crmd: [16635]: debug: s_crmd_fsa: Processing I_PE_CALC: [ state=S_POLICY_ENGINE cause=C_FSA_INTERNAL origin=abort_transition_graph ]
Apr 14 20:35:42 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_PE_INVOKE
Apr 14 20:35:42 mgraid-S000030311-1 crmd: [16635]: info: do_pe_invoke: Query 154: Requesting the current CIB: S_POLICY_ENGINE
Apr 14 20:35:43 mgraid-S000030311-1 cib: [16631]: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/151, version=0.66.6): ok (rc=0)
Apr 14 20:35:43 mgraid-S000030311-1 cib: [16631]: info: cib_process_request: Operation complete: op cib_modify for section cib (origin=local/crmd/153, version=0.66.6): ok (rc=0)
Apr 14 20:35:43 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='master-SSJ000030316:0'] does not exist
Apr 14 20:35:43 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for master-SSJ000030316:0=(null) passed
Apr 14 20:35:43 mgraid-S000030311-1 attrd: [16634]: info: attrd_trigger_update: Sending flush op to all hosts for: master-SSJ000030316:1 (<null>)
Apr 14 20:35:43 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='master-SSJ000030316:1'] does not exist
Apr 14 20:35:43 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for master-SSJ000030316:1=(null) passed
Apr 14 20:35:43 mgraid-S000030311-1 attrd: [16634]: info: attrd_trigger_update: Sending flush op to all hosts for: master-SSJ000030313:1 (<null>)
Apr 14 20:35:43 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='master-SSJ000030313:1'] does not exist
Apr 14 20:35:43 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for master-SSJ000030313:1=(null) passed
Apr 14 20:35:43 mgraid-S000030311-1 attrd: [16634]: info: attrd_trigger_update: Sending flush op to all hosts for: master-SSJ000030313:0 (<null>)
Apr 14 20:35:43 mgraid-S000030311-1 crmd: [16635]: info: do_pe_invoke_callback: Invoking the PE: query=154, ref=pe_calc-dc-1302838543-79, seq=2, quorate=1
Apr 14 20:35:43 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='master-SSJ000030313:0'] does not exist
Apr 14 20:35:43 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for master-SSJ000030313:0=(null) passed
Apr 14 20:35:43 mgraid-S000030311-1 attrd: [16634]: info: attrd_trigger_update: Sending flush op to all hosts for: master-SSS000030311:1 (<null>)
Apr 14 20:35:43 mgraid-S000030311-1 pengine: [16986]: debug: cluster_option: Using default value 'stop' for cluster option 'no-quorum-policy'
Apr 14 20:35:43 mgraid-S000030311-1 pengine: [16986]: debug: cluster_option: Using default value 'true' for cluster option 'symmetric-cluster'
Apr 14 20:35:43 mgraid-S000030311-1 pengine: [16986]: debug: cluster_option: Using default value '0' for cluster option 'default-resource-stickiness'
Apr 14 20:35:43 mgraid-S000030311-1 pengine: [16986]: debug: cluster_option: Using default value 'true' for cluster option 'is-managed-default'
Apr 14 20:35:43 mgraid-S000030311-1 pengine: [16986]: debug: cluster_option: Using default value 'false' for cluster option 'maintenance-mode'
Apr 14 20:35:43 mgraid-S000030311-1 pengine: [16986]: debug: cluster_option: Using default value 'true' for cluster option 'start-failure-is-fatal'
Apr 14 20:35:43 mgraid-S000030311-1 pengine: [16986]: debug: cluster_option: Using default value 'reboot' for cluster option 'stonith-action'
Apr 14 20:35:43 mgraid-S000030311-1 pengine: [16986]: debug: cluster_option: Using default value '60s' for cluster option 'stonith-timeout'
Apr 14 20:35:43 mgraid-S000030311-1 pengine: [16986]: debug: cluster_option: Using default value 'true' for cluster option 'startup-fencing'
Apr 14 20:35:43 mgraid-S000030311-1 pengine: [16986]: debug: cluster_option: Using default value '60s' for cluster option 'cluster-delay'
Apr 14 20:35:43 mgraid-S000030311-1 pengine: [16986]: debug: cluster_option: Using default value '30' for cluster option 'batch-limit'
Apr 14 20:35:43 mgraid-S000030311-1 pengine: [16986]: debug: cluster_option: Using default value '20s' for cluster option 'default-action-timeout'
Apr 14 20:35:43 mgraid-S000030311-1 pengine: [16986]: debug: cluster_option: Using default value 'false' for cluster option 'stop-all-resources'
Apr 14 20:35:43 mgraid-S000030311-1 pengine: [16986]: debug: cluster_option: Using default value 'true' for cluster option 'stop-orphan-resources'
Apr 14 20:35:43 mgraid-S000030311-1 pengine: [16986]: debug: cluster_option: Using default value 'true' for cluster option 'stop-orphan-actions'
Apr 14 20:35:43 mgraid-S000030311-1 pengine: [16986]: debug: cluster_option: Using default value 'false' for cluster option 'remove-after-stop'
Apr 14 20:35:43 mgraid-S000030311-1 pengine: [16986]: debug: cluster_option: Using default value '-1' for cluster option 'pe-error-series-max'
Apr 14 20:35:43 mgraid-S000030311-1 pengine: [16986]: debug: cluster_option: Using default value '-1' for cluster option 'pe-warn-series-max'
Apr 14 20:35:43 mgraid-S000030311-1 pengine: [16986]: debug: cluster_option: Using default value '-1' for cluster option 'pe-input-series-max'
Apr 14 20:35:43 mgraid-S000030311-1 pengine: [16986]: debug: cluster_option: Using default value 'none' for cluster option 'node-health-strategy'
Apr 14 20:35:43 mgraid-S000030311-1 pengine: [16986]: debug: cluster_option: Using default value '0' for cluster option 'node-health-green'
Apr 14 20:35:43 mgraid-S000030311-1 pengine: [16986]: debug: cluster_option: Using default value '0' for cluster option 'node-health-yellow'
Apr 14 20:35:43 mgraid-S000030311-1 pengine: [16986]: debug: cluster_option: Using default value '-INFINITY' for cluster option 'node-health-red'
Apr 14 20:35:43 mgraid-S000030311-1 pengine: [16986]: debug: unpack_config: STONITH timeout: 60000
Apr 14 20:35:43 mgraid-S000030311-1 pengine: [16986]: debug: unpack_config: STONITH of failed nodes is enabled
Apr 14 20:35:43 mgraid-S000030311-1 pengine: [16986]: debug: unpack_config: Stop all active resources: false
Apr 14 20:35:43 mgraid-S000030311-1 pengine: [16986]: debug: unpack_config: Cluster is symmetric - resources can run anywhere by default
Apr 14 20:35:43 mgraid-S000030311-1 pengine: [16986]: debug: unpack_config: Default stickiness: 0
Apr 14 20:35:43 mgraid-S000030311-1 pengine: [16986]: debug: unpack_config: On loss of CCM Quorum: Stop ALL resources
Apr 14 20:35:43 mgraid-S000030311-1 pengine: [16986]: info: unpack_config: Node scores: 'red' = -INFINITY, 'yellow' = 0, 'green' = 0
Apr 14 20:35:43 mgraid-S000030311-1 pengine: [16986]: info: determine_online_status: Node mgraid-s000030311-0 is online
Apr 14 20:35:43 mgraid-S000030311-1 pengine: [16986]: debug: unpack_rsc_op: icms:0_monitor_0 on mgraid-s000030311-0 returned 0 (ok) instead of the expected value: 7 (not running)
Apr 14 20:35:43 mgraid-S000030311-1 pengine: [16986]: notice: unpack_rsc_op: Operation icms:0_monitor_0 found resource icms:0 active on mgraid-s000030311-0
Apr 14 20:35:43 mgraid-S000030311-1 pengine: [16986]: debug: unpack_rsc_op: omserver:0_monitor_0 on mgraid-s000030311-0 returned 0 (ok) instead of the expected value: 7 (not running)
Apr 14 20:35:43 mgraid-S000030311-1 pengine: [16986]: notice: unpack_rsc_op: Operation omserver:0_monitor_0 found resource omserver:0 active on mgraid-s000030311-0
Apr 14 20:35:43 mgraid-S000030311-1 pengine: [16986]: info: determine_online_status: Node mgraid-s000030311-1 is online
Apr 14 20:35:43 mgraid-S000030311-1 pengine: [16986]: debug: unpack_rsc_op: omserver:1_monitor_0 on mgraid-s000030311-1 returned 0 (ok) instead of the expected value: 7 (not running)
Apr 14 20:35:43 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='master-SSS000030311:1'] does not exist
Apr 14 20:35:43 mgraid-S000030311-1 pengine: [16986]: notice: unpack_rsc_op: Operation omserver:1_monitor_0 found resource omserver:1 active on mgraid-s000030311-1
Apr 14 20:35:43 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for master-SSS000030311:1=(null) passed
Apr 14 20:35:43 mgraid-S000030311-1 attrd: [16634]: info: attrd_trigger_update: Sending flush op to all hosts for: terminate (<null>)
Apr 14 20:35:43 mgraid-S000030311-1 pengine: [16986]: debug: unpack_rsc_op: icms:1_monitor_0 on mgraid-s000030311-1 returned 0 (ok) instead of the expected value: 7 (not running)
Apr 14 20:35:43 mgraid-S000030311-1 pengine: [16986]: notice: unpack_rsc_op: Operation icms:1_monitor_0 found resource icms:1 active on mgraid-s000030311-1
Apr 14 20:35:43 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='terminate'] does not exist
Apr 14 20:35:43 mgraid-S000030311-1 pengine: [16986]: notice: clone_print:  Clone Set: Fencing
Apr 14 20:35:43 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for terminate=(null) passed
Apr 14 20:35:43 mgraid-S000030311-1 attrd: [16634]: info: attrd_trigger_update: Sending flush op to all hosts for: master-SSS000030311:0 (<null>)
Apr 14 20:35:43 mgraid-S000030311-1 pengine: [16986]: notice: short_print:      Stopped: [ mgraid-stonith:0 mgraid-stonith:1 ]
Apr 14 20:35:43 mgraid-S000030311-1 pengine: [16986]: notice: clone_print:  Clone Set: cloneIcms
Apr 14 20:35:43 mgraid-S000030311-1 pengine: [16986]: debug: native_active: Resource icms:0 active on mgraid-s000030311-0
Apr 14 20:35:43 mgraid-S000030311-1 pengine: [16986]: debug: native_active: Resource icms:0 active on mgraid-s000030311-0
Apr 14 20:35:43 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='master-SSS000030311:0'] does not exist
Apr 14 20:35:43 mgraid-S000030311-1 pengine: [16986]: debug: native_active: Resource icms:1 active on mgraid-s000030311-1
Apr 14 20:35:43 mgraid-S000030311-1 pengine: [16986]: debug: native_active: Resource icms:1 active on mgraid-s000030311-1
Apr 14 20:35:43 mgraid-S000030311-1 pengine: [16986]: notice: short_print:      Started: [ mgraid-s000030311-0 mgraid-s000030311-1 ]
Apr 14 20:35:43 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for master-SSS000030311:0=(null) passed
Apr 14 20:35:43 mgraid-S000030311-1 attrd: [16634]: info: attrd_trigger_update: Sending flush op to all hosts for: master-SSJ000030312:1 (<null>)
Apr 14 20:35:43 mgraid-S000030311-1 pengine: [16986]: notice: clone_print:  Clone Set: cloneOmserver
Apr 14 20:35:43 mgraid-S000030311-1 pengine: [16986]: debug: native_active: Resource omserver:0 active on mgraid-s000030311-0
Apr 14 20:35:43 mgraid-S000030311-1 pengine: [16986]: debug: native_active: Resource omserver:0 active on mgraid-s000030311-0
Apr 14 20:35:43 mgraid-S000030311-1 pengine: [16986]: debug: native_active: Resource omserver:1 active on mgraid-s000030311-1
Apr 14 20:35:43 mgraid-S000030311-1 pengine: [16986]: debug: native_active: Resource omserver:1 active on mgraid-s000030311-1
Apr 14 20:35:43 mgraid-S000030311-1 pengine: [16986]: notice: short_print:      Started: [ mgraid-s000030311-0 mgraid-s000030311-1 ]
Apr 14 20:35:43 mgraid-S000030311-1 pengine: [16986]: notice: clone_print:  Master/Slave Set: ms-SSS000030311
Apr 14 20:35:43 mgraid-S000030311-1 pengine: [16986]: notice: short_print:      Stopped: [ SSS000030311:0 SSS000030311:1 ]
Apr 14 20:35:43 mgraid-S000030311-1 pengine: [16986]: notice: clone_print:  Master/Slave Set: ms-SSJ000030313
Apr 14 20:35:43 mgraid-S000030311-1 pengine: [16986]: notice: short_print:      Stopped: [ SSJ000030313:0 SSJ000030313:1 ]
Apr 14 20:35:43 mgraid-S000030311-1 pengine: [16986]: notice: clone_print:  Master/Slave Set: ms-SSJ000030316
Apr 14 20:35:43 mgraid-S000030311-1 pengine: [16986]: notice: short_print:      Stopped: [ SSJ000030316:0 SSJ000030316:1 ]
Apr 14 20:35:43 mgraid-S000030311-1 pengine: [16986]: notice: clone_print:  Master/Slave Set: ms-SSJ000030312
Apr 14 20:35:43 mgraid-S000030311-1 pengine: [16986]: notice: short_print:      Stopped: [ SSJ000030312:0 SSJ000030312:1 ]
Apr 14 20:35:43 mgraid-S000030311-1 pengine: [16986]: notice: clone_print:  Master/Slave Set: ms-SSJ000030314
Apr 14 20:35:43 mgraid-S000030311-1 pengine: [16986]: notice: short_print:      Stopped: [ SSJ000030314:0 SSJ000030314:1 ]
Apr 14 20:35:43 mgraid-S000030311-1 pengine: [16986]: debug: native_rsc_location: Constraint (ms-SSS000030311-master-w1-rule) is not active (role : Master)
Apr 14 20:35:43 mgraid-S000030311-1 pengine: [16986]: debug: native_rsc_location: Constraint (ms-SSS000030311-master-w1-rule) is not active (role : Master)
Apr 14 20:35:43 mgraid-S000030311-1 pengine: [16986]: debug: native_rsc_location: Constraint (ms-SSS000030311-master-w1-rule) is not active (role : Master)
Apr 14 20:35:43 mgraid-S000030311-1 pengine: [16986]: debug: native_rsc_location: Constraint (ms-SSJ000030313-master-w1-rule) is not active (role : Master)
Apr 14 20:35:43 mgraid-S000030311-1 pengine: [16986]: debug: native_rsc_location: Constraint (ms-SSJ000030313-master-w1-rule) is not active (role : Master)
Apr 14 20:35:43 mgraid-S000030311-1 pengine: [16986]: debug: native_rsc_location: Constraint (ms-SSJ000030313-master-w1-rule) is not active (role : Master)
Apr 14 20:35:43 mgraid-S000030311-1 pengine: [16986]: debug: native_rsc_location: Constraint (ms-SSJ000030312-master-w1-rule) is not active (role : Master)
Apr 14 20:35:43 mgraid-S000030311-1 pengine: [16986]: debug: native_rsc_location: Constraint (ms-SSJ000030312-master-w1-rule) is not active (role : Master)
Apr 14 20:35:43 mgraid-S000030311-1 pengine: [16986]: debug: native_rsc_location: Constraint (ms-SSJ000030312-master-w1-rule) is not active (role : Master)
Apr 14 20:35:43 mgraid-S000030311-1 pengine: [16986]: debug: native_rsc_location: Constraint (ms-SSJ000030316-master-w1-rule) is not active (role : Master)
Apr 14 20:35:43 mgraid-S000030311-1 pengine: [16986]: debug: native_rsc_location: Constraint (ms-SSJ000030316-master-w1-rule) is not active (role : Master)
Apr 14 20:35:43 mgraid-S000030311-1 pengine: [16986]: debug: native_rsc_location: Constraint (ms-SSJ000030316-master-w1-rule) is not active (role : Master)
Apr 14 20:35:43 mgraid-S000030311-1 pengine: [16986]: debug: common_apply_stickiness: Resource icms:0: preferring current location (node=mgraid-s000030311-0, weight=1)
Apr 14 20:35:43 mgraid-S000030311-1 pengine: [16986]: debug: common_apply_stickiness: Resource omserver:0: preferring current location (node=mgraid-s000030311-0, weight=1)
Apr 14 20:35:43 mgraid-S000030311-1 pengine: [16986]: debug: common_apply_stickiness: Resource icms:1: preferring current location (node=mgraid-s000030311-1, weight=1)
Apr 14 20:35:43 mgraid-S000030311-1 pengine: [16986]: debug: common_apply_stickiness: Resource omserver:1: preferring current location (node=mgraid-s000030311-1, weight=1)
Apr 14 20:35:43 mgraid-S000030311-1 pengine: [16986]: debug: native_assign_node: Assigning mgraid-s000030311-0 to mgraid-stonith:0
Apr 14 20:35:43 mgraid-S000030311-1 pengine: [16986]: debug: native_assign_node: Assigning mgraid-s000030311-1 to mgraid-stonith:1
Apr 14 20:35:43 mgraid-S000030311-1 pengine: [16986]: debug: clone_color: Allocated 2 Fencing instances of a possible 2
Apr 14 20:35:43 mgraid-S000030311-1 pengine: [16986]: debug: native_assign_node: Assigning mgraid-s000030311-0 to icms:0
Apr 14 20:35:43 mgraid-S000030311-1 pengine: [16986]: debug: native_assign_node: Assigning mgraid-s000030311-1 to icms:1
Apr 14 20:35:43 mgraid-S000030311-1 pengine: [16986]: debug: clone_color: Allocated 2 cloneIcms instances of a possible 2
Apr 14 20:35:43 mgraid-S000030311-1 pengine: [16986]: debug: native_assign_node: Assigning mgraid-s000030311-0 to omserver:0
Apr 14 20:35:43 mgraid-S000030311-1 pengine: [16986]: debug: native_assign_node: Assigning mgraid-s000030311-1 to omserver:1
Apr 14 20:35:43 mgraid-S000030311-1 pengine: [16986]: debug: clone_color: Allocated 2 cloneOmserver instances of a possible 2
Apr 14 20:35:43 mgraid-S000030311-1 pengine: [16986]: debug: native_assign_node: Assigning mgraid-s000030311-0 to SSS000030311:0
Apr 14 20:35:43 mgraid-S000030311-1 pengine: [16986]: debug: native_assign_node: Assigning mgraid-s000030311-1 to SSS000030311:1
Apr 14 20:35:43 mgraid-S000030311-1 pengine: [16986]: debug: clone_color: Allocated 2 ms-SSS000030311 instances of a possible 2
Apr 14 20:35:43 mgraid-S000030311-1 pengine: [16986]: debug: master_color: SSS000030311:0 master score: 99
Apr 14 20:35:43 mgraid-S000030311-1 pengine: [16986]: info: master_color: Promoting SSS000030311:0 (Stopped mgraid-s000030311-0)
Apr 14 20:35:43 mgraid-S000030311-1 pengine: [16986]: debug: master_color: SSS000030311:1 master score: -1
Apr 14 20:35:43 mgraid-S000030311-1 pengine: [16986]: info: master_color: ms-SSS000030311: Promoted 1 instances of a possible 1 to master
Apr 14 20:35:43 mgraid-S000030311-1 pengine: [16986]: debug: native_assign_node: Assigning mgraid-s000030311-0 to SSJ000030313:0
Apr 14 20:35:43 mgraid-S000030311-1 pengine: [16986]: debug: native_assign_node: Assigning mgraid-s000030311-1 to SSJ000030313:1
Apr 14 20:35:43 mgraid-S000030311-1 pengine: [16986]: debug: clone_color: Allocated 2 ms-SSJ000030313 instances of a possible 2
Apr 14 20:35:43 mgraid-S000030311-1 pengine: [16986]: debug: master_color: SSJ000030313:1 master score: 99
Apr 14 20:35:43 mgraid-S000030311-1 pengine: [16986]: info: master_color: Promoting SSJ000030313:1 (Stopped mgraid-s000030311-1)
Apr 14 20:35:43 mgraid-S000030311-1 pengine: [16986]: debug: master_color: SSJ000030313:0 master score: -1
Apr 14 20:35:43 mgraid-S000030311-1 pengine: [16986]: info: master_color: ms-SSJ000030313: Promoted 1 instances of a possible 1 to master
Apr 14 20:35:43 mgraid-S000030311-1 pengine: [16986]: debug: native_assign_node: Assigning mgraid-s000030311-0 to SSJ000030316:0
Apr 14 20:35:43 mgraid-S000030311-1 pengine: [16986]: debug: native_assign_node: Assigning mgraid-s000030311-1 to SSJ000030316:1
Apr 14 20:35:43 mgraid-S000030311-1 pengine: [16986]: debug: clone_color: Allocated 2 ms-SSJ000030316 instances of a possible 2
Apr 14 20:35:43 mgraid-S000030311-1 pengine: [16986]: debug: master_color: SSJ000030316:0 master score: 99
Apr 14 20:35:43 mgraid-S000030311-1 pengine: [16986]: info: master_color: Promoting SSJ000030316:0 (Stopped mgraid-s000030311-0)
Apr 14 20:35:43 mgraid-S000030311-1 pengine: [16986]: debug: master_color: SSJ000030316:1 master score: -1
Apr 14 20:35:43 mgraid-S000030311-1 pengine: [16986]: info: master_color: ms-SSJ000030316: Promoted 1 instances of a possible 1 to master
Apr 14 20:35:43 mgraid-S000030311-1 pengine: [16986]: debug: native_assign_node: Assigning mgraid-s000030311-0 to SSJ000030312:0
Apr 14 20:35:43 mgraid-S000030311-1 pengine: [16986]: debug: native_assign_node: Assigning mgraid-s000030311-1 to SSJ000030312:1
Apr 14 20:35:43 mgraid-S000030311-1 pengine: [16986]: debug: clone_color: Allocated 2 ms-SSJ000030312 instances of a possible 2
Apr 14 20:35:43 mgraid-S000030311-1 pengine: [16986]: debug: master_color: SSJ000030312:0 master score: 99
Apr 14 20:35:43 mgraid-S000030311-1 pengine: [16986]: info: master_color: Promoting SSJ000030312:0 (Stopped mgraid-s000030311-0)
Apr 14 20:35:43 mgraid-S000030311-1 pengine: [16986]: debug: master_color: SSJ000030312:1 master score: -1
Apr 14 20:35:43 mgraid-S000030311-1 pengine: [16986]: info: master_color: ms-SSJ000030312: Promoted 1 instances of a possible 1 to master
Apr 14 20:35:43 mgraid-S000030311-1 pengine: [16986]: debug: native_assign_node: Assigning mgraid-s000030311-0 to SSJ000030314:0
Apr 14 20:35:43 mgraid-S000030311-1 pengine: [16986]: debug: native_assign_node: Assigning mgraid-s000030311-1 to SSJ000030314:1
Apr 14 20:35:43 mgraid-S000030311-1 pengine: [16986]: debug: clone_color: Allocated 2 ms-SSJ000030314 instances of a possible 2
Apr 14 20:35:43 mgraid-S000030311-1 pengine: [16986]: debug: master_color: SSJ000030314:0 master score: -1
Apr 14 20:35:43 mgraid-S000030311-1 pengine: [16986]: debug: master_color: SSJ000030314:1 master score: -1
Apr 14 20:35:43 mgraid-S000030311-1 pengine: [16986]: info: master_color: ms-SSJ000030314: Promoted 0 instances of a possible 1 to master
Apr 14 20:35:43 mgraid-S000030311-1 pengine: [16986]: debug: native_create_probe: Probing SSJ000030314:0 on mgraid-s000030311-0 (Stopped)
Apr 14 20:35:43 mgraid-S000030311-1 pengine: [16986]: debug: native_create_probe: Probing SSJ000030314:1 on mgraid-s000030311-1 (Stopped)
Apr 14 20:35:43 mgraid-S000030311-1 pengine: [16986]: debug: child_stopping_constraints: Fencing has no active children
Apr 14 20:35:43 mgraid-S000030311-1 pengine: [16986]: notice: RecurringOp:  Start recurring monitor (5s) for icms:0 on mgraid-s000030311-0
Apr 14 20:35:43 mgraid-S000030311-1 pengine: [16986]: notice: RecurringOp:  Start recurring monitor (5s) for icms:1 on mgraid-s000030311-1
Apr 14 20:35:43 mgraid-S000030311-1 pengine: [16986]: notice: RecurringOp:  Start recurring monitor (5s) for omserver:0 on mgraid-s000030311-0
Apr 14 20:35:43 mgraid-S000030311-1 pengine: [16986]: notice: RecurringOp:  Start recurring monitor (5s) for omserver:1 on mgraid-s000030311-1
Apr 14 20:35:43 mgraid-S000030311-1 pengine: [16986]: debug: master_create_actions: Creating actions for ms-SSS000030311
Apr 14 20:35:43 mgraid-S000030311-1 pengine: [16986]: notice: RecurringOp:  Start recurring monitor (3s) for SSS000030311:0 on mgraid-s000030311-0
Apr 14 20:35:43 mgraid-S000030311-1 pengine: [16986]: notice: RecurringOp:  Start recurring monitor (10s) for SSS000030311:1 on mgraid-s000030311-1
Apr 14 20:35:43 mgraid-S000030311-1 pengine: [16986]: debug: child_stopping_constraints: ms-SSS000030311 has no active children
Apr 14 20:35:43 mgraid-S000030311-1 pengine: [16986]: notice: RecurringOp:  Start recurring monitor (3s) for SSS000030311:0 on mgraid-s000030311-0
Apr 14 20:35:43 mgraid-S000030311-1 pengine: [16986]: notice: RecurringOp:  Start recurring monitor (10s) for SSS000030311:1 on mgraid-s000030311-1
Apr 14 20:35:43 mgraid-S000030311-1 pengine: [16986]: debug: master_create_actions: Creating actions for ms-SSJ000030313
Apr 14 20:35:43 mgraid-S000030311-1 pengine: [16986]: notice: RecurringOp:  Start recurring monitor (10s) for SSJ000030313:0 on mgraid-s000030311-0
Apr 14 20:35:43 mgraid-S000030311-1 pengine: [16986]: notice: RecurringOp:  Start recurring monitor (3s) for SSJ000030313:1 on mgraid-s000030311-1
Apr 14 20:35:43 mgraid-S000030311-1 pengine: [16986]: debug: child_stopping_constraints: ms-SSJ000030313 has no active children
Apr 14 20:35:43 mgraid-S000030311-1 pengine: [16986]: notice: RecurringOp:  Start recurring monitor (10s) for SSJ000030313:0 on mgraid-s000030311-0
Apr 14 20:35:43 mgraid-S000030311-1 pengine: [16986]: notice: RecurringOp:  Start recurring monitor (3s) for SSJ000030313:1 on mgraid-s000030311-1
Apr 14 20:35:43 mgraid-S000030311-1 pengine: [16986]: debug: master_create_actions: Creating actions for ms-SSJ000030316
Apr 14 20:35:43 mgraid-S000030311-1 pengine: [16986]: notice: RecurringOp:  Start recurring monitor (3s) for SSJ000030316:0 on mgraid-s000030311-0
Apr 14 20:35:43 mgraid-S000030311-1 pengine: [16986]: notice: RecurringOp:  Start recurring monitor (10s) for SSJ000030316:1 on mgraid-s000030311-1
Apr 14 20:35:43 mgraid-S000030311-1 pengine: [16986]: debug: child_stopping_constraints: ms-SSJ000030316 has no active children
Apr 14 20:35:43 mgraid-S000030311-1 pengine: [16986]: notice: RecurringOp:  Start recurring monitor (3s) for SSJ000030316:0 on mgraid-s000030311-0
Apr 14 20:35:43 mgraid-S000030311-1 pengine: [16986]: notice: RecurringOp:  Start recurring monitor (10s) for SSJ000030316:1 on mgraid-s000030311-1
Apr 14 20:35:43 mgraid-S000030311-1 pengine: [16986]: debug: master_create_actions: Creating actions for ms-SSJ000030312
Apr 14 20:35:43 mgraid-S000030311-1 pengine: [16986]: notice: RecurringOp:  Start recurring monitor (3s) for SSJ000030312:0 on mgraid-s000030311-0
Apr 14 20:35:43 mgraid-S000030311-1 pengine: [16986]: notice: RecurringOp:  Start recurring monitor (10s) for SSJ000030312:1 on mgraid-s000030311-1
Apr 14 20:35:43 mgraid-S000030311-1 pengine: [16986]: debug: child_stopping_constraints: ms-SSJ000030312 has no active children
Apr 14 20:35:43 mgraid-S000030311-1 pengine: [16986]: notice: RecurringOp:  Start recurring monitor (3s) for SSJ000030312:0 on mgraid-s000030311-0
Apr 14 20:35:43 mgraid-S000030311-1 pengine: [16986]: notice: RecurringOp:  Start recurring monitor (10s) for SSJ000030312:1 on mgraid-s000030311-1
Apr 14 20:35:43 mgraid-S000030311-1 pengine: [16986]: debug: master_create_actions: Creating actions for ms-SSJ000030314
Apr 14 20:35:43 mgraid-S000030311-1 pengine: [16986]: notice: RecurringOp:  Start recurring monitor (10s) for SSJ000030314:0 on mgraid-s000030311-0
Apr 14 20:35:43 mgraid-S000030311-1 pengine: [16986]: notice: RecurringOp:  Start recurring monitor (10s) for SSJ000030314:1 on mgraid-s000030311-1
Apr 14 20:35:43 mgraid-S000030311-1 pengine: [16986]: debug: child_stopping_constraints: ms-SSJ000030314 has no active children
Apr 14 20:35:43 mgraid-S000030311-1 pengine: [16986]: notice: RecurringOp:  Start recurring monitor (10s) for SSJ000030314:0 on mgraid-s000030311-0
Apr 14 20:35:43 mgraid-S000030311-1 pengine: [16986]: notice: RecurringOp:  Start recurring monitor (10s) for SSJ000030314:1 on mgraid-s000030311-1
Apr 14 20:35:43 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='master-SSJ000030312:1'] does not exist
Apr 14 20:35:43 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for master-SSJ000030312:1=(null) passed
Apr 14 20:35:43 mgraid-S000030311-1 attrd: [16634]: info: attrd_trigger_update: Sending flush op to all hosts for: master-SSJ000030312:0 (<null>)
Apr 14 20:35:43 mgraid-S000030311-1 pengine: [16986]: notice: LogActions: Start mgraid-stonith:0	(mgraid-s000030311-0)
Apr 14 20:35:43 mgraid-S000030311-1 pengine: [16986]: notice: LogActions: Start mgraid-stonith:1	(mgraid-s000030311-1)
Apr 14 20:35:43 mgraid-S000030311-1 pengine: [16986]: notice: LogActions: Leave resource icms:0	(Started mgraid-s000030311-0)
Apr 14 20:35:43 mgraid-S000030311-1 pengine: [16986]: notice: LogActions: Leave resource icms:1	(Started mgraid-s000030311-1)
Apr 14 20:35:43 mgraid-S000030311-1 pengine: [16986]: notice: LogActions: Leave resource omserver:0	(Started mgraid-s000030311-0)
Apr 14 20:35:43 mgraid-S000030311-1 pengine: [16986]: notice: LogActions: Leave resource omserver:1	(Started mgraid-s000030311-1)
Apr 14 20:35:43 mgraid-S000030311-1 pengine: [16986]: notice: LogActions: Start SSS000030311:0	(mgraid-s000030311-0)
Apr 14 20:35:43 mgraid-S000030311-1 pengine: [16986]: notice: LogActions: Promote SSS000030311:0	(Stopped -> Master mgraid-s000030311-0)
Apr 14 20:35:43 mgraid-S000030311-1 pengine: [16986]: notice: LogActions: Start SSS000030311:1	(mgraid-s000030311-1)
Apr 14 20:35:43 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='master-SSJ000030312:0'] does not exist
Apr 14 20:35:43 mgraid-S000030311-1 pengine: [16986]: notice: LogActions: Start SSJ000030313:0	(mgraid-s000030311-0)
Apr 14 20:35:43 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for master-SSJ000030312:0=(null) passed
Apr 14 20:35:43 mgraid-S000030311-1 pengine: [16986]: notice: LogActions: Start SSJ000030313:1	(mgraid-s000030311-1)
Apr 14 20:35:43 mgraid-S000030311-1 attrd: [16634]: info: attrd_trigger_update: Sending flush op to all hosts for: shutdown (<null>)
Apr 14 20:35:43 mgraid-S000030311-1 pengine: [16986]: notice: LogActions: Promote SSJ000030313:1	(Stopped -> Master mgraid-s000030311-1)
Apr 14 20:35:43 mgraid-S000030311-1 pengine: [16986]: notice: LogActions: Start SSJ000030316:0	(mgraid-s000030311-0)
Apr 14 20:35:43 mgraid-S000030311-1 pengine: [16986]: notice: LogActions: Promote SSJ000030316:0	(Stopped -> Master mgraid-s000030311-0)
Apr 14 20:35:43 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='shutdown'] does not exist
Apr 14 20:35:43 mgraid-S000030311-1 pengine: [16986]: notice: LogActions: Start SSJ000030316:1	(mgraid-s000030311-1)
Apr 14 20:35:43 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for shutdown=(null) passed
Apr 14 20:35:43 mgraid-S000030311-1 pengine: [16986]: notice: LogActions: Start SSJ000030312:0	(mgraid-s000030311-0)
Apr 14 20:35:43 mgraid-S000030311-1 attrd: [16634]: info: attrd_trigger_update: Sending flush op to all hosts for: probe_complete (true)
Apr 14 20:35:43 mgraid-S000030311-1 pengine: [16986]: notice: LogActions: Promote SSJ000030312:0	(Stopped -> Master mgraid-s000030311-0)
Apr 14 20:35:43 mgraid-S000030311-1 pengine: [16986]: notice: LogActions: Start SSJ000030312:1	(mgraid-s000030311-1)
Apr 14 20:35:43 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='probe_complete'] (/cib/status/node_state[2]/transient_attributes/instance_attributes/nvpair)
Apr 14 20:35:43 mgraid-S000030311-1 pengine: [16986]: notice: LogActions: Start SSJ000030314:0	(mgraid-s000030311-0)
Apr 14 20:35:43 mgraid-S000030311-1 pengine: [16986]: notice: LogActions: Start SSJ000030314:1	(mgraid-s000030311-1)
Apr 14 20:35:43 mgraid-S000030311-1 attrd: [16634]: debug: log_data_element: find_nvpair_attr: Match <nvpair id="status-856c1f72-7cd1-4906-8183-8be87eef96f2-probe_complete" name="probe_complete" value="true" />
Apr 14 20:35:43 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update 232 for probe_complete=true passed
Apr 14 20:35:43 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update 244 for probe_complete=true passed
Apr 14 20:35:43 mgraid-S000030311-1 pengine: [16986]: info: process_pe_message: Transition 3: PEngine Input stored in: /var/lib/pengine/pe-input-6.bz2
Apr 14 20:35:43 mgraid-S000030311-1 crmd: [16635]: debug: s_crmd_fsa: Processing I_PE_SUCCESS: [ state=S_POLICY_ENGINE cause=C_IPC_MESSAGE origin=handle_response ]
Apr 14 20:35:43 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_LOG   
Apr 14 20:35:43 mgraid-S000030311-1 crmd: [16635]: info: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
Apr 14 20:35:43 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
Apr 14 20:35:43 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_INTEGRATE_TIMER_STOP
Apr 14 20:35:43 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_FINALIZE_TIMER_STOP
Apr 14 20:35:43 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_TE_INVOKE
Apr 14 20:35:43 mgraid-S000030311-1 crmd: [16635]: info: unpack_graph: Unpacked transition 3: 118 actions in 118 synapses
Apr 14 20:35:43 mgraid-S000030311-1 crmd: [16635]: info: do_te_invoke: Processing graph 3 (ref=pe_calc-dc-1302838543-79) derived from /var/lib/pengine/pe-input-6.bz2
Apr 14 20:35:43 mgraid-S000030311-1 crmd: [16635]: info: te_pseudo_action: Pseudo action 9 fired and confirmed
Apr 14 20:35:43 mgraid-S000030311-1 crmd: [16635]: info: te_rsc_command: Initiating action 15: monitor icms:0_monitor_5000 on mgraid-s000030311-0
Apr 14 20:35:43 mgraid-S000030311-1 crmd: [16635]: info: te_rsc_command: Initiating action 18: monitor icms:1_monitor_5000 on mgraid-s000030311-1 (local)
Apr 14 20:35:43 mgraid-S000030311-1 crmd: [16635]: info: do_lrm_rsc_op: Performing key=18:3:0:469a0e5c-f535-4c85-84b2-fd971ee76592 op=icms:1_monitor_5000 )
Apr 14 20:35:43 mgraid-S000030311-1 lrmd: [16632]: debug: on_msg_perform_op: add an operation operation monitor[9] on lsb::S53icms::icms:1 for client 16635, its parameters: CRM_meta_clone=[1] CRM_meta_clone_node_max=[1] CRM_meta_clone_max=[2] CRM_meta_notify=[false] crm_feature_set=[3.0.1] CRM_meta_globally_unique=[false] CRM_meta_name=[monitor] CRM_meta_interval=[5000] CRM_meta_timeout=[7000]  to the operation list.
Apr 14 20:35:43 mgraid-S000030311-1 lrmd: [16632]: info: rsc:icms:1:9: monitor
Apr 14 20:35:43 mgraid-S000030311-1 crmd: [16635]: info: te_rsc_command: Initiating action 25: monitor omserver:0_monitor_5000 on mgraid-s000030311-0
Apr 14 20:35:43 mgraid-S000030311-1 crmd: [16635]: info: te_rsc_command: Initiating action 28: monitor omserver:1_monitor_5000 on mgraid-s000030311-1 (local)
Apr 14 20:35:43 mgraid-S000030311-1 crmd: [16635]: info: do_lrm_rsc_op: Performing key=28:3:0:469a0e5c-f535-4c85-84b2-fd971ee76592 op=omserver:1_monitor_5000 )
Apr 14 20:35:43 mgraid-S000030311-1 lrmd: [16632]: debug: on_msg_perform_op: add an operation operation monitor[10] on lsb::S49omserver::omserver:1 for client 16635, its parameters: CRM_meta_clone=[1] CRM_meta_clone_node_max=[1] CRM_meta_clone_max=[2] CRM_meta_notify=[false] crm_feature_set=[3.0.1] CRM_meta_globally_unique=[false] CRM_meta_name=[monitor] CRM_meta_interval=[5000] CRM_meta_timeout=[7000]  to the operation list.
Apr 14 20:35:43 mgraid-S000030311-1 lrmd: [16632]: info: rsc:omserver:1:10: monitor
Apr 14 20:35:43 mgraid-S000030311-1 crmd: [16635]: info: te_pseudo_action: Pseudo action 40 fired and confirmed
Apr 14 20:35:43 mgraid-S000030311-1 crmd: [16635]: info: te_pseudo_action: Pseudo action 41 fired and confirmed
Apr 14 20:35:43 mgraid-S000030311-1 crmd: [16635]: info: te_pseudo_action: Pseudo action 69 fired and confirmed
Apr 14 20:35:43 mgraid-S000030311-1 crmd: [16635]: info: te_pseudo_action: Pseudo action 70 fired and confirmed
Apr 14 20:35:43 mgraid-S000030311-1 crmd: [16635]: info: te_pseudo_action: Pseudo action 98 fired and confirmed
Apr 14 20:35:43 mgraid-S000030311-1 crmd: [16635]: info: te_pseudo_action: Pseudo action 99 fired and confirmed
Apr 14 20:35:43 mgraid-S000030311-1 crmd: [16635]: info: te_pseudo_action: Pseudo action 127 fired and confirmed
Apr 14 20:35:43 mgraid-S000030311-1 crmd: [16635]: info: te_pseudo_action: Pseudo action 128 fired and confirmed
Apr 14 20:35:43 mgraid-S000030311-1 crmd: [16635]: info: te_rsc_command: Initiating action 4: monitor SSJ000030314:0_monitor_0 on mgraid-s000030311-0
Apr 14 20:35:43 mgraid-S000030311-1 crmd: [16635]: info: te_rsc_command: Initiating action 6: monitor SSJ000030314:1_monitor_0 on mgraid-s000030311-1 (local)
Apr 14 20:35:43 mgraid-S000030311-1 lrmd: [16632]: debug: on_msg_add_rsc:client [16635] adds resource SSJ000030314:1
Apr 14 20:35:43 mgraid-S000030311-1 crmd: [16635]: info: do_lrm_rsc_op: Performing key=6:3:7:469a0e5c-f535-4c85-84b2-fd971ee76592 op=SSJ000030314:1_monitor_0 )
Apr 14 20:35:43 mgraid-S000030311-1 lrmd: [16632]: debug: on_msg_perform_op:2359: copying parameters for rsc SSJ000030314:1
Apr 14 20:35:43 mgraid-S000030311-1 lrmd: [16632]: debug: on_msg_perform_op: add an operation operation monitor[11] on ocf::ss::SSJ000030314:1 for client 16635, its parameters: CRM_meta_clone=[1] ssconf=[/var/omneon/config/config.J000030314] CRM_meta_master_node_max=[1] CRM_meta_clone_node_max=[1] CRM_meta_clone_max=[2] CRM_meta_notify=[true] CRM_meta_master_max=[1] CRM_meta_globally_unique=[false] crm_feature_set=[3.0.1] ss_resource=[SSJ000030314] CRM_meta_timeout=[20000]  to the operation list.
Apr 14 20:35:43 mgraid-S000030311-1 lrmd: [16632]: info: rsc:SSJ000030314:1:11: monitor
Apr 14 20:35:43 mgraid-S000030311-1 crmd: [16635]: info: te_pseudo_action: Pseudo action 155 fired and confirmed
Apr 14 20:35:43 mgraid-S000030311-1 crmd: [16635]: info: te_pseudo_action: Pseudo action 156 fired and confirmed
Apr 14 20:35:43 mgraid-S000030311-1 crmd: [16635]: debug: run_graph: Transition 3 (Complete=0, Pending=6, Fired=17, Skipped=0, Incomplete=101, Source=/var/lib/pengine/pe-input-6.bz2): In-progress
Apr 14 20:35:43 mgraid-S000030311-1 crmd: [16635]: info: te_rsc_command: Initiating action 7: start mgraid-stonith:0_start_0 on mgraid-s000030311-0
Apr 14 20:35:43 mgraid-S000030311-1 crmd: [16635]: info: te_rsc_command: Initiating action 8: start mgraid-stonith:1_start_0 on mgraid-s000030311-1 (local)
Apr 14 20:35:43 mgraid-S000030311-1 crmd: [16635]: info: do_lrm_rsc_op: Performing key=8:3:0:469a0e5c-f535-4c85-84b2-fd971ee76592 op=mgraid-stonith:1_start_0 )
Apr 14 20:35:43 mgraid-S000030311-1 lrmd: [16632]: debug: on_msg_perform_op:2359: copying parameters for rsc mgraid-stonith:1
Apr 14 20:35:43 mgraid-S000030311-1 lrmd: [16632]: debug: on_msg_perform_op: add an operation operation start[12] on stonith::external/mgpstonith::mgraid-stonith:1 for client 16635, its parameters: CRM_meta_clone=[1] CRM_meta_clone_node_max=[1] CRM_meta_clone_max=[2] CRM_meta_notify=[false] crm_feature_set=[3.0.1] CRM_meta_globally_unique=[false] hostlist=[mgraid-canister] CRM_meta_timeout=[20000]  to the operation list.
Apr 14 20:35:43 mgraid-S000030311-1 lrmd: [16632]: info: rsc:mgraid-stonith:1:12: start
Apr 14 20:35:43 mgraid-S000030311-1 crmd: [16635]: info: te_pseudo_action: Pseudo action 38 fired and confirmed
Apr 14 20:35:43 mgraid-S000030311-1 lrmd: [18690]: debug: stonithd_signon: creating connection
Apr 14 20:35:43 mgraid-S000030311-1 crmd: [16635]: info: te_pseudo_action: Pseudo action 67 fired and confirmed
Apr 14 20:35:43 mgraid-S000030311-1 crmd: [16635]: info: te_pseudo_action: Pseudo action 96 fired and confirmed
Apr 14 20:35:43 mgraid-S000030311-1 lrmd: [18690]: debug: sending out the signon msg.
Apr 14 20:35:43 mgraid-S000030311-1 crmd: [16635]: info: te_pseudo_action: Pseudo action 125 fired and confirmed
Apr 14 20:35:43 mgraid-S000030311-1 stonithd: [16633]: debug: client STONITH_RA_EXEC_18690 (pid=18690) succeeded to signon to stonithd.
Apr 14 20:35:43 mgraid-S000030311-1 crmd: [16635]: debug: run_graph: Transition 3 (Complete=11, Pending=8, Fired=6, Skipped=0, Incomplete=95, Source=/var/lib/pengine/pe-input-6.bz2): In-progress
Apr 14 20:35:43 mgraid-S000030311-1 lrmd: [16632]: info: Managed icms:1:monitor process 18685 exited with return code 0.
Apr 14 20:35:43 mgraid-S000030311-1 lrmd: [18690]: debug: signed on to stonithd.
Apr 14 20:35:43 mgraid-S000030311-1 crmd: [16635]: info: te_rsc_command: Initiating action 33: start SSS000030311:0_start_0 on mgraid-s000030311-0
Apr 14 20:35:43 mgraid-S000030311-1 crmd: [16635]: info: te_rsc_command: Initiating action 36: start SSS000030311:1_start_0 on mgraid-s000030311-1 (local)
Apr 14 20:35:43 mgraid-S000030311-1 lrmd: [18690]: info: Try to start STONITH resource <rsc_id=mgraid-stonith:1> : Device=external/mgpstonith
Apr 14 20:35:43 mgraid-S000030311-1 stonithd: [16633]: debug: client STONITH_RA_EXEC_18690 [pid: 18690] requests a resource operation start on mgraid-stonith:1 (external/mgpstonith)
Apr 14 20:35:43 mgraid-S000030311-1 crmd: [16635]: info: do_lrm_rsc_op: Performing key=36:3:0:469a0e5c-f535-4c85-84b2-fd971ee76592 op=SSS000030311:1_start_0 )
Apr 14 20:35:43 mgraid-S000030311-1 stonithd: [16633]: debug: stonithRA_start: got a shmem seg of size 8192, shmid: 294914
Apr 14 20:35:43 mgraid-S000030311-1 lrmd: [16632]: debug: on_msg_perform_op:2359: copying parameters for rsc SSS000030311:1
Apr 14 20:35:43 mgraid-S000030311-1 lrmd: [16632]: debug: on_msg_perform_op: add an operation operation start[13] on ocf::ss::SSS000030311:1 for client 16635, its parameters: CRM_meta_clone=[1] CRM_meta_notify_stop_uname=[ ] CRM_meta_notify_slave_resource=[ ] CRM_meta_notify_active_resource=[ ] CRM_meta_notify_demote_uname=[ ] CRM_meta_master_node_max=[1] CRM_meta_notify_stop_resource=[ ] ssconf=[/var/omneon/config/config.S000030311] CRM_meta_notify_master_resource=[ ] CRM_meta_clone_node_max=[1] CRM_meta_clone_max=[2] CRM_meta_notify=[true] CRM_meta_notify_start_resource=[SSS000030311:0 SSS0000303 to the operation list.
Apr 14 20:35:43 mgraid-S000030311-1 lrmd: [16632]: info: rsc:SSS000030311:1:13: start
Apr 14 20:35:43 mgraid-S000030311-1 stonithd: [16633]: debug: external_set_config: called.
Apr 14 20:35:43 mgraid-S000030311-1 stonithd: [16633]: debug: external_get_confignames: called.
Apr 14 20:35:43 mgraid-S000030311-1 stonithd: [16633]: debug: external_run_cmd: Calling '/lib64/stonith/plugins/external/mgpstonith getconfignames'
Apr 14 20:35:43 mgraid-S000030311-1 lrmd: [16632]: info: Managed omserver:1:monitor process 18686 exited with return code 0.
Apr 14 20:35:43 mgraid-S000030311-1 lrmd: [18690]: debug: waiting for the stonithRA reply msg.
Apr 14 20:35:43 mgraid-S000030311-1 crmd: [16635]: info: te_rsc_command: Initiating action 62: start SSJ000030313:0_start_0 on mgraid-s000030311-0
Apr 14 20:35:43 mgraid-S000030311-1 crmd: [16635]: info: te_rsc_command: Initiating action 64: start SSJ000030313:1_start_0 on mgraid-s000030311-1 (local)
Apr 14 20:35:43 mgraid-S000030311-1 crmd: [16635]: info: do_lrm_rsc_op: Performing key=64:3:0:469a0e5c-f535-4c85-84b2-fd971ee76592 op=SSJ000030313:1_start_0 )
Apr 14 20:35:43 mgraid-S000030311-1 lrmd: [16632]: debug: on_msg_perform_op:2359: copying parameters for rsc SSJ000030313:1
Apr 14 20:35:43 mgraid-S000030311-1 lrmd: [16632]: debug: on_msg_perform_op: add an operation operation start[14] on ocf::ss::SSJ000030313:1 for client 16635, its parameters: CRM_meta_clone=[1] CRM_meta_notify_stop_uname=[ ] CRM_meta_notify_slave_resource=[ ] CRM_meta_notify_active_resource=[ ] CRM_meta_notify_demote_uname=[ ] CRM_meta_master_node_max=[1] CRM_meta_notify_stop_resource=[ ] ssconf=[/var/omneon/config/config.J000030313] CRM_meta_notify_master_resource=[ ] CRM_meta_clone_node_max=[1] CRM_meta_clone_max=[2] CRM_meta_notify=[true] CRM_meta_notify_start_resource=[SSJ000030313:0 SSJ0000303 to the operation list.
Apr 14 20:35:43 mgraid-S000030311-1 lrmd: [16632]: info: rsc:SSJ000030313:1:14: start
Apr 14 20:35:43 mgraid-S000030311-1 crmd: [16635]: info: te_rsc_command: Initiating action 91: start SSJ000030316:0_start_0 on mgraid-s000030311-0
Apr 14 20:35:43 mgraid-S000030311-1 crmd: [16635]: info: te_rsc_command: Initiating action 94: start SSJ000030316:1_start_0 on mgraid-s000030311-1 (local)
Apr 14 20:35:43 mgraid-S000030311-1 crmd: [16635]: info: do_lrm_rsc_op: Performing key=94:3:0:469a0e5c-f535-4c85-84b2-fd971ee76592 op=SSJ000030316:1_start_0 )
Apr 14 20:35:43 mgraid-S000030311-1 lrmd: [16632]: debug: on_msg_perform_op:2359: copying parameters for rsc SSJ000030316:1
Apr 14 20:35:43 mgraid-S000030311-1 lrmd: [16632]: debug: on_msg_perform_op: add an operation operation start[15] on ocf::ss::SSJ000030316:1 for client 16635, its parameters: CRM_meta_clone=[1] CRM_meta_notify_stop_uname=[ ] CRM_meta_notify_slave_resource=[ ] CRM_meta_notify_active_resource=[ ] CRM_meta_notify_demote_uname=[ ] CRM_meta_master_node_max=[1] CRM_meta_notify_stop_resource=[ ] ssconf=[/var/omneon/config/config.J000030316] CRM_meta_notify_master_resource=[ ] CRM_meta_clone_node_max=[1] CRM_meta_clone_max=[2] CRM_meta_notify=[true] CRM_meta_notify_start_resource=[SSJ000030316:0 SSJ0000303 to the operation list.
Apr 14 20:35:43 mgraid-S000030311-1 lrmd: [16632]: info: rsc:SSJ000030316:1:15: start
Apr 14 20:35:43 mgraid-S000030311-1 crmd: [16635]: info: te_rsc_command: Initiating action 120: start SSJ000030312:0_start_0 on mgraid-s000030311-0
Apr 14 20:35:43 mgraid-S000030311-1 crmd: [16635]: info: te_rsc_command: Initiating action 123: start SSJ000030312:1_start_0 on mgraid-s000030311-1 (local)
Apr 14 20:35:43 mgraid-S000030311-1 crmd: [16635]: info: do_lrm_rsc_op: Performing key=123:3:0:469a0e5c-f535-4c85-84b2-fd971ee76592 op=SSJ000030312:1_start_0 )
Apr 14 20:35:43 mgraid-S000030311-1 lrmd: [16632]: debug: on_msg_perform_op:2359: copying parameters for rsc SSJ000030312:1
Apr 14 20:35:43 mgraid-S000030311-1 lrmd: [16632]: debug: on_msg_perform_op: add an operation operation start[16] on ocf::ss::SSJ000030312:1 for client 16635, its parameters: CRM_meta_clone=[1] CRM_meta_notify_stop_uname=[ ] CRM_meta_notify_slave_resource=[ ] CRM_meta_notify_active_resource=[ ] CRM_meta_notify_demote_uname=[ ] CRM_meta_master_node_max=[1] CRM_meta_notify_stop_resource=[ ] ssconf=[/var/omneon/config/config.J000030312] CRM_meta_notify_master_resource=[ ] CRM_meta_clone_node_max=[1] CRM_meta_clone_max=[2] CRM_meta_notify=[true] CRM_meta_notify_start_resource=[SSJ000030312:0 SSJ0000303 to the operation list.
Apr 14 20:35:43 mgraid-S000030311-1 lrmd: [16632]: info: rsc:SSJ000030312:1:16: start
Apr 14 20:35:43 mgraid-S000030311-1 crmd: [16635]: debug: run_graph: Transition 3 (Complete=15, Pending=16, Fired=8, Skipped=0, Incomplete=87, Source=/var/lib/pengine/pe-input-6.bz2): In-progress
Apr 14 20:35:43 mgraid-S000030311-1 crmd: [16635]: info: process_lrm_event: LRM operation icms:1_monitor_5000 (call=9, rc=0, cib-update=155, confirmed=false) ok
Apr 14 20:35:43 mgraid-S000030311-1 crmd: [16635]: info: process_lrm_event: LRM operation omserver:1_monitor_5000 (call=10, rc=0, cib-update=156, confirmed=false) ok
Apr 14 20:35:43 mgraid-S000030311-1 crmd: [16635]: debug: te_update_diff: Processing diff (cib_modify): 0.66.6 -> 0.66.7 (S_TRANSITION_ENGINE)
Apr 14 20:35:43 mgraid-S000030311-1 crmd: [16635]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify/cib_update_result/diff
Apr 14 20:35:43 mgraid-S000030311-1 crmd: [16635]: info: match_graph_event: Action icms:1_monitor_5000 (18) confirmed on mgraid-s000030311-1 (rc=0)
Apr 14 20:35:43 mgraid-S000030311-1 crmd: [16635]: debug: te_update_diff: Processing diff (cib_modify): 0.66.7 -> 0.66.8 (S_TRANSITION_ENGINE)
Apr 14 20:35:43 mgraid-S000030311-1 crmd: [16635]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify/cib_update_result/diff
Apr 14 20:35:43 mgraid-S000030311-1 crmd: [16635]: info: match_graph_event: Action omserver:1_monitor_5000 (28) confirmed on mgraid-s000030311-1 (rc=0)
Apr 14 20:35:43 mgraid-S000030311-1 crmd: [16635]: debug: run_graph: Transition 3 (Complete=17, Pending=14, Fired=0, Skipped=0, Incomplete=87, Source=/var/lib/pengine/pe-input-6.bz2): In-progress
Apr 14 20:35:43 mgraid-S000030311-1 stonithd: [16633]: debug: external_run_cmd: '/lib64/stonith/plugins/external/mgpstonith getconfignames' output: hostlist
Apr 14 20:35:43 mgraid-S000030311-1 stonithd: [16633]: debug: external_get_confignames: 'mgpstonith getconfignames' returned 0
Apr 14 20:35:43 mgraid-S000030311-1 stonithd: [16633]: debug: plugin output: hostlist
Apr 14 20:35:43 mgraid-S000030311-1 stonithd: [16633]: debug: external_get_confignames: mgpstonith configname hostlist
Apr 14 20:35:43 mgraid-S000030311-1 lrmd: [18690]: debug: a stonith RA operation queue to run, call_id=18777.
Apr 14 20:35:43 mgraid-S000030311-1 lrmd: [18690]: debug: stonithd_receive_ops_result: begin
Apr 14 20:35:43 mgraid-S000030311-1 stonithd: [18777]: debug: external_status: called.
Apr 14 20:35:43 mgraid-S000030311-1 stonithd: [18777]: debug: external_run_cmd: Calling '/lib64/stonith/plugins/external/mgpstonith status'
ss[18689]:	2011/04/14_20:35:43 DEBUG: ss_status() START SSJ000030314
ss[18689]:	2011/04/14_20:35:43 DEBUG: ss_set_status_variables() START SSJ000030314
Apr 14 20:35:43 mgraid-S000030311-1 crm_resource: [18813]: info: Invoked: crm_resource --meta -r ms-SSS000030311 -g STOPBOTH 
Apr 14 20:35:43 mgraid-S000030311-1 crm_resource: [18813]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/cib_rw
Apr 14 20:35:43 mgraid-S000030311-1 crm_resource: [18813]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/cib_callback
Apr 14 20:35:43 mgraid-S000030311-1 crm_resource: [18813]: debug: cib_native_signon_raw: Connection to CIB successful
Apr 14 20:35:43 mgraid-S000030311-1 crm_resource: [18832]: info: Invoked: crm_resource --meta -r ms-SSJ000030312 -g STOPBOTH 
Apr 14 20:35:43 mgraid-S000030311-1 crm_resource: [18832]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/cib_rw
ss[18689]:	2011/04/14_20:35:43 DEBUG: ss_set_status_variables() ssadm return is 1
Apr 14 20:35:43 mgraid-S000030311-1 crm_resource: [18833]: info: Invoked: crm_resource --meta -r ms-SSJ000030316 -g STOPBOTH 
Apr 14 20:35:43 mgraid-S000030311-1 crm_resource: [18834]: info: Invoked: crm_resource --meta -r ms-SSJ000030313 -g STOPBOTH 
Apr 14 20:35:43 mgraid-S000030311-1 crm_resource: [18833]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/cib_rw
Apr 14 20:35:43 mgraid-S000030311-1 crm_resource: [18832]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/cib_callback
Apr 14 20:35:43 mgraid-S000030311-1 crm_resource: [18833]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/cib_callback
Apr 14 20:35:43 mgraid-S000030311-1 crm_resource: [18833]: debug: cib_native_signon_raw: Connection to CIB successful
ss[18689]:	2011/04/14_20:35:43 DEBUG: SSJ000030314: Calling /usr/bin/pgrep -f /opt/omneon/bin/SliceServer -d -c/var/omneon/config/config.J000030314 -i1
Apr 14 20:35:43 mgraid-S000030311-1 crm_resource: [18832]: debug: cib_native_signon_raw: Connection to CIB successful
Apr 14 20:35:43 mgraid-S000030311-1 crm_resource: [18834]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/cib_rw
Apr 14 20:35:43 mgraid-S000030311-1 crm_resource: [18813]: debug: cluster_option: Using default value 'stop' for cluster option 'no-quorum-policy'
Apr 14 20:35:43 mgraid-S000030311-1 crm_resource: [18813]: debug: cluster_option: Using default value 'true' for cluster option 'symmetric-cluster'
Apr 14 20:35:43 mgraid-S000030311-1 crm_resource: [18813]: debug: cluster_option: Using default value '0' for cluster option 'default-resource-stickiness'
Apr 14 20:35:43 mgraid-S000030311-1 crm_resource: [18813]: debug: cluster_option: Using default value 'true' for cluster option 'is-managed-default'
Apr 14 20:35:43 mgraid-S000030311-1 crm_resource: [18813]: debug: cluster_option: Using default value 'false' for cluster option 'maintenance-mode'
Apr 14 20:35:43 mgraid-S000030311-1 crm_resource: [18813]: debug: cluster_option: Using default value 'true' for cluster option 'start-failure-is-fatal'
Apr 14 20:35:43 mgraid-S000030311-1 crm_resource: [18813]: debug: cluster_option: Using default value 'reboot' for cluster option 'stonith-action'
Apr 14 20:35:43 mgraid-S000030311-1 crm_resource: [18813]: debug: cluster_option: Using default value '60s' for cluster option 'stonith-timeout'
Apr 14 20:35:43 mgraid-S000030311-1 crm_resource: [18813]: debug: cluster_option: Using default value 'true' for cluster option 'startup-fencing'
Apr 14 20:35:43 mgraid-S000030311-1 crm_resource: [18813]: debug: cluster_option: Using default value '60s' for cluster option 'cluster-delay'
Apr 14 20:35:43 mgraid-S000030311-1 crm_resource: [18834]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/cib_callback
Apr 14 20:35:43 mgraid-S000030311-1 crm_resource: [18813]: debug: cluster_option: Using default value '30' for cluster option 'batch-limit'
Apr 14 20:35:43 mgraid-S000030311-1 crm_resource: [18813]: debug: cluster_option: Using default value '20s' for cluster option 'default-action-timeout'
Apr 14 20:35:43 mgraid-S000030311-1 crm_resource: [18813]: debug: cluster_option: Using default value 'false' for cluster option 'stop-all-resources'
Apr 14 20:35:43 mgraid-S000030311-1 crm_resource: [18813]: debug: cluster_option: Using default value 'true' for cluster option 'stop-orphan-resources'
Apr 14 20:35:43 mgraid-S000030311-1 stonithd: [18777]: debug: external_status: running 'mgpstonith status' returned 0
Apr 14 20:35:43 mgraid-S000030311-1 crm_resource: [18834]: debug: cib_native_signon_raw: Connection to CIB successful
Apr 14 20:35:43 mgraid-S000030311-1 stonithd: [18777]: debug: external_hostlist: called.
Apr 14 20:35:43 mgraid-S000030311-1 crm_resource: [18813]: debug: cluster_option: Using default value 'true' for cluster option 'stop-orphan-actions'
Apr 14 20:35:43 mgraid-S000030311-1 crm_resource: [18813]: debug: cluster_option: Using default value 'false' for cluster option 'remove-after-stop'
Apr 14 20:35:43 mgraid-S000030311-1 stonithd: [18777]: debug: external_run_cmd: Calling '/lib64/stonith/plugins/external/mgpstonith gethosts'
Apr 14 20:35:43 mgraid-S000030311-1 crm_resource: [18813]: debug: cluster_option: Using default value '-1' for cluster option 'pe-error-series-max'
Apr 14 20:35:43 mgraid-S000030311-1 crm_resource: [18813]: debug: cluster_option: Using default value '-1' for cluster option 'pe-warn-series-max'
Apr 14 20:35:43 mgraid-S000030311-1 crm_resource: [18813]: debug: cluster_option: Using default value '-1' for cluster option 'pe-input-series-max'
Apr 14 20:35:43 mgraid-S000030311-1 crm_resource: [18813]: debug: cluster_option: Using default value 'none' for cluster option 'node-health-strategy'
Apr 14 20:35:43 mgraid-S000030311-1 crm_resource: [18813]: debug: cluster_option: Using default value '0' for cluster option 'node-health-green'
Apr 14 20:35:43 mgraid-S000030311-1 crm_resource: [18813]: debug: cluster_option: Using default value '0' for cluster option 'node-health-yellow'
Apr 14 20:35:43 mgraid-S000030311-1 crm_resource: [18813]: debug: cluster_option: Using default value '-INFINITY' for cluster option 'node-health-red'
Apr 14 20:35:43 mgraid-S000030311-1 crm_resource: [18813]: debug: unpack_config: STONITH timeout: 60000
Apr 14 20:35:43 mgraid-S000030311-1 crm_resource: [18813]: debug: unpack_config: STONITH of failed nodes is enabled
Apr 14 20:35:43 mgraid-S000030311-1 crm_resource: [18813]: debug: unpack_config: Stop all active resources: false
Apr 14 20:35:43 mgraid-S000030311-1 crm_resource: [18813]: debug: unpack_config: Cluster is symmetric - resources can run anywhere by default
Apr 14 20:35:43 mgraid-S000030311-1 crm_resource: [18813]: debug: unpack_config: Default stickiness: 0
Apr 14 20:35:43 mgraid-S000030311-1 crm_resource: [18813]: debug: unpack_config: On loss of CCM Quorum: Stop ALL resources
Apr 14 20:35:43 mgraid-S000030311-1 crm_resource: [18813]: info: unpack_config: Node scores: 'red' = -INFINITY, 'yellow' = 0, 'green' = 0
Apr 14 20:35:43 mgraid-S000030311-1 crm_resource: [18813]: info: determine_online_status: Node mgraid-s000030311-0 is online
ss[18689]:	2011/04/14_20:35:43 ERROR: SSJ000030314: Called /usr/bin/pgrep -f /opt/omneon/bin/SliceServer -d -c/var/omneon/config/config.J000030314 -i1
Apr 14 20:35:43 mgraid-S000030311-1 crm_resource: [18813]: debug: unpack_rsc_op: icms:0_monitor_0 on mgraid-s000030311-0 returned 0 (ok) instead of the expected value: 7 (not running)
Apr 14 20:35:43 mgraid-S000030311-1 crm_resource: [18813]: notice: unpack_rsc_op: Operation icms:0_monitor_0 found resource icms:0 active on mgraid-s000030311-0
Apr 14 20:35:43 mgraid-S000030311-1 crm_resource: [18813]: debug: unpack_rsc_op: omserver:0_monitor_0 on mgraid-s000030311-0 returned 0 (ok) instead of the expected value: 7 (not running)
Apr 14 20:35:43 mgraid-S000030311-1 crm_resource: [18813]: notice: unpack_rsc_op: Operation omserver:0_monitor_0 found resource omserver:0 active on mgraid-s000030311-0
Apr 14 20:35:43 mgraid-S000030311-1 crm_resource: [18832]: debug: cluster_option: Using default value 'stop' for cluster option 'no-quorum-policy'
Apr 14 20:35:43 mgraid-S000030311-1 crm_resource: [18832]: debug: cluster_option: Using default value 'true' for cluster option 'symmetric-cluster'
Apr 14 20:35:43 mgraid-S000030311-1 crm_resource: [18813]: info: determine_online_status: Node mgraid-s000030311-1 is online
Apr 14 20:35:43 mgraid-S000030311-1 crm_resource: [18832]: debug: cluster_option: Using default value '0' for cluster option 'default-resource-stickiness'
Apr 14 20:35:43 mgraid-S000030311-1 crm_resource: [18832]: debug: cluster_option: Using default value 'true' for cluster option 'is-managed-default'
Apr 14 20:35:43 mgraid-S000030311-1 crm_resource: [18832]: debug: cluster_option: Using default value 'false' for cluster option 'maintenance-mode'
Apr 14 20:35:43 mgraid-S000030311-1 crm_resource: [18832]: debug: cluster_option: Using default value 'true' for cluster option 'start-failure-is-fatal'
Apr 14 20:35:43 mgraid-S000030311-1 crm_resource: [18833]: debug: cluster_option: Using default value 'stop' for cluster option 'no-quorum-policy'
Apr 14 20:35:43 mgraid-S000030311-1 crm_resource: [18813]: debug: unpack_rsc_op: omserver:1_monitor_0 on mgraid-s000030311-1 returned 0 (ok) instead of the expected value: 7 (not running)
Apr 14 20:35:43 mgraid-S000030311-1 crm_resource: [18833]: debug: cluster_option: Using default value 'true' for cluster option 'symmetric-cluster'
Apr 14 20:35:43 mgraid-S000030311-1 crm_resource: [18832]: debug: cluster_option: Using default value 'reboot' for cluster option 'stonith-action'
Apr 14 20:35:43 mgraid-S000030311-1 crm_resource: [18832]: debug: cluster_option: Using default value '60s' for cluster option 'stonith-timeout'
Apr 14 20:35:43 mgraid-S000030311-1 crm_resource: [18833]: debug: cluster_option: Using default value '0' for cluster option 'default-resource-stickiness'
Apr 14 20:35:43 mgraid-S000030311-1 crm_resource: [18813]: notice: unpack_rsc_op: Operation omserver:1_monitor_0 found resource omserver:1 active on mgraid-s000030311-1
Apr 14 20:35:43 mgraid-S000030311-1 crm_resource: [18833]: debug: cluster_option: Using default value 'true' for cluster option 'is-managed-default'
Apr 14 20:35:43 mgraid-S000030311-1 crm_resource: [18832]: debug: cluster_option: Using default value 'true' for cluster option 'startup-fencing'
Apr 14 20:35:43 mgraid-S000030311-1 crm_resource: [18832]: debug: cluster_option: Using default value '60s' for cluster option 'cluster-delay'
Apr 14 20:35:43 mgraid-S000030311-1 crm_resource: [18832]: debug: cluster_option: Using default value '30' for cluster option 'batch-limit'
Apr 14 20:35:43 mgraid-S000030311-1 crm_resource: [18833]: debug: cluster_option: Using default value 'false' for cluster option 'maintenance-mode'
Apr 14 20:35:43 mgraid-S000030311-1 crm_resource: [18813]: debug: unpack_rsc_op: icms:1_monitor_0 on mgraid-s000030311-1 returned 0 (ok) instead of the expected value: 7 (not running)
Apr 14 20:35:43 mgraid-S000030311-1 crm_resource: [18832]: debug: cluster_option: Using default value '20s' for cluster option 'default-action-timeout'
Apr 14 20:35:43 mgraid-S000030311-1 crm_resource: [18813]: notice: unpack_rsc_op: Operation icms:1_monitor_0 found resource icms:1 active on mgraid-s000030311-1
Apr 14 20:35:43 mgraid-S000030311-1 crm_resource: [18832]: debug: cluster_option: Using default value 'false' for cluster option 'stop-all-resources'
Apr 14 20:35:43 mgraid-S000030311-1 crm_resource: [18833]: debug: cluster_option: Using default value 'true' for cluster option 'start-failure-is-fatal'
Apr 14 20:35:43 mgraid-S000030311-1 crm_resource: [18832]: debug: cluster_option: Using default value 'true' for cluster option 'stop-orphan-resources'
Apr 14 20:35:43 mgraid-S000030311-1 crm_resource: [18832]: debug: cluster_option: Using default value 'true' for cluster option 'stop-orphan-actions'
Apr 14 20:35:43 mgraid-S000030311-1 crm_resource: [18813]: debug: dump_resource_attr: Looking up STOPBOTH in ms-SSS000030311
Apr 14 20:35:43 mgraid-S000030311-1 crm_resource: [18832]: debug: cluster_option: Using default value 'false' for cluster option 'remove-after-stop'
Apr 14 20:35:43 mgraid-S000030311-1 crm_resource: [18833]: debug: cluster_option: Using default value 'reboot' for cluster option 'stonith-action'
Apr 14 20:35:43 mgraid-S000030311-1 crm_resource: [18832]: debug: cluster_option: Using default value '-1' for cluster option 'pe-error-series-max'
Apr 14 20:35:43 mgraid-S000030311-1 crm_resource: [18833]: debug: cluster_option: Using default value '60s' for cluster option 'stonith-timeout'
Apr 14 20:35:43 mgraid-S000030311-1 crm_resource: [18832]: debug: cluster_option: Using default value '-1' for cluster option 'pe-warn-series-max'
Apr 14 20:35:43 mgraid-S000030311-1 crm_resource: [18832]: debug: cluster_option: Using default value '-1' for cluster option 'pe-input-series-max'
Apr 14 20:35:43 mgraid-S000030311-1 crm_resource: [18833]: debug: cluster_option: Using default value 'true' for cluster option 'startup-fencing'
Apr 14 20:35:43 mgraid-S000030311-1 crm_resource: [18832]: debug: cluster_option: Using default value 'none' for cluster option 'node-health-strategy'
Apr 14 20:35:43 mgraid-S000030311-1 crm_resource: [18832]: debug: cluster_option: Using default value '0' for cluster option 'node-health-green'
Apr 14 20:35:43 mgraid-S000030311-1 crm_resource: [18833]: debug: cluster_option: Using default value '60s' for cluster option 'cluster-delay'
Apr 14 20:35:43 mgraid-S000030311-1 crm_resource: [18813]: debug: cib_native_signoff: Signing out of the CIB Service
Apr 14 20:35:43 mgraid-S000030311-1 crm_resource: [18832]: debug: cluster_option: Using default value '0' for cluster option 'node-health-yellow'
Apr 14 20:35:43 mgraid-S000030311-1 crm_resource: [18833]: debug: cluster_option: Using default value '30' for cluster option 'batch-limit'
Apr 14 20:35:43 mgraid-S000030311-1 crm_resource: [18813]: WARN: main: Error performing operation: The object/attribute does not exist
Apr 14 20:35:43 mgraid-S000030311-1 crm_resource: [18832]: debug: cluster_option: Using default value '-INFINITY' for cluster option 'node-health-red'
Apr 14 20:35:43 mgraid-S000030311-1 crm_resource: [18833]: debug: cluster_option: Using default value '20s' for cluster option 'default-action-timeout'
Apr 14 20:35:43 mgraid-S000030311-1 crm_resource: [18832]: debug: unpack_config: STONITH timeout: 60000
Apr 14 20:35:43 mgraid-S000030311-1 crm_resource: [18832]: debug: unpack_config: STONITH of failed nodes is enabled
Apr 14 20:35:43 mgraid-S000030311-1 crm_resource: [18833]: debug: cluster_option: Using default value 'false' for cluster option 'stop-all-resources'
Apr 14 20:35:43 mgraid-S000030311-1 crm_resource: [18832]: debug: unpack_config: Stop all active resources: false
Apr 14 20:35:43 mgraid-S000030311-1 crm_resource: [18833]: debug: cluster_option: Using default value 'true' for cluster option 'stop-orphan-resources'
Apr 14 20:35:43 mgraid-S000030311-1 crm_resource: [18832]: debug: unpack_config: Cluster is symmetric - resources can run anywhere by default
Apr 14 20:35:43 mgraid-S000030311-1 crm_resource: [18833]: debug: cluster_option: Using default value 'true' for cluster option 'stop-orphan-actions'
Apr 14 20:35:43 mgraid-S000030311-1 crm_resource: [18832]: debug: unpack_config: Default stickiness: 0
Apr 14 20:35:43 mgraid-S000030311-1 crm_resource: [18833]: debug: cluster_option: Using default value 'false' for cluster option 'remove-after-stop'
Apr 14 20:35:43 mgraid-S000030311-1 crm_resource: [18832]: debug: unpack_config: On loss of CCM Quorum: Stop ALL resources
Apr 14 20:35:43 mgraid-S000030311-1 crm_resource: [18832]: info: unpack_config: Node scores: 'red' = -INFINITY, 'yellow' = 0, 'green' = 0
Apr 14 20:35:43 mgraid-S000030311-1 crm_resource: [18833]: debug: cluster_option: Using default value '-1' for cluster option 'pe-error-series-max'
Apr 14 20:35:43 mgraid-S000030311-1 crm_resource: [18833]: debug: cluster_option: Using default value '-1' for cluster option 'pe-warn-series-max'
Apr 14 20:35:43 mgraid-S000030311-1 crm_resource: [18833]: debug: cluster_option: Using default value '-1' for cluster option 'pe-input-series-max'
Apr 14 20:35:43 mgraid-S000030311-1 crm_resource: [18833]: debug: cluster_option: Using default value 'none' for cluster option 'node-health-strategy'
Apr 14 20:35:43 mgraid-S000030311-1 crm_resource: [18833]: debug: cluster_option: Using default value '0' for cluster option 'node-health-green'
Apr 14 20:35:43 mgraid-S000030311-1 crm_resource: [18833]: debug: cluster_option: Using default value '0' for cluster option 'node-health-yellow'
Apr 14 20:35:43 mgraid-S000030311-1 crm_resource: [18833]: debug: cluster_option: Using default value '-INFINITY' for cluster option 'node-health-red'
Apr 14 20:35:43 mgraid-S000030311-1 crm_resource: [18833]: debug: unpack_config: STONITH timeout: 60000
Apr 14 20:35:43 mgraid-S000030311-1 crm_resource: [18833]: debug: unpack_config: STONITH of failed nodes is enabled
ss[18689]:	2011/04/14_20:35:43 ERROR: SSJ000030314: Exit code 1
Apr 14 20:35:43 mgraid-S000030311-1 crm_resource: [18833]: debug: unpack_config: Stop all active resources: false
Apr 14 20:35:43 mgraid-S000030311-1 crm_resource: [18832]: info: determine_online_status: Node mgraid-s000030311-0 is online
Apr 14 20:35:43 mgraid-S000030311-1 crm_resource: [18833]: debug: unpack_config: Cluster is symmetric - resources can run anywhere by default
Apr 14 20:35:43 mgraid-S000030311-1 crm_resource: [18833]: debug: unpack_config: Default stickiness: 0
Apr 14 20:35:43 mgraid-S000030311-1 crm_resource: [18832]: debug: unpack_rsc_op: icms:0_monitor_0 on mgraid-s000030311-0 returned 0 (ok) instead of the expected value: 7 (not running)
Apr 14 20:35:43 mgraid-S000030311-1 crm_resource: [18833]: debug: unpack_config: On loss of CCM Quorum: Stop ALL resources
Apr 14 20:35:43 mgraid-S000030311-1 crm_resource: [18832]: notice: unpack_rsc_op: Operation icms:0_monitor_0 found resource icms:0 active on mgraid-s000030311-0
Apr 14 20:35:43 mgraid-S000030311-1 crm_resource: [18833]: info: unpack_config: Node scores: 'red' = -INFINITY, 'yellow' = 0, 'green' = 0
Apr 14 20:35:43 mgraid-S000030311-1 crm_resource: [18832]: debug: unpack_rsc_op: omserver:0_monitor_0 on mgraid-s000030311-0 returned 0 (ok) instead of the expected value: 7 (not running)
Apr 14 20:35:43 mgraid-S000030311-1 crm_resource: [18832]: notice: unpack_rsc_op: Operation omserver:0_monitor_0 found resource omserver:0 active on mgraid-s000030311-0
Apr 14 20:35:43 mgraid-S000030311-1 crm_resource: [18832]: info: determine_online_status: Node mgraid-s000030311-1 is online
Apr 14 20:35:43 mgraid-S000030311-1 crm_resource: [18832]: debug: unpack_rsc_op: omserver:1_monitor_0 on mgraid-s000030311-1 returned 0 (ok) instead of the expected value: 7 (not running)
Apr 14 20:35:43 mgraid-S000030311-1 crm_resource: [18832]: notice: unpack_rsc_op: Operation omserver:1_monitor_0 found resource omserver:1 active on mgraid-s000030311-1
Apr 14 20:35:43 mgraid-S000030311-1 crm_resource: [18832]: debug: unpack_rsc_op: icms:1_monitor_0 on mgraid-s000030311-1 returned 0 (ok) instead of the expected value: 7 (not running)
Apr 14 20:35:43 mgraid-S000030311-1 crm_resource: [18832]: notice: unpack_rsc_op: Operation icms:1_monitor_0 found resource icms:1 active on mgraid-s000030311-1
Apr 14 20:35:43 mgraid-S000030311-1 crm_resource: [18832]: debug: dump_resource_attr: Looking up STOPBOTH in ms-SSJ000030312
Apr 14 20:35:43 mgraid-S000030311-1 crm_resource: [18833]: info: determine_online_status: Node mgraid-s000030311-0 is online
Apr 14 20:35:43 mgraid-S000030311-1 crm_resource: [18833]: debug: unpack_rsc_op: icms:0_monitor_0 on mgraid-s000030311-0 returned 0 (ok) instead of the expected value: 7 (not running)
Apr 14 20:35:43 mgraid-S000030311-1 crm_resource: [18832]: debug: cib_native_signoff: Signing out of the CIB Service
Apr 14 20:35:43 mgraid-S000030311-1 crm_resource: [18833]: notice: unpack_rsc_op: Operation icms:0_monitor_0 found resource icms:0 active on mgraid-s000030311-0
Apr 14 20:35:43 mgraid-S000030311-1 crm_resource: [18832]: WARN: main: Error performing operation: The object/attribute does not exist

Apr 14 20:35:43 mgraid-S000030311-1 crm_resource: [18833]: debug: unpack_rsc_op: omserver:0_monitor_0 on mgraid-s000030311-0 returned 0 (ok) instead of the expected value: 7 (not running)
Apr 14 20:35:43 mgraid-S000030311-1 crm_resource: [18834]: debug: cluster_option: Using default value 'stop' for cluster option 'no-quorum-policy'
Apr 14 20:35:43 mgraid-S000030311-1 crm_resource: [18833]: notice: unpack_rsc_op: Operation omserver:0_monitor_0 found resource omserver:0 active on mgraid-s000030311-0
Apr 14 20:35:43 mgraid-S000030311-1 crm_resource: [18834]: debug: cluster_option: Using default value 'true' for cluster option 'symmetric-cluster'
Apr 14 20:35:43 mgraid-S000030311-1 crm_resource: [18834]: debug: cluster_option: Using default value '0' for cluster option 'default-resource-stickiness'
Apr 14 20:35:43 mgraid-S000030311-1 crm_resource: [18833]: info: determine_online_status: Node mgraid-s000030311-1 is online
Apr 14 20:35:43 mgraid-S000030311-1 crm_resource: [18834]: debug: cluster_option: Using default value 'true' for cluster option 'is-managed-default'
Apr 14 20:35:43 mgraid-S000030311-1 crm_resource: [18834]: debug: cluster_option: Using default value 'false' for cluster option 'maintenance-mode'
Apr 14 20:35:43 mgraid-S000030311-1 crm_resource: [18834]: debug: cluster_option: Using default value 'true' for cluster option 'start-failure-is-fatal'
Apr 14 20:35:43 mgraid-S000030311-1 crm_resource: [18833]: debug: unpack_rsc_op: omserver:1_monitor_0 on mgraid-s000030311-1 returned 0 (ok) instead of the expected value: 7 (not running)
Apr 14 20:35:43 mgraid-S000030311-1 crm_resource: [18833]: notice: unpack_rsc_op: Operation omserver:1_monitor_0 found resource omserver:1 active on mgraid-s000030311-1
Apr 14 20:35:43 mgraid-S000030311-1 crm_resource: [18834]: debug: cluster_option: Using default value 'reboot' for cluster option 'stonith-action'
Apr 14 20:35:43 mgraid-S000030311-1 crm_resource: [18834]: debug: cluster_option: Using default value '60s' for cluster option 'stonith-timeout'
Apr 14 20:35:43 mgraid-S000030311-1 crm_resource: [18833]: debug: unpack_rsc_op: icms:1_monitor_0 on mgraid-s000030311-1 returned 0 (ok) instead of the expected value: 7 (not running)
Apr 14 20:35:43 mgraid-S000030311-1 crm_resource: [18833]: notice: unpack_rsc_op: Operation icms:1_monitor_0 found resource icms:1 active on mgraid-s000030311-1
Apr 14 20:35:43 mgraid-S000030311-1 crm_resource: [18834]: debug: cluster_option: Using default value 'true' for cluster option 'startup-fencing'
ss[18693]:	2011/04/14_20:35:43 INFO: ss_start() START
Apr 14 20:35:43 mgraid-S000030311-1 crm_resource: [18834]: debug: cluster_option: Using default value '60s' for cluster option 'cluster-delay'
Apr 14 20:35:43 mgraid-S000030311-1 crm_resource: [18833]: debug: dump_resource_attr: Looking up STOPBOTH in ms-SSJ000030316
Apr 14 20:35:43 mgraid-S000030311-1 crm_resource: [18834]: debug: cluster_option: Using default value '30' for cluster option 'batch-limit'
Apr 14 20:35:43 mgraid-S000030311-1 crm_resource: [18834]: debug: cluster_option: Using default value '20s' for cluster option 'default-action-timeout'
Apr 14 20:35:43 mgraid-S000030311-1 crm_resource: [18834]: debug: cluster_option: Using default value 'false' for cluster option 'stop-all-resources'
Apr 14 20:35:43 mgraid-S000030311-1 crm_resource: [18834]: debug: cluster_option: Using default value 'true' for cluster option 'stop-orphan-resources'
Apr 14 20:35:43 mgraid-S000030311-1 crm_resource: [18834]: debug: cluster_option: Using default value 'true' for cluster option 'stop-orphan-actions'
Apr 14 20:35:43 mgraid-S000030311-1 crm_resource: [18834]: debug: cluster_option: Using default value 'false' for cluster option 'remove-after-stop'
Apr 14 20:35:43 mgraid-S000030311-1 crm_resource: [18833]: debug: cib_native_signoff: Signing out of the CIB Service
Apr 14 20:35:43 mgraid-S000030311-1 crm_resource: [18834]: debug: cluster_option: Using default value '-1' for cluster option 'pe-error-series-max'
Apr 14 20:35:43 mgraid-S000030311-1 crm_resource: [18833]: WARN: main: Error performing operation: The object/attribute does not exist

Apr 14 20:35:43 mgraid-S000030311-1 crm_resource: [18834]: debug: cluster_option: Using default value '-1' for cluster option 'pe-warn-series-max'
Apr 14 20:35:43 mgraid-S000030311-1 crm_resource: [18834]: debug: cluster_option: Using default value '-1' for cluster option 'pe-input-series-max'
Apr 14 20:35:43 mgraid-S000030311-1 crm_resource: [18834]: debug: cluster_option: Using default value 'none' for cluster option 'node-health-strategy'
ss[18689]:	2011/04/14_20:35:43 ERROR: SSJ000030314: Command output: 
Apr 14 20:35:43 mgraid-S000030311-1 crm_resource: [18834]: debug: cluster_option: Using default value '0' for cluster option 'node-health-green'
Apr 14 20:35:43 mgraid-S000030311-1 crm_resource: [18834]: debug: cluster_option: Using default value '0' for cluster option 'node-health-yellow'
Apr 14 20:35:43 mgraid-S000030311-1 lrmd: [16632]: debug: RA output: (SSJ000030314:1:monitor:stdout) 

Apr 14 20:35:43 mgraid-S000030311-1 crm_resource: [18834]: debug: cluster_option: Using default value '-INFINITY' for cluster option 'node-health-red'
Apr 14 20:35:43 mgraid-S000030311-1 crm_resource: [18834]: debug: unpack_config: STONITH timeout: 60000
Apr 14 20:35:43 mgraid-S000030311-1 crm_resource: [18834]: debug: unpack_config: STONITH of failed nodes is enabled
Apr 14 20:35:43 mgraid-S000030311-1 crm_resource: [18834]: debug: unpack_config: Stop all active resources: false
Apr 14 20:35:43 mgraid-S000030311-1 crm_resource: [18834]: debug: unpack_config: Cluster is symmetric - resources can run anywhere by default
Apr 14 20:35:43 mgraid-S000030311-1 crm_resource: [18834]: debug: unpack_config: Default stickiness: 0
Apr 14 20:35:43 mgraid-S000030311-1 crm_resource: [18834]: debug: unpack_config: On loss of CCM Quorum: Stop ALL resources
Apr 14 20:35:43 mgraid-S000030311-1 crm_resource: [18834]: info: unpack_config: Node scores: 'red' = -INFINITY, 'yellow' = 0, 'green' = 0
Apr 14 20:35:43 mgraid-S000030311-1 crm_resource: [18834]: info: determine_online_status: Node mgraid-s000030311-0 is online
Apr 14 20:35:43 mgraid-S000030311-1 crm_resource: [18834]: debug: unpack_rsc_op: icms:0_monitor_0 on mgraid-s000030311-0 returned 0 (ok) instead of the expected value: 7 (not running)
Apr 14 20:35:43 mgraid-S000030311-1 crm_resource: [18834]: notice: unpack_rsc_op: Operation icms:0_monitor_0 found resource icms:0 active on mgraid-s000030311-0
Apr 14 20:35:43 mgraid-S000030311-1 crm_resource: [18834]: debug: unpack_rsc_op: omserver:0_monitor_0 on mgraid-s000030311-0 returned 0 (ok) instead of the expected value: 7 (not running)
Apr 14 20:35:43 mgraid-S000030311-1 crm_resource: [18834]: notice: unpack_rsc_op: Operation omserver:0_monitor_0 found resource omserver:0 active on mgraid-s000030311-0
Apr 14 20:35:43 mgraid-S000030311-1 crm_resource: [18834]: info: determine_online_status: Node mgraid-s000030311-1 is online
Apr 14 20:35:43 mgraid-S000030311-1 crm_resource: [18834]: debug: unpack_rsc_op: omserver:1_monitor_0 on mgraid-s000030311-1 returned 0 (ok) instead of the expected value: 7 (not running)
Apr 14 20:35:43 mgraid-S000030311-1 crm_resource: [18834]: notice: unpack_rsc_op: Operation omserver:1_monitor_0 found resource omserver:1 active on mgraid-s000030311-1
ss[18703]:	2011/04/14_20:35:43 INFO: ss_start() START
Apr 14 20:35:43 mgraid-S000030311-1 crm_resource: [18834]: debug: unpack_rsc_op: icms:1_monitor_0 on mgraid-s000030311-1 returned 0 (ok) instead of the expected value: 7 (not running)
Apr 14 20:35:43 mgraid-S000030311-1 crm_resource: [18834]: notice: unpack_rsc_op: Operation icms:1_monitor_0 found resource icms:1 active on mgraid-s000030311-1
ss[18693]:	2011/04/14_20:35:43 DEBUG: ss_status() START SSS000030311
Apr 14 20:35:43 mgraid-S000030311-1 crm_resource: [18834]: debug: dump_resource_attr: Looking up STOPBOTH in ms-SSJ000030313
Apr 14 20:35:43 mgraid-S000030311-1 crm_resource: [18834]: debug: cib_native_signoff: Signing out of the CIB Service
Apr 14 20:35:43 mgraid-S000030311-1 crm_resource: [18834]: WARN: main: Error performing operation: The object/attribute does not exist

ss[18689]:	2011/04/14_20:35:43 DEBUG: ss_set_status_variables() END SSJ000030314   - Unconfigured
ss[18700]:	2011/04/14_20:35:43 INFO: ss_start() START
ss[18689]:	2011/04/14_20:35:43 DEBUG: ss_status() returning 7
ss[18703]:	2011/04/14_20:35:43 DEBUG: ss_status() START SSJ000030312
ss[18693]:	2011/04/14_20:35:43 DEBUG: ss_set_status_variables() START SSS000030311
ss[18700]:	2011/04/14_20:35:43 DEBUG: ss_status() START SSJ000030316
ss[18689]:	2011/04/14_20:35:43 DEBUG: SSJ000030314: Calling //sbin/crm_master -l reboot -D
ss[18695]:	2011/04/14_20:35:43 INFO: ss_start() START
ss[18703]:	2011/04/14_20:35:43 DEBUG: ss_set_status_variables() START SSJ000030312
ss[18693]:	2011/04/14_20:35:43 DEBUG: ss_set_status_variables() ssadm return is 1
ss[18700]:	2011/04/14_20:35:43 DEBUG: ss_set_status_variables() START SSJ000030316
Apr 14 20:35:43 mgraid-S000030311-1 stonithd: [18777]: debug: external_run_cmd: '/lib64/stonith/plugins/external/mgpstonith gethosts' output: mgraid-S000030311-0

Apr 14 20:35:43 mgraid-S000030311-1 stonithd: [18777]: debug: external_hostlist: running 'mgpstonith gethosts' returned 0
Apr 14 20:35:43 mgraid-S000030311-1 stonithd: [18777]: debug: external_hostlist: mgpstonith host mgraid-S000030311-0
Apr 14 20:35:43 mgraid-S000030311-1 stonithd: [18777]: debug: mgraid-stonith:1 claims it can manage node mgraid-S000030311-0
ss[18693]:	2011/04/14_20:35:43 DEBUG: SSS000030311: Calling /usr/bin/pgrep -f /opt/omneon/bin/SliceServer -d -c/var/omneon/config/config.S000030311 -i1
Apr 14 20:35:43 mgraid-S000030311-1 stonithd: [16633]: debug: Child process external_mgraid-stonith:1_start [18777] exited, its exit code: 0 when signo=0.
Apr 14 20:35:43 mgraid-S000030311-1 stonithd: [16633]: debug: mgraid-stonith:1's (external/mgpstonith) op start finished. op_result=0
Apr 14 20:35:43 mgraid-S000030311-1 stonithd: [16633]: info: mgraid-stonith:1 stonith resource started
ss[18695]:	2011/04/14_20:35:43 DEBUG: ss_status() START SSJ000030313
Apr 14 20:35:43 mgraid-S000030311-1 crm_attribute: [18923]: info: Invoked: crm_attribute -N mgraid-S000030311-1 -n master-SSJ000030314:1 -l reboot -D 
Apr 14 20:35:43 mgraid-S000030311-1 crm_attribute: [18923]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/cib_rw
ss[18703]:	2011/04/14_20:35:43 DEBUG: ss_set_status_variables() ssadm return is 1
Apr 14 20:35:43 mgraid-S000030311-1 crm_attribute: [18923]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/cib_callback
Apr 14 20:35:43 mgraid-S000030311-1 crm_attribute: [18923]: debug: cib_native_signon_raw: Connection to CIB successful
Apr 14 20:35:43 mgraid-S000030311-1 crm_attribute: [18923]: debug: log_data_element: query_node_uuid: Result section <nodes >
Apr 14 20:35:43 mgraid-S000030311-1 crm_attribute: [18923]: debug: log_data_element: query_node_uuid: Result section   <node id="f4e5e15c-d06b-4e37-89b9-4621af05128f" uname="mgraid-s000030311-0" type="normal" />
ss[18700]:	2011/04/14_20:35:43 DEBUG: ss_set_status_variables() ssadm return is 1
Apr 14 20:35:43 mgraid-S000030311-1 crm_attribute: [18923]: debug: log_data_element: query_node_uuid: Result section   <node id="856c1f72-7cd1-4906-8183-8be87eef96f2" uname="mgraid-s000030311-1" type="normal" />
Apr 14 20:35:43 mgraid-S000030311-1 crm_attribute: [18923]: debug: log_data_element: query_node_uuid: Result section </nodes>
Apr 14 20:35:43 mgraid-S000030311-1 lrmd: [16632]: info: Managed mgraid-stonith:1:start process 18690 exited with return code 0.
Apr 14 20:35:43 mgraid-S000030311-1 stonithd: [16633]: debug: client STONITH_RA_EXEC_18690 (pid=18690) signed off
Apr 14 20:35:43 mgraid-S000030311-1 crm_attribute: [18923]: info: determine_host: Mapped mgraid-S000030311-1 to 856c1f72-7cd1-4906-8183-8be87eef96f2
Apr 14 20:35:43 mgraid-S000030311-1 crm_attribute: [18923]: info: attrd_lazy_update: Connecting to cluster... 5 retries remaining
Apr 14 20:35:43 mgraid-S000030311-1 crm_attribute: [18923]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/attrd
ss[18695]:	2011/04/14_20:35:43 DEBUG: ss_set_status_variables() START SSJ000030313
Apr 14 20:35:43 mgraid-S000030311-1 lrmd: [16632]: debug: stonithRA plugin: provider attribute is not needed and will be ignored.
Apr 14 20:35:43 mgraid-S000030311-1 attrd: [16634]: debug: attrd_local_callback: update message from crm_attribute: master-SSJ000030314:1=<null>
Apr 14 20:35:43 mgraid-S000030311-1 attrd: [16634]: info: find_hash_entry: Creating hash entry for master-SSJ000030314:1
Apr 14 20:35:43 mgraid-S000030311-1 attrd: [16634]: debug: attrd_local_callback: Supplied: (null), Current: (null), Stored: (null)
ss[18703]:	2011/04/14_20:35:43 DEBUG: SSJ000030312: Calling /usr/bin/pgrep -f /opt/omneon/bin/SliceServer -d -c/var/omneon/config/config.J000030312 -i1
Apr 14 20:35:43 mgraid-S000030311-1 crm_attribute: [18923]: debug: attrd_update: Sent update: master-SSJ000030314:1=(null) for mgraid-S000030311-1
Apr 14 20:35:43 mgraid-S000030311-1 crm_attribute: [18923]: info: main: Update master-SSJ000030314:1=<none> sent via attrd
Apr 14 20:35:43 mgraid-S000030311-1 crm_attribute: [18923]: debug: cib_native_signoff: Signing out of the CIB Service
ss[18700]:	2011/04/14_20:35:43 DEBUG: SSJ000030316: Calling /usr/bin/pgrep -f /opt/omneon/bin/SliceServer -d -c/var/omneon/config/config.J000030316 -i1
ss[18689]:	2011/04/14_20:35:43 DEBUG: SSJ000030314: Exit code 0
ss[18695]:	2011/04/14_20:35:43 DEBUG: ss_set_status_variables() ssadm return is 1
ss[18689]:	2011/04/14_20:35:43 DEBUG: SSJ000030314: Command output: 
ss[18695]:	2011/04/14_20:35:43 DEBUG: SSJ000030313: Calling /usr/bin/pgrep -f /opt/omneon/bin/SliceServer -d -c/var/omneon/config/config.J000030313 -i1
ss[18689]:	2011/04/14_20:35:43 DEBUG: ss_monitor() returning 7
ss[18693]:	2011/04/14_20:35:43 ERROR: SSS000030311: Called /usr/bin/pgrep -f /opt/omneon/bin/SliceServer -d -c/var/omneon/config/config.S000030311 -i1
ss[18703]:	2011/04/14_20:35:43 ERROR: SSJ000030312: Called /usr/bin/pgrep -f /opt/omneon/bin/SliceServer -d -c/var/omneon/config/config.J000030312 -i1
ss[18693]:	2011/04/14_20:35:43 ERROR: SSS000030311: Exit code 1
ss[18700]:	2011/04/14_20:35:43 ERROR: SSJ000030316: Called /usr/bin/pgrep -f /opt/omneon/bin/SliceServer -d -c/var/omneon/config/config.J000030316 -i1
ss[18703]:	2011/04/14_20:35:43 ERROR: SSJ000030312: Exit code 1
ss[18693]:	2011/04/14_20:35:43 ERROR: SSS000030311: Command output: 
ss[18700]:	2011/04/14_20:35:43 ERROR: SSJ000030316: Exit code 1
ss[18695]:	2011/04/14_20:35:43 ERROR: SSJ000030313: Called /usr/bin/pgrep -f /opt/omneon/bin/SliceServer -d -c/var/omneon/config/config.J000030313 -i1
ss[18693]:	2011/04/14_20:35:43 DEBUG: ss_set_status_variables() END SSS000030311   - Unconfigured
ss[18703]:	2011/04/14_20:35:43 ERROR: SSJ000030312: Command output: 
ss[18700]:	2011/04/14_20:35:43 ERROR: SSJ000030316: Command output: 
ss[18693]:	2011/04/14_20:35:43 DEBUG: ss_status() returning 7
ss[18703]:	2011/04/14_20:35:43 DEBUG: ss_set_status_variables() END SSJ000030312   - Unconfigured
ss[18695]:	2011/04/14_20:35:43 ERROR: SSJ000030313: Exit code 1
ss[18700]:	2011/04/14_20:35:43 DEBUG: ss_set_status_variables() END SSJ000030316   - Unconfigured
ss[18693]:	2011/04/14_20:35:43 DEBUG: SSS000030311: Calling /usr/bin/pkill -9 -f /opt/omneon/bin/SliceServer -d -c/var/omneon/config/config.S000030311 -i1
ss[18703]:	2011/04/14_20:35:43 DEBUG: ss_status() returning 7
ss[18695]:	2011/04/14_20:35:43 ERROR: SSJ000030313: Command output: 
ss[18700]:	2011/04/14_20:35:43 DEBUG: ss_status() returning 7
ss[18703]:	2011/04/14_20:35:43 DEBUG: SSJ000030312: Calling /usr/bin/pkill -9 -f /opt/omneon/bin/SliceServer -d -c/var/omneon/config/config.J000030312 -i1
ss[18695]:	2011/04/14_20:35:43 DEBUG: ss_set_status_variables() END SSJ000030313   - Unconfigured
ss[18700]:	2011/04/14_20:35:43 DEBUG: SSJ000030316: Calling /usr/bin/pkill -9 -f /opt/omneon/bin/SliceServer -d -c/var/omneon/config/config.J000030316 -i1
ss[18695]:	2011/04/14_20:35:43 DEBUG: ss_status() returning 7
ss[18695]:	2011/04/14_20:35:43 DEBUG: SSJ000030313: Calling /usr/bin/pkill -9 -f /opt/omneon/bin/SliceServer -d -c/var/omneon/config/config.J000030313 -i1
ss[18693]:	2011/04/14_20:35:43 ERROR: SSS000030311: Called /usr/bin/pkill -9 -f /opt/omneon/bin/SliceServer -d -c/var/omneon/config/config.S000030311 -i1
ss[18693]:	2011/04/14_20:35:43 ERROR: SSS000030311: Exit code 1
ss[18700]:	2011/04/14_20:35:43 ERROR: SSJ000030316: Called /usr/bin/pkill -9 -f /opt/omneon/bin/SliceServer -d -c/var/omneon/config/config.J000030316 -i1
ss[18693]:	2011/04/14_20:35:43 ERROR: SSS000030311: Command output: 
ss[18703]:	2011/04/14_20:35:43 ERROR: SSJ000030312: Called /usr/bin/pkill -9 -f /opt/omneon/bin/SliceServer -d -c/var/omneon/config/config.J000030312 -i1
ss[18700]:	2011/04/14_20:35:43 ERROR: SSJ000030316: Exit code 1
ss[18695]:	2011/04/14_20:35:43 ERROR: SSJ000030313: Called /usr/bin/pkill -9 -f /opt/omneon/bin/SliceServer -d -c/var/omneon/config/config.J000030313 -i1
ss[18693]:	2011/04/14_20:35:43 DEBUG: SSS000030311: Calling /opt/omneon/bin/SliceServer -d -c/var/omneon/config/config.S000030311 -i1
ss[18703]:	2011/04/14_20:35:43 ERROR: SSJ000030312: Exit code 1
ss[18695]:	2011/04/14_20:35:43 ERROR: SSJ000030313: Exit code 1
ss[18700]:	2011/04/14_20:35:43 ERROR: SSJ000030316: Command output: 
ss[18700]:	2011/04/14_20:35:43 DEBUG: SSJ000030316: Calling /opt/omneon/bin/SliceServer -d -c/var/omneon/config/config.J000030316 -i1
ss[18703]:	2011/04/14_20:35:43 ERROR: SSJ000030312: Command output: 
ss[18695]:	2011/04/14_20:35:43 ERROR: SSJ000030313: Command output: 
ss[18693]:	2011/04/14_20:35:43 DEBUG: SSS000030311: Exit code 0
ss[18703]:	2011/04/14_20:35:43 DEBUG: SSJ000030312: Calling /opt/omneon/bin/SliceServer -d -c/var/omneon/config/config.J000030312 -i1
ss[18693]:	2011/04/14_20:35:43 DEBUG: SSS000030311: Command output: 
ss[18700]:	2011/04/14_20:35:43 DEBUG: SSJ000030316: Exit code 0
ss[18695]:	2011/04/14_20:35:43 DEBUG: SSJ000030313: Calling /opt/omneon/bin/SliceServer -d -c/var/omneon/config/config.J000030313 -i1
ss[18700]:	2011/04/14_20:35:43 DEBUG: SSJ000030316: Command output: 
ss[18703]:	2011/04/14_20:35:43 DEBUG: SSJ000030312: Exit code 0
ss[18703]:	2011/04/14_20:35:43 DEBUG: SSJ000030312: Command output: 
ss[18695]:	2011/04/14_20:35:43 DEBUG: SSJ000030313: Exit code 0
Apr 14 20:35:43 mgraid-S000030311-1 lrmd: [16632]: WARN: Managed SSJ000030314:1:monitor process 18689 exited with return code 7.
Apr 14 20:35:43 mgraid-S000030311-1 lrmd: [16632]: debug: RA output: (SSJ000030314:1:monitor:stdout) 

Apr 14 20:35:43 mgraid-S000030311-1 lrmd: [16632]: info: RA output: (SSS000030311:1:start:stdout) 



Apr 14 20:35:43 mgraid-S000030311-1 lrmd: [16632]: info: RA output: (SSJ000030313:1:start:stdout) 


Apr 14 20:35:43 mgraid-S000030311-1 lrmd: [16632]: info: RA output: (SSJ000030316:1:start:stdout) 



Apr 14 20:35:43 mgraid-S000030311-1 lrmd: [16632]: info: RA output: (SSJ000030312:1:start:stdout) 



ss[18695]:	2011/04/14_20:35:43 DEBUG: SSJ000030313: Command output: 
Apr 14 20:35:43 mgraid-S000030311-1 lrmd: [16632]: info: RA output: (SSJ000030313:1:start:stdout) 

Apr 14 20:35:43 mgraid-S000030311-1 crmd: [16635]: info: process_lrm_event: LRM operation mgraid-stonith:1_start_0 (call=12, rc=0, cib-update=157, confirmed=true) ok
Apr 14 20:35:43 mgraid-S000030311-1 crmd: [16635]: info: process_lrm_event: LRM operation SSJ000030314:1_monitor_0 (call=11, rc=7, cib-update=158, confirmed=true) not running
Apr 14 20:35:43 mgraid-S000030311-1 crmd: [16635]: debug: te_update_diff: Processing diff (cib_modify): 0.66.8 -> 0.66.9 (S_TRANSITION_ENGINE)
Apr 14 20:35:43 mgraid-S000030311-1 crmd: [16635]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify/cib_update_result/diff
Apr 14 20:35:43 mgraid-S000030311-1 crmd: [16635]: info: match_graph_event: Action mgraid-stonith:1_start_0 (8) confirmed on mgraid-s000030311-1 (rc=0)
Apr 14 20:35:43 mgraid-S000030311-1 crmd: [16635]: debug: te_update_diff: Processing diff (cib_modify): 0.66.9 -> 0.66.10 (S_TRANSITION_ENGINE)
Apr 14 20:35:43 mgraid-S000030311-1 crmd: [16635]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify/cib_update_result/diff
Apr 14 20:35:43 mgraid-S000030311-1 crmd: [16635]: info: match_graph_event: Action SSJ000030314:1_monitor_0 (6) confirmed on mgraid-s000030311-1 (rc=0)
Apr 14 20:35:43 mgraid-S000030311-1 crmd: [16635]: info: te_rsc_command: Initiating action 5: probe_complete probe_complete on mgraid-s000030311-1 (local) - no waiting
Apr 14 20:35:43 mgraid-S000030311-1 crmd: [16635]: debug: attrd_update: Sent update: probe_complete=true for localhost
Apr 14 20:35:43 mgraid-S000030311-1 attrd: [16634]: debug: attrd_local_callback: update message from crmd: probe_complete=true
Apr 14 20:35:43 mgraid-S000030311-1 crmd: [16635]: debug: run_graph: Transition 3 (Complete=19, Pending=12, Fired=1, Skipped=0, Incomplete=86, Source=/var/lib/pengine/pe-input-6.bz2): In-progress
Apr 14 20:35:43 mgraid-S000030311-1 attrd: [16634]: debug: attrd_local_callback: Supplied: true, Current: true, Stored: true
Apr 14 20:35:43 mgraid-S000030311-1 crmd: [16635]: debug: run_graph: Transition 3 (Complete=20, Pending=12, Fired=0, Skipped=0, Incomplete=86, Source=/var/lib/pengine/pe-input-6.bz2): In-progress
Apr 14 20:35:43 mgraid-S000030311-1 attrd: [16634]: info: attrd_ha_callback: flush message from mgraid-s000030311-1
Apr 14 20:35:43 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='master-SSJ000030316:0'] does not exist
Apr 14 20:35:43 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for master-SSJ000030316:0=(null) passed
Apr 14 20:35:43 mgraid-S000030311-1 attrd: [16634]: info: attrd_ha_callback: flush message from mgraid-s000030311-1
Apr 14 20:35:43 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='master-SSJ000030316:1'] does not exist
Apr 14 20:35:43 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for master-SSJ000030316:1=(null) passed
Apr 14 20:35:43 mgraid-S000030311-1 attrd: [16634]: info: attrd_ha_callback: flush message from mgraid-s000030311-1
Apr 14 20:35:43 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='master-SSJ000030313:1'] does not exist
Apr 14 20:35:43 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for master-SSJ000030313:1=(null) passed
Apr 14 20:35:43 mgraid-S000030311-1 attrd: [16634]: info: attrd_ha_callback: flush message from mgraid-s000030311-1
Apr 14 20:35:43 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='master-SSJ000030313:0'] does not exist
Apr 14 20:35:43 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for master-SSJ000030313:0=(null) passed
Apr 14 20:35:43 mgraid-S000030311-1 attrd: [16634]: info: attrd_ha_callback: flush message from mgraid-s000030311-1
Apr 14 20:35:43 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='master-SSS000030311:1'] does not exist
Apr 14 20:35:43 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for master-SSS000030311:1=(null) passed
Apr 14 20:35:43 mgraid-S000030311-1 attrd: [16634]: info: attrd_ha_callback: flush message from mgraid-s000030311-1
Apr 14 20:35:43 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='terminate'] does not exist
Apr 14 20:35:43 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for terminate=(null) passed
Apr 14 20:35:43 mgraid-S000030311-1 attrd: [16634]: info: attrd_ha_callback: flush message from mgraid-s000030311-1
Apr 14 20:35:43 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='master-SSS000030311:0'] does not exist
Apr 14 20:35:43 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for master-SSS000030311:0=(null) passed
Apr 14 20:35:43 mgraid-S000030311-1 attrd: [16634]: info: attrd_ha_callback: flush message from mgraid-s000030311-1
Apr 14 20:35:43 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='master-SSJ000030312:1'] does not exist
Apr 14 20:35:43 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for master-SSJ000030312:1=(null) passed
Apr 14 20:35:43 mgraid-S000030311-1 attrd: [16634]: info: attrd_ha_callback: flush message from mgraid-s000030311-1
Apr 14 20:35:43 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='master-SSJ000030312:0'] does not exist
Apr 14 20:35:43 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for master-SSJ000030312:0=(null) passed
Apr 14 20:35:43 mgraid-S000030311-1 attrd: [16634]: info: attrd_ha_callback: flush message from mgraid-s000030311-1
Apr 14 20:35:43 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='shutdown'] does not exist
Apr 14 20:35:43 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for shutdown=(null) passed
Apr 14 20:35:43 mgraid-S000030311-1 ccm: [16630]: info: client (pid=18683) removed from ccm
Apr 14 20:35:44 mgraid-S000030311-1 crmd: [16635]: debug: te_update_diff: Processing diff (cib_modify): 0.66.10 -> 0.66.11 (S_TRANSITION_ENGINE)
Apr 14 20:35:44 mgraid-S000030311-1 crmd: [16635]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify/cib_update_result/diff
Apr 14 20:35:44 mgraid-S000030311-1 crmd: [16635]: info: match_graph_event: Action omserver:0_monitor_5000 (25) confirmed on mgraid-s000030311-0 (rc=0)
Apr 14 20:35:44 mgraid-S000030311-1 crmd: [16635]: debug: te_update_diff: Processing diff (cib_modify): 0.66.11 -> 0.66.12 (S_TRANSITION_ENGINE)
Apr 14 20:35:44 mgraid-S000030311-1 crmd: [16635]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify/cib_update_result/diff
Apr 14 20:35:44 mgraid-S000030311-1 crmd: [16635]: info: match_graph_event: Action icms:0_monitor_5000 (15) confirmed on mgraid-s000030311-0 (rc=0)
Apr 14 20:35:44 mgraid-S000030311-1 crmd: [16635]: debug: te_update_diff: Processing diff (cib_modify): 0.66.12 -> 0.66.13 (S_TRANSITION_ENGINE)
Apr 14 20:35:44 mgraid-S000030311-1 crmd: [16635]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify/cib_update_result/diff
Apr 14 20:35:44 mgraid-S000030311-1 crmd: [16635]: info: match_graph_event: Action mgraid-stonith:0_start_0 (7) confirmed on mgraid-s000030311-0 (rc=0)
Apr 14 20:35:44 mgraid-S000030311-1 crmd: [16635]: debug: te_update_diff: Processing diff (cib_modify): 0.66.13 -> 0.66.14 (S_TRANSITION_ENGINE)
Apr 14 20:35:44 mgraid-S000030311-1 crmd: [16635]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify/cib_update_result/diff
Apr 14 20:35:44 mgraid-S000030311-1 crmd: [16635]: info: match_graph_event: Action SSJ000030314:0_monitor_0 (4) confirmed on mgraid-s000030311-0 (rc=0)
Apr 14 20:35:44 mgraid-S000030311-1 crmd: [16635]: info: te_pseudo_action: Pseudo action 10 fired and confirmed
Apr 14 20:35:44 mgraid-S000030311-1 crmd: [16635]: info: te_rsc_command: Initiating action 3: probe_complete probe_complete on mgraid-s000030311-0 - no waiting
Apr 14 20:35:44 mgraid-S000030311-1 crmd: [16635]: debug: run_graph: Transition 3 (Complete=24, Pending=8, Fired=2, Skipped=0, Incomplete=84, Source=/var/lib/pengine/pe-input-6.bz2): In-progress
Apr 14 20:35:44 mgraid-S000030311-1 crmd: [16635]: info: te_pseudo_action: Pseudo action 2 fired and confirmed
Apr 14 20:35:44 mgraid-S000030311-1 crmd: [16635]: debug: run_graph: Transition 3 (Complete=26, Pending=8, Fired=1, Skipped=0, Incomplete=83, Source=/var/lib/pengine/pe-input-6.bz2): In-progress
Apr 14 20:35:44 mgraid-S000030311-1 crmd: [16635]: info: te_pseudo_action: Pseudo action 159 fired and confirmed
Apr 14 20:35:44 mgraid-S000030311-1 crmd: [16635]: debug: run_graph: Transition 3 (Complete=27, Pending=8, Fired=1, Skipped=0, Incomplete=82, Source=/var/lib/pengine/pe-input-6.bz2): In-progress
Apr 14 20:35:44 mgraid-S000030311-1 crmd: [16635]: info: te_pseudo_action: Pseudo action 153 fired and confirmed
Apr 14 20:35:44 mgraid-S000030311-1 crmd: [16635]: debug: run_graph: Transition 3 (Complete=28, Pending=8, Fired=1, Skipped=0, Incomplete=81, Source=/var/lib/pengine/pe-input-6.bz2): In-progress
Apr 14 20:35:44 mgraid-S000030311-1 crmd: [16635]: info: te_rsc_command: Initiating action 149: start SSJ000030314:0_start_0 on mgraid-s000030311-0
Apr 14 20:35:44 mgraid-S000030311-1 crmd: [16635]: info: te_rsc_command: Initiating action 151: start SSJ000030314:1_start_0 on mgraid-s000030311-1 (local)
Apr 14 20:35:44 mgraid-S000030311-1 crmd: [16635]: info: do_lrm_rsc_op: Performing key=151:3:0:469a0e5c-f535-4c85-84b2-fd971ee76592 op=SSJ000030314:1_start_0 )
Apr 14 20:35:44 mgraid-S000030311-1 lrmd: [16632]: debug: on_msg_perform_op:2359: copying parameters for rsc SSJ000030314:1
Apr 14 20:35:44 mgraid-S000030311-1 lrmd: [16632]: debug: on_msg_perform_op: add an operation operation start[17] on ocf::ss::SSJ000030314:1 for client 16635, its parameters: CRM_meta_clone=[1] CRM_meta_notify_stop_uname=[ ] CRM_meta_notify_slave_resource=[ ] CRM_meta_notify_active_resource=[ ] CRM_meta_notify_demote_uname=[ ] CRM_meta_master_node_max=[1] CRM_meta_notify_stop_resource=[ ] ssconf=[/var/omneon/config/config.J000030314] CRM_meta_notify_master_resource=[ ] CRM_meta_clone_node_max=[1] CRM_meta_clone_max=[2] CRM_meta_notify=[true] CRM_meta_notify_start_resource=[SSJ000030314:0 SSJ0000303 to the operation list.
Apr 14 20:35:44 mgraid-S000030311-1 lrmd: [16632]: info: rsc:SSJ000030314:1:17: start
Apr 14 20:35:44 mgraid-S000030311-1 crmd: [16635]: debug: run_graph: Transition 3 (Complete=29, Pending=10, Fired=2, Skipped=0, Incomplete=79, Source=/var/lib/pengine/pe-input-6.bz2): In-progress
Apr 14 20:35:44 mgraid-S000030311-1 crm_resource: [19223]: info: Invoked: crm_resource --meta -r ms-SSJ000030314 -g STOPBOTH 
Apr 14 20:35:44 mgraid-S000030311-1 crm_resource: [19223]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/cib_rw
Apr 14 20:35:44 mgraid-S000030311-1 crm_resource: [19223]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/cib_callback
Apr 14 20:35:44 mgraid-S000030311-1 crm_resource: [19223]: debug: cib_native_signon_raw: Connection to CIB successful
Apr 14 20:35:44 mgraid-S000030311-1 crm_resource: [19223]: debug: cluster_option: Using default value 'stop' for cluster option 'no-quorum-policy'
Apr 14 20:35:44 mgraid-S000030311-1 crm_resource: [19223]: debug: cluster_option: Using default value 'true' for cluster option 'symmetric-cluster'
Apr 14 20:35:44 mgraid-S000030311-1 crm_resource: [19223]: debug: cluster_option: Using default value '0' for cluster option 'default-resource-stickiness'
Apr 14 20:35:44 mgraid-S000030311-1 crm_resource: [19223]: debug: cluster_option: Using default value 'true' for cluster option 'is-managed-default'
Apr 14 20:35:44 mgraid-S000030311-1 crm_resource: [19223]: debug: cluster_option: Using default value 'false' for cluster option 'maintenance-mode'
Apr 14 20:35:44 mgraid-S000030311-1 crm_resource: [19223]: debug: cluster_option: Using default value 'true' for cluster option 'start-failure-is-fatal'
Apr 14 20:35:44 mgraid-S000030311-1 crm_resource: [19223]: debug: cluster_option: Using default value 'reboot' for cluster option 'stonith-action'
Apr 14 20:35:44 mgraid-S000030311-1 crm_resource: [19223]: debug: cluster_option: Using default value '60s' for cluster option 'stonith-timeout'
Apr 14 20:35:44 mgraid-S000030311-1 crm_resource: [19223]: debug: cluster_option: Using default value 'true' for cluster option 'startup-fencing'
Apr 14 20:35:44 mgraid-S000030311-1 crm_resource: [19223]: debug: cluster_option: Using default value '60s' for cluster option 'cluster-delay'
Apr 14 20:35:44 mgraid-S000030311-1 crm_resource: [19223]: debug: cluster_option: Using default value '30' for cluster option 'batch-limit'
Apr 14 20:35:44 mgraid-S000030311-1 crm_resource: [19223]: debug: cluster_option: Using default value '20s' for cluster option 'default-action-timeout'
Apr 14 20:35:44 mgraid-S000030311-1 crm_resource: [19223]: debug: cluster_option: Using default value 'false' for cluster option 'stop-all-resources'
Apr 14 20:35:44 mgraid-S000030311-1 crm_resource: [19223]: debug: cluster_option: Using default value 'true' for cluster option 'stop-orphan-resources'
Apr 14 20:35:44 mgraid-S000030311-1 crm_resource: [19223]: debug: cluster_option: Using default value 'true' for cluster option 'stop-orphan-actions'
Apr 14 20:35:44 mgraid-S000030311-1 crm_resource: [19223]: debug: cluster_option: Using default value 'false' for cluster option 'remove-after-stop'
Apr 14 20:35:44 mgraid-S000030311-1 crm_resource: [19223]: debug: cluster_option: Using default value '-1' for cluster option 'pe-error-series-max'
Apr 14 20:35:44 mgraid-S000030311-1 crm_resource: [19223]: debug: cluster_option: Using default value '-1' for cluster option 'pe-warn-series-max'
Apr 14 20:35:44 mgraid-S000030311-1 crm_resource: [19223]: debug: cluster_option: Using default value '-1' for cluster option 'pe-input-series-max'
Apr 14 20:35:44 mgraid-S000030311-1 crm_resource: [19223]: debug: cluster_option: Using default value 'none' for cluster option 'node-health-strategy'
Apr 14 20:35:44 mgraid-S000030311-1 crm_resource: [19223]: debug: cluster_option: Using default value '0' for cluster option 'node-health-green'
Apr 14 20:35:44 mgraid-S000030311-1 crm_resource: [19223]: debug: cluster_option: Using default value '0' for cluster option 'node-health-yellow'
Apr 14 20:35:44 mgraid-S000030311-1 crm_resource: [19223]: debug: cluster_option: Using default value '-INFINITY' for cluster option 'node-health-red'
Apr 14 20:35:44 mgraid-S000030311-1 crm_resource: [19223]: debug: unpack_config: STONITH timeout: 60000
Apr 14 20:35:44 mgraid-S000030311-1 crm_resource: [19223]: debug: unpack_config: STONITH of failed nodes is enabled
Apr 14 20:35:44 mgraid-S000030311-1 crm_resource: [19223]: debug: unpack_config: Stop all active resources: false
Apr 14 20:35:44 mgraid-S000030311-1 crm_resource: [19223]: debug: unpack_config: Cluster is symmetric - resources can run anywhere by default
Apr 14 20:35:44 mgraid-S000030311-1 crm_resource: [19223]: debug: unpack_config: Default stickiness: 0
Apr 14 20:35:44 mgraid-S000030311-1 crm_resource: [19223]: debug: unpack_config: On loss of CCM Quorum: Stop ALL resources
Apr 14 20:35:44 mgraid-S000030311-1 crm_resource: [19223]: info: unpack_config: Node scores: 'red' = -INFINITY, 'yellow' = 0, 'green' = 0
Apr 14 20:35:44 mgraid-S000030311-1 crm_resource: [19223]: info: determine_online_status: Node mgraid-s000030311-0 is online
Apr 14 20:35:44 mgraid-S000030311-1 crm_resource: [19223]: debug: unpack_rsc_op: icms:0_monitor_0 on mgraid-s000030311-0 returned 0 (ok) instead of the expected value: 7 (not running)
Apr 14 20:35:44 mgraid-S000030311-1 crm_resource: [19223]: notice: unpack_rsc_op: Operation icms:0_monitor_0 found resource icms:0 active on mgraid-s000030311-0
Apr 14 20:35:44 mgraid-S000030311-1 crm_resource: [19223]: debug: unpack_rsc_op: omserver:0_monitor_0 on mgraid-s000030311-0 returned 0 (ok) instead of the expected value: 7 (not running)
Apr 14 20:35:44 mgraid-S000030311-1 crm_resource: [19223]: notice: unpack_rsc_op: Operation omserver:0_monitor_0 found resource omserver:0 active on mgraid-s000030311-0
Apr 14 20:35:44 mgraid-S000030311-1 crm_resource: [19223]: info: determine_online_status: Node mgraid-s000030311-1 is online
Apr 14 20:35:44 mgraid-S000030311-1 crm_resource: [19223]: debug: unpack_rsc_op: omserver:1_monitor_0 on mgraid-s000030311-1 returned 0 (ok) instead of the expected value: 7 (not running)
Apr 14 20:35:44 mgraid-S000030311-1 crm_resource: [19223]: notice: unpack_rsc_op: Operation omserver:1_monitor_0 found resource omserver:1 active on mgraid-s000030311-1
Apr 14 20:35:44 mgraid-S000030311-1 crm_resource: [19223]: debug: unpack_rsc_op: icms:1_monitor_0 on mgraid-s000030311-1 returned 0 (ok) instead of the expected value: 7 (not running)
Apr 14 20:35:44 mgraid-S000030311-1 crm_resource: [19223]: notice: unpack_rsc_op: Operation icms:1_monitor_0 found resource icms:1 active on mgraid-s000030311-1
Apr 14 20:35:44 mgraid-S000030311-1 crm_resource: [19223]: debug: dump_resource_attr: Looking up STOPBOTH in ms-SSJ000030314
Apr 14 20:35:44 mgraid-S000030311-1 crm_resource: [19223]: debug: cib_native_signoff: Signing out of the CIB Service
Apr 14 20:35:44 mgraid-S000030311-1 crm_resource: [19223]: WARN: main: Error performing operation: The object/attribute does not exist

ss[19197]:	2011/04/14_20:35:44 INFO: ss_start() START
ss[19197]:	2011/04/14_20:35:44 DEBUG: ss_status() START SSJ000030314
ss[19197]:	2011/04/14_20:35:44 DEBUG: ss_set_status_variables() START SSJ000030314
ss[19197]:	2011/04/14_20:35:44 DEBUG: ss_set_status_variables() ssadm return is 1
ss[19197]:	2011/04/14_20:35:44 DEBUG: SSJ000030314: Calling /usr/bin/pgrep -f /opt/omneon/bin/SliceServer -d -c/var/omneon/config/config.J000030314 -i1
ss[19197]:	2011/04/14_20:35:44 ERROR: SSJ000030314: Called /usr/bin/pgrep -f /opt/omneon/bin/SliceServer -d -c/var/omneon/config/config.J000030314 -i1
ss[19197]:	2011/04/14_20:35:44 ERROR: SSJ000030314: Exit code 1
ss[19197]:	2011/04/14_20:35:44 ERROR: SSJ000030314: Command output: 
Apr 14 20:35:44 mgraid-S000030311-1 lrmd: [16632]: info: RA output: (SSJ000030314:1:start:stdout) 

ss[19197]:	2011/04/14_20:35:44 DEBUG: ss_set_status_variables() END SSJ000030314   - Unconfigured
ss[19197]:	2011/04/14_20:35:44 DEBUG: ss_status() returning 7
ss[19197]:	2011/04/14_20:35:44 DEBUG: SSJ000030314: Calling /usr/bin/pkill -9 -f /opt/omneon/bin/SliceServer -d -c/var/omneon/config/config.J000030314 -i1
ss[19197]:	2011/04/14_20:35:44 ERROR: SSJ000030314: Called /usr/bin/pkill -9 -f /opt/omneon/bin/SliceServer -d -c/var/omneon/config/config.J000030314 -i1
ss[19197]:	2011/04/14_20:35:44 ERROR: SSJ000030314: Exit code 1
ss[19197]:	2011/04/14_20:35:44 ERROR: SSJ000030314: Command output: 
Apr 14 20:35:44 mgraid-S000030311-1 lrmd: [16632]: info: RA output: (SSJ000030314:1:start:stdout) 

ss[19197]:	2011/04/14_20:35:44 DEBUG: SSJ000030314: Calling /opt/omneon/bin/SliceServer -d -c/var/omneon/config/config.J000030314 -i1
ss[19197]:	2011/04/14_20:35:44 DEBUG: SSJ000030314: Exit code 0
ss[19197]:	2011/04/14_20:35:44 DEBUG: SSJ000030314: Command output: 
Apr 14 20:35:44 mgraid-S000030311-1 lrmd: [16632]: info: RA output: (SSJ000030314:1:start:stdout) 

Apr 14 20:35:44 mgraid-S000030311-1 ccm: [16630]: info: client (pid=19192) removed from ccm
Apr 14 20:35:44 mgraid-S000030311-1 cib: [16631]: debug: activateCibXml: Triggering CIB write for cib_replace op
Apr 14 20:35:44 mgraid-S000030311-1 cib: [16631]: info: cib_replace_notify: Replaced: 0.66.14 -> 0.68.1 from <null>
Apr 14 20:35:44 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: - <cib epoch="66" num_updates="14" admin_epoch="0" />
Apr 14 20:35:44 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: + <cib epoch="68" num_updates="1" admin_epoch="0" >
Apr 14 20:35:44 mgraid-S000030311-1 attrd: [16634]: info: do_cib_replaced: Sending full refresh
Apr 14 20:35:44 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +   <configuration >
Apr 14 20:35:44 mgraid-S000030311-1 attrd: [16634]: info: attrd_trigger_update: Sending flush op to all hosts for: master-SSJ000030316:0 (<null>)
Apr 14 20:35:44 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +     <constraints >
Apr 14 20:35:44 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +       <rsc_location id="ms-SSJ000030314-master-w1" rsc="ms-SSJ000030314" __crm_diff_marker__="added:top" >
Apr 14 20:35:44 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +         <rule id="ms-SSJ000030314-master-w1-rule" role="master" score="100" >
Apr 14 20:35:44 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +           <expression attribute="#uname" id="ms-SSJ000030314-master-w1-expression" operation="eq" value="mgraid-s000030311-1" />
Apr 14 20:35:44 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +         </rule>
Apr 14 20:35:44 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +       </rsc_location>
Apr 14 20:35:44 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +     </constraints>
Apr 14 20:35:44 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +   </configuration>
Apr 14 20:35:44 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: + </cib>
Apr 14 20:35:44 mgraid-S000030311-1 cib: [16631]: info: cib_process_request: Operation complete: op cib_replace for section 'all' (origin=local/crm_shadow/2, version=0.68.1): ok (rc=0)
Apr 14 20:35:44 mgraid-S000030311-1 crmd: [16635]: debug: te_update_diff: Processing diff (cib_replace): 0.66.14 -> 0.68.1 (S_TRANSITION_ENGINE)
Apr 14 20:35:44 mgraid-S000030311-1 crmd: [16635]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify/cib_update_result/diff
Apr 14 20:35:44 mgraid-S000030311-1 crmd: [16635]: info: abort_transition_graph: need_abort:59 - Triggered transition abort (complete=0) : Non-status change
Apr 14 20:35:44 mgraid-S000030311-1 crmd: [16635]: info: update_abort_priority: Abort priority upgraded from 0 to 1000000
Apr 14 20:35:44 mgraid-S000030311-1 crmd: [16635]: info: update_abort_priority: Abort action done superceeded by restart
Apr 14 20:35:44 mgraid-S000030311-1 crmd: [16635]: info: need_abort: Aborting on change to epoch
Apr 14 20:35:44 mgraid-S000030311-1 crmd: [16635]: debug: do_cib_replaced: Updating the CIB after a replace: DC=true
Apr 14 20:35:44 mgraid-S000030311-1 crmd: [16635]: info: populate_cib_nodes_ha: Requesting the list of configured nodes
Apr 14 20:35:44 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='master-SSJ000030316:0'] does not exist
Apr 14 20:35:44 mgraid-S000030311-1 cib: [16631]: debug: Forking temp process write_cib_contents
Apr 14 20:35:44 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for master-SSJ000030316:0=(null) passed
Apr 14 20:35:44 mgraid-S000030311-1 attrd: [16634]: info: attrd_trigger_update: Sending flush op to all hosts for: master-SSJ000030316:1 (<null>)
Apr 14 20:35:44 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='master-SSJ000030316:1'] does not exist
Apr 14 20:35:44 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for master-SSJ000030316:1=(null) passed
Apr 14 20:35:44 mgraid-S000030311-1 attrd: [16634]: info: attrd_trigger_update: Sending flush op to all hosts for: master-SSJ000030313:1 (<null>)
Apr 14 20:35:44 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='master-SSJ000030313:1'] does not exist
Apr 14 20:35:44 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for master-SSJ000030313:1=(null) passed
Apr 14 20:35:44 mgraid-S000030311-1 attrd: [16634]: info: attrd_trigger_update: Sending flush op to all hosts for: master-SSJ000030313:0 (<null>)
Apr 14 20:35:44 mgraid-S000030311-1 cib: [19331]: info: write_cib_contents: Archived previous version as /var/lib/heartbeat/crm/cib-30.raw
Apr 14 20:35:44 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='master-SSJ000030313:0'] does not exist
Apr 14 20:35:44 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for master-SSJ000030313:0=(null) passed
Apr 14 20:35:44 mgraid-S000030311-1 attrd: [16634]: info: attrd_trigger_update: Sending flush op to all hosts for: master-SSS000030311:1 (<null>)
Apr 14 20:35:44 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='master-SSS000030311:1'] does not exist
Apr 14 20:35:44 mgraid-S000030311-1 cib: [19331]: info: write_cib_contents: Wrote version 0.68.0 of the CIB to disk (digest: beb732f35ed2da1afde1c0be729be418)
Apr 14 20:35:44 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for master-SSS000030311:1=(null) passed
Apr 14 20:35:44 mgraid-S000030311-1 attrd: [16634]: info: attrd_trigger_update: Sending flush op to all hosts for: terminate (<null>)
Apr 14 20:35:44 mgraid-S000030311-1 cib: [19331]: info: retrieveCib: Reading cluster configuration from: /var/lib/heartbeat/crm/cib.DuzG0v (digest: /var/lib/heartbeat/crm/cib.uB1PkH)
Apr 14 20:35:44 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='terminate'] does not exist
Apr 14 20:35:44 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for terminate=(null) passed
Apr 14 20:35:44 mgraid-S000030311-1 attrd: [16634]: info: attrd_trigger_update: Sending flush op to all hosts for: master-SSS000030311:0 (<null>)
Apr 14 20:35:44 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='master-SSS000030311:0'] does not exist
Apr 14 20:35:44 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for master-SSS000030311:0=(null) passed
Apr 14 20:35:44 mgraid-S000030311-1 attrd: [16634]: info: attrd_trigger_update: Sending flush op to all hosts for: master-SSJ000030312:1 (<null>)
Apr 14 20:35:45 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='master-SSJ000030312:1'] does not exist
Apr 14 20:35:45 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for master-SSJ000030312:1=(null) passed
Apr 14 20:35:45 mgraid-S000030311-1 attrd: [16634]: info: attrd_trigger_update: Sending flush op to all hosts for: master-SSJ000030312:0 (<null>)
Apr 14 20:35:45 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='master-SSJ000030312:0'] does not exist
Apr 14 20:35:45 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for master-SSJ000030312:0=(null) passed
Apr 14 20:35:45 mgraid-S000030311-1 attrd: [16634]: info: attrd_trigger_update: Sending flush op to all hosts for: master-SSJ000030314:1 (<null>)
Apr 14 20:35:45 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='master-SSJ000030314:1'] does not exist
Apr 14 20:35:45 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for master-SSJ000030314:1=(null) passed
Apr 14 20:35:45 mgraid-S000030311-1 attrd: [16634]: info: attrd_trigger_update: Sending flush op to all hosts for: shutdown (<null>)
Apr 14 20:35:45 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='shutdown'] does not exist
Apr 14 20:35:45 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for shutdown=(null) passed
Apr 14 20:35:45 mgraid-S000030311-1 attrd: [16634]: info: attrd_trigger_update: Sending flush op to all hosts for: probe_complete (true)
Apr 14 20:35:45 mgraid-S000030311-1 cib: [16631]: info: Managed write_cib_contents process 19331 exited with return code 0.
Apr 14 20:35:45 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='probe_complete'] (/cib/status/node_state[2]/transient_attributes/instance_attributes/nvpair)
Apr 14 20:35:45 mgraid-S000030311-1 attrd: [16634]: debug: log_data_element: find_nvpair_attr: Match <nvpair id="status-856c1f72-7cd1-4906-8183-8be87eef96f2-probe_complete" name="probe_complete" value="true" />
Apr 14 20:35:45 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update 267 for probe_complete=true passed
Apr 14 20:35:45 mgraid-S000030311-1 attrd: [16634]: info: attrd_ha_callback: flush message from mgraid-s000030311-1
Apr 14 20:35:45 mgraid-S000030311-1 crmd: [16635]: debug: populate_cib_nodes_ha: Node: mgraid-s000030311-1 (uuid: 856c1f72-7cd1-4906-8183-8be87eef96f2)
Apr 14 20:35:45 mgraid-S000030311-1 crmd: [16635]: debug: populate_cib_nodes_ha: Node: mgraid-s000030311-0 (uuid: f4e5e15c-d06b-4e37-89b9-4621af05128f)
Apr 14 20:35:45 mgraid-S000030311-1 crmd: [16635]: debug: ghash_update_cib_node: Updating mgraid-s000030311-0: true (overwrite=true) hash_size=2
Apr 14 20:35:45 mgraid-S000030311-1 crmd: [16635]: debug: ghash_update_cib_node: Updating mgraid-s000030311-1: true (overwrite=true) hash_size=2
Apr 14 20:35:45 mgraid-S000030311-1 crmd: [16635]: debug: s_crmd_fsa: Processing I_ELECTION: [ state=S_TRANSITION_ENGINE cause=C_FSA_INTERNAL origin=do_cib_replaced ]
Apr 14 20:35:45 mgraid-S000030311-1 crmd: [16635]: info: do_state_transition: State transition S_TRANSITION_ENGINE -> S_ELECTION [ input=I_ELECTION cause=C_FSA_INTERNAL origin=do_cib_replaced ]
Apr 14 20:35:45 mgraid-S000030311-1 crmd: [16635]: info: update_dc: Unset DC mgraid-s000030311-1
Apr 14 20:35:45 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
Apr 14 20:35:45 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_INTEGRATE_TIMER_STOP
Apr 14 20:35:45 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_FINALIZE_TIMER_STOP
Apr 14 20:35:45 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_VOTE
Apr 14 20:35:45 mgraid-S000030311-1 crmd: [16635]: debug: do_election_vote: Started election 33
Apr 14 20:35:45 mgraid-S000030311-1 crmd: [16635]: debug: crm_timer_start: Started Election Timeout (I_ELECTION_DC:120000ms), src=185
Apr 14 20:35:45 mgraid-S000030311-1 crmd: [16635]: debug: run_graph: Transition 3 (Complete=29, Pending=10, Fired=0, Skipped=34, Incomplete=45, Source=/var/lib/pengine/pe-input-6.bz2): In-progress
Apr 14 20:35:45 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_COUNT
Apr 14 20:35:45 mgraid-S000030311-1 crmd: [16635]: debug: do_election_count_vote: Created voted hash
Apr 14 20:35:45 mgraid-S000030311-1 crmd: [16635]: debug: do_election_count_vote: Election 33 (current: 33, owner: 856c1f72-7cd1-4906-8183-8be87eef96f2): Processed vote from mgraid-s000030311-1 (Recorded)
Apr 14 20:35:45 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_CHECK
Apr 14 20:35:45 mgraid-S000030311-1 crmd: [16635]: debug: do_election_check: Still waiting on 1 non-votes (2 total)
Apr 14 20:35:45 mgraid-S000030311-1 cib: [16631]: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/159, version=0.68.1): ok (rc=0)
Apr 14 20:35:45 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='master-SSJ000030316:0'] does not exist
Apr 14 20:35:45 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for master-SSJ000030316:0=(null) passed
Apr 14 20:35:45 mgraid-S000030311-1 attrd: [16634]: info: attrd_ha_callback: flush message from mgraid-s000030311-1
Apr 14 20:35:45 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='master-SSJ000030316:1'] does not exist
Apr 14 20:35:45 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for master-SSJ000030316:1=(null) passed
Apr 14 20:35:45 mgraid-S000030311-1 attrd: [16634]: info: attrd_ha_callback: flush message from mgraid-s000030311-1
Apr 14 20:35:45 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='master-SSJ000030313:1'] does not exist
Apr 14 20:35:45 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for master-SSJ000030313:1=(null) passed
Apr 14 20:35:45 mgraid-S000030311-1 attrd: [16634]: info: attrd_ha_callback: flush message from mgraid-s000030311-1
Apr 14 20:35:45 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='master-SSJ000030313:0'] does not exist
Apr 14 20:35:45 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for master-SSJ000030313:0=(null) passed
Apr 14 20:35:45 mgraid-S000030311-1 attrd: [16634]: info: attrd_ha_callback: flush message from mgraid-s000030311-1
Apr 14 20:35:45 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='master-SSS000030311:1'] does not exist
Apr 14 20:35:45 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for master-SSS000030311:1=(null) passed
Apr 14 20:35:45 mgraid-S000030311-1 attrd: [16634]: info: attrd_ha_callback: flush message from mgraid-s000030311-1
Apr 14 20:35:45 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='terminate'] does not exist
Apr 14 20:35:45 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for terminate=(null) passed
Apr 14 20:35:45 mgraid-S000030311-1 attrd: [16634]: info: attrd_ha_callback: flush message from mgraid-s000030311-1
Apr 14 20:35:45 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='master-SSS000030311:0'] does not exist
Apr 14 20:35:45 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for master-SSS000030311:0=(null) passed
Apr 14 20:35:45 mgraid-S000030311-1 attrd: [16634]: info: attrd_ha_callback: flush message from mgraid-s000030311-1
Apr 14 20:35:45 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='master-SSJ000030312:1'] does not exist
Apr 14 20:35:45 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for master-SSJ000030312:1=(null) passed
Apr 14 20:35:45 mgraid-S000030311-1 attrd: [16634]: info: attrd_ha_callback: flush message from mgraid-s000030311-1
Apr 14 20:35:45 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='master-SSJ000030312:0'] does not exist
Apr 14 20:35:45 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for master-SSJ000030312:0=(null) passed
Apr 14 20:35:45 mgraid-S000030311-1 attrd: [16634]: info: attrd_ha_callback: flush message from mgraid-s000030311-1
Apr 14 20:35:45 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='master-SSJ000030314:1'] does not exist
Apr 14 20:35:45 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for master-SSJ000030314:1=(null) passed
Apr 14 20:35:45 mgraid-S000030311-1 attrd: [16634]: info: attrd_ha_callback: flush message from mgraid-s000030311-1
Apr 14 20:35:45 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='shutdown'] does not exist
Apr 14 20:35:45 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for shutdown=(null) passed
Apr 14 20:35:45 mgraid-S000030311-1 lrmd: [16632]: debug: on_msg_register:client lrmadmin [19437] registered
Apr 14 20:35:45 mgraid-S000030311-1 lrmd: [16632]: debug: on_receive_cmd: the IPC to client [pid:19437] disconnected.
Apr 14 20:35:45 mgraid-S000030311-1 lrmd: [16632]: debug: unregister_client: client lrmadmin [pid:19437] is unregistered
Apr 14 20:35:45 mgraid-S000030311-1 lrmd: [16632]: debug: on_msg_register:client lrmadmin [19438] registered
Apr 14 20:35:45 mgraid-S000030311-1 lrmd: [16632]: debug: on_receive_cmd: the IPC to client [pid:19438] disconnected.
Apr 14 20:35:45 mgraid-S000030311-1 lrmd: [16632]: debug: unregister_client: client lrmadmin [pid:19438] is unregistered
Apr 14 20:35:45 mgraid-S000030311-1 cib: [16631]: debug: activateCibXml: Triggering CIB write for cib_replace op
Apr 14 20:35:45 mgraid-S000030311-1 cib: [16631]: info: cib_replace_notify: Local-only Replace: 0.70.1 from <null>
Apr 14 20:35:45 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: - <cib epoch="68" admin_epoch="0" num_updates="1" />
Apr 14 20:35:45 mgraid-S000030311-1 attrd: [16634]: info: do_cib_replaced: Sending full refresh
Apr 14 20:35:45 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: + <cib epoch="70" admin_epoch="0" num_updates="1" >
Apr 14 20:35:45 mgraid-S000030311-1 attrd: [16634]: info: attrd_trigger_update: Sending flush op to all hosts for: master-SSJ000030316:0 (<null>)
Apr 14 20:35:45 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +   <configuration >
Apr 14 20:35:45 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +     <resources >
Apr 14 20:35:45 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +       <primitive class="ocf" id="SSJ000030315" provider="omneon" type="ss" __crm_diff_marker__="added:top" >
Apr 14 20:35:45 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +         <instance_attributes id="SSJ000030315-instance_attributes" >
Apr 14 20:35:45 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +           <nvpair id="SSJ000030315-instance_attributes-ss_resource" name="ss_resource" value="SSJ000030315" />
Apr 14 20:35:45 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +           <nvpair id="SSJ000030315-instance_attributes-ssconf" name="ssconf" value="/var/omneon/config/config.J000030315" />
Apr 14 20:35:45 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +         </instance_attributes>
Apr 14 20:35:45 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +         <operations >
Apr 14 20:35:45 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +           <op id="SSJ000030315-monitor-3s" interval="3s" name="monitor" role="Master" timeout="7s" />
Apr 14 20:35:45 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +           <op id="SSJ000030315-monitor-10s" interval="10s" name="monitor" role="Slave" timeout="7" />
Apr 14 20:35:45 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +           <op id="SSJ000030315-stop-0" interval="0" name="stop" timeout="20" />
Apr 14 20:35:45 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +           <op id="SSJ000030315-start-0" interval="0" name="start" timeout="300" />
Apr 14 20:35:45 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +         </operations>
Apr 14 20:35:45 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +       </primitive>
Apr 14 20:35:45 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +     </resources>
Apr 14 20:35:45 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +   </configuration>
Apr 14 20:35:45 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: + </cib>
Apr 14 20:35:45 mgraid-S000030311-1 cib: [16631]: info: cib_process_request: Operation complete: op cib_replace for section 'all' (origin=local/crm_shadow/2, version=0.70.1): ok (rc=0)
Apr 14 20:35:45 mgraid-S000030311-1 crmd: [16635]: debug: te_update_diff: Processing diff (cib_replace): 0.68.1 -> 0.70.1 (S_ELECTION)
Apr 14 20:35:45 mgraid-S000030311-1 crmd: [16635]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify/cib_update_result/diff
Apr 14 20:35:45 mgraid-S000030311-1 crmd: [16635]: info: abort_transition_graph: need_abort:59 - Triggered transition abort (complete=0) : Non-status change
Apr 14 20:35:45 mgraid-S000030311-1 crmd: [16635]: info: need_abort: Aborting on change to epoch
Apr 14 20:35:45 mgraid-S000030311-1 crmd: [16635]: debug: do_cib_replaced: Updating the CIB after a replace: DC=true
Apr 14 20:35:45 mgraid-S000030311-1 crmd: [16635]: info: populate_cib_nodes_ha: Requesting the list of configured nodes
Apr 14 20:35:45 mgraid-S000030311-1 cib: [16631]: debug: xmlfromIPC: Peer disconnected
Apr 14 20:35:45 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='master-SSJ000030316:0'] does not exist
Apr 14 20:35:45 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for master-SSJ000030316:0=(null) passed
Apr 14 20:35:45 mgraid-S000030311-1 attrd: [16634]: info: attrd_trigger_update: Sending flush op to all hosts for: master-SSJ000030316:1 (<null>)
Apr 14 20:35:45 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='master-SSJ000030316:1'] does not exist
Apr 14 20:35:45 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for master-SSJ000030316:1=(null) passed
Apr 14 20:35:45 mgraid-S000030311-1 attrd: [16634]: info: attrd_trigger_update: Sending flush op to all hosts for: master-SSJ000030313:1 (<null>)
Apr 14 20:35:45 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='master-SSJ000030313:1'] does not exist
Apr 14 20:35:45 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for master-SSJ000030313:1=(null) passed
Apr 14 20:35:45 mgraid-S000030311-1 attrd: [16634]: info: attrd_trigger_update: Sending flush op to all hosts for: master-SSJ000030313:0 (<null>)
Apr 14 20:35:45 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='master-SSJ000030313:0'] does not exist
Apr 14 20:35:45 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for master-SSJ000030313:0=(null) passed
Apr 14 20:35:45 mgraid-S000030311-1 attrd: [16634]: info: attrd_trigger_update: Sending flush op to all hosts for: master-SSS000030311:1 (<null>)
Apr 14 20:35:45 mgraid-S000030311-1 cib: [16631]: debug: Forking temp process write_cib_contents
Apr 14 20:35:45 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='master-SSS000030311:1'] does not exist
Apr 14 20:35:45 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for master-SSS000030311:1=(null) passed
Apr 14 20:35:45 mgraid-S000030311-1 attrd: [16634]: info: attrd_trigger_update: Sending flush op to all hosts for: terminate (<null>)
Apr 14 20:35:45 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='terminate'] does not exist
Apr 14 20:35:45 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for terminate=(null) passed
Apr 14 20:35:45 mgraid-S000030311-1 attrd: [16634]: info: attrd_trigger_update: Sending flush op to all hosts for: master-SSS000030311:0 (<null>)
Apr 14 20:35:45 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='master-SSS000030311:0'] does not exist
Apr 14 20:35:45 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for master-SSS000030311:0=(null) passed
Apr 14 20:35:45 mgraid-S000030311-1 attrd: [16634]: info: attrd_trigger_update: Sending flush op to all hosts for: master-SSJ000030312:1 (<null>)
Apr 14 20:35:45 mgraid-S000030311-1 cib: [19458]: info: write_cib_contents: Archived previous version as /var/lib/heartbeat/crm/cib-31.raw
Apr 14 20:35:45 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='master-SSJ000030312:1'] does not exist
Apr 14 20:35:45 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for master-SSJ000030312:1=(null) passed
Apr 14 20:35:45 mgraid-S000030311-1 attrd: [16634]: info: attrd_trigger_update: Sending flush op to all hosts for: master-SSJ000030312:0 (<null>)
Apr 14 20:35:45 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='master-SSJ000030312:0'] does not exist
Apr 14 20:35:45 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for master-SSJ000030312:0=(null) passed
Apr 14 20:35:45 mgraid-S000030311-1 attrd: [16634]: info: attrd_trigger_update: Sending flush op to all hosts for: master-SSJ000030314:1 (<null>)
Apr 14 20:35:45 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='master-SSJ000030314:1'] does not exist
Apr 14 20:35:45 mgraid-S000030311-1 cib: [19458]: info: write_cib_contents: Wrote version 0.70.0 of the CIB to disk (digest: 999c12ae4fd2903afa8acfd347f3f347)
Apr 14 20:35:45 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for master-SSJ000030314:1=(null) passed
Apr 14 20:35:45 mgraid-S000030311-1 cib: [19458]: info: retrieveCib: Reading cluster configuration from: /var/lib/heartbeat/crm/cib.dLAUjo (digest: /var/lib/heartbeat/crm/cib.raEPTB)
Apr 14 20:35:45 mgraid-S000030311-1 attrd: [16634]: info: attrd_trigger_update: Sending flush op to all hosts for: shutdown (<null>)
Apr 14 20:35:45 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='shutdown'] does not exist
Apr 14 20:35:45 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for shutdown=(null) passed
Apr 14 20:35:45 mgraid-S000030311-1 attrd: [16634]: info: attrd_trigger_update: Sending flush op to all hosts for: probe_complete (true)
Apr 14 20:35:45 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='probe_complete'] (/cib/status/node_state[2]/transient_attributes/instance_attributes/nvpair)
Apr 14 20:35:45 mgraid-S000030311-1 attrd: [16634]: debug: log_data_element: find_nvpair_attr: Match <nvpair id="status-856c1f72-7cd1-4906-8183-8be87eef96f2-probe_complete" name="probe_complete" value="true" />
Apr 14 20:35:45 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update 291 for probe_complete=true passed
Apr 14 20:35:45 mgraid-S000030311-1 cib: [16631]: info: Managed write_cib_contents process 19458 exited with return code 0.
Apr 14 20:35:45 mgraid-S000030311-1 attrd: [16634]: info: attrd_ha_callback: flush message from mgraid-s000030311-1
Apr 14 20:35:45 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='master-SSJ000030316:0'] does not exist
Apr 14 20:35:45 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for master-SSJ000030316:0=(null) passed
Apr 14 20:35:45 mgraid-S000030311-1 attrd: [16634]: info: attrd_ha_callback: flush message from mgraid-s000030311-1
Apr 14 20:35:45 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='master-SSJ000030316:1'] does not exist
Apr 14 20:35:45 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for master-SSJ000030316:1=(null) passed
Apr 14 20:35:45 mgraid-S000030311-1 attrd: [16634]: info: attrd_ha_callback: flush message from mgraid-s000030311-1
Apr 14 20:35:45 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='master-SSJ000030313:1'] does not exist
Apr 14 20:35:45 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for master-SSJ000030313:1=(null) passed
Apr 14 20:35:45 mgraid-S000030311-1 attrd: [16634]: info: attrd_ha_callback: flush message from mgraid-s000030311-1
Apr 14 20:35:45 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='master-SSJ000030313:0'] does not exist
Apr 14 20:35:45 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for master-SSJ000030313:0=(null) passed
Apr 14 20:35:45 mgraid-S000030311-1 attrd: [16634]: info: attrd_ha_callback: flush message from mgraid-s000030311-1
Apr 14 20:35:45 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='master-SSS000030311:1'] does not exist
Apr 14 20:35:45 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for master-SSS000030311:1=(null) passed
Apr 14 20:35:45 mgraid-S000030311-1 attrd: [16634]: info: attrd_ha_callback: flush message from mgraid-s000030311-1
Apr 14 20:35:45 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='terminate'] does not exist
Apr 14 20:35:45 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for terminate=(null) passed
Apr 14 20:35:45 mgraid-S000030311-1 attrd: [16634]: info: attrd_ha_callback: flush message from mgraid-s000030311-1
Apr 14 20:35:45 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='master-SSS000030311:0'] does not exist
Apr 14 20:35:45 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for master-SSS000030311:0=(null) passed
Apr 14 20:35:45 mgraid-S000030311-1 attrd: [16634]: info: attrd_ha_callback: flush message from mgraid-s000030311-1
Apr 14 20:35:45 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='master-SSJ000030312:1'] does not exist
Apr 14 20:35:45 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for master-SSJ000030312:1=(null) passed
Apr 14 20:35:45 mgraid-S000030311-1 attrd: [16634]: info: attrd_ha_callback: flush message from mgraid-s000030311-1
Apr 14 20:35:45 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='master-SSJ000030312:0'] does not exist
Apr 14 20:35:45 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for master-SSJ000030312:0=(null) passed
Apr 14 20:35:45 mgraid-S000030311-1 attrd: [16634]: info: attrd_ha_callback: flush message from mgraid-s000030311-1
Apr 14 20:35:45 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='master-SSJ000030314:1'] does not exist
Apr 14 20:35:45 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for master-SSJ000030314:1=(null) passed
Apr 14 20:35:45 mgraid-S000030311-1 attrd: [16634]: info: attrd_ha_callback: flush message from mgraid-s000030311-1
Apr 14 20:35:45 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='shutdown'] does not exist
Apr 14 20:35:45 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for shutdown=(null) passed
Apr 14 20:35:45 mgraid-S000030311-1 crmd: [16635]: debug: populate_cib_nodes_ha: Node: mgraid-s000030311-1 (uuid: 856c1f72-7cd1-4906-8183-8be87eef96f2)
Apr 14 20:35:46 mgraid-S000030311-1 cib: [16631]: debug: activateCibXml: Triggering CIB write for cib_replace op
Apr 14 20:35:46 mgraid-S000030311-1 cib: [16631]: info: cib_replace_notify: Local-only Replace: 0.73.1 from <null>
Apr 14 20:35:46 mgraid-S000030311-1 attrd: [16634]: info: do_cib_replaced: Sending full refresh
Apr 14 20:35:46 mgraid-S000030311-1 attrd: [16634]: info: attrd_trigger_update: Sending flush op to all hosts for: master-SSJ000030316:0 (<null>)
Apr 14 20:35:46 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: - <cib epoch="70" admin_epoch="0" num_updates="1" >
Apr 14 20:35:46 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: -   <configuration >
Apr 14 20:35:46 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: -     <resources >
Apr 14 20:35:46 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: -       <primitive class="ocf" id="SSJ000030315" provider="omneon" type="ss" __crm_diff_marker__="removed:top" >
Apr 14 20:35:46 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: -         <instance_attributes id="SSJ000030315-instance_attributes" >
Apr 14 20:35:46 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: -           <nvpair id="SSJ000030315-instance_attributes-ss_resource" name="ss_resource" value="SSJ000030315" />
Apr 14 20:35:46 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: -           <nvpair id="SSJ000030315-instance_attributes-ssconf" name="ssconf" value="/var/omneon/config/config.J000030315" />
Apr 14 20:35:46 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: -         </instance_attributes>
Apr 14 20:35:46 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: -         <operations >
Apr 14 20:35:46 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: -           <op id="SSJ000030315-monitor-3s" interval="3s" name="monitor" role="Master" timeout="7s" />
Apr 14 20:35:46 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: -           <op id="SSJ000030315-monitor-10s" interval="10s" name="monitor" role="Slave" timeout="7" />
Apr 14 20:35:46 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: -           <op id="SSJ000030315-stop-0" interval="0" name="stop" timeout="20" />
Apr 14 20:35:46 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: -           <op id="SSJ000030315-start-0" interval="0" name="start" timeout="300" />
Apr 14 20:35:46 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: -         </operations>
Apr 14 20:35:46 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: -       </primitive>
Apr 14 20:35:46 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: -     </resources>
Apr 14 20:35:46 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: -   </configuration>
Apr 14 20:35:46 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: - </cib>
Apr 14 20:35:46 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: + <cib epoch="73" admin_epoch="0" num_updates="1" >
Apr 14 20:35:46 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +   <configuration >
Apr 14 20:35:46 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +     <resources >
Apr 14 20:35:46 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +       <master id="ms-SSJ000030315" __crm_diff_marker__="added:top" >
Apr 14 20:35:46 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +         <meta_attributes id="ms-SSJ000030315-meta_attributes" >
Apr 14 20:35:46 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +           <nvpair id="ms-SSJ000030315-meta_attributes-clone-max" name="clone-max" value="2" />
Apr 14 20:35:46 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +           <nvpair id="ms-SSJ000030315-meta_attributes-notify" name="notify" value="true" />
Apr 14 20:35:46 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +           <nvpair id="ms-SSJ000030315-meta_attributes-globally-unique" name="globally-unique" value="false" />
Apr 14 20:35:46 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +           <nvpair id="ms-SSJ000030315-meta_attributes-target-role" name="target-role" value="Started" />
Apr 14 20:35:46 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +         </meta_attributes>
Apr 14 20:35:46 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +         <primitive class="ocf" id="SSJ000030315" provider="omneon" type="ss" >
Apr 14 20:35:46 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +           <instance_attributes id="SSJ000030315-instance_attributes" >
Apr 14 20:35:46 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +             <nvpair id="SSJ000030315-instance_attributes-ss_resource" name="ss_resource" value="SSJ000030315" />
Apr 14 20:35:46 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +             <nvpair id="SSJ000030315-instance_attributes-ssconf" name="ssconf" value="/var/omneon/config/config.J000030315" />
Apr 14 20:35:46 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +           </instance_attributes>
Apr 14 20:35:46 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +           <operations >
Apr 14 20:35:46 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +             <op id="SSJ000030315-monitor-3s" interval="3s" name="monitor" role="Master" timeout="7s" />
Apr 14 20:35:46 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +             <op id="SSJ000030315-monitor-10s" interval="10s" name="monitor" role="Slave" timeout="7" />
Apr 14 20:35:46 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +             <op id="SSJ000030315-stop-0" interval="0" name="stop" timeout="20" />
Apr 14 20:35:46 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +             <op id="SSJ000030315-start-0" interval="0" name="start" timeout="300" />
Apr 14 20:35:46 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +           </operations>
Apr 14 20:35:46 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +         </primitive>
Apr 14 20:35:46 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +       </master>
Apr 14 20:35:46 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +     </resources>
Apr 14 20:35:46 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +   </configuration>
Apr 14 20:35:46 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: + </cib>
Apr 14 20:35:46 mgraid-S000030311-1 cib: [16631]: info: cib_process_request: Operation complete: op cib_replace for section 'all' (origin=local/crm_shadow/2, version=0.73.1): ok (rc=0)
Apr 14 20:35:46 mgraid-S000030311-1 cib: [16631]: debug: xmlfromIPC: Peer disconnected
Apr 14 20:35:46 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='master-SSJ000030316:0'] does not exist
Apr 14 20:35:46 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for master-SSJ000030316:0=(null) passed
Apr 14 20:35:46 mgraid-S000030311-1 attrd: [16634]: info: attrd_trigger_update: Sending flush op to all hosts for: master-SSJ000030316:1 (<null>)
Apr 14 20:35:46 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='master-SSJ000030316:1'] does not exist
Apr 14 20:35:46 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for master-SSJ000030316:1=(null) passed
Apr 14 20:35:46 mgraid-S000030311-1 attrd: [16634]: info: attrd_trigger_update: Sending flush op to all hosts for: master-SSJ000030313:1 (<null>)
Apr 14 20:35:46 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='master-SSJ000030313:1'] does not exist
Apr 14 20:35:46 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for master-SSJ000030313:1=(null) passed
Apr 14 20:35:46 mgraid-S000030311-1 attrd: [16634]: info: attrd_trigger_update: Sending flush op to all hosts for: master-SSJ000030313:0 (<null>)
Apr 14 20:35:46 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='master-SSJ000030313:0'] does not exist
Apr 14 20:35:46 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for master-SSJ000030313:0=(null) passed
Apr 14 20:35:46 mgraid-S000030311-1 attrd: [16634]: info: attrd_trigger_update: Sending flush op to all hosts for: master-SSS000030311:1 (<null>)
Apr 14 20:35:46 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='master-SSS000030311:1'] does not exist
Apr 14 20:35:46 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for master-SSS000030311:1=(null) passed
Apr 14 20:35:46 mgraid-S000030311-1 attrd: [16634]: info: attrd_trigger_update: Sending flush op to all hosts for: terminate (<null>)
Apr 14 20:35:46 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='terminate'] does not exist
Apr 14 20:35:46 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for terminate=(null) passed
Apr 14 20:35:46 mgraid-S000030311-1 attrd: [16634]: info: attrd_trigger_update: Sending flush op to all hosts for: master-SSS000030311:0 (<null>)
Apr 14 20:35:46 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='master-SSS000030311:0'] does not exist
Apr 14 20:35:46 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for master-SSS000030311:0=(null) passed
Apr 14 20:35:46 mgraid-S000030311-1 attrd: [16634]: info: attrd_trigger_update: Sending flush op to all hosts for: master-SSJ000030312:1 (<null>)
Apr 14 20:35:46 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='master-SSJ000030312:1'] does not exist
Apr 14 20:35:46 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for master-SSJ000030312:1=(null) passed
Apr 14 20:35:46 mgraid-S000030311-1 attrd: [16634]: info: attrd_trigger_update: Sending flush op to all hosts for: master-SSJ000030312:0 (<null>)
Apr 14 20:35:46 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='master-SSJ000030312:0'] does not exist
Apr 14 20:35:46 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for master-SSJ000030312:0=(null) passed
Apr 14 20:35:46 mgraid-S000030311-1 attrd: [16634]: info: attrd_trigger_update: Sending flush op to all hosts for: master-SSJ000030314:1 (<null>)
Apr 14 20:35:46 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='master-SSJ000030314:1'] does not exist
Apr 14 20:35:46 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for master-SSJ000030314:1=(null) passed
Apr 14 20:35:46 mgraid-S000030311-1 attrd: [16634]: info: attrd_trigger_update: Sending flush op to all hosts for: shutdown (<null>)
Apr 14 20:35:46 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='shutdown'] does not exist
Apr 14 20:35:46 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for shutdown=(null) passed
Apr 14 20:35:46 mgraid-S000030311-1 attrd: [16634]: info: attrd_trigger_update: Sending flush op to all hosts for: probe_complete (true)
Apr 14 20:35:46 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='probe_complete'] (/cib/status/node_state[2]/transient_attributes/instance_attributes/nvpair)
Apr 14 20:35:46 mgraid-S000030311-1 attrd: [16634]: debug: log_data_element: find_nvpair_attr: Match <nvpair id="status-856c1f72-7cd1-4906-8183-8be87eef96f2-probe_complete" name="probe_complete" value="true" />
Apr 14 20:35:46 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update 315 for probe_complete=true passed
Apr 14 20:35:46 mgraid-S000030311-1 cib: [16631]: debug: Forking temp process write_cib_contents
Apr 14 20:35:46 mgraid-S000030311-1 cib: [19492]: info: write_cib_contents: Archived previous version as /var/lib/heartbeat/crm/cib-32.raw
Apr 14 20:35:46 mgraid-S000030311-1 cib: [19492]: info: write_cib_contents: Wrote version 0.73.0 of the CIB to disk (digest: 3fc3abcd6db0c53b2aca2539bf38a0e7)
Apr 14 20:35:46 mgraid-S000030311-1 cib: [19492]: info: retrieveCib: Reading cluster configuration from: /var/lib/heartbeat/crm/cib.8cIOBV (digest: /var/lib/heartbeat/crm/cib.YwkWHa)
Apr 14 20:35:46 mgraid-S000030311-1 cib: [16631]: info: Managed write_cib_contents process 19492 exited with return code 0.
Apr 14 20:35:46 mgraid-S000030311-1 attrd: [16634]: info: attrd_ha_callback: flush message from mgraid-s000030311-1
Apr 14 20:35:46 mgraid-S000030311-1 crmd: [16635]: debug: populate_cib_nodes_ha: Node: mgraid-s000030311-0 (uuid: f4e5e15c-d06b-4e37-89b9-4621af05128f)
Apr 14 20:35:46 mgraid-S000030311-1 crmd: [16635]: debug: ghash_update_cib_node: Updating mgraid-s000030311-0: true (overwrite=true) hash_size=2
Apr 14 20:35:46 mgraid-S000030311-1 crmd: [16635]: debug: ghash_update_cib_node: Updating mgraid-s000030311-1: true (overwrite=true) hash_size=2
Apr 14 20:35:46 mgraid-S000030311-1 crmd: [16635]: debug: te_update_diff: Processing diff (cib_replace): 0.70.1 -> 0.73.1 (S_ELECTION)
Apr 14 20:35:46 mgraid-S000030311-1 crmd: [16635]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify/cib_update_result/diff
Apr 14 20:35:46 mgraid-S000030311-1 crmd: [16635]: info: abort_transition_graph: need_abort:59 - Triggered transition abort (complete=0) : Non-status change
Apr 14 20:35:46 mgraid-S000030311-1 crmd: [16635]: info: need_abort: Aborting on change to epoch
Apr 14 20:35:46 mgraid-S000030311-1 crmd: [16635]: debug: do_cib_replaced: Updating the CIB after a replace: DC=true
Apr 14 20:35:46 mgraid-S000030311-1 crmd: [16635]: info: populate_cib_nodes_ha: Requesting the list of configured nodes
Apr 14 20:35:46 mgraid-S000030311-1 cib: [16631]: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/161, version=0.73.1): ok (rc=0)
Apr 14 20:35:46 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='master-SSJ000030316:0'] does not exist
Apr 14 20:35:46 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for master-SSJ000030316:0=(null) passed
Apr 14 20:35:46 mgraid-S000030311-1 attrd: [16634]: info: attrd_ha_callback: flush message from mgraid-s000030311-1
Apr 14 20:35:46 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='master-SSJ000030316:1'] does not exist
Apr 14 20:35:46 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for master-SSJ000030316:1=(null) passed
Apr 14 20:35:46 mgraid-S000030311-1 attrd: [16634]: info: attrd_ha_callback: flush message from mgraid-s000030311-1
Apr 14 20:35:46 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='master-SSJ000030313:1'] does not exist
Apr 14 20:35:46 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for master-SSJ000030313:1=(null) passed
Apr 14 20:35:46 mgraid-S000030311-1 attrd: [16634]: info: attrd_ha_callback: flush message from mgraid-s000030311-1
Apr 14 20:35:46 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='master-SSJ000030313:0'] does not exist
Apr 14 20:35:46 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for master-SSJ000030313:0=(null) passed
Apr 14 20:35:46 mgraid-S000030311-1 attrd: [16634]: info: attrd_ha_callback: flush message from mgraid-s000030311-1
Apr 14 20:35:46 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='master-SSS000030311:1'] does not exist
Apr 14 20:35:46 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for master-SSS000030311:1=(null) passed
Apr 14 20:35:46 mgraid-S000030311-1 attrd: [16634]: info: attrd_ha_callback: flush message from mgraid-s000030311-1
Apr 14 20:35:46 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='terminate'] does not exist
Apr 14 20:35:46 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for terminate=(null) passed
Apr 14 20:35:46 mgraid-S000030311-1 attrd: [16634]: info: attrd_ha_callback: flush message from mgraid-s000030311-1
Apr 14 20:35:46 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='master-SSS000030311:0'] does not exist
Apr 14 20:35:46 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for master-SSS000030311:0=(null) passed
Apr 14 20:35:46 mgraid-S000030311-1 attrd: [16634]: info: attrd_ha_callback: flush message from mgraid-s000030311-1
Apr 14 20:35:46 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='master-SSJ000030312:1'] does not exist
Apr 14 20:35:46 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for master-SSJ000030312:1=(null) passed
Apr 14 20:35:46 mgraid-S000030311-1 attrd: [16634]: info: attrd_ha_callback: flush message from mgraid-s000030311-1
Apr 14 20:35:46 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='master-SSJ000030312:0'] does not exist
Apr 14 20:35:46 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for master-SSJ000030312:0=(null) passed
Apr 14 20:35:46 mgraid-S000030311-1 attrd: [16634]: info: attrd_ha_callback: flush message from mgraid-s000030311-1
Apr 14 20:35:46 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='master-SSJ000030314:1'] does not exist
Apr 14 20:35:46 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for master-SSJ000030314:1=(null) passed
Apr 14 20:35:46 mgraid-S000030311-1 attrd: [16634]: info: attrd_ha_callback: flush message from mgraid-s000030311-1
Apr 14 20:35:46 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='shutdown'] does not exist
Apr 14 20:35:46 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for shutdown=(null) passed
Apr 14 20:35:46 mgraid-S000030311-1 cib: [16631]: debug: activateCibXml: Triggering CIB write for cib_replace op
Apr 14 20:35:46 mgraid-S000030311-1 cib: [16631]: info: cib_replace_notify: Local-only Replace: 0.75.1 from <null>
Apr 14 20:35:46 mgraid-S000030311-1 attrd: [16634]: info: do_cib_replaced: Sending full refresh
Apr 14 20:35:46 mgraid-S000030311-1 attrd: [16634]: info: attrd_trigger_update: Sending flush op to all hosts for: master-SSJ000030316:0 (<null>)
Apr 14 20:35:46 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: - <cib epoch="73" admin_epoch="0" num_updates="1" />
Apr 14 20:35:46 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: + <cib epoch="75" admin_epoch="0" num_updates="1" >
Apr 14 20:35:46 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +   <configuration >
Apr 14 20:35:46 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +     <constraints >
Apr 14 20:35:46 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +       <rsc_order first="cloneIcms" id="orderms-SSJ000030315" score="0" then="ms-SSJ000030315" __crm_diff_marker__="added:top" />
Apr 14 20:35:46 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +     </constraints>
Apr 14 20:35:46 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +   </configuration>
Apr 14 20:35:46 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: + </cib>
Apr 14 20:35:46 mgraid-S000030311-1 cib: [16631]: info: cib_process_request: Operation complete: op cib_replace for section 'all' (origin=local/crm_shadow/2, version=0.75.1): ok (rc=0)
Apr 14 20:35:46 mgraid-S000030311-1 cib: [16631]: debug: xmlfromIPC: Peer disconnected
Apr 14 20:35:46 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='master-SSJ000030316:0'] does not exist
Apr 14 20:35:46 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for master-SSJ000030316:0=(null) passed
Apr 14 20:35:46 mgraid-S000030311-1 attrd: [16634]: info: attrd_trigger_update: Sending flush op to all hosts for: master-SSJ000030316:1 (<null>)
Apr 14 20:35:46 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='master-SSJ000030316:1'] does not exist
Apr 14 20:35:46 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for master-SSJ000030316:1=(null) passed
Apr 14 20:35:46 mgraid-S000030311-1 attrd: [16634]: info: attrd_trigger_update: Sending flush op to all hosts for: master-SSJ000030313:1 (<null>)
Apr 14 20:35:46 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='master-SSJ000030313:1'] does not exist
Apr 14 20:35:46 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for master-SSJ000030313:1=(null) passed
Apr 14 20:35:46 mgraid-S000030311-1 attrd: [16634]: info: attrd_trigger_update: Sending flush op to all hosts for: master-SSJ000030313:0 (<null>)
Apr 14 20:35:46 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='master-SSJ000030313:0'] does not exist
Apr 14 20:35:46 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for master-SSJ000030313:0=(null) passed
Apr 14 20:35:46 mgraid-S000030311-1 attrd: [16634]: info: attrd_trigger_update: Sending flush op to all hosts for: master-SSS000030311:1 (<null>)
Apr 14 20:35:46 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='master-SSS000030311:1'] does not exist
Apr 14 20:35:46 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for master-SSS000030311:1=(null) passed
Apr 14 20:35:46 mgraid-S000030311-1 attrd: [16634]: info: attrd_trigger_update: Sending flush op to all hosts for: terminate (<null>)
Apr 14 20:35:46 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='terminate'] does not exist
Apr 14 20:35:46 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for terminate=(null) passed
Apr 14 20:35:46 mgraid-S000030311-1 attrd: [16634]: info: attrd_trigger_update: Sending flush op to all hosts for: master-SSS000030311:0 (<null>)
Apr 14 20:35:46 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='master-SSS000030311:0'] does not exist
Apr 14 20:35:46 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for master-SSS000030311:0=(null) passed
Apr 14 20:35:46 mgraid-S000030311-1 attrd: [16634]: info: attrd_trigger_update: Sending flush op to all hosts for: master-SSJ000030312:1 (<null>)
Apr 14 20:35:46 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='master-SSJ000030312:1'] does not exist
Apr 14 20:35:46 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for master-SSJ000030312:1=(null) passed
Apr 14 20:35:46 mgraid-S000030311-1 attrd: [16634]: info: attrd_trigger_update: Sending flush op to all hosts for: master-SSJ000030312:0 (<null>)
Apr 14 20:35:46 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='master-SSJ000030312:0'] does not exist
Apr 14 20:35:46 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for master-SSJ000030312:0=(null) passed
Apr 14 20:35:46 mgraid-S000030311-1 attrd: [16634]: info: attrd_trigger_update: Sending flush op to all hosts for: master-SSJ000030314:1 (<null>)
Apr 14 20:35:46 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='master-SSJ000030314:1'] does not exist
Apr 14 20:35:46 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for master-SSJ000030314:1=(null) passed
Apr 14 20:35:46 mgraid-S000030311-1 attrd: [16634]: info: attrd_trigger_update: Sending flush op to all hosts for: shutdown (<null>)
Apr 14 20:35:46 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='shutdown'] does not exist
Apr 14 20:35:46 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for shutdown=(null) passed
Apr 14 20:35:46 mgraid-S000030311-1 attrd: [16634]: info: attrd_trigger_update: Sending flush op to all hosts for: probe_complete (true)
Apr 14 20:35:46 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='probe_complete'] (/cib/status/node_state[2]/transient_attributes/instance_attributes/nvpair)
Apr 14 20:35:46 mgraid-S000030311-1 attrd: [16634]: debug: log_data_element: find_nvpair_attr: Match <nvpair id="status-856c1f72-7cd1-4906-8183-8be87eef96f2-probe_complete" name="probe_complete" value="true" />
Apr 14 20:35:46 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update 339 for probe_complete=true passed
Apr 14 20:35:46 mgraid-S000030311-1 cib: [16631]: debug: Forking temp process write_cib_contents
Apr 14 20:35:46 mgraid-S000030311-1 cib: [19797]: info: write_cib_contents: Archived previous version as /var/lib/heartbeat/crm/cib-33.raw
Apr 14 20:35:46 mgraid-S000030311-1 cib: [19797]: info: write_cib_contents: Wrote version 0.75.0 of the CIB to disk (digest: c026635d42c9ef3f292fa11d10e16193)
Apr 14 20:35:46 mgraid-S000030311-1 cib: [19797]: info: retrieveCib: Reading cluster configuration from: /var/lib/heartbeat/crm/cib.h8iS1x (digest: /var/lib/heartbeat/crm/cib.CARstO)
Apr 14 20:35:46 mgraid-S000030311-1 cib: [16631]: info: Managed write_cib_contents process 19797 exited with return code 0.
Apr 14 20:35:46 mgraid-S000030311-1 attrd: [16634]: info: attrd_ha_callback: flush message from mgraid-s000030311-1
Apr 14 20:35:46 mgraid-S000030311-1 crmd: [16635]: debug: populate_cib_nodes_ha: Node: mgraid-s000030311-1 (uuid: 856c1f72-7cd1-4906-8183-8be87eef96f2)
Apr 14 20:35:46 mgraid-S000030311-1 crmd: [16635]: debug: populate_cib_nodes_ha: Node: mgraid-s000030311-0 (uuid: f4e5e15c-d06b-4e37-89b9-4621af05128f)
Apr 14 20:35:46 mgraid-S000030311-1 crmd: [16635]: debug: ghash_update_cib_node: Updating mgraid-s000030311-0: true (overwrite=true) hash_size=2
Apr 14 20:35:46 mgraid-S000030311-1 crmd: [16635]: debug: ghash_update_cib_node: Updating mgraid-s000030311-1: true (overwrite=true) hash_size=2
Apr 14 20:35:46 mgraid-S000030311-1 crmd: [16635]: debug: te_update_diff: Processing diff (cib_replace): 0.73.1 -> 0.75.1 (S_ELECTION)
Apr 14 20:35:46 mgraid-S000030311-1 crmd: [16635]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify/cib_update_result/diff
Apr 14 20:35:46 mgraid-S000030311-1 crmd: [16635]: info: abort_transition_graph: need_abort:59 - Triggered transition abort (complete=0) : Non-status change
Apr 14 20:35:46 mgraid-S000030311-1 crmd: [16635]: info: need_abort: Aborting on change to epoch
Apr 14 20:35:46 mgraid-S000030311-1 crmd: [16635]: debug: do_cib_replaced: Updating the CIB after a replace: DC=true
Apr 14 20:35:46 mgraid-S000030311-1 crmd: [16635]: info: populate_cib_nodes_ha: Requesting the list of configured nodes
Apr 14 20:35:47 mgraid-S000030311-1 cib: [16631]: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/163, version=0.75.1): ok (rc=0)
Apr 14 20:35:47 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='master-SSJ000030316:0'] does not exist
Apr 14 20:35:47 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for master-SSJ000030316:0=(null) passed
Apr 14 20:35:47 mgraid-S000030311-1 attrd: [16634]: info: attrd_ha_callback: flush message from mgraid-s000030311-1
Apr 14 20:35:47 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='master-SSJ000030316:1'] does not exist
Apr 14 20:35:47 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for master-SSJ000030316:1=(null) passed
Apr 14 20:35:47 mgraid-S000030311-1 attrd: [16634]: info: attrd_ha_callback: flush message from mgraid-s000030311-1
Apr 14 20:35:47 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='master-SSJ000030313:1'] does not exist
Apr 14 20:35:47 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for master-SSJ000030313:1=(null) passed
Apr 14 20:35:47 mgraid-S000030311-1 attrd: [16634]: info: attrd_ha_callback: flush message from mgraid-s000030311-1
Apr 14 20:35:47 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='master-SSJ000030313:0'] does not exist
Apr 14 20:35:47 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for master-SSJ000030313:0=(null) passed
Apr 14 20:35:47 mgraid-S000030311-1 attrd: [16634]: info: attrd_ha_callback: flush message from mgraid-s000030311-1
Apr 14 20:35:47 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='master-SSS000030311:1'] does not exist
Apr 14 20:35:47 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for master-SSS000030311:1=(null) passed
Apr 14 20:35:47 mgraid-S000030311-1 attrd: [16634]: info: attrd_ha_callback: flush message from mgraid-s000030311-1
Apr 14 20:35:47 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='terminate'] does not exist
Apr 14 20:35:47 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for terminate=(null) passed
Apr 14 20:35:47 mgraid-S000030311-1 attrd: [16634]: info: attrd_ha_callback: flush message from mgraid-s000030311-1
Apr 14 20:35:47 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='master-SSS000030311:0'] does not exist
Apr 14 20:35:47 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for master-SSS000030311:0=(null) passed
Apr 14 20:35:47 mgraid-S000030311-1 attrd: [16634]: info: attrd_ha_callback: flush message from mgraid-s000030311-1
Apr 14 20:35:47 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='master-SSJ000030312:1'] does not exist
Apr 14 20:35:47 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for master-SSJ000030312:1=(null) passed
Apr 14 20:35:47 mgraid-S000030311-1 attrd: [16634]: info: attrd_ha_callback: flush message from mgraid-s000030311-1
Apr 14 20:35:47 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='master-SSJ000030312:0'] does not exist
Apr 14 20:35:47 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for master-SSJ000030312:0=(null) passed
Apr 14 20:35:47 mgraid-S000030311-1 attrd: [16634]: info: attrd_ha_callback: flush message from mgraid-s000030311-1
Apr 14 20:35:47 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='master-SSJ000030314:1'] does not exist
Apr 14 20:35:47 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for master-SSJ000030314:1=(null) passed
Apr 14 20:35:47 mgraid-S000030311-1 attrd: [16634]: info: attrd_ha_callback: flush message from mgraid-s000030311-1
Apr 14 20:35:47 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='shutdown'] does not exist
Apr 14 20:35:47 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for shutdown=(null) passed
Apr 14 20:35:47 mgraid-S000030311-1 crmd: [16635]: debug: populate_cib_nodes_ha: Node: mgraid-s000030311-1 (uuid: 856c1f72-7cd1-4906-8183-8be87eef96f2)
Apr 14 20:35:47 mgraid-S000030311-1 crmd: [16635]: debug: populate_cib_nodes_ha: Node: mgraid-s000030311-0 (uuid: f4e5e15c-d06b-4e37-89b9-4621af05128f)
Apr 14 20:35:47 mgraid-S000030311-1 crmd: [16635]: debug: ghash_update_cib_node: Updating mgraid-s000030311-0: true (overwrite=true) hash_size=2
Apr 14 20:35:47 mgraid-S000030311-1 crmd: [16635]: debug: ghash_update_cib_node: Updating mgraid-s000030311-1: true (overwrite=true) hash_size=2
Apr 14 20:35:47 mgraid-S000030311-1 crmd: [16635]: debug: s_crmd_fsa: Processing I_ELECTION: [ state=S_ELECTION cause=C_FSA_INTERNAL origin=do_cib_replaced ]
Apr 14 20:35:47 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_VOTE
Apr 14 20:35:47 mgraid-S000030311-1 crmd: [16635]: debug: do_election_vote: Started election 34
Apr 14 20:35:47 mgraid-S000030311-1 crmd: [16635]: debug: crm_timer_start: Election Timeout (I_ELECTION_DC:120000ms) already running: src=185
Apr 14 20:35:47 mgraid-S000030311-1 crmd: [16635]: debug: s_crmd_fsa: Processing I_ELECTION: [ state=S_ELECTION cause=C_FSA_INTERNAL origin=do_cib_replaced ]
Apr 14 20:35:47 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_VOTE
Apr 14 20:35:47 mgraid-S000030311-1 crmd: [16635]: debug: do_election_vote: Started election 35
Apr 14 20:35:47 mgraid-S000030311-1 crmd: [16635]: debug: crm_timer_start: Election Timeout (I_ELECTION_DC:120000ms) already running: src=185
Apr 14 20:35:47 mgraid-S000030311-1 crmd: [16635]: debug: s_crmd_fsa: Processing I_ELECTION: [ state=S_ELECTION cause=C_FSA_INTERNAL origin=do_cib_replaced ]
Apr 14 20:35:47 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_VOTE
Apr 14 20:35:47 mgraid-S000030311-1 crmd: [16635]: debug: do_election_vote: Started election 36
Apr 14 20:35:47 mgraid-S000030311-1 crmd: [16635]: debug: crm_timer_start: Election Timeout (I_ELECTION_DC:120000ms) already running: src=185
Apr 14 20:35:47 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_COUNT
Apr 14 20:35:47 mgraid-S000030311-1 crmd: [16635]: debug: do_election_count_vote: Created voted hash
Apr 14 20:35:47 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_CHECK
Apr 14 20:35:47 mgraid-S000030311-1 crmd: [16635]: debug: do_election_check: Still waiting on 2 non-votes (2 total)
Apr 14 20:35:47 mgraid-S000030311-1 crmd: [16635]: debug: run_graph: Transition 3 (Complete=29, Pending=10, Fired=0, Skipped=34, Incomplete=45, Source=/var/lib/pengine/pe-input-6.bz2): In-progress
Apr 14 20:35:47 mgraid-S000030311-1 cib: [16631]: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/165, version=0.75.1): ok (rc=0)
Apr 14 20:35:47 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_COUNT
Apr 14 20:35:47 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_CHECK
Apr 14 20:35:47 mgraid-S000030311-1 crmd: [16635]: debug: do_election_check: Still waiting on 2 non-votes (2 total)
Apr 14 20:35:47 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_COUNT
Apr 14 20:35:47 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_CHECK
Apr 14 20:35:47 mgraid-S000030311-1 crmd: [16635]: debug: do_election_check: Still waiting on 2 non-votes (2 total)
Apr 14 20:35:47 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_COUNT
Apr 14 20:35:47 mgraid-S000030311-1 crmd: [16635]: debug: do_election_count_vote: Election 36 (current: 36, owner: 856c1f72-7cd1-4906-8183-8be87eef96f2): Processed vote from mgraid-s000030311-1 (Recorded)
Apr 14 20:35:47 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_CHECK
Apr 14 20:35:47 mgraid-S000030311-1 crmd: [16635]: debug: do_election_check: Still waiting on 1 non-votes (2 total)
Apr 14 20:35:47 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_COUNT
Apr 14 20:35:47 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_CHECK
Apr 14 20:35:47 mgraid-S000030311-1 crmd: [16635]: debug: do_election_check: Still waiting on 1 non-votes (2 total)
Apr 14 20:35:47 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_COUNT
Apr 14 20:35:47 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_CHECK
Apr 14 20:35:47 mgraid-S000030311-1 crmd: [16635]: debug: do_election_check: Still waiting on 1 non-votes (2 total)
Apr 14 20:35:47 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_COUNT
Apr 14 20:35:47 mgraid-S000030311-1 crmd: [16635]: debug: do_election_count_vote: Election 36 (current: 36, owner: 856c1f72-7cd1-4906-8183-8be87eef96f2): Processed no-vote from mgraid-s000030311-0 (Recorded)
Apr 14 20:35:47 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_CHECK
Apr 14 20:35:47 mgraid-S000030311-1 crmd: [16635]: debug: do_election_check: Destroying voted hash
Apr 14 20:35:47 mgraid-S000030311-1 crmd: [16635]: debug: s_crmd_fsa: Processing I_ELECTION_DC: [ state=S_ELECTION cause=C_FSA_INTERNAL origin=do_election_check ]
Apr 14 20:35:47 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_LOG   
Apr 14 20:35:47 mgraid-S000030311-1 crmd: [16635]: info: do_state_transition: State transition S_ELECTION -> S_INTEGRATION [ input=I_ELECTION_DC cause=C_FSA_INTERNAL origin=do_election_check ]
Apr 14 20:35:47 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_TE_START
Apr 14 20:35:47 mgraid-S000030311-1 crmd: [16635]: debug: do_te_control: The transitioner is already active
Apr 14 20:35:47 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_PE_START
Apr 14 20:35:47 mgraid-S000030311-1 crmd: [16635]: info: start_subsystem: Starting sub-system "pengine"
Apr 14 20:35:47 mgraid-S000030311-1 crmd: [16635]: WARN: start_subsystem: Client pengine already running as pid 16986
Apr 14 20:35:48 mgraid-S000030311-1 ccm: [16630]: info: client (pid=19870) removed from ccm
Apr 14 20:35:48 mgraid-S000030311-1 lrmd: [16632]: debug: rsc:icms:1:9: monitor
Apr 14 20:35:48 mgraid-S000030311-1 lrmd: [16632]: debug: rsc:omserver:1:10: monitor
Apr 14 20:35:49 mgraid-S000030311-1 ccm: [16630]: info: client (pid=20025) removed from ccm
Apr 14 20:35:50 mgraid-S000030311-1 ccm: [16630]: info: client (pid=20140) removed from ccm
Apr 14 20:35:50 mgraid-S000030311-1 crmd: [16635]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/pengine
Apr 14 20:35:50 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
Apr 14 20:35:50 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_INTEGRATE_TIMER_START
Apr 14 20:35:50 mgraid-S000030311-1 crmd: [16635]: debug: crm_timer_start: Started Integration Timer (I_INTEGRATED:180000ms), src=193
Apr 14 20:35:50 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_FINALIZE_TIMER_STOP
Apr 14 20:35:50 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_DC_TAKEOVER
Apr 14 20:35:50 mgraid-S000030311-1 crmd: [16635]: info: do_dc_takeover: Taking over DC status for this partition
Apr 14 20:35:50 mgraid-S000030311-1 cib: [16631]: info: cib_process_readwrite: We are now in R/O mode
Apr 14 20:35:50 mgraid-S000030311-1 cib: [16631]: info: cib_process_request: Operation complete: op cib_slave_all for section 'all' (origin=local/crmd/167, version=0.75.1): ok (rc=0)
Apr 14 20:35:50 mgraid-S000030311-1 cib: [16631]: info: cib_process_readwrite: We are now in R/W mode
Apr 14 20:35:51 mgraid-S000030311-1 cib: [16631]: info: cib_process_request: Operation complete: op cib_master for section 'all' (origin=local/crmd/168, version=0.75.1): ok (rc=0)
Apr 14 20:35:51 mgraid-S000030311-1 cib: [16631]: info: cib_process_request: Operation complete: op cib_modify for section cib (origin=local/crmd/169, version=0.75.1): ok (rc=0)
Apr 14 20:35:51 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: Processing cib_query op for //cib/configuration/crm_config//nvpair[@name='dc-version'] (/cib/configuration/crm_config/cluster_property_set/nvpair[1])
Apr 14 20:35:51 mgraid-S000030311-1 crmd: [16635]: debug: log_data_element: find_nvpair_attr: Match <nvpair id="cib-bootstrap-options-dc-version" name="dc-version" value="1.0.9-89bd754939df5150de7cd76835f98fe90851b677" />
Apr 14 20:35:51 mgraid-S000030311-1 cib: [16631]: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/171, version=0.75.1): ok (rc=0)
Apr 14 20:35:51 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: Processing cib_query op for //cib/configuration/crm_config//nvpair[@name='cluster-infrastructure'] (/cib/configuration/crm_config/cluster_property_set/nvpair[2])
Apr 14 20:35:51 mgraid-S000030311-1 crmd: [16635]: debug: log_data_element: find_nvpair_attr: Match <nvpair id="cib-bootstrap-options-cluster-infrastructure" name="cluster-infrastructure" value="Heartbeat" />
Apr 14 20:35:51 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_DC_JOIN_OFFER_ALL
Apr 14 20:35:51 mgraid-S000030311-1 crmd: [16635]: debug: initialize_join: join-7: Initializing join data (flag=true)
Apr 14 20:35:51 mgraid-S000030311-1 crmd: [16635]: debug: join_make_offer: join-7: Sending offer to mgraid-s000030311-0
Apr 14 20:35:51 mgraid-S000030311-1 crmd: [16635]: debug: join_make_offer: join-7: Sending offer to mgraid-s000030311-1
Apr 14 20:35:51 mgraid-S000030311-1 crmd: [16635]: info: do_dc_join_offer_all: join-7: Waiting on 2 outstanding join acks
Apr 14 20:35:51 mgraid-S000030311-1 crmd: [16635]: debug: fsa_dump_inputs: Removed input: 0000000000020000 (R_HAVE_CIB)
Apr 14 20:35:51 mgraid-S000030311-1 cib: [16631]: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/173, version=0.75.1): ok (rc=0)
Apr 14 20:35:51 mgraid-S000030311-1 crmd: [16635]: debug: config_query_callback: Call 174 : Parsing CIB options
Apr 14 20:35:51 mgraid-S000030311-1 crmd: [16635]: debug: cluster_option: Using default value '15min' for cluster option 'cluster-recheck-interval'
Apr 14 20:35:51 mgraid-S000030311-1 crmd: [16635]: debug: cluster_option: Using default value '2min' for cluster option 'election-timeout'
Apr 14 20:35:51 mgraid-S000030311-1 crmd: [16635]: debug: cluster_option: Using default value '20min' for cluster option 'shutdown-escalation'
Apr 14 20:35:51 mgraid-S000030311-1 crmd: [16635]: debug: cluster_option: Using default value '3min' for cluster option 'crmd-integration-timeout'
Apr 14 20:35:51 mgraid-S000030311-1 crmd: [16635]: debug: cluster_option: Using default value '30min' for cluster option 'crmd-finalization-timeout'
Apr 14 20:35:51 mgraid-S000030311-1 crmd: [16635]: debug: cluster_option: Using default value '2' for cluster option 'expected-quorum-votes'
Apr 14 20:35:51 mgraid-S000030311-1 crmd: [16635]: info: config_query_callback: Checking for expired actions every 900000ms
Apr 14 20:35:51 mgraid-S000030311-1 ccm: [16630]: info: client (pid=20218) removed from ccm
Apr 14 20:35:51 mgraid-S000030311-1 crmd: [16635]: debug: handle_request: Raising I_JOIN_OFFER: join-7
Apr 14 20:35:51 mgraid-S000030311-1 crmd: [16635]: debug: s_crmd_fsa: Processing I_JOIN_OFFER: [ state=S_INTEGRATION cause=C_HA_MESSAGE origin=route_message ]
Apr 14 20:35:51 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_CL_JOIN_REQUEST
Apr 14 20:35:51 mgraid-S000030311-1 crmd: [16635]: info: update_dc: Set DC to mgraid-s000030311-1 (3.0.1)
Apr 14 20:35:51 mgraid-S000030311-1 crmd: [16635]: debug: do_cl_join_offer_respond: do_cl_join_offer_respond added action A_DC_TIMER_STOP to the FSA
Apr 14 20:35:51 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
Apr 14 20:35:51 mgraid-S000030311-1 crmd: [16635]: debug: join_query_callback: Respond to join offer join-7
Apr 14 20:35:51 mgraid-S000030311-1 crmd: [16635]: debug: join_query_callback: Acknowledging mgraid-s000030311-1 as our DC
Apr 14 20:35:51 mgraid-S000030311-1 crmd: [16635]: debug: s_crmd_fsa: Processing I_JOIN_REQUEST: [ state=S_INTEGRATION cause=C_HA_MESSAGE origin=route_message ]
Apr 14 20:35:51 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_DC_JOIN_PROCESS_REQ
Apr 14 20:35:51 mgraid-S000030311-1 crmd: [16635]: debug: do_dc_join_filter_offer: Processing req from mgraid-s000030311-0
Apr 14 20:35:51 mgraid-S000030311-1 crmd: [16635]: debug: do_dc_join_filter_offer: join-7: Welcoming node mgraid-s000030311-0 (ref join_request-crmd-1302838551-47)
Apr 14 20:35:51 mgraid-S000030311-1 crmd: [16635]: debug: do_dc_join_filter_offer: 1 nodes have been integrated into join-7
Apr 14 20:35:51 mgraid-S000030311-1 crmd: [16635]: debug: check_join_state: Invoked by do_dc_join_filter_offer in state: S_INTEGRATION
Apr 14 20:35:51 mgraid-S000030311-1 crmd: [16635]: debug: do_dc_join_filter_offer: join-7: Still waiting on 1 outstanding offers
Apr 14 20:35:51 mgraid-S000030311-1 crmd: [16635]: debug: s_crmd_fsa: Processing I_JOIN_REQUEST: [ state=S_INTEGRATION cause=C_HA_MESSAGE origin=route_message ]
Apr 14 20:35:51 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_DC_JOIN_PROCESS_REQ
Apr 14 20:35:51 mgraid-S000030311-1 crmd: [16635]: debug: do_dc_join_filter_offer: Processing req from mgraid-s000030311-1
Apr 14 20:35:51 mgraid-S000030311-1 crmd: [16635]: debug: do_dc_join_filter_offer: mgraid-s000030311-1 has a better generation number than the current max mgraid-s000030311-0
Apr 14 20:35:51 mgraid-S000030311-1 crmd: [16635]: debug: log_data_element: do_dc_join_filter_offer: Max generation <generation_tuple validate-with="pacemaker-1.0" crm_feature_set="3.0.1" have-quorum="1" dc-uuid="856c1f72-7cd1-4906-8183-8be87eef96f2" epoch="75" admin_epoch="0" num_updates="1" />
Apr 14 20:35:51 mgraid-S000030311-1 crmd: [16635]: debug: log_data_element: do_dc_join_filter_offer: Their generation <generation_tuple epoch="75" num_updates="1" admin_epoch="0" validate-with="pacemaker-1.0" crm_feature_set="3.0.1" have-quorum="1" dc-uuid="856c1f72-7cd1-4906-8183-8be87eef96f2" cib-last-written="Thu Apr 14 20:35:46 2011" />
Apr 14 20:35:51 mgraid-S000030311-1 crmd: [16635]: debug: do_dc_join_filter_offer: join-7: Welcoming node mgraid-s000030311-1 (ref join_request-crmd-1302838551-106)
Apr 14 20:35:51 mgraid-S000030311-1 crmd: [16635]: debug: do_dc_join_filter_offer: 2 nodes have been integrated into join-7
Apr 14 20:35:51 mgraid-S000030311-1 crmd: [16635]: debug: check_join_state: Invoked by do_dc_join_filter_offer in state: S_INTEGRATION
Apr 14 20:35:51 mgraid-S000030311-1 crmd: [16635]: debug: check_join_state: join-7: Integration of 2 peers complete: do_dc_join_filter_offer
Apr 14 20:35:51 mgraid-S000030311-1 crmd: [16635]: debug: s_crmd_fsa: Processing I_INTEGRATED: [ state=S_INTEGRATION cause=C_FSA_INTERNAL origin=check_join_state ]
Apr 14 20:35:51 mgraid-S000030311-1 crmd: [16635]: info: do_state_transition: State transition S_INTEGRATION -> S_FINALIZE_JOIN [ input=I_INTEGRATED cause=C_FSA_INTERNAL origin=check_join_state ]
Apr 14 20:35:51 mgraid-S000030311-1 crmd: [16635]: info: do_state_transition: All 2 cluster nodes responded to the join offer.
Apr 14 20:35:51 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
Apr 14 20:35:51 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_INTEGRATE_TIMER_STOP
Apr 14 20:35:51 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_FINALIZE_TIMER_START
Apr 14 20:35:51 mgraid-S000030311-1 crmd: [16635]: debug: crm_timer_start: Started Finalization Timer (I_ELECTION:1800000ms), src=197
Apr 14 20:35:51 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_DC_JOIN_FINALIZE
Apr 14 20:35:51 mgraid-S000030311-1 crmd: [16635]: debug: do_dc_join_finalize: Finializing join-7 for 2 clients
Apr 14 20:35:51 mgraid-S000030311-1 crmd: [16635]: info: do_dc_join_finalize: join-7: Syncing the CIB from mgraid-s000030311-1 to the rest of the cluster
Apr 14 20:35:51 mgraid-S000030311-1 crmd: [16635]: debug: fsa_dump_inputs: Added input: 0000000000020000 (R_HAVE_CIB)
Apr 14 20:35:51 mgraid-S000030311-1 cib: [16631]: debug: sync_our_cib: Syncing CIB to all peers
Apr 14 20:35:51 mgraid-S000030311-1 cib: [16631]: info: cib_process_request: Operation complete: op cib_sync for section 'all' (origin=local/crmd/176, version=0.75.1): ok (rc=0)
Apr 14 20:35:51 mgraid-S000030311-1 crmd: [16635]: debug: check_join_state: Invoked by finalize_sync_callback in state: S_FINALIZE_JOIN
Apr 14 20:35:51 mgraid-S000030311-1 crmd: [16635]: debug: check_join_state: join-7: Still waiting on 2 integrated nodes
Apr 14 20:35:51 mgraid-S000030311-1 crmd: [16635]: debug: finalize_sync_callback: Notifying 2 clients of join-7 results
Apr 14 20:35:51 mgraid-S000030311-1 crmd: [16635]: debug: finalize_join_for: join-7: ACK'ing join request from mgraid-s000030311-0, state member
Apr 14 20:35:51 mgraid-S000030311-1 crmd: [16635]: debug: finalize_join_for: join-7: ACK'ing join request from mgraid-s000030311-1, state member
Apr 14 20:35:51 mgraid-S000030311-1 cib: [16631]: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/177, version=0.75.1): ok (rc=0)
Apr 14 20:35:51 mgraid-S000030311-1 crmd: [16635]: debug: handle_request: Raising I_JOIN_RESULT: join-7
Apr 14 20:35:51 mgraid-S000030311-1 crmd: [16635]: debug: s_crmd_fsa: Processing I_JOIN_RESULT: [ state=S_FINALIZE_JOIN cause=C_HA_MESSAGE origin=route_message ]
Apr 14 20:35:51 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_CL_JOIN_RESULT
Apr 14 20:35:51 mgraid-S000030311-1 crmd: [16635]: debug: do_cl_join_finalize_respond: Confirming join join-7: join_ack_nack
Apr 14 20:35:51 mgraid-S000030311-1 lrmd: [16632]: debug: on_msg_get_state:state of rsc mgraid-stonith:1 is LRM_RSC_IDLE
Apr 14 20:35:51 mgraid-S000030311-1 lrmd: [16632]: debug: on_msg_get_state:state of rsc SSJ000030316:1 is LRM_RSC_BUSY
Apr 14 20:35:51 mgraid-S000030311-1 crmd: [16635]: WARN: msg_to_op(1324): failed to get the value of field lrm_opstatus from a ha_msg
Apr 14 20:35:51 mgraid-S000030311-1 crmd: [16635]: info: msg_to_op: Message follows:
Apr 14 20:35:51 mgraid-S000030311-1 crmd: [16635]: info: MSG: Dumping message with 16 fields
Apr 14 20:35:51 mgraid-S000030311-1 crmd: [16635]: info: MSG[0] : [lrm_t=op]
Apr 14 20:35:51 mgraid-S000030311-1 crmd: [16635]: info: MSG[1] : [lrm_rid=SSJ000030316:1]
Apr 14 20:35:51 mgraid-S000030311-1 crmd: [16635]: info: MSG[2] : [lrm_op=start]
Apr 14 20:35:51 mgraid-S000030311-1 crmd: [16635]: info: MSG[3] : [lrm_timeout=300000]
Apr 14 20:35:51 mgraid-S000030311-1 crmd: [16635]: info: MSG[4] : [lrm_interval=0]
Apr 14 20:35:51 mgraid-S000030311-1 crmd: [16635]: info: MSG[5] : [lrm_delay=0]
Apr 14 20:35:51 mgraid-S000030311-1 crmd: [16635]: info: MSG[6] : [lrm_copyparams=1]
Apr 14 20:35:51 mgraid-S000030311-1 crmd: [16635]: info: MSG[7] : [lrm_t_run=0]
Apr 14 20:35:51 mgraid-S000030311-1 crmd: [16635]: info: MSG[8] : [lrm_t_rcchange=0]
Apr 14 20:35:51 mgraid-S000030311-1 crmd: [16635]: info: MSG[9] : [lrm_exec_time=0]
Apr 14 20:35:51 mgraid-S000030311-1 crmd: [16635]: info: MSG[10] : [lrm_queue_time=0]
Apr 14 20:35:51 mgraid-S000030311-1 crmd: [16635]: info: MSG[11] : [lrm_targetrc=-1]
Apr 14 20:35:51 mgraid-S000030311-1 crmd: [16635]: info: MSG[12] : [lrm_app=crmd]
Apr 14 20:35:51 mgraid-S000030311-1 crmd: [16635]: info: MSG[13] : [lrm_userdata=94:3:0:469a0e5c-f535-4c85-84b2-fd971ee76592]
Apr 14 20:35:51 mgraid-S000030311-1 crmd: [16635]: info: MSG[14] : [(2)lrm_param=0x6910d0(938 1098)]
Apr 14 20:35:51 mgraid-S000030311-1 crmd: [16635]: info: MSG: Dumping message with 27 fields
Apr 14 20:35:51 mgraid-S000030311-1 crmd: [16635]: info: MSG[0] : [CRM_meta_clone=1]
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: MSG[1] : [CRM_meta_notify_slave_resource= ]
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: MSG[2] : [CRM_meta_notify_active_resource= ]
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: MSG[3] : [CRM_meta_notify_demote_uname= ]
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: MSG[4] : [CRM_meta_notify_inactive_resource=SSJ000030316:0 SSJ000030316:1 ]
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: MSG[5] : [ssconf=/var/omneon/config/config.J000030316]
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: MSG[6] : [CRM_meta_master_node_max=1]
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: MSG[7] : [CRM_meta_notify_stop_resource= ]
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: MSG[8] : [CRM_meta_notify_master_resource= ]
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: MSG[9] : [CRM_meta_clone_node_max=1]
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: MSG[10] : [CRM_meta_clone_max=2]
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: MSG[11] : [CRM_meta_notify=true]
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: MSG[12] : [CRM_meta_notify_start_resource=SSJ000030316:0 SSJ000030316:1 ]
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: MSG[13] : [CRM_meta_notify_stop_uname= ]
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: MSG[14] : [crm_feature_set=3.0.1]
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: MSG[15] : [CRM_meta_notify_master_uname= ]
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: MSG[16] : [CRM_meta_master_max=1]
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: MSG[17] : [CRM_meta_globally_unique=false]
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: MSG[18] : [CRM_meta_notify_promote_resource=SSJ000030316:0 ]
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: MSG[19] : [CRM_meta_notify_promote_uname=mgraid-s000030311-0 ]
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: MSG[20] : [CRM_meta_notify_active_uname= ]
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: MSG[21] : [CRM_meta_notify_start_uname=mgraid-s000030311-0 mgraid-s000030311-1 ]
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: MSG[22] : [CRM_meta_notify_slave_uname= ]
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: MSG[23] : [CRM_meta_name=start]
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: MSG[24] : [ss_resource=SSJ000030316]
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: MSG[25] : [CRM_meta_notify_demote_resource= ]
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: MSG[26] : [CRM_meta_timeout=300000]
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: MSG[15] : [lrm_callid=15]
Apr 14 20:35:52 mgraid-S000030311-1 cib: [16631]: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/178, version=0.75.1): ok (rc=0)
Apr 14 20:35:52 mgraid-S000030311-1 lrmd: [16632]: debug: on_msg_get_state:state of rsc icms:1 is LRM_RSC_IDLE
Apr 14 20:35:52 mgraid-S000030311-1 lrmd: [16632]: debug: on_msg_get_state:state of rsc omserver:1 is LRM_RSC_IDLE
Apr 14 20:35:52 mgraid-S000030311-1 lrmd: [16632]: debug: on_msg_get_state:state of rsc SSJ000030313:1 is LRM_RSC_BUSY
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: WARN: msg_to_op(1324): failed to get the value of field lrm_opstatus from a ha_msg
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: msg_to_op: Message follows:
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: MSG: Dumping message with 16 fields
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: MSG[0] : [lrm_t=op]
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: MSG[1] : [lrm_rid=SSJ000030313:1]
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: MSG[2] : [lrm_op=start]
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: MSG[3] : [lrm_timeout=300000]
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: MSG[4] : [lrm_interval=0]
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: MSG[5] : [lrm_delay=0]
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: MSG[6] : [lrm_copyparams=1]
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: MSG[7] : [lrm_t_run=0]
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: MSG[8] : [lrm_t_rcchange=0]
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: MSG[9] : [lrm_exec_time=0]
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: MSG[10] : [lrm_queue_time=0]
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: MSG[11] : [lrm_targetrc=-1]
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: MSG[12] : [lrm_app=crmd]
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: MSG[13] : [lrm_userdata=64:3:0:469a0e5c-f535-4c85-84b2-fd971ee76592]
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: MSG[14] : [(2)lrm_param=0x6552d0(938 1098)]
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: MSG: Dumping message with 27 fields
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: MSG[0] : [CRM_meta_clone=1]
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: MSG[1] : [CRM_meta_notify_slave_resource= ]
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: MSG[2] : [CRM_meta_notify_active_resource= ]
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: MSG[3] : [CRM_meta_notify_demote_uname= ]
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: MSG[4] : [CRM_meta_notify_inactive_resource=SSJ000030313:0 SSJ000030313:1 ]
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: MSG[5] : [ssconf=/var/omneon/config/config.J000030313]
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: MSG[6] : [CRM_meta_master_node_max=1]
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: MSG[7] : [CRM_meta_notify_stop_resource= ]
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: MSG[8] : [CRM_meta_notify_master_resource= ]
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: MSG[9] : [CRM_meta_clone_node_max=1]
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: MSG[10] : [CRM_meta_clone_max=2]
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: MSG[11] : [CRM_meta_notify=true]
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: MSG[12] : [CRM_meta_notify_start_resource=SSJ000030313:0 SSJ000030313:1 ]
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: MSG[13] : [CRM_meta_notify_stop_uname= ]
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: MSG[14] : [crm_feature_set=3.0.1]
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: MSG[15] : [CRM_meta_notify_master_uname= ]
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: MSG[16] : [CRM_meta_master_max=1]
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: MSG[17] : [CRM_meta_globally_unique=false]
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: MSG[18] : [CRM_meta_notify_promote_resource=SSJ000030313:1 ]
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: MSG[19] : [CRM_meta_notify_promote_uname=mgraid-s000030311-1 ]
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: MSG[20] : [CRM_meta_notify_active_uname= ]
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: MSG[21] : [CRM_meta_notify_start_uname=mgraid-s000030311-0 mgraid-s000030311-1 ]
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: MSG[22] : [CRM_meta_notify_slave_uname= ]
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: MSG[23] : [CRM_meta_name=start]
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: MSG[24] : [ss_resource=SSJ000030313]
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: MSG[25] : [CRM_meta_notify_demote_resource= ]
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: MSG[26] : [CRM_meta_timeout=300000]
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: MSG[15] : [lrm_callid=14]
Apr 14 20:35:52 mgraid-S000030311-1 lrmd: [16632]: debug: on_msg_get_state:state of rsc SSJ000030312:1 is LRM_RSC_BUSY
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: WARN: msg_to_op(1324): failed to get the value of field lrm_opstatus from a ha_msg
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: msg_to_op: Message follows:
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: MSG: Dumping message with 16 fields
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: MSG[0] : [lrm_t=op]
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: MSG[1] : [lrm_rid=SSJ000030312:1]
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: MSG[2] : [lrm_op=start]
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: MSG[3] : [lrm_timeout=300000]
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: MSG[4] : [lrm_interval=0]
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: MSG[5] : [lrm_delay=0]
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: MSG[6] : [lrm_copyparams=1]
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: MSG[7] : [lrm_t_run=0]
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: MSG[8] : [lrm_t_rcchange=0]
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: MSG[9] : [lrm_exec_time=0]
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: MSG[10] : [lrm_queue_time=0]
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: MSG[11] : [lrm_targetrc=-1]
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: MSG[12] : [lrm_app=crmd]
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: MSG[13] : [lrm_userdata=123:3:0:469a0e5c-f535-4c85-84b2-fd971ee76592]
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: MSG[14] : [(2)lrm_param=0x66d210(938 1098)]
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: MSG: Dumping message with 27 fields
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: MSG[0] : [CRM_meta_clone=1]
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: MSG[1] : [CRM_meta_notify_slave_resource= ]
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: MSG[2] : [CRM_meta_notify_active_resource= ]
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: MSG[3] : [CRM_meta_notify_demote_uname= ]
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: MSG[4] : [CRM_meta_notify_inactive_resource=SSJ000030312:0 SSJ000030312:1 ]
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: MSG[5] : [ssconf=/var/omneon/config/config.J000030312]
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: MSG[6] : [CRM_meta_master_node_max=1]
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: MSG[7] : [CRM_meta_notify_stop_resource= ]
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: MSG[8] : [CRM_meta_notify_master_resource= ]
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: MSG[9] : [CRM_meta_clone_node_max=1]
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: MSG[10] : [CRM_meta_clone_max=2]
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: MSG[11] : [CRM_meta_notify=true]
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: MSG[12] : [CRM_meta_notify_start_resource=SSJ000030312:0 SSJ000030312:1 ]
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: MSG[13] : [CRM_meta_notify_stop_uname= ]
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: MSG[14] : [crm_feature_set=3.0.1]
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: MSG[15] : [CRM_meta_notify_master_uname= ]
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: MSG[16] : [CRM_meta_master_max=1]
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: MSG[17] : [CRM_meta_globally_unique=false]
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: MSG[18] : [CRM_meta_notify_promote_resource=SSJ000030312:0 ]
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: MSG[19] : [CRM_meta_notify_promote_uname=mgraid-s000030311-0 ]
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: MSG[20] : [CRM_meta_notify_active_uname= ]
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: MSG[21] : [CRM_meta_notify_start_uname=mgraid-s000030311-0 mgraid-s000030311-1 ]
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: MSG[22] : [CRM_meta_notify_slave_uname= ]
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: MSG[23] : [CRM_meta_name=start]
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: MSG[24] : [ss_resource=SSJ000030312]
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: MSG[25] : [CRM_meta_notify_demote_resource= ]
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: MSG[26] : [CRM_meta_timeout=300000]
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: MSG[15] : [lrm_callid=16]
Apr 14 20:35:52 mgraid-S000030311-1 lrmd: [16632]: debug: on_msg_get_state:state of rsc SSS000030311:1 is LRM_RSC_BUSY
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: WARN: msg_to_op(1324): failed to get the value of field lrm_opstatus from a ha_msg
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: msg_to_op: Message follows:
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: MSG: Dumping message with 16 fields
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: MSG[0] : [lrm_t=op]
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: MSG[1] : [lrm_rid=SSS000030311:1]
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: MSG[2] : [lrm_op=start]
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: MSG[3] : [lrm_timeout=300000]
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: MSG[4] : [lrm_interval=0]
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: MSG[5] : [lrm_delay=0]
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: MSG[6] : [lrm_copyparams=1]
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: MSG[7] : [lrm_t_run=0]
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: MSG[8] : [lrm_t_rcchange=0]
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: MSG[9] : [lrm_exec_time=0]
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: MSG[10] : [lrm_queue_time=0]
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: MSG[11] : [lrm_targetrc=-1]
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: MSG[12] : [lrm_app=crmd]
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: MSG[13] : [lrm_userdata=36:3:0:469a0e5c-f535-4c85-84b2-fd971ee76592]
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: MSG[14] : [(2)lrm_param=0x6703c0(938 1098)]
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: MSG: Dumping message with 27 fields
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: MSG[0] : [CRM_meta_clone=1]
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: MSG[1] : [CRM_meta_notify_slave_resource= ]
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: MSG[2] : [CRM_meta_notify_active_resource= ]
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: MSG[3] : [CRM_meta_notify_demote_uname= ]
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: MSG[4] : [CRM_meta_notify_inactive_resource=SSS000030311:0 SSS000030311:1 ]
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: MSG[5] : [ssconf=/var/omneon/config/config.S000030311]
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: MSG[6] : [CRM_meta_master_node_max=1]
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: MSG[7] : [CRM_meta_notify_stop_resource= ]
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: MSG[8] : [CRM_meta_notify_master_resource= ]
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: MSG[9] : [CRM_meta_clone_node_max=1]
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: MSG[10] : [CRM_meta_clone_max=2]
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: MSG[11] : [CRM_meta_notify=true]
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: MSG[12] : [CRM_meta_notify_start_resource=SSS000030311:0 SSS000030311:1 ]
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: MSG[13] : [CRM_meta_notify_stop_uname= ]
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: MSG[14] : [crm_feature_set=3.0.1]
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: MSG[15] : [CRM_meta_notify_master_uname= ]
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: MSG[16] : [CRM_meta_master_max=1]
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: MSG[17] : [CRM_meta_globally_unique=false]
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: MSG[18] : [CRM_meta_notify_promote_resource=SSS000030311:0 ]
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: MSG[19] : [CRM_meta_notify_promote_uname=mgraid-s000030311-0 ]
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: MSG[20] : [CRM_meta_notify_active_uname= ]
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: MSG[21] : [CRM_meta_notify_start_uname=mgraid-s000030311-0 mgraid-s000030311-1 ]
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: MSG[22] : [CRM_meta_notify_slave_uname= ]
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: MSG[23] : [CRM_meta_name=start]
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: MSG[24] : [ss_resource=SSS000030311]
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: MSG[25] : [CRM_meta_notify_demote_resource= ]
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: MSG[26] : [CRM_meta_timeout=300000]
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: MSG[15] : [lrm_callid=13]
Apr 14 20:35:52 mgraid-S000030311-1 lrmd: [16632]: debug: on_msg_get_state:state of rsc SSJ000030314:1 is LRM_RSC_BUSY
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: WARN: msg_to_op(1324): failed to get the value of field lrm_opstatus from a ha_msg
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: msg_to_op: Message follows:
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: MSG: Dumping message with 16 fields
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: MSG[0] : [lrm_t=op]
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: MSG[1] : [lrm_rid=SSJ000030314:1]
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: MSG[2] : [lrm_op=start]
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: MSG[3] : [lrm_timeout=300000]
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: MSG[4] : [lrm_interval=0]
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: MSG[5] : [lrm_delay=0]
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: MSG[6] : [lrm_copyparams=1]
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: MSG[7] : [lrm_t_run=0]
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: MSG[8] : [lrm_t_rcchange=0]
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: MSG[9] : [lrm_exec_time=0]
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: MSG[10] : [lrm_queue_time=0]
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: MSG[11] : [lrm_targetrc=-1]
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: MSG[12] : [lrm_app=crmd]
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: MSG[13] : [lrm_userdata=151:3:0:469a0e5c-f535-4c85-84b2-fd971ee76592]
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: MSG[14] : [(2)lrm_param=0x6c49c0(905 1065)]
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: MSG: Dumping message with 27 fields
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: MSG[0] : [CRM_meta_clone=1]
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: MSG[1] : [CRM_meta_notify_slave_resource= ]
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: MSG[2] : [CRM_meta_notify_active_resource= ]
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: MSG[3] : [CRM_meta_notify_demote_uname= ]
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: MSG[4] : [CRM_meta_notify_inactive_resource=SSJ000030314:0 SSJ000030314:1 ]
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: MSG[5] : [ssconf=/var/omneon/config/config.J000030314]
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: MSG[6] : [CRM_meta_master_node_max=1]
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: MSG[7] : [CRM_meta_notify_stop_resource= ]
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: MSG[8] : [CRM_meta_notify_master_resource= ]
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: MSG[9] : [CRM_meta_clone_node_max=1]
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: MSG[10] : [CRM_meta_clone_max=2]
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: MSG[11] : [CRM_meta_notify=true]
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: MSG[12] : [CRM_meta_notify_start_resource=SSJ000030314:0 SSJ000030314:1 ]
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: MSG[13] : [CRM_meta_notify_stop_uname= ]
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: MSG[14] : [crm_feature_set=3.0.1]
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: MSG[15] : [CRM_meta_notify_master_uname= ]
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: MSG[16] : [CRM_meta_master_max=1]
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: MSG[17] : [CRM_meta_globally_unique=false]
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: MSG[18] : [CRM_meta_notify_promote_resource= ]
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: MSG[19] : [CRM_meta_notify_promote_uname= ]
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: MSG[20] : [CRM_meta_notify_active_uname= ]
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: MSG[21] : [CRM_meta_notify_start_uname=mgraid-s000030311-0 mgraid-s000030311-1 ]
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: MSG[22] : [CRM_meta_notify_slave_uname= ]
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: MSG[23] : [CRM_meta_name=start]
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: MSG[24] : [ss_resource=SSJ000030314]
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: MSG[25] : [CRM_meta_notify_demote_resource= ]
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: MSG[26] : [CRM_meta_timeout=300000]
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: MSG[15] : [lrm_callid=17]
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: debug: do_cl_join_finalize_respond: join-7: Join complete.  Sending local LRM status to mgraid-s000030311-1
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_DC_JOIN_PROCESS_ACK
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: debug: do_dc_join_ack: Ignoring op=join_ack_nack message from mgraid-s000030311-1
Apr 14 20:35:52 mgraid-S000030311-1 ccm: [16630]: info: client (pid=20272) removed from ccm
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: debug: s_crmd_fsa: Processing I_JOIN_RESULT: [ state=S_FINALIZE_JOIN cause=C_HA_MESSAGE origin=route_message ]
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_CL_JOIN_RESULT
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_DC_JOIN_PROCESS_ACK
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: do_dc_join_ack: join-7: Updating node state to member for mgraid-s000030311-1
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: debug: do_dc_join_ack: join-7: Registered callback for LRM update 180
Apr 14 20:35:52 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: Processing cib_delete op for //node_state[@uname='mgraid-s000030311-1']/lrm (/cib/status/node_state[2]/lrm)
Apr 14 20:35:52 mgraid-S000030311-1 cib: [16631]: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='mgraid-s000030311-1']/lrm (origin=local/crmd/179, version=0.75.2): ok (rc=0)
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: debug: te_update_diff: Processing diff (cib_delete): 0.75.1 -> 0.75.2 (S_FINALIZE_JOIN)
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify/cib_update_result/diff
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: debug: te_update_diff: No match for deleted action //diff-added//cib//lrm_rsc_op[@id='omserver:1_monitor_0'] (omserver:1_monitor_0 on 856c1f72-7cd1-4906-8183-8be87eef96f2)
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: abort_transition_graph: te_update_diff:267 - Triggered transition abort (complete=0, tag=lrm_rsc_op, id=omserver:1_monitor_0, magic=0:0;14:2:7:469a0e5c-f535-4c85-84b2-fd971ee76592, cib=0.75.2) : Resource op removal
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: erase_xpath_callback: Deletion of "//node_state[@uname='mgraid-s000030311-1']/lrm": ok (rc=0)
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: debug: run_graph: Transition 3 (Complete=29, Pending=10, Fired=0, Skipped=34, Incomplete=45, Source=/var/lib/pengine/pe-input-6.bz2): In-progress
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: debug: te_update_diff: Processing diff (cib_modify): 0.75.2 -> 0.75.3 (S_FINALIZE_JOIN)
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify/cib_update_result/diff
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: process_graph_event: Detected action mgraid-stonith:1_monitor_0 from a different transition: 2 vs. 3
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: abort_transition_graph: process_graph_event:462 - Triggered transition abort (complete=0, tag=lrm_rsc_op, id=mgraid-stonith:1_monitor_0, magic=0:7;12:2:7:469a0e5c-f535-4c85-84b2-fd971ee76592, cib=0.75.3) : Old event
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: match_graph_event: Action mgraid-stonith:1_start_0 (8) confirmed on mgraid-s000030311-1 (rc=0)
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: process_graph_event: Detected action SSJ000030316:1_monitor_0 from a different transition: 2 vs. 3
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: abort_transition_graph: process_graph_event:462 - Triggered transition abort (complete=0, tag=lrm_rsc_op, id=SSJ000030316:1_monitor_0, magic=0:7;17:2:7:469a0e5c-f535-4c85-84b2-fd971ee76592, cib=0.75.3) : Old event
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: process_graph_event: Detected action icms:1_monitor_0 from a different transition: 2 vs. 3
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: abort_transition_graph: process_graph_event:462 - Triggered transition abort (complete=0, tag=lrm_rsc_op, id=icms:1_monitor_0, magic=0:0;13:2:7:469a0e5c-f535-4c85-84b2-fd971ee76592, cib=0.75.3) : Old event
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: match_graph_event: Action icms:1_monitor_5000 (18) confirmed on mgraid-s000030311-1 (rc=0)
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: process_graph_event: Detected action omserver:1_monitor_0 from a different transition: 2 vs. 3
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: abort_transition_graph: process_graph_event:462 - Triggered transition abort (complete=0, tag=lrm_rsc_op, id=omserver:1_monitor_0, magic=0:0;14:2:7:469a0e5c-f535-4c85-84b2-fd971ee76592, cib=0.75.3) : Old event
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: match_graph_event: Action omserver:1_monitor_5000 (28) confirmed on mgraid-s000030311-1 (rc=0)
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: process_graph_event: Detected action SSJ000030313:1_monitor_0 from a different transition: 2 vs. 3
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: abort_transition_graph: process_graph_event:462 - Triggered transition abort (complete=0, tag=lrm_rsc_op, id=SSJ000030313:1_monitor_0, magic=0:7;16:2:7:469a0e5c-f535-4c85-84b2-fd971ee76592, cib=0.75.3) : Old event
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: process_graph_event: Detected action SSJ000030312:1_monitor_0 from a different transition: 2 vs. 3
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: abort_transition_graph: process_graph_event:462 - Triggered transition abort (complete=0, tag=lrm_rsc_op, id=SSJ000030312:1_monitor_0, magic=0:7;18:2:7:469a0e5c-f535-4c85-84b2-fd971ee76592, cib=0.75.3) : Old event
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: process_graph_event: Detected action SSS000030311:1_monitor_0 from a different transition: 2 vs. 3
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: abort_transition_graph: process_graph_event:462 - Triggered transition abort (complete=0, tag=lrm_rsc_op, id=SSS000030311:1_monitor_0, magic=0:7;15:2:7:469a0e5c-f535-4c85-84b2-fd971ee76592, cib=0.75.3) : Old event
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: match_graph_event: Action SSJ000030314:1_monitor_0 (6) confirmed on mgraid-s000030311-1 (rc=0)
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: debug: join_update_complete_callback: Join update 180 complete
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: debug: check_join_state: Invoked by join_update_complete_callback in state: S_FINALIZE_JOIN
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: debug: check_join_state: join-7: Still waiting on 1 finalized nodes
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: debug: run_graph: Transition 3 (Complete=29, Pending=10, Fired=0, Skipped=34, Incomplete=45, Source=/var/lib/pengine/pe-input-6.bz2): In-progress
Apr 14 20:35:52 mgraid-S000030311-1 attrd: [16634]: info: attrd_ha_callback: flush message from mgraid-s000030311-0
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: debug: s_crmd_fsa: Processing I_JOIN_RESULT: [ state=S_FINALIZE_JOIN cause=C_HA_MESSAGE origin=route_message ]
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_CL_JOIN_RESULT
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_DC_JOIN_PROCESS_ACK
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: do_dc_join_ack: join-7: Updating node state to member for mgraid-s000030311-0
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: debug: do_dc_join_ack: join-7: Registered callback for LRM update 182
Apr 14 20:35:52 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: Processing cib_delete op for //node_state[@uname='mgraid-s000030311-0']/lrm (/cib/status/node_state[1]/lrm)
Apr 14 20:35:52 mgraid-S000030311-1 cib: [16631]: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='mgraid-s000030311-0']/lrm (origin=local/crmd/181, version=0.75.4): ok (rc=0)
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: debug: te_update_diff: Processing diff (cib_delete): 0.75.3 -> 0.75.4 (S_FINALIZE_JOIN)
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify/cib_update_result/diff
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: debug: te_update_diff: No match for deleted action //diff-added//cib//lrm_rsc_op[@id='SSJ000030316:0_monitor_0'] (SSJ000030316:0_monitor_0 on f4e5e15c-d06b-4e37-89b9-4621af05128f)
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: abort_transition_graph: te_update_diff:267 - Triggered transition abort (complete=0, tag=lrm_rsc_op, id=SSJ000030316:0_monitor_0, magic=0:7;9:2:7:469a0e5c-f535-4c85-84b2-fd971ee76592, cib=0.75.4) : Resource op removal
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: erase_xpath_callback: Deletion of "//node_state[@uname='mgraid-s000030311-0']/lrm": ok (rc=0)
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: debug: run_graph: Transition 3 (Complete=29, Pending=10, Fired=0, Skipped=34, Incomplete=45, Source=/var/lib/pengine/pe-input-6.bz2): In-progress
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: debug: te_update_diff: Processing diff (cib_modify): 0.75.4 -> 0.75.5 (S_FINALIZE_JOIN)
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify/cib_update_result/diff
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: process_graph_event: Detected action mgraid-stonith:0_monitor_0 from a different transition: 2 vs. 3
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: abort_transition_graph: process_graph_event:462 - Triggered transition abort (complete=0, tag=lrm_rsc_op, id=mgraid-stonith:0_monitor_0, magic=0:7;4:2:7:469a0e5c-f535-4c85-84b2-fd971ee76592, cib=0.75.5) : Old event
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: match_graph_event: Action mgraid-stonith:0_start_0 (7) confirmed on mgraid-s000030311-0 (rc=0)
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: process_graph_event: Detected action SSJ000030313:0_monitor_0 from a different transition: 2 vs. 3
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: abort_transition_graph: process_graph_event:462 - Triggered transition abort (complete=0, tag=lrm_rsc_op, id=SSJ000030313:0_monitor_0, magic=0:7;8:2:7:469a0e5c-f535-4c85-84b2-fd971ee76592, cib=0.75.5) : Old event
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: process_graph_event: Detected action icms:0_monitor_0 from a different transition: 2 vs. 3
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: abort_transition_graph: process_graph_event:462 - Triggered transition abort (complete=0, tag=lrm_rsc_op, id=icms:0_monitor_0, magic=0:0;5:2:7:469a0e5c-f535-4c85-84b2-fd971ee76592, cib=0.75.5) : Old event
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: match_graph_event: Action icms:0_monitor_5000 (15) confirmed on mgraid-s000030311-0 (rc=0)
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: process_graph_event: Detected action omserver:0_monitor_0 from a different transition: 2 vs. 3
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: abort_transition_graph: process_graph_event:462 - Triggered transition abort (complete=0, tag=lrm_rsc_op, id=omserver:0_monitor_0, magic=0:0;6:2:7:469a0e5c-f535-4c85-84b2-fd971ee76592, cib=0.75.5) : Old event
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: match_graph_event: Action omserver:0_monitor_5000 (25) confirmed on mgraid-s000030311-0 (rc=0)
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: process_graph_event: Detected action SSJ000030316:0_monitor_0 from a different transition: 2 vs. 3
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: abort_transition_graph: process_graph_event:462 - Triggered transition abort (complete=0, tag=lrm_rsc_op, id=SSJ000030316:0_monitor_0, magic=0:7;9:2:7:469a0e5c-f535-4c85-84b2-fd971ee76592, cib=0.75.5) : Old event
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: process_graph_event: Detected action SSJ000030312:0_monitor_0 from a different transition: 2 vs. 3
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: abort_transition_graph: process_graph_event:462 - Triggered transition abort (complete=0, tag=lrm_rsc_op, id=SSJ000030312:0_monitor_0, magic=0:7;10:2:7:469a0e5c-f535-4c85-84b2-fd971ee76592, cib=0.75.5) : Old event
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: process_graph_event: Detected action SSS000030311:0_monitor_0 from a different transition: 2 vs. 3
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: abort_transition_graph: process_graph_event:462 - Triggered transition abort (complete=0, tag=lrm_rsc_op, id=SSS000030311:0_monitor_0, magic=0:7;7:2:7:469a0e5c-f535-4c85-84b2-fd971ee76592, cib=0.75.5) : Old event
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: match_graph_event: Action SSJ000030314:0_monitor_0 (4) confirmed on mgraid-s000030311-0 (rc=0)
Apr 14 20:35:52 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='master-SSJ000030316:0'] does not exist
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: debug: join_update_complete_callback: Join update 182 complete
Apr 14 20:35:52 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for master-SSJ000030316:0=(null) passed
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: debug: check_join_state: Invoked by join_update_complete_callback in state: S_FINALIZE_JOIN
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: debug: check_join_state: join-7 complete: join_update_complete_callback
Apr 14 20:35:52 mgraid-S000030311-1 attrd: [16634]: info: attrd_ha_callback: flush message from mgraid-s000030311-0
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: debug: s_crmd_fsa: Processing I_FINALIZED: [ state=S_FINALIZE_JOIN cause=C_FSA_INTERNAL origin=check_join_state ]
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: do_state_transition: State transition S_FINALIZE_JOIN -> S_POLICY_ENGINE [ input=I_FINALIZED cause=C_FSA_INTERNAL origin=check_join_state ]
Apr 14 20:35:52 mgraid-S000030311-1 crmd: [16635]: info: populate_cib_nodes_ha: Requesting the list of configured nodes
Apr 14 20:35:52 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='master-SSJ000030313:0'] does not exist
Apr 14 20:35:52 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for master-SSJ000030313:0=(null) passed
Apr 14 20:35:52 mgraid-S000030311-1 attrd: [16634]: info: attrd_ha_callback: flush message from mgraid-s000030311-0
Apr 14 20:35:52 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='master-SSJ000030316:1'] does not exist
Apr 14 20:35:52 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for master-SSJ000030316:1=(null) passed
Apr 14 20:35:52 mgraid-S000030311-1 attrd: [16634]: info: attrd_ha_callback: flush message from mgraid-s000030311-0
Apr 14 20:35:52 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='master-SSS000030311:0'] does not exist
Apr 14 20:35:52 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for master-SSS000030311:0=(null) passed
Apr 14 20:35:52 mgraid-S000030311-1 attrd: [16634]: info: attrd_ha_callback: flush message from mgraid-s000030311-0
Apr 14 20:35:52 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='master-SSJ000030313:1'] does not exist
Apr 14 20:35:52 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for master-SSJ000030313:1=(null) passed
Apr 14 20:35:52 mgraid-S000030311-1 attrd: [16634]: info: attrd_ha_callback: flush message from mgraid-s000030311-0
Apr 14 20:35:52 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='terminate'] does not exist
Apr 14 20:35:52 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for terminate=(null) passed
Apr 14 20:35:52 mgraid-S000030311-1 attrd: [16634]: info: attrd_ha_callback: flush message from mgraid-s000030311-0
Apr 14 20:35:52 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='master-SSJ000030312:0'] does not exist
Apr 14 20:35:52 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for master-SSJ000030312:0=(null) passed
Apr 14 20:35:52 mgraid-S000030311-1 attrd: [16634]: info: attrd_ha_callback: flush message from mgraid-s000030311-0
Apr 14 20:35:52 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='master-SSS000030311:1'] does not exist
Apr 14 20:35:52 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for master-SSS000030311:1=(null) passed
Apr 14 20:35:52 mgraid-S000030311-1 attrd: [16634]: info: attrd_ha_callback: flush message from mgraid-s000030311-0
Apr 14 20:35:52 mgraid-S000030311-1 attrd: [16634]: info: find_hash_entry: Creating hash entry for master-SSJ000030314:0
Apr 14 20:35:52 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='master-SSJ000030314:0'] does not exist
Apr 14 20:35:52 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for master-SSJ000030314:0=(null) passed
Apr 14 20:35:53 mgraid-S000030311-1 attrd: [16634]: info: attrd_ha_callback: flush message from mgraid-s000030311-0
Apr 14 20:35:53 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='master-SSJ000030312:1'] does not exist
Apr 14 20:35:53 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for master-SSJ000030312:1=(null) passed
Apr 14 20:35:53 mgraid-S000030311-1 attrd: [16634]: info: attrd_ha_callback: flush message from mgraid-s000030311-0
Apr 14 20:35:53 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='shutdown'] does not exist
Apr 14 20:35:53 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for shutdown=(null) passed
Apr 14 20:35:53 mgraid-S000030311-1 attrd: [16634]: info: attrd_ha_callback: flush message from mgraid-s000030311-0
Apr 14 20:35:53 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='master-SSJ000030314:1'] does not exist
Apr 14 20:35:53 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for master-SSJ000030314:1=(null) passed
Apr 14 20:35:53 mgraid-S000030311-1 attrd: [16634]: info: attrd_ha_callback: flush message from mgraid-s000030311-0
Apr 14 20:35:53 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='probe_complete'] (/cib/status/node_state[2]/transient_attributes/instance_attributes/nvpair)
Apr 14 20:35:53 mgraid-S000030311-1 attrd: [16634]: debug: log_data_element: find_nvpair_attr: Match <nvpair id="status-856c1f72-7cd1-4906-8183-8be87eef96f2-probe_complete" name="probe_complete" value="true" />
Apr 14 20:35:53 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update 364 for probe_complete=true passed
Apr 14 20:35:53 mgraid-S000030311-1 lrmd: [16632]: debug: rsc:omserver:1:10: monitor
Apr 14 20:35:53 mgraid-S000030311-1 lrmd: [16632]: debug: rsc:icms:1:9: monitor
Apr 14 20:35:53 mgraid-S000030311-1 ccm: [16630]: info: client (pid=20285) removed from ccm
Apr 14 20:35:53 mgraid-S000030311-1 crmd: [16635]: debug: populate_cib_nodes_ha: Node: mgraid-s000030311-1 (uuid: 856c1f72-7cd1-4906-8183-8be87eef96f2)
Apr 14 20:35:53 mgraid-S000030311-1 crmd: [16635]: debug: populate_cib_nodes_ha: Node: mgraid-s000030311-0 (uuid: f4e5e15c-d06b-4e37-89b9-4621af05128f)
Apr 14 20:35:53 mgraid-S000030311-1 crmd: [16635]: debug: ghash_update_cib_node: Updating mgraid-s000030311-0: true (overwrite=true) hash_size=2
Apr 14 20:35:53 mgraid-S000030311-1 crmd: [16635]: debug: ghash_update_cib_node: Updating mgraid-s000030311-1: true (overwrite=true) hash_size=2
Apr 14 20:35:53 mgraid-S000030311-1 crmd: [16635]: info: do_state_transition: All 2 cluster nodes are eligible to run resources.
Apr 14 20:35:53 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
Apr 14 20:35:53 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_INTEGRATE_TIMER_STOP
Apr 14 20:35:53 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_FINALIZE_TIMER_STOP
Apr 14 20:35:53 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_DC_JOIN_FINAL
Apr 14 20:35:53 mgraid-S000030311-1 crmd: [16635]: info: do_dc_join_final: Ensuring DC, quorum and node attributes are up-to-date
Apr 14 20:35:53 mgraid-S000030311-1 crmd: [16635]: debug: attrd_update: Sent update: (null)=(null) for localhost
Apr 14 20:35:53 mgraid-S000030311-1 crmd: [16635]: info: crm_update_quorum: Updating quorum status to true (call=185)
Apr 14 20:35:53 mgraid-S000030311-1 attrd: [16634]: info: attrd_local_callback: Sending full refresh (origin=crmd)
Apr 14 20:35:53 mgraid-S000030311-1 attrd: [16634]: info: attrd_trigger_update: Sending flush op to all hosts for: master-SSJ000030316:0 (<null>)
Apr 14 20:35:53 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_TE_CANCEL
Apr 14 20:35:53 mgraid-S000030311-1 crmd: [16635]: debug: do_te_invoke: Cancelling the transition: active
Apr 14 20:35:53 mgraid-S000030311-1 crmd: [16635]: info: abort_transition_graph: do_te_invoke:191 - Triggered transition abort (complete=0) : Peer Cancelled
Apr 14 20:35:53 mgraid-S000030311-1 crmd: [16635]: debug: register_fsa_input_adv: Stalling the FSA pending further input: cause=C_FSA_INTERNAL
Apr 14 20:35:53 mgraid-S000030311-1 crmd: [16635]: debug: s_crmd_fsa: Exiting the FSA: queue=0, fsa_actions=0x800000000000, stalled=true
Apr 14 20:35:53 mgraid-S000030311-1 crmd: [16635]: debug: run_graph: Transition 3 (Complete=29, Pending=10, Fired=0, Skipped=34, Incomplete=45, Source=/var/lib/pengine/pe-input-6.bz2): In-progress
Apr 14 20:35:53 mgraid-S000030311-1 cib: [16631]: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/183, version=0.75.5): ok (rc=0)
Apr 14 20:35:53 mgraid-S000030311-1 cib: [16631]: info: cib_process_request: Operation complete: op cib_modify for section cib (origin=local/crmd/185, version=0.75.5): ok (rc=0)
Apr 14 20:35:53 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='master-SSJ000030316:0'] does not exist
Apr 14 20:35:53 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for master-SSJ000030316:0=(null) passed
Apr 14 20:35:53 mgraid-S000030311-1 attrd: [16634]: info: attrd_trigger_update: Sending flush op to all hosts for: master-SSJ000030316:1 (<null>)
Apr 14 20:35:53 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='master-SSJ000030316:1'] does not exist
Apr 14 20:35:53 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for master-SSJ000030316:1=(null) passed
Apr 14 20:35:53 mgraid-S000030311-1 attrd: [16634]: info: attrd_trigger_update: Sending flush op to all hosts for: master-SSJ000030313:1 (<null>)
Apr 14 20:35:53 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='master-SSJ000030313:1'] does not exist
Apr 14 20:35:53 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for master-SSJ000030313:1=(null) passed
Apr 14 20:35:53 mgraid-S000030311-1 attrd: [16634]: info: attrd_trigger_update: Sending flush op to all hosts for: master-SSJ000030313:0 (<null>)
Apr 14 20:35:53 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='master-SSJ000030313:0'] does not exist
Apr 14 20:35:53 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for master-SSJ000030313:0=(null) passed
Apr 14 20:35:53 mgraid-S000030311-1 attrd: [16634]: info: attrd_trigger_update: Sending flush op to all hosts for: master-SSS000030311:1 (<null>)
Apr 14 20:35:53 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='master-SSS000030311:1'] does not exist
Apr 14 20:35:53 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for master-SSS000030311:1=(null) passed
Apr 14 20:35:53 mgraid-S000030311-1 attrd: [16634]: info: attrd_trigger_update: Sending flush op to all hosts for: terminate (<null>)
Apr 14 20:35:53 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='terminate'] does not exist
Apr 14 20:35:53 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for terminate=(null) passed
Apr 14 20:35:53 mgraid-S000030311-1 attrd: [16634]: info: attrd_trigger_update: Sending flush op to all hosts for: master-SSS000030311:0 (<null>)
Apr 14 20:35:53 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='master-SSS000030311:0'] does not exist
Apr 14 20:35:53 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for master-SSS000030311:0=(null) passed
Apr 14 20:35:53 mgraid-S000030311-1 attrd: [16634]: info: attrd_trigger_update: Sending flush op to all hosts for: master-SSJ000030312:1 (<null>)
Apr 14 20:35:53 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='master-SSJ000030312:1'] does not exist
Apr 14 20:35:53 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for master-SSJ000030312:1=(null) passed
Apr 14 20:35:53 mgraid-S000030311-1 attrd: [16634]: info: attrd_trigger_update: Sending flush op to all hosts for: master-SSJ000030312:0 (<null>)
Apr 14 20:35:53 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='master-SSJ000030312:0'] does not exist
Apr 14 20:35:53 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for master-SSJ000030312:0=(null) passed
Apr 14 20:35:53 mgraid-S000030311-1 attrd: [16634]: info: attrd_trigger_update: Sending flush op to all hosts for: master-SSJ000030314:1 (<null>)
Apr 14 20:35:53 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='master-SSJ000030314:1'] does not exist
Apr 14 20:35:53 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for master-SSJ000030314:1=(null) passed
Apr 14 20:35:53 mgraid-S000030311-1 attrd: [16634]: info: attrd_trigger_update: Sending flush op to all hosts for: shutdown (<null>)
Apr 14 20:35:53 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='shutdown'] does not exist
Apr 14 20:35:53 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for shutdown=(null) passed
Apr 14 20:35:53 mgraid-S000030311-1 attrd: [16634]: info: attrd_trigger_update: Sending flush op to all hosts for: probe_complete (true)
Apr 14 20:35:53 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='probe_complete'] (/cib/status/node_state[2]/transient_attributes/instance_attributes/nvpair)
Apr 14 20:35:53 mgraid-S000030311-1 attrd: [16634]: debug: log_data_element: find_nvpair_attr: Match <nvpair id="status-856c1f72-7cd1-4906-8183-8be87eef96f2-probe_complete" name="probe_complete" value="true" />
Apr 14 20:35:53 mgraid-S000030311-1 attrd: [16634]: info: attrd_trigger_update: Sending flush op to all hosts for: master-SSJ000030314:0 (<null>)
Apr 14 20:35:53 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='master-SSJ000030314:0'] does not exist
Apr 14 20:35:53 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for master-SSJ000030314:0=(null) passed
Apr 14 20:35:53 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update 377 for probe_complete=true passed
Apr 14 20:35:53 mgraid-S000030311-1 attrd: [16634]: info: attrd_ha_callback: flush message from mgraid-s000030311-1
Apr 14 20:35:53 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='master-SSJ000030316:0'] does not exist
Apr 14 20:35:53 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for master-SSJ000030316:0=(null) passed
Apr 14 20:35:53 mgraid-S000030311-1 attrd: [16634]: info: attrd_ha_callback: flush message from mgraid-s000030311-1
Apr 14 20:35:53 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='master-SSJ000030316:1'] does not exist
Apr 14 20:35:53 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for master-SSJ000030316:1=(null) passed
Apr 14 20:35:53 mgraid-S000030311-1 attrd: [16634]: info: attrd_ha_callback: flush message from mgraid-s000030311-1
Apr 14 20:35:53 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='master-SSJ000030313:1'] does not exist
Apr 14 20:35:53 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for master-SSJ000030313:1=(null) passed
Apr 14 20:35:53 mgraid-S000030311-1 attrd: [16634]: info: attrd_ha_callback: flush message from mgraid-s000030311-1
Apr 14 20:35:53 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='master-SSJ000030313:0'] does not exist
Apr 14 20:35:53 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for master-SSJ000030313:0=(null) passed
Apr 14 20:35:53 mgraid-S000030311-1 attrd: [16634]: info: attrd_ha_callback: flush message from mgraid-s000030311-1
Apr 14 20:35:53 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='master-SSS000030311:1'] does not exist
Apr 14 20:35:53 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for master-SSS000030311:1=(null) passed
Apr 14 20:35:53 mgraid-S000030311-1 attrd: [16634]: info: attrd_ha_callback: flush message from mgraid-s000030311-1
Apr 14 20:35:53 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='terminate'] does not exist
Apr 14 20:35:53 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for terminate=(null) passed
Apr 14 20:35:53 mgraid-S000030311-1 attrd: [16634]: info: attrd_ha_callback: flush message from mgraid-s000030311-1
Apr 14 20:35:53 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='master-SSS000030311:0'] does not exist
Apr 14 20:35:53 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for master-SSS000030311:0=(null) passed
Apr 14 20:35:53 mgraid-S000030311-1 attrd: [16634]: info: attrd_ha_callback: flush message from mgraid-s000030311-1
Apr 14 20:35:53 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='master-SSJ000030312:1'] does not exist
Apr 14 20:35:53 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for master-SSJ000030312:1=(null) passed
Apr 14 20:35:53 mgraid-S000030311-1 attrd: [16634]: info: attrd_ha_callback: flush message from mgraid-s000030311-1
Apr 14 20:35:53 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='master-SSJ000030312:0'] does not exist
Apr 14 20:35:53 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for master-SSJ000030312:0=(null) passed
Apr 14 20:35:53 mgraid-S000030311-1 attrd: [16634]: info: attrd_ha_callback: flush message from mgraid-s000030311-1
Apr 14 20:35:53 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='master-SSJ000030314:1'] does not exist
Apr 14 20:35:53 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for master-SSJ000030314:1=(null) passed
Apr 14 20:35:53 mgraid-S000030311-1 attrd: [16634]: info: attrd_ha_callback: flush message from mgraid-s000030311-1
Apr 14 20:35:53 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='shutdown'] does not exist
Apr 14 20:35:53 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for shutdown=(null) passed
Apr 14 20:35:53 mgraid-S000030311-1 attrd: [16634]: info: attrd_ha_callback: flush message from mgraid-s000030311-1
Apr 14 20:35:53 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='master-SSJ000030314:0'] does not exist
Apr 14 20:35:53 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for master-SSJ000030314:0=(null) passed
Apr 14 20:35:54 mgraid-S000030311-1 ccm: [16630]: info: client (pid=20363) removed from ccm
Apr 14 20:35:55 mgraid-S000030311-1 ccm: [16630]: info: client (pid=20376) removed from ccm
Apr 14 20:35:56 mgraid-S000030311-1 ccm: [16630]: info: client (pid=20540) removed from ccm
Apr 14 20:35:57 mgraid-S000030311-1 ccm: [16630]: info: client (pid=20622) removed from ccm
Apr 14 20:35:58 mgraid-S000030311-1 lrmd: [16632]: debug: rsc:omserver:1:10: monitor
Apr 14 20:35:58 mgraid-S000030311-1 lrmd: [16632]: debug: rsc:icms:1:9: monitor
Apr 14 20:35:58 mgraid-S000030311-1 ccm: [16630]: info: client (pid=20652) removed from ccm
Apr 14 20:35:59 mgraid-S000030311-1 ccm: [16630]: info: client (pid=20676) removed from ccm
Apr 14 20:36:00 mgraid-S000030311-1 ccm: [16630]: info: client (pid=20694) removed from ccm
Apr 14 20:36:01 mgraid-S000030311-1 ccm: [16630]: info: client (pid=20700) removed from ccm
Apr 14 20:36:01 mgraid-S000030311-1 cib: [16631]: debug: activateCibXml: Triggering CIB write for cib_replace op
Apr 14 20:36:01 mgraid-S000030311-1 cib: [16631]: info: cib_replace_notify: Replaced: 0.75.5 -> 0.77.1 from <null>
Apr 14 20:36:01 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: - <cib epoch="75" num_updates="5" admin_epoch="0" />
Apr 14 20:36:01 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: + <cib epoch="77" num_updates="1" admin_epoch="0" >
Apr 14 20:36:01 mgraid-S000030311-1 attrd: [16634]: info: do_cib_replaced: Sending full refresh
Apr 14 20:36:01 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +   <configuration >
Apr 14 20:36:01 mgraid-S000030311-1 attrd: [16634]: info: attrd_trigger_update: Sending flush op to all hosts for: master-SSJ000030316:0 (<null>)
Apr 14 20:36:01 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +     <constraints >
Apr 14 20:36:01 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +       <rsc_location id="ms-SSJ000030315-master-w1" rsc="ms-SSJ000030315" __crm_diff_marker__="added:top" >
Apr 14 20:36:01 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +         <rule id="ms-SSJ000030315-master-w1-rule" role="master" score="100" >
Apr 14 20:36:01 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +           <expression attribute="#uname" id="ms-SSJ000030315-master-w1-expression" operation="eq" value="mgraid-s000030311-1" />
Apr 14 20:36:01 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +         </rule>
Apr 14 20:36:01 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +       </rsc_location>
Apr 14 20:36:01 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +     </constraints>
Apr 14 20:36:01 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +   </configuration>
Apr 14 20:36:01 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: + </cib>
Apr 14 20:36:01 mgraid-S000030311-1 cib: [16631]: info: cib_process_request: Operation complete: op cib_replace for section 'all' (origin=local/crm_shadow/2, version=0.77.1): ok (rc=0)
Apr 14 20:36:01 mgraid-S000030311-1 crmd: [16635]: debug: te_update_diff: Processing diff (cib_replace): 0.75.5 -> 0.77.1 (S_POLICY_ENGINE)
Apr 14 20:36:01 mgraid-S000030311-1 crmd: [16635]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify/cib_update_result/diff
Apr 14 20:36:01 mgraid-S000030311-1 crmd: [16635]: info: abort_transition_graph: need_abort:59 - Triggered transition abort (complete=0) : Non-status change
Apr 14 20:36:01 mgraid-S000030311-1 crmd: [16635]: info: need_abort: Aborting on change to epoch
Apr 14 20:36:01 mgraid-S000030311-1 crmd: [16635]: debug: do_cib_replaced: Updating the CIB after a replace: DC=true
Apr 14 20:36:01 mgraid-S000030311-1 crmd: [16635]: info: populate_cib_nodes_ha: Requesting the list of configured nodes
Apr 14 20:36:01 mgraid-S000030311-1 cib: [16631]: debug: xmlfromIPC: Peer disconnected
Apr 14 20:36:01 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='master-SSJ000030316:0'] does not exist
Apr 14 20:36:01 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for master-SSJ000030316:0=(null) passed
Apr 14 20:36:01 mgraid-S000030311-1 attrd: [16634]: info: attrd_trigger_update: Sending flush op to all hosts for: master-SSJ000030316:1 (<null>)
Apr 14 20:36:01 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='master-SSJ000030316:1'] does not exist
Apr 14 20:36:01 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for master-SSJ000030316:1=(null) passed
Apr 14 20:36:01 mgraid-S000030311-1 attrd: [16634]: info: attrd_trigger_update: Sending flush op to all hosts for: master-SSJ000030313:1 (<null>)
Apr 14 20:36:01 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='master-SSJ000030313:1'] does not exist
Apr 14 20:36:01 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for master-SSJ000030313:1=(null) passed
Apr 14 20:36:01 mgraid-S000030311-1 attrd: [16634]: info: attrd_trigger_update: Sending flush op to all hosts for: master-SSJ000030313:0 (<null>)
Apr 14 20:36:01 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='master-SSJ000030313:0'] does not exist
Apr 14 20:36:01 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for master-SSJ000030313:0=(null) passed
Apr 14 20:36:01 mgraid-S000030311-1 attrd: [16634]: info: attrd_trigger_update: Sending flush op to all hosts for: master-SSS000030311:1 (<null>)
Apr 14 20:36:01 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='master-SSS000030311:1'] does not exist
Apr 14 20:36:01 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for master-SSS000030311:1=(null) passed
Apr 14 20:36:01 mgraid-S000030311-1 attrd: [16634]: info: attrd_trigger_update: Sending flush op to all hosts for: terminate (<null>)
Apr 14 20:36:01 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='terminate'] does not exist
Apr 14 20:36:01 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for terminate=(null) passed
Apr 14 20:36:01 mgraid-S000030311-1 attrd: [16634]: info: attrd_trigger_update: Sending flush op to all hosts for: master-SSS000030311:0 (<null>)
Apr 14 20:36:01 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='master-SSS000030311:0'] does not exist
Apr 14 20:36:01 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for master-SSS000030311:0=(null) passed
Apr 14 20:36:01 mgraid-S000030311-1 attrd: [16634]: info: attrd_trigger_update: Sending flush op to all hosts for: master-SSJ000030312:1 (<null>)
Apr 14 20:36:01 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='master-SSJ000030312:1'] does not exist
Apr 14 20:36:01 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for master-SSJ000030312:1=(null) passed
Apr 14 20:36:01 mgraid-S000030311-1 attrd: [16634]: info: attrd_trigger_update: Sending flush op to all hosts for: master-SSJ000030312:0 (<null>)
Apr 14 20:36:01 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='master-SSJ000030312:0'] does not exist
Apr 14 20:36:01 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for master-SSJ000030312:0=(null) passed
Apr 14 20:36:01 mgraid-S000030311-1 attrd: [16634]: info: attrd_trigger_update: Sending flush op to all hosts for: master-SSJ000030314:1 (<null>)
Apr 14 20:36:01 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='master-SSJ000030314:1'] does not exist
Apr 14 20:36:01 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for master-SSJ000030314:1=(null) passed
Apr 14 20:36:01 mgraid-S000030311-1 attrd: [16634]: info: attrd_trigger_update: Sending flush op to all hosts for: shutdown (<null>)
Apr 14 20:36:01 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='shutdown'] does not exist
Apr 14 20:36:01 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for shutdown=(null) passed
Apr 14 20:36:01 mgraid-S000030311-1 attrd: [16634]: info: attrd_trigger_update: Sending flush op to all hosts for: probe_complete (true)
Apr 14 20:36:01 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='probe_complete'] (/cib/status/node_state[2]/transient_attributes/instance_attributes/nvpair)
Apr 14 20:36:01 mgraid-S000030311-1 attrd: [16634]: debug: log_data_element: find_nvpair_attr: Match <nvpair id="status-856c1f72-7cd1-4906-8183-8be87eef96f2-probe_complete" name="probe_complete" value="true" />
Apr 14 20:36:01 mgraid-S000030311-1 attrd: [16634]: info: attrd_trigger_update: Sending flush op to all hosts for: master-SSJ000030314:0 (<null>)
Apr 14 20:36:01 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='master-SSJ000030314:0'] does not exist
Apr 14 20:36:01 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for master-SSJ000030314:0=(null) passed
Apr 14 20:36:01 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update 403 for probe_complete=true passed
Apr 14 20:36:01 mgraid-S000030311-1 cib: [16631]: debug: Forking temp process write_cib_contents
Apr 14 20:36:01 mgraid-S000030311-1 cib: [20743]: info: write_cib_contents: Archived previous version as /var/lib/heartbeat/crm/cib-34.raw
Apr 14 20:36:01 mgraid-S000030311-1 cib: [20743]: info: write_cib_contents: Wrote version 0.77.0 of the CIB to disk (digest: 33775f3b69643b1e09b572dbcea14f4c)
Apr 14 20:36:01 mgraid-S000030311-1 cib: [20743]: info: retrieveCib: Reading cluster configuration from: /var/lib/heartbeat/crm/cib.ohueN3 (digest: /var/lib/heartbeat/crm/cib.zTfAhX)
Apr 14 20:36:01 mgraid-S000030311-1 cib: [16631]: info: Managed write_cib_contents process 20743 exited with return code 0.
Apr 14 20:36:01 mgraid-S000030311-1 attrd: [16634]: info: attrd_ha_callback: flush message from mgraid-s000030311-1
Apr 14 20:36:01 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='master-SSJ000030316:0'] does not exist
Apr 14 20:36:01 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for master-SSJ000030316:0=(null) passed
Apr 14 20:36:01 mgraid-S000030311-1 attrd: [16634]: info: attrd_ha_callback: flush message from mgraid-s000030311-1
Apr 14 20:36:01 mgraid-S000030311-1 crmd: [16635]: debug: populate_cib_nodes_ha: Node: mgraid-s000030311-1 (uuid: 856c1f72-7cd1-4906-8183-8be87eef96f2)
Apr 14 20:36:01 mgraid-S000030311-1 crmd: [16635]: debug: populate_cib_nodes_ha: Node: mgraid-s000030311-0 (uuid: f4e5e15c-d06b-4e37-89b9-4621af05128f)
Apr 14 20:36:01 mgraid-S000030311-1 crmd: [16635]: debug: ghash_update_cib_node: Updating mgraid-s000030311-0: true (overwrite=true) hash_size=2
Apr 14 20:36:01 mgraid-S000030311-1 crmd: [16635]: debug: ghash_update_cib_node: Updating mgraid-s000030311-1: true (overwrite=true) hash_size=2
Apr 14 20:36:01 mgraid-S000030311-1 crmd: [16635]: debug: s_crmd_fsa: Processing I_ELECTION: [ state=S_POLICY_ENGINE cause=C_FSA_INTERNAL origin=do_cib_replaced ]
Apr 14 20:36:01 mgraid-S000030311-1 crmd: [16635]: info: do_state_transition: State transition S_POLICY_ENGINE -> S_ELECTION [ input=I_ELECTION cause=C_FSA_INTERNAL origin=do_cib_replaced ]
Apr 14 20:36:01 mgraid-S000030311-1 crmd: [16635]: info: update_dc: Unset DC mgraid-s000030311-1
Apr 14 20:36:01 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
Apr 14 20:36:01 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_INTEGRATE_TIMER_STOP
Apr 14 20:36:01 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_FINALIZE_TIMER_STOP
Apr 14 20:36:01 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_VOTE
Apr 14 20:36:01 mgraid-S000030311-1 crmd: [16635]: debug: do_election_vote: Started election 37
Apr 14 20:36:01 mgraid-S000030311-1 crmd: [16635]: debug: crm_timer_start: Started Election Timeout (I_ELECTION_DC:120000ms), src=208
Apr 14 20:36:01 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_TE_CANCEL
Apr 14 20:36:01 mgraid-S000030311-1 crmd: [16635]: debug: do_te_invoke: Cancelling the transition: active
Apr 14 20:36:01 mgraid-S000030311-1 crmd: [16635]: info: abort_transition_graph: do_te_invoke:191 - Triggered transition abort (complete=0) : Peer Cancelled
Apr 14 20:36:01 mgraid-S000030311-1 crmd: [16635]: debug: register_fsa_input_adv: Stalling the FSA pending further input: cause=C_FSA_INTERNAL
Apr 14 20:36:01 mgraid-S000030311-1 crmd: [16635]: debug: s_crmd_fsa: Exiting the FSA: queue=0, fsa_actions=0x800000000000, stalled=true
Apr 14 20:36:01 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_COUNT
Apr 14 20:36:01 mgraid-S000030311-1 crmd: [16635]: debug: do_election_count_vote: Created voted hash
Apr 14 20:36:01 mgraid-S000030311-1 crmd: [16635]: debug: do_election_count_vote: Election 37 (current: 37, owner: 856c1f72-7cd1-4906-8183-8be87eef96f2): Processed vote from mgraid-s000030311-1 (Recorded)
Apr 14 20:36:01 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_CHECK
Apr 14 20:36:01 mgraid-S000030311-1 crmd: [16635]: debug: do_election_check: Still waiting on 1 non-votes (2 total)
Apr 14 20:36:01 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_TE_CANCEL
Apr 14 20:36:01 mgraid-S000030311-1 crmd: [16635]: debug: do_te_invoke: Cancelling the transition: active
Apr 14 20:36:01 mgraid-S000030311-1 crmd: [16635]: info: abort_transition_graph: do_te_invoke:191 - Triggered transition abort (complete=0) : Peer Cancelled
Apr 14 20:36:01 mgraid-S000030311-1 crmd: [16635]: debug: register_fsa_input_adv: Stalling the FSA pending further input: cause=C_FSA_INTERNAL
Apr 14 20:36:01 mgraid-S000030311-1 crmd: [16635]: debug: s_crmd_fsa: Exiting the FSA: queue=0, fsa_actions=0x800000000000, stalled=true
Apr 14 20:36:01 mgraid-S000030311-1 crmd: [16635]: debug: run_graph: Transition 3 (Complete=29, Pending=10, Fired=0, Skipped=34, Incomplete=45, Source=/var/lib/pengine/pe-input-6.bz2): In-progress
Apr 14 20:36:01 mgraid-S000030311-1 cib: [16631]: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/186, version=0.77.1): ok (rc=0)
Apr 14 20:36:02 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='master-SSJ000030316:1'] does not exist
Apr 14 20:36:02 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for master-SSJ000030316:1=(null) passed
Apr 14 20:36:02 mgraid-S000030311-1 attrd: [16634]: info: attrd_ha_callback: flush message from mgraid-s000030311-1
Apr 14 20:36:02 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='master-SSJ000030313:1'] does not exist
Apr 14 20:36:02 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for master-SSJ000030313:1=(null) passed
Apr 14 20:36:02 mgraid-S000030311-1 attrd: [16634]: info: attrd_ha_callback: flush message from mgraid-s000030311-1
Apr 14 20:36:02 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='master-SSJ000030313:0'] does not exist
Apr 14 20:36:02 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for master-SSJ000030313:0=(null) passed
Apr 14 20:36:02 mgraid-S000030311-1 attrd: [16634]: info: attrd_ha_callback: flush message from mgraid-s000030311-1
Apr 14 20:36:02 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='master-SSS000030311:1'] does not exist
Apr 14 20:36:02 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for master-SSS000030311:1=(null) passed
Apr 14 20:36:02 mgraid-S000030311-1 attrd: [16634]: info: attrd_ha_callback: flush message from mgraid-s000030311-1
Apr 14 20:36:02 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='terminate'] does not exist
Apr 14 20:36:02 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for terminate=(null) passed
Apr 14 20:36:02 mgraid-S000030311-1 attrd: [16634]: info: attrd_ha_callback: flush message from mgraid-s000030311-1
Apr 14 20:36:02 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='master-SSS000030311:0'] does not exist
Apr 14 20:36:02 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for master-SSS000030311:0=(null) passed
Apr 14 20:36:02 mgraid-S000030311-1 attrd: [16634]: info: attrd_ha_callback: flush message from mgraid-s000030311-1
Apr 14 20:36:02 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='master-SSJ000030312:1'] does not exist
Apr 14 20:36:02 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for master-SSJ000030312:1=(null) passed
Apr 14 20:36:02 mgraid-S000030311-1 attrd: [16634]: info: attrd_ha_callback: flush message from mgraid-s000030311-1
Apr 14 20:36:02 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='master-SSJ000030312:0'] does not exist
Apr 14 20:36:02 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for master-SSJ000030312:0=(null) passed
Apr 14 20:36:02 mgraid-S000030311-1 attrd: [16634]: info: attrd_ha_callback: flush message from mgraid-s000030311-1
Apr 14 20:36:02 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='master-SSJ000030314:1'] does not exist
Apr 14 20:36:02 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for master-SSJ000030314:1=(null) passed
Apr 14 20:36:02 mgraid-S000030311-1 attrd: [16634]: info: attrd_ha_callback: flush message from mgraid-s000030311-1
Apr 14 20:36:02 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='shutdown'] does not exist
Apr 14 20:36:02 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for shutdown=(null) passed
Apr 14 20:36:02 mgraid-S000030311-1 attrd: [16634]: info: attrd_ha_callback: flush message from mgraid-s000030311-1
Apr 14 20:36:02 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='master-SSJ000030314:0'] does not exist
Apr 14 20:36:02 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for master-SSJ000030314:0=(null) passed
Apr 14 20:36:02 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_COUNT
Apr 14 20:36:02 mgraid-S000030311-1 crmd: [16635]: debug: do_election_count_vote: Election 37 (current: 37, owner: 856c1f72-7cd1-4906-8183-8be87eef96f2): Processed no-vote from mgraid-s000030311-0 (Recorded)
Apr 14 20:36:02 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_CHECK
Apr 14 20:36:02 mgraid-S000030311-1 crmd: [16635]: debug: do_election_check: Destroying voted hash
Apr 14 20:36:02 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_TE_CANCEL
Apr 14 20:36:02 mgraid-S000030311-1 crmd: [16635]: debug: do_te_invoke: Cancelling the transition: active
Apr 14 20:36:02 mgraid-S000030311-1 crmd: [16635]: info: abort_transition_graph: do_te_invoke:191 - Triggered transition abort (complete=0) : Peer Cancelled
Apr 14 20:36:02 mgraid-S000030311-1 crmd: [16635]: debug: register_fsa_input_adv: Stalling the FSA pending further input: cause=C_FSA_INTERNAL
Apr 14 20:36:02 mgraid-S000030311-1 crmd: [16635]: WARN: register_fsa_input_adv: do_te_invoke stalled the FSA with pending inputs
Apr 14 20:36:02 mgraid-S000030311-1 crmd: [16635]: debug: fsa_dump_queue: queue[0(171)]: input I_ELECTION_DC raised by do_election_check()	(cause=C_FSA_INTERNAL)
Apr 14 20:36:02 mgraid-S000030311-1 crmd: [16635]: debug: s_crmd_fsa: Exiting the FSA: queue=1, fsa_actions=0x800000000000, stalled=true
Apr 14 20:36:02 mgraid-S000030311-1 crmd: [16635]: debug: fsa_dump_queue: queue[0(171)]: input I_ELECTION_DC raised by do_election_check()	(cause=C_FSA_INTERNAL)
Apr 14 20:36:02 mgraid-S000030311-1 crmd: [16635]: debug: s_crmd_fsa: Processing I_ELECTION_DC: [ state=S_ELECTION cause=C_FSA_INTERNAL origin=do_election_check ]
Apr 14 20:36:02 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_LOG   
Apr 14 20:36:02 mgraid-S000030311-1 crmd: [16635]: info: do_state_transition: State transition S_ELECTION -> S_INTEGRATION [ input=I_ELECTION_DC cause=C_FSA_INTERNAL origin=do_election_check ]
Apr 14 20:36:02 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_TE_START
Apr 14 20:36:02 mgraid-S000030311-1 crmd: [16635]: debug: do_te_control: The transitioner is already active
Apr 14 20:36:02 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_PE_START
Apr 14 20:36:02 mgraid-S000030311-1 crmd: [16635]: info: start_subsystem: Starting sub-system "pengine"
Apr 14 20:36:02 mgraid-S000030311-1 crmd: [16635]: WARN: start_subsystem: Client pengine already running as pid 16986
Apr 14 20:36:02 mgraid-S000030311-1 cib: [16631]: debug: activateCibXml: Triggering CIB write for cib_replace op
Apr 14 20:36:03 mgraid-S000030311-1 cib: [16631]: info: cib_replace_notify: Local-only Replace: 0.79.1 from <null>
Apr 14 20:36:03 mgraid-S000030311-1 attrd: [16634]: info: do_cib_replaced: Sending full refresh
Apr 14 20:36:03 mgraid-S000030311-1 attrd: [16634]: info: attrd_trigger_update: Sending flush op to all hosts for: master-SSJ000030316:0 (<null>)
Apr 14 20:36:03 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: - <cib epoch="77" admin_epoch="0" num_updates="1" />
Apr 14 20:36:03 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: + <cib epoch="79" admin_epoch="0" num_updates="1" >
Apr 14 20:36:03 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +   <configuration >
Apr 14 20:36:03 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +     <constraints >
Apr 14 20:36:03 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +       <rsc_order first="cloneIcms" id="orderms-SSJ000030316" score="0" then="ms-SSJ000030316" __crm_diff_marker__="added:top" />
Apr 14 20:36:03 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +     </constraints>
Apr 14 20:36:03 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: +   </configuration>
Apr 14 20:36:03 mgraid-S000030311-1 cib: [16631]: info: log_data_element: cib:diff: + </cib>
Apr 14 20:36:03 mgraid-S000030311-1 cib: [16631]: info: cib_process_request: Operation complete: op cib_replace for section 'all' (origin=local/crm_shadow/2, version=0.79.1): ok (rc=0)
Apr 14 20:36:03 mgraid-S000030311-1 cib: [16631]: debug: xmlfromIPC: Peer disconnected
Apr 14 20:36:03 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='master-SSJ000030316:0'] does not exist
Apr 14 20:36:03 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for master-SSJ000030316:0=(null) passed
Apr 14 20:36:03 mgraid-S000030311-1 attrd: [16634]: info: attrd_trigger_update: Sending flush op to all hosts for: master-SSJ000030316:1 (<null>)
Apr 14 20:36:03 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='master-SSJ000030316:1'] does not exist
Apr 14 20:36:03 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for master-SSJ000030316:1=(null) passed
Apr 14 20:36:03 mgraid-S000030311-1 attrd: [16634]: info: attrd_trigger_update: Sending flush op to all hosts for: master-SSJ000030313:1 (<null>)
Apr 14 20:36:03 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='master-SSJ000030313:1'] does not exist
Apr 14 20:36:03 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for master-SSJ000030313:1=(null) passed
Apr 14 20:36:03 mgraid-S000030311-1 attrd: [16634]: info: attrd_trigger_update: Sending flush op to all hosts for: master-SSJ000030313:0 (<null>)
Apr 14 20:36:03 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='master-SSJ000030313:0'] does not exist
Apr 14 20:36:03 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for master-SSJ000030313:0=(null) passed
Apr 14 20:36:03 mgraid-S000030311-1 attrd: [16634]: info: attrd_trigger_update: Sending flush op to all hosts for: master-SSS000030311:1 (<null>)
Apr 14 20:36:03 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='master-SSS000030311:1'] does not exist
Apr 14 20:36:03 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for master-SSS000030311:1=(null) passed
Apr 14 20:36:03 mgraid-S000030311-1 attrd: [16634]: info: attrd_trigger_update: Sending flush op to all hosts for: terminate (<null>)
Apr 14 20:36:03 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='terminate'] does not exist
Apr 14 20:36:03 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for terminate=(null) passed
Apr 14 20:36:03 mgraid-S000030311-1 attrd: [16634]: info: attrd_trigger_update: Sending flush op to all hosts for: master-SSS000030311:0 (<null>)
Apr 14 20:36:03 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='master-SSS000030311:0'] does not exist
Apr 14 20:36:03 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for master-SSS000030311:0=(null) passed
Apr 14 20:36:03 mgraid-S000030311-1 attrd: [16634]: info: attrd_trigger_update: Sending flush op to all hosts for: master-SSJ000030312:1 (<null>)
Apr 14 20:36:03 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='master-SSJ000030312:1'] does not exist
Apr 14 20:36:03 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for master-SSJ000030312:1=(null) passed
Apr 14 20:36:03 mgraid-S000030311-1 attrd: [16634]: info: attrd_trigger_update: Sending flush op to all hosts for: master-SSJ000030312:0 (<null>)
Apr 14 20:36:03 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='master-SSJ000030312:0'] does not exist
Apr 14 20:36:03 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for master-SSJ000030312:0=(null) passed
Apr 14 20:36:03 mgraid-S000030311-1 attrd: [16634]: info: attrd_trigger_update: Sending flush op to all hosts for: master-SSJ000030314:1 (<null>)
Apr 14 20:36:03 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='master-SSJ000030314:1'] does not exist
Apr 14 20:36:03 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for master-SSJ000030314:1=(null) passed
Apr 14 20:36:03 mgraid-S000030311-1 attrd: [16634]: info: attrd_trigger_update: Sending flush op to all hosts for: shutdown (<null>)
Apr 14 20:36:03 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='shutdown'] does not exist
Apr 14 20:36:03 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for shutdown=(null) passed
Apr 14 20:36:03 mgraid-S000030311-1 attrd: [16634]: info: attrd_trigger_update: Sending flush op to all hosts for: probe_complete (true)
Apr 14 20:36:03 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='probe_complete'] (/cib/status/node_state[2]/transient_attributes/instance_attributes/nvpair)
Apr 14 20:36:03 mgraid-S000030311-1 attrd: [16634]: debug: log_data_element: find_nvpair_attr: Match <nvpair id="status-856c1f72-7cd1-4906-8183-8be87eef96f2-probe_complete" name="probe_complete" value="true" />
Apr 14 20:36:03 mgraid-S000030311-1 attrd: [16634]: info: attrd_trigger_update: Sending flush op to all hosts for: master-SSJ000030314:0 (<null>)
Apr 14 20:36:03 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='master-SSJ000030314:0'] does not exist
Apr 14 20:36:03 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for master-SSJ000030314:0=(null) passed
Apr 14 20:36:03 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update 429 for probe_complete=true passed
Apr 14 20:36:03 mgraid-S000030311-1 cib: [16631]: debug: Forking temp process write_cib_contents
Apr 14 20:36:03 mgraid-S000030311-1 cib: [20889]: info: write_cib_contents: Archived previous version as /var/lib/heartbeat/crm/cib-35.raw
Apr 14 20:36:03 mgraid-S000030311-1 cib: [20889]: info: write_cib_contents: Wrote version 0.79.0 of the CIB to disk (digest: b3c0a2a03b87b5716aa178ae67acfe92)
Apr 14 20:36:03 mgraid-S000030311-1 cib: [20889]: info: retrieveCib: Reading cluster configuration from: /var/lib/heartbeat/crm/cib.MQAQ2a (digest: /var/lib/heartbeat/crm/cib.rOaYD7)
Apr 14 20:36:03 mgraid-S000030311-1 cib: [16631]: info: Managed write_cib_contents process 20889 exited with return code 0.
Apr 14 20:36:03 mgraid-S000030311-1 lrmd: [16632]: debug: rsc:icms:1:9: monitor
Apr 14 20:36:03 mgraid-S000030311-1 lrmd: [16632]: debug: rsc:omserver:1:10: monitor
Apr 14 20:36:03 mgraid-S000030311-1 attrd: [16634]: info: attrd_ha_callback: flush message from mgraid-s000030311-1
Apr 14 20:36:03 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='master-SSJ000030316:0'] does not exist
Apr 14 20:36:03 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for master-SSJ000030316:0=(null) passed
Apr 14 20:36:03 mgraid-S000030311-1 attrd: [16634]: info: attrd_ha_callback: flush message from mgraid-s000030311-1
Apr 14 20:36:03 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='master-SSJ000030316:1'] does not exist
Apr 14 20:36:03 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for master-SSJ000030316:1=(null) passed
Apr 14 20:36:03 mgraid-S000030311-1 attrd: [16634]: info: attrd_ha_callback: flush message from mgraid-s000030311-1
Apr 14 20:36:03 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='master-SSJ000030313:1'] does not exist
Apr 14 20:36:03 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for master-SSJ000030313:1=(null) passed
Apr 14 20:36:03 mgraid-S000030311-1 attrd: [16634]: info: attrd_ha_callback: flush message from mgraid-s000030311-1
Apr 14 20:36:03 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='master-SSJ000030313:0'] does not exist
Apr 14 20:36:03 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for master-SSJ000030313:0=(null) passed
Apr 14 20:36:03 mgraid-S000030311-1 attrd: [16634]: info: attrd_ha_callback: flush message from mgraid-s000030311-1
Apr 14 20:36:03 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='master-SSS000030311:1'] does not exist
Apr 14 20:36:03 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for master-SSS000030311:1=(null) passed
Apr 14 20:36:03 mgraid-S000030311-1 attrd: [16634]: info: attrd_ha_callback: flush message from mgraid-s000030311-1
Apr 14 20:36:03 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='terminate'] does not exist
Apr 14 20:36:03 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for terminate=(null) passed
Apr 14 20:36:03 mgraid-S000030311-1 attrd: [16634]: info: attrd_ha_callback: flush message from mgraid-s000030311-1
Apr 14 20:36:03 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='master-SSS000030311:0'] does not exist
Apr 14 20:36:03 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for master-SSS000030311:0=(null) passed
Apr 14 20:36:03 mgraid-S000030311-1 attrd: [16634]: info: attrd_ha_callback: flush message from mgraid-s000030311-1
Apr 14 20:36:03 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='master-SSJ000030312:1'] does not exist
Apr 14 20:36:03 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for master-SSJ000030312:1=(null) passed
Apr 14 20:36:03 mgraid-S000030311-1 attrd: [16634]: info: attrd_ha_callback: flush message from mgraid-s000030311-1
Apr 14 20:36:03 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='master-SSJ000030312:0'] does not exist
Apr 14 20:36:03 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for master-SSJ000030312:0=(null) passed
Apr 14 20:36:03 mgraid-S000030311-1 attrd: [16634]: info: attrd_ha_callback: flush message from mgraid-s000030311-1
Apr 14 20:36:03 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='master-SSJ000030314:1'] does not exist
Apr 14 20:36:03 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for master-SSJ000030314:1=(null) passed
Apr 14 20:36:03 mgraid-S000030311-1 attrd: [16634]: info: attrd_ha_callback: flush message from mgraid-s000030311-1
Apr 14 20:36:03 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='shutdown'] does not exist
Apr 14 20:36:03 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for shutdown=(null) passed
Apr 14 20:36:03 mgraid-S000030311-1 attrd: [16634]: info: attrd_ha_callback: flush message from mgraid-s000030311-1
Apr 14 20:36:03 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='master-SSJ000030314:0'] does not exist
Apr 14 20:36:03 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for master-SSJ000030314:0=(null) passed
Apr 14 20:36:04 mgraid-S000030311-1 ccm: [16630]: info: client (pid=20913) removed from ccm
Apr 14 20:36:05 mgraid-S000030311-1 ccm: [16630]: info: client (pid=20922) removed from ccm
Apr 14 20:36:05 mgraid-S000030311-1 crmd: [16635]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/pengine
Apr 14 20:36:05 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
Apr 14 20:36:05 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_INTEGRATE_TIMER_START
Apr 14 20:36:05 mgraid-S000030311-1 crmd: [16635]: debug: crm_timer_start: Started Integration Timer (I_INTEGRATED:180000ms), src=210
Apr 14 20:36:05 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_FINALIZE_TIMER_STOP
Apr 14 20:36:05 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_DC_TAKEOVER
Apr 14 20:36:05 mgraid-S000030311-1 crmd: [16635]: info: do_dc_takeover: Taking over DC status for this partition
Apr 14 20:36:05 mgraid-S000030311-1 cib: [16631]: info: cib_process_readwrite: We are now in R/O mode
Apr 14 20:36:05 mgraid-S000030311-1 cib: [16631]: info: cib_process_request: Operation complete: op cib_slave_all for section 'all' (origin=local/crmd/188, version=0.79.1): ok (rc=0)
Apr 14 20:36:05 mgraid-S000030311-1 cib: [16631]: info: cib_process_readwrite: We are now in R/W mode
Apr 14 20:36:05 mgraid-S000030311-1 cib: [16631]: info: cib_process_request: Operation complete: op cib_master for section 'all' (origin=local/crmd/189, version=0.79.1): ok (rc=0)
Apr 14 20:36:05 mgraid-S000030311-1 cib: [16631]: info: cib_process_request: Operation complete: op cib_modify for section cib (origin=local/crmd/190, version=0.79.1): ok (rc=0)
Apr 14 20:36:05 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: Processing cib_query op for //cib/configuration/crm_config//nvpair[@name='dc-version'] (/cib/configuration/crm_config/cluster_property_set/nvpair[1])
Apr 14 20:36:05 mgraid-S000030311-1 crmd: [16635]: debug: log_data_element: find_nvpair_attr: Match <nvpair id="cib-bootstrap-options-dc-version" name="dc-version" value="1.0.9-89bd754939df5150de7cd76835f98fe90851b677" />
Apr 14 20:36:06 mgraid-S000030311-1 cib: [16631]: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/192, version=0.79.1): ok (rc=0)
Apr 14 20:36:06 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: Processing cib_query op for //cib/configuration/crm_config//nvpair[@name='cluster-infrastructure'] (/cib/configuration/crm_config/cluster_property_set/nvpair[2])
Apr 14 20:36:06 mgraid-S000030311-1 crmd: [16635]: debug: log_data_element: find_nvpair_attr: Match <nvpair id="cib-bootstrap-options-cluster-infrastructure" name="cluster-infrastructure" value="Heartbeat" />
Apr 14 20:36:06 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_TE_CANCEL
Apr 14 20:36:06 mgraid-S000030311-1 crmd: [16635]: debug: do_te_invoke: Cancelling the transition: active
Apr 14 20:36:06 mgraid-S000030311-1 crmd: [16635]: info: abort_transition_graph: do_te_invoke:191 - Triggered transition abort (complete=0) : Peer Cancelled
Apr 14 20:36:06 mgraid-S000030311-1 crmd: [16635]: debug: register_fsa_input_adv: Stalling the FSA pending further input: cause=C_FSA_INTERNAL
Apr 14 20:36:06 mgraid-S000030311-1 crmd: [16635]: debug: s_crmd_fsa: Exiting the FSA: queue=0, fsa_actions=0x800000010000, stalled=true
Apr 14 20:36:06 mgraid-S000030311-1 crmd: [16635]: debug: te_update_diff: Processing diff (cib_replace): 0.77.1 -> 0.79.1 (S_INTEGRATION)
Apr 14 20:36:06 mgraid-S000030311-1 crmd: [16635]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify/cib_update_result/diff
Apr 14 20:36:06 mgraid-S000030311-1 crmd: [16635]: info: abort_transition_graph: need_abort:59 - Triggered transition abort (complete=0) : Non-status change
Apr 14 20:36:06 mgraid-S000030311-1 crmd: [16635]: info: need_abort: Aborting on change to epoch
Apr 14 20:36:06 mgraid-S000030311-1 crmd: [16635]: debug: do_cib_replaced: Updating the CIB after a replace: DC=true
Apr 14 20:36:06 mgraid-S000030311-1 crmd: [16635]: info: populate_cib_nodes_ha: Requesting the list of configured nodes
Apr 14 20:36:06 mgraid-S000030311-1 cib: [16631]: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/194, version=0.79.1): ok (rc=0)
Apr 14 20:36:06 mgraid-S000030311-1 ccm: [16630]: info: client (pid=20929) removed from ccm
Apr 14 20:36:06 mgraid-S000030311-1 crmd: [16635]: debug: populate_cib_nodes_ha: Node: mgraid-s000030311-1 (uuid: 856c1f72-7cd1-4906-8183-8be87eef96f2)
Apr 14 20:36:06 mgraid-S000030311-1 crmd: [16635]: debug: populate_cib_nodes_ha: Node: mgraid-s000030311-0 (uuid: f4e5e15c-d06b-4e37-89b9-4621af05128f)
Apr 14 20:36:06 mgraid-S000030311-1 crmd: [16635]: debug: ghash_update_cib_node: Updating mgraid-s000030311-0: true (overwrite=true) hash_size=2
Apr 14 20:36:06 mgraid-S000030311-1 crmd: [16635]: debug: ghash_update_cib_node: Updating mgraid-s000030311-1: true (overwrite=true) hash_size=2
Apr 14 20:36:06 mgraid-S000030311-1 crmd: [16635]: debug: config_query_callback: Call 195 : Parsing CIB options
Apr 14 20:36:06 mgraid-S000030311-1 crmd: [16635]: debug: cluster_option: Using default value '15min' for cluster option 'cluster-recheck-interval'
Apr 14 20:36:06 mgraid-S000030311-1 crmd: [16635]: debug: cluster_option: Using default value '2min' for cluster option 'election-timeout'
Apr 14 20:36:06 mgraid-S000030311-1 crmd: [16635]: debug: cluster_option: Using default value '20min' for cluster option 'shutdown-escalation'
Apr 14 20:36:06 mgraid-S000030311-1 crmd: [16635]: debug: cluster_option: Using default value '3min' for cluster option 'crmd-integration-timeout'
Apr 14 20:36:06 mgraid-S000030311-1 crmd: [16635]: debug: cluster_option: Using default value '30min' for cluster option 'crmd-finalization-timeout'
Apr 14 20:36:06 mgraid-S000030311-1 crmd: [16635]: debug: cluster_option: Using default value '2' for cluster option 'expected-quorum-votes'
Apr 14 20:36:06 mgraid-S000030311-1 crmd: [16635]: info: config_query_callback: Checking for expired actions every 900000ms
Apr 14 20:36:06 mgraid-S000030311-1 crmd: [16635]: debug: s_crmd_fsa: Processing I_ELECTION: [ state=S_INTEGRATION cause=C_FSA_INTERNAL origin=do_cib_replaced ]
Apr 14 20:36:06 mgraid-S000030311-1 crmd: [16635]: info: do_state_transition: State transition S_INTEGRATION -> S_ELECTION [ input=I_ELECTION cause=C_FSA_INTERNAL origin=do_cib_replaced ]
Apr 14 20:36:06 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
Apr 14 20:36:06 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_INTEGRATE_TIMER_STOP
Apr 14 20:36:06 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_FINALIZE_TIMER_STOP
Apr 14 20:36:06 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_VOTE
Apr 14 20:36:06 mgraid-S000030311-1 crmd: [16635]: debug: do_election_vote: Started election 38
Apr 14 20:36:06 mgraid-S000030311-1 crmd: [16635]: debug: crm_timer_start: Started Election Timeout (I_ELECTION_DC:120000ms), src=215
Apr 14 20:36:06 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_TE_CANCEL
Apr 14 20:36:06 mgraid-S000030311-1 crmd: [16635]: debug: do_te_invoke: Cancelling the transition: active
Apr 14 20:36:06 mgraid-S000030311-1 crmd: [16635]: info: abort_transition_graph: do_te_invoke:191 - Triggered transition abort (complete=0) : Peer Cancelled
Apr 14 20:36:06 mgraid-S000030311-1 crmd: [16635]: debug: register_fsa_input_adv: Stalling the FSA pending further input: cause=C_FSA_INTERNAL
Apr 14 20:36:06 mgraid-S000030311-1 crmd: [16635]: debug: s_crmd_fsa: Exiting the FSA: queue=0, fsa_actions=0x800000010000, stalled=true
Apr 14 20:36:06 mgraid-S000030311-1 crmd: [16635]: debug: run_graph: Transition 3 (Complete=29, Pending=10, Fired=0, Skipped=34, Incomplete=45, Source=/var/lib/pengine/pe-input-6.bz2): In-progress
Apr 14 20:36:06 mgraid-S000030311-1 cib: [16631]: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/196, version=0.79.1): ok (rc=0)
Apr 14 20:36:06 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_COUNT
Apr 14 20:36:06 mgraid-S000030311-1 crmd: [16635]: debug: do_election_count_vote: Created voted hash
Apr 14 20:36:06 mgraid-S000030311-1 crmd: [16635]: debug: do_election_count_vote: Election 38 (current: 38, owner: 856c1f72-7cd1-4906-8183-8be87eef96f2): Processed vote from mgraid-s000030311-1 (Recorded)
Apr 14 20:36:06 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_CHECK
Apr 14 20:36:06 mgraid-S000030311-1 crmd: [16635]: debug: do_election_check: Still waiting on 1 non-votes (2 total)
Apr 14 20:36:06 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_TE_CANCEL
Apr 14 20:36:06 mgraid-S000030311-1 crmd: [16635]: debug: do_te_invoke: Cancelling the transition: active
Apr 14 20:36:06 mgraid-S000030311-1 crmd: [16635]: info: abort_transition_graph: do_te_invoke:191 - Triggered transition abort (complete=0) : Peer Cancelled
Apr 14 20:36:06 mgraid-S000030311-1 crmd: [16635]: debug: register_fsa_input_adv: Stalling the FSA pending further input: cause=C_FSA_INTERNAL
Apr 14 20:36:06 mgraid-S000030311-1 crmd: [16635]: debug: s_crmd_fsa: Exiting the FSA: queue=0, fsa_actions=0x800000010000, stalled=true
Apr 14 20:36:06 mgraid-S000030311-1 crmd: [16635]: debug: run_graph: Transition 3 (Complete=29, Pending=10, Fired=0, Skipped=34, Incomplete=45, Source=/var/lib/pengine/pe-input-6.bz2): In-progress
Apr 14 20:36:07 mgraid-S000030311-1 ccm: [16630]: info: client (pid=20941) removed from ccm
Apr 14 20:36:07 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_COUNT
Apr 14 20:36:07 mgraid-S000030311-1 crmd: [16635]: debug: do_election_count_vote: Election 38 (current: 38, owner: 856c1f72-7cd1-4906-8183-8be87eef96f2): Processed no-vote from mgraid-s000030311-0 (Recorded)
Apr 14 20:36:07 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_CHECK
Apr 14 20:36:07 mgraid-S000030311-1 crmd: [16635]: debug: do_election_check: Destroying voted hash
Apr 14 20:36:07 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_TE_CANCEL
Apr 14 20:36:07 mgraid-S000030311-1 crmd: [16635]: debug: do_te_invoke: Cancelling the transition: active
Apr 14 20:36:07 mgraid-S000030311-1 crmd: [16635]: info: abort_transition_graph: do_te_invoke:191 - Triggered transition abort (complete=0) : Peer Cancelled
Apr 14 20:36:07 mgraid-S000030311-1 crmd: [16635]: debug: register_fsa_input_adv: Stalling the FSA pending further input: cause=C_FSA_INTERNAL
Apr 14 20:36:07 mgraid-S000030311-1 crmd: [16635]: WARN: register_fsa_input_adv: do_te_invoke stalled the FSA with pending inputs
Apr 14 20:36:07 mgraid-S000030311-1 crmd: [16635]: debug: fsa_dump_queue: queue[0(179)]: input I_ELECTION_DC raised by do_election_check()	(cause=C_FSA_INTERNAL)
Apr 14 20:36:07 mgraid-S000030311-1 crmd: [16635]: debug: s_crmd_fsa: Exiting the FSA: queue=1, fsa_actions=0x800000010000, stalled=true
Apr 14 20:36:07 mgraid-S000030311-1 crmd: [16635]: debug: fsa_dump_queue: queue[0(179)]: input I_ELECTION_DC raised by do_election_check()	(cause=C_FSA_INTERNAL)
Apr 14 20:36:07 mgraid-S000030311-1 crmd: [16635]: debug: s_crmd_fsa: Processing I_ELECTION_DC: [ state=S_ELECTION cause=C_FSA_INTERNAL origin=do_election_check ]
Apr 14 20:36:07 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_LOG   
Apr 14 20:36:07 mgraid-S000030311-1 crmd: [16635]: info: do_state_transition: State transition S_ELECTION -> S_INTEGRATION [ input=I_ELECTION_DC cause=C_FSA_INTERNAL origin=do_election_check ]
Apr 14 20:36:07 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_TE_START
Apr 14 20:36:07 mgraid-S000030311-1 crmd: [16635]: debug: do_te_control: The transitioner is already active
Apr 14 20:36:07 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_PE_START
Apr 14 20:36:07 mgraid-S000030311-1 crmd: [16635]: info: start_subsystem: Starting sub-system "pengine"
Apr 14 20:36:07 mgraid-S000030311-1 crmd: [16635]: WARN: start_subsystem: Client pengine already running as pid 16986
Apr 14 20:36:08 mgraid-S000030311-1 lrmd: [16632]: debug: rsc:icms:1:9: monitor
Apr 14 20:36:08 mgraid-S000030311-1 lrmd: [16632]: debug: rsc:omserver:1:10: monitor
Apr 14 20:36:08 mgraid-S000030311-1 ccm: [16630]: info: client (pid=20950) removed from ccm
Apr 14 20:36:09 mgraid-S000030311-1 ccm: [16630]: info: client (pid=20961) removed from ccm
Apr 14 20:36:10 mgraid-S000030311-1 ccm: [16630]: info: client (pid=20963) removed from ccm
Apr 14 20:36:10 mgraid-S000030311-1 crmd: [16635]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/pengine
Apr 14 20:36:10 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
Apr 14 20:36:10 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_INTEGRATE_TIMER_START
Apr 14 20:36:10 mgraid-S000030311-1 crmd: [16635]: debug: crm_timer_start: Started Integration Timer (I_INTEGRATED:180000ms), src=217
Apr 14 20:36:10 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_FINALIZE_TIMER_STOP
Apr 14 20:36:10 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_DC_TAKEOVER
Apr 14 20:36:10 mgraid-S000030311-1 crmd: [16635]: info: do_dc_takeover: Taking over DC status for this partition
Apr 14 20:36:10 mgraid-S000030311-1 cib: [16631]: info: cib_process_readwrite: We are now in R/O mode
Apr 14 20:36:10 mgraid-S000030311-1 cib: [16631]: info: cib_process_request: Operation complete: op cib_slave_all for section 'all' (origin=local/crmd/198, version=0.79.1): ok (rc=0)
Apr 14 20:36:10 mgraid-S000030311-1 cib: [16631]: info: cib_process_readwrite: We are now in R/W mode
Apr 14 20:36:10 mgraid-S000030311-1 cib: [16631]: info: cib_process_request: Operation complete: op cib_master for section 'all' (origin=local/crmd/199, version=0.79.1): ok (rc=0)
Apr 14 20:36:10 mgraid-S000030311-1 cib: [16631]: info: cib_process_request: Operation complete: op cib_modify for section cib (origin=local/crmd/200, version=0.79.1): ok (rc=0)
Apr 14 20:36:10 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: Processing cib_query op for //cib/configuration/crm_config//nvpair[@name='dc-version'] (/cib/configuration/crm_config/cluster_property_set/nvpair[1])
Apr 14 20:36:10 mgraid-S000030311-1 crmd: [16635]: debug: log_data_element: find_nvpair_attr: Match <nvpair id="cib-bootstrap-options-dc-version" name="dc-version" value="1.0.9-89bd754939df5150de7cd76835f98fe90851b677" />
Apr 14 20:36:10 mgraid-S000030311-1 cib: [16631]: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/202, version=0.79.1): ok (rc=0)
Apr 14 20:36:10 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: Processing cib_query op for //cib/configuration/crm_config//nvpair[@name='cluster-infrastructure'] (/cib/configuration/crm_config/cluster_property_set/nvpair[2])
Apr 14 20:36:10 mgraid-S000030311-1 crmd: [16635]: debug: log_data_element: find_nvpair_attr: Match <nvpair id="cib-bootstrap-options-cluster-infrastructure" name="cluster-infrastructure" value="Heartbeat" />
Apr 14 20:36:10 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_TE_CANCEL
Apr 14 20:36:10 mgraid-S000030311-1 crmd: [16635]: debug: do_te_invoke: Cancelling the transition: active
Apr 14 20:36:10 mgraid-S000030311-1 crmd: [16635]: info: abort_transition_graph: do_te_invoke:191 - Triggered transition abort (complete=0) : Peer Cancelled
Apr 14 20:36:10 mgraid-S000030311-1 crmd: [16635]: debug: register_fsa_input_adv: Stalling the FSA pending further input: cause=C_FSA_INTERNAL
Apr 14 20:36:10 mgraid-S000030311-1 crmd: [16635]: debug: s_crmd_fsa: Exiting the FSA: queue=0, fsa_actions=0x800000010000, stalled=true
Apr 14 20:36:10 mgraid-S000030311-1 crmd: [16635]: debug: run_graph: Transition 3 (Complete=29, Pending=10, Fired=0, Skipped=34, Incomplete=45, Source=/var/lib/pengine/pe-input-6.bz2): In-progress
Apr 14 20:36:10 mgraid-S000030311-1 cib: [16631]: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/204, version=0.79.1): ok (rc=0)
Apr 14 20:36:10 mgraid-S000030311-1 crmd: [16635]: debug: config_query_callback: Call 205 : Parsing CIB options
Apr 14 20:36:10 mgraid-S000030311-1 crmd: [16635]: debug: cluster_option: Using default value '15min' for cluster option 'cluster-recheck-interval'
Apr 14 20:36:10 mgraid-S000030311-1 crmd: [16635]: debug: cluster_option: Using default value '2min' for cluster option 'election-timeout'
Apr 14 20:36:10 mgraid-S000030311-1 crmd: [16635]: debug: cluster_option: Using default value '20min' for cluster option 'shutdown-escalation'
Apr 14 20:36:10 mgraid-S000030311-1 crmd: [16635]: debug: cluster_option: Using default value '3min' for cluster option 'crmd-integration-timeout'
Apr 14 20:36:10 mgraid-S000030311-1 crmd: [16635]: debug: cluster_option: Using default value '30min' for cluster option 'crmd-finalization-timeout'
Apr 14 20:36:10 mgraid-S000030311-1 crmd: [16635]: debug: cluster_option: Using default value '2' for cluster option 'expected-quorum-votes'
Apr 14 20:36:10 mgraid-S000030311-1 crmd: [16635]: info: config_query_callback: Checking for expired actions every 900000ms
Apr 14 20:36:10 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_TE_CANCEL
Apr 14 20:36:10 mgraid-S000030311-1 crmd: [16635]: debug: do_te_invoke: Cancelling the transition: active
Apr 14 20:36:10 mgraid-S000030311-1 crmd: [16635]: info: abort_transition_graph: do_te_invoke:191 - Triggered transition abort (complete=0) : Peer Cancelled
Apr 14 20:36:10 mgraid-S000030311-1 crmd: [16635]: debug: register_fsa_input_adv: Stalling the FSA pending further input: cause=C_FSA_INTERNAL
Apr 14 20:36:10 mgraid-S000030311-1 crmd: [16635]: debug: s_crmd_fsa: Exiting the FSA: queue=0, fsa_actions=0x800000010000, stalled=true
Apr 14 20:36:10 mgraid-S000030311-1 crmd: [16635]: debug: run_graph: Transition 3 (Complete=29, Pending=10, Fired=0, Skipped=34, Incomplete=45, Source=/var/lib/pengine/pe-input-6.bz2): In-progress
Apr 14 20:36:11 mgraid-S000030311-1 ccm: [16630]: info: client (pid=20976) removed from ccm
Apr 14 20:36:12 mgraid-S000030311-1 ccm: [16630]: info: client (pid=20985) removed from ccm
Apr 14 20:36:13 mgraid-S000030311-1 lrmd: [16632]: debug: rsc:icms:1:9: monitor
Apr 14 20:36:13 mgraid-S000030311-1 lrmd: [16632]: debug: rsc:omserver:1:10: monitor
ss[18693]:	2011/04/14_20:36:13 DEBUG: SSS000030311: Calling /usr/bin/pgrep -f /opt/omneon/bin/SliceServer -d -c/var/omneon/config/config.S000030311 -i1
ss[18700]:	2011/04/14_20:36:13 DEBUG: SSJ000030316: Calling /usr/bin/pgrep -f /opt/omneon/bin/SliceServer -d -c/var/omneon/config/config.J000030316 -i1
ss[18703]:	2011/04/14_20:36:13 DEBUG: SSJ000030312: Calling /usr/bin/pgrep -f /opt/omneon/bin/SliceServer -d -c/var/omneon/config/config.J000030312 -i1
ss[18693]:	2011/04/14_20:36:13 DEBUG: SSS000030311: Exit code 0
ss[18695]:	2011/04/14_20:36:13 DEBUG: SSJ000030313: Calling /usr/bin/pgrep -f /opt/omneon/bin/SliceServer -d -c/var/omneon/config/config.J000030313 -i1
ss[18693]:	2011/04/14_20:36:13 DEBUG: SSS000030311: Command output: 19125
Apr 14 20:36:13 mgraid-S000030311-1 lrmd: [16632]: info: RA output: (SSS000030311:1:start:stdout) 19125

ss[18693]:	2011/04/14_20:36:13 DEBUG: ss_status() START SSS000030311
ss[18700]:	2011/04/14_20:36:13 DEBUG: SSJ000030316: Exit code 0
ss[18700]:	2011/04/14_20:36:13 DEBUG: SSJ000030316: Command output: 19150
Apr 14 20:36:13 mgraid-S000030311-1 lrmd: [16632]: info: RA output: (SSJ000030316:1:start:stdout) 19150

ss[18693]:	2011/04/14_20:36:13 DEBUG: ss_set_status_variables() START SSS000030311
ss[18703]:	2011/04/14_20:36:13 DEBUG: SSJ000030312: Exit code 0
ss[18700]:	2011/04/14_20:36:13 DEBUG: ss_status() START SSJ000030316
ss[18703]:	2011/04/14_20:36:13 DEBUG: SSJ000030312: Command output: 19163
Apr 14 20:36:13 mgraid-S000030311-1 lrmd: [16632]: info: RA output: (SSJ000030312:1:start:stdout) 19163

ss[18695]:	2011/04/14_20:36:13 DEBUG: SSJ000030313: Exit code 0
ss[18693]:	2011/04/14_20:36:13 DEBUG: ss_set_status_variables() ssadm return is 0
ss[18695]:	2011/04/14_20:36:13 DEBUG: SSJ000030313: Command output: 19175
Apr 14 20:36:13 mgraid-S000030311-1 lrmd: [16632]: info: RA output: (SSJ000030313:1:start:stdout) 19175

ss[18700]:	2011/04/14_20:36:13 DEBUG: ss_set_status_variables() START SSJ000030316
ss[18703]:	2011/04/14_20:36:13 DEBUG: ss_status() START SSJ000030312
ss[18695]:	2011/04/14_20:36:13 DEBUG: ss_status() START SSJ000030313
ss[18693]:	2011/04/14_20:36:13 DEBUG: ss_set_status_variables() ssadm SS_STATE is: SHADOW
ss[18703]:	2011/04/14_20:36:13 DEBUG: ss_set_status_variables() START SSJ000030312
ss[18700]:	2011/04/14_20:36:13 DEBUG: ss_set_status_variables() ssadm return is 0
ss[18693]:	2011/04/14_20:36:13 DEBUG: ss_set_status_variables() END SSS000030311   - SHADOW
ss[18695]:	2011/04/14_20:36:13 DEBUG: ss_set_status_variables() START SSJ000030313
ss[18693]:	2011/04/14_20:36:13 DEBUG: ss_status() returning 0
ss[18703]:	2011/04/14_20:36:13 DEBUG: ss_set_status_variables() ssadm return is 0
ss[18700]:	2011/04/14_20:36:13 DEBUG: ss_set_status_variables() ssadm SS_STATE is: SHADOW
ss[18693]:	2011/04/14_20:36:13 DEBUG: SSS000030311: Calling //sbin/crm_master -Q -l reboot -v 500
ss[18695]:	2011/04/14_20:36:13 DEBUG: ss_set_status_variables() ssadm return is 0
ss[18700]:	2011/04/14_20:36:13 DEBUG: ss_set_status_variables() END SSJ000030316   - SHADOW
ss[18703]:	2011/04/14_20:36:13 DEBUG: ss_set_status_variables() ssadm SS_STATE is: SHADOW
ss[18700]:	2011/04/14_20:36:13 DEBUG: ss_status() returning 0
ss[18695]:	2011/04/14_20:36:13 DEBUG: ss_set_status_variables() ssadm SS_STATE is: SHADOW
ss[18703]:	2011/04/14_20:36:13 DEBUG: ss_set_status_variables() END SSJ000030312   - SHADOW
ss[18700]:	2011/04/14_20:36:13 DEBUG: SSJ000030316: Calling //sbin/crm_master -Q -l reboot -v 500
ss[18695]:	2011/04/14_20:36:13 DEBUG: ss_set_status_variables() END SSJ000030313   - SHADOW
Apr 14 20:36:13 mgraid-S000030311-1 crm_attribute: [21129]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/cib_rw
ss[18703]:	2011/04/14_20:36:13 DEBUG: ss_status() returning 0
Apr 14 20:36:13 mgraid-S000030311-1 crm_attribute: [21129]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/cib_callback
Apr 14 20:36:13 mgraid-S000030311-1 crm_attribute: [21129]: debug: cib_native_signon_raw: Connection to CIB successful
Apr 14 20:36:13 mgraid-S000030311-1 crm_attribute: [21129]: debug: log_data_element: query_node_uuid: Result section <nodes >
Apr 14 20:36:13 mgraid-S000030311-1 crm_attribute: [21129]: debug: log_data_element: query_node_uuid: Result section   <node id="f4e5e15c-d06b-4e37-89b9-4621af05128f" uname="mgraid-s000030311-0" type="normal" />
Apr 14 20:36:13 mgraid-S000030311-1 crm_attribute: [21129]: debug: log_data_element: query_node_uuid: Result section   <node id="856c1f72-7cd1-4906-8183-8be87eef96f2" uname="mgraid-s000030311-1" type="normal" />
Apr 14 20:36:13 mgraid-S000030311-1 crm_attribute: [21129]: debug: log_data_element: query_node_uuid: Result section </nodes>
Apr 14 20:36:13 mgraid-S000030311-1 crm_attribute: [21129]: info: determine_host: Mapped mgraid-S000030311-1 to 856c1f72-7cd1-4906-8183-8be87eef96f2
Apr 14 20:36:13 mgraid-S000030311-1 crm_attribute: [21129]: info: attrd_lazy_update: Connecting to cluster... 5 retries remaining
Apr 14 20:36:13 mgraid-S000030311-1 crm_attribute: [21129]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/attrd
Apr 14 20:36:13 mgraid-S000030311-1 attrd: [16634]: debug: attrd_local_callback: update message from crm_attribute: master-SSS000030311:1=500
Apr 14 20:36:13 mgraid-S000030311-1 attrd: [16634]: debug: attrd_local_callback: Supplied: 500, Current: (null), Stored: (null)
Apr 14 20:36:13 mgraid-S000030311-1 attrd: [16634]: debug: attrd_local_callback: New value of master-SSS000030311:1 is 500
ss[18695]:	2011/04/14_20:36:13 DEBUG: ss_status() returning 0
Apr 14 20:36:13 mgraid-S000030311-1 attrd: [16634]: info: attrd_trigger_update: Sending flush op to all hosts for: master-SSS000030311:1 (500)
Apr 14 20:36:13 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='master-SSS000030311:1'] does not exist
Apr 14 20:36:13 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: Processing cib_query op for /cib (/cib)
ss[18703]:	2011/04/14_20:36:13 DEBUG: SSJ000030312: Calling //sbin/crm_master -Q -l reboot -v 500
Apr 14 20:36:13 mgraid-S000030311-1 attrd: [16634]: info: attrd_perform_update: Sent update 445: master-SSS000030311:1=500
ss[18695]:	2011/04/14_20:36:13 DEBUG: SSJ000030313: Calling //sbin/crm_master -Q -l reboot -v 500
Apr 14 20:36:13 mgraid-S000030311-1 crm_attribute: [21150]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/cib_rw
Apr 14 20:36:13 mgraid-S000030311-1 crm_attribute: [21156]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/cib_rw
Apr 14 20:36:13 mgraid-S000030311-1 crm_attribute: [21158]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/cib_rw
Apr 14 20:36:13 mgraid-S000030311-1 crmd: [16635]: debug: te_update_diff: Processing diff (cib_modify): 0.79.1 -> 0.79.2 (S_INTEGRATION)
Apr 14 20:36:13 mgraid-S000030311-1 crmd: [16635]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify/cib_update_result/diff
Apr 14 20:36:13 mgraid-S000030311-1 crmd: [16635]: info: abort_transition_graph: te_update_diff:146 - Triggered transition abort (complete=0, tag=transient_attributes, id=856c1f72-7cd1-4906-8183-8be87eef96f2, magic=NA, cib=0.79.2) : Transient attribute: update
Apr 14 20:36:13 mgraid-S000030311-1 crmd: [16635]: debug: log_data_element: abort_transition_graph: Cause <transient_attributes id="856c1f72-7cd1-4906-8183-8be87eef96f2" >
Apr 14 20:36:13 mgraid-S000030311-1 crmd: [16635]: debug: log_data_element: abort_transition_graph: Cause   <instance_attributes id="status-856c1f72-7cd1-4906-8183-8be87eef96f2" >
Apr 14 20:36:13 mgraid-S000030311-1 crmd: [16635]: debug: log_data_element: abort_transition_graph: Cause     <nvpair id="status-856c1f72-7cd1-4906-8183-8be87eef96f2-master-SSS000030311:1" name="master-SSS000030311:1" value="500" __crm_diff_marker__="added:top" />
Apr 14 20:36:13 mgraid-S000030311-1 crmd: [16635]: debug: log_data_element: abort_transition_graph: Cause   </instance_attributes>
Apr 14 20:36:13 mgraid-S000030311-1 crmd: [16635]: debug: log_data_element: abort_transition_graph: Cause </transient_attributes>
Apr 14 20:36:13 mgraid-S000030311-1 crmd: [16635]: debug: run_graph: Transition 3 (Complete=29, Pending=10, Fired=0, Skipped=34, Incomplete=45, Source=/var/lib/pengine/pe-input-6.bz2): In-progress
Apr 14 20:36:13 mgraid-S000030311-1 crm_attribute: [21150]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/cib_callback
Apr 14 20:36:13 mgraid-S000030311-1 crm_attribute: [21156]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/cib_callback
Apr 14 20:36:13 mgraid-S000030311-1 crm_attribute: [21158]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/cib_callback
Apr 14 20:36:13 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update 445 for master-SSS000030311:1=500 passed
Apr 14 20:36:13 mgraid-S000030311-1 crm_attribute: [21150]: debug: cib_native_signon_raw: Connection to CIB successful
Apr 14 20:36:13 mgraid-S000030311-1 crm_attribute: [21156]: debug: cib_native_signon_raw: Connection to CIB successful
Apr 14 20:36:13 mgraid-S000030311-1 crm_attribute: [21158]: debug: cib_native_signon_raw: Connection to CIB successful
Apr 14 20:36:13 mgraid-S000030311-1 crm_attribute: [21150]: debug: log_data_element: query_node_uuid: Result section <nodes >
Apr 14 20:36:13 mgraid-S000030311-1 crm_attribute: [21150]: debug: log_data_element: query_node_uuid: Result section   <node id="f4e5e15c-d06b-4e37-89b9-4621af05128f" uname="mgraid-s000030311-0" type="normal" />
Apr 14 20:36:13 mgraid-S000030311-1 crm_attribute: [21150]: debug: log_data_element: query_node_uuid: Result section   <node id="856c1f72-7cd1-4906-8183-8be87eef96f2" uname="mgraid-s000030311-1" type="normal" />
Apr 14 20:36:13 mgraid-S000030311-1 crm_attribute: [21156]: debug: log_data_element: query_node_uuid: Result section <nodes >
Apr 14 20:36:13 mgraid-S000030311-1 crm_attribute: [21129]: debug: attrd_update: Sent update: master-SSS000030311:1=500 for mgraid-S000030311-1
Apr 14 20:36:13 mgraid-S000030311-1 crm_attribute: [21150]: debug: log_data_element: query_node_uuid: Result section </nodes>
Apr 14 20:36:13 mgraid-S000030311-1 crm_attribute: [21156]: debug: log_data_element: query_node_uuid: Result section   <node id="f4e5e15c-d06b-4e37-89b9-4621af05128f" uname="mgraid-s000030311-0" type="normal" />
Apr 14 20:36:13 mgraid-S000030311-1 crm_attribute: [21129]: info: main: Update master-SSS000030311:1=500 sent via attrd
Apr 14 20:36:13 mgraid-S000030311-1 crm_attribute: [21150]: info: determine_host: Mapped mgraid-S000030311-1 to 856c1f72-7cd1-4906-8183-8be87eef96f2
Apr 14 20:36:13 mgraid-S000030311-1 crm_attribute: [21156]: debug: log_data_element: query_node_uuid: Result section   <node id="856c1f72-7cd1-4906-8183-8be87eef96f2" uname="mgraid-s000030311-1" type="normal" />
Apr 14 20:36:13 mgraid-S000030311-1 crm_attribute: [21158]: debug: log_data_element: query_node_uuid: Result section <nodes >
Apr 14 20:36:13 mgraid-S000030311-1 crm_attribute: [21156]: debug: log_data_element: query_node_uuid: Result section </nodes>
Apr 14 20:36:13 mgraid-S000030311-1 crm_attribute: [21129]: debug: cib_native_signoff: Signing out of the CIB Service
Apr 14 20:36:13 mgraid-S000030311-1 crm_attribute: [21150]: info: attrd_lazy_update: Connecting to cluster... 5 retries remaining
Apr 14 20:36:13 mgraid-S000030311-1 crm_attribute: [21158]: debug: log_data_element: query_node_uuid: Result section   <node id="f4e5e15c-d06b-4e37-89b9-4621af05128f" uname="mgraid-s000030311-0" type="normal" />
Apr 14 20:36:13 mgraid-S000030311-1 crm_attribute: [21156]: info: determine_host: Mapped mgraid-S000030311-1 to 856c1f72-7cd1-4906-8183-8be87eef96f2
Apr 14 20:36:13 mgraid-S000030311-1 crm_attribute: [21158]: debug: log_data_element: query_node_uuid: Result section   <node id="856c1f72-7cd1-4906-8183-8be87eef96f2" uname="mgraid-s000030311-1" type="normal" />
Apr 14 20:36:13 mgraid-S000030311-1 crm_attribute: [21150]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/attrd
Apr 14 20:36:13 mgraid-S000030311-1 crm_attribute: [21158]: debug: log_data_element: query_node_uuid: Result section </nodes>
Apr 14 20:36:13 mgraid-S000030311-1 crm_attribute: [21156]: info: attrd_lazy_update: Connecting to cluster... 5 retries remaining
Apr 14 20:36:13 mgraid-S000030311-1 crm_attribute: [21158]: info: determine_host: Mapped mgraid-S000030311-1 to 856c1f72-7cd1-4906-8183-8be87eef96f2
Apr 14 20:36:13 mgraid-S000030311-1 crm_attribute: [21150]: debug: attrd_update: Sent update: master-SSJ000030316:1=500 for mgraid-S000030311-1
Apr 14 20:36:13 mgraid-S000030311-1 crm_attribute: [21156]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/attrd
Apr 14 20:36:13 mgraid-S000030311-1 crm_attribute: [21158]: info: attrd_lazy_update: Connecting to cluster... 5 retries remaining
Apr 14 20:36:13 mgraid-S000030311-1 crm_attribute: [21150]: info: main: Update master-SSJ000030316:1=500 sent via attrd
Apr 14 20:36:13 mgraid-S000030311-1 crm_attribute: [21156]: debug: attrd_update: Sent update: master-SSJ000030313:1=500 for mgraid-S000030311-1
Apr 14 20:36:13 mgraid-S000030311-1 crm_attribute: [21158]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/attrd
Apr 14 20:36:13 mgraid-S000030311-1 attrd: [16634]: debug: attrd_local_callback: update message from crm_attribute: master-SSJ000030316:1=500
Apr 14 20:36:13 mgraid-S000030311-1 crm_attribute: [21156]: info: main: Update master-SSJ000030313:1=500 sent via attrd
Apr 14 20:36:13 mgraid-S000030311-1 crm_attribute: [21158]: debug: attrd_update: Sent update: master-SSJ000030312:1=500 for mgraid-S000030311-1
Apr 14 20:36:13 mgraid-S000030311-1 attrd: [16634]: debug: attrd_local_callback: Supplied: 500, Current: (null), Stored: (null)
Apr 14 20:36:13 mgraid-S000030311-1 crm_attribute: [21150]: debug: cib_native_signoff: Signing out of the CIB Service
Apr 14 20:36:13 mgraid-S000030311-1 crm_attribute: [21158]: info: main: Update master-SSJ000030312:1=500 sent via attrd
Apr 14 20:36:13 mgraid-S000030311-1 crm_attribute: [21156]: debug: cib_native_signoff: Signing out of the CIB Service
Apr 14 20:36:13 mgraid-S000030311-1 attrd: [16634]: debug: attrd_local_callback: New value of master-SSJ000030316:1 is 500
Apr 14 20:36:13 mgraid-S000030311-1 crm_attribute: [21158]: debug: cib_native_signoff: Signing out of the CIB Service
Apr 14 20:36:13 mgraid-S000030311-1 attrd: [16634]: info: attrd_trigger_update: Sending flush op to all hosts for: master-SSJ000030316:1 (500)
Apr 14 20:36:13 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='master-SSJ000030316:1'] does not exist
Apr 14 20:36:13 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: Processing cib_query op for /cib (/cib)
Apr 14 20:36:13 mgraid-S000030311-1 attrd: [16634]: info: attrd_perform_update: Sent update 448: master-SSJ000030316:1=500
Apr 14 20:36:13 mgraid-S000030311-1 attrd: [16634]: debug: xmlfromIPC: Peer disconnected
Apr 14 20:36:13 mgraid-S000030311-1 attrd: [16634]: debug: attrd_local_callback: update message from crm_attribute: master-SSJ000030313:1=500
Apr 14 20:36:13 mgraid-S000030311-1 attrd: [16634]: debug: attrd_local_callback: Supplied: 500, Current: (null), Stored: (null)
Apr 14 20:36:13 mgraid-S000030311-1 attrd: [16634]: debug: attrd_local_callback: New value of master-SSJ000030313:1 is 500
Apr 14 20:36:13 mgraid-S000030311-1 attrd: [16634]: info: attrd_trigger_update: Sending flush op to all hosts for: master-SSJ000030313:1 (500)
ss[18693]:	2011/04/14_20:36:13 DEBUG: SSS000030311: Exit code 0
ss[18695]:	2011/04/14_20:36:13 DEBUG: SSJ000030313: Exit code 0
ss[18700]:	2011/04/14_20:36:13 DEBUG: SSJ000030316: Exit code 0
ss[18703]:	2011/04/14_20:36:13 DEBUG: SSJ000030312: Exit code 0
Apr 14 20:36:13 mgraid-S000030311-1 crmd: [16635]: debug: te_update_diff: Processing diff (cib_modify): 0.79.2 -> 0.79.3 (S_INTEGRATION)
Apr 14 20:36:13 mgraid-S000030311-1 crmd: [16635]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify/cib_update_result/diff
Apr 14 20:36:13 mgraid-S000030311-1 crmd: [16635]: info: abort_transition_graph: te_update_diff:146 - Triggered transition abort (complete=0, tag=transient_attributes, id=856c1f72-7cd1-4906-8183-8be87eef96f2, magic=NA, cib=0.79.3) : Transient attribute: update
Apr 14 20:36:13 mgraid-S000030311-1 crmd: [16635]: debug: log_data_element: abort_transition_graph: Cause <transient_attributes id="856c1f72-7cd1-4906-8183-8be87eef96f2" >
Apr 14 20:36:13 mgraid-S000030311-1 crmd: [16635]: debug: log_data_element: abort_transition_graph: Cause   <instance_attributes id="status-856c1f72-7cd1-4906-8183-8be87eef96f2" >
Apr 14 20:36:13 mgraid-S000030311-1 crmd: [16635]: debug: log_data_element: abort_transition_graph: Cause     <nvpair id="status-856c1f72-7cd1-4906-8183-8be87eef96f2-master-SSJ000030316:1" name="master-SSJ000030316:1" value="500" __crm_diff_marker__="added:top" />
Apr 14 20:36:13 mgraid-S000030311-1 crmd: [16635]: debug: log_data_element: abort_transition_graph: Cause   </instance_attributes>
Apr 14 20:36:13 mgraid-S000030311-1 crmd: [16635]: debug: log_data_element: abort_transition_graph: Cause </transient_attributes>
Apr 14 20:36:13 mgraid-S000030311-1 crmd: [16635]: debug: run_graph: Transition 3 (Complete=29, Pending=10, Fired=0, Skipped=34, Incomplete=45, Source=/var/lib/pengine/pe-input-6.bz2): In-progress
ss[18693]:	2011/04/14_20:36:13 DEBUG: SSS000030311: Command output: 
Apr 14 20:36:13 mgraid-S000030311-1 lrmd: [16632]: info: RA output: (SSS000030311:1:start:stdout) 

ss[18695]:	2011/04/14_20:36:13 DEBUG: SSJ000030313: Command output: 
Apr 14 20:36:13 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='master-SSJ000030313:1'] does not exist
Apr 14 20:36:13 mgraid-S000030311-1 lrmd: [16632]: info: RA output: (SSJ000030313:1:start:stdout) 

Apr 14 20:36:13 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: Processing cib_query op for /cib (/cib)
Apr 14 20:36:13 mgraid-S000030311-1 attrd: [16634]: info: attrd_perform_update: Sent update 451: master-SSJ000030313:1=500
Apr 14 20:36:13 mgraid-S000030311-1 attrd: [16634]: debug: attrd_local_callback: update message from crm_attribute: master-SSJ000030312:1=500
Apr 14 20:36:13 mgraid-S000030311-1 attrd: [16634]: debug: attrd_local_callback: Supplied: 500, Current: (null), Stored: (null)
ss[18700]:	2011/04/14_20:36:13 DEBUG: SSJ000030316: Command output: 
Apr 14 20:36:13 mgraid-S000030311-1 attrd: [16634]: debug: attrd_local_callback: New value of master-SSJ000030312:1 is 500
Apr 14 20:36:13 mgraid-S000030311-1 lrmd: [16632]: info: RA output: (SSJ000030316:1:start:stdout) 

Apr 14 20:36:13 mgraid-S000030311-1 attrd: [16634]: info: attrd_trigger_update: Sending flush op to all hosts for: master-SSJ000030312:1 (500)
ss[18703]:	2011/04/14_20:36:13 DEBUG: SSJ000030312: Command output: 
Apr 14 20:36:13 mgraid-S000030311-1 crmd: [16635]: debug: te_update_diff: Processing diff (cib_modify): 0.79.3 -> 0.79.4 (S_INTEGRATION)
Apr 14 20:36:13 mgraid-S000030311-1 crmd: [16635]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify/cib_update_result/diff
Apr 14 20:36:13 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='master-SSJ000030312:1'] does not exist
Apr 14 20:36:13 mgraid-S000030311-1 lrmd: [16632]: info: RA output: (SSJ000030312:1:start:stdout) 

Apr 14 20:36:13 mgraid-S000030311-1 crmd: [16635]: info: abort_transition_graph: te_update_diff:146 - Triggered transition abort (complete=0, tag=transient_attributes, id=856c1f72-7cd1-4906-8183-8be87eef96f2, magic=NA, cib=0.79.4) : Transient attribute: update
Apr 14 20:36:13 mgraid-S000030311-1 crmd: [16635]: debug: log_data_element: abort_transition_graph: Cause <transient_attributes id="856c1f72-7cd1-4906-8183-8be87eef96f2" >
Apr 14 20:36:13 mgraid-S000030311-1 crmd: [16635]: debug: log_data_element: abort_transition_graph: Cause   <instance_attributes id="status-856c1f72-7cd1-4906-8183-8be87eef96f2" >
Apr 14 20:36:13 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: Processing cib_query op for /cib (/cib)
Apr 14 20:36:13 mgraid-S000030311-1 crmd: [16635]: debug: log_data_element: abort_transition_graph: Cause     <nvpair id="status-856c1f72-7cd1-4906-8183-8be87eef96f2-master-SSJ000030313:1" name="master-SSJ000030313:1" value="500" __crm_diff_marker__="added:top" />
Apr 14 20:36:13 mgraid-S000030311-1 crmd: [16635]: debug: log_data_element: abort_transition_graph: Cause   </instance_attributes>
Apr 14 20:36:13 mgraid-S000030311-1 crmd: [16635]: debug: log_data_element: abort_transition_graph: Cause </transient_attributes>
Apr 14 20:36:13 mgraid-S000030311-1 attrd: [16634]: info: attrd_perform_update: Sent update 454: master-SSJ000030312:1=500
Apr 14 20:36:13 mgraid-S000030311-1 crmd: [16635]: debug: run_graph: Transition 3 (Complete=29, Pending=10, Fired=0, Skipped=34, Incomplete=45, Source=/var/lib/pengine/pe-input-6.bz2): In-progress
Apr 14 20:36:13 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update 448 for master-SSJ000030316:1=500 passed
Apr 14 20:36:13 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update 451 for master-SSJ000030313:1=500 passed
ss[18695]:	2011/04/14_20:36:13 INFO: ss_start() END - 0
ss[18693]:	2011/04/14_20:36:13 INFO: ss_start() END - 0
Apr 14 20:36:13 mgraid-S000030311-1 crmd: [16635]: debug: te_update_diff: Processing diff (cib_modify): 0.79.4 -> 0.79.5 (S_INTEGRATION)
Apr 14 20:36:13 mgraid-S000030311-1 crmd: [16635]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify/cib_update_result/diff
ss[18700]:	2011/04/14_20:36:13 INFO: ss_start() END - 0
Apr 14 20:36:13 mgraid-S000030311-1 crmd: [16635]: info: abort_transition_graph: te_update_diff:146 - Triggered transition abort (complete=0, tag=transient_attributes, id=856c1f72-7cd1-4906-8183-8be87eef96f2, magic=NA, cib=0.79.5) : Transient attribute: update
Apr 14 20:36:13 mgraid-S000030311-1 crmd: [16635]: debug: log_data_element: abort_transition_graph: Cause <transient_attributes id="856c1f72-7cd1-4906-8183-8be87eef96f2" >
Apr 14 20:36:13 mgraid-S000030311-1 crmd: [16635]: debug: log_data_element: abort_transition_graph: Cause   <instance_attributes id="status-856c1f72-7cd1-4906-8183-8be87eef96f2" >
Apr 14 20:36:13 mgraid-S000030311-1 crmd: [16635]: debug: log_data_element: abort_transition_graph: Cause     <nvpair id="status-856c1f72-7cd1-4906-8183-8be87eef96f2-master-SSJ000030312:1" name="master-SSJ000030312:1" value="500" __crm_diff_marker__="added:top" />
Apr 14 20:36:13 mgraid-S000030311-1 crmd: [16635]: debug: log_data_element: abort_transition_graph: Cause   </instance_attributes>
Apr 14 20:36:13 mgraid-S000030311-1 crmd: [16635]: debug: log_data_element: abort_transition_graph: Cause </transient_attributes>
ss[18703]:	2011/04/14_20:36:13 INFO: ss_start() END - 0
Apr 14 20:36:13 mgraid-S000030311-1 crmd: [16635]: debug: run_graph: Transition 3 (Complete=29, Pending=10, Fired=0, Skipped=34, Incomplete=45, Source=/var/lib/pengine/pe-input-6.bz2): In-progress
Apr 14 20:36:13 mgraid-S000030311-1 lrmd: [16632]: info: Managed SSS000030311:1:start process 18693 exited with return code 0.
Apr 14 20:36:13 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update 454 for master-SSJ000030312:1=500 passed
Apr 14 20:36:13 mgraid-S000030311-1 lrmd: [16632]: info: Managed SSJ000030313:1:start process 18695 exited with return code 0.
Apr 14 20:36:13 mgraid-S000030311-1 ccm: [16630]: info: client (pid=20994) removed from ccm
Apr 14 20:36:13 mgraid-S000030311-1 lrmd: [16632]: info: Managed SSJ000030316:1:start process 18700 exited with return code 0.
Apr 14 20:36:13 mgraid-S000030311-1 lrmd: [16632]: info: Managed SSJ000030312:1:start process 18703 exited with return code 0.
Apr 14 20:36:13 mgraid-S000030311-1 crmd: [16635]: info: process_lrm_event: LRM operation SSS000030311:1_start_0 (call=13, rc=0, cib-update=206, confirmed=true) ok
Apr 14 20:36:13 mgraid-S000030311-1 crmd: [16635]: info: process_lrm_event: LRM operation SSJ000030313:1_start_0 (call=14, rc=0, cib-update=207, confirmed=true) ok
Apr 14 20:36:13 mgraid-S000030311-1 crmd: [16635]: info: process_lrm_event: LRM operation SSJ000030316:1_start_0 (call=15, rc=0, cib-update=208, confirmed=true) ok
Apr 14 20:36:13 mgraid-S000030311-1 crmd: [16635]: info: process_lrm_event: LRM operation SSJ000030312:1_start_0 (call=16, rc=0, cib-update=209, confirmed=true) ok
Apr 14 20:36:13 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_TE_CANCEL
Apr 14 20:36:13 mgraid-S000030311-1 crmd: [16635]: debug: do_te_invoke: Cancelling the transition: active
Apr 14 20:36:13 mgraid-S000030311-1 crmd: [16635]: info: abort_transition_graph: do_te_invoke:191 - Triggered transition abort (complete=0) : Peer Cancelled
Apr 14 20:36:13 mgraid-S000030311-1 crmd: [16635]: debug: register_fsa_input_adv: Stalling the FSA pending further input: cause=C_FSA_INTERNAL
Apr 14 20:36:13 mgraid-S000030311-1 crmd: [16635]: debug: s_crmd_fsa: Exiting the FSA: queue=0, fsa_actions=0x800000010000, stalled=true
Apr 14 20:36:13 mgraid-S000030311-1 crmd: [16635]: debug: run_graph: Transition 3 (Complete=29, Pending=10, Fired=0, Skipped=34, Incomplete=45, Source=/var/lib/pengine/pe-input-6.bz2): In-progress
Apr 14 20:36:13 mgraid-S000030311-1 crmd: [16635]: debug: te_update_diff: Processing diff (cib_modify): 0.79.5 -> 0.79.6 (S_INTEGRATION)
Apr 14 20:36:13 mgraid-S000030311-1 crmd: [16635]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify/cib_update_result/diff
Apr 14 20:36:13 mgraid-S000030311-1 crmd: [16635]: info: match_graph_event: Action SSS000030311:1_start_0 (36) confirmed on mgraid-s000030311-1 (rc=0)
Apr 14 20:36:13 mgraid-S000030311-1 crmd: [16635]: debug: run_graph: Transition 3 (Complete=30, Pending=9, Fired=0, Skipped=34, Incomplete=45, Source=/var/lib/pengine/pe-input-6.bz2): In-progress
Apr 14 20:36:13 mgraid-S000030311-1 crmd: [16635]: debug: te_update_diff: Processing diff (cib_modify): 0.79.6 -> 0.79.7 (S_INTEGRATION)
Apr 14 20:36:13 mgraid-S000030311-1 crmd: [16635]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify/cib_update_result/diff
Apr 14 20:36:13 mgraid-S000030311-1 crmd: [16635]: info: match_graph_event: Action SSJ000030313:1_start_0 (64) confirmed on mgraid-s000030311-1 (rc=0)
Apr 14 20:36:13 mgraid-S000030311-1 crmd: [16635]: debug: te_update_diff: Processing diff (cib_modify): 0.79.7 -> 0.79.8 (S_INTEGRATION)
Apr 14 20:36:13 mgraid-S000030311-1 crmd: [16635]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify/cib_update_result/diff
Apr 14 20:36:13 mgraid-S000030311-1 crmd: [16635]: info: match_graph_event: Action SSJ000030316:1_start_0 (94) confirmed on mgraid-s000030311-1 (rc=0)
Apr 14 20:36:13 mgraid-S000030311-1 crmd: [16635]: debug: run_graph: Transition 3 (Complete=32, Pending=7, Fired=0, Skipped=34, Incomplete=45, Source=/var/lib/pengine/pe-input-6.bz2): In-progress
Apr 14 20:36:13 mgraid-S000030311-1 crmd: [16635]: debug: te_update_diff: Processing diff (cib_modify): 0.79.8 -> 0.79.9 (S_INTEGRATION)
Apr 14 20:36:13 mgraid-S000030311-1 crmd: [16635]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify/cib_update_result/diff
Apr 14 20:36:13 mgraid-S000030311-1 crmd: [16635]: info: match_graph_event: Action SSJ000030312:1_start_0 (123) confirmed on mgraid-s000030311-1 (rc=0)
Apr 14 20:36:13 mgraid-S000030311-1 crmd: [16635]: debug: run_graph: Transition 3 (Complete=33, Pending=6, Fired=0, Skipped=34, Incomplete=45, Source=/var/lib/pengine/pe-input-6.bz2): In-progress
Apr 14 20:36:14 mgraid-S000030311-1 attrd: [16634]: info: attrd_ha_callback: flush message from mgraid-s000030311-0
Apr 14 20:36:14 mgraid-S000030311-1 crmd: [16635]: debug: te_update_diff: Processing diff (cib_modify): 0.79.9 -> 0.79.10 (S_INTEGRATION)
Apr 14 20:36:14 mgraid-S000030311-1 crmd: [16635]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify/cib_update_result/diff
Apr 14 20:36:14 mgraid-S000030311-1 crmd: [16635]: info: abort_transition_graph: te_update_diff:146 - Triggered transition abort (complete=0, tag=transient_attributes, id=f4e5e15c-d06b-4e37-89b9-4621af05128f, magic=NA, cib=0.79.10) : Transient attribute: update
Apr 14 20:36:14 mgraid-S000030311-1 crmd: [16635]: debug: log_data_element: abort_transition_graph: Cause <transient_attributes id="f4e5e15c-d06b-4e37-89b9-4621af05128f" >
Apr 14 20:36:14 mgraid-S000030311-1 crmd: [16635]: debug: log_data_element: abort_transition_graph: Cause   <instance_attributes id="status-f4e5e15c-d06b-4e37-89b9-4621af05128f" >
Apr 14 20:36:14 mgraid-S000030311-1 crmd: [16635]: debug: log_data_element: abort_transition_graph: Cause     <nvpair id="status-f4e5e15c-d06b-4e37-89b9-4621af05128f-master-SSS000030311:0" name="master-SSS000030311:0" value="500" __crm_diff_marker__="added:top" />
Apr 14 20:36:14 mgraid-S000030311-1 crmd: [16635]: debug: log_data_element: abort_transition_graph: Cause   </instance_attributes>
Apr 14 20:36:14 mgraid-S000030311-1 crmd: [16635]: debug: log_data_element: abort_transition_graph: Cause </transient_attributes>
Apr 14 20:36:14 mgraid-S000030311-1 crmd: [16635]: debug: te_update_diff: Processing diff (cib_modify): 0.79.10 -> 0.79.11 (S_INTEGRATION)
Apr 14 20:36:14 mgraid-S000030311-1 crmd: [16635]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify/cib_update_result/diff
Apr 14 20:36:14 mgraid-S000030311-1 crmd: [16635]: info: abort_transition_graph: te_update_diff:146 - Triggered transition abort (complete=0, tag=transient_attributes, id=f4e5e15c-d06b-4e37-89b9-4621af05128f, magic=NA, cib=0.79.11) : Transient attribute: update
Apr 14 20:36:14 mgraid-S000030311-1 crmd: [16635]: debug: log_data_element: abort_transition_graph: Cause <transient_attributes id="f4e5e15c-d06b-4e37-89b9-4621af05128f" >
Apr 14 20:36:14 mgraid-S000030311-1 crmd: [16635]: debug: log_data_element: abort_transition_graph: Cause   <instance_attributes id="status-f4e5e15c-d06b-4e37-89b9-4621af05128f" >
Apr 14 20:36:14 mgraid-S000030311-1 crmd: [16635]: debug: log_data_element: abort_transition_graph: Cause     <nvpair id="status-f4e5e15c-d06b-4e37-89b9-4621af05128f-master-SSJ000030313:0" name="master-SSJ000030313:0" value="500" __crm_diff_marker__="added:top" />
Apr 14 20:36:14 mgraid-S000030311-1 crmd: [16635]: debug: log_data_element: abort_transition_graph: Cause   </instance_attributes>
Apr 14 20:36:14 mgraid-S000030311-1 crmd: [16635]: debug: log_data_element: abort_transition_graph: Cause </transient_attributes>
Apr 14 20:36:14 mgraid-S000030311-1 crmd: [16635]: debug: run_graph: Transition 3 (Complete=33, Pending=6, Fired=0, Skipped=34, Incomplete=45, Source=/var/lib/pengine/pe-input-6.bz2): In-progress
Apr 14 20:36:14 mgraid-S000030311-1 crmd: [16635]: debug: te_update_diff: Processing diff (cib_modify): 0.79.11 -> 0.79.12 (S_INTEGRATION)
Apr 14 20:36:14 mgraid-S000030311-1 crmd: [16635]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify/cib_update_result/diff
Apr 14 20:36:14 mgraid-S000030311-1 crmd: [16635]: info: abort_transition_graph: te_update_diff:146 - Triggered transition abort (complete=0, tag=transient_attributes, id=f4e5e15c-d06b-4e37-89b9-4621af05128f, magic=NA, cib=0.79.12) : Transient attribute: update
Apr 14 20:36:14 mgraid-S000030311-1 crmd: [16635]: debug: log_data_element: abort_transition_graph: Cause <transient_attributes id="f4e5e15c-d06b-4e37-89b9-4621af05128f" >
Apr 14 20:36:14 mgraid-S000030311-1 crmd: [16635]: debug: log_data_element: abort_transition_graph: Cause   <instance_attributes id="status-f4e5e15c-d06b-4e37-89b9-4621af05128f" >
Apr 14 20:36:14 mgraid-S000030311-1 crmd: [16635]: debug: log_data_element: abort_transition_graph: Cause     <nvpair id="status-f4e5e15c-d06b-4e37-89b9-4621af05128f-master-SSJ000030316:0" name="master-SSJ000030316:0" value="500" __crm_diff_marker__="added:top" />
Apr 14 20:36:14 mgraid-S000030311-1 crmd: [16635]: debug: log_data_element: abort_transition_graph: Cause   </instance_attributes>
Apr 14 20:36:14 mgraid-S000030311-1 crmd: [16635]: debug: log_data_element: abort_transition_graph: Cause </transient_attributes>
Apr 14 20:36:14 mgraid-S000030311-1 crmd: [16635]: debug: run_graph: Transition 3 (Complete=33, Pending=6, Fired=0, Skipped=34, Incomplete=45, Source=/var/lib/pengine/pe-input-6.bz2): In-progress
Apr 14 20:36:14 mgraid-S000030311-1 cib: [16631]: WARN: cib_process_diff: Diff 0.79.6 -> 0.79.7 not applied to 0.79.13: current "num_updates" is greater than required
Apr 14 20:36:14 mgraid-S000030311-1 crmd: [16635]: debug: te_update_diff: Processing diff (cib_modify): 0.79.12 -> 0.79.13 (S_INTEGRATION)
Apr 14 20:36:14 mgraid-S000030311-1 crmd: [16635]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify/cib_update_result/diff
Apr 14 20:36:14 mgraid-S000030311-1 crmd: [16635]: info: abort_transition_graph: te_update_diff:146 - Triggered transition abort (complete=0, tag=transient_attributes, id=f4e5e15c-d06b-4e37-89b9-4621af05128f, magic=NA, cib=0.79.13) : Transient attribute: update
Apr 14 20:36:14 mgraid-S000030311-1 crmd: [16635]: debug: log_data_element: abort_transition_graph: Cause <transient_attributes id="f4e5e15c-d06b-4e37-89b9-4621af05128f" >
Apr 14 20:36:14 mgraid-S000030311-1 crmd: [16635]: debug: log_data_element: abort_transition_graph: Cause   <instance_attributes id="status-f4e5e15c-d06b-4e37-89b9-4621af05128f" >
Apr 14 20:36:14 mgraid-S000030311-1 crmd: [16635]: debug: log_data_element: abort_transition_graph: Cause     <nvpair id="status-f4e5e15c-d06b-4e37-89b9-4621af05128f-master-SSJ000030312:0" name="master-SSJ000030312:0" value="500" __crm_diff_marker__="added:top" />
Apr 14 20:36:14 mgraid-S000030311-1 crmd: [16635]: debug: log_data_element: abort_transition_graph: Cause   </instance_attributes>
Apr 14 20:36:14 mgraid-S000030311-1 crmd: [16635]: debug: log_data_element: abort_transition_graph: Cause </transient_attributes>
Apr 14 20:36:14 mgraid-S000030311-1 crmd: [16635]: debug: run_graph: Transition 3 (Complete=33, Pending=6, Fired=0, Skipped=34, Incomplete=45, Source=/var/lib/pengine/pe-input-6.bz2): In-progress
Apr 14 20:36:14 mgraid-S000030311-1 cib: [16631]: WARN: cib_process_diff: Diff 0.79.7 -> 0.79.8 not applied to 0.79.13: current "num_updates" is greater than required
Apr 14 20:36:14 mgraid-S000030311-1 ccm: [16630]: info: client (pid=21221) removed from ccm
Apr 14 20:36:14 mgraid-S000030311-1 cib: [16631]: WARN: cib_process_diff: Diff 0.79.8 -> 0.79.9 not applied to 0.79.13: current "num_updates" is greater than required
Apr 14 20:36:14 mgraid-S000030311-1 cib: [16631]: WARN: cib_process_diff: Diff 0.79.9 -> 0.79.10 not applied to 0.79.13: current "num_updates" is greater than required
Apr 14 20:36:14 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='master-SSS000030311:0'] does not exist
Apr 14 20:36:14 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for master-SSS000030311:0=(null) passed
Apr 14 20:36:14 mgraid-S000030311-1 attrd: [16634]: info: attrd_ha_callback: flush message from mgraid-s000030311-0
Apr 14 20:36:14 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='master-SSJ000030313:0'] does not exist
Apr 14 20:36:14 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for master-SSJ000030313:0=(null) passed
Apr 14 20:36:14 mgraid-S000030311-1 attrd: [16634]: info: attrd_ha_callback: flush message from mgraid-s000030311-0
Apr 14 20:36:14 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='master-SSJ000030316:0'] does not exist
Apr 14 20:36:14 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for master-SSJ000030316:0=(null) passed
Apr 14 20:36:14 mgraid-S000030311-1 attrd: [16634]: info: attrd_ha_callback: flush message from mgraid-s000030311-0
Apr 14 20:36:14 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='master-SSJ000030312:0'] does not exist
Apr 14 20:36:14 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for master-SSJ000030312:0=(null) passed
ss[19197]:	2011/04/14_20:36:14 DEBUG: SSJ000030314: Calling /usr/bin/pgrep -f /opt/omneon/bin/SliceServer -d -c/var/omneon/config/config.J000030314 -i1
ss[19197]:	2011/04/14_20:36:14 DEBUG: SSJ000030314: Exit code 0
ss[19197]:	2011/04/14_20:36:14 DEBUG: SSJ000030314: Command output: 19289
Apr 14 20:36:14 mgraid-S000030311-1 lrmd: [16632]: info: RA output: (SSJ000030314:1:start:stdout) 19289

ss[19197]:	2011/04/14_20:36:14 DEBUG: ss_status() START SSJ000030314
ss[19197]:	2011/04/14_20:36:14 DEBUG: ss_set_status_variables() START SSJ000030314
ss[19197]:	2011/04/14_20:36:14 DEBUG: ss_set_status_variables() ssadm return is 0
ss[19197]:	2011/04/14_20:36:14 DEBUG: ss_set_status_variables() ssadm SS_STATE is: SHADOW
ss[19197]:	2011/04/14_20:36:14 DEBUG: ss_set_status_variables() END SSJ000030314   - SHADOW
ss[19197]:	2011/04/14_20:36:14 DEBUG: ss_status() returning 0
ss[19197]:	2011/04/14_20:36:14 DEBUG: SSJ000030314: Calling //sbin/crm_master -Q -l reboot -v 500
Apr 14 20:36:14 mgraid-S000030311-1 crm_attribute: [21270]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/cib_rw
Apr 14 20:36:14 mgraid-S000030311-1 crm_attribute: [21270]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/cib_callback
Apr 14 20:36:14 mgraid-S000030311-1 crm_attribute: [21270]: debug: cib_native_signon_raw: Connection to CIB successful
Apr 14 20:36:14 mgraid-S000030311-1 crm_attribute: [21270]: debug: log_data_element: query_node_uuid: Result section <nodes >
Apr 14 20:36:14 mgraid-S000030311-1 crm_attribute: [21270]: debug: log_data_element: query_node_uuid: Result section   <node id="f4e5e15c-d06b-4e37-89b9-4621af05128f" uname="mgraid-s000030311-0" type="normal" />
Apr 14 20:36:14 mgraid-S000030311-1 crm_attribute: [21270]: debug: log_data_element: query_node_uuid: Result section   <node id="856c1f72-7cd1-4906-8183-8be87eef96f2" uname="mgraid-s000030311-1" type="normal" />
Apr 14 20:36:14 mgraid-S000030311-1 crm_attribute: [21270]: debug: log_data_element: query_node_uuid: Result section </nodes>
Apr 14 20:36:14 mgraid-S000030311-1 crm_attribute: [21270]: info: determine_host: Mapped mgraid-S000030311-1 to 856c1f72-7cd1-4906-8183-8be87eef96f2
Apr 14 20:36:14 mgraid-S000030311-1 crm_attribute: [21270]: info: attrd_lazy_update: Connecting to cluster... 5 retries remaining
Apr 14 20:36:14 mgraid-S000030311-1 crm_attribute: [21270]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/attrd
Apr 14 20:36:14 mgraid-S000030311-1 attrd: [16634]: debug: attrd_local_callback: update message from crm_attribute: master-SSJ000030314:1=500
Apr 14 20:36:14 mgraid-S000030311-1 attrd: [16634]: debug: attrd_local_callback: Supplied: 500, Current: (null), Stored: (null)
Apr 14 20:36:14 mgraid-S000030311-1 attrd: [16634]: debug: attrd_local_callback: New value of master-SSJ000030314:1 is 500
Apr 14 20:36:14 mgraid-S000030311-1 attrd: [16634]: info: attrd_trigger_update: Sending flush op to all hosts for: master-SSJ000030314:1 (500)
Apr 14 20:36:14 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='master-SSJ000030314:1'] does not exist
Apr 14 20:36:14 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: Processing cib_query op for /cib (/cib)
Apr 14 20:36:14 mgraid-S000030311-1 attrd: [16634]: info: attrd_perform_update: Sent update 461: master-SSJ000030314:1=500
Apr 14 20:36:14 mgraid-S000030311-1 crm_attribute: [21270]: debug: attrd_update: Sent update: master-SSJ000030314:1=500 for mgraid-S000030311-1
Apr 14 20:36:14 mgraid-S000030311-1 crm_attribute: [21270]: info: main: Update master-SSJ000030314:1=500 sent via attrd
Apr 14 20:36:14 mgraid-S000030311-1 crm_attribute: [21270]: debug: cib_native_signoff: Signing out of the CIB Service
Apr 14 20:36:14 mgraid-S000030311-1 crmd: [16635]: debug: te_update_diff: Processing diff (cib_modify): 0.79.13 -> 0.79.14 (S_INTEGRATION)
Apr 14 20:36:14 mgraid-S000030311-1 crmd: [16635]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify/cib_update_result/diff
Apr 14 20:36:14 mgraid-S000030311-1 crmd: [16635]: info: abort_transition_graph: te_update_diff:146 - Triggered transition abort (complete=0, tag=transient_attributes, id=856c1f72-7cd1-4906-8183-8be87eef96f2, magic=NA, cib=0.79.14) : Transient attribute: update
Apr 14 20:36:14 mgraid-S000030311-1 crmd: [16635]: debug: log_data_element: abort_transition_graph: Cause <transient_attributes id="856c1f72-7cd1-4906-8183-8be87eef96f2" >
Apr 14 20:36:14 mgraid-S000030311-1 crmd: [16635]: debug: log_data_element: abort_transition_graph: Cause   <instance_attributes id="status-856c1f72-7cd1-4906-8183-8be87eef96f2" >
Apr 14 20:36:14 mgraid-S000030311-1 crmd: [16635]: debug: log_data_element: abort_transition_graph: Cause     <nvpair id="status-856c1f72-7cd1-4906-8183-8be87eef96f2-master-SSJ000030314:1" name="master-SSJ000030314:1" value="500" __crm_diff_marker__="added:top" />
Apr 14 20:36:14 mgraid-S000030311-1 crmd: [16635]: debug: log_data_element: abort_transition_graph: Cause   </instance_attributes>
Apr 14 20:36:14 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update 461 for master-SSJ000030314:1=500 passed
Apr 14 20:36:14 mgraid-S000030311-1 crmd: [16635]: debug: log_data_element: abort_transition_graph: Cause </transient_attributes>
ss[19197]:	2011/04/14_20:36:14 DEBUG: SSJ000030314: Exit code 0
Apr 14 20:36:14 mgraid-S000030311-1 crmd: [16635]: debug: run_graph: Transition 3 (Complete=33, Pending=6, Fired=0, Skipped=34, Incomplete=45, Source=/var/lib/pengine/pe-input-6.bz2): In-progress
ss[19197]:	2011/04/14_20:36:14 DEBUG: SSJ000030314: Command output: 
Apr 14 20:36:14 mgraid-S000030311-1 lrmd: [16632]: info: RA output: (SSJ000030314:1:start:stdout) 

ss[19197]:	2011/04/14_20:36:14 INFO: ss_start() END - 0
Apr 14 20:36:14 mgraid-S000030311-1 lrmd: [16632]: info: Managed SSJ000030314:1:start process 19197 exited with return code 0.
Apr 14 20:36:14 mgraid-S000030311-1 crmd: [16635]: info: process_lrm_event: LRM operation SSJ000030314:1_start_0 (call=17, rc=0, cib-update=210, confirmed=true) ok
Apr 14 20:36:14 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_TE_CANCEL
Apr 14 20:36:14 mgraid-S000030311-1 crmd: [16635]: debug: do_te_invoke: Cancelling the transition: active
Apr 14 20:36:14 mgraid-S000030311-1 crmd: [16635]: info: abort_transition_graph: do_te_invoke:191 - Triggered transition abort (complete=0) : Peer Cancelled
Apr 14 20:36:14 mgraid-S000030311-1 crmd: [16635]: debug: register_fsa_input_adv: Stalling the FSA pending further input: cause=C_FSA_INTERNAL
Apr 14 20:36:14 mgraid-S000030311-1 crmd: [16635]: debug: s_crmd_fsa: Exiting the FSA: queue=0, fsa_actions=0x800000010000, stalled=true
Apr 14 20:36:14 mgraid-S000030311-1 crmd: [16635]: debug: run_graph: Transition 3 (Complete=33, Pending=6, Fired=0, Skipped=34, Incomplete=45, Source=/var/lib/pengine/pe-input-6.bz2): In-progress
Apr 14 20:36:14 mgraid-S000030311-1 crmd: [16635]: debug: te_update_diff: Processing diff (cib_modify): 0.79.14 -> 0.79.15 (S_INTEGRATION)
Apr 14 20:36:14 mgraid-S000030311-1 crmd: [16635]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify/cib_update_result/diff
Apr 14 20:36:14 mgraid-S000030311-1 crmd: [16635]: info: match_graph_event: Action SSJ000030314:1_start_0 (151) confirmed on mgraid-s000030311-1 (rc=0)
Apr 14 20:36:14 mgraid-S000030311-1 crmd: [16635]: debug: run_graph: Transition 3 (Complete=34, Pending=5, Fired=0, Skipped=34, Incomplete=45, Source=/var/lib/pengine/pe-input-6.bz2): In-progress
Apr 14 20:36:14 mgraid-S000030311-1 cib: [16631]: debug: sync_our_cib: Syncing CIB to mgraid-s000030311-0
Apr 14 20:36:14 mgraid-S000030311-1 cib: [16631]: info: cib_process_request: Operation complete: op cib_sync_one for section 'all' (origin=mgraid-s000030311-0/mgraid-s000030311-0/(null), version=0.79.15): ok (rc=0)
Apr 14 20:36:15 mgraid-S000030311-1 ccm: [16630]: info: client (pid=21230) removed from ccm
Apr 14 20:36:15 mgraid-S000030311-1 attrd: [16634]: info: attrd_ha_callback: flush message from mgraid-s000030311-0
Apr 14 20:36:15 mgraid-S000030311-1 cib: [16631]: WARN: cib_process_diff: Diff 0.79.10 -> 0.79.11 not applied to 0.79.16: current "num_updates" is greater than required
Apr 14 20:36:15 mgraid-S000030311-1 crmd: [16635]: debug: te_update_diff: Processing diff (cib_modify): 0.79.15 -> 0.79.16 (S_INTEGRATION)
Apr 14 20:36:15 mgraid-S000030311-1 crmd: [16635]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify/cib_update_result/diff
Apr 14 20:36:15 mgraid-S000030311-1 crmd: [16635]: info: abort_transition_graph: te_update_diff:146 - Triggered transition abort (complete=0, tag=transient_attributes, id=f4e5e15c-d06b-4e37-89b9-4621af05128f, magic=NA, cib=0.79.16) : Transient attribute: update
Apr 14 20:36:15 mgraid-S000030311-1 crmd: [16635]: debug: log_data_element: abort_transition_graph: Cause <transient_attributes id="f4e5e15c-d06b-4e37-89b9-4621af05128f" >
Apr 14 20:36:15 mgraid-S000030311-1 crmd: [16635]: debug: log_data_element: abort_transition_graph: Cause   <instance_attributes id="status-f4e5e15c-d06b-4e37-89b9-4621af05128f" >
Apr 14 20:36:15 mgraid-S000030311-1 crmd: [16635]: debug: log_data_element: abort_transition_graph: Cause     <nvpair id="status-f4e5e15c-d06b-4e37-89b9-4621af05128f-master-SSJ000030314:0" name="master-SSJ000030314:0" value="500" __crm_diff_marker__="added:top" />
Apr 14 20:36:15 mgraid-S000030311-1 crmd: [16635]: debug: log_data_element: abort_transition_graph: Cause   </instance_attributes>
Apr 14 20:36:15 mgraid-S000030311-1 crmd: [16635]: debug: log_data_element: abort_transition_graph: Cause </transient_attributes>
Apr 14 20:36:15 mgraid-S000030311-1 crmd: [16635]: debug: run_graph: Transition 3 (Complete=34, Pending=5, Fired=0, Skipped=34, Incomplete=45, Source=/var/lib/pengine/pe-input-6.bz2): In-progress
Apr 14 20:36:16 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='master-SSJ000030314:0'] does not exist
Apr 14 20:36:16 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for master-SSJ000030314:0=(null) passed
Apr 14 20:36:16 mgraid-S000030311-1 attrd: [16634]: info: attrd_ha_callback: flush message from mgraid-s000030311-0
Apr 14 20:36:16 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='master-SSJ000030316:0'] does not exist
Apr 14 20:36:16 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for master-SSJ000030316:0=(null) passed
Apr 14 20:36:16 mgraid-S000030311-1 attrd: [16634]: info: attrd_ha_callback: flush message from mgraid-s000030311-0
Apr 14 20:36:16 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='master-SSJ000030313:0'] does not exist
Apr 14 20:36:16 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for master-SSJ000030313:0=(null) passed
Apr 14 20:36:16 mgraid-S000030311-1 attrd: [16634]: info: attrd_ha_callback: flush message from mgraid-s000030311-0
Apr 14 20:36:16 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='master-SSJ000030316:1'] (/cib/status/node_state[2]/transient_attributes/instance_attributes/nvpair[3])
Apr 14 20:36:16 mgraid-S000030311-1 attrd: [16634]: debug: log_data_element: find_nvpair_attr: Match <nvpair id="status-856c1f72-7cd1-4906-8183-8be87eef96f2-master-SSJ000030316:1" name="master-SSJ000030316:1" value="500" />
Apr 14 20:36:16 mgraid-S000030311-1 attrd: [16634]: info: attrd_ha_callback: flush message from mgraid-s000030311-0
Apr 14 20:36:16 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='master-SSS000030311:0'] does not exist
Apr 14 20:36:16 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for master-SSS000030311:0=(null) passed
Apr 14 20:36:16 mgraid-S000030311-1 attrd: [16634]: info: attrd_ha_callback: flush message from mgraid-s000030311-0
Apr 14 20:36:16 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='master-SSJ000030313:1'] (/cib/status/node_state[2]/transient_attributes/instance_attributes/nvpair[4])
Apr 14 20:36:16 mgraid-S000030311-1 attrd: [16634]: debug: log_data_element: find_nvpair_attr: Match <nvpair id="status-856c1f72-7cd1-4906-8183-8be87eef96f2-master-SSJ000030313:1" name="master-SSJ000030313:1" value="500" />
Apr 14 20:36:16 mgraid-S000030311-1 attrd: [16634]: info: attrd_ha_callback: flush message from mgraid-s000030311-0
Apr 14 20:36:16 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='terminate'] does not exist
Apr 14 20:36:16 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for terminate=(null) passed
Apr 14 20:36:16 mgraid-S000030311-1 attrd: [16634]: info: attrd_ha_callback: flush message from mgraid-s000030311-0
Apr 14 20:36:16 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='master-SSJ000030312:0'] does not exist
Apr 14 20:36:16 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for master-SSJ000030312:0=(null) passed
Apr 14 20:36:16 mgraid-S000030311-1 attrd: [16634]: info: attrd_ha_callback: flush message from mgraid-s000030311-0
Apr 14 20:36:16 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='master-SSS000030311:1'] (/cib/status/node_state[2]/transient_attributes/instance_attributes/nvpair[2])
Apr 14 20:36:16 mgraid-S000030311-1 attrd: [16634]: debug: log_data_element: find_nvpair_attr: Match <nvpair id="status-856c1f72-7cd1-4906-8183-8be87eef96f2-master-SSS000030311:1" name="master-SSS000030311:1" value="500" />
Apr 14 20:36:16 mgraid-S000030311-1 attrd: [16634]: info: attrd_ha_callback: flush message from mgraid-s000030311-0
Apr 14 20:36:16 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='master-SSJ000030314:0'] does not exist
Apr 14 20:36:16 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for master-SSJ000030314:0=(null) passed
Apr 14 20:36:16 mgraid-S000030311-1 attrd: [16634]: info: attrd_ha_callback: flush message from mgraid-s000030311-0
Apr 14 20:36:16 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='master-SSJ000030312:1'] (/cib/status/node_state[2]/transient_attributes/instance_attributes/nvpair[5])
Apr 14 20:36:16 mgraid-S000030311-1 attrd: [16634]: debug: log_data_element: find_nvpair_attr: Match <nvpair id="status-856c1f72-7cd1-4906-8183-8be87eef96f2-master-SSJ000030312:1" name="master-SSJ000030312:1" value="500" />
Apr 14 20:36:16 mgraid-S000030311-1 attrd: [16634]: info: attrd_ha_callback: flush message from mgraid-s000030311-0
Apr 14 20:36:16 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='shutdown'] does not exist
Apr 14 20:36:16 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update -22 for shutdown=(null) passed
Apr 14 20:36:16 mgraid-S000030311-1 attrd: [16634]: info: attrd_ha_callback: flush message from mgraid-s000030311-0
Apr 14 20:36:16 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='master-SSJ000030314:1'] (/cib/status/node_state[2]/transient_attributes/instance_attributes/nvpair[6])
Apr 14 20:36:16 mgraid-S000030311-1 attrd: [16634]: debug: log_data_element: find_nvpair_attr: Match <nvpair id="status-856c1f72-7cd1-4906-8183-8be87eef96f2-master-SSJ000030314:1" name="master-SSJ000030314:1" value="500" />
Apr 14 20:36:16 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update 466 for master-SSJ000030316:1=500 passed
Apr 14 20:36:16 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update 469 for master-SSJ000030313:1=500 passed
Apr 14 20:36:16 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update 473 for master-SSS000030311:1=500 passed
Apr 14 20:36:16 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update 476 for master-SSJ000030312:1=500 passed
Apr 14 20:36:16 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update 479 for master-SSJ000030314:1=500 passed
Apr 14 20:36:16 mgraid-S000030311-1 attrd: [16634]: info: attrd_ha_callback: flush message from mgraid-s000030311-0
Apr 14 20:36:16 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='856c1f72-7cd1-4906-8183-8be87eef96f2']//nvpair[@name='probe_complete'] (/cib/status/node_state[2]/transient_attributes/instance_attributes/nvpair[1])
Apr 14 20:36:16 mgraid-S000030311-1 attrd: [16634]: debug: log_data_element: find_nvpair_attr: Match <nvpair id="status-856c1f72-7cd1-4906-8183-8be87eef96f2-probe_complete" name="probe_complete" value="true" />
Apr 14 20:36:16 mgraid-S000030311-1 attrd: [16634]: debug: attrd_cib_callback: Update 481 for probe_complete=true passed
Apr 14 20:36:16 mgraid-S000030311-1 ccm: [16630]: info: client (pid=21291) removed from ccm
Apr 14 20:36:17 mgraid-S000030311-1 ccm: [16630]: info: client (pid=21294) removed from ccm
Apr 14 20:36:18 mgraid-S000030311-1 lrmd: [16632]: debug: rsc:omserver:1:10: monitor
Apr 14 20:36:18 mgraid-S000030311-1 lrmd: [16632]: debug: rsc:icms:1:9: monitor
Apr 14 20:36:18 mgraid-S000030311-1 ccm: [16630]: info: client (pid=21343) removed from ccm
Apr 14 20:36:23 mgraid-S000030311-1 lrmd: [16632]: debug: rsc:omserver:1:10: monitor
Apr 14 20:36:23 mgraid-S000030311-1 lrmd: [16632]: debug: rsc:icms:1:9: monitor
Apr 14 20:36:28 mgraid-S000030311-1 lrmd: [16632]: debug: rsc:icms:1:9: monitor
Apr 14 20:36:28 mgraid-S000030311-1 lrmd: [16632]: debug: rsc:omserver:1:10: monitor
Apr 14 20:36:33 mgraid-S000030311-1 lrmd: [16632]: debug: rsc:icms:1:9: monitor
Apr 14 20:36:33 mgraid-S000030311-1 lrmd: [16632]: debug: rsc:omserver:1:10: monitor
Apr 14 20:36:38 mgraid-S000030311-1 lrmd: [16632]: debug: rsc:icms:1:9: monitor
Apr 14 20:36:38 mgraid-S000030311-1 lrmd: [16632]: debug: rsc:omserver:1:10: monitor
Apr 14 20:36:43 mgraid-S000030311-1 lrmd: [16632]: debug: rsc:icms:1:9: monitor
Apr 14 20:36:43 mgraid-S000030311-1 lrmd: [16632]: debug: rsc:omserver:1:10: monitor
Apr 14 20:36:48 mgraid-S000030311-1 lrmd: [16632]: debug: rsc:icms:1:9: monitor
Apr 14 20:36:48 mgraid-S000030311-1 lrmd: [16632]: debug: rsc:omserver:1:10: monitor
Apr 14 20:36:53 mgraid-S000030311-1 lrmd: [16632]: debug: rsc:omserver:1:10: monitor
Apr 14 20:36:53 mgraid-S000030311-1 lrmd: [16632]: debug: rsc:icms:1:9: monitor
Apr 14 20:36:58 mgraid-S000030311-1 lrmd: [16632]: debug: rsc:omserver:1:10: monitor
Apr 14 20:36:58 mgraid-S000030311-1 lrmd: [16632]: debug: rsc:icms:1:9: monitor
Apr 14 20:37:03 mgraid-S000030311-1 lrmd: [16632]: debug: rsc:omserver:1:10: monitor
Apr 14 20:37:03 mgraid-S000030311-1 lrmd: [16632]: debug: rsc:icms:1:9: monitor
Apr 14 20:37:07 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_COUNT
Apr 14 20:37:07 mgraid-S000030311-1 crmd: [16635]: debug: do_election_count_vote: Created voted hash
Apr 14 20:37:07 mgraid-S000030311-1 crmd: [16635]: info: do_election_count_vote: Election 3 (owner: f4e5e15c-d06b-4e37-89b9-4621af05128f) pass: vote from mgraid-s000030311-0 (Age)
Apr 14 20:37:07 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_CHECK
Apr 14 20:37:07 mgraid-S000030311-1 crmd: [16635]: debug: do_election_check: Ignore election check: we not in an election
Apr 14 20:37:07 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_TE_CANCEL
Apr 14 20:37:07 mgraid-S000030311-1 crmd: [16635]: debug: do_te_invoke: Cancelling the transition: active
Apr 14 20:37:07 mgraid-S000030311-1 crmd: [16635]: info: abort_transition_graph: do_te_invoke:191 - Triggered transition abort (complete=0) : Peer Cancelled
Apr 14 20:37:07 mgraid-S000030311-1 crmd: [16635]: debug: register_fsa_input_adv: Stalling the FSA pending further input: cause=C_FSA_INTERNAL
Apr 14 20:37:07 mgraid-S000030311-1 crmd: [16635]: WARN: register_fsa_input_adv: do_te_invoke stalled the FSA with pending inputs
Apr 14 20:37:07 mgraid-S000030311-1 crmd: [16635]: debug: fsa_dump_queue: queue[0(186)]: input I_ELECTION raised by do_election_count_vote()	(cause=C_FSA_INTERNAL)
Apr 14 20:37:07 mgraid-S000030311-1 crmd: [16635]: debug: s_crmd_fsa: Exiting the FSA: queue=1, fsa_actions=0x800000010000, stalled=true
Apr 14 20:37:07 mgraid-S000030311-1 crmd: [16635]: debug: fsa_dump_queue: queue[0(186)]: input I_ELECTION raised by do_election_count_vote()	(cause=C_FSA_INTERNAL)
Apr 14 20:37:07 mgraid-S000030311-1 crmd: [16635]: debug: s_crmd_fsa: Processing I_ELECTION: [ state=S_INTEGRATION cause=C_FSA_INTERNAL origin=do_election_count_vote ]
Apr 14 20:37:07 mgraid-S000030311-1 crmd: [16635]: info: do_state_transition: State transition S_INTEGRATION -> S_ELECTION [ input=I_ELECTION cause=C_FSA_INTERNAL origin=do_election_count_vote ]
Apr 14 20:37:07 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
Apr 14 20:37:07 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_INTEGRATE_TIMER_STOP
Apr 14 20:37:07 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_FINALIZE_TIMER_STOP
Apr 14 20:37:07 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_VOTE
Apr 14 20:37:07 mgraid-S000030311-1 crmd: [16635]: debug: do_election_vote: Started election 39
Apr 14 20:37:07 mgraid-S000030311-1 crmd: [16635]: debug: crm_timer_start: Started Election Timeout (I_ELECTION_DC:120000ms), src=225
Apr 14 20:37:07 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_TE_CANCEL
Apr 14 20:37:07 mgraid-S000030311-1 crmd: [16635]: debug: do_te_invoke: Cancelling the transition: active
Apr 14 20:37:07 mgraid-S000030311-1 crmd: [16635]: info: abort_transition_graph: do_te_invoke:191 - Triggered transition abort (complete=0) : Peer Cancelled
Apr 14 20:37:07 mgraid-S000030311-1 crmd: [16635]: debug: register_fsa_input_adv: Stalling the FSA pending further input: cause=C_FSA_INTERNAL
Apr 14 20:37:07 mgraid-S000030311-1 crmd: [16635]: debug: s_crmd_fsa: Exiting the FSA: queue=0, fsa_actions=0x800000010000, stalled=true
Apr 14 20:37:07 mgraid-S000030311-1 crmd: [16635]: debug: run_graph: Transition 3 (Complete=34, Pending=5, Fired=0, Skipped=34, Incomplete=45, Source=/var/lib/pengine/pe-input-6.bz2): In-progress
Apr 14 20:37:08 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_COUNT
Apr 14 20:37:08 mgraid-S000030311-1 crmd: [16635]: debug: do_election_count_vote: Created voted hash
Apr 14 20:37:08 mgraid-S000030311-1 crmd: [16635]: debug: do_election_count_vote: Election 39 (current: 39, owner: 856c1f72-7cd1-4906-8183-8be87eef96f2): Processed vote from mgraid-s000030311-1 (Recorded)
Apr 14 20:37:08 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_CHECK
Apr 14 20:37:08 mgraid-S000030311-1 crmd: [16635]: debug: do_election_check: Still waiting on 1 non-votes (2 total)
Apr 14 20:37:08 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_TE_CANCEL
Apr 14 20:37:08 mgraid-S000030311-1 crmd: [16635]: debug: do_te_invoke: Cancelling the transition: active
Apr 14 20:37:08 mgraid-S000030311-1 crmd: [16635]: info: abort_transition_graph: do_te_invoke:191 - Triggered transition abort (complete=0) : Peer Cancelled
Apr 14 20:37:08 mgraid-S000030311-1 crmd: [16635]: debug: register_fsa_input_adv: Stalling the FSA pending further input: cause=C_FSA_INTERNAL
Apr 14 20:37:08 mgraid-S000030311-1 crmd: [16635]: debug: s_crmd_fsa: Exiting the FSA: queue=0, fsa_actions=0x800000010000, stalled=true
Apr 14 20:37:08 mgraid-S000030311-1 crmd: [16635]: debug: run_graph: Transition 3 (Complete=34, Pending=5, Fired=0, Skipped=34, Incomplete=45, Source=/var/lib/pengine/pe-input-6.bz2): In-progress
Apr 14 20:37:08 mgraid-S000030311-1 lrmd: [16632]: debug: rsc:omserver:1:10: monitor
Apr 14 20:37:08 mgraid-S000030311-1 lrmd: [16632]: debug: rsc:icms:1:9: monitor
Apr 14 20:37:08 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_COUNT
Apr 14 20:37:08 mgraid-S000030311-1 crmd: [16635]: debug: do_election_count_vote: Election 39 (current: 39, owner: 856c1f72-7cd1-4906-8183-8be87eef96f2): Processed no-vote from mgraid-s000030311-0 (Recorded)
Apr 14 20:37:08 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_CHECK
Apr 14 20:37:08 mgraid-S000030311-1 crmd: [16635]: debug: do_election_check: Destroying voted hash
Apr 14 20:37:08 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_TE_CANCEL
Apr 14 20:37:08 mgraid-S000030311-1 crmd: [16635]: debug: do_te_invoke: Cancelling the transition: active
Apr 14 20:37:08 mgraid-S000030311-1 crmd: [16635]: info: abort_transition_graph: do_te_invoke:191 - Triggered transition abort (complete=0) : Peer Cancelled
Apr 14 20:37:08 mgraid-S000030311-1 crmd: [16635]: debug: register_fsa_input_adv: Stalling the FSA pending further input: cause=C_FSA_INTERNAL
Apr 14 20:37:08 mgraid-S000030311-1 crmd: [16635]: WARN: register_fsa_input_adv: do_te_invoke stalled the FSA with pending inputs
Apr 14 20:37:08 mgraid-S000030311-1 crmd: [16635]: debug: fsa_dump_queue: queue[0(192)]: input I_ELECTION_DC raised by do_election_check()	(cause=C_FSA_INTERNAL)
Apr 14 20:37:08 mgraid-S000030311-1 crmd: [16635]: debug: s_crmd_fsa: Exiting the FSA: queue=1, fsa_actions=0x800000010000, stalled=true
Apr 14 20:37:08 mgraid-S000030311-1 crmd: [16635]: debug: fsa_dump_queue: queue[0(192)]: input I_ELECTION_DC raised by do_election_check()	(cause=C_FSA_INTERNAL)
Apr 14 20:37:08 mgraid-S000030311-1 crmd: [16635]: debug: s_crmd_fsa: Processing I_ELECTION_DC: [ state=S_ELECTION cause=C_FSA_INTERNAL origin=do_election_check ]
Apr 14 20:37:08 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_LOG   
Apr 14 20:37:08 mgraid-S000030311-1 crmd: [16635]: info: do_state_transition: State transition S_ELECTION -> S_INTEGRATION [ input=I_ELECTION_DC cause=C_FSA_INTERNAL origin=do_election_check ]
Apr 14 20:37:08 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_TE_START
Apr 14 20:37:08 mgraid-S000030311-1 crmd: [16635]: debug: do_te_control: The transitioner is already active
Apr 14 20:37:08 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_PE_START
Apr 14 20:37:08 mgraid-S000030311-1 crmd: [16635]: info: start_subsystem: Starting sub-system "pengine"
Apr 14 20:37:08 mgraid-S000030311-1 crmd: [16635]: WARN: start_subsystem: Client pengine already running as pid 16986
Apr 14 20:37:11 mgraid-S000030311-1 crmd: [16635]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/pengine
Apr 14 20:37:11 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
Apr 14 20:37:11 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_INTEGRATE_TIMER_START
Apr 14 20:37:11 mgraid-S000030311-1 crmd: [16635]: debug: crm_timer_start: Started Integration Timer (I_INTEGRATED:180000ms), src=227
Apr 14 20:37:11 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_FINALIZE_TIMER_STOP
Apr 14 20:37:11 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_DC_TAKEOVER
Apr 14 20:37:11 mgraid-S000030311-1 crmd: [16635]: info: do_dc_takeover: Taking over DC status for this partition
Apr 14 20:37:11 mgraid-S000030311-1 cib: [16631]: info: cib_process_readwrite: We are now in R/O mode
Apr 14 20:37:11 mgraid-S000030311-1 cib: [16631]: info: cib_process_request: Operation complete: op cib_slave_all for section 'all' (origin=local/crmd/211, version=0.79.16): ok (rc=0)
Apr 14 20:37:11 mgraid-S000030311-1 cib: [16631]: info: cib_process_readwrite: We are now in R/W mode
Apr 14 20:37:11 mgraid-S000030311-1 cib: [16631]: info: cib_process_request: Operation complete: op cib_master for section 'all' (origin=local/crmd/212, version=0.79.16): ok (rc=0)
Apr 14 20:37:11 mgraid-S000030311-1 cib: [16631]: info: cib_process_request: Operation complete: op cib_modify for section cib (origin=local/crmd/213, version=0.79.16): ok (rc=0)
Apr 14 20:37:11 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: Processing cib_query op for //cib/configuration/crm_config//nvpair[@name='dc-version'] (/cib/configuration/crm_config/cluster_property_set/nvpair[1])
Apr 14 20:37:11 mgraid-S000030311-1 crmd: [16635]: debug: log_data_element: find_nvpair_attr: Match <nvpair id="cib-bootstrap-options-dc-version" name="dc-version" value="1.0.9-89bd754939df5150de7cd76835f98fe90851b677" />
Apr 14 20:37:11 mgraid-S000030311-1 cib: [16631]: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/215, version=0.79.16): ok (rc=0)
Apr 14 20:37:11 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: Processing cib_query op for //cib/configuration/crm_config//nvpair[@name='cluster-infrastructure'] (/cib/configuration/crm_config/cluster_property_set/nvpair[2])
Apr 14 20:37:11 mgraid-S000030311-1 crmd: [16635]: debug: log_data_element: find_nvpair_attr: Match <nvpair id="cib-bootstrap-options-cluster-infrastructure" name="cluster-infrastructure" value="Heartbeat" />
Apr 14 20:37:11 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_TE_CANCEL
Apr 14 20:37:11 mgraid-S000030311-1 crmd: [16635]: debug: do_te_invoke: Cancelling the transition: active
Apr 14 20:37:11 mgraid-S000030311-1 crmd: [16635]: info: abort_transition_graph: do_te_invoke:191 - Triggered transition abort (complete=0) : Peer Cancelled
Apr 14 20:37:11 mgraid-S000030311-1 crmd: [16635]: debug: register_fsa_input_adv: Stalling the FSA pending further input: cause=C_FSA_INTERNAL
Apr 14 20:37:11 mgraid-S000030311-1 crmd: [16635]: debug: s_crmd_fsa: Exiting the FSA: queue=0, fsa_actions=0x800000010000, stalled=true
Apr 14 20:37:11 mgraid-S000030311-1 crmd: [16635]: debug: run_graph: Transition 3 (Complete=34, Pending=5, Fired=0, Skipped=34, Incomplete=45, Source=/var/lib/pengine/pe-input-6.bz2): In-progress
Apr 14 20:37:11 mgraid-S000030311-1 cib: [16631]: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/217, version=0.79.16): ok (rc=0)
Apr 14 20:37:11 mgraid-S000030311-1 crmd: [16635]: debug: config_query_callback: Call 218 : Parsing CIB options
Apr 14 20:37:11 mgraid-S000030311-1 crmd: [16635]: debug: cluster_option: Using default value '15min' for cluster option 'cluster-recheck-interval'
Apr 14 20:37:11 mgraid-S000030311-1 crmd: [16635]: debug: cluster_option: Using default value '2min' for cluster option 'election-timeout'
Apr 14 20:37:11 mgraid-S000030311-1 crmd: [16635]: debug: cluster_option: Using default value '20min' for cluster option 'shutdown-escalation'
Apr 14 20:37:11 mgraid-S000030311-1 crmd: [16635]: debug: cluster_option: Using default value '3min' for cluster option 'crmd-integration-timeout'
Apr 14 20:37:11 mgraid-S000030311-1 crmd: [16635]: debug: cluster_option: Using default value '30min' for cluster option 'crmd-finalization-timeout'
Apr 14 20:37:11 mgraid-S000030311-1 crmd: [16635]: debug: cluster_option: Using default value '2' for cluster option 'expected-quorum-votes'
Apr 14 20:37:11 mgraid-S000030311-1 crmd: [16635]: info: config_query_callback: Checking for expired actions every 900000ms
Apr 14 20:37:11 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_TE_CANCEL
Apr 14 20:37:11 mgraid-S000030311-1 crmd: [16635]: debug: do_te_invoke: Cancelling the transition: active
Apr 14 20:37:11 mgraid-S000030311-1 crmd: [16635]: info: abort_transition_graph: do_te_invoke:191 - Triggered transition abort (complete=0) : Peer Cancelled
Apr 14 20:37:11 mgraid-S000030311-1 crmd: [16635]: debug: register_fsa_input_adv: Stalling the FSA pending further input: cause=C_FSA_INTERNAL
Apr 14 20:37:11 mgraid-S000030311-1 crmd: [16635]: debug: s_crmd_fsa: Exiting the FSA: queue=0, fsa_actions=0x800000010000, stalled=true
Apr 14 20:37:11 mgraid-S000030311-1 crmd: [16635]: debug: run_graph: Transition 3 (Complete=34, Pending=5, Fired=0, Skipped=34, Incomplete=45, Source=/var/lib/pengine/pe-input-6.bz2): In-progress
Apr 14 20:37:13 mgraid-S000030311-1 lrmd: [16632]: debug: rsc:omserver:1:10: monitor
Apr 14 20:37:13 mgraid-S000030311-1 lrmd: [16632]: debug: rsc:icms:1:9: monitor
Apr 14 20:37:18 mgraid-S000030311-1 lrmd: [16632]: debug: rsc:omserver:1:10: monitor
Apr 14 20:37:18 mgraid-S000030311-1 lrmd: [16632]: debug: rsc:icms:1:9: monitor
Apr 14 20:37:23 mgraid-S000030311-1 lrmd: [16632]: debug: rsc:omserver:1:10: monitor
Apr 14 20:37:23 mgraid-S000030311-1 lrmd: [16632]: debug: rsc:icms:1:9: monitor
Apr 14 20:37:28 mgraid-S000030311-1 lrmd: [16632]: debug: rsc:omserver:1:10: monitor
Apr 14 20:37:28 mgraid-S000030311-1 lrmd: [16632]: debug: rsc:icms:1:9: monitor
Apr 14 20:37:33 mgraid-S000030311-1 lrmd: [16632]: debug: rsc:omserver:1:10: monitor
Apr 14 20:37:33 mgraid-S000030311-1 lrmd: [16632]: debug: rsc:icms:1:9: monitor
Apr 14 20:37:38 mgraid-S000030311-1 lrmd: [16632]: debug: rsc:omserver:1:10: monitor
Apr 14 20:37:38 mgraid-S000030311-1 lrmd: [16632]: debug: rsc:icms:1:9: monitor
Apr 14 20:37:43 mgraid-S000030311-1 lrmd: [16632]: debug: rsc:omserver:1:10: monitor
Apr 14 20:37:43 mgraid-S000030311-1 lrmd: [16632]: debug: rsc:icms:1:9: monitor
Apr 14 20:37:48 mgraid-S000030311-1 lrmd: [16632]: debug: rsc:omserver:1:10: monitor
Apr 14 20:37:48 mgraid-S000030311-1 lrmd: [16632]: debug: rsc:icms:1:9: monitor
Apr 14 20:37:53 mgraid-S000030311-1 lrmd: [16632]: debug: rsc:omserver:1:10: monitor
Apr 14 20:37:53 mgraid-S000030311-1 lrmd: [16632]: debug: rsc:icms:1:9: monitor
Apr 14 20:37:58 mgraid-S000030311-1 lrmd: [16632]: debug: rsc:omserver:1:10: monitor
Apr 14 20:37:58 mgraid-S000030311-1 lrmd: [16632]: debug: rsc:icms:1:9: monitor
Apr 14 20:38:03 mgraid-S000030311-1 lrmd: [16632]: debug: rsc:icms:1:9: monitor
Apr 14 20:38:03 mgraid-S000030311-1 lrmd: [16632]: debug: rsc:omserver:1:10: monitor
Apr 14 20:38:08 mgraid-S000030311-1 lrmd: [16632]: debug: rsc:icms:1:9: monitor
Apr 14 20:38:08 mgraid-S000030311-1 lrmd: [16632]: debug: rsc:omserver:1:10: monitor
Apr 14 20:38:08 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_COUNT
Apr 14 20:38:08 mgraid-S000030311-1 crmd: [16635]: debug: do_election_count_vote: Created voted hash
Apr 14 20:38:08 mgraid-S000030311-1 crmd: [16635]: info: do_election_count_vote: Election 4 (owner: f4e5e15c-d06b-4e37-89b9-4621af05128f) pass: vote from mgraid-s000030311-0 (Age)
Apr 14 20:38:08 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_CHECK
Apr 14 20:38:08 mgraid-S000030311-1 crmd: [16635]: debug: do_election_check: Ignore election check: we not in an election
Apr 14 20:38:08 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_TE_CANCEL
Apr 14 20:38:08 mgraid-S000030311-1 crmd: [16635]: debug: do_te_invoke: Cancelling the transition: active
Apr 14 20:38:08 mgraid-S000030311-1 crmd: [16635]: info: abort_transition_graph: do_te_invoke:191 - Triggered transition abort (complete=0) : Peer Cancelled
Apr 14 20:38:08 mgraid-S000030311-1 crmd: [16635]: debug: register_fsa_input_adv: Stalling the FSA pending further input: cause=C_FSA_INTERNAL
Apr 14 20:38:08 mgraid-S000030311-1 crmd: [16635]: WARN: register_fsa_input_adv: do_te_invoke stalled the FSA with pending inputs
Apr 14 20:38:08 mgraid-S000030311-1 crmd: [16635]: debug: fsa_dump_queue: queue[0(197)]: input I_ELECTION raised by do_election_count_vote()	(cause=C_FSA_INTERNAL)
Apr 14 20:38:08 mgraid-S000030311-1 crmd: [16635]: debug: s_crmd_fsa: Exiting the FSA: queue=1, fsa_actions=0x800000010000, stalled=true
Apr 14 20:38:08 mgraid-S000030311-1 crmd: [16635]: debug: fsa_dump_queue: queue[0(197)]: input I_ELECTION raised by do_election_count_vote()	(cause=C_FSA_INTERNAL)
Apr 14 20:38:08 mgraid-S000030311-1 crmd: [16635]: debug: s_crmd_fsa: Processing I_ELECTION: [ state=S_INTEGRATION cause=C_FSA_INTERNAL origin=do_election_count_vote ]
Apr 14 20:38:08 mgraid-S000030311-1 crmd: [16635]: info: do_state_transition: State transition S_INTEGRATION -> S_ELECTION [ input=I_ELECTION cause=C_FSA_INTERNAL origin=do_election_count_vote ]
Apr 14 20:38:08 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
Apr 14 20:38:08 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_INTEGRATE_TIMER_STOP
Apr 14 20:38:08 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_FINALIZE_TIMER_STOP
Apr 14 20:38:08 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_VOTE
Apr 14 20:38:08 mgraid-S000030311-1 crmd: [16635]: debug: do_election_vote: Started election 40
Apr 14 20:38:08 mgraid-S000030311-1 crmd: [16635]: debug: crm_timer_start: Started Election Timeout (I_ELECTION_DC:120000ms), src=230
Apr 14 20:38:08 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_TE_CANCEL
Apr 14 20:38:08 mgraid-S000030311-1 crmd: [16635]: debug: do_te_invoke: Cancelling the transition: active
Apr 14 20:38:08 mgraid-S000030311-1 crmd: [16635]: info: abort_transition_graph: do_te_invoke:191 - Triggered transition abort (complete=0) : Peer Cancelled
Apr 14 20:38:08 mgraid-S000030311-1 crmd: [16635]: debug: register_fsa_input_adv: Stalling the FSA pending further input: cause=C_FSA_INTERNAL
Apr 14 20:38:08 mgraid-S000030311-1 crmd: [16635]: debug: s_crmd_fsa: Exiting the FSA: queue=0, fsa_actions=0x800000010000, stalled=true
Apr 14 20:38:08 mgraid-S000030311-1 crmd: [16635]: debug: run_graph: Transition 3 (Complete=34, Pending=5, Fired=0, Skipped=34, Incomplete=45, Source=/var/lib/pengine/pe-input-6.bz2): In-progress
Apr 14 20:38:08 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_COUNT
Apr 14 20:38:08 mgraid-S000030311-1 crmd: [16635]: debug: do_election_count_vote: Created voted hash
Apr 14 20:38:08 mgraid-S000030311-1 crmd: [16635]: debug: do_election_count_vote: Election 40 (current: 40, owner: 856c1f72-7cd1-4906-8183-8be87eef96f2): Processed vote from mgraid-s000030311-1 (Recorded)
Apr 14 20:38:08 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_CHECK
Apr 14 20:38:08 mgraid-S000030311-1 crmd: [16635]: debug: do_election_check: Still waiting on 1 non-votes (2 total)
Apr 14 20:38:08 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_TE_CANCEL
Apr 14 20:38:08 mgraid-S000030311-1 crmd: [16635]: debug: do_te_invoke: Cancelling the transition: active
Apr 14 20:38:08 mgraid-S000030311-1 crmd: [16635]: info: abort_transition_graph: do_te_invoke:191 - Triggered transition abort (complete=0) : Peer Cancelled
Apr 14 20:38:08 mgraid-S000030311-1 crmd: [16635]: debug: register_fsa_input_adv: Stalling the FSA pending further input: cause=C_FSA_INTERNAL
Apr 14 20:38:08 mgraid-S000030311-1 crmd: [16635]: debug: s_crmd_fsa: Exiting the FSA: queue=0, fsa_actions=0x800000010000, stalled=true
Apr 14 20:38:08 mgraid-S000030311-1 crmd: [16635]: debug: run_graph: Transition 3 (Complete=34, Pending=5, Fired=0, Skipped=34, Incomplete=45, Source=/var/lib/pengine/pe-input-6.bz2): In-progress
Apr 14 20:38:09 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_COUNT
Apr 14 20:38:09 mgraid-S000030311-1 crmd: [16635]: debug: do_election_count_vote: Election 40 (current: 40, owner: 856c1f72-7cd1-4906-8183-8be87eef96f2): Processed no-vote from mgraid-s000030311-0 (Recorded)
Apr 14 20:38:09 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_CHECK
Apr 14 20:38:09 mgraid-S000030311-1 crmd: [16635]: debug: do_election_check: Destroying voted hash
Apr 14 20:38:09 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_TE_CANCEL
Apr 14 20:38:09 mgraid-S000030311-1 crmd: [16635]: debug: do_te_invoke: Cancelling the transition: active
Apr 14 20:38:09 mgraid-S000030311-1 crmd: [16635]: info: abort_transition_graph: do_te_invoke:191 - Triggered transition abort (complete=0) : Peer Cancelled
Apr 14 20:38:09 mgraid-S000030311-1 crmd: [16635]: debug: register_fsa_input_adv: Stalling the FSA pending further input: cause=C_FSA_INTERNAL
Apr 14 20:38:09 mgraid-S000030311-1 crmd: [16635]: WARN: register_fsa_input_adv: do_te_invoke stalled the FSA with pending inputs
Apr 14 20:38:09 mgraid-S000030311-1 crmd: [16635]: debug: fsa_dump_queue: queue[0(203)]: input I_ELECTION_DC raised by do_election_check()	(cause=C_FSA_INTERNAL)
Apr 14 20:38:09 mgraid-S000030311-1 crmd: [16635]: debug: s_crmd_fsa: Exiting the FSA: queue=1, fsa_actions=0x800000010000, stalled=true
Apr 14 20:38:09 mgraid-S000030311-1 crmd: [16635]: debug: fsa_dump_queue: queue[0(203)]: input I_ELECTION_DC raised by do_election_check()	(cause=C_FSA_INTERNAL)
Apr 14 20:38:09 mgraid-S000030311-1 crmd: [16635]: debug: s_crmd_fsa: Processing I_ELECTION_DC: [ state=S_ELECTION cause=C_FSA_INTERNAL origin=do_election_check ]
Apr 14 20:38:09 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_LOG   
Apr 14 20:38:09 mgraid-S000030311-1 crmd: [16635]: info: do_state_transition: State transition S_ELECTION -> S_INTEGRATION [ input=I_ELECTION_DC cause=C_FSA_INTERNAL origin=do_election_check ]
Apr 14 20:38:09 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_TE_START
Apr 14 20:38:09 mgraid-S000030311-1 crmd: [16635]: debug: do_te_control: The transitioner is already active
Apr 14 20:38:09 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_PE_START
Apr 14 20:38:09 mgraid-S000030311-1 crmd: [16635]: info: start_subsystem: Starting sub-system "pengine"
Apr 14 20:38:09 mgraid-S000030311-1 crmd: [16635]: WARN: start_subsystem: Client pengine already running as pid 16986
Apr 14 20:38:12 mgraid-S000030311-1 crmd: [16635]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/pengine
Apr 14 20:38:12 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
Apr 14 20:38:12 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_INTEGRATE_TIMER_START
Apr 14 20:38:12 mgraid-S000030311-1 crmd: [16635]: debug: crm_timer_start: Started Integration Timer (I_INTEGRATED:180000ms), src=232
Apr 14 20:38:12 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_FINALIZE_TIMER_STOP
Apr 14 20:38:12 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_DC_TAKEOVER
Apr 14 20:38:12 mgraid-S000030311-1 crmd: [16635]: info: do_dc_takeover: Taking over DC status for this partition
Apr 14 20:38:12 mgraid-S000030311-1 cib: [16631]: info: cib_process_readwrite: We are now in R/O mode
Apr 14 20:38:12 mgraid-S000030311-1 cib: [16631]: info: cib_process_request: Operation complete: op cib_slave_all for section 'all' (origin=local/crmd/219, version=0.79.16): ok (rc=0)
Apr 14 20:38:12 mgraid-S000030311-1 cib: [16631]: info: cib_process_readwrite: We are now in R/W mode
Apr 14 20:38:12 mgraid-S000030311-1 cib: [16631]: info: cib_process_request: Operation complete: op cib_master for section 'all' (origin=local/crmd/220, version=0.79.16): ok (rc=0)
Apr 14 20:38:12 mgraid-S000030311-1 cib: [16631]: info: cib_process_request: Operation complete: op cib_modify for section cib (origin=local/crmd/221, version=0.79.16): ok (rc=0)
Apr 14 20:38:12 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: Processing cib_query op for //cib/configuration/crm_config//nvpair[@name='dc-version'] (/cib/configuration/crm_config/cluster_property_set/nvpair[1])
Apr 14 20:38:12 mgraid-S000030311-1 crmd: [16635]: debug: log_data_element: find_nvpair_attr: Match <nvpair id="cib-bootstrap-options-dc-version" name="dc-version" value="1.0.9-89bd754939df5150de7cd76835f98fe90851b677" />
Apr 14 20:38:12 mgraid-S000030311-1 cib: [16631]: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/223, version=0.79.16): ok (rc=0)
Apr 14 20:38:12 mgraid-S000030311-1 cib: [16631]: debug: cib_process_xpath: Processing cib_query op for //cib/configuration/crm_config//nvpair[@name='cluster-infrastructure'] (/cib/configuration/crm_config/cluster_property_set/nvpair[2])
Apr 14 20:38:12 mgraid-S000030311-1 crmd: [16635]: debug: log_data_element: find_nvpair_attr: Match <nvpair id="cib-bootstrap-options-cluster-infrastructure" name="cluster-infrastructure" value="Heartbeat" />
Apr 14 20:38:12 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_TE_CANCEL
Apr 14 20:38:12 mgraid-S000030311-1 crmd: [16635]: debug: do_te_invoke: Cancelling the transition: active
Apr 14 20:38:12 mgraid-S000030311-1 crmd: [16635]: info: abort_transition_graph: do_te_invoke:191 - Triggered transition abort (complete=0) : Peer Cancelled
Apr 14 20:38:12 mgraid-S000030311-1 crmd: [16635]: debug: register_fsa_input_adv: Stalling the FSA pending further input: cause=C_FSA_INTERNAL
Apr 14 20:38:12 mgraid-S000030311-1 crmd: [16635]: debug: s_crmd_fsa: Exiting the FSA: queue=0, fsa_actions=0x800000010000, stalled=true
Apr 14 20:38:12 mgraid-S000030311-1 crmd: [16635]: debug: run_graph: Transition 3 (Complete=34, Pending=5, Fired=0, Skipped=34, Incomplete=45, Source=/var/lib/pengine/pe-input-6.bz2): In-progress
Apr 14 20:38:12 mgraid-S000030311-1 cib: [16631]: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/225, version=0.79.16): ok (rc=0)
Apr 14 20:38:12 mgraid-S000030311-1 crmd: [16635]: debug: config_query_callback: Call 226 : Parsing CIB options
Apr 14 20:38:12 mgraid-S000030311-1 crmd: [16635]: debug: cluster_option: Using default value '15min' for cluster option 'cluster-recheck-interval'
Apr 14 20:38:12 mgraid-S000030311-1 crmd: [16635]: debug: cluster_option: Using default value '2min' for cluster option 'election-timeout'
Apr 14 20:38:12 mgraid-S000030311-1 crmd: [16635]: debug: cluster_option: Using default value '20min' for cluster option 'shutdown-escalation'
Apr 14 20:38:12 mgraid-S000030311-1 crmd: [16635]: debug: cluster_option: Using default value '3min' for cluster option 'crmd-integration-timeout'
Apr 14 20:38:12 mgraid-S000030311-1 crmd: [16635]: debug: cluster_option: Using default value '30min' for cluster option 'crmd-finalization-timeout'
Apr 14 20:38:12 mgraid-S000030311-1 crmd: [16635]: debug: cluster_option: Using default value '2' for cluster option 'expected-quorum-votes'
Apr 14 20:38:12 mgraid-S000030311-1 crmd: [16635]: info: config_query_callback: Checking for expired actions every 900000ms
Apr 14 20:38:12 mgraid-S000030311-1 crmd: [16635]: debug: do_fsa_action: actions:trace: 	// A_TE_CANCEL
Apr 14 20:38:12 mgraid-S000030311-1 crmd: [16635]: debug: do_te_invoke: Cancelling the transition: active
Apr 14 20:38:12 mgraid-S000030311-1 crmd: [16635]: info: abort_transition_graph: do_te_invoke:191 - Triggered transition abort (complete=0) : Peer Cancelled
Apr 14 20:38:12 mgraid-S000030311-1 crmd: [16635]: debug: register_fsa_input_adv: Stalling the FSA pending further input: cause=C_FSA_INTERNAL
Apr 14 20:38:12 mgraid-S000030311-1 crmd: [16635]: debug: s_crmd_fsa: Exiting the FSA: queue=0, fsa_actions=0x800000010000, stalled=true
Apr 14 20:38:12 mgraid-S000030311-1 crmd: [16635]: debug: run_graph: Transition 3 (Complete=34, Pending=5, Fired=0, Skipped=34, Incomplete=45, Source=/var/lib/pengine/pe-input-6.bz2): In-progress
Apr 14 20:38:13 mgraid-S000030311-1 lrmd: [16632]: debug: rsc:icms:1:9: monitor
Apr 14 20:38:13 mgraid-S000030311-1 lrmd: [16632]: debug: rsc:omserver:1:10: monitor
Apr 14 20:38:18 mgraid-S000030311-1 lrmd: [16632]: debug: rsc:icms:1:9: monitor
Apr 14 20:38:18 mgraid-S000030311-1 lrmd: [16632]: debug: rsc:omserver:1:10: monitor
Apr 14 20:38:23 mgraid-S000030311-1 lrmd: [16632]: debug: rsc:icms:1:9: monitor
Apr 14 20:38:23 mgraid-S000030311-1 lrmd: [16632]: debug: rsc:omserver:1:10: monitor
