[ClusterLabs] How to avoid stopping ordered resources on cleanup?

CART Andreas andreas.cart at sonorys.at
Fri Sep 15 11:23:54 EDT 2017


Hello

I think there is a more general misunderstanding on my side.
I reproduced the "problem" with a very simple test cluster containing just two Dummy resources:


Pacemaker Nodes:
deneb682 deneb683

Resources:
Resource: Res1 (class=ocf provider=pacemaker type=Dummy)
  Operations: start interval=0s timeout=20 (Res1-start-interval-0s)
              stop interval=0s timeout=20 (Res1-stop-interval-0s)
              monitor interval=10 timeout=20 (Res1-monitor-interval-10)
Resource: Res2 (class=ocf provider=pacemaker type=Dummy)
  Operations: start interval=0s timeout=20 (Res2-start-interval-0s)
              stop interval=0s timeout=20 (Res2-stop-interval-0s)
              monitor interval=10 timeout=20 (Res2-monitor-interval-10)

Ordering Constraints:
  start Res1 then start Res2 (kind:Mandatory) (id:order-Res1-Res2-mandatory)

Cluster Properties:
cluster-infrastructure: cman
default-resource-stickiness: 100
no-quorum-policy: ignore
symmetric-cluster: true

When I call "pcs resource cleanup Res1", this results in an interruption of service on the Res2 side (i.e. Res2 is stopped and later restarted).
My (unconfirmed) assumption was that Pacemaker would first determine the current state of the resource(s) by calling monitor and only then decide whether any actions need to be performed.
But from reading the log files, my interpretation is that Res1's state is temporarily removed from the CIB and re-inserted, and this results in Res2 being stopped until Res1 is confirmed to be in state "started".
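
For reference, the test cluster above should be reproducible with commands roughly along these lines (an untested sketch; the exact option syntax for pcs 0.9.x may differ slightly):

# pcs resource create Res1 ocf:pacemaker:Dummy op monitor interval=10 timeout=20
# pcs resource create Res2 ocf:pacemaker:Dummy op monitor interval=10 timeout=20
# pcs constraint order start Res1 then start Res2 kind=Mandatory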

As I interpret the documentation, it should be possible to avoid this behaviour by configuring the order constraint with kind=Optional, e.g. along the lines sketched below.
But I am not sure whether that would cause other undesired side effects (e.g. on the reverse ordering when stopping).
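
Something like the following should replace the existing constraint (untested; the constraint id is the one from the config above):

# pcs constraint remove order-Res1-Res2-mandatory
# pcs constraint order start Res1 then start Res2 kind=Optional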

Another workaround seems to be to set the dependent resource to unmanaged, perform the cleanup and then set it back to managed, for example as sketched below.
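
Untested sketch of that sequence:

# pcs resource unmanage Res2
# pcs resource cleanup Res1
# pcs resource manage Res2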

And I wonder whether "pcs resource failcount reset" would do the trick WITHOUT any actions being performed when no change of state is necessary.
But I seem to remember that we already tried this now and then, and sometimes such a failed resource was not started again after the failcount reset. (I am not sure, though, and have not yet had time to reproduce it.)
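
I.e. something like the following (untested; I believe an optional node name can also be given):

# pcs resource failcount reset Res1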

Is there any deeper insight which might help with a sound understanding of this issue?

Kind regards
Andreas Cart
From: Klaus Wenninger [mailto:kwenning at redhat.com]
Sent: Wednesday, 13 September 2017 13:33
To: Cluster Labs - All topics related to open-source clustering welcomed <users at clusterlabs.org>; CART Andreas <andreas.cart at sonorys.at>
Subject: Re: [ClusterLabs] How to avoid stopping ordered resources on cleanup?

On 09/13/2017 10:26 AM, CART Andreas wrote:
Hello

We have a basic 2 node active/passive cluster with Pacemaker (1.1.16 , pcs: 0.9.148) / CMAN (3.0.12.1) / Corosync (1.4.7) on RHEL 6.8.

While testing the cluster we noticed that dependent resources are stopped when cleanup is called for a resource lower down in the ordering chain.
But for the production system we have to avoid this at all costs.

Actually we have a master/slave resource and, dependent on that and colocated with the master, a clustered IP address.
Here is the relevant part of the cluster config:
Cluster Name: bam-cluster
Corosync Nodes:
bam1-backend bam2-backend
Pacemaker Nodes:
bam1-backend bam2-backend

Resources:
Master: mvno-100-master
  Meta Attrs: master-node-max=1 clone-max=2 notify=true master-max=1 clone-node-max=1
  Resource: mvno-100 (class=ocf provider=sonorys type=sba)
   Operations: start interval=0s timeout=120s (mvno-100-start-interval-0s)
               stop interval=0s timeout=120s (mvno-100-stop-interval-0s)
               promote interval=0s timeout=5s (mvno-100-promote-interval-0s)
               demote interval=0s timeout=5s (mvno-100-demote-interval-0s)
               monitor interval=9 role=Master (mvno-100-monitor-interval-9)
               monitor interval=10 role=Slave (mvno-100-monitor-interval-10)
Resource: IPaddrAdminToolMvno100 (class=ocf provider=heartbeat type=IPaddr2)
  Attributes: ip=xxx.xxx.xxx.xxx cidr_netmask=yy
  Operations: start interval=0s timeout=20s (IPaddrAdminToolMvno100-start-interval-0s)
              stop interval=0s timeout=20s (IPaddrAdminToolMvno100-stop-interval-0s)
              monitor interval=10s timeout=20s (IPaddrAdminToolMvno100-monitor-interval-10s)
Ordering Constraints:
  start IPaddrAdminToolMvno100 then promote mvno-100-master (kind:Mandatory) (id:order-IPaddrAdminToolMvno100-mvno-100-master-mandatory)
Colocation Constraints:
  IPaddrAdminToolMvno100 with mvno-100-master (score:INFINITY) (rsc-role:Started) (with-rsc-role:Master) (id:colocation-IPaddrAdminToolMvno100-mvno-100-master-INFINITY)
Cluster Properties:
cluster-infrastructure: cman
default-resource-stickiness: 100
no-quorum-policy: ignore
symmetric-cluster: true

Have you tried the other way round?
like: promote mvno-100 then start IPaddrAdminToolMvno100
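
With pcs that would be roughly the following (an untested sketch, using the constraint id and the master/slave resource name from your config):

# pcs constraint remove order-IPaddrAdminToolMvno100-mvno-100-master-mandatory
# pcs constraint order promote mvno-100-master then start IPaddrAdminToolMvno100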

Regards,
Klaus



For example we had the following failure notice:
Failed Actions:
* mvno-100_monitor_10000 on bam1-backend 'not running' (7): call=40, status=complete, exitreason='none',
    last-rc-change='Tue Sep 12 09:55:49 2017', queued=0ms, exec=0ms

And cleared it by calling:
# pcs resource cleanup mvno-100
Wed Sep 13 06:43:53 UTC 2017
Cleaning up mvno-100:0 on bam1-backend, removing fail-count-mvno-100
Cleaning up mvno-100:0 on bam2-backend, removing fail-count-mvno-100
Waiting for 2 replies from the CRMd.. OK

This resulted in IPaddrAdminToolMvno100 being stopped and started again, i.e. a (short) interruption of service, which we would like to avoid.

Here is the corresponding logfile for the cluster:
Sep 13 06:43:53 [3829] bam1-omc      attrd:   notice: attrd_cs_dispatch:        Update relayed from bam2-backend
Sep 13 06:43:53 [3829] bam1-omc      attrd:   notice: attrd_trigger_update:     Sending flush op to all hosts for: fail-count-mvno-100 (<null>)
Sep 13 06:43:53 [3826] bam1-omc        cib:     info: cib_perform_op:   Diff: --- 0.167.0 2
Sep 13 06:43:53 [3826] bam1-omc        cib:     info: cib_perform_op:   Diff: +++ 0.167.1 590cae41f6fad2b00a8cf067302756e4
Sep 13 06:43:53 [3826] bam1-omc        cib:     info: cib_perform_op:   -- /cib/status/node_state[@id='bam1-backend']/lrm[@id='bam1-backend']/lrm_resources/lrm_resource[@id='mvno-100']
Sep 13 06:43:53 [3826] bam1-omc        cib:     info: cib_perform_op:   +  /cib:  @num_updates=1
Sep 13 06:43:53 [3826] bam1-omc        cib:     info: cib_process_request:      Completed cib_delete operation for section //node_state[@uname='bam1-backend']//lrm_resource[@id='mvno-100']: OK (rc=0, origin=local/crmd/1485, version=0.167.0)
Sep 13 06:43:53 [3826] bam1-omc        cib:     info: cib_perform_op:   Diff: --- 0.167.0 2
Sep 13 06:43:53 [3826] bam1-omc        cib:     info: cib_perform_op:   Diff: +++ 0.167.1 (null)
Sep 13 06:43:53 [3826] bam1-omc        cib:     info: cib_perform_op:   -- /cib/status/node_state[@id='bam2-backend']/lrm[@id='bam2-backend']/lrm_resources/lrm_resource[@id='mvno-100']
Sep 13 06:43:53 [3826] bam1-omc        cib:     info: cib_perform_op:   +  /cib:  @num_updates=1
Sep 13 06:43:53 [3826] bam1-omc        cib:     info: cib_process_request:      Completed cib_delete operation for section //node_state[@uname='bam2-backend']//lrm_resource[@id='mvno-100']: OK (rc=0, origin=bam2-backend/crmd/39, version=0.167.0)
Sep 13 06:43:53 [3831] bam1-omc       crmd:     info: delete_resource:  Removing resource mvno-100 for 1c0903b8-1dcc-432a-b6a8-e409ce917dda (root) on bam2-backend
Sep 13 06:43:53 [3828] bam1-omc       lrmd:     info: cancel_recurring_action:  Cancelling ocf operation mvno-100_monitor_9000
Sep 13 06:43:53 [3829] bam1-omc      attrd:   notice: attrd_perform_update:     Sent delete 57: node=bam1-backend, attr=fail-count-mvno-100, id=<n/a>, set=(null), section=status
Sep 13 06:43:53 [3831] bam1-omc       crmd:     info: lrm_remove_deleted_op:    Removing op mvno-100_monitor_9000:95 for deleted resource mvno-100
Sep 13 06:43:53 [3831] bam1-omc       crmd:     info: notify_deleted:   Notifying 1c0903b8-1dcc-432a-b6a8-e409ce917dda on bam2-backend that mvno-100 was deleted
Sep 13 06:43:53 [3826] bam1-omc        cib:     info: cib_perform_op:   Diff: --- 0.167.0 2
Sep 13 06:43:53 [3826] bam1-omc        cib:     info: cib_perform_op:   Diff: +++ 0.167.1 (null)
Sep 13 06:43:53 [3826] bam1-omc        cib:     info: cib_perform_op:   -- /cib/status/node_state[@id='bam1-backend']/transient_attributes[@id='bam1-backend']/instance_attributes[@id='status-bam1-backend']/nvpair[@id='status-bam1-backend-fail-count-mvno-100']
Sep 13 06:43:53 [3826] bam1-omc        cib:     info: cib_perform_op:   +  /cib:  @num_updates=1
Sep 13 06:43:53 [3826] bam1-omc        cib:     info: cib_process_request:      Completed cib_delete operation for section status: OK (rc=0, origin=local/attrd/57, version=0.167.1)
Sep 13 06:43:53 [3826] bam1-omc        cib:     info: cib_perform_op:   Diff: --- 0.167.1 2
Sep 13 06:43:53 [3826] bam1-omc        cib:     info: cib_perform_op:   Diff: +++ 0.167.2 (null)
Sep 13 06:43:53 [3826] bam1-omc        cib:     info: cib_perform_op:   -- /cib/status/node_state[@id='bam1-backend']/lrm[@id='bam1-backend']/lrm_resources/lrm_resource[@id='mvno-100']
Sep 13 06:43:53 [3826] bam1-omc        cib:     info: cib_perform_op:   +  /cib:  @num_updates=2
Sep 13 06:43:53 [3826] bam1-omc        cib:     info: cib_process_request:      Completed cib_delete operation for section //node_state[@uname='bam1-backend']//lrm_resource[@id='mvno-100']: OK (rc=0, origin=local/crmd/1486, version=0.167.2)
Sep 13 06:43:53 [3831] bam1-omc       crmd:     info: abort_transition_graph:   Transition aborted by deletion of nvpair[@id='status-bam1-backend-fail-count-mvno-100']: Transient attribute change | cib=0.167.1 source=abort_unless_down:345 path=/cib/status/node_state[@id='bam1-backend']/transient_attributes[@id='bam1-backend']/instance_attributes[@id='status-bam1-backend']/nvpair[@id='status-bam1-backend-fail-count-mvno-100'] complete=true
Sep 13 06:43:53 [3831] bam1-omc       crmd:     info: abort_transition_graph:   Transition aborted by deletion of lrm_resource[@id='mvno-100']: Resource state removal | cib=0.167.2 source=abort_unless_down:345 path=/cib/status/node_state[@id='bam1-backend']/lrm[@id='bam1-backend']/lrm_resources/lrm_resource[@id='mvno-100'] complete=true
Sep 13 06:43:53 [3828] bam1-omc       lrmd:     info: process_lrmd_get_rsc_info:        Resource 'mvno-100' not found (3 active resources)
Sep 13 06:43:53 [3831] bam1-omc       crmd:     info: process_lrm_event:        Result of monitor operation for mvno-100 on bam1-backend: Cancelled | call=95 key=mvno-100_monitor_9000 confirmed=true
Sep 13 06:43:53 [3831] bam1-omc       crmd:     info: update_history_cache:     Resource mvno-100 no longer exists, not updating cache
Sep 13 06:43:53 [3831] bam1-omc       crmd:   notice: do_state_transition:      State transition S_IDLE -> S_POLICY_ENGINE | input=I_PE_CALC cause=C_FSA_INTERNAL origin=abort_transition_graph
Sep 13 06:43:53 [3826] bam1-omc        cib:     info: cib_perform_op:   Diff: --- 0.167.2 2
Sep 13 06:43:53 [3826] bam1-omc        cib:     info: cib_perform_op:   Diff: +++ 0.168.0 (null)
Sep 13 06:43:53 [3826] bam1-omc        cib:     info: cib_perform_op:   +  /cib:  @epoch=168, @num_updates=0
Sep 13 06:43:53 [3826] bam1-omc        cib:     info: cib_perform_op:   +  /cib/configuration/crm_config/cluster_property_set[@id='cib-bootstrap-options']/nvpair[@id='cib-bootstrap-options-last-lrm-refresh']:  @value=1505285033
Sep 13 06:43:53 [3826] bam1-omc        cib:     info: cib_process_request:      Completed cib_modify operation for section crm_config: OK (rc=0, origin=local/crmd/1488, version=0.168.0)
Sep 13 06:43:53 [3831] bam1-omc       crmd:     info: abort_transition_graph:   Transition aborted by cib-bootstrap-options-last-lrm-refresh doing modify last-lrm-refresh=1505285033: Configuration change | cib=0.168.0 source=te_update_diff:444 path=/cib/configuration/crm_config/cluster_property_set[@id='cib-bootstrap-options']/nvpair[@id='cib-bootstrap-options-last-lrm-refresh'] complete=true
Sep 13 06:43:53 [3826] bam1-omc        cib:     info: cib_file_backup:  Archived previous version as /var/lib/pacemaker/cib/cib-65.raw
Sep 13 06:43:53 [3826] bam1-omc        cib:     info: cib_perform_op:   Diff: --- 0.168.0 2
Sep 13 06:43:53 [3826] bam1-omc        cib:     info: cib_perform_op:   Diff: +++ 0.168.1 (null)
Sep 13 06:43:53 [3826] bam1-omc        cib:     info: cib_perform_op:   -- /cib/status/node_state[@id='bam2-backend']/lrm[@id='bam2-backend']/lrm_resources/lrm_resource[@id='mvno-100']
Sep 13 06:43:53 [3826] bam1-omc        cib:     info: cib_perform_op:   +  /cib:  @num_updates=1
Sep 13 06:43:53 [3826] bam1-omc        cib:     info: cib_process_request:      Completed cib_delete operation for section //node_state[@uname='bam2-backend']//lrm_resource[@id='mvno-100']: OK (rc=0, origin=bam2-backend/crmd/40, version=0.168.1)
Sep 13 06:43:53 [3826] bam1-omc        cib:     info: cib_process_request:      Completed cib_modify operation for section crm_config: OK (rc=0, origin=bam2-backend/crmd/42, version=0.168.1)
Sep 13 06:43:53 [3826] bam1-omc        cib:     info: cib_file_write_with_digest:       Wrote version 0.168.0 of the CIB to disk (digest: 7d2e14eee5a440883e9659536e70260f)
Sep 13 06:43:53 [3826] bam1-omc        cib:     info: cib_file_write_with_digest:       Reading cluster configuration file /var/lib/pacemaker/cib/cib.BuFPbI (digest: /var/lib/pacemaker/cib/cib.jHzYAc)
Sep 13 06:43:54 [3831] bam1-omc       crmd:     info: parse_notifications:      No optional alerts section in cib
Sep 13 06:43:54 [3831] bam1-omc       crmd:     info: abort_transition_graph:   Transition aborted by deletion of lrm_resource[@id='mvno-100']: Resource state removal | cib=0.168.1 source=abort_unless_down:345 path=/cib/status/node_state[@id='bam2-backend']/lrm[@id='bam2-backend']/lrm_resources/lrm_resource[@id='mvno-100'] complete=true
Sep 13 06:43:54 [3830] bam1-omc    pengine:   notice: unpack_config:    On loss of CCM Quorum: Ignore
Sep 13 06:43:54 [3830] bam1-omc    pengine:     info: determine_online_status_fencing:  Node bam2-backend is active
Sep 13 06:43:54 [3830] bam1-omc    pengine:     info: determine_online_status:  Node bam2-backend is online
Sep 13 06:43:54 [3830] bam1-omc    pengine:     info: determine_online_status_fencing:  Node bam1-backend is active
Sep 13 06:43:54 [3830] bam1-omc    pengine:     info: determine_online_status:  Node bam1-backend is online
Sep 13 06:43:54 [3830] bam1-omc    pengine:     info: determine_op_status:      Operation monitor found resource IPaddrAdminToolMvno100 active on bam1-backend
Sep 13 06:43:54 [3830] bam1-omc    pengine:     info: clone_print:       Master/Slave Set: mvno-100-master [mvno-100]
Sep 13 06:43:54 [3830] bam1-omc    pengine:     info: short_print:           Stopped: [ bam1-backend bam2-backend ]
Sep 13 06:43:54 [3830] bam1-omc    pengine:     info: native_print:     IPaddrAdminToolMvno100  (ocf::heartbeat:IPaddr2):       Started bam1-backend
Sep 13 06:43:54 [3830] bam1-omc    pengine:     info: native_print:     IPMI_fence_bam1 (stonith:fence_ipmilan):        Started bam2-backend
Sep 13 06:43:54 [3830] bam1-omc    pengine:     info: native_print:     IPMI_fence_bam2 (stonith:fence_ipmilan):        Started bam1-backend
Sep 13 06:43:54 [3830] bam1-omc    pengine:     info: master_color:     mvno-100-master: Promoted 0 instances of a possible 1 to master
Sep 13 06:43:54 [3830] bam1-omc    pengine:     info: native_color:     Resource IPaddrAdminToolMvno100 cannot run anywhere
Sep 13 06:43:54 [3830] bam1-omc    pengine:     info: RecurringOp:       Start recurring monitor (10s) for mvno-100:0 on bam1-backend
Sep 13 06:43:54 [3830] bam1-omc    pengine:     info: RecurringOp:       Start recurring monitor (10s) for mvno-100:1 on bam2-backend
Sep 13 06:43:54 [3830] bam1-omc    pengine:     info: RecurringOp:       Start recurring monitor (10s) for mvno-100:0 on bam1-backend
Sep 13 06:43:54 [3830] bam1-omc    pengine:     info: RecurringOp:       Start recurring monitor (10s) for mvno-100:1 on bam2-backend
Sep 13 06:43:54 [3830] bam1-omc    pengine:   notice: LogActions:       Start   mvno-100:0      (bam1-backend)
Sep 13 06:43:54 [3830] bam1-omc    pengine:   notice: LogActions:       Start   mvno-100:1      (bam2-backend)
Sep 13 06:43:54 [3830] bam1-omc    pengine:   notice: LogActions:       Stop    IPaddrAdminToolMvno100  (bam1-backend)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
I suspect the cluster "forgets" all status information about the cleaned-up resource and therefore assumes it is in the stopped state and has to be started.
And due to the order constraint the dependent IP address has to be stopped as long as the cluster does not see the first resource in state "started".
Shouldn't there be a monitor operation to determine the actual state before any actions are performed based on unconfirmed assumptions about that state?
Sep 13 06:43:54 [3830] bam1-omc    pengine:     info: LogActions:       Leave   IPMI_fence_bam1 (Started bam2-backend)
Sep 13 06:43:54 [3830] bam1-omc    pengine:     info: LogActions:       Leave   IPMI_fence_bam2 (Started bam1-backend)
Sep 13 06:43:54 [3830] bam1-omc    pengine:   notice: process_pe_message:       Calculated transition 1379, saving inputs in /var/lib/pacemaker/pengine/pe-input-31.bz2
Sep 13 06:43:54 [3831] bam1-omc       crmd:     info: do_state_transition:      State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE | input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response
Sep 13 06:43:54 [3831] bam1-omc       crmd:     info: do_te_invoke:     Processing graph 1379 (ref=pe_calc-dc-1505285034-1562) derived from /var/lib/pacemaker/pengine/pe-input-31.bz2
Sep 13 06:43:54 [3831] bam1-omc       crmd:   notice: te_rsc_command:   Initiating monitor operation mvno-100:0_monitor_0 locally on bam1-backend | action 5
Sep 13 06:43:54 [3828] bam1-omc       lrmd:     info: process_lrmd_get_rsc_info:        Resource 'mvno-100' not found (3 active resources)
Sep 13 06:43:54 [3828] bam1-omc       lrmd:     info: process_lrmd_get_rsc_info:        Resource 'mvno-100:0' not found (3 active resources)
Sep 13 06:43:54 [3828] bam1-omc       lrmd:     info: process_lrmd_rsc_register:        Added 'mvno-100' to the rsc list (4 active resources)
Sep 13 06:43:54 [3831] bam1-omc       crmd:     info: do_lrm_rsc_op:    Performing key=5:1379:7:e2c19428-0707-4677-a89a-ff1c19ebe57c op=mvno-100_monitor_0
Sep 13 06:43:54 [3831] bam1-omc       crmd:   notice: te_rsc_command:   Initiating monitor operation mvno-100:1_monitor_0 on bam2-backend | action 6
Sep 13 06:43:54 [3831] bam1-omc       crmd:   notice: te_rsc_command:   Initiating stop operation IPaddrAdminToolMvno100_stop_0 locally on bam1-backend | action 35
Sep 13 06:43:54 [3828] bam1-omc       lrmd:     info: cancel_recurring_action:  Cancelling ocf operation IPaddrAdminToolMvno100_monitor_10000
Sep 13 06:43:54 [3831] bam1-omc       crmd:     info: do_lrm_rsc_op:    Performing key=35:1379:0:e2c19428-0707-4677-a89a-ff1c19ebe57c op=IPaddrAdminToolMvno100_stop_0
Sep 13 06:43:54 [3828] bam1-omc       lrmd:     info: log_execute:      executing - rsc:IPaddrAdminToolMvno100 action:stop call_id:104
Sep 13 06:43:54 [3831] bam1-omc       crmd:     info: process_lrm_event:        Result of monitor operation for IPaddrAdminToolMvno100 on bam1-backend: Cancelled | call=90 key=IPaddrAdminToolMvno100_monitor_10000 confirmed=true
sba(mvno-100)[56850]:   2017/09/13_06:43:54 INFO: mvno-100 monitor started
sba(mvno-100)[56850]:   2017/09/13_06:43:54 INFO: Check status
IPaddr2(IPaddrAdminToolMvno100)[56851]: 2017/09/13_06:43:54 INFO: IP status = ok, IP_CIP=
Sep 13 06:43:54 [3828] bam1-omc       lrmd:     info: log_finished:     finished - rsc:IPaddrAdminToolMvno100 action:stop call_id:104 pid:56851 exit-code:0 exec-time:72ms queue-time:0ms
Sep 13 06:43:54 [3831] bam1-omc       crmd:   notice: process_lrm_event:        Result of stop operation for IPaddrAdminToolMvno100 on bam1-backend: 0 (ok) | call=104 key=IPaddrAdminToolMvno100_stop_0 confirmed=true cib-update=1494
Sep 13 06:43:54 [3826] bam1-omc        cib:     info: cib_perform_op:   Diff: --- 0.168.1 2
Sep 13 06:43:54 [3826] bam1-omc        cib:     info: cib_perform_op:   Diff: +++ 0.168.2 (null)
Sep 13 06:43:54 [3826] bam1-omc        cib:     info: cib_perform_op:   +  /cib:  @num_updates=2
Sep 13 06:43:54 [3826] bam1-omc        cib:     info: cib_perform_op:   +  /cib/status/node_state[@id='bam1-backend']/lrm[@id='bam1-backend']/lrm_resources/lrm_resource[@id='IPaddrAdminToolMvno100']/lrm_rsc_op[@id='IPaddrAdminToolMvno100_last_0']: @operation_key=IPaddrAdminToolMvno100_stop_0, @operation=stop, @transition-key=35:1379:0:e2c19428-0707-4677-a89a-ff1c19ebe57c, @transition-magic=0:0;35:1379:0:e2c19428-0707-4677-a89a-ff1c19ebe57c, @call-id=104, @last-run=1505285034, @last-rc-change=1505285034, @ex
Sep 13 06:43:54 [3826] bam1-omc        cib:     info: cib_process_request:      Completed cib_modify operation for section status: OK (rc=0, origin=local/crmd/1494, version=0.168.2)
Sep 13 06:43:54 [3831] bam1-omc       crmd:     info: match_graph_event:        Action IPaddrAdminToolMvno100_stop_0 (35) confirmed on bam1-backend (rc=0)
sba(mvno-100)[56850]:   2017/09/13_06:43:54 INFO: mvno-100 monitor returned 8
Sep 13 06:43:54 [3831] bam1-omc       crmd:   notice: process_lrm_event:        Result of probe operation for mvno-100 on bam1-backend: 8 (master) | call=102 key=mvno-100_monitor_0 confirmed=true cib-update=1495
Sep 13 06:43:54 [3826] bam1-omc        cib:     info: cib_perform_op:   Diff: --- 0.168.2 2
Sep 13 06:43:54 [3826] bam1-omc        cib:     info: cib_perform_op:   Diff: +++ 0.168.3 (null)
Sep 13 06:43:54 [3826] bam1-omc        cib:     info: cib_perform_op:   +  /cib:  @num_updates=3
Sep 13 06:43:54 [3826] bam1-omc        cib:     info: cib_perform_op:   ++ /cib/status/node_state[@id='bam1-backend']/lrm[@id='bam1-backend']/lrm_resources:  <lrm_resource id="mvno-100" type="sba" class="ocf" provider="sonorys"/>
Sep 13 06:43:54 [3826] bam1-omc        cib:     info: cib_perform_op:   ++                                                                                     <lrm_rsc_op id="mvno-100_last_failure_0" operation_key="mvno-100_monitor_0" operation="monitor" crm-debug-origin="do_update_resource" crm_feature_set="3.0.11" transition-key="5:1379:7:e2c19428-0707-4677-a89a-ff1c19ebe57c" transition-magic="0:8;5:1379:7:e2c19428-0707-4677-a89a-ff1c19ebe57c" on_node="bam1-backend" call-id="102" rc-code="8" op-status="0"
Sep 13 06:43:54 [3826] bam1-omc        cib:     info: cib_perform_op:   ++                                                                                     <lrm_rsc_op id="mvno-100_last_0" operation_key="mvno-100_monitor_0" operation="monitor" crm-debug-origin="do_update_resource" crm_feature_set="3.0.11" transition-key="5:1379:7:e2c19428-0707-4677-a89a-ff1c19ebe57c" transition-magic="0:8;5:1379:7:e2c19428-0707-4677-a89a-ff1c19ebe57c" on_node="bam1-backend" call-id="102" rc-code="8" op-status="0" interva
Sep 13 06:43:54 [3826] bam1-omc        cib:     info: cib_perform_op:   ++                                                                                    </lrm_resource>
Sep 13 06:43:54 [3826] bam1-omc        cib:     info: cib_process_request:      Completed cib_modify operation for section status: OK (rc=0, origin=local/crmd/1495, version=0.168.3)
Sep 13 06:43:54 [3831] bam1-omc       crmd:  warning: status_from_rc:   Action 5 (mvno-100:0_monitor_0) on bam1-backend failed (target: 7 vs. rc: 8): Error
Sep 13 06:43:54 [3831] bam1-omc       crmd:   notice: abort_transition_graph:   Transition aborted by operation mvno-100_monitor_0 'create' on bam1-backend: Event failed | magic=0:8;5:1379:7:e2c19428-0707-4677-a89a-ff1c19ebe57c cib=0.168.3 source=match_graph_event:310 complete=false
Sep 13 06:43:54 [3831] bam1-omc       crmd:     info: match_graph_event:        Action mvno-100_monitor_0 (5) confirmed on bam1-backend (rc=8)
Sep 13 06:43:54 [3831] bam1-omc       crmd:     info: process_graph_event:      Detected action (1379.5) mvno-100_monitor_0.102=master: failed
Sep 13 06:43:54 [3831] bam1-omc       crmd:  warning: status_from_rc:   Action 5 (mvno-100:0_monitor_0) on bam1-backend failed (target: 7 vs. rc: 8): Error
Sep 13 06:43:54 [3831] bam1-omc       crmd:     info: abort_transition_graph:   Transition aborted by operation mvno-100_monitor_0 'create' on bam1-backend: Event failed | magic=0:8;5:1379:7:e2c19428-0707-4677-a89a-ff1c19ebe57c cib=0.168.3 source=match_graph_event:310 complete=false
Sep 13 06:43:54 [3831] bam1-omc       crmd:     info: match_graph_event:        Action mvno-100_monitor_0 (5) confirmed on bam1-backend (rc=8)
Sep 13 06:43:54 [3831] bam1-omc       crmd:     info: process_graph_event:      Detected action (1379.5) mvno-100_monitor_0.102=master: failed
Sep 13 06:43:55 [3826] bam1-omc        cib:     info: cib_perform_op:   Diff: --- 0.168.3 2
Sep 13 06:43:55 [3826] bam1-omc        cib:     info: cib_perform_op:   Diff: +++ 0.168.4 (null)
Sep 13 06:43:55 [3826] bam1-omc        cib:     info: cib_perform_op:   +  /cib:  @num_updates=4
Sep 13 06:43:55 [3826] bam1-omc        cib:     info: cib_perform_op:   ++ /cib/status/node_state[@id='bam2-backend']/lrm[@id='bam2-backend']/lrm_resources:  <lrm_resource id="mvno-100" type="sba" class="ocf" provider="sonorys"/>
Sep 13 06:43:55 [3826] bam1-omc        cib:     info: cib_perform_op:   ++                                                                                     <lrm_rsc_op id="mvno-100_last_failure_0" operation_key="mvno-100_monitor_0" operation="monitor" crm-debug-origin="do_update_resource" crm_feature_set="3.0.11" transition-key="6:1379:7:e2c19428-0707-4677-a89a-ff1c19ebe57c" transition-magic="0:0;6:1379:7:e2c19428-0707-4677-a89a-ff1c19ebe57c" on_node="bam2-backend" call-id="82" rc-code="0" op-status="0"
Sep 13 06:43:55 [3826] bam1-omc        cib:     info: cib_perform_op:   ++                                                                                     <lrm_rsc_op id="mvno-100_last_0" operation_key="mvno-100_monitor_0" operation="monitor" crm-debug-origin="do_update_resource" crm_feature_set="3.0.11" transition-key="6:1379:7:e2c19428-0707-4677-a89a-ff1c19ebe57c" transition-magic="0:0;6:1379:7:e2c19428-0707-4677-a89a-ff1c19ebe57c" on_node="bam2-backend" call-id="82" rc-code="0" op-status="0" interval
Sep 13 06:43:55 [3826] bam1-omc        cib:     info: cib_perform_op:   ++                                                                                    </lrm_resource>
Sep 13 06:43:55 [3826] bam1-omc        cib:     info: cib_process_request:      Completed cib_modify operation for section status: OK (rc=0, origin=bam2-backend/crmd/44, version=0.168.4)
Sep 13 06:43:55 [3831] bam1-omc       crmd:  warning: status_from_rc:   Action 6 (mvno-100:1_monitor_0) on bam2-backend failed (target: 7 vs. rc: 0): Error
Sep 13 06:43:55 [3831] bam1-omc       crmd:     info: abort_transition_graph:   Transition aborted by operation mvno-100_monitor_0 'create' on bam2-backend: Event failed | magic=0:0;6:1379:7:e2c19428-0707-4677-a89a-ff1c19ebe57c cib=0.168.4 source=match_graph_event:310 complete=false
Sep 13 06:43:55 [3831] bam1-omc       crmd:     info: match_graph_event:        Action mvno-100_monitor_0 (6) confirmed on bam2-backend (rc=0)
Sep 13 06:43:55 [3831] bam1-omc       crmd:     info: process_graph_event:      Detected action (1379.6) mvno-100_monitor_0.82=ok: failed
Sep 13 06:43:55 [3831] bam1-omc       crmd:  warning: status_from_rc:   Action 6 (mvno-100:1_monitor_0) on bam2-backend failed (target: 7 vs. rc: 0): Error
Sep 13 06:43:55 [3831] bam1-omc       crmd:     info: abort_transition_graph:   Transition aborted by operation mvno-100_monitor_0 'create' on bam2-backend: Event failed | magic=0:0;6:1379:7:e2c19428-0707-4677-a89a-ff1c19ebe57c cib=0.168.4 source=match_graph_event:310 complete=false
Sep 13 06:43:55 [3831] bam1-omc       crmd:     info: match_graph_event:        Action mvno-100_monitor_0 (6) confirmed on bam2-backend (rc=0)
Sep 13 06:43:55 [3831] bam1-omc       crmd:     info: process_graph_event:      Detected action (1379.6) mvno-100_monitor_0.82=ok: failed
Sep 13 06:43:55 [3831] bam1-omc       crmd:   notice: run_graph:        Transition 1379 (Complete=6, Pending=0, Fired=0, Skipped=0, Incomplete=10, Source=/var/lib/pacemaker/pengine/pe-input-31.bz2): Complete
Sep 13 06:43:55 [3831] bam1-omc       crmd:     info: do_state_transition:      State transition S_TRANSITION_ENGINE -> S_POLICY_ENGINE | input=I_PE_CALC cause=C_FSA_INTERNAL origin=notify_crmd
Sep 13 06:43:55 [3830] bam1-omc    pengine:   notice: unpack_config:    On loss of CCM Quorum: Ignore
Sep 13 06:43:55 [3830] bam1-omc    pengine:     info: determine_online_status_fencing:  Node bam2-backend is active
Sep 13 06:43:55 [3830] bam1-omc    pengine:     info: determine_online_status:  Node bam2-backend is online
Sep 13 06:43:55 [3830] bam1-omc    pengine:     info: determine_online_status_fencing:  Node bam1-backend is active
Sep 13 06:43:55 [3830] bam1-omc    pengine:     info: determine_online_status:  Node bam1-backend is online
Sep 13 06:43:55 [3830] bam1-omc    pengine:     info: determine_op_status:      Operation monitor found resource mvno-100:0 active on bam2-backend
Sep 13 06:43:55 [3830] bam1-omc    pengine:     info: determine_op_status:      Operation monitor found resource mvno-100:0 active on bam2-backend
Sep 13 06:43:55 [3830] bam1-omc    pengine:     info: determine_op_status:      Operation monitor found resource IPaddrAdminToolMvno100 active on bam1-backend
Sep 13 06:43:55 [3830] bam1-omc    pengine:     info: determine_op_status:      Operation monitor found resource mvno-100:1 active in master mode on bam1-backend
Sep 13 06:43:55 [3830] bam1-omc    pengine:     info: determine_op_status:      Operation monitor found resource mvno-100:1 active in master mode on bam1-backend
Sep 13 06:43:55 [3830] bam1-omc    pengine:     info: clone_print:       Master/Slave Set: mvno-100-master [mvno-100]
Sep 13 06:43:55 [3830] bam1-omc    pengine:     info: short_print:           Masters: [ bam1-backend ]
Sep 13 06:43:55 [3830] bam1-omc    pengine:     info: short_print:           Slaves: [ bam2-backend ]
Sep 13 06:43:55 [3830] bam1-omc    pengine:     info: native_print:     IPaddrAdminToolMvno100  (ocf::heartbeat:IPaddr2):       Stopped
Sep 13 06:43:55 [3830] bam1-omc    pengine:     info: native_print:     IPMI_fence_bam1 (stonith:fence_ipmilan):        Started bam2-backend
Sep 13 06:43:55 [3830] bam1-omc    pengine:     info: native_print:     IPMI_fence_bam2 (stonith:fence_ipmilan):        Started bam1-backend
Sep 13 06:43:55 [3830] bam1-omc    pengine:     info: master_color:     Promoting mvno-100:1 (Master bam1-backend)
Sep 13 06:43:55 [3830] bam1-omc    pengine:     info: master_color:     mvno-100-master: Promoted 1 instances of a possible 1 to master
Sep 13 06:43:55 [3830] bam1-omc    pengine:     info: RecurringOp:       Start recurring monitor (10s) for mvno-100:0 on bam2-backend
Sep 13 06:43:55 [3830] bam1-omc    pengine:     info: RecurringOp:       Start recurring monitor (9s) for mvno-100:1 on bam1-backend
Sep 13 06:43:55 [3830] bam1-omc    pengine:     info: RecurringOp:       Start recurring monitor (10s) for mvno-100:0 on bam2-backend
Sep 13 06:43:55 [3830] bam1-omc    pengine:     info: RecurringOp:       Start recurring monitor (9s) for mvno-100:1 on bam1-backend
Sep 13 06:43:55 [3830] bam1-omc    pengine:     info: RecurringOp:       Start recurring monitor (10s) for IPaddrAdminToolMvno100 on bam1-backend
Sep 13 06:43:55 [3830] bam1-omc    pengine:     info: LogActions:       Leave   mvno-100:0      (Slave bam2-backend)
Sep 13 06:43:55 [3830] bam1-omc    pengine:     info: LogActions:       Leave   mvno-100:1      (Master bam1-backend)
Sep 13 06:43:55 [3830] bam1-omc    pengine:   notice: LogActions:       Start   IPaddrAdminToolMvno100  (bam1-backend)
Sep 13 06:43:55 [3830] bam1-omc    pengine:     info: LogActions:       Leave   IPMI_fence_bam1 (Started bam2-backend)
Sep 13 06:43:55 [3830] bam1-omc    pengine:     info: LogActions:       Leave   IPMI_fence_bam2 (Started bam1-backend)
Sep 13 06:43:55 [3830] bam1-omc    pengine:   notice: process_pe_message:       Calculated transition 1380, saving inputs in /var/lib/pacemaker/pengine/pe-input-32.bz2
Sep 13 06:43:55 [3831] bam1-omc       crmd:     info: do_state_transition:      State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE | input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response
Sep 13 06:43:55 [3831] bam1-omc       crmd:     info: do_te_invoke:     Processing graph 1380 (ref=pe_calc-dc-1505285035-1566) derived from /var/lib/pacemaker/pengine/pe-input-32.bz2
Sep 13 06:43:55 [3831] bam1-omc       crmd:   notice: te_rsc_command:   Initiating start operation IPaddrAdminToolMvno100_start_0 locally on bam1-backend | action 36
Sep 13 06:43:55 [3831] bam1-omc       crmd:     info: do_lrm_rsc_op:    Performing key=36:1380:0:e2c19428-0707-4677-a89a-ff1c19ebe57c op=IPaddrAdminToolMvno100_start_0
Sep 13 06:43:55 [3828] bam1-omc       lrmd:     info: log_execute:      executing - rsc:IPaddrAdminToolMvno100 action:start call_id:105
Sep 13 06:43:55 [3831] bam1-omc       crmd:   notice: te_rsc_command:   Initiating notify operation mvno-100_pre_notify_promote_0 on bam2-backend | action 52
Sep 13 06:43:55 [3831] bam1-omc       crmd:   notice: te_rsc_command:   Initiating notify operation mvno-100_pre_notify_promote_0 locally on bam1-backend | action 54
Sep 13 06:43:55 [3831] bam1-omc       crmd:     info: do_lrm_rsc_op:    Performing key=54:1380:0:e2c19428-0707-4677-a89a-ff1c19ebe57c op=mvno-100_notify_0
Sep 13 06:43:55 [3828] bam1-omc       lrmd:     info: log_execute:      executing - rsc:mvno-100 action:notify call_id:106
sba(mvno-100)[56964]:   2017/09/13_06:43:55 INFO: mvno-100 notify started
Sep 13 06:43:55 [3828] bam1-omc       lrmd:     info: log_finished:     finished - rsc:mvno-100 action:notify call_id:106 pid:56964 exit-code:3 exec-time:21ms queue-time:0ms
Sep 13 06:43:55 [3831] bam1-omc       crmd:     info: match_graph_event:        Action mvno-100_notify_0 (54) confirmed on bam1-backend (rc=0)
Sep 13 06:43:55 [3831] bam1-omc       crmd:   notice: process_lrm_event:        Result of notify operation for mvno-100 on bam1-backend: 0 (ok) | call=106 key=mvno-100_notify_0 confirmed=true cib-update=0
Sep 13 06:43:55 [3831] bam1-omc       crmd:     info: match_graph_event:        Action mvno-100_notify_0 (52) confirmed on bam2-backend (rc=0)
IPaddr2(IPaddrAdminToolMvno100)[56963]: 2017/09/13_06:43:55 INFO: Adding inet address xxx.xxx.xxx.xxx/yy with broadcast address zzz.zzz.zzz.zzz to device bond2
IPaddr2(IPaddrAdminToolMvno100)[56963]: 2017/09/13_06:43:55 INFO: Bringing device bond2 up
IPaddr2(IPaddrAdminToolMvno100)[56963]: 2017/09/13_06:43:55 INFO: /usr/libexec/heartbeat/send_arp -i 200 -r 5 -p /var/run/resource-agents/send_arp-xxx.xxx.xxx.xxx bond2 xxx.xxx.xxx.xxx auto not_used not_used
Sep 13 06:43:55 [3828] bam1-omc       lrmd:     info: log_finished:     finished - rsc:IPaddrAdminToolMvno100 action:start call_id:105 pid:56963 exit-code:0 exec-time:91ms queue-time:0ms
Sep 13 06:43:55 [3831] bam1-omc       crmd:     info: action_synced_wait:       Managed IPaddr2_meta-data_0 process 57044 exited with rc=0
Sep 13 06:43:55 [3831] bam1-omc       crmd:   notice: process_lrm_event:        Result of start operation for IPaddrAdminToolMvno100 on bam1-backend: 0 (ok) | call=105 key=IPaddrAdminToolMvno100_start_0 confirmed=true cib-update=1497
Sep 13 06:43:55 [3826] bam1-omc        cib:     info: cib_perform_op:   Diff: --- 0.168.4 2
Sep 13 06:43:55 [3826] bam1-omc        cib:     info: cib_perform_op:   Diff: +++ 0.168.5 (null)
Sep 13 06:43:55 [3826] bam1-omc        cib:     info: cib_perform_op:   +  /cib:  @num_updates=5
Sep 13 06:43:55 [3826] bam1-omc        cib:     info: cib_perform_op:   +  /cib/status/node_state[@id='bam1-backend']/lrm[@id='bam1-backend']/lrm_resources/lrm_resource[@id='IPaddrAdminToolMvno100']/lrm_rsc_op[@id='IPaddrAdminToolMvno100_last_0']: @operation_key=IPaddrAdminToolMvno100_start_0, @operation=start, @transition-key=36:1380:0:e2c19428-0707-4677-a89a-ff1c19ebe57c, @transition-magic=0:0;36:1380:0:e2c19428-0707-4677-a89a-ff1c19ebe57c, @call-id=105, @last-run=1505285035, @last-rc-change=1505285035, @
Sep 13 06:43:55 [3826] bam1-omc        cib:     info: cib_process_request:      Completed cib_modify operation for section status: OK (rc=0, origin=local/crmd/1497, version=0.168.5)
Sep 13 06:43:55 [3831] bam1-omc       crmd:     info: match_graph_event:        Action IPaddrAdminToolMvno100_start_0 (36) confirmed on bam1-backend (rc=0)
Sep 13 06:43:55 [3831] bam1-omc       crmd:   notice: te_rsc_command:   Initiating monitor operation IPaddrAdminToolMvno100_monitor_10000 locally on bam1-backend | action 37
Sep 13 06:43:55 [3831] bam1-omc       crmd:     info: do_lrm_rsc_op:    Performing key=37:1380:0:e2c19428-0707-4677-a89a-ff1c19ebe57c op=IPaddrAdminToolMvno100_monitor_10000
Sep 13 06:43:55 [3831] bam1-omc       crmd:   notice: te_rsc_command:   Initiating promote operation mvno-100_promote_0 locally on bam1-backend | action 10
Sep 13 06:43:55 [3831] bam1-omc       crmd:     info: do_lrm_rsc_op:    Performing key=10:1380:0:e2c19428-0707-4677-a89a-ff1c19ebe57c op=mvno-100_promote_0
Sep 13 06:43:55 [3828] bam1-omc       lrmd:     info: log_execute:      executing - rsc:mvno-100 action:promote call_id:108
sba(mvno-100)[57049]:   2017/09/13_06:43:55 INFO: mvno-100 promote started
sba(mvno-100)[57049]:   2017/09/13_06:43:55 INFO: Check status
Sep 13 06:43:55 [3831] bam1-omc       crmd:     info: process_lrm_event:        Result of monitor operation for IPaddrAdminToolMvno100 on bam1-backend: 0 (ok) | call=107 key=IPaddrAdminToolMvno100_monitor_10000 confirmed=false cib-update=1498
Sep 13 06:43:55 [3826] bam1-omc        cib:     info: cib_perform_op:   Diff: --- 0.168.5 2
Sep 13 06:43:55 [3826] bam1-omc        cib:     info: cib_perform_op:   Diff: +++ 0.168.6 (null)
Sep 13 06:43:55 [3826] bam1-omc        cib:     info: cib_perform_op:   +  /cib:  @num_updates=6
Sep 13 06:43:55 [3826] bam1-omc        cib:     info: cib_perform_op:   +  /cib/status/node_state[@id='bam1-backend']/lrm[@id='bam1-backend']/lrm_resources/lrm_resource[@id='IPaddrAdminToolMvno100']/lrm_rsc_op[@id='IPaddrAdminToolMvno100_monitor_10000']: @transition-key=37:1380:0:e2c19428-0707-4677-a89a-ff1c19ebe57c, @transition-magic=0:0;37:1380:0:e2c19428-0707-4677-a89a-ff1c19ebe57c, @call-id=107, @last-rc-change=1505285035, @exec-time=65
Sep 13 06:43:55 [3826] bam1-omc        cib:     info: cib_process_request:      Completed cib_modify operation for section status: OK (rc=0, origin=local/crmd/1498, version=0.168.6)
Sep 13 06:43:55 [3831] bam1-omc       crmd:     info: match_graph_event:        Action IPaddrAdminToolMvno100_monitor_10000 (37) confirmed on bam1-backend (rc=0)
sba(mvno-100)[57049]:   2017/09/13_06:43:55 INFO: Resource is already running as Master
sba(mvno-100)[57049]:   2017/09/13_06:43:55 INFO: mvno-100 promote returned 0
Sep 13 06:43:55 [3828] bam1-omc       lrmd:     info: log_finished:     finished - rsc:mvno-100 action:promote call_id:108 pid:57049 exit-code:0 exec-time:287ms queue-time:0ms
Sep 13 06:43:55 [3831] bam1-omc       crmd:   notice: process_lrm_event:        Result of promote operation for mvno-100 on bam1-backend: 0 (ok) | call=108 key=mvno-100_promote_0 confirmed=true cib-update=1499
Sep 13 06:43:55 [3826] bam1-omc        cib:     info: cib_perform_op:   Diff: --- 0.168.6 2
Sep 13 06:43:55 [3826] bam1-omc        cib:     info: cib_perform_op:   Diff: +++ 0.168.7 (null)
Sep 13 06:43:55 [3826] bam1-omc        cib:     info: cib_perform_op:   +  /cib:  @num_updates=7
Sep 13 06:43:55 [3826] bam1-omc        cib:     info: cib_perform_op:   +  /cib/status/node_state[@id='bam1-backend']/lrm[@id='bam1-backend']/lrm_resources/lrm_resource[@id='mvno-100']/lrm_rsc_op[@id='mvno-100_last_0']: @operation_key=mvno-100_promote_0, @operation=promote, @transition-key=10:1380:0:e2c19428-0707-4677-a89a-ff1c19ebe57c, @transition-magic=0:0;10:1380:0:e2c19428-0707-4677-a89a-ff1c19ebe57c, @call-id=108, @rc-code=0, @last-run=1505285035, @last-rc-change=1505285035, @exec-time=287
Sep 13 06:43:55 [3826] bam1-omc        cib:     info: cib_process_request:      Completed cib_modify operation for section status: OK (rc=0, origin=local/crmd/1499, version=0.168.7)
Sep 13 06:43:55 [3831] bam1-omc       crmd:     info: match_graph_event:        Action mvno-100_promote_0 (10) confirmed on bam1-backend (rc=0)
Sep 13 06:43:55 [3831] bam1-omc       crmd:   notice: te_rsc_command:   Initiating notify operation mvno-100_post_notify_promote_0 on bam2-backend | action 53
Sep 13 06:43:55 [3831] bam1-omc       crmd:   notice: te_rsc_command:   Initiating notify operation mvno-100_post_notify_promote_0 locally on bam1-backend | action 55
Sep 13 06:43:55 [3831] bam1-omc       crmd:     info: do_lrm_rsc_op:    Performing key=55:1380:0:e2c19428-0707-4677-a89a-ff1c19ebe57c op=mvno-100_notify_0
Sep 13 06:43:55 [3828] bam1-omc       lrmd:     info: log_execute:      executing - rsc:mvno-100 action:notify call_id:109
sba(mvno-100)[57157]:   2017/09/13_06:43:55 INFO: mvno-100 notify started
Sep 13 06:43:55 [3831] bam1-omc       crmd:     info: match_graph_event:        Action mvno-100_notify_0 (53) confirmed on bam2-backend (rc=0)
Sep 13 06:43:55 [3828] bam1-omc       lrmd:     info: log_finished:     finished - rsc:mvno-100 action:notify call_id:109 pid:57157 exit-code:3 exec-time:25ms queue-time:0ms
Sep 13 06:43:55 [3831] bam1-omc       crmd:     info: match_graph_event:        Action mvno-100_notify_0 (55) confirmed on bam1-backend (rc=0)
Sep 13 06:43:55 [3831] bam1-omc       crmd:   notice: process_lrm_event:        Result of notify operation for mvno-100 on bam1-backend: 0 (ok) | call=109 key=mvno-100_notify_0 confirmed=true cib-update=0
Sep 13 06:43:55 [3831] bam1-omc       crmd:   notice: te_rsc_command:   Initiating monitor operation mvno-100_monitor_10000 on bam2-backend | action 6
Sep 13 06:43:55 [3831] bam1-omc       crmd:   notice: te_rsc_command:   Initiating monitor operation mvno-100_monitor_9000 locally on bam1-backend | action 11
Sep 13 06:43:55 [3831] bam1-omc       crmd:     info: do_lrm_rsc_op:    Performing key=11:1380:8:e2c19428-0707-4677-a89a-ff1c19ebe57c op=mvno-100_monitor_9000
sba(mvno-100)[57166]:   2017/09/13_06:43:55 INFO: mvno-100 monitor started
sba(mvno-100)[57166]:   2017/09/13_06:43:55 INFO: Check status
Sep 13 06:43:55 [3826] bam1-omc        cib:     info: cib_perform_op:   Diff: --- 0.168.7 2
Sep 13 06:43:55 [3826] bam1-omc        cib:     info: cib_perform_op:   Diff: +++ 0.168.8 (null)
Sep 13 06:43:55 [3826] bam1-omc        cib:     info: cib_perform_op:   +  /cib:  @num_updates=8
Sep 13 06:43:55 [3826] bam1-omc        cib:     info: cib_perform_op:   ++ /cib/status/node_state[@id='bam2-backend']/lrm[@id='bam2-backend']/lrm_resources/lrm_resource[@id='mvno-100']: <lrm_rsc_op id="mvno-100_monitor_10000" operation_key="mvno-100_monitor_10000" operation="monitor" crm-debug-origin="do_update_resource" crm_feature_set="3.0.11" transition-key="6:1380:0:e2c19428-0707-4677-a89a-ff1c19ebe57c" transition-magic="0:0;6:1380:0:e2c19428-0707-4677-a89a-ff1c19ebe57c" on_node="bam2-backend" call-id="
Sep 13 06:43:55 [3826] bam1-omc        cib:     info: cib_process_request:      Completed cib_modify operation for section status: OK (rc=0, origin=bam2-backend/crmd/45, version=0.168.8)
Sep 13 06:43:55 [3831] bam1-omc       crmd:     info: match_graph_event:        Action mvno-100_monitor_10000 (6) confirmed on bam2-backend (rc=0)
sba(mvno-100)[57166]:   2017/09/13_06:43:55 INFO: mvno-100 monitor returned 8
Sep 13 06:43:55 [3831] bam1-omc       crmd:     info: process_lrm_event:        Result of monitor operation for mvno-100 on bam1-backend: 8 (master) | call=110 key=mvno-100_monitor_9000 confirmed=false cib-update=1500
Sep 13 06:43:55 [3826] bam1-omc        cib:     info: cib_perform_op:   Diff: --- 0.168.8 2
Sep 13 06:43:55 [3826] bam1-omc        cib:     info: cib_perform_op:   Diff: +++ 0.168.9 (null)
Sep 13 06:43:55 [3826] bam1-omc        cib:     info: cib_perform_op:   +  /cib:  @num_updates=9
Sep 13 06:43:55 [3826] bam1-omc        cib:     info: cib_perform_op:   ++ /cib/status/node_state[@id='bam1-backend']/lrm[@id='bam1-backend']/lrm_resources/lrm_resource[@id='mvno-100']: <lrm_rsc_op id="mvno-100_monitor_9000" operation_key="mvno-100_monitor_9000" operation="monitor" crm-debug-origin="do_update_resource" crm_feature_set="3.0.11" transition-key="11:1380:8:e2c19428-0707-4677-a89a-ff1c19ebe57c" transition-magic="0:8;11:1380:8:e2c19428-0707-4677-a89a-ff1c19ebe57c" on_node="bam1-backend" call-id="
Sep 13 06:43:55 [3826] bam1-omc        cib:     info: cib_process_request:      Completed cib_modify operation for section status: OK (rc=0, origin=local/crmd/1500, version=0.168.9)
Sep 13 06:43:55 [3831] bam1-omc       crmd:     info: match_graph_event:        Action mvno-100_monitor_9000 (11) confirmed on bam1-backend (rc=8)
Sep 13 06:43:55 [3831] bam1-omc       crmd:   notice: run_graph:        Transition 1380 (Complete=15, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-input-32.bz2): Complete
Sep 13 06:43:55 [3831] bam1-omc       crmd:     info: do_log:   Input I_TE_SUCCESS received in state S_TRANSITION_ENGINE from notify_crmd
Sep 13 06:43:55 [3831] bam1-omc       crmd:   notice: do_state_transition:      State transition S_TRANSITION_ENGINE -> S_IDLE | input=I_TE_SUCCESS cause=C_FSA_INTERNAL origin=notify_crmd

How can we avoid this interruption of service on cleanup?

Kind regards
Andreas Cart





_______________________________________________

Users mailing list: Users at clusterlabs.org

http://lists.clusterlabs.org/mailman/listinfo/users



Project Home: http://www.clusterlabs.org

Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf

Bugs: http://bugs.clusterlabs.org


