[ClusterLabs] active/active OCFS2 interruption after bootup of failed node

Ken Gaillot kgaillot at redhat.com
Mon May 11 15:39:44 CEST 2015



----- Original Message -----
> Found this post related to this issue:
> http://www.gossamer-threads.com/lists/linuxha/pacemaker/78627
> Unfortunately I cannot find any information about this issue and it's
> important to finalize my testing...
> 
> Why are all resources restarted? Is it because of the order
> constraint, and is this to be expected?
> 
> Any suggestions are highly appreciated!

Comments inline ...

> On 2015-05-08 17:39, lukas wrote:
> > Hi,
> >
> > I have an active/active cluster with VIP/OCFS2/APACHE. Everything
> > works fine if one node is turned off - a smooth switch of both IPs
> > to the active node. The problem occurs as soon as the failed node
> > boots up again: both clone (node1+node2) instances of all resources
> > restart, so there is still a short interruption. Why are the
> > instances on node1 restarted as well?
> >
> > Any suggestions how to prevent that?
> >
> > node vm-1 \
> >         attributes standby="off"
> > node vm-2 \
> >         attributes standby="off"
> > primitive WebFS ocf:heartbeat:Filesystem \
> >         params device="/dev/drbd/by-res/wwwdata"
> > directory="/var/www/html" fstype="ocfs2" \
> >         op monitor interval="10"
> > primitive p_IP ocf:heartbeat:IPaddr2 \
> >         params ip="192.168.122.200" nic="eth0"
> > clusterip_hash="sourceip-sourceport" \
> >         op monitor interval="10s"
> > primitive p_apache ocf:heartbeat:apache \
> >         params configfile="/etc/apache2/apache2.conf"
> > statusurl="http://localhost/server-status" \
> >         op monitor interval="60" timeout="20" \
> >         op start interval="0" timeout="40s" start-delay="0" \
> >         meta is-managed="true"
> > primitive p_controld_dlm ocf:pacemaker:controld \
> >         op monitor interval="120"
> > primitive p_drbd_wwwdata ocf:linbit:drbd \
> >         params drbd_resource="wwwdata" \
> >         op monitor interval="60s"
> > primitive p_fence_N1 stonith:external/libvirt \
> >         params hostlist="vm-1:N1"
> > hypervisor_uri="qemu+tcp://192.168.122.1/system"
> > pcmk_reboot_action="reset" \
> >         op monitor interval="60" \
> >         meta target-role="Started"
> > primitive p_fence_N2 stonith:external/libvirt \
> >         params hostlist="vm-2:N2"
> > hypervisor_uri="qemu+tcp://192.168.122.1/system"
> > pcmk_reboot_action="reset" \
> >         op monitor interval="60"
> > primitive p_o2cb ocf:pacemaker:o2cb \
> >         op start interval="0" timeout="90" \
> >         op stop interval="0" timeout="100" \
> >         op monitor interval="10"
> > ms ms_drbd_ocfs2 p_drbd_wwwdata \
> >         meta master-max="2" clone-max="2" notify="true"
> > clone cl_DLM p_controld_dlm \
> >         meta globally-unique="false" interleave="true"
> > clone cl_IP p_IP \
> >         meta globally-unique="true" clone-max="2" clone-node-max="2"
> > target-role="Started" interleave="true" \
> >         params resource-stickiness="0"

You have a default stickiness of 200, but you set it to 0 for cl_IP. I'm guessing that's because you want the two instances of it balanced across the nodes whenever possible. The cluster will honor that preference, which also means it will move one IP instance back to vm-2 as soon as that node rejoins.
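
If you ever decide you'd rather keep the IP instances where they are and
avoid that move, a minimal sketch (crm shell syntax, untested against your
setup; resource-stickiness is normally a meta attribute) would be to give
cl_IP a nonzero stickiness instead, e.g.:

    crm resource meta cl_IP set resource-stickiness 200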

> > clone cl_WebFS WebFS \
> >         meta interleave="true" target-role="Started"
> > clone cl_Website p_apache
> > clone cl_o2cb p_o2cb \
> >         meta globally-unique="false" interleave="true"
> > target-role="Started"

I notice you have interleave=true for everything except ms_drbd_ocfs2 and cl_Website. Here is a good explanation to help you decide what it should be:
https://www.hastexo.com/resources/hints-and-kinks/interleaving-pacemaker-clones
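
For example, if you decide those two should be interleaved as well, the
definitions would look roughly like this (crm shell syntax, adapt as
needed for your setup):

    clone cl_Website p_apache \
            meta interleave="true"
    ms ms_drbd_ocfs2 p_drbd_wwwdata \
            meta master-max="2" clone-max="2" notify="true" interleave="true"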

> > location l_fence_N1 p_fence_N1 -inf: vm-1
> > location l_fence_N2 p_fence_N2 -inf: vm-2
> > colocation c_fs_ip inf: cl_Website cl_WebFS
> > colocation c_o2cb_dlm inf: cl_o2cb cl_DLM
> > colocation c_o2cb_fs inf: cl_WebFS cl_o2cb
> > order o_ocfs2 inf: ms_drbd_ocfs2:promote cl_DLM:start cl_o2cb:start
> > cl_WebFS:start cl_Website cl_IP
> > property $id="cib-bootstrap-options" \
> > dc-version="1.1.7-ee0730e13d124c3d58f00016c3376a1de5323cff" \
> >         cluster-infrastructure="openais" \
> >         expected-quorum-votes="2" \
> >         stonith-enabled="true" \
> >         no-quorum-policy="ignore" \
> >         last-lrm-refresh="1431076127"
> > rsc_defaults $id="rsc-options" \
> >         resource-stickiness="200"
> > op_defaults $id="op-options" \
> >         timeout="240s"
> >
> >
> > root at vm-1:~# tail -f /var/log/syslog
> > May  8 17:22:07 vm-1 IPaddr2[11583]: INFO: /usr/lib/heartbeat/send_arp
> > -i 200 -r 5 -p /var/run/resource-agents/send_arp-192.168.122.200 eth0
> > 192.168.122.200 8bbd5f0b558f not_used not_used
> > May  8 17:22:07 vm-1 lrmd: [3008]: info: operation start[72] on p_IP:0
> > for client 3011: pid 11583 exited with return code 0
> > May  8 17:22:07 vm-1 crmd: [3011]: info: process_lrm_event: LRM
> > operation p_IP:0_start_0 (call=72, rc=0, cib-update=211,
> > confirmed=true) ok
> > May  8 17:22:07 vm-1 crmd: [3011]: info: te_rsc_command: Initiating
> > action 80: monitor p_IP:0_monitor_10000 on vm-1 (local)
> > May  8 17:22:07 vm-1 lrmd: [3008]: info: rsc:p_IP:0 monitor[73] (pid
> > 11684)
> > May  8 17:22:07 vm-1 lrmd: [3008]: info: operation monitor[73] on
> > p_IP:0 for client 3011: pid 11684 exited with return code 0
> > May  8 17:22:07 vm-1 crmd: [3011]: info: process_lrm_event: LRM
> > operation p_IP:0_monitor_10000 (call=73, rc=0, cib-update=212,
> > confirmed=false) ok
> > May  8 17:22:08 vm-1 crmd: [3011]: info: te_rsc_command: Initiating
> > action 82: monitor p_IP:1_monitor_10000 on vm-2
> > May  8 17:22:08 vm-1 crmd: [3011]: notice: run_graph: ==== Transition
> > 14 (Complete=25, Pending=0, Fired=0, Skipped=0, Incomplete=0,
> > Source=/var/lib/pengine/pe-input-2179.bz2): Complete
> > May  8 17:22:08 vm-1 crmd: [3011]: notice: do_state_transition: State
> > transition S_TRANSITION_ENGINE -> S_IDLE [ input=I_TE_SUCCESS
> > cause=C_FSA_INTERNAL origin=notify_crmd ]
> > May  8 17:22:27 vm-1 kernel: [  534.940064] block drbd1: PingAck did
> > not arrive in time.
> > May  8 17:22:27 vm-1 kernel: [  534.940238] block drbd1: peer( Primary
> > -> Unknown ) conn( Connected -> NetworkFailure ) pdsk( UpToDate ->
> > DUnknown )
> > May  8 17:22:27 vm-1 kernel: [  534.940290] block drbd1: new current
> > UUID 1D8582A875D72EB5:098FF992CBAA32F1:248E28A17C6EBF3D:248D28A17C6EBF3D
> > May  8 17:22:27 vm-1 kernel: [  534.942650] block drbd1: asender
> > terminated
> > May  8 17:22:27 vm-1 kernel: [  534.942654] block drbd1: Terminating
> > drbd1_asender
> > May  8 17:22:27 vm-1 kernel: [  534.943633] block drbd1: Connection
> > closed
> > May  8 17:22:27 vm-1 kernel: [  534.943639] block drbd1: conn(
> > NetworkFailure -> Unconnected )
> > May  8 17:22:27 vm-1 kernel: [  534.943643] block drbd1: receiver
> > terminated
> > May  8 17:22:27 vm-1 kernel: [  534.943645] block drbd1: Restarting
> > drbd1_receiver
> > May  8 17:22:27 vm-1 kernel: [  534.943647] block drbd1: receiver
> > (re)started
> > May  8 17:22:27 vm-1 kernel: [  534.943651] block drbd1: conn(
> > Unconnected -> WFConnection )
> > May  8 17:22:28 vm-1 corosync[2971]:   [TOTEM ] A processor failed,
> > forming new configuration.
> > May  8 17:22:32 vm-1 corosync[2971]:   [pcmk  ] notice:
> > pcmk_peer_update: Transitional membership event on ring 7416: memb=1,
> > new=0, lost=1
> > May  8 17:22:32 vm-1 corosync[2971]:   [pcmk  ] info:
> > pcmk_peer_update: memb: vm-1 2138745024
> > May  8 17:22:32 vm-1 corosync[2971]:   [pcmk  ] info:
> > pcmk_peer_update: lost: vm-2 930785472
> > May  8 17:22:32 vm-1 corosync[2971]:   [pcmk  ] notice:
> > pcmk_peer_update: Stable membership event on ring 7416: memb=1, new=0,
> > lost=0
> > May  8 17:22:32 vm-1 corosync[2971]:   [pcmk  ] info:
> > pcmk_peer_update: MEMB: vm-1 2138745024
> > May  8 17:22:32 vm-1 corosync[2971]:   [pcmk  ] info:
> > ais_mark_unseen_peer_dead: Node vm-2 was not seen in the previous
> > transition
> > May  8 17:22:32 vm-1 kernel: [  540.163987] dlm: closing connection to
> > node 930785472
> > May  8 17:22:32 vm-1 corosync[2971]:   [pcmk  ] info: update_member:
> > Node 930785472/vm-2 is now: lost
> > May  8 17:22:32 vm-1 corosync[2971]:   [pcmk  ] info:
> > send_member_notification: Sending membership update 7416 to 4 children
> > May  8 17:22:32 vm-1 corosync[2971]:   [TOTEM ] A processor joined or
> > left the membership and a new membership was formed.
> > May  8 17:22:32 vm-1 crmd: [3011]: notice: ais_dispatch_message:
> > Membership 7416: quorum lost
> > May  8 17:22:32 vm-1 crmd: [3011]: info: ais_status_callback: status:
> > vm-2 is now lost (was member)
> > May  8 17:22:32 vm-1 crmd: [3011]: info: crm_update_peer: Node vm-2:
> > id=930785472 state=lost (new) addr=r(0) ip(192.168.122.183) votes=1
> > born=7412 seen=7412 proc=00000000000000000000000000111312
> > May  8 17:22:32 vm-1 ocfs2_controld: [3654]: notice:
> > ais_dispatch_message: Membership 7416: quorum lost
> > May  8 17:22:32 vm-1 cluster-dlm: [3605]: notice:
> > ais_dispatch_message: Membership 7416: quorum lost
> > May  8 17:22:32 vm-1 cib: [3006]: notice: ais_dispatch_message:
> > Membership 7416: quorum lost
> > May  8 17:22:32 vm-1 corosync[2971]:   [CPG   ] chosen downlist:
> > sender r(0) ip(192.168.122.127) ; members(old:2 left:1)
> > May  8 17:22:32 vm-1 ocfs2_controld: [3654]: info: crm_update_peer:
> > Node vm-2: id=930785472 state=lost (new) addr=r(0)
> > ip(192.168.122.183)  votes=1 born=7412 seen=7412
> > proc=00000000000000000000000000111312
> > May  8 17:22:32 vm-1 cluster-dlm: [3605]: info: crm_update_peer: Node
> > vm-2: id=930785472 state=lost (new) addr=r(0) ip(192.168.122.183)
> > votes=1 born=7412 seen=7412 proc=00000000000000000000000000111312
> > May  8 17:22:32 vm-1 cib: [3006]: info: crm_update_peer: Node vm-2:
> > id=930785472 state=lost (new) addr=r(0) ip(192.168.122.183) votes=1
> > born=7412 seen=7412 proc=00000000000000000000000000111312
> > May  8 17:22:32 vm-1 corosync[2971]:   [MAIN  ] Completed service
> > synchronization, ready to provide service.
> > May  8 17:22:32 vm-1 cib: [3006]: info: cib_process_request: Operation
> > complete: op cib_modify for section nodes (origin=local/crmd/213,
> > version=0.593.601): ok (rc=0)
> > May  8 17:22:32 vm-1 cib: [3006]: info: cib_process_request: Operation
> > complete: op cib_modify for section cib (origin=local/crmd/215,
> > version=0.593.603): ok (rc=0)
> > May  8 17:22:32 vm-1 crmd: [3011]: info: crmd_ais_dispatch: Setting
> > expected votes to 2
> > May  8 17:22:32 vm-1 crmd: [3011]: WARN: match_down_event: No match
> > for shutdown action on vm-2
> > May  8 17:22:32 vm-1 crmd: [3011]: info: te_update_diff:
> > Stonith/shutdown of vm-2 not matched
> > May  8 17:22:32 vm-1 crmd: [3011]: info: abort_transition_graph:
> > te_update_diff:234 - Triggered transition abort (complete=1,
> > tag=node_state, id=vm-2, magic=NA, cib=0.593.602) : Node failure
> > May  8 17:22:32 vm-1 crmd: [3011]: notice: do_state_transition: State
> > transition S_IDLE -> S_POLICY_ENGINE [ input=I_PE_CALC
> > cause=C_FSA_INTERNAL origin=abort_transition_graph ]
> > May  8 17:22:32 vm-1 cib: [3006]: info: cib_process_request: Operation
> > complete: op cib_modify for section crm_config (origin=local/crmd/217,
> > version=0.593.604): ok (rc=0)
> > May  8 17:22:32 vm-1 pengine: [3010]: notice: unpack_config: On loss
> > of CCM Quorum: Ignore
> > May  8 17:22:32 vm-1 pengine: [3010]: WARN: pe_fence_node: Node vm-2
> > will be fenced because it is un-expectedly down
> > May  8 17:22:32 vm-1 pengine: [3010]: WARN: determine_online_status:
> > Node vm-2 is unclean
> > May  8 17:22:32 vm-1 pengine: [3010]: WARN: custom_action: Action
> > p_fence_N1_stop_0 on vm-2 is unrunnable (offline)
> > May  8 17:22:32 vm-1 pengine: [3010]: WARN: custom_action: Marking
> > node vm-2 unclean
> > May  8 17:22:32 vm-1 pengine: [3010]: WARN: custom_action: Action
> > p_drbd_wwwdata:1_demote_0 on vm-2 is unrunnable (offline)
> > May  8 17:22:32 vm-1 pengine: [3010]: WARN: custom_action: Action
> > p_drbd_wwwdata:1_stop_0 on vm-2 is unrunnable (offline)
> > May  8 17:22:32 vm-1 pengine: [3010]: WARN: custom_action: Marking
> > node vm-2 unclean
> > May  8 17:22:32 vm-1 pengine: [3010]: WARN: custom_action: Action
> > p_drbd_wwwdata:1_demote_0 on vm-2 is unrunnable (offline)
> > May  8 17:22:32 vm-1 pengine: [3010]: WARN: custom_action: Action
> > p_drbd_wwwdata:1_stop_0 on vm-2 is unrunnable (offline)
> > May  8 17:22:32 vm-1 pengine: [3010]: WARN: custom_action: Marking
> > node vm-2 unclean
> > May  8 17:22:32 vm-1 pengine: [3010]: WARN: custom_action: Action
> > p_controld_dlm:0_stop_0 on vm-2 is unrunnable (offline)
> > May  8 17:22:32 vm-1 pengine: [3010]: WARN: custom_action: Marking
> > node vm-2 unclean
> > May  8 17:22:32 vm-1 pengine: [3010]: WARN: custom_action: Action
> > p_o2cb:0_stop_0 on vm-2 is unrunnable (offline)
> > May  8 17:22:32 vm-1 pengine: [3010]: WARN: custom_action: Marking
> > node vm-2 unclean
> > May  8 17:22:32 vm-1 pengine: [3010]: WARN: custom_action: Action
> > WebFS:0_stop_0 on vm-2 is unrunnable (offline)
> > May  8 17:22:32 vm-1 pengine: [3010]: WARN: custom_action: Marking
> > node vm-2 unclean
> > May  8 17:22:32 vm-1 pengine: [3010]: WARN: custom_action: Action
> > p_apache:1_stop_0 on vm-2 is unrunnable (offline)
> > May  8 17:22:32 vm-1 pengine: [3010]: WARN: custom_action: Marking
> > node vm-2 unclean
> > May  8 17:22:32 vm-1 pengine: [3010]: WARN: custom_action: Action
> > p_IP:1_stop_0 on vm-2 is unrunnable (offline)
> > May  8 17:22:32 vm-1 pengine: [3010]: WARN: custom_action: Marking
> > node vm-2 unclean
> > May  8 17:22:32 vm-1 pengine: [3010]: WARN: stage6: Scheduling Node
> > vm-2 for STONITH
> > May  8 17:22:32 vm-1 pengine: [3010]: notice: LogActions: Stop
> > p_fence_N1#011(vm-2)
> > May  8 17:22:32 vm-1 pengine: [3010]: notice: LogActions: Demote
> > p_drbd_wwwdata:1#011(Master -> Stopped vm-2)
> > May  8 17:22:32 vm-1 pengine: [3010]: notice: LogActions: Stop
> > p_controld_dlm:0#011(vm-2)
> > May  8 17:22:32 vm-1 pengine: [3010]: notice: LogActions: Stop
> > p_o2cb:0#011(vm-2)
> > May  8 17:22:32 vm-1 pengine: [3010]: notice: LogActions: Stop
> > WebFS:0#011(vm-2)
> > May  8 17:22:32 vm-1 pengine: [3010]: notice: LogActions: Stop
> > p_apache:1#011(vm-2)
> > May  8 17:22:32 vm-1 pengine: [3010]: notice: LogActions: Move
> > p_IP:1#011(Started vm-2 -> vm-1)
> > May  8 17:22:32 vm-1 crmd: [3011]: notice: do_state_transition: State
> > transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS
> > cause=C_IPC_MESSAGE origin=handle_response ]
> > May  8 17:22:32 vm-1 crmd: [3011]: info: do_te_invoke: Processing
> > graph 15 (ref=pe_calc-dc-1431098552-227) derived from
> > /var/lib/pengine/pe-warn-694.bz2
> > May  8 17:22:32 vm-1 crmd: [3011]: info: te_rsc_command: Initiating
> > action 99: notify p_drbd_wwwdata:0_pre_notify_demote_0 on vm-1 (local)
> > May  8 17:22:32 vm-1 lrmd: [3008]: info: rsc:p_drbd_wwwdata:0
> > notify[74] (pid 11972)
> > May  8 17:22:32 vm-1 crmd: [3011]: notice: te_fence_node: Executing
> > reboot fencing operation (82) on vm-2 (timeout=60000)
> > May  8 17:22:32 vm-1 stonith-ng: [3007]: info:
> > initiate_remote_stonith_op: Initiating remote operation reboot for
> > vm-2: 6b9a869c-b8c3-43fe-b8d0-362717852644
> > May  8 17:22:32 vm-1 stonith-ng: [3007]: info:
> > can_fence_host_with_device: p_fence_N2 can fence vm-2: dynamic-list
> > May  8 17:22:32 vm-1 stonith-ng: [3007]: info: call_remote_stonith:
> > Requesting that vm-1 perform op reboot vm-2
> > May  8 17:22:32 vm-1 stonith-ng: [3007]: info:
> > can_fence_host_with_device: p_fence_N2 can fence vm-2: dynamic-list
> > May  8 17:22:32 vm-1 stonith-ng: [3007]: info: stonith_fence: Found 1
> > matching devices for 'vm-2'
> > May  8 17:22:32 vm-1 stonith-ng: [3007]: info: stonith_command:
> > Processed st_fence from vm-1: rc=-1
> > May  8 17:22:32 vm-1 stonith-ng: [3007]: info: make_args: Substituting
> > action 'reset' for requested operation 'reboot'
> > May  8 17:22:32 vm-1 pengine: [3010]: WARN: process_pe_message:
> > Transition 15: WARNINGs found during PE processing. PEngine Input
> > stored in: /var/lib/pengine/pe-warn-694.bz2
> > May  8 17:22:32 vm-1 pengine: [3010]: notice: process_pe_message:
> > Configuration WARNINGs found during PE processing.  Please run
> > "crm_verify -L" to identify issues.
> > May  8 17:22:32 vm-1 lrmd: [3008]: info: operation notify[74] on
> > p_drbd_wwwdata:0 for client 3011: pid 11972 exited with return code 0
> > May  8 17:22:32 vm-1 crmd: [3011]: info: process_lrm_event: LRM
> > operation p_drbd_wwwdata:0_notify_0 (call=74, rc=0, cib-update=0,
> > confirmed=true) ok
> > May  8 17:22:32 vm-1 external/libvirt[11993]: [12016]: notice: Domain
> > N2 was stopped
> > May  8 17:22:35 vm-1 external/libvirt[11993]: [12025]: notice: Domain
> > N2 was started
> > May  8 17:22:36 vm-1 stonith-ng: [3007]: notice: log_operation:
> > Operation 'reboot' [11973] (call 0 from
> > 97c57579-f851-458e-8256-6c87e2308265) for host 'vm-2' with device
> > 'p_fence_N2' returned: 0
> > May  8 17:22:36 vm-1 stonith-ng: [3007]: info: log_operation:
> > p_fence_N2: Performing: stonith -t external/libvirt -T reset vm-2
> > May  8 17:22:36 vm-1 stonith-ng: [3007]: info: log_operation:
> > p_fence_N2: success: vm-2 0
> > May  8 17:22:36 vm-1 stonith-ng: [3007]: notice: remote_op_done:
> > Operation reboot of vm-2 by vm-1 for
> > vm-1[97c57579-f851-458e-8256-6c87e2308265]: OK
> > May  8 17:22:36 vm-1 crmd: [3011]: info: tengine_stonith_callback:
> > StonithOp <st-reply st_origin="stonith_construct_async_reply"
> > t="stonith-ng" st_op="reboot"
> > st_remote_op="6b9a869c-b8c3-43fe-b8d0-362717852644"
> > st_clientid="97c57579-f851-458e-8256-6c87e2308265" st_target="vm-2"
> > st_device_action="st_fence" st_callid="0" st_callopt="0" st_rc="0"
> > st_output="Performing: stonith -t external/libvirt -T reset
> > vm-2#012success: vm-2 0#012" src="vm-1" seq="16" state="2" />
> > May  8 17:22:36 vm-1 crmd: [3011]: info: erase_status_tag: Deleting
> > xpath: //node_state[@uname='vm-2']/lrm
> > May  8 17:22:36 vm-1 crmd: [3011]: info: erase_status_tag: Deleting
> > xpath: //node_state[@uname='vm-2']/transient_attributes
> > May  8 17:22:36 vm-1 crmd: [3011]: notice: crmd_peer_update: Status
> > update: Client vm-2/crmd now has status [offline] (DC=true)
> > May  8 17:22:36 vm-1 crmd: [3011]: notice: tengine_stonith_notify:
> > Peer vm-2 was terminated (reboot) by vm-1 for vm-1: OK
> > (ref=6b9a869c-b8c3-43fe-b8d0-362717852644)
> > May  8 17:22:36 vm-1 crmd: [3011]: notice: do_state_transition: State
> > transition S_TRANSITION_ENGINE -> S_INTEGRATION [ input=I_NODE_JOIN
> > cause=C_FSA_INTERNAL origin=check_join_state ]
> > May  8 17:22:36 vm-1 crmd: [3011]: info: abort_transition_graph:
> > do_te_invoke:169 - Triggered transition abort (complete=0) : Peer Halt
> > May  8 17:22:36 vm-1 crmd: [3011]: notice: run_graph: ==== Transition
> > 15 (Complete=7, Pending=0, Fired=0, Skipped=22, Incomplete=15,
> > Source=/var/lib/pengine/pe-warn-694.bz2): Stopped
> > May  8 17:22:36 vm-1 crmd: [3011]: info: abort_transition_graph:
> > do_te_invoke:169 - Triggered transition abort (complete=1) : Peer Halt
> > May  8 17:22:36 vm-1 crmd: [3011]: info: join_make_offer: Making join
> > offers based on membership 7416
> > May  8 17:22:36 vm-1 crmd: [3011]: info: do_dc_join_offer_all:
> > join-10: Waiting on 1 outstanding join acks
> > May  8 17:22:36 vm-1 crmd: [3011]: info: update_dc: Set DC to vm-1
> > (3.0.6)
> > May  8 17:22:36 vm-1 crmd: [3011]: info: cib_fencing_updated: Fencing
> > update 219 for vm-2: complete
> > May  8 17:22:36 vm-1 cib: [3006]: info: cib_process_request: Operation
> > complete: op cib_delete for section //node_state[@uname='vm-2']/lrm
> > (origin=local/crmd/220, version=0.593.606): ok (rc=0)
> > May  8 17:22:36 vm-1 cib: [3006]: info: cib_process_request: Operation
> > complete: op cib_delete for section
> > //node_state[@uname='vm-2']/transient_attributes
> > (origin=local/crmd/221, version=0.593.607): ok (rc=0)
> > May  8 17:22:36 vm-1 crmd: [3011]: notice: do_state_transition: State
> > transition S_INTEGRATION -> S_FINALIZE_JOIN [ input=I_INTEGRATED
> > cause=C_FSA_INTERNAL origin=check_join_state ]
> > May  8 17:22:36 vm-1 crmd: [3011]: info: do_dc_join_finalize: join-10:
> > Syncing the CIB from vm-1 to the rest of the cluster
> > May  8 17:22:36 vm-1 cib: [3006]: info: cib_process_request: Operation
> > complete: op cib_sync for section 'all' (origin=local/crmd/224,
> > version=0.593.608): ok (rc=0)
> > May  8 17:22:36 vm-1 cib: [3006]: info: cib_process_request: Operation
> > complete: op cib_modify for section nodes (origin=local/crmd/225,
> > version=0.593.609): ok (rc=0)
> > May  8 17:22:36 vm-1 crmd: [3011]: info: do_dc_join_ack: join-10:
> > Updating node state to member for vm-1
> > May  8 17:22:36 vm-1 crmd: [3011]: info: erase_status_tag: Deleting
> > xpath: //node_state[@uname='vm-1']/lrm
> > May  8 17:22:36 vm-1 cib: [3006]: info: cib_process_request: Operation
> > complete: op cib_delete for section //node_state[@uname='vm-1']/lrm
> > (origin=local/crmd/226, version=0.593.610): ok (rc=0)
> > May  8 17:22:36 vm-1 crmd: [3011]: notice: do_state_transition: State
> > transition S_FINALIZE_JOIN -> S_POLICY_ENGINE [ input=I_FINALIZED
> > cause=C_FSA_INTERNAL origin=check_join_state ]
> > May  8 17:22:36 vm-1 crmd: [3011]: info: abort_transition_graph:
> > do_te_invoke:162 - Triggered transition abort (complete=1) : Peer
> > Cancelled
> > May  8 17:22:36 vm-1 attrd: [3009]: notice: attrd_local_callback:
> > Sending full refresh (origin=crmd)
> > May  8 17:22:36 vm-1 attrd: [3009]: notice: attrd_trigger_update:
> > Sending flush op to all hosts for: master-p_drbd_wwwdata:0 (10000)
> > May  8 17:22:36 vm-1 cib: [3006]: info: cib_process_request: Operation
> > complete: op cib_modify for section nodes (origin=local/crmd/228,
> > version=0.593.612): ok (rc=0)
> > May  8 17:22:36 vm-1 cib: [3006]: info: cib_process_request: Operation
> > complete: op cib_modify for section cib (origin=local/crmd/230,
> > version=0.593.614): ok (rc=0)
> > May  8 17:22:36 vm-1 attrd: [3009]: notice: attrd_trigger_update:
> > Sending flush op to all hosts for: probe_complete (true)
> > May  8 17:22:36 vm-1 pengine: [3010]: notice: unpack_config: On loss
> > of CCM Quorum: Ignore
> > May  8 17:22:36 vm-1 pengine: [3010]: notice: LogActions: Start
> > p_IP:1#011(vm-1)
> > May  8 17:22:36 vm-1 crmd: [3011]: notice: do_state_transition: State
> > transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS
> > cause=C_IPC_MESSAGE origin=handle_response ]
> > May  8 17:22:36 vm-1 crmd: [3011]: info: do_te_invoke: Processing
> > graph 16 (ref=pe_calc-dc-1431098556-234) derived from
> > /var/lib/pengine/pe-input-2180.bz2
> > May  8 17:22:36 vm-1 crmd: [3011]: info: te_rsc_command: Initiating
> > action 66: start p_IP:1_start_0 on vm-1 (local)
> > May  8 17:22:36 vm-1 lrmd: [3008]: info: rsc:p_IP:1 start[75] (pid 12069)
> > May  8 17:22:36 vm-1 pengine: [3010]: notice: process_pe_message:
> > Transition 16: PEngine Input stored in:
> > /var/lib/pengine/pe-input-2180.bz2
> > May  8 17:22:36 vm-1 kernel: [  544.194603] ocfs2: Begin replay
> > journal (node 930785472, slot 0) on device (147,1)
> > May  8 17:22:36 vm-1 IPaddr2[12069]: INFO: /usr/lib/heartbeat/send_arp
> > -i 200 -r 5 -p /var/run/resource-agents/send_arp-192.168.122.200 eth0
> > 192.168.122.200 8bbd5f0b558f not_used not_used
> > May  8 17:22:36 vm-1 lrmd: [3008]: info: operation start[75] on p_IP:1
> > for client 3011: pid 12069 exited with return code 0
> > May  8 17:22:36 vm-1 crmd: [3011]: info: process_lrm_event: LRM
> > operation p_IP:1_start_0 (call=75, rc=0, cib-update=232,
> > confirmed=true) ok
> > May  8 17:22:36 vm-1 crmd: [3011]: info: te_rsc_command: Initiating
> > action 67: monitor p_IP:1_monitor_10000 on vm-1 (local)
> > May  8 17:22:36 vm-1 lrmd: [3008]: info: rsc:p_IP:1 monitor[76] (pid
> > 12124)
> > May  8 17:22:36 vm-1 lrmd: [3008]: info: operation monitor[76] on
> > p_IP:1 for client 3011: pid 12124 exited with return code 0
> > May  8 17:22:36 vm-1 crmd: [3011]: info: process_lrm_event: LRM
> > operation p_IP:1_monitor_10000 (call=76, rc=0, cib-update=233,
> > confirmed=false) ok
> > May  8 17:22:36 vm-1 crmd: [3011]: notice: run_graph: ==== Transition
> > 16 (Complete=4, Pending=0, Fired=0, Skipped=0, Incomplete=0,
> > Source=/var/lib/pengine/pe-input-2180.bz2): Complete
> > May  8 17:22:36 vm-1 crmd: [3011]: notice: do_state_transition: State
> > transition S_TRANSITION_ENGINE -> S_IDLE [ input=I_TE_SUCCESS
> > cause=C_FSA_INTERNAL origin=notify_crmd ]
> > May  8 17:22:36 vm-1 kernel: [  544.424174] ocfs2: End replay journal
> > (node 930785472, slot 0) on device (147,1)
> > May  8 17:22:36 vm-1 kernel: [  544.424468] ocfs2: Beginning quota
> > recovery on device (147,1) for slot 0
> > May  8 17:22:36 vm-1 kernel: [  544.424697] ocfs2: Finishing quota
> > recovery on device (147,1) for slot 0
> > May  8 17:22:46 vm-1 corosync[2971]:   [pcmk  ] notice:
> > pcmk_peer_update: Transitional membership event on ring 7420: memb=1,
> > new=0, lost=0
> > May  8 17:22:46 vm-1 corosync[2971]:   [pcmk  ] info:
> > pcmk_peer_update: memb: vm-1 2138745024
> > May  8 17:22:46 vm-1 corosync[2971]:   [pcmk  ] notice:
> > pcmk_peer_update: Stable membership event on ring 7420: memb=2, new=1,
> > lost=0
> > May  8 17:22:46 vm-1 corosync[2971]:   [pcmk  ] info: update_member:
> > Node 930785472/vm-2 is now: member
> > May  8 17:22:46 vm-1 corosync[2971]:   [pcmk  ] info:
> > pcmk_peer_update: NEW:  vm-2 930785472
> > May  8 17:22:46 vm-1 corosync[2971]:   [pcmk  ] info:
> > pcmk_peer_update: MEMB: vm-1 2138745024
> > May  8 17:22:46 vm-1 corosync[2971]:   [pcmk  ] info:
> > pcmk_peer_update: MEMB: vm-2 930785472
> > May  8 17:22:46 vm-1 corosync[2971]:   [pcmk  ] info:
> > send_member_notification: Sending membership update 7420 to 4 children
> > May  8 17:22:46 vm-1 cib: [3006]: notice: ais_dispatch_message:
> > Membership 7420: quorum acquired
> > May  8 17:22:46 vm-1 ocfs2_controld: [3654]: notice:
> > ais_dispatch_message: Membership 7420: quorum acquired
> > May  8 17:22:46 vm-1 cib: [3006]: info: crm_update_peer: Node vm-2:
> > id=930785472 state=member (new) addr=r(0) ip(192.168.122.183) votes=1
> > born=7412 seen=7420 proc=00000000000000000000000000111312
> > May  8 17:22:46 vm-1 ocfs2_controld: [3654]: info: crm_update_peer:
> > Node vm-2: id=930785472 state=member (new) addr=r(0)
> > ip(192.168.122.183)  votes=1 born=7412 seen=7420
> > proc=00000000000000000000000000111312
> > May  8 17:22:46 vm-1 crmd: [3011]: notice: ais_dispatch_message:
> > Membership 7420: quorum acquired
> > May  8 17:22:46 vm-1 cluster-dlm: [3605]: notice:
> > ais_dispatch_message: Membership 7420: quorum acquired
> > May  8 17:22:46 vm-1 corosync[2971]:   [TOTEM ] A processor joined or
> > left the membership and a new membership was formed.
> > May  8 17:22:46 vm-1 crmd: [3011]: notice: crmd_peer_update: Status
> > update: Client vm-2/crmd now has status [online] (DC=true)
> > May  8 17:22:46 vm-1 cluster-dlm: [3605]: info: crm_update_peer: Node
> > vm-2: id=930785472 state=member (new) addr=r(0) ip(192.168.122.183)
> > votes=1 born=7412 seen=7420 proc=00000000000000000000000000111312
> > May  8 17:22:46 vm-1 corosync[2971]:   [pcmk  ] info: update_member:
> > 0x1c6a520 Node 930785472 (vm-2) born on: 7420
> > May  8 17:22:46 vm-1 corosync[2971]:   [pcmk  ] info:
> > send_member_notification: Sending membership update 7420 to 4 children
> > May  8 17:22:46 vm-1 ocfs2_controld: [3654]: info:
> > ais_dispatch_message: Membership 7420: quorum retained
> > May  8 17:22:46 vm-1 crmd: [3011]: info: ais_status_callback: status:
> > vm-2 is now member (was lost)
> > May  8 17:22:46 vm-1 cib: [3006]: info: ais_dispatch_message:
> > Membership 7420: quorum retained
> > May  8 17:22:46 vm-1 cluster-dlm: [3605]: info: ais_dispatch_message:
> > Membership 7420: quorum retained
> > May  8 17:22:46 vm-1 crmd: [3011]: WARN: match_down_event: No match
> > for shutdown action on vm-2
> > May  8 17:22:46 vm-1 crmd: [3011]: info: erase_status_tag: Deleting
> > xpath: //node_state[@uname='vm-2']/lrm
> > May  8 17:22:46 vm-1 crmd: [3011]: info: erase_status_tag: Deleting
> > xpath: //node_state[@uname='vm-2']/transient_attributes
> > May  8 17:22:46 vm-1 crmd: [3011]: info: crm_update_peer: Node vm-2:
> > id=930785472 state=member (new) addr=r(0) ip(192.168.122.183) votes=1
> > born=7412 seen=7420 proc=00000000000000000000000000111312 (new)
> > May  8 17:22:46 vm-1 cib: [3006]: info: cib_process_request: Operation
> > complete: op cib_delete for section //node_state[@uname='vm-2']/lrm
> > (origin=local/crmd/235, version=0.593.620): ok (rc=0)
> > May  8 17:22:46 vm-1 cib: [3006]: info: cib_process_request: Operation
> > complete: op cib_delete for section
> > //node_state[@uname='vm-2']/transient_attributes
> > (origin=local/crmd/236, version=0.593.621): ok (rc=0)
> > May  8 17:22:46 vm-1 cib: [3006]: info: cib_process_request: Operation
> > complete: op cib_modify for section nodes (origin=local/crmd/237,
> > version=0.593.622): ok (rc=0)
> > May  8 17:22:46 vm-1 cib: [3006]: info: cib_process_request: Operation
> > complete: op cib_modify for section cib (origin=local/crmd/239,
> > version=0.593.624): ok (rc=0)
> > May  8 17:22:46 vm-1 cib: [3006]: info: cib_process_request: Operation
> > complete: op cib_sync_one for section 'all' (origin=vm-2/vm-2/(null),
> > version=0.593.624): ok (rc=0)
> > May  8 17:22:46 vm-1 crmd: [3011]: info: crmd_ais_dispatch: Setting
> > expected votes to 2
> > May  8 17:22:46 vm-1 crmd: [3011]: info: ais_dispatch_message:
> > Membership 7420: quorum retained
> > May  8 17:22:46 vm-1 cib: [3006]: info: cib_process_request: Operation
> > complete: op cib_modify for section crm_config (origin=local/crmd/241,
> > version=0.593.625): ok (rc=0)
> > May  8 17:22:46 vm-1 cib: [3006]: info: cib_process_request: Operation
> > complete: op cib_modify for section nodes (origin=local/crmd/242,
> > version=0.593.626): ok (rc=0)
> > May  8 17:22:46 vm-1 crmd: [3011]: info: crmd_ais_dispatch: Setting
> > expected votes to 2
> > May  8 17:22:46 vm-1 crmd: [3011]: notice: do_state_transition: State
> > transition S_IDLE -> S_INTEGRATION [ input=I_NODE_JOIN
> > cause=C_FSA_INTERNAL origin=crmd_peer_update ]
> > May  8 17:22:46 vm-1 crmd: [3011]: info: abort_transition_graph:
> > do_te_invoke:169 - Triggered transition abort (complete=1) : Peer Halt
> > May  8 17:22:46 vm-1 crmd: [3011]: info: join_make_offer: Making join
> > offers based on membership 7420
> > May  8 17:22:46 vm-1 crmd: [3011]: info: do_dc_join_offer_all:
> > join-11: Waiting on 2 outstanding join acks
> > May  8 17:22:46 vm-1 cib: [3006]: info: cib_process_request: Operation
> > complete: op cib_modify for section crm_config (origin=local/crmd/245,
> > version=0.593.628): ok (rc=0)
> > May  8 17:22:46 vm-1 crmd: [3011]: info: update_dc: Set DC to vm-1
> > (3.0.6)
> > May  8 17:22:46 vm-1 corosync[2971]:   [CPG   ] chosen downlist:
> > sender r(0) ip(192.168.122.183) ; members(old:1 left:0)
> > May  8 17:22:46 vm-1 corosync[2971]:   [MAIN  ] Completed service
> > synchronization, ready to provide service.
> > May  8 17:22:47 vm-1 crmd: [3011]: info: do_dc_join_offer_all: A new
> > node joined the cluster
> > May  8 17:22:47 vm-1 crmd: [3011]: info: do_dc_join_offer_all:
> > join-12: Waiting on 2 outstanding join acks
> > May  8 17:22:47 vm-1 crmd: [3011]: info: update_dc: Set DC to vm-1
> > (3.0.6)
> > May  8 17:22:48 vm-1 crmd: [3011]: notice: do_state_transition: State
> > transition S_INTEGRATION -> S_FINALIZE_JOIN [ input=I_INTEGRATED
> > cause=C_FSA_INTERNAL origin=check_join_state ]
> > May  8 17:22:48 vm-1 crmd: [3011]: info: do_dc_join_finalize: join-12:
> > Syncing the CIB from vm-1 to the rest of the cluster
> > May  8 17:22:48 vm-1 cib: [3006]: info: cib_process_request: Operation
> > complete: op cib_sync for section 'all' (origin=local/crmd/248,
> > version=0.593.628): ok (rc=0)
> > May  8 17:22:48 vm-1 cib: [3006]: info: cib_process_request: Operation
> > complete: op cib_modify for section nodes (origin=local/crmd/249,
> > version=0.593.629): ok (rc=0)
> > May  8 17:22:48 vm-1 cib: [3006]: info: cib_process_request: Operation
> > complete: op cib_modify for section nodes (origin=local/crmd/250,
> > version=0.593.630): ok (rc=0)
> > May  8 17:22:48 vm-1 cib: [3006]: info: cib_process_request: Operation
> > complete: op cib_delete for section
> > //node_state[@uname='vm-2']/transient_attributes (origin=vm-2/crmd/6,
> > version=0.593.631): ok (rc=0)
> > May  8 17:22:48 vm-1 crmd: [3011]: info: do_dc_join_ack: join-12:
> > Updating node state to member for vm-2
> > May  8 17:22:48 vm-1 crmd: [3011]: info: erase_status_tag: Deleting
> > xpath: //node_state[@uname='vm-2']/lrm
> > May  8 17:22:48 vm-1 cib: [3006]: info: cib_process_request: Operation
> > complete: op cib_delete for section //node_state[@uname='vm-2']/lrm
> > (origin=local/crmd/251, version=0.593.632): ok (rc=0)
> > May  8 17:22:48 vm-1 crmd: [3011]: info: do_dc_join_ack: join-12:
> > Updating node state to member for vm-1
> > May  8 17:22:48 vm-1 crmd: [3011]: info: erase_status_tag: Deleting
> > xpath: //node_state[@uname='vm-1']/lrm
> > May  8 17:22:48 vm-1 crmd: [3011]: notice: do_state_transition: State
> > transition S_FINALIZE_JOIN -> S_POLICY_ENGINE [ input=I_FINALIZED
> > cause=C_FSA_INTERNAL origin=check_join_state ]
> > May  8 17:22:48 vm-1 crmd: [3011]: info: abort_transition_graph:
> > do_te_invoke:162 - Triggered transition abort (complete=1) : Peer
> > Cancelled
> > May  8 17:22:48 vm-1 attrd: [3009]: notice: attrd_local_callback:
> > Sending full refresh (origin=crmd)
> > May  8 17:22:48 vm-1 attrd: [3009]: notice: attrd_trigger_update:
> > Sending flush op to all hosts for: master-p_drbd_wwwdata:0 (10000)
> > May  8 17:22:48 vm-1 cib: [3006]: info: cib_process_request: Operation
> > complete: op cib_delete for section //node_state[@uname='vm-1']/lrm
> > (origin=local/crmd/253, version=0.593.634): ok (rc=0)
> > May  8 17:22:48 vm-1 crmd: [3011]: info: abort_transition_graph:
> > te_update_diff:320 - Triggered transition abort (complete=1,
> > tag=lrm_rsc_op, id=p_drbd_wwwdata:0_last_0,
> > magic=0:0;19:4:0:9ba781e6-a6db-4310-bddd-8740dea7b7d9, cib=0.593.634)
> > : Resource op removal
> > May  8 17:22:48 vm-1 crmd: [3011]: info: abort_transition_graph:
> > te_update_diff:276 - Triggered transition abort (complete=1, tag=diff,
> > id=(null), magic=NA, cib=0.593.635) : LRM Refresh
> > May  8 17:22:48 vm-1 cib: [3006]: info: cib_process_request: Operation
> > complete: op cib_modify for section nodes (origin=local/crmd/255,
> > version=0.593.636): ok (rc=0)
> > May  8 17:22:48 vm-1 cib: [3006]: info: cib_process_request: Operation
> > complete: op cib_modify for section cib (origin=local/crmd/257,
> > version=0.593.638): ok (rc=0)
> > May  8 17:22:48 vm-1 attrd: [3009]: notice: attrd_trigger_update:
> > Sending flush op to all hosts for: probe_complete (true)
> > May  8 17:22:48 vm-1 pengine: [3010]: notice: unpack_config: On loss
> > of CCM Quorum: Ignore
> > May  8 17:22:48 vm-1 pengine: [3010]: notice: LogActions: Start
> > p_fence_N1#011(vm-2)
> > May  8 17:22:48 vm-1 pengine: [3010]: notice: LogActions: Start
> > p_drbd_wwwdata:1#011(vm-2)
> > May  8 17:22:48 vm-1 pengine: [3010]: notice: LogActions: Start
> > p_controld_dlm:0#011(vm-2 - blocked)
> > May  8 17:22:48 vm-1 pengine: [3010]: notice: LogActions: Start
> > p_o2cb:0#011(vm-2 - blocked)
> > May  8 17:22:48 vm-1 pengine: [3010]: notice: LogActions: Start
> > WebFS:0#011(vm-2 - blocked)
> > May  8 17:22:48 vm-1 pengine: [3010]: notice: LogActions: Restart
> > p_apache:0#011(Started vm-1)

The cluster wants to restart apache on vm-1 after everything else starts on vm-2, most likely because cl_Website is not interleaved (see the interleaving link above).

> > May  8 17:22:48 vm-1 pengine: [3010]: notice: LogActions: Start
> > p_apache:1#011(vm-2)
> > May  8 17:22:48 vm-1 pengine: [3010]: notice: LogActions: Move
> > p_IP:1#011(Started vm-1 -> vm-2)

The cluster also wants to move one instance of the IP back to vm-2, which follows from cl_IP's resource-stickiness of 0.

> > May  8 17:22:48 vm-1 crmd: [3011]: notice: do_state_transition: State
> > transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS
> > cause=C_IPC_MESSAGE origin=handle_response ]
> > May  8 17:22:48 vm-1 crmd: [3011]: info: do_te_invoke: Processing
> > graph 17 (ref=pe_calc-dc-1431098568-248) derived from
> > /var/lib/pengine/pe-input-2181.bz2
> > May  8 17:22:48 vm-1 crmd: [3011]: info: te_rsc_command: Initiating
> > action 12: monitor p_fence_N1_monitor_0 on vm-2
> > May  8 17:22:48 vm-1 crmd: [3011]: info: te_rsc_command: Initiating
> > action 13: monitor p_fence_N2_monitor_0 on vm-2
> > May  8 17:22:48 vm-1 crmd: [3011]: info: te_rsc_command: Initiating
> > action 14: monitor p_drbd_wwwdata:1_monitor_0 on vm-2
> > May  8 17:22:48 vm-1 crmd: [3011]: info: te_rsc_command: Initiating
> > action 15: monitor p_controld_dlm:0_monitor_0 on vm-2
> > May  8 17:22:48 vm-1 crmd: [3011]: info: te_rsc_command: Initiating
> > action 16: monitor p_o2cb:0_monitor_0 on vm-2
> > May  8 17:22:48 vm-1 crmd: [3011]: info: te_rsc_command: Initiating
> > action 17: monitor WebFS:0_monitor_0 on vm-2
> > May  8 17:22:48 vm-1 crmd: [3011]: info: te_rsc_command: Initiating
> > action 18: monitor p_apache:1_monitor_0 on vm-2
> > May  8 17:22:48 vm-1 crmd: [3011]: info: te_rsc_command: Initiating
> > action 19: monitor p_IP:0_monitor_0 on vm-2
> > May  8 17:22:48 vm-1 crmd: [3011]: info: te_rsc_command: Initiating
> > action 20: monitor p_IP:1_monitor_0 on vm-2
> > May  8 17:22:48 vm-1 crmd: [3011]: info: te_rsc_command: Initiating
> > action 101: notify p_drbd_wwwdata:0_pre_notify_start_0 on vm-1 (local)
> > May  8 17:22:48 vm-1 lrmd: [3008]: info: rsc:p_drbd_wwwdata:0
> > notify[77] (pid 12386)
> > May  8 17:22:48 vm-1 pengine: [3010]: notice: process_pe_message:
> > Transition 17: PEngine Input stored in:
> > /var/lib/pengine/pe-input-2181.bz2
> > May  8 17:22:48 vm-1 lrmd: [3008]: info: RA output:
> > (p_drbd_wwwdata:0:notify:stdout)
> > May  8 17:22:48 vm-1 lrmd: [3008]: info: operation notify[77] on
> > p_drbd_wwwdata:0 for client 3011: pid 12386 exited with return code 0
> > May  8 17:22:48 vm-1 crmd: [3011]: info: process_lrm_event: LRM
> > operation p_drbd_wwwdata:0_notify_0 (call=77, rc=0, cib-update=0,
> > confirmed=true) ok
> > May  8 17:22:49 vm-1 crmd: [3011]: info: te_rsc_command: Initiating
> > action 11: probe_complete probe_complete on vm-2 - no waiting
> > May  8 17:22:49 vm-1 crmd: [3011]: info: te_rsc_command: Initiating
> > action 21: start p_fence_N1_start_0 on vm-2
> > May  8 17:22:49 vm-1 crmd: [3011]: info: te_rsc_command: Initiating
> > action 29: start p_drbd_wwwdata:1_start_0 on vm-2
> > May  8 17:22:49 vm-1 crmd: [3011]: info: te_rsc_command: Initiating
> > action 87: stop p_IP:0_stop_0 on vm-1 (local)
> > May  8 17:22:49 vm-1 lrmd: [3008]: info: cancel_op: operation
> > monitor[73] on p_IP:0 for client 3011, its parameters:
> > resource-stickiness=[0] CRM_meta_timeout=[240000]
> > CRM_meta_name=[monitor] crm_feature_set=[3.0.6]
> > CRM_meta_notify=[false] CRM_meta_clone_node_max=[2] CRM_meta_clone=[0]
> > clusterip_hash=[sourceip-sourceport] CRM_meta_clone_max=[2]
> > CRM_meta_interval=[10000] ip=[192.168.122.200] nic=[eth0]
> > CRM_meta_globally_unique=[true] cancelled
> > May  8 17:22:49 vm-1 lrmd: [3008]: info: rsc:p_IP:0 stop[78] (pid 12416)
> > May  8 17:22:49 vm-1 crmd: [3011]: info: te_rsc_command: Initiating
> > action 89: stop p_IP:1_stop_0 on vm-1 (local)
> > May  8 17:22:49 vm-1 lrmd: [3008]: info: cancel_op: operation
> > monitor[76] on p_IP:1 for client 3011, its parameters:
> > resource-stickiness=[0] CRM_meta_timeout=[240000]
> > CRM_meta_name=[monitor] crm_feature_set=[3.0.6]
> > CRM_meta_notify=[false] CRM_meta_clone_node_max=[2] CRM_meta_clone=[1]
> > clusterip_hash=[sourceip-sourceport] CRM_meta_clone_max=[2]
> > CRM_meta_interval=[10000] ip=[192.168.122.200] nic=[eth0]
> > CRM_meta_globally_unique=[true] cancelled
> > May  8 17:22:49 vm-1 lrmd: [3008]: info: rsc:p_IP:1 stop[79] (pid 12417)
> > May  8 17:22:49 vm-1 crmd: [3011]: info: process_lrm_event: LRM
> > operation p_IP:0_monitor_10000 (call=73, status=1, cib-update=0,
> > confirmed=true) Cancelled
> > May  8 17:22:49 vm-1 crmd: [3011]: info: process_lrm_event: LRM
> > operation p_IP:1_monitor_10000 (call=76, status=1, cib-update=0,
> > confirmed=true) Cancelled
> > May  8 17:22:49 vm-1 crmd: [3011]: info: te_rsc_command: Initiating
> > action 102: notify p_drbd_wwwdata:0_post_notify_start_0 on vm-1 (local)
> > May  8 17:22:49 vm-1 lrmd: [3008]: info: rsc:p_drbd_wwwdata:0
> > notify[80] (pid 12542)
> > May  8 17:22:49 vm-1 crmd: [3011]: info: te_rsc_command: Initiating
> > action 103: notify p_drbd_wwwdata:1_post_notify_start_0 on vm-2
> > May  8 17:22:49 vm-1 lrmd: [3008]: info: RA output:
> > (p_drbd_wwwdata:0:notify:stdout)
> > May  8 17:22:49 vm-1 lrmd: [3008]: info: operation notify[80] on
> > p_drbd_wwwdata:0 for client 3011: pid 12542 exited with return code 0
> > May  8 17:22:49 vm-1 crmd: [3011]: info: process_lrm_event: LRM
> > operation p_drbd_wwwdata:0_notify_0 (call=80, rc=0, cib-update=0,
> > confirmed=true) ok
> > May  8 17:22:49 vm-1 crmd: [3011]: info: te_rsc_command: Initiating
> > action 30: monitor p_drbd_wwwdata:1_monitor_60000 on vm-2
> > May  8 17:22:49 vm-1 IPaddr2[12416]: INFO: Sleeping until
> > /var/run/resource-agents/IPaddr2-CIP-192.168.122.200 is released...
> > May  8 17:22:49 vm-1 IPaddr2[12417]: INFO: IP status = ok, IP_CIP=yes
> > May  8 17:22:49 vm-1 lrmd: [3008]: info: operation stop[79] on p_IP:1
> > for client 3011: pid 12417 exited with return code 0
> > May  8 17:22:49 vm-1 crmd: [3011]: info: process_lrm_event: LRM
> > operation p_IP:1_stop_0 (call=79, rc=0, cib-update=261,
> > confirmed=true) ok
> > May  8 17:22:49 vm-1 IPaddr2[12416]: INFO: IP status = ok, IP_CIP=yes
> > May  8 17:22:49 vm-1 IPaddr2[12416]: INFO: 192.168.122.200,
> > sourceip-sourceport
> > May  8 17:22:49 vm-1 IPaddr2[12416]: INFO: 1
> > May  8 17:22:49 vm-1 IPaddr2[12416]: INFO: 2
> > May  8 17:22:49 vm-1 lrmd: [3008]: info: RA output:
> > (p_IP:0:stop:stderr) iptables: No chain/target/match by that name.
> > May  8 17:22:49 vm-1 lrmd: [3008]: info: operation stop[78] on p_IP:0
> > for client 3011: pid 12416 exited with return code 0
> > May  8 17:22:49 vm-1 crmd: [3011]: info: process_lrm_event: LRM
> > operation p_IP:0_stop_0 (call=78, rc=0, cib-update=262,
> > confirmed=true) ok
> > May  8 17:22:49 vm-1 crmd: [3011]: info: te_rsc_command: Initiating
> > action 79: stop p_apache:0_stop_0 on vm-1 (local)
> > May  8 17:22:49 vm-1 lrmd: [3008]: info: cancel_op: operation
> > monitor[71] on p_apache:0 for client 3011, its parameters:
> > CRM_meta_start_delay=[0] CRM_meta_timeout=[20000]
> > CRM_meta_name=[monitor] crm_feature_set=[3.0.6]
> > CRM_meta_clone_node_max=[1] configfile=[/etc/apache2/apache2.conf]
> > CRM_meta_clone=[0] CRM_meta_interval=[60000] CRM_meta_clone_max=[2]
> > CRM_meta_notify=[false] statusurl=[http://localhost/server-status]
> > CRM_meta_globally_unique=[false]  cancelled
> > May  8 17:22:49 vm-1 lrmd: [3008]: info: rsc:p_apache:0 stop[81] (pid
> > 12624)
> > May  8 17:22:49 vm-1 crmd: [3011]: info: process_lrm_event: LRM
> > operation p_apache:0_monitor_60000 (call=71, status=1, cib-update=0,
> > confirmed=true) Cancelled
> > May  8 17:22:50 vm-1 kernel: [  558.096196] block drbd1: Handshake
> > successful: Agreed network protocol version 96
> > May  8 17:22:50 vm-1 kernel: [  558.096204] block drbd1: conn(
> > WFConnection -> WFReportParams )
> > May  8 17:22:50 vm-1 kernel: [  558.096220] block drbd1: Starting
> > asender thread (from drbd1_receiver [3403])
> > May  8 17:22:50 vm-1 kernel: [  558.097865] block drbd1:
> > data-integrity-alg: <not-used>
> > May  8 17:22:50 vm-1 kernel: [  558.097901] block drbd1:
> > drbd_sync_handshake:
> > May  8 17:22:50 vm-1 kernel: [  558.097913] block drbd1: self
> > 1D8582A875D72EB5:098FF992CBAA32F1:248E28A17C6EBF3D:248D28A17C6EBF3D
> > bits:4 flags:0
> > May  8 17:22:50 vm-1 kernel: [  558.097925] block drbd1: peer
> > 098FF992CBAA32F0:0000000000000000:248E28A17C6EBF3C:248D28A17C6EBF3D
> > bits:3072 flags:2
> > May  8 17:22:50 vm-1 kernel: [  558.097935] block drbd1:
> > uuid_compare()=1 by rule 70
> > May  8 17:22:50 vm-1 kernel: [  558.097951] block drbd1: peer( Unknown
> > -> Secondary ) conn( WFReportParams -> WFBitMapS ) pdsk( DUnknown ->
> > Consistent )
> > May  8 17:22:50 vm-1 kernel: [  558.102631] block drbd1: helper
> > command: /sbin/drbdadm before-resync-source minor-1
> > May  8 17:22:50 vm-1 kernel: [  558.107486] block drbd1: helper
> > command: /sbin/drbdadm before-resync-source minor-1 exit code 0 (0x0)
> > May  8 17:22:50 vm-1 kernel: [  558.107503] block drbd1: conn(
> > WFBitMapS -> SyncSource ) pdsk( Consistent -> Inconsistent )
> > May  8 17:22:50 vm-1 kernel: [  558.107519] block drbd1: Began resync
> > as SyncSource (will sync 12288 KB [3072 bits set]).
> > May  8 17:22:50 vm-1 kernel: [  558.107587] block drbd1: updated sync
> > UUID 1D8582A875D72EB5:0990F992CBAA32F1:098FF992CBAA32F1:248E28A17C6EBF3D
> > May  8 17:22:50 vm-1 crmd: [3011]: info: te_rsc_command: Initiating
> > action 22: monitor p_fence_N1_monitor_60000 on vm-2
> > May  8 17:22:50 vm-1 kernel: [  558.639016] block drbd1: Resync done
> > (total 1 sec; paused 0 sec; 12288 K/sec)
> > May  8 17:22:50 vm-1 kernel: [  558.639025] block drbd1: updated UUIDs
> > 1D8582A875D72EB5:0000000000000000:0990F992CBAA32F1:098FF992CBAA32F1
> > May  8 17:22:50 vm-1 kernel: [  558.639032] block drbd1: conn(
> > SyncSource -> Connected ) pdsk( Inconsistent -> UpToDate )
> > May  8 17:22:50 vm-1 kernel: [  558.649075] block drbd1: bitmap WRITE
> > of 6 pages took 0 jiffies
> > May  8 17:22:50 vm-1 kernel: [  558.650039] block drbd1: 0 KB (0 bits)
> > marked out-of-sync by on disk bit-map.
> > May  8 17:22:50 vm-1 lrmd: [3008]: info: RA output:
> > (p_apache:0:stop:stderr) /usr/lib/ocf/resource.d//heartbeat/apache:
> > 440: kill:
> > May  8 17:22:50 vm-1 lrmd: [3008]: info: RA output:
> > (p_apache:0:stop:stderr) No such process
> > May  8 17:22:50 vm-1 lrmd: [3008]: info: RA output:
> > (p_apache:0:stop:stderr)
> > May  8 17:22:50 vm-1 apache[12624]: INFO: Killing apache PID 11507
> > May  8 17:22:50 vm-1 apache[12624]: INFO: apache stopped.
> > May  8 17:22:50 vm-1 lrmd: [3008]: info: operation stop[81] on
> > p_apache:0 for client 3011: pid 12624 exited with return code 0
> > May  8 17:22:50 vm-1 crmd: [3011]: info: process_lrm_event: LRM
> > operation p_apache:0_stop_0 (call=81, rc=0, cib-update=263,
> > confirmed=true) ok
> > May  8 17:22:50 vm-1 crmd: [3011]: info: te_rsc_command: Initiating
> > action 80: start p_apache:0_start_0 on vm-1 (local)
> > May  8 17:22:50 vm-1 lrmd: [3008]: info: rsc:p_apache:0 start[82] (pid
> > 12670)
> > May  8 17:22:50 vm-1 crmd: [3011]: info: te_rsc_command: Initiating
> > action 81: start p_apache:1_start_0 on vm-2
> > May  8 17:22:51 vm-1 apache[12670]: INFO: apache not running
> > May  8 17:22:51 vm-1 apache[12670]: INFO: waiting for apache
> > /etc/apache2/apache2.conf to come up
> > May  8 17:22:51 vm-1 crmd: [3011]: info: abort_transition_graph:
> > te_update_diff:176 - Triggered transition abort (complete=0,
> > tag=nvpair, id=status-vm-2-master-p_drbd_wwwdata.1,
> > name=master-p_drbd_wwwdata:1, value=1000, magic=NA, cib=0.593.657) :
> > Transient attribute: update
> > May  8 17:22:52 vm-1 lrmd: [3008]: info: operation start[82] on
> > p_apache:0 for client 3011: pid 12670 exited with return code 0
> > May  8 17:22:52 vm-1 crmd: [3011]: info: process_lrm_event: LRM
> > operation p_apache:0_start_0 (call=82, rc=0, cib-update=264,
> > confirmed=true) ok
> > May  8 17:22:52 vm-1 crmd: [3011]: notice: run_graph: ==== Transition
> > 17 (Complete=42, Pending=0, Fired=0, Skipped=3, Incomplete=1,
> > Source=/var/lib/pengine/pe-input-2181.bz2): Stopped
> > May  8 17:22:52 vm-1 crmd: [3011]: notice: do_state_transition: State
> > transition S_TRANSITION_ENGINE -> S_POLICY_ENGINE [ input=I_PE_CALC
> > cause=C_FSA_INTERNAL origin=notify_crmd ]
> > May  8 17:22:52 vm-1 pengine: [3010]: notice: unpack_config: On loss
> > of CCM Quorum: Ignore
> > May  8 17:22:52 vm-1 pengine: [3010]: notice: LogActions: Promote
> > p_drbd_wwwdata:1#011(Slave -> Master vm-2)
> > May  8 17:22:52 vm-1 pengine: [3010]: notice: LogActions: Start
> > p_controld_dlm:0#011(vm-2)
> > May  8 17:22:52 vm-1 pengine: [3010]: notice: LogActions: Start
> > p_o2cb:0#011(vm-2)
> > May  8 17:22:52 vm-1 pengine: [3010]: notice: LogActions: Start
> > WebFS:0#011(vm-2)
> > May  8 17:22:52 vm-1 pengine: [3010]: notice: LogActions: Restart
> > p_apache:0#011(Started vm-1)
> > May  8 17:22:52 vm-1 pengine: [3010]: notice: LogActions: Restart
> > p_apache:1#011(Started vm-2)
> > May  8 17:22:52 vm-1 pengine: [3010]: notice: LogActions: Start
> > p_IP:0#011(vm-1)
> > May  8 17:22:52 vm-1 pengine: [3010]: notice: LogActions: Start
> > p_IP:1#011(vm-2)

Above, the DRBD resource on vm-2 is promoted from slave to master, and the o_ocfs2 ordering constraint then dictates how the remaining resources are started on vm-2. Note that not everything restarts on vm-1: only apache (because cl_Website is not interleaved) and the IP (because it is ordered after apache) do.
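
If you want to see exactly which constraints force those restarts, you can
replay the policy engine inputs referenced in your log (paths taken from
the messages above; -s shows the allocation scores), for example:

    crm_simulate -s -x /var/lib/pengine/pe-input-2181.bz2
    crm_simulate -s -x /var/lib/pengine/pe-input-2182.bz2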

> > May  8 17:22:52 vm-1 crmd: [3011]: notice: do_state_transition: State
> > transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS
> > cause=C_IPC_MESSAGE origin=handle_response ]
> > May  8 17:22:52 vm-1 crmd: [3011]: info: do_te_invoke: Processing
> > graph 18 (ref=pe_calc-dc-1431098572-273) derived from
> > /var/lib/pengine/pe-input-2182.bz2
> > May  8 17:22:52 vm-1 crmd: [3011]: info: te_rsc_command: Initiating
> > action 6: cancel p_drbd_wwwdata:1_monitor_60000 on vm-2
> > May  8 17:22:52 vm-1 crmd: [3011]: info: te_rsc_command: Initiating
> > action 100: notify p_drbd_wwwdata:0_pre_notify_promote_0 on vm-1 (local)
> > May  8 17:22:52 vm-1 lrmd: [3008]: info: rsc:p_drbd_wwwdata:0
> > notify[83] (pid 12776)
> > May  8 17:22:52 vm-1 crmd: [3011]: info: te_rsc_command: Initiating
> > action 102: notify p_drbd_wwwdata:1_pre_notify_promote_0 on vm-2
> > May  8 17:22:52 vm-1 crmd: [3011]: info: te_rsc_command: Initiating
> > action 71: stop p_apache:0_stop_0 on vm-1 (local)
> > May  8 17:22:52 vm-1 lrmd: [3008]: info: rsc:p_apache:0 stop[84] (pid
> > 12777)
> > May  8 17:22:52 vm-1 crmd: [3011]: info: te_rsc_command: Initiating
> > action 74: stop p_apache:1_stop_0 on vm-2
> > May  8 17:22:52 vm-1 pengine: [3010]: notice: process_pe_message:
> > Transition 18: PEngine Input stored in:
> > /var/lib/pengine/pe-input-2182.bz2
> > May  8 17:22:52 vm-1 lrmd: [3008]: info: operation notify[83] on
> > p_drbd_wwwdata:0 for client 3011: pid 12776 exited with return code 0
> > May  8 17:22:52 vm-1 crmd: [3011]: info: process_lrm_event: LRM
> > operation p_drbd_wwwdata:0_notify_0 (call=83, rc=0, cib-update=0,
> > confirmed=true) ok
> > May  8 17:22:52 vm-1 crmd: [3011]: info: te_rsc_command: Initiating
> > action 22: promote p_drbd_wwwdata:1_promote_0 on vm-2
> > May  8 17:22:52 vm-1 kernel: [  560.076628] block drbd1: peer(
> > Secondary -> Primary )
> > May  8 17:22:52 vm-1 crmd: [3011]: info: te_rsc_command: Initiating
> > action 101: notify p_drbd_wwwdata:0_post_notify_promote_0 on vm-1 (local)
> > May  8 17:22:52 vm-1 lrmd: [3008]: info: rsc:p_drbd_wwwdata:0
> > notify[85] (pid 12831)
> > May  8 17:22:52 vm-1 crmd: [3011]: info: te_rsc_command: Initiating
> > action 103: notify p_drbd_wwwdata:1_post_notify_promote_0 on vm-2
> > May  8 17:22:52 vm-1 lrmd: [3008]: info: RA output:
> > (p_drbd_wwwdata:0:notify:stdout)
> > May  8 17:22:52 vm-1 lrmd: [3008]: info: operation notify[85] on
> > p_drbd_wwwdata:0 for client 3011: pid 12831 exited with return code 0
> > May  8 17:22:52 vm-1 crmd: [3011]: info: process_lrm_event: LRM
> > operation p_drbd_wwwdata:0_notify_0 (call=85, rc=0, cib-update=0,
> > confirmed=true) ok
> > May  8 17:22:52 vm-1 crmd: [3011]: info: abort_transition_graph:
> > te_update_diff:176 - Triggered transition abort (complete=0,
> > tag=nvpair, id=status-vm-2-master-p_drbd_wwwdata.1,
> > name=master-p_drbd_wwwdata:1, value=10000, magic=NA, cib=0.593.665) :
> > Transient attribute: update
> > May  8 17:22:53 vm-1 lrmd: [3008]: info: RA output:
> > (p_apache:0:stop:stderr) /usr/lib/ocf/resource.d//heartbeat/apache:
> > 440: kill:
> > May  8 17:22:53 vm-1 lrmd: [3008]: info: RA output:
> > (p_apache:0:stop:stderr) No such process
> > May  8 17:22:53 vm-1 lrmd: [3008]: info: RA output:
> > (p_apache:0:stop:stderr)
> > May  8 17:22:53 vm-1 apache[12777]: INFO: Killing apache PID 12701
> > May  8 17:22:53 vm-1 apache[12777]: INFO: apache stopped.
> > May  8 17:22:53 vm-1 lrmd: [3008]: info: operation stop[84] on
> > p_apache:0 for client 3011: pid 12777 exited with return code 0
> > May  8 17:22:53 vm-1 crmd: [3011]: info: process_lrm_event: LRM
> > operation p_apache:0_stop_0 (call=84, rc=0, cib-update=266,
> > confirmed=true) ok
> > May  8 17:22:53 vm-1 crmd: [3011]: notice: run_graph: ==== Transition
> > 18 (Complete=16, Pending=0, Fired=0, Skipped=20, Incomplete=5,
> > Source=/var/lib/pengine/pe-input-2182.bz2): Stopped
> > May  8 17:22:53 vm-1 crmd: [3011]: notice: do_state_transition: State
> > transition S_TRANSITION_ENGINE -> S_POLICY_ENGINE [ input=I_PE_CALC
> > cause=C_FSA_INTERNAL origin=notify_crmd ]
> > May  8 17:22:53 vm-1 pengine: [3010]: notice: unpack_config: On loss
> > of CCM Quorum: Ignore
> > May  8 17:22:53 vm-1 pengine: [3010]: notice: LogActions: Start
> > p_controld_dlm:0#011(vm-2)
> > May  8 17:22:53 vm-1 pengine: [3010]: notice: LogActions: Start
> > p_o2cb:0#011(vm-2)
> > May  8 17:22:53 vm-1 pengine: [3010]: notice: LogActions: Start
> > WebFS:0#011(vm-2)
> > May  8 17:22:53 vm-1 pengine: [3010]: notice: LogActions: Start
> > p_apache:0#011(vm-1)
> > May  8 17:22:53 vm-1 pengine: [3010]: notice: LogActions: Start
> > p_apache:1#011(vm-2)
> > May  8 17:22:53 vm-1 pengine: [3010]: notice: LogActions: Start
> > p_IP:0#011(vm-1)
> > May  8 17:22:53 vm-1 pengine: [3010]: notice: LogActions: Start
> > p_IP:1#011(vm-2)
> > May  8 17:22:53 vm-1 crmd: [3011]: notice: do_state_transition: State
> > transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS
> > cause=C_IPC_MESSAGE origin=handle_response ]
> > May  8 17:22:53 vm-1 crmd: [3011]: info: do_te_invoke: Processing
> > graph 19 (ref=pe_calc-dc-1431098573-284) derived from
> > /var/lib/pengine/pe-input-2183.bz2
> > May  8 17:22:53 vm-1 crmd: [3011]: info: te_rsc_command: Initiating
> > action 6: cancel p_drbd_wwwdata:1_monitor_60000 on vm-2
> > May  8 17:22:53 vm-1 crmd: [3011]: info: te_rsc_command: Initiating
> > action 47: start p_controld_dlm:0_start_0 on vm-2
> > May  8 17:22:53 vm-1 cib: [3006]: info: cib_process_request: Operation
> > complete: op cib_delete for section
> > //node_state[@uname='vm-2']//lrm_resource[@id='p_drbd_wwwdata:1']/lrm_rsc_op[@id='p_drbd_wwwdata:1_monitor_60000']
> > (origin=vm-2/crmd/24, version=0.593.668): ok (rc=0)
> > May  8 17:22:53 vm-1 pengine: [3010]: notice: process_pe_message:
> > Transition 19: PEngine Input stored in:
> > /var/lib/pengine/pe-input-2183.bz2
> > May  8 17:22:53 vm-1 stonith-ng: [3007]: info: stonith_command:
> > Processed st_execute from lrmd: rc=-1
> > May  8 17:22:53 vm-1 external/libvirt[12878]: [12893]: notice:
> > qemu+tcp://192.168.122.1/system: Running hypervisor: QEMU 2.0.0
> > May  8 17:22:54 vm-1 crmd: [3011]: info: te_rsc_command: Initiating
> > action 48: monitor p_controld_dlm:0_monitor_120000 on vm-2
> > May  8 17:22:54 vm-1 crmd: [3011]: info: te_rsc_command: Initiating
> > action 55: start p_o2cb:0_start_0 on vm-2
> > May  8 17:22:54 vm-1 stonith: [12871]: info: external/libvirt device OK.
> > May  8 17:22:54 vm-1 stonith-ng: [3007]: info: log_operation:
> > p_fence_N2: Performing: stonith -t external/libvirt -S
> > May  8 17:22:54 vm-1 stonith-ng: [3007]: info: log_operation:
> > p_fence_N2: success:  0
> > May  8 17:22:56 vm-1 crmd: [3011]: info: te_rsc_command: Initiating
> > action 56: monitor p_o2cb:0_monitor_10000 on vm-2
> > May  8 17:22:56 vm-1 crmd: [3011]: info: te_rsc_command: Initiating
> > action 63: start WebFS:0_start_0 on vm-2
> > May  8 17:22:56 vm-1 kernel: [  564.278505] dlm: got connection from
> > 930785472
> > May  8 17:22:56 vm-1 crmd: [3011]: info: te_rsc_command: Initiating
> > action 64: monitor WebFS:0_monitor_10000 on vm-2
> > May  8 17:22:56 vm-1 crmd: [3011]: info: te_rsc_command: Initiating
> > action 71: start p_apache:0_start_0 on vm-1 (local)
> > May  8 17:22:56 vm-1 lrmd: [3008]: info: rsc:p_apache:0 start[86] (pid
> > 12894)
> > May  8 17:22:56 vm-1 crmd: [3011]: info: te_rsc_command: Initiating
> > action 73: start p_apache:1_start_0 on vm-2
> > May  8 17:22:56 vm-1 apache[12894]: INFO: apache not running
> > May  8 17:22:56 vm-1 apache[12894]: INFO: waiting for apache
> > /etc/apache2/apache2.conf to come up
> > May  8 17:22:57 vm-1 crmd: [3011]: info: te_rsc_command: Initiating
> > action 74: monitor p_apache:1_monitor_60000 on vm-2
> > May  8 17:22:57 vm-1 lrmd: [3008]: info: operation start[86] on
> > p_apache:0 for client 3011: pid 12894 exited with return code 0
> > May  8 17:22:57 vm-1 crmd: [3011]: info: process_lrm_event: LRM
> > operation p_apache:0_start_0 (call=86, rc=0, cib-update=268,
> > confirmed=true) ok
> > May  8 17:22:57 vm-1 crmd: [3011]: info: te_rsc_command: Initiating
> > action 72: monitor p_apache:0_monitor_60000 on vm-1 (local)
> > May  8 17:22:57 vm-1 lrmd: [3008]: info: rsc:p_apache:0 monitor[87]
> > (pid 13000)
> > May  8 17:22:57 vm-1 crmd: [3011]: info: te_rsc_command: Initiating
> > action 79: start p_IP:0_start_0 on vm-1 (local)
> > May  8 17:22:57 vm-1 lrmd: [3008]: info: rsc:p_IP:0 start[88] (pid 13001)
> > May  8 17:22:57 vm-1 crmd: [3011]: info: te_rsc_command: Initiating
> > action 81: start p_IP:1_start_0 on vm-2
> > May  8 17:22:57 vm-1 lrmd: [3008]: info: operation monitor[87] on
> > p_apache:0 for client 3011: pid 13000 exited with return code 0
> > May  8 17:22:57 vm-1 crmd: [3011]: info: process_lrm_event: LRM
> > operation p_apache:0_monitor_60000 (call=87, rc=0, cib-update=269,
> > confirmed=false) ok
> > May  8 17:22:57 vm-1 IPaddr2[13001]: INFO: ip -f inet addr add
> > 192.168.122.200/32 brd 192.168.122.200 dev eth0
> > May  8 17:22:57 vm-1 IPaddr2[13001]: INFO: ip link set eth0 up
> > May  8 17:22:57 vm-1 IPaddr2[13001]: INFO: /usr/lib/heartbeat/send_arp
> > -i 200 -r 5 -p /var/run/resource-agents/send_arp-192.168.122.200 eth0
> > 192.168.122.200 8bbd5f0b558f not_used not_used
> > May  8 17:22:57 vm-1 lrmd: [3008]: info: operation start[88] on p_IP:0
> > for client 3011: pid 13001 exited with return code 0
> > May  8 17:22:57 vm-1 crmd: [3011]: info: process_lrm_event: LRM
> > operation p_IP:0_start_0 (call=88, rc=0, cib-update=270,
> > confirmed=true) ok
> > May  8 17:22:57 vm-1 crmd: [3011]: info: te_rsc_command: Initiating
> > action 80: monitor p_IP:0_monitor_10000 on vm-1 (local)
> > May  8 17:22:57 vm-1 lrmd: [3008]: info: rsc:p_IP:0 monitor[89] (pid
> > 13103)
> > May  8 17:22:57 vm-1 lrmd: [3008]: info: operation monitor[89] on
> > p_IP:0 for client 3011: pid 13103 exited with return code 0
> > May  8 17:22:57 vm-1 crmd: [3011]: info: process_lrm_event: LRM
> > operation p_IP:0_monitor_10000 (call=89, rc=0, cib-update=271,
> > confirmed=false) ok
> > May  8 17:22:58 vm-1 crmd: [3011]: info: te_rsc_command: Initiating
> > action 82: monitor p_IP:1_monitor_10000 on vm-2
> > May  8 17:22:58 vm-1 crmd: [3011]: notice: run_graph: ==== Transition
> > 19 (Complete=25, Pending=0, Fired=0, Skipped=0, Incomplete=0,
> > Source=/var/lib/pengine/pe-input-2183.bz2): Complete
> > May  8 17:22:58 vm-1 crmd: [3011]: notice: do_state_transition: State
> > transition S_TRANSITION_ENGINE -> S_IDLE [ input=I_TE_SUCCESS
> > cause=C_FSA_INTERNAL origin=notify_crmd ]


