[Pacemaker] 2-node cluster doesn't move resources away from a failed node

David Guyot david.guyot at europecamions-interactive.com
Thu Jul 5 10:12:37 EDT 2012


Hello, everybody.

As the title suggests, I'm configuring a 2-node cluster, but I've got a
strange issue here: when I put a node in standby mode with "crm node
standby", its resources are correctly moved to the second node and stay
there even after the first one comes back online, which I assume is the
behaviour preferred by the designers of such systems, to avoid keeping
resources on a potentially unstable node. Nevertheless, when I simulate
a failure of the node running the resources with "/etc/init.d/corosync
stop", the other node correctly fences the failed node by electrically
resetting it, but that doesn't mean it starts the resources on itself;
rather, it waits for the failed node to come back online and then
re-negotiates resource placement, which inevitably leads to the failed
node restarting the resources. I suppose this is a consequence of the
resource stickiness still recorded by the intact node: because this node
still assumes that the resources are running on the failed node, it
assumes that the resources prefer to stay on the first node, even though
it has failed.
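
For context, by "resource stickiness" I mean the usual cluster-wide
default one sets through the crm shell, along the lines of the command
below (the value 100 is only an illustration, not necessarily what my
configuration uses):

crm configure rsc_defaults resource-stickiness=100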

When the first node, Vindemiatrix, has shut down Corosync, the second,
Malastare, reports this:

root@Malastare:/home/david# crm_mon --one-shot -VrA
============
Last updated: Thu Jul  5 15:27:01 2012
Last change: Thu Jul  5 15:26:37 2012 via cibadmin on Malastare
Stack: openais
Current DC: Malastare - partition WITHOUT quorum
Version: 1.1.7-ee0730e13d124c3d58f00016c3376a1de5323cff
2 Nodes configured, 2 expected votes
17 Resources configured.
============

Node Vindemiatrix: UNCLEAN (offline)
Online: [ Malastare ]

Full list of resources:

 soapi-fencing-malastare    (stonith:external/ovh):    Started Vindemiatrix
 soapi-fencing-vindemiatrix    (stonith:external/ovh):    Started Malastare
 Master/Slave Set: ms_drbd_svn [drbd_svn]
     Masters: [ Vindemiatrix ]
     Slaves: [ Malastare ]
 Master/Slave Set: ms_drbd_pgsql [drbd_pgsql]
     Masters: [ Vindemiatrix ]
     Slaves: [ Malastare ]
 Master/Slave Set: ms_drbd_backupvi [drbd_backupvi]
     Masters: [ Vindemiatrix ]
     Slaves: [ Malastare ]
 Master/Slave Set: ms_drbd_www [drbd_www]
     Masters: [ Vindemiatrix ]
     Slaves: [ Malastare ]
 fs_www    (ocf::heartbeat:Filesystem):    Started Vindemiatrix
 fs_pgsql    (ocf::heartbeat:Filesystem):    Started Vindemiatrix
 fs_svn    (ocf::heartbeat:Filesystem):    Started Vindemiatrix
 fs_backupvi    (ocf::heartbeat:Filesystem):    Started Vindemiatrix
 VirtualIP    (ocf::heartbeat:IPaddr2):    Started Vindemiatrix
 OVHvIP    (ocf::pacemaker:OVHvIP):    Started Vindemiatrix
 ProFTPd    (ocf::heartbeat:proftpd):    Started Vindemiatrix

Node Attributes:
* Node Malastare:
    + master-drbd_backupvi:0              : 10000    
    + master-drbd_pgsql:0                 : 10000    
    + master-drbd_svn:0                   : 10000    
    + master-drbd_www:0                   : 10000    

As you can see, the node failure is detected. This state leads to the
attached log file.

Note that both ocf::pacemaker:OVHvIP and stonith:external/ovh are custom
resources which use my server provider's SOAP API to provide the
intended services. The STONITH agent does nothing but return exit status
0 when the start, stop, on or off actions are requested, returns the two
node names when the hostlist or gethosts actions are requested, and,
when the reset action is requested, effectively resets the faulting node
through the provider API. As this API doesn't provide a reliable means
of knowing the exact moment of the reset, the STONITH agent pings the
faulting node every 5 seconds until the ping fails, then forks a process
which pings the faulting node every 5 seconds until it answers again.
Because the external VPN has not yet been installed by the provider, I'm
forced to emulate it with OpenVPN (which seems unable to re-establish a
connection lost minutes ago, leading to a split-brain situation), so
once the node answers again the STONITH agent restarts OpenVPN to
re-establish the connection, then restarts Corosync and Pacemaker.
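
In case the structure matters, here is roughly what the agent looks
like, following the usual convention for external STONITH plugins (the
action is passed as the first argument). This is a simplified sketch,
not the actual script: the OVH SOAP call is replaced by a placeholder
and the status/getinfo actions are only boilerplate guesses.

#!/bin/sh
# Simplified sketch of the stonith:external/ovh agent described above.

ovh_soap_reboot() {
    # Placeholder: ask the provider's SOAP API to hard-reset "$1".
    :
}

case "$1" in
    gethosts|hostlist)
        # Report the two nodes this device can fence.
        echo "Malastare"
        echo "Vindemiatrix"
        exit 0
        ;;
    start|stop|on|off|status)
        # Nothing to do for these actions; just report success.
        exit 0
        ;;
    reset)
        node="$2"
        ovh_soap_reboot "$node"
        # The API gives no reliable completion signal, so poll with ping:
        # wait until the node stops answering (it is going down)...
        while ping -c1 -W1 "$node" >/dev/null 2>&1; do sleep 5; done
        # ...then, in the background, wait for it to come back, then
        # restart OpenVPN, Corosync and Pacemaker to rejoin the cluster.
        (
            while ! ping -c1 -W1 "$node" >/dev/null 2>&1; do sleep 5; done
            # restart OpenVPN, then Corosync and Pacemaker here
        ) &
        exit 0
        ;;
    getinfo-devid|getinfo-devname)
        echo "OVH SOAP STONITH device"
        exit 0
        ;;
    *)
        exit 1
        ;;
esac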

Aside from the VPN issue, whose performance and stability problems I'm
fully aware of, I thought that Pacemaker would start the resources on
the remaining node as soon as the STONITH agent returned exit status 0,
but it doesn't. Instead, it seems that the STONITH reset action waits
too long to report a successful reset; this delay exceeds some internal
timeout, which in turn leads Pacemaker to assume that the STONITH agent
failed. It therefore keeps trying to reset the node forever (which only
makes the API return an error, because issuing a new reset request less
than 5 minutes after the previous one is forbidden) and stops the
pending actions without restarting the resources on the remaining node.
I searched the Internet for this parameter, but the only related thing I
found is this page,
http://lists.linux-ha.org/pipermail/linux-ha/2010-March/039761.html, a
Linux-HA mailing list archive, which mentions a stonith-timeout
property. I've gone through the Pacemaker documentation without finding
any occurrence of it, and I got an error when I tried to query its
value:

root@Vindemiatrix:/home/david# crm_attribute --name stonith-timeout --query
scope=crm_config  name=stonith-timeout value=(null)
Error performing operation: The object/attribute does not exist
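
If this property is indeed the right knob, I suppose it would be set
with something like the commands below (I haven't tried them yet, and
the 120s value is only an example):

crm configure property stonith-timeout=120s
# or, equivalently, directly with crm_attribute:
crm_attribute --type crm_config --name stonith-timeout --update 120s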

So what did I miss? Must I use this property even though I cannot find
it anywhere in the documentation? Or should I rewrite my STONITH agent
to return exit status 0 as soon as the API has accepted the reset
request (contrary to what Linux-HA, http://linux-ha.org/wiki/STONITH,
states to be necessary)? Or is there something else I missed?

Thank you for having read this whole mail, and thanks in advance for
your help.

Kind regards.
-------------- next part --------------
Jul 05 15:26:58 corosync [TOTEM ] A processor failed, forming new configuration.
Jul 05 15:27:00 Malastare crmd: [10692]: notice: ais_dispatch_message: Membership 1552: quorum lost
Jul 05 15:27:00 Malastare crmd: [10692]: info: ais_status_callback: status: Vindemiatrix is now lost (was member)
Jul 05 15:27:00 Malastare cib: [10687]: notice: ais_dispatch_message: Membership 1552: quorum lost
Jul 05 15:27:00 Malastare crmd: [10692]: info: crm_update_peer: Node Vindemiatrix: id=33576970 state=lost (new) addr=r(0) ip(10.88.0.2)  votes=1 born=1540 seen=1548 proc=00000000000000000000000000111312
Jul 05 15:27:00 Malastare cib: [10687]: info: crm_update_peer: Node Vindemiatrix: id=33576970 state=lost (new) addr=r(0) ip(10.88.0.2)  votes=1 born=1540 seen=1548 proc=00000000000000000000000000111312
Jul 05 15:27:00 Malastare crmd: [10692]: WARN: check_dead_member: Our DC node (Vindemiatrix) left the cluster
Jul 05 15:27:00 corosync [pcmk  ] notice: pcmk_peer_update: Transitional membership event on ring 1552: memb=1, new=0, lost=1
Jul 05 15:27:00 corosync [pcmk  ] info: pcmk_peer_update: memb: Malastare 16799754
Jul 05 15:27:00 corosync [pcmk  ] info: pcmk_peer_update: lost: Vindemiatrix 33576970
Jul 05 15:27:00 corosync [pcmk  ] notice: pcmk_peer_update: Stable membership event on ring 1552: memb=1, new=0, lost=0
Jul 05 15:27:00 corosync [pcmk  ] info: pcmk_peer_update: MEMB: Malastare 16799754
Jul 05 15:27:00 corosync [pcmk  ] info: ais_mark_unseen_peer_dead: Node Vindemiatrix was not seen in the previous transition
Jul 05 15:27:00 corosync [pcmk  ] info: update_member: Node 33576970/Vindemiatrix is now: lost
Jul 05 15:27:00 corosync [pcmk  ] info: send_member_notification: Sending membership update 1552 to 2 children
Jul 05 15:27:00 corosync [TOTEM ] A processor joined or left the membership and a new membership was formed.
Jul 05 15:27:00 corosync [CPG   ] chosen downlist: sender r(0) ip(10.88.0.1) ; members(old:2 left:1)
Jul 05 15:27:00 corosync [MAIN  ] Completed service synchronization, ready to provide service.
Jul 05 15:27:00 Malastare crmd: [10692]: notice: do_state_transition: State transition S_NOT_DC -> S_ELECTION [ input=I_ELECTION cause=C_FSA_INTERNAL origin=check_dead_member ]
Jul 05 15:27:00 Malastare crmd: [10692]: notice: do_state_transition: State transition S_ELECTION -> S_INTEGRATION [ input=I_ELECTION_DC cause=C_FSA_INTERNAL origin=do_election_check ]
Jul 05 15:27:00 Malastare crmd: [10692]: info: do_te_control: Registering TE UUID: 556ec3cf-e89e-465b-b685-0b134c3eff58
Jul 05 15:27:00 Malastare crmd: [10692]: info: set_graph_functions: Setting custom graph functions
Jul 05 15:27:00 Malastare crmd: [10692]: info: do_dc_takeover: Taking over DC status for this partition
Jul 05 15:27:00 Malastare cib: [10687]: info: cib_process_readwrite: We are now in R/W mode
Jul 05 15:27:00 Malastare cib: [10687]: info: cib_process_request: Operation complete: op cib_master for section 'all' (origin=local/crmd/48, version=0.401.11): ok (rc=0)
Jul 05 15:27:00 Malastare cib: [10687]: info: cib_process_request: Operation complete: op cib_modify for section cib (origin=local/crmd/49, version=0.401.12): ok (rc=0)
Jul 05 15:27:00 Malastare cib: [10687]: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/51, version=0.401.13): ok (rc=0)
Jul 05 15:27:00 Malastare crmd: [10692]: info: join_make_offer: Making join offers based on membership 1552
Jul 05 15:27:00 Malastare crmd: [10692]: info: do_dc_join_offer_all: join-1: Waiting on 1 outstanding join acks
Jul 05 15:27:00 Malastare crmd: [10692]: info: ais_dispatch_message: Membership 1552: quorum still lost
Jul 05 15:27:00 Malastare cib: [10687]: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/53, version=0.401.14): ok (rc=0)
Jul 05 15:27:00 Malastare crmd: [10692]: info: crmd_ais_dispatch: Setting expected votes to 2
Jul 05 15:27:00 Malastare crmd: [10692]: info: update_dc: Set DC to Malastare (3.0.6)
Jul 05 15:27:00 Malastare crmd: [10692]: info: ais_dispatch_message: Membership 1552: quorum still lost
Jul 05 15:27:00 Malastare cib: [10687]: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/56, version=0.401.15): ok (rc=0)
Jul 05 15:27:00 Malastare crmd: [10692]: info: crmd_ais_dispatch: Setting expected votes to 2
Jul 05 15:27:00 Malastare cib: [10687]: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/59, version=0.401.16): ok (rc=0)
Jul 05 15:27:00 Malastare crmd: [10692]: notice: do_state_transition: State transition S_INTEGRATION -> S_FINALIZE_JOIN [ input=I_INTEGRATED cause=C_FSA_INTERNAL origin=check_join_state ]
Jul 05 15:27:00 Malastare crmd: [10692]: info: do_dc_join_finalize: join-1: Syncing the CIB from Malastare to the rest of the cluster
Jul 05 15:27:00 Malastare cib: [10687]: info: cib_process_request: Operation complete: op cib_sync for section 'all' (origin=local/crmd/60, version=0.401.16): ok (rc=0)
Jul 05 15:27:00 Malastare cib: [10687]: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/61, version=0.401.17): ok (rc=0)
Jul 05 15:27:00 Malastare lrmd: [10689]: WARN: G_SIG_dispatch: Dispatch function for SIGCHLD was delayed 560 ms (> 100 ms) before being called (GSource: 0x618ee0)
Jul 05 15:27:00 Malastare lrmd: [10689]: info: G_SIG_dispatch: started at 1718190281 should have started at 1718190225
Jul 05 15:27:00 Malastare crmd: [10692]: info: do_dc_join_ack: join-1: Updating node state to member for Malastare
Jul 05 15:27:00 Malastare crmd: [10692]: info: erase_status_tag: Deleting xpath: //node_state[@uname='Malastare']/lrm
Jul 05 15:27:00 Malastare cib: [10687]: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='Malastare']/lrm (origin=local/crmd/62, version=0.401.18): ok (rc=0)
Jul 05 15:27:01 Malastare crmd: [10692]: notice: do_state_transition: State transition S_FINALIZE_JOIN -> S_POLICY_ENGINE [ input=I_FINALIZED cause=C_FSA_INTERNAL origin=check_join_state ]
Jul 05 15:27:01 Malastare attrd: [10690]: notice: attrd_local_callback: Sending full refresh (origin=crmd)
Jul 05 15:27:01 Malastare crmd: [10692]: info: abort_transition_graph: do_te_invoke:162 - Triggered transition abort (complete=1) : Peer Cancelled
Jul 05 15:27:01 Malastare attrd: [10690]: notice: attrd_trigger_update: Sending flush op to all hosts for: master-drbd_www:0 (10000)
Jul 05 15:27:01 Malastare cib: [10687]: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/64, version=0.401.20): ok (rc=0)
Jul 05 15:27:01 Malastare crmd: [10692]: WARN: match_down_event: No match for shutdown action on Vindemiatrix
Jul 05 15:27:01 Malastare crmd: [10692]: info: te_update_diff: Stonith/shutdown of Vindemiatrix not matched
Jul 05 15:27:01 Malastare crmd: [10692]: info: abort_transition_graph: te_update_diff:234 - Triggered transition abort (complete=1, tag=node_state, id=Vindemiatrix, magic=NA, cib=0.401.21) : Node failure
Jul 05 15:27:01 Malastare cib: [10687]: info: cib_process_request: Operation complete: op cib_modify for section cib (origin=local/crmd/66, version=0.401.22): ok (rc=0)
Jul 05 15:27:01 Malastare attrd: [10690]: notice: attrd_trigger_update: Sending flush op to all hosts for: master-drbd_svn:0 (10000)
Jul 05 15:27:01 Malastare attrd: [10690]: notice: attrd_trigger_update: Sending flush op to all hosts for: probe_complete (true)
Jul 05 15:27:01 Malastare pengine: [10691]: notice: unpack_config: On loss of CCM Quorum: Ignore
Jul 05 15:27:01 Malastare pengine: [10691]: WARN: pe_fence_node: Node Vindemiatrix will be fenced because it is un-expectedly down
Jul 05 15:27:01 Malastare pengine: [10691]: WARN: determine_online_status: Node Vindemiatrix is unclean
Jul 05 15:27:01 Malastare pengine: [10691]: notice: unpack_rsc_op: Operation monitor found resource drbd_svn:0 active in master mode on Malastare
Jul 05 15:27:01 Malastare pengine: [10691]: notice: unpack_rsc_op: Operation monitor found resource drbd_www:0 active in master mode on Malastare
Jul 05 15:27:01 Malastare pengine: [10691]: notice: unpack_rsc_op: Operation monitor found resource drbd_backupvi:0 active in master mode on Malastare
Jul 05 15:27:01 Malastare pengine: [10691]: notice: unpack_rsc_op: Operation monitor found resource drbd_pgsql:0 active in master mode on Malastare
Jul 05 15:27:01 Malastare attrd: [10690]: notice: attrd_trigger_update: Sending flush op to all hosts for: master-drbd_backupvi:0 (10000)
Jul 05 15:27:01 Malastare pengine: [10691]: WARN: custom_action: Action soapi-fencing-malastare_stop_0 on Vindemiatrix is unrunnable (offline)
Jul 05 15:27:01 Malastare pengine: [10691]: WARN: custom_action: Marking node Vindemiatrix unclean
Jul 05 15:27:01 Malastare pengine: [10691]: WARN: custom_action: Action drbd_svn:1_demote_0 on Vindemiatrix is unrunnable (offline)
Jul 05 15:27:01 Malastare pengine: [10691]: WARN: custom_action: Action drbd_svn:1_stop_0 on Vindemiatrix is unrunnable (offline)
Jul 05 15:27:01 Malastare pengine: [10691]: WARN: custom_action: Marking node Vindemiatrix unclean
Jul 05 15:27:01 Malastare pengine: [10691]: WARN: custom_action: Action drbd_svn:1_demote_0 on Vindemiatrix is unrunnable (offline)
Jul 05 15:27:01 Malastare pengine: [10691]: WARN: custom_action: Action drbd_svn:1_stop_0 on Vindemiatrix is unrunnable (offline)
Jul 05 15:27:01 Malastare pengine: [10691]: WARN: custom_action: Marking node Vindemiatrix unclean
Jul 05 15:27:01 Malastare pengine: [10691]: WARN: custom_action: Action drbd_pgsql:1_demote_0 on Vindemiatrix is unrunnable (offline)
Jul 05 15:27:01 Malastare pengine: [10691]: WARN: custom_action: Action drbd_pgsql:1_stop_0 on Vindemiatrix is unrunnable (offline)
Jul 05 15:27:01 Malastare pengine: [10691]: WARN: custom_action: Marking node Vindemiatrix unclean
Jul 05 15:27:01 Malastare pengine: [10691]: WARN: custom_action: Action drbd_pgsql:1_demote_0 on Vindemiatrix is unrunnable (offline)
Jul 05 15:27:01 Malastare pengine: [10691]: WARN: custom_action: Action drbd_pgsql:1_stop_0 on Vindemiatrix is unrunnable (offline)
Jul 05 15:27:01 Malastare pengine: [10691]: WARN: custom_action: Marking node Vindemiatrix unclean
Jul 05 15:27:01 Malastare pengine: [10691]: WARN: custom_action: Action drbd_backupvi:1_demote_0 on Vindemiatrix is unrunnable (offline)
Jul 05 15:27:01 Malastare pengine: [10691]: WARN: custom_action: Action drbd_backupvi:1_stop_0 on Vindemiatrix is unrunnable (offline)
Jul 05 15:27:01 Malastare pengine: [10691]: WARN: custom_action: Marking node Vindemiatrix unclean
Jul 05 15:27:01 Malastare pengine: [10691]: WARN: custom_action: Action drbd_backupvi:1_demote_0 on Vindemiatrix is unrunnable (offline)
Jul 05 15:27:01 Malastare pengine: [10691]: WARN: custom_action: Action drbd_backupvi:1_stop_0 on Vindemiatrix is unrunnable (offline)
Jul 05 15:27:01 Malastare pengine: [10691]: WARN: custom_action: Marking node Vindemiatrix unclean
Jul 05 15:27:01 Malastare pengine: [10691]: WARN: custom_action: Action drbd_www:1_demote_0 on Vindemiatrix is unrunnable (offline)
Jul 05 15:27:01 Malastare pengine: [10691]: WARN: custom_action: Action drbd_www:1_stop_0 on Vindemiatrix is unrunnable (offline)
Jul 05 15:27:01 Malastare pengine: [10691]: WARN: custom_action: Marking node Vindemiatrix unclean
Jul 05 15:27:01 Malastare pengine: [10691]: WARN: custom_action: Action drbd_www:1_demote_0 on Vindemiatrix is unrunnable (offline)
Jul 05 15:27:01 Malastare pengine: [10691]: WARN: custom_action: Action drbd_www:1_stop_0 on Vindemiatrix is unrunnable (offline)
Jul 05 15:27:01 Malastare pengine: [10691]: WARN: custom_action: Marking node Vindemiatrix unclean
Jul 05 15:27:01 Malastare pengine: [10691]: WARN: custom_action: Action fs_www_stop_0 on Vindemiatrix is unrunnable (offline)
Jul 05 15:27:01 Malastare pengine: [10691]: WARN: custom_action: Marking node Vindemiatrix unclean
Jul 05 15:27:01 Malastare pengine: [10691]: WARN: custom_action: Action fs_pgsql_stop_0 on Vindemiatrix is unrunnable (offline)
Jul 05 15:27:01 Malastare pengine: [10691]: WARN: custom_action: Marking node Vindemiatrix unclean
Jul 05 15:27:01 Malastare pengine: [10691]: WARN: custom_action: Action fs_svn_stop_0 on Vindemiatrix is unrunnable (offline)
Jul 05 15:27:01 Malastare pengine: [10691]: WARN: custom_action: Marking node Vindemiatrix unclean
Jul 05 15:27:01 Malastare pengine: [10691]: WARN: custom_action: Action fs_backupvi_stop_0 on Vindemiatrix is unrunnable (offline)
Jul 05 15:27:01 Malastare pengine: [10691]: WARN: custom_action: Marking node Vindemiatrix unclean
Jul 05 15:27:01 Malastare pengine: [10691]: WARN: custom_action: Action VirtualIP_stop_0 on Vindemiatrix is unrunnable (offline)
Jul 05 15:27:01 Malastare pengine: [10691]: WARN: custom_action: Marking node Vindemiatrix unclean
Jul 05 15:27:01 Malastare pengine: [10691]: WARN: custom_action: Action OVHvIP_stop_0 on Vindemiatrix is unrunnable (offline)
Jul 05 15:27:01 Malastare pengine: [10691]: WARN: custom_action: Marking node Vindemiatrix unclean
Jul 05 15:27:01 Malastare pengine: [10691]: WARN: custom_action: Action ProFTPd_stop_0 on Vindemiatrix is unrunnable (offline)
Jul 05 15:27:01 Malastare pengine: [10691]: WARN: custom_action: Marking node Vindemiatrix unclean
Jul 05 15:27:01 Malastare pengine: [10691]: WARN: stage6: Scheduling Node Vindemiatrix for STONITH
Jul 05 15:27:01 Malastare attrd: [10690]: notice: attrd_trigger_update: Sending flush op to all hosts for: master-drbd_pgsql:0 (10000)
Jul 05 15:27:01 Malastare pengine: [10691]: notice: LogActions: Stop    soapi-fencing-malastare (Vindemiatrix)
Jul 05 15:27:01 Malastare pengine: [10691]: notice: LogActions: Promote drbd_svn:0      (Slave -> Master Malastare)
Jul 05 15:27:01 Malastare pengine: [10691]: notice: LogActions: Demote  drbd_svn:1      (Master -> Stopped Vindemiatrix)
Jul 05 15:27:01 Malastare pengine: [10691]: notice: LogActions: Promote drbd_pgsql:0    (Slave -> Master Malastare)
Jul 05 15:27:01 Malastare pengine: [10691]: notice: LogActions: Demote  drbd_pgsql:1    (Master -> Stopped Vindemiatrix)
Jul 05 15:27:01 Malastare pengine: [10691]: notice: LogActions: Promote drbd_backupvi:0 (Slave -> Master Malastare)
Jul 05 15:27:01 Malastare pengine: [10691]: notice: LogActions: Demote  drbd_backupvi:1 (Master -> Stopped Vindemiatrix)
Jul 05 15:27:01 Malastare pengine: [10691]: notice: LogActions: Promote drbd_www:0      (Slave -> Master Malastare)
Jul 05 15:27:01 Malastare pengine: [10691]: notice: LogActions: Demote  drbd_www:1      (Master -> Stopped Vindemiatrix)
Jul 05 15:27:01 Malastare pengine: [10691]: notice: LogActions: Move    fs_www  (Started Vindemiatrix -> Malastare)
Jul 05 15:27:01 Malastare pengine: [10691]: notice: LogActions: Move    fs_pgsql        (Started Vindemiatrix -> Malastare)
Jul 05 15:27:01 Malastare pengine: [10691]: notice: LogActions: Move    fs_svn  (Started Vindemiatrix -> Malastare)
Jul 05 15:27:01 Malastare pengine: [10691]: notice: LogActions: Move    fs_backupvi     (Started Vindemiatrix -> Malastare)
Jul 05 15:27:01 Malastare pengine: [10691]: notice: LogActions: Move    VirtualIP       (Started Vindemiatrix -> Malastare)
Jul 05 15:27:01 Malastare pengine: [10691]: notice: LogActions: Move    OVHvIP  (Started Vindemiatrix -> Malastare)
Jul 05 15:27:01 Malastare pengine: [10691]: notice: LogActions: Move    ProFTPd (Started Vindemiatrix -> Malastare)
Jul 05 15:27:01 Malastare crmd: [10692]: notice: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
Jul 05 15:27:01 Malastare crmd: [10692]: info: do_te_invoke: Processing graph 0 (ref=pe_calc-dc-1341494821-38) derived from /var/lib/pengine/pe-warn-1264.bz2
Jul 05 15:27:01 Malastare crmd: [10692]: info: te_rsc_command: Initiating action 1: cancel drbd_svn:0_monitor_15000 on Malastare (local)
Jul 05 15:27:01 Malastare lrmd: [10689]: info: cancel_op: operation monitor[65] on drbd_svn:0 for client 10692, its parameters: CRM_meta_clone=[0] CRM_meta_notify_slave_resource=[ ] CRM_meta_notify_active_resource=[ ] CRM_meta_notify_demote_uname=[ ] drbd_resource=[svn] CRM_meta_notify_active_uname=[ ] CRM_meta_master_node_max=[1] CRM_meta_notify_stop_resource=[ ] CRM_meta_notify_master_resource=[drbd_svn:1 ] CRM_meta_clone_node_max=[1] CRM_meta_clone_max=[2] CRM_meta_notify=[true] CRM_meta_notify_slave_uname=[ ] CRM_meta_notify_start_resource=[drbd_svn:0 ] CRM_m cancelled
Jul 05 15:27:01 Malastare crmd: [10692]: info: te_rsc_command: Initiating action 4: cancel drbd_pgsql:0_monitor_15000 on Malastare (local)
Jul 05 15:27:01 Malastare lrmd: [10689]: info: cancel_op: operation monitor[66] on drbd_pgsql:0 for client 10692, its parameters: CRM_meta_clone=[0] CRM_meta_notify_slave_resource=[ ] CRM_meta_notify_active_resource=[ ] CRM_meta_notify_demote_uname=[ ] drbd_resource=[postgresql] CRM_meta_notify_active_uname=[ ] CRM_meta_master_node_max=[1] CRM_meta_notify_stop_resource=[ ] CRM_meta_notify_master_resource=[drbd_pgsql:1 ] CRM_meta_clone_node_max=[1] CRM_meta_clone_max=[2] CRM_meta_notify=[true] CRM_meta_notify_slave_uname=[ ] CRM_meta_notify_start_resource=[drbd_pg cancelled
Jul 05 15:27:01 Malastare crmd: [10692]: info: te_rsc_command: Initiating action 3: cancel drbd_backupvi:0_monitor_15000 on Malastare (local)
Jul 05 15:27:01 Malastare lrmd: [10689]: info: cancel_op: operation monitor[67] on drbd_backupvi:0 for client 10692, its parameters: CRM_meta_clone=[0] CRM_meta_notify_slave_resource=[ ] CRM_meta_notify_active_resource=[ ] CRM_meta_notify_demote_uname=[ ] drbd_resource=[backupvi] CRM_meta_notify_active_uname=[ ] CRM_meta_master_node_max=[1] CRM_meta_notify_stop_resource=[ ] CRM_meta_notify_master_resource=[drbd_backupvi:1 ] CRM_meta_clone_node_max=[1] CRM_meta_clone_max=[2] CRM_meta_notify=[true] CRM_meta_notify_slave_uname=[ ] CRM_meta_notify_start_resource=[drb cancelled
Jul 05 15:27:01 Malastare crmd: [10692]: info: te_rsc_command: Initiating action 2: cancel drbd_www:0_monitor_15000 on Malastare (local)
Jul 05 15:27:01 Malastare lrmd: [10689]: info: cancel_op: operation monitor[68] on drbd_www:0 for client 10692, its parameters: CRM_meta_clone=[0] CRM_meta_notify_slave_resource=[ ] CRM_meta_notify_active_resource=[ ] CRM_meta_notify_demote_uname=[ ] drbd_resource=[www] CRM_meta_notify_active_uname=[ ] CRM_meta_master_node_max=[1] CRM_meta_notify_stop_resource=[ ] CRM_meta_notify_master_resource=[drbd_www:1 ] CRM_meta_clone_node_max=[1] CRM_meta_clone_max=[2] CRM_meta_notify=[true] CRM_meta_notify_slave_uname=[ ] CRM_meta_notify_start_resource=[drbd_www:0 ] CRM_m cancelled
Jul 05 15:27:01 Malastare crmd: [10692]: info: process_lrm_event: LRM operation drbd_svn:0_monitor_15000 (call=65, status=1, cib-update=0, confirmed=true) Cancelled
Jul 05 15:27:01 Malastare crmd: [10692]: info: process_lrm_event: LRM operation drbd_pgsql:0_monitor_15000 (call=66, status=1, cib-update=0, confirmed=true) Cancelled
Jul 05 15:27:01 Malastare crmd: [10692]: info: process_lrm_event: LRM operation drbd_backupvi:0_monitor_15000 (call=67, status=1, cib-update=0, confirmed=true) Cancelled
Jul 05 15:27:01 Malastare crmd: [10692]: info: process_lrm_event: LRM operation drbd_www:0_monitor_15000 (call=68, status=1, cib-update=0, confirmed=true) Cancelled
Jul 05 15:27:01 Malastare crmd: [10692]: info: te_rsc_command: Initiating action 175: notify drbd_svn:0_pre_notify_demote_0 on Malastare (local)
Jul 05 15:27:01 Malastare lrmd: [10689]: info: rsc:drbd_svn:0 notify[69] (pid 13374)
Jul 05 15:27:01 Malastare crmd: [10692]: info: te_rsc_command: Initiating action 182: notify drbd_pgsql:0_pre_notify_demote_0 on Malastare (local)
Jul 05 15:27:01 Malastare lrmd: [10689]: info: rsc:drbd_pgsql:0 notify[70] (pid 13375)
Jul 05 15:27:01 Malastare crmd: [10692]: info: te_rsc_command: Initiating action 189: notify drbd_backupvi:0_pre_notify_demote_0 on Malastare (local)
Jul 05 15:27:01 Malastare lrmd: [10689]: info: rsc:drbd_backupvi:0 notify[71] (pid 13376)
Jul 05 15:27:01 Malastare crmd: [10692]: info: te_rsc_command: Initiating action 196: notify drbd_www:0_pre_notify_demote_0 on Malastare (local)
Jul 05 15:27:01 Malastare lrmd: [10689]: info: rsc:drbd_www:0 notify[72] (pid 13379)
Jul 05 15:27:01 Malastare crmd: [10692]: notice: te_fence_node: Executing reboot fencing operation (148) on Vindemiatrix (timeout=60000)
Jul 05 15:27:01 Malastare stonith-ng: [10688]: info: initiate_remote_stonith_op: Initiating remote operation reboot for Vindemiatrix: 5c571cf9-3312-42d5-9ebb-c86e9c7013da
Jul 05 15:27:01 Malastare lrmd: [10689]: info: operation notify[69] on drbd_svn:0 for client 10692: pid 13374 exited with return code 0
Jul 05 15:27:01 Malastare crmd: [10692]: info: process_lrm_event: LRM operation drbd_svn:0_notify_0 (call=69, rc=0, cib-update=0, confirmed=true) ok
Jul 05 15:27:01 Malastare lrmd: [10689]: info: operation notify[71] on drbd_backupvi:0 for client 10692: pid 13376 exited with return code 0
Jul 05 15:27:01 Malastare lrmd: [10689]: info: operation notify[70] on drbd_pgsql:0 for client 10692: pid 13375 exited with return code 0
Jul 05 15:27:01 Malastare crmd: [10692]: info: process_lrm_event: LRM operation drbd_backupvi:0_notify_0 (call=71, rc=0, cib-update=0, confirmed=true) ok
Jul 05 15:27:01 Malastare crmd: [10692]: info: process_lrm_event: LRM operation drbd_pgsql:0_notify_0 (call=70, rc=0, cib-update=0, confirmed=true) ok
Jul 05 15:27:01 Malastare lrmd: [10689]: info: operation notify[72] on drbd_www:0 for client 10692: pid 13379 exited with return code 0
Jul 05 15:27:01 Malastare crmd: [10692]: info: process_lrm_event: LRM operation drbd_www:0_notify_0 (call=72, rc=0, cib-update=0, confirmed=true) ok
Jul 05 15:27:01 Malastare pengine: [10691]: WARN: process_pe_message: Transition 0: WARNINGs found during PE processing. PEngine Input stored in: /var/lib/pengine/pe-warn-1264.bz2
Jul 05 15:27:01 Malastare pengine: [10691]: notice: process_pe_message: Configuration WARNINGs found during PE processing.  Please run "crm_verify -L" to identify issues.
Jul 05 15:27:01 Malastare stonith-ng: [10688]: info: can_fence_host_with_device: Refreshing port list for soapi-fencing-vindemiatrix
Jul 05 15:27:01 Malastare stonith-ng: [10688]: info: can_fence_host_with_device: soapi-fencing-vindemiatrix can fence Vindemiatrix: dynamic-list
Jul 05 15:27:01 Malastare stonith-ng: [10688]: info: call_remote_stonith: Requesting that Malastare perform op reboot Vindemiatrix
Jul 05 15:27:01 Malastare stonith-ng: [10688]: info: can_fence_host_with_device: soapi-fencing-vindemiatrix can fence Vindemiatrix: dynamic-list
Jul 05 15:27:01 Malastare stonith-ng: [10688]: info: stonith_fence: Found 1 matching devices for 'Vindemiatrix'
Jul 05 15:27:01 Malastare stonith-ng: [10688]: info: stonith_command: Processed st_fence from Malastare: rc=-1
Jul 05 15:28:07 Malastare crmd: [10692]: notice: tengine_stonith_callback: Stonith operation 2 for Vindemiatrix failed (Operation timed out): aborting transition.
Jul 05 15:28:07 Malastare crmd: [10692]: info: abort_transition_graph: tengine_stonith_callback:454 - Triggered transition abort (complete=0) : Stonith failed
Jul 05 15:28:07 Malastare crmd: [10692]: notice: run_graph: ==== Transition 0 (Complete=23, Pending=0, Fired=0, Skipped=65, Incomplete=52, Source=/var/lib/pengine/pe-warn-1264.bz2): Stopped
Jul 05 15:28:07 Malastare crmd: [10692]: notice: do_state_transition: State transition S_TRANSITION_ENGINE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=notify_crmd ]
Jul 05 15:28:07 Malastare pengine: [10691]: notice: unpack_config: On loss of CCM Quorum: Ignore
Jul 05 15:28:07 Malastare pengine: [10691]: WARN: pe_fence_node: Node Vindemiatrix will be fenced because it is un-expectedly down
Jul 05 15:28:07 Malastare pengine: [10691]: WARN: determine_online_status: Node Vindemiatrix is unclean
Jul 05 15:28:07 Malastare pengine: [10691]: notice: unpack_rsc_op: Operation monitor found resource drbd_svn:0 active in master mode on Malastare
Jul 05 15:28:07 Malastare pengine: [10691]: notice: unpack_rsc_op: Operation monitor found resource drbd_www:0 active in master mode on Malastare
Jul 05 15:28:07 Malastare pengine: [10691]: notice: unpack_rsc_op: Operation monitor found resource drbd_backupvi:0 active in master mode on Malastare
Jul 05 15:28:07 Malastare pengine: [10691]: notice: unpack_rsc_op: Operation monitor found resource drbd_pgsql:0 active in master mode on Malastare
Jul 05 15:28:07 Malastare pengine: [10691]: WARN: custom_action: Action soapi-fencing-malastare_stop_0 on Vindemiatrix is unrunnable (offline)
Jul 05 15:28:07 Malastare pengine: [10691]: WARN: custom_action: Marking node Vindemiatrix unclean
Jul 05 15:28:07 Malastare pengine: [10691]: WARN: custom_action: Action drbd_svn:1_demote_0 on Vindemiatrix is unrunnable (offline)
Jul 05 15:28:07 Malastare pengine: [10691]: WARN: custom_action: Action drbd_svn:1_stop_0 on Vindemiatrix is unrunnable (offline)
Jul 05 15:28:07 Malastare pengine: [10691]: WARN: custom_action: Marking node Vindemiatrix unclean
Jul 05 15:28:07 Malastare pengine: [10691]: WARN: custom_action: Action drbd_svn:1_demote_0 on Vindemiatrix is unrunnable (offline)
Jul 05 15:28:07 Malastare pengine: [10691]: WARN: custom_action: Action drbd_svn:1_stop_0 on Vindemiatrix is unrunnable (offline)
Jul 05 15:28:07 Malastare pengine: [10691]: WARN: custom_action: Marking node Vindemiatrix unclean
Jul 05 15:28:07 Malastare pengine: [10691]: WARN: custom_action: Action drbd_pgsql:1_demote_0 on Vindemiatrix is unrunnable (offline)
Jul 05 15:28:07 Malastare pengine: [10691]: WARN: custom_action: Action drbd_pgsql:1_stop_0 on Vindemiatrix is unrunnable (offline)
Jul 05 15:28:07 Malastare pengine: [10691]: WARN: custom_action: Marking node Vindemiatrix unclean
Jul 05 15:28:07 Malastare pengine: [10691]: WARN: custom_action: Action drbd_pgsql:1_demote_0 on Vindemiatrix is unrunnable (offline)
Jul 05 15:28:07 Malastare pengine: [10691]: WARN: custom_action: Action drbd_pgsql:1_stop_0 on Vindemiatrix is unrunnable (offline)
Jul 05 15:28:07 Malastare pengine: [10691]: WARN: custom_action: Marking node Vindemiatrix unclean
Jul 05 15:28:07 Malastare pengine: [10691]: WARN: custom_action: Action drbd_backupvi:1_demote_0 on Vindemiatrix is unrunnable (offline)
Jul 05 15:28:07 Malastare pengine: [10691]: WARN: custom_action: Action drbd_backupvi:1_stop_0 on Vindemiatrix is unrunnable (offline)
Jul 05 15:28:07 Malastare pengine: [10691]: WARN: custom_action: Marking node Vindemiatrix unclean
Jul 05 15:28:07 Malastare pengine: [10691]: WARN: custom_action: Action drbd_backupvi:1_demote_0 on Vindemiatrix is unrunnable (offline)
Jul 05 15:28:07 Malastare pengine: [10691]: WARN: custom_action: Action drbd_backupvi:1_stop_0 on Vindemiatrix is unrunnable (offline)
Jul 05 15:28:07 Malastare pengine: [10691]: WARN: custom_action: Marking node Vindemiatrix unclean
Jul 05 15:28:07 Malastare pengine: [10691]: WARN: custom_action: Action drbd_www:1_demote_0 on Vindemiatrix is unrunnable (offline)
Jul 05 15:28:07 Malastare pengine: [10691]: WARN: custom_action: Action drbd_www:1_stop_0 on Vindemiatrix is unrunnable (offline)
Jul 05 15:28:07 Malastare pengine: [10691]: WARN: custom_action: Marking node Vindemiatrix unclean
Jul 05 15:28:07 Malastare pengine: [10691]: WARN: custom_action: Action drbd_www:1_demote_0 on Vindemiatrix is unrunnable (offline)
Jul 05 15:28:07 Malastare pengine: [10691]: WARN: custom_action: Action drbd_www:1_stop_0 on Vindemiatrix is unrunnable (offline)
Jul 05 15:28:07 Malastare pengine: [10691]: WARN: custom_action: Marking node Vindemiatrix unclean
Jul 05 15:28:07 Malastare pengine: [10691]: WARN: custom_action: Action fs_www_stop_0 on Vindemiatrix is unrunnable (offline)
Jul 05 15:28:07 Malastare pengine: [10691]: WARN: custom_action: Marking node Vindemiatrix unclean
Jul 05 15:28:07 Malastare pengine: [10691]: WARN: custom_action: Action fs_pgsql_stop_0 on Vindemiatrix is unrunnable (offline)
Jul 05 15:28:07 Malastare pengine: [10691]: WARN: custom_action: Marking node Vindemiatrix unclean
Jul 05 15:28:07 Malastare pengine: [10691]: WARN: custom_action: Action fs_svn_stop_0 on Vindemiatrix is unrunnable (offline)
Jul 05 15:28:07 Malastare pengine: [10691]: WARN: custom_action: Marking node Vindemiatrix unclean
Jul 05 15:28:07 Malastare pengine: [10691]: WARN: custom_action: Action fs_backupvi_stop_0 on Vindemiatrix is unrunnable (offline)
Jul 05 15:28:07 Malastare pengine: [10691]: WARN: custom_action: Marking node Vindemiatrix unclean
Jul 05 15:28:07 Malastare pengine: [10691]: WARN: custom_action: Action VirtualIP_stop_0 on Vindemiatrix is unrunnable (offline)
Jul 05 15:28:07 Malastare pengine: [10691]: WARN: custom_action: Marking node Vindemiatrix unclean
Jul 05 15:28:07 Malastare pengine: [10691]: WARN: custom_action: Action OVHvIP_stop_0 on Vindemiatrix is unrunnable (offline)
Jul 05 15:28:07 Malastare pengine: [10691]: WARN: custom_action: Marking node Vindemiatrix unclean
Jul 05 15:28:07 Malastare pengine: [10691]: WARN: custom_action: Action ProFTPd_stop_0 on Vindemiatrix is unrunnable (offline)
Jul 05 15:28:07 Malastare pengine: [10691]: WARN: custom_action: Marking node Vindemiatrix unclean
Jul 05 15:28:07 Malastare pengine: [10691]: WARN: stage6: Scheduling Node Vindemiatrix for STONITH
Jul 05 15:28:07 Malastare pengine: [10691]: notice: LogActions: Stop    soapi-fencing-malastare (Vindemiatrix)
Jul 05 15:28:07 Malastare pengine: [10691]: notice: LogActions: Promote drbd_svn:0      (Slave -> Master Malastare)
Jul 05 15:28:07 Malastare pengine: [10691]: notice: LogActions: Demote  drbd_svn:1      (Master -> Stopped Vindemiatrix)
Jul 05 15:28:07 Malastare pengine: [10691]: notice: LogActions: Promote drbd_pgsql:0    (Slave -> Master Malastare)
Jul 05 15:28:07 Malastare pengine: [10691]: notice: LogActions: Demote  drbd_pgsql:1    (Master -> Stopped Vindemiatrix)
Jul 05 15:28:07 Malastare pengine: [10691]: notice: LogActions: Promote drbd_backupvi:0 (Slave -> Master Malastare)
Jul 05 15:28:07 Malastare pengine: [10691]: notice: LogActions: Demote  drbd_backupvi:1 (Master -> Stopped Vindemiatrix)
Jul 05 15:28:07 Malastare pengine: [10691]: notice: LogActions: Promote drbd_www:0      (Slave -> Master Malastare)
Jul 05 15:28:07 Malastare pengine: [10691]: notice: LogActions: Demote  drbd_www:1      (Master -> Stopped Vindemiatrix)
Jul 05 15:28:07 Malastare pengine: [10691]: notice: LogActions: Move    fs_www  (Started Vindemiatrix -> Malastare)
Jul 05 15:28:07 Malastare pengine: [10691]: notice: LogActions: Move    fs_pgsql        (Started Vindemiatrix -> Malastare)
Jul 05 15:28:07 Malastare pengine: [10691]: notice: LogActions: Move    fs_svn  (Started Vindemiatrix -> Malastare)
Jul 05 15:28:07 Malastare pengine: [10691]: notice: LogActions: Move    fs_backupvi     (Started Vindemiatrix -> Malastare)
Jul 05 15:28:07 Malastare pengine: [10691]: notice: LogActions: Move    VirtualIP       (Started Vindemiatrix -> Malastare)
Jul 05 15:28:07 Malastare pengine: [10691]: notice: LogActions: Move    OVHvIP  (Started Vindemiatrix -> Malastare)
Jul 05 15:28:07 Malastare pengine: [10691]: notice: LogActions: Move    ProFTPd (Started Vindemiatrix -> Malastare)
Jul 05 15:28:07 Malastare crmd: [10692]: notice: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
Jul 05 15:28:07 Malastare crmd: [10692]: WARN: destroy_action: Cancelling timer for action 1 (src=75)
Jul 05 15:28:07 Malastare crmd: [10692]: WARN: destroy_action: Cancelling timer for action 4 (src=76)
Jul 05 15:28:07 Malastare crmd: [10692]: WARN: destroy_action: Cancelling timer for action 3 (src=77)
Jul 05 15:28:07 Malastare crmd: [10692]: WARN: destroy_action: Cancelling timer for action 2 (src=78)
Jul 05 15:28:07 Malastare crmd: [10692]: info: do_te_invoke: Processing graph 1 (ref=pe_calc-dc-1341494887-55) derived from /var/lib/pengine/pe-warn-1265.bz2
Jul 05 15:28:07 Malastare crmd: [10692]: info: te_rsc_command: Initiating action 1: cancel drbd_svn:0_monitor_15000 on Malastare (local)
Jul 05 15:28:07 Malastare crmd: [10692]: info: te_rsc_command: Initiating action 4: cancel drbd_pgsql:0_monitor_15000 on Malastare (local)
Jul 05 15:28:07 Malastare crmd: [10692]: info: te_rsc_command: Initiating action 3: cancel drbd_backupvi:0_monitor_15000 on Malastare (local)
Jul 05 15:28:07 Malastare crmd: [10692]: info: te_rsc_command: Initiating action 2: cancel drbd_www:0_monitor_15000 on Malastare (local)
Jul 05 15:28:07 Malastare crmd: [10692]: info: te_rsc_command: Initiating action 175: notify drbd_svn:0_pre_notify_demote_0 on Malastare (local)
Jul 05 15:28:07 Malastare cib: [10687]: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='Malastare']//lrm_resource[@id='drbd_svn:0']/lrm_rsc_op[@id='drbd_svn:0_monitor_15000'] (origin=local/crmd/74, version=0.401.32): ok (rc=0)
Jul 05 15:28:07 Malastare lrmd: [10689]: info: rsc:drbd_svn:0 notify[73] (pid 13595)
Jul 05 15:28:07 Malastare crmd: [10692]: info: te_rsc_command: Initiating action 182: notify drbd_pgsql:0_pre_notify_demote_0 on Malastare (local)
Jul 05 15:28:07 Malastare lrmd: [10689]: info: rsc:drbd_pgsql:0 notify[74] (pid 13596)
Jul 05 15:28:07 Malastare crmd: [10692]: info: te_rsc_command: Initiating action 189: notify drbd_backupvi:0_pre_notify_demote_0 on Malastare (local)
Jul 05 15:28:07 Malastare lrmd: [10689]: info: rsc:drbd_backupvi:0 notify[75] (pid 13597)
Jul 05 15:28:07 Malastare crmd: [10692]: info: te_rsc_command: Initiating action 196: notify drbd_www:0_pre_notify_demote_0 on Malastare (local)
Jul 05 15:28:07 Malastare lrmd: [10689]: info: rsc:drbd_www:0 notify[76] (pid 13600)
Jul 05 15:28:07 Malastare cib: [10687]: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='Malastare']//lrm_resource[@id='drbd_pgsql:0']/lrm_rsc_op[@id='drbd_pgsql:0_monitor_15000'] (origin=local/crmd/75, version=0.401.33): ok (rc=0)
Jul 05 15:28:07 Malastare crmd: [10692]: notice: te_fence_node: Executing reboot fencing operation (148) on Vindemiatrix (timeout=60000)
Jul 05 15:28:07 Malastare stonith-ng: [10688]: info: initiate_remote_stonith_op: Initiating remote operation reboot for Vindemiatrix: e38d3c52-07a8-4ddc-87f8-1bdd3d052771
Jul 05 15:28:07 Malastare cib: [10687]: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='Malastare']//lrm_resource[@id='drbd_backupvi:0']/lrm_rsc_op[@id='drbd_backupvi:0_monitor_15000'] (origin=local/crmd/76, version=0.401.34): ok (rc=0)
Jul 05 15:28:07 Malastare cib: [10687]: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='Malastare']//lrm_resource[@id='drbd_www:0']/lrm_rsc_op[@id='drbd_www:0_monitor_15000'] (origin=local/crmd/77, version=0.401.35): ok (rc=0)
Jul 05 15:28:07 Malastare lrmd: [10689]: info: operation notify[73] on drbd_svn:0 for client 10692: pid 13595 exited with return code 0
Jul 05 15:28:07 Malastare lrmd: [10689]: info: operation notify[75] on drbd_backupvi:0 for client 10692: pid 13597 exited with return code 0
Jul 05 15:28:07 Malastare crmd: [10692]: info: process_lrm_event: LRM operation drbd_svn:0_notify_0 (call=73, rc=0, cib-update=0, confirmed=true) ok
Jul 05 15:28:07 Malastare lrmd: [10689]: info: operation notify[74] on drbd_pgsql:0 for client 10692: pid 13596 exited with return code 0
Jul 05 15:28:07 Malastare crmd: [10692]: info: process_lrm_event: LRM operation drbd_backupvi:0_notify_0 (call=75, rc=0, cib-update=0, confirmed=true) ok
Jul 05 15:28:07 Malastare crmd: [10692]: info: process_lrm_event: LRM operation drbd_pgsql:0_notify_0 (call=74, rc=0, cib-update=0, confirmed=true) ok
Jul 05 15:28:07 Malastare lrmd: [10689]: info: operation notify[76] on drbd_www:0 for client 10692: pid 13600 exited with return code 0
Jul 05 15:28:07 Malastare crmd: [10692]: info: process_lrm_event: LRM operation drbd_www:0_notify_0 (call=76, rc=0, cib-update=0, confirmed=true) ok
Jul 05 15:28:07 Malastare pengine: [10691]: WARN: process_pe_message: Transition 1: WARNINGs found during PE processing. PEngine Input stored in: /var/lib/pengine/pe-warn-1265.bz2
Jul 05 15:28:07 Malastare pengine: [10691]: notice: process_pe_message: Configuration WARNINGs found during PE processing.  Please run "crm_verify -L" to identify issues.
Jul 05 15:28:07 Malastare stonith-ng: [10688]: info: can_fence_host_with_device: Refreshing port list for soapi-fencing-vindemiatrix
Jul 05 15:28:07 Malastare stonith-ng: [10688]: info: can_fence_host_with_device: soapi-fencing-vindemiatrix can fence Vindemiatrix: dynamic-list
Jul 05 15:28:07 Malastare stonith-ng: [10688]: info: call_remote_stonith: Requesting that Malastare perform op reboot Vindemiatrix
Jul 05 15:28:07 Malastare stonith-ng: [10688]: info: can_fence_host_with_device: soapi-fencing-vindemiatrix can fence Vindemiatrix: dynamic-list
Jul 05 15:28:07 Malastare stonith-ng: [10688]: info: stonith_fence: Found 1 matching devices for 'Vindemiatrix'
Jul 05 15:28:07 Malastare stonith-ng: [10688]: info: stonith_command: Processed st_fence from Malastare: rc=-1
Jul 05 15:28:13 Malastare stonith-ng: [10688]: ERROR: remote_op_done: Operation reboot of Vindemiatrix by <no-one> for Malastare[92648749-c212-46b3-8530-ddc337f22472]: Operation timed out
Jul 05 15:28:13 Malastare crmd: [10692]: WARN: stonith_perform_callback: STONITH command failed: Operation timed out
Jul 05 15:28:13 Malastare crmd: [10692]: notice: tengine_stonith_notify: Peer Vindemiatrix was not terminated (reboot) by <anyone> for Malastare: Operation timed out (ref=5c571cf9-3312-42d5-9ebb-c86e9c7013da)
Jul 05 15:29:13 Malastare crmd: [10692]: notice: tengine_stonith_callback: Stonith operation 3 for Vindemiatrix failed (Operation timed out): aborting transition.
Jul 05 15:29:13 Malastare crmd: [10692]: info: abort_transition_graph: tengine_stonith_callback:454 - Triggered transition abort (complete=0) : Stonith failed
Jul 05 15:29:13 Malastare crmd: [10692]: notice: run_graph: ==== Transition 1 (Complete=23, Pending=0, Fired=0, Skipped=65, Incomplete=52, Source=/var/lib/pengine/pe-warn-1265.bz2): Stopped
Jul 05 15:29:13 Malastare crmd: [10692]: notice: do_state_transition: State transition S_TRANSITION_ENGINE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=notify_crmd ]
Jul 05 15:29:13 Malastare pengine: [10691]: notice: unpack_config: On loss of CCM Quorum: Ignore
Jul 05 15:29:13 Malastare pengine: [10691]: WARN: pe_fence_node: Node Vindemiatrix will be fenced because it is un-expectedly down
Jul 05 15:29:13 Malastare pengine: [10691]: WARN: determine_online_status: Node Vindemiatrix is unclean
Jul 05 15:29:13 Malastare pengine: [10691]: notice: unpack_rsc_op: Operation monitor found resource drbd_svn:0 active in master mode on Malastare
Jul 05 15:29:13 Malastare pengine: [10691]: notice: unpack_rsc_op: Operation monitor found resource drbd_www:0 active in master mode on Malastare
Jul 05 15:29:13 Malastare pengine: [10691]: notice: unpack_rsc_op: Operation monitor found resource drbd_backupvi:0 active in master mode on Malastare
Jul 05 15:29:13 Malastare pengine: [10691]: notice: unpack_rsc_op: Operation monitor found resource drbd_pgsql:0 active in master mode on Malastare
Jul 05 15:29:13 Malastare pengine: [10691]: WARN: custom_action: Action soapi-fencing-malastare_stop_0 on Vindemiatrix is unrunnable (offline)
Jul 05 15:29:13 Malastare pengine: [10691]: WARN: custom_action: Marking node Vindemiatrix unclean
Jul 05 15:29:13 Malastare pengine: [10691]: WARN: custom_action: Action drbd_svn:1_demote_0 on Vindemiatrix is unrunnable (offline)
Jul 05 15:29:13 Malastare pengine: [10691]: WARN: custom_action: Action drbd_svn:1_stop_0 on Vindemiatrix is unrunnable (offline)
Jul 05 15:29:13 Malastare pengine: [10691]: WARN: custom_action: Marking node Vindemiatrix unclean
Jul 05 15:29:13 Malastare pengine: [10691]: WARN: custom_action: Action drbd_svn:1_demote_0 on Vindemiatrix is unrunnable (offline)
Jul 05 15:29:13 Malastare pengine: [10691]: WARN: custom_action: Action drbd_svn:1_stop_0 on Vindemiatrix is unrunnable (offline)
Jul 05 15:29:13 Malastare pengine: [10691]: WARN: custom_action: Marking node Vindemiatrix unclean
Jul 05 15:29:13 Malastare pengine: [10691]: WARN: custom_action: Action drbd_pgsql:1_demote_0 on Vindemiatrix is unrunnable (offline)
Jul 05 15:29:13 Malastare pengine: [10691]: WARN: custom_action: Action drbd_pgsql:1_stop_0 on Vindemiatrix is unrunnable (offline)
Jul 05 15:29:13 Malastare pengine: [10691]: WARN: custom_action: Marking node Vindemiatrix unclean
Jul 05 15:29:13 Malastare pengine: [10691]: WARN: custom_action: Action drbd_pgsql:1_demote_0 on Vindemiatrix is unrunnable (offline)
Jul 05 15:29:13 Malastare pengine: [10691]: WARN: custom_action: Action drbd_pgsql:1_stop_0 on Vindemiatrix is unrunnable (offline)
Jul 05 15:29:13 Malastare pengine: [10691]: WARN: custom_action: Marking node Vindemiatrix unclean
Jul 05 15:29:13 Malastare pengine: [10691]: WARN: custom_action: Action drbd_backupvi:1_demote_0 on Vindemiatrix is unrunnable (offline)
Jul 05 15:29:13 Malastare pengine: [10691]: WARN: custom_action: Action drbd_backupvi:1_stop_0 on Vindemiatrix is unrunnable (offline)
Jul 05 15:29:13 Malastare pengine: [10691]: WARN: custom_action: Marking node Vindemiatrix unclean
Jul 05 15:29:13 Malastare pengine: [10691]: WARN: custom_action: Action drbd_backupvi:1_demote_0 on Vindemiatrix is unrunnable (offline)
Jul 05 15:29:13 Malastare pengine: [10691]: WARN: custom_action: Action drbd_backupvi:1_stop_0 on Vindemiatrix is unrunnable (offline)
Jul 05 15:29:13 Malastare pengine: [10691]: WARN: custom_action: Marking node Vindemiatrix unclean
Jul 05 15:29:13 Malastare pengine: [10691]: WARN: custom_action: Action drbd_www:1_demote_0 on Vindemiatrix is unrunnable (offline)
Jul 05 15:29:13 Malastare pengine: [10691]: WARN: custom_action: Action drbd_www:1_stop_0 on Vindemiatrix is unrunnable (offline)
Jul 05 15:29:13 Malastare pengine: [10691]: WARN: custom_action: Marking node Vindemiatrix unclean
Jul 05 15:29:13 Malastare pengine: [10691]: WARN: custom_action: Action drbd_www:1_demote_0 on Vindemiatrix is unrunnable (offline)
Jul 05 15:29:13 Malastare pengine: [10691]: WARN: custom_action: Action drbd_www:1_stop_0 on Vindemiatrix is unrunnable (offline)
Jul 05 15:29:13 Malastare pengine: [10691]: WARN: custom_action: Marking node Vindemiatrix unclean
Jul 05 15:29:13 Malastare pengine: [10691]: WARN: custom_action: Action fs_www_stop_0 on Vindemiatrix is unrunnable (offline)
Jul 05 15:29:13 Malastare pengine: [10691]: WARN: custom_action: Marking node Vindemiatrix unclean
Jul 05 15:29:13 Malastare pengine: [10691]: WARN: custom_action: Action fs_pgsql_stop_0 on Vindemiatrix is unrunnable (offline)
Jul 05 15:29:13 Malastare pengine: [10691]: WARN: custom_action: Marking node Vindemiatrix unclean
Jul 05 15:29:13 Malastare pengine: [10691]: WARN: custom_action: Action fs_svn_stop_0 on Vindemiatrix is unrunnable (offline)
Jul 05 15:29:13 Malastare pengine: [10691]: WARN: custom_action: Marking node Vindemiatrix unclean
Jul 05 15:29:13 Malastare pengine: [10691]: WARN: custom_action: Action fs_backupvi_stop_0 on Vindemiatrix is unrunnable (offline)
Jul 05 15:29:13 Malastare pengine: [10691]: WARN: custom_action: Marking node Vindemiatrix unclean
Jul 05 15:29:13 Malastare pengine: [10691]: WARN: custom_action: Action VirtualIP_stop_0 on Vindemiatrix is unrunnable (offline)
Jul 05 15:29:13 Malastare pengine: [10691]: WARN: custom_action: Marking node Vindemiatrix unclean
Jul 05 15:29:13 Malastare pengine: [10691]: WARN: custom_action: Action OVHvIP_stop_0 on Vindemiatrix is unrunnable (offline)
Jul 05 15:29:13 Malastare pengine: [10691]: WARN: custom_action: Marking node Vindemiatrix unclean
Jul 05 15:29:13 Malastare pengine: [10691]: WARN: custom_action: Action ProFTPd_stop_0 on Vindemiatrix is unrunnable (offline)
Jul 05 15:29:13 Malastare pengine: [10691]: WARN: custom_action: Marking node Vindemiatrix unclean
Jul 05 15:29:13 Malastare pengine: [10691]: WARN: stage6: Scheduling Node Vindemiatrix for STONITH
Jul 05 15:29:13 Malastare pengine: [10691]: notice: LogActions: Stop    soapi-fencing-malastare (Vindemiatrix)
Jul 05 15:29:13 Malastare pengine: [10691]: notice: LogActions: Promote drbd_svn:0      (Slave -> Master Malastare)
Jul 05 15:29:13 Malastare pengine: [10691]: notice: LogActions: Demote  drbd_svn:1      (Master -> Stopped Vindemiatrix)
Jul 05 15:29:13 Malastare pengine: [10691]: notice: LogActions: Promote drbd_pgsql:0    (Slave -> Master Malastare)
Jul 05 15:29:13 Malastare pengine: [10691]: notice: LogActions: Demote  drbd_pgsql:1    (Master -> Stopped Vindemiatrix)
Jul 05 15:29:13 Malastare pengine: [10691]: notice: LogActions: Promote drbd_backupvi:0 (Slave -> Master Malastare)
Jul 05 15:29:13 Malastare pengine: [10691]: notice: LogActions: Demote  drbd_backupvi:1 (Master -> Stopped Vindemiatrix)
Jul 05 15:29:13 Malastare pengine: [10691]: notice: LogActions: Promote drbd_www:0      (Slave -> Master Malastare)
Jul 05 15:29:13 Malastare pengine: [10691]: notice: LogActions: Demote  drbd_www:1      (Master -> Stopped Vindemiatrix)
Jul 05 15:29:13 Malastare pengine: [10691]: notice: LogActions: Move    fs_www  (Started Vindemiatrix -> Malastare)
Jul 05 15:29:13 Malastare pengine: [10691]: notice: LogActions: Move    fs_pgsql        (Started Vindemiatrix -> Malastare)
Jul 05 15:29:13 Malastare pengine: [10691]: notice: LogActions: Move    fs_svn  (Started Vindemiatrix -> Malastare)
Jul 05 15:29:13 Malastare pengine: [10691]: notice: LogActions: Move    fs_backupvi     (Started Vindemiatrix -> Malastare)
Jul 05 15:29:13 Malastare pengine: [10691]: notice: LogActions: Move    VirtualIP       (Started Vindemiatrix -> Malastare)
Jul 05 15:29:13 Malastare pengine: [10691]: notice: LogActions: Move    OVHvIP  (Started Vindemiatrix -> Malastare)
Jul 05 15:29:13 Malastare pengine: [10691]: notice: LogActions: Move    ProFTPd (Started Vindemiatrix -> Malastare)
Jul 05 15:29:13 Malastare crmd: [10692]: notice: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
Jul 05 15:29:13 Malastare crmd: [10692]: WARN: destroy_action: Cancelling timer for action 1 (src=85)
Jul 05 15:29:13 Malastare crmd: [10692]: WARN: destroy_action: Cancelling timer for action 4 (src=86)
Jul 05 15:29:13 Malastare crmd: [10692]: WARN: destroy_action: Cancelling timer for action 3 (src=87)
Jul 05 15:29:13 Malastare crmd: [10692]: WARN: destroy_action: Cancelling timer for action 2 (src=88)
Jul 05 15:29:13 Malastare crmd: [10692]: info: do_te_invoke: Processing graph 2 (ref=pe_calc-dc-1341494953-72) derived from /var/lib/pengine/pe-warn-1266.bz2
Jul 05 15:29:13 Malastare crmd: [10692]: info: te_rsc_command: Initiating action 171: notify drbd_svn:0_pre_notify_demote_0 on Malastare (local)
Jul 05 15:29:13 Malastare lrmd: [10689]: info: rsc:drbd_svn:0 notify[77] (pid 13748)
Jul 05 15:29:13 Malastare crmd: [10692]: info: te_rsc_command: Initiating action 178: notify drbd_pgsql:0_pre_notify_demote_0 on Malastare (local)
Jul 05 15:29:13 Malastare lrmd: [10689]: info: rsc:drbd_pgsql:0 notify[78] (pid 13749)
Jul 05 15:29:13 Malastare crmd: [10692]: info: te_rsc_command: Initiating action 185: notify drbd_backupvi:0_pre_notify_demote_0 on Malastare (local)
Jul 05 15:29:13 Malastare lrmd: [10689]: info: rsc:drbd_backupvi:0 notify[79] (pid 13750)
Jul 05 15:29:13 Malastare crmd: [10692]: info: te_rsc_command: Initiating action 192: notify drbd_www:0_pre_notify_demote_0 on Malastare (local)
Jul 05 15:29:13 Malastare lrmd: [10689]: info: rsc:drbd_www:0 notify[80] (pid 13753)
Jul 05 15:29:13 Malastare crmd: [10692]: notice: te_fence_node: Executing reboot fencing operation (144) on Vindemiatrix (timeout=60000)
Jul 05 15:29:13 Malastare stonith-ng: [10688]: info: initiate_remote_stonith_op: Initiating remote operation reboot for Vindemiatrix: c2ec13d0-6e3a-4ae6-bb74-b7da9a737931
Jul 05 15:29:13 Malastare lrmd: [10689]: info: operation notify[79] on drbd_backupvi:0 for client 10692: pid 13750 exited with return code 0
Jul 05 15:29:13 Malastare crmd: [10692]: info: process_lrm_event: LRM operation drbd_backupvi:0_notify_0 (call=79, rc=0, cib-update=0, confirmed=true) ok
Jul 05 15:29:13 Malastare lrmd: [10689]: info: operation notify[77] on drbd_svn:0 for client 10692: pid 13748 exited with return code 0
Jul 05 15:29:13 Malastare crmd: [10692]: info: process_lrm_event: LRM operation drbd_svn:0_notify_0 (call=77, rc=0, cib-update=0, confirmed=true) ok
Jul 05 15:29:13 Malastare lrmd: [10689]: info: operation notify[80] on drbd_www:0 for client 10692: pid 13753 exited with return code 0
Jul 05 15:29:13 Malastare crmd: [10692]: info: process_lrm_event: LRM operation drbd_www:0_notify_0 (call=80, rc=0, cib-update=0, confirmed=true) ok
Jul 05 15:29:13 Malastare lrmd: [10689]: info: operation notify[78] on drbd_pgsql:0 for client 10692: pid 13749 exited with return code 0
Jul 05 15:29:13 Malastare crmd: [10692]: info: process_lrm_event: LRM operation drbd_pgsql:0_notify_0 (call=78, rc=0, cib-update=0, confirmed=true) ok
Jul 05 15:29:13 Malastare pengine: [10691]: WARN: process_pe_message: Transition 2: WARNINGs found during PE processing. PEngine Input stored in: /var/lib/pengine/pe-warn-1266.bz2
Jul 05 15:29:13 Malastare pengine: [10691]: notice: process_pe_message: Configuration WARNINGs found during PE processing.  Please run "crm_verify -L" to identify issues.
Jul 05 15:29:13 Malastare stonith-ng: [10688]: info: can_fence_host_with_device: Refreshing port list for soapi-fencing-vindemiatrix
Jul 05 15:29:13 Malastare stonith-ng: [10688]: info: can_fence_host_with_device: soapi-fencing-vindemiatrix can fence Vindemiatrix: dynamic-list
Jul 05 15:29:13 Malastare stonith-ng: [10688]: info: call_remote_stonith: Requesting that Malastare perform op reboot Vindemiatrix
Jul 05 15:29:13 Malastare stonith-ng: [10688]: info: can_fence_host_with_device: soapi-fencing-vindemiatrix can fence Vindemiatrix: dynamic-list
Jul 05 15:29:13 Malastare stonith-ng: [10688]: info: stonith_fence: Found 1 matching devices for 'Vindemiatrix'
Jul 05 15:29:13 Malastare stonith-ng: [10688]: info: stonith_command: Processed st_fence from Malastare: rc=-1
Jul 05 15:29:19 Malastare stonith-ng: [10688]: ERROR: remote_op_done: Operation reboot of Vindemiatrix by <no-one> for Malastare[92648749-c212-46b3-8530-ddc337f22472]: Operation timed out
Jul 05 15:29:19 Malastare crmd: [10692]: WARN: stonith_perform_callback: STONITH command failed: Operation timed out
Jul 05 15:29:19 Malastare crmd: [10692]: notice: tengine_stonith_notify: Peer Vindemiatrix was not terminated (reboot) by <anyone> for Malastare: Operation timed out (ref=e38d3c52-07a8-4ddc-87f8-1bdd3d052771)
Jul 05 15:30:00 corosync [TOTEM ] The network interface is down.
Jul 05 15:30:01 corosync [TOTEM ] The network interface [10.88.0.1] is now up.
Jul 05 15:30:05 corosync [SERV  ] Unloading all Corosync service engines.
Jul 05 15:30:05 corosync [pcmk  ] notice: pcmk_shutdown: Preventing Corosync shutdown.  Please ensure Pacemaker is stopped first.
Jul 05 15:30:05 corosync [pcmk  ] notice: pcmk_peer_update: Transitional membership event on ring 1560: memb=1, new=0, lost=0
Jul 05 15:30:05 corosync [pcmk  ] info: pcmk_peer_update: memb: Malastare 16799754
Jul 05 15:30:05 corosync [pcmk  ] notice: pcmk_peer_update: Stable membership event on ring 1560: memb=1, new=0, lost=0
Jul 05 15:30:05 corosync [pcmk  ] info: pcmk_peer_update: MEMB: Malastare 16799754
Jul 05 15:30:05 corosync [TOTEM ] A processor joined or left the membership and a new membership was formed.
Jul 05 15:30:05 corosync [CPG   ] chosen downlist: sender r(0) ip(10.88.0.1) ; members(old:1 left:0)
Jul 05 15:30:05 corosync [MAIN  ] Completed service synchronization, ready to provide service.
Jul 05 15:30:06 Malastare pacemakerd: [10683]: ERROR: cfg_connection_destroy: Connection destroyed
Jul 05 15:30:06 Malastare attrd: [10690]: ERROR: ais_dispatch: Receiving message body failed: (2) Library error: Resource temporarily unavailable (11)
Jul 05 15:30:06 Malastare crmd: [10692]: ERROR: ais_dispatch: Receiving message body failed: (2) Library error: Resource temporarily unavailable (11)
Jul 05 15:30:06 Malastare stonith-ng: [10688]: ERROR: ais_dispatch: Receiving message body failed: (2) Library error: Resource temporarily unavailable (11)
Jul 05 15:30:06 Malastare crmd: [10692]: ERROR: ais_dispatch: AIS connection failed: 0x6b8b456700000000
Jul 05 15:30:06 Malastare attrd: [10690]: ERROR: ais_dispatch: AIS connection failed: 0x6b8b456700000000
Jul 05 15:30:06 Malastare pacemakerd: [10683]: ERROR: cpg_connection_destroy: Connection destroyed
Jul 05 15:30:06 Malastare crmd: [10692]: info: crmd_ais_destroy: connection closed
Jul 05 15:30:06 Malastare stonith-ng: [10688]: ERROR: ais_dispatch: AIS connection failed: 0x6b8b456700000000
Jul 05 15:30:06 Malastare pacemakerd: [10683]: notice: pcmk_shutdown_worker: Shuting down Pacemaker
Jul 05 15:30:06 Malastare attrd: [10690]: CRIT: attrd_ais_destroy: Lost connection to OpenAIS service!
Jul 05 15:30:06 Malastare pacemakerd: [10683]: notice: stop_child: Stopping crmd: Sent -15 to process 10692
Jul 05 15:30:06 Malastare stonith-ng: [10688]: ERROR: stonith_peer_ais_destroy: AIS connection terminated
Jul 05 15:30:06 Malastare attrd: [10690]: notice: main: Exiting...
Jul 05 15:30:06 Malastare attrd: [10690]: ERROR: attrd_cib_connection_destroy: Connection to the CIB terminated...
Jul 05 15:30:06 Malastare cib: [10687]: ERROR: ais_dispatch: Receiving message body failed: (2) Library error: Resource temporarily unavailable (11)
Jul 05 15:30:06 Malastare cib: [10687]: ERROR: ais_dispatch: AIS connection failed: 0x5962206100000000
Jul 05 15:30:06 Malastare cib: [10687]: ERROR: cib_ais_destroy: Corosync connection lost!  Exiting.
Jul 05 15:30:06 Malastare cib: [10687]: info: terminate_cib: cib_ais_destroy: Exiting...
Jul 05 15:30:06 Malastare pacemakerd: [10683]: ERROR: pcmk_child_exit: Child process stonith-ng exited (pid=10688, rc=1)
Jul 05 15:30:06 Malastare crmd: [10692]: info: crm_signal_dispatch: Invoking handler for signal 15: Terminated
Jul 05 15:30:06 Malastare crmd: [10692]: notice: crm_shutdown: Requesting shutdown, upper limit is 1200000ms
Jul 05 15:30:06 Malastare crmd: [10692]: WARN: do_log: FSA: Input I_SHUTDOWN from crm_shutdown() received in state S_TRANSITION_ENGINE
Jul 05 15:30:06 Malastare pacemakerd: [10683]: WARN: send_ipc_message: IPC Channel to 10690 is not connected
Jul 05 15:30:06 Malastare crmd: [10692]: notice: do_state_transition: State transition S_TRANSITION_ENGINE -> S_POLICY_ENGINE [ input=I_SHUTDOWN cause=C_SHUTDOWN origin=crm_shutdown ]
Jul 05 15:30:06 Malastare crmd: [10692]: info: do_shutdown_req: Sending shutdown request to Malastare
Jul 05 15:30:06 Malastare pacemakerd: [10683]: ERROR: send_cpg_message: Sending message via cpg FAILED: (rc=9) Bad handle
Jul 05 15:30:06 Malastare pacemakerd: [10683]: ERROR: pcmk_child_exit: Child process attrd exited (pid=10690, rc=1)
Jul 05 15:30:06 Malastare pacemakerd: [10683]: ERROR: send_cpg_message: Sending message via cpg FAILED: (rc=9) Bad handle
Jul 05 15:30:06 Malastare pacemakerd: [10683]: ERROR: pcmk_child_exit: Child process cib exited (pid=10687, rc=1)
Jul 05 15:30:06 Malastare pacemakerd: [10683]: WARN: send_ipc_message: IPC Channel to 10687 is not connected
Jul 05 15:30:06 Malastare pacemakerd: [10683]: ERROR: send_cpg_message: Sending message via cpg FAILED: (rc=9) Bad handle
Jul 05 15:30:07 Malastare crmd: [10692]: ERROR: send_ais_text: Sending message 42 via pcmk: FAILED (rc=2): Library error: Connection timed out (110)
Jul 05 15:30:07 Malastare crmd: [10692]: ERROR: do_log: FSA: Input I_ERROR from do_shutdown_req() received in state S_POLICY_ENGINE
Jul 05 15:30:07 Malastare crmd: [10692]: notice: do_state_transition: State transition S_POLICY_ENGINE -> S_RECOVERY [ input=I_ERROR cause=C_FSA_INTERNAL origin=do_shutdown_req ]
Jul 05 15:30:07 Malastare crmd: [10692]: ERROR: do_recover: Action A_RECOVER (0000000001000000) not supported
Jul 05 15:30:07 Malastare crmd: [10692]: WARN: do_election_vote: Not voting in election, we're in state S_RECOVERY
Jul 05 15:30:07 Malastare crmd: [10692]: info: do_dc_release: DC role released
Jul 05 15:30:07 Malastare crmd: [10692]: info: pe_connection_destroy: Connection to the Policy Engine released
Jul 05 15:30:07 Malastare crmd: [10692]: ERROR: send_ipc_message: IPC Channel to 10687 is not connected
Jul 05 15:30:07 Malastare crmd: [10692]: info: do_te_control: Transitioner is now inactive
Jul 05 15:30:07 Malastare crmd: [10692]: ERROR: do_log: FSA: Input I_TERMINATE from do_recover() received in state S_RECOVERY
Jul 05 15:30:07 Malastare crmd: [10692]: notice: do_state_transition: State transition S_RECOVERY -> S_TERMINATE [ input=I_TERMINATE cause=C_FSA_INTERNAL origin=do_recover ]
Jul 05 15:30:07 Malastare crmd: [10692]: info: do_shutdown: Disconnecting STONITH...
Jul 05 15:30:07 Malastare crmd: [10692]: info: tengine_stonith_connection_destroy: Fencing daemon disconnected
Jul 05 15:30:07 Malastare crmd: [10692]: ERROR: verify_stopped: Resource drbd_svn:0 was active at shutdown.  You may ignore this error if it is unmanaged.
Jul 05 15:30:07 Malastare crmd: [10692]: ERROR: verify_stopped: Resource drbd_www:0 was active at shutdown.  You may ignore this error if it is unmanaged.
Jul 05 15:30:07 Malastare crmd: [10692]: ERROR: verify_stopped: Resource drbd_backupvi:0 was active at shutdown.  You may ignore this error if it is unmanaged.
Jul 05 15:30:07 Malastare crmd: [10692]: ERROR: verify_stopped: Resource soapi-fencing-vindemiatrix was active at shutdown.  You may ignore this error if it is unmanaged.
Jul 05 15:30:07 Malastare crmd: [10692]: ERROR: verify_stopped: Resource drbd_pgsql:0 was active at shutdown.  You may ignore this error if it is unmanaged.
Jul 05 15:30:07 Malastare crmd: [10692]: info: do_lrm_control: Disconnected from the LRM
Jul 05 15:30:07 Malastare crmd: [10692]: notice: terminate_ais_connection: Disconnecting from Corosync
Jul 05 15:30:08 Malastare crmd: [10692]: info: do_ha_control: Disconnected from OpenAIS
Jul 05 15:30:08 Malastare crmd: [10692]: info: do_cib_control: Disconnecting CIB
Jul 05 15:30:08 Malastare crmd: [10692]: ERROR: send_ipc_message: IPC Channel to 10687 is not connected
Jul 05 15:30:08 Malastare crmd: [10692]: ERROR: send_ipc_message: IPC Channel to 10687 is not connected
Jul 05 15:30:08 Malastare crmd: [10692]: ERROR: cib_native_perform_op_delegate: Sending message to CIB service FAILED
Jul 05 15:30:08 Malastare crmd: [10692]: info: crmd_cib_connection_destroy: Connection to the CIB terminated...
Jul 05 15:30:08 Malastare crmd: [10692]: ERROR: verify_stopped: Resource drbd_svn:0 was active at shutdown.  You may ignore this error if it is unmanaged.
Jul 05 15:30:08 Malastare crmd: [10692]: ERROR: verify_stopped: Resource drbd_www:0 was active at shutdown.  You may ignore this error if it is unmanaged.
Jul 05 15:30:08 Malastare crmd: [10692]: ERROR: verify_stopped: Resource drbd_backupvi:0 was active at shutdown.  You may ignore this error if it is unmanaged.
Jul 05 15:30:08 Malastare crmd: [10692]: ERROR: verify_stopped: Resource soapi-fencing-vindemiatrix was active at shutdown.  You may ignore this error if it is unmanaged.
Jul 05 15:30:08 Malastare crmd: [10692]: ERROR: verify_stopped: Resource drbd_pgsql:0 was active at shutdown.  You may ignore this error if it is unmanaged.
Jul 05 15:30:08 Malastare crmd: [10692]: info: do_exit: Performing A_EXIT_0 - gracefully exiting the CRMd
Jul 05 15:30:08 Malastare crmd: [10692]: ERROR: do_exit: Could not recover from internal error
Jul 05 15:30:08 Malastare crmd: [10692]: info: free_mem: Dropping I_PENDING: [ state=S_TERMINATE cause=C_FSA_INTERNAL origin=do_election_vote ]
Jul 05 15:30:08 Malastare crmd: [10692]: info: free_mem: Dropping I_RELEASE_SUCCESS: [ state=S_TERMINATE cause=C_FSA_INTERNAL origin=do_dc_release ]
Jul 05 15:30:08 Malastare crmd: [10692]: info: free_mem: Dropping I_TERMINATE: [ state=S_TERMINATE cause=C_FSA_INTERNAL origin=do_stop ]
Jul 05 15:30:08 Malastare crmd: [10692]: info: crm_xml_cleanup: Cleaning up memory from libxml2
Jul 05 15:30:08 Malastare crmd: [10692]: info: do_exit: [crmd] stopped (2)
Jul 05 15:30:08 Malastare pacemakerd: [10683]: ERROR: pcmk_child_exit: Child process crmd exited (pid=10692, rc=2)
Jul 05 15:30:08 Malastare pacemakerd: [10683]: WARN: send_ipc_message: IPC Channel to 10692 is not connected
Jul 05 15:30:08 Malastare pacemakerd: [10683]: ERROR: send_cpg_message: Sending message via cpg FAILED: (rc=9) Bad handle
Jul 05 15:30:08 Malastare pacemakerd: [10683]: notice: stop_child: Stopping pengine: Sent -15 to process 10691
Jul 05 15:30:08 Malastare pacemakerd: [10683]: info: pcmk_child_exit: Child process pengine exited (pid=10691, rc=0)
Jul 05 15:30:08 Malastare pacemakerd: [10683]: ERROR: send_cpg_message: Sending message via cpg FAILED: (rc=9) Bad handle
Jul 05 15:30:08 Malastare pacemakerd: [10683]: notice: stop_child: Stopping lrmd: Sent -15 to process 10689
Jul 05 15:30:08 Malastare lrmd: [10689]: info: lrmd is shutting down
Jul 05 15:30:08 Malastare pacemakerd: [10683]: info: pcmk_child_exit: Child process lrmd exited (pid=10689, rc=0)
Jul 05 15:30:08 Malastare pacemakerd: [10683]: ERROR: send_cpg_message: Sending message via cpg FAILED: (rc=9) Bad handle
Jul 05 15:30:08 Malastare pacemakerd: [10683]: notice: pcmk_shutdown_worker: Shutdown complete
Jul 05 15:30:08 Malastare pacemakerd: [10683]: info: main: Exiting pacemakerd
Jul 05 15:30:11 corosync [MAIN  ] Corosync Cluster Engine ('1.4.2'): started and ready to provide service.
Jul 05 15:30:11 corosync [MAIN  ] Corosync built-in features: nss
Jul 05 15:30:11 corosync [MAIN  ] Successfully read main configuration file '/etc/corosync/corosync.conf'.
Jul 05 15:30:11 corosync [TOTEM ] Initializing transport (UDP/IP Unicast).
Jul 05 15:30:11 corosync [TOTEM ] Initializing transmit/receive security: libtomcrypt SOBER128/SHA1HMAC (mode 0).
Set r/w permissions for uid=0, gid=0 on /var/log/corosync.log
Jul 05 15:30:11 corosync [TOTEM ] The network interface [10.88.0.1] is now up.
Jul 05 15:30:11 corosync [pcmk  ] info: process_ais_conf: Reading configure
Jul 05 15:30:11 corosync [pcmk  ] info: config_find_init: Local handle: 4730966301143465987 for logging
Jul 05 15:30:11 corosync [pcmk  ] info: config_find_next: Processing additional logging options...
Jul 05 15:30:11 corosync [pcmk  ] info: get_config_opt: Found 'off' for option: debug
Jul 05 15:30:11 corosync [pcmk  ] info: get_config_opt: Found 'yes' for option: to_logfile
Jul 05 15:30:11 corosync [pcmk  ] info: get_config_opt: Found '/var/log/corosync.log' for option: logfile
Jul 05 15:30:11 corosync [pcmk  ] info: get_config_opt: Found 'yes' for option: to_syslog
Jul 05 15:30:11 corosync [pcmk  ] info: get_config_opt: Defaulting to 'daemon' for option: syslog_facility
Jul 05 15:30:11 corosync [pcmk  ] info: config_find_init: Local handle: 7739444317642555396 for quorum
Jul 05 15:30:11 corosync [pcmk  ] info: config_find_next: No additional configuration supplied for: quorum
Jul 05 15:30:11 corosync [pcmk  ] info: get_config_opt: No default for option: provider
Jul 05 15:30:11 corosync [pcmk  ] info: config_find_init: Local handle: 5650605097994944517 for service
Jul 05 15:30:11 corosync [pcmk  ] info: config_find_next: Processing additional service options...
Jul 05 15:30:11 corosync [pcmk  ] info: get_config_opt: Found '1' for option: ver
Jul 05 15:30:11 corosync [pcmk  ] info: process_ais_conf: Enabling MCP mode: Use the Pacemaker init script to complete Pacemaker startup
Jul 05 15:30:11 corosync [pcmk  ] info: get_config_opt: Defaulting to 'pcmk' for option: clustername
Jul 05 15:30:11 corosync [pcmk  ] info: get_config_opt: Found 'no' in ENV for option: use_logd
Jul 05 15:30:11 corosync [pcmk  ] info: get_config_opt: Defaulting to 'no' for option: use_mgmtd
Jul 05 15:30:11 corosync [pcmk  ] info: pcmk_startup: CRM: Initialized
Jul 05 15:30:11 corosync [pcmk  ] Logging: Initialized pcmk_startup
Jul 05 15:30:11 corosync [pcmk  ] info: pcmk_startup: Maximum core file size is: 18446744073709551615
Jul 05 15:30:11 corosync [pcmk  ] info: pcmk_startup: Service: 9
Jul 05 15:30:11 corosync [pcmk  ] info: pcmk_startup: Local hostname: Malastare
Jul 05 15:30:11 corosync [pcmk  ] info: pcmk_update_nodeid: Local node id: 16799754
Jul 05 15:30:11 corosync [pcmk  ] info: update_member: Creating entry for node 16799754 born on 0
Jul 05 15:30:11 corosync [pcmk  ] info: update_member: 0x691980 Node 16799754 now known as Malastare (was: (null))
Jul 05 15:30:11 corosync [pcmk  ] info: update_member: Node Malastare now has 1 quorum votes (was 0)
Jul 05 15:30:11 corosync [pcmk  ] info: update_member: Node 16799754/Malastare is now: member
Jul 05 15:30:11 corosync [SERV  ] Service engine loaded: Pacemaker Cluster Manager 1.1.7
Jul 05 15:30:11 corosync [SERV  ] Service engine loaded: corosync extended virtual synchrony service
Jul 05 15:30:11 corosync [SERV  ] Service engine loaded: corosync configuration service
Jul 05 15:30:11 corosync [SERV  ] Service engine loaded: corosync cluster closed process group service v1.01
Jul 05 15:30:11 corosync [SERV  ] Service engine loaded: corosync cluster config database access v1.01
Jul 05 15:30:11 corosync [SERV  ] Service engine loaded: corosync profile loading service
Jul 05 15:30:11 corosync [SERV  ] Service engine loaded: corosync cluster quorum service v0.1
Jul 05 15:30:11 corosync [MAIN  ] Compatibility mode set to whitetank.  Using V1 and V2 of the synchronization engine.
Jul 05 15:30:11 corosync [pcmk  ] notice: pcmk_peer_update: Transitional membership event on ring 1564: memb=0, new=0, lost=0
Jul 05 15:30:11 corosync [pcmk  ] notice: pcmk_peer_update: Stable membership event on ring 1564: memb=1, new=1, lost=0
Jul 05 15:30:11 corosync [pcmk  ] info: pcmk_peer_update: NEW:  Malastare 16799754
Jul 05 15:30:11 corosync [pcmk  ] info: pcmk_peer_update: MEMB: Malastare 16799754
Jul 05 15:30:11 corosync [TOTEM ] A processor joined or left the membership and a new membership was formed.
Jul 05 15:30:11 corosync [CPG   ] chosen downlist: sender r(0) ip(10.88.0.1) ; members(old:0 left:0)
Jul 05 15:30:11 corosync [MAIN  ] Completed service synchronization, ready to provide service.
Jul 05 15:30:11 corosync [pcmk  ] notice: pcmk_peer_update: Transitional membership event on ring 1568: memb=1, new=0, lost=0
Jul 05 15:30:11 corosync [pcmk  ] info: pcmk_peer_update: memb: Malastare 16799754
Jul 05 15:30:11 corosync [pcmk  ] notice: pcmk_peer_update: Stable membership event on ring 1568: memb=2, new=1, lost=0
Jul 05 15:30:11 corosync [pcmk  ] info: update_member: Creating entry for node 33576970 born on 1568
Jul 05 15:30:11 corosync [pcmk  ] info: update_member: Node 33576970/unknown is now: member
Jul 05 15:30:11 corosync [pcmk  ] info: pcmk_peer_update: NEW:  .pending. 33576970
Jul 05 15:30:11 corosync [pcmk  ] info: pcmk_peer_update: MEMB: Malastare 16799754
Jul 05 15:30:11 corosync [pcmk  ] info: pcmk_peer_update: MEMB: .pending. 33576970
Jul 05 15:30:11 corosync [pcmk  ] info: send_member_notification: Sending membership update 1568 to 0 children
Jul 05 15:30:11 corosync [pcmk  ] info: update_member: 0x691980 Node 16799754 ((null)) born on: 1568
Jul 05 15:30:11 corosync [TOTEM ] A processor joined or left the membership and a new membership was formed.
Jul 05 15:30:11 corosync [pcmk  ] info: update_member: 0x641c50 Node 33576970 (Vindemiatrix) born on: 1568
Jul 05 15:30:11 corosync [pcmk  ] info: update_member: 0x641c50 Node 33576970 now known as Vindemiatrix (was: (null))
Jul 05 15:30:11 corosync [pcmk  ] info: update_member: Node Vindemiatrix now has 1 quorum votes (was 0)
Jul 05 15:30:11 corosync [pcmk  ] info: send_member_notification: Sending membership update 1568 to 0 children
Jul 05 15:30:11 corosync [pcmk  ] WARN: route_ais_message: Sending message to local.crmd failed: ipc delivery failed (rc=-2)
Jul 05 15:30:11 corosync [pcmk  ] WARN: route_ais_message: Sending message to local.crmd failed: ipc delivery failed (rc=-2)
Jul 05 15:30:12 corosync [CPG   ] chosen downlist: sender r(0) ip(10.88.0.1) ; members(old:1 left:0)
Jul 05 15:30:12 corosync [MAIN  ] Completed service synchronization, ready to provide service.
Jul 05 15:30:16 Malastare pacemakerd: [13940]: info: Invoked: pacemakerd 
Jul 05 15:30:16 Malastare pacemakerd: [13940]: info: crm_log_init_worker: Changed active directory to /var/lib/heartbeat/cores/root
Jul 05 15:30:16 Malastare pacemakerd: [13940]: info: get_cluster_type: Cluster type is: 'openais'
Jul 05 15:30:16 Malastare pacemakerd: [13940]: info: read_config: Reading configure for stack: classic openais (with plugin)
Jul 05 15:30:16 Malastare pacemakerd: [13940]: info: config_find_next: Processing additional service options...
Jul 05 15:30:16 Malastare pacemakerd: [13940]: info: get_config_opt: Found 'pacemaker' for option: name
Jul 05 15:30:16 Malastare pacemakerd: [13940]: info: get_config_opt: Found '1' for option: ver
Jul 05 15:30:16 Malastare pacemakerd: [13940]: info: get_config_opt: Defaulting to 'no' for option: use_logd
Jul 05 15:30:16 Malastare pacemakerd: [13940]: info: get_config_opt: Defaulting to 'no' for option: use_mgmtd
Jul 05 15:30:16 Malastare pacemakerd: [13940]: info: config_find_next: Processing additional logging options...
Jul 05 15:30:16 Malastare pacemakerd: [13940]: info: get_config_opt: Found 'off' for option: debug
Jul 05 15:30:16 Malastare pacemakerd: [13940]: info: get_config_opt: Found '/var/log/corosync.log' for option: logfile
Jul 05 15:30:16 Malastare pacemakerd: [13940]: info: get_config_opt: Found 'yes' for option: to_logfile
Jul 05 15:30:16 Malastare pacemakerd: [13940]: info: get_config_opt: Found 'yes' for option: to_syslog
Jul 05 15:30:16 Malastare pacemakerd: [13940]: info: get_config_opt: Defaulting to 'daemon' for option: syslog_facility
Set r/w permissions for uid=106, gid=0 on /var/log/corosync.log
Jul 05 15:30:16 Malastare pacemakerd: [13942]: info: crm_log_init_worker: Changed active directory to /var/lib/heartbeat/cores/root
Jul 05 15:30:16 Malastare pacemakerd: [13942]: notice: main: Starting Pacemaker 1.1.7 (Build: ee0730e13d124c3d58f00016c3376a1de5323cff):  generated-manpages agent-manpages ncurses  heartbeat corosync-plugin snmp libesmtp
Jul 05 15:30:16 Malastare pacemakerd: [13942]: info: main: Maximum core file size is: 18446744073709551615
Jul 05 15:30:16 Malastare pacemakerd: [13942]: notice: update_node_processes: 0x60e240 Node 16799754 now known as Malastare, was: 
Jul 05 15:30:16 Malastare pacemakerd: [13942]: info: start_child: Forked child 13946 for process cib
Jul 05 15:30:16 Malastare pacemakerd: [13942]: info: start_child: Forked child 13947 for process stonith-ng
Jul 05 15:30:16 Malastare pacemakerd: [13942]: info: start_child: Forked child 13948 for process lrmd
Jul 05 15:30:16 Malastare pacemakerd: [13942]: info: start_child: Forked child 13949 for process attrd
Jul 05 15:30:16 Malastare pacemakerd: [13942]: info: start_child: Forked child 13950 for process pengine
Jul 05 15:30:16 Malastare pacemakerd: [13942]: info: start_child: Forked child 13951 for process crmd
Jul 05 15:30:16 Malastare pacemakerd: [13942]: info: main: Starting mainloop
Jul 05 15:30:16 Malastare stonith-ng: [13947]: info: Invoked: /usr/lib/pacemaker/stonithd 
Jul 05 15:30:16 Malastare cib: [13946]: info: crm_log_init_worker: Changed active directory to /var/lib/heartbeat/cores/hacluster
Jul 05 15:30:16 Malastare cib: [13946]: info: retrieveCib: Reading cluster configuration from: /var/lib/heartbeat/crm/cib.xml (digest: /var/lib/heartbeat/crm/cib.xml.sig)
Jul 05 15:30:16 Malastare stonith-ng: [13947]: info: crm_log_init_worker: Changed active directory to /var/lib/heartbeat/cores/root
Jul 05 15:30:16 Malastare attrd: [13949]: info: Invoked: /usr/lib/pacemaker/attrd 
Jul 05 15:30:16 Malastare stonith-ng: [13947]: info: get_cluster_type: Cluster type is: 'openais'
Jul 05 15:30:16 Malastare stonith-ng: [13947]: notice: crm_cluster_connect: Connecting to cluster infrastructure: classic openais (with plugin)
Jul 05 15:30:16 Malastare stonith-ng: [13947]: info: init_ais_connection_classic: Creating connection to our Corosync plugin
Jul 05 15:30:16 Malastare attrd: [13949]: notice: crm_cluster_connect: Connecting to cluster infrastructure: classic openais (with plugin)
Jul 05 15:30:16 Malastare cib: [13946]: info: validate_with_relaxng: Creating RNG parser context
Jul 05 15:30:16 Malastare pengine: [13950]: info: Invoked: /usr/lib/pacemaker/pengine 
Jul 05 15:30:16 Malastare lrmd: [13948]: info: enabling coredumps
Jul 05 15:30:16 Malastare lrmd: [13948]: WARN: Core dumps could be lost if multiple dumps occur.
Jul 05 15:30:16 Malastare lrmd: [13948]: WARN: Consider setting non-default value in /proc/sys/kernel/core_pattern (or equivalent) for maximum supportability
Jul 05 15:30:16 Malastare lrmd: [13948]: WARN: Consider setting /proc/sys/kernel/core_uses_pid (or equivalent) to 1 for maximum supportability
Jul 05 15:30:16 Malastare lrmd: [13948]: info: Started.
Jul 05 15:30:16 Malastare crmd: [13951]: info: Invoked: /usr/lib/pacemaker/crmd 
Jul 05 15:30:16 Malastare crmd: [13951]: info: crm_log_init_worker: Changed active directory to /var/lib/heartbeat/cores/hacluster
Jul 05 15:30:16 Malastare crmd: [13951]: notice: main: CRM Git Version: ee0730e13d124c3d58f00016c3376a1de5323cff

Jul 05 15:30:16 corosync [pcmk  ] info: pcmk_ipc: Recorded connection 0x6aa7a0 for attrd/0
Jul 05 15:30:16 Malastare stonith-ng: [13947]: info: init_ais_connection_classic: AIS connection established
Jul 05 15:30:16 Malastare attrd: [13949]: notice: main: Starting mainloop...
Jul 05 15:30:16 corosync [pcmk  ] info: pcmk_ipc: Recorded connection 0x6a6440 for stonith-ng/0
Jul 05 15:30:16 Malastare stonith-ng: [13947]: info: get_ais_nodeid: Server details: id=16799754 uname=Malastare cname=pcmk
Jul 05 15:30:16 Malastare stonith-ng: [13947]: info: init_ais_connection_once: Connection to 'classic openais (with plugin)': established
Jul 05 15:30:16 Malastare stonith-ng: [13947]: info: crm_new_peer: Node Malastare now has id: 16799754
Jul 05 15:30:16 Malastare stonith-ng: [13947]: info: crm_new_peer: Node 16799754 is now known as Malastare
Jul 05 15:30:16 Malastare cib: [13946]: info: startCib: CIB Initialization completed successfully
Jul 05 15:30:16 Malastare cib: [13946]: info: get_cluster_type: Cluster type is: 'openais'
Jul 05 15:30:16 Malastare cib: [13946]: notice: crm_cluster_connect: Connecting to cluster infrastructure: classic openais (with plugin)
Jul 05 15:30:16 Malastare cib: [13946]: info: init_ais_connection_classic: Creating connection to our Corosync plugin
Jul 05 15:30:16 Malastare cib: [13946]: info: init_ais_connection_classic: AIS connection established
Jul 05 15:30:16 corosync [pcmk  ] info: pcmk_ipc: Recorded connection 0x6b0a20 for cib/0
Jul 05 15:30:16 corosync [pcmk  ] info: pcmk_ipc: Sending membership update 1568 to cib
Jul 05 15:30:16 Malastare cib: [13946]: info: get_ais_nodeid: Server details: id=16799754 uname=Malastare cname=pcmk
Jul 05 15:30:16 Malastare cib: [13946]: info: init_ais_connection_once: Connection to 'classic openais (with plugin)': established
Jul 05 15:30:16 Malastare cib: [13946]: info: crm_new_peer: Node Malastare now has id: 16799754
Jul 05 15:30:16 Malastare cib: [13946]: info: crm_new_peer: Node 16799754 is now known as Malastare
Jul 05 15:30:16 Malastare cib: [13946]: info: cib_init: Starting cib mainloop
Jul 05 15:30:16 Malastare cib: [13946]: notice: ais_dispatch_message: Membership 1568: quorum acquired
Jul 05 15:30:16 Malastare cib: [13946]: info: crm_new_peer: Node Vindemiatrix now has id: 33576970
Jul 05 15:30:16 Malastare cib: [13946]: info: crm_new_peer: Node 33576970 is now known as Vindemiatrix
Jul 05 15:30:16 Malastare cib: [13946]: info: crm_update_peer: Node Vindemiatrix: id=33576970 state=member (new) addr=r(0) ip(10.88.0.2)  votes=1 born=1568 seen=1568 proc=00000000000000000000000000000000
Jul 05 15:30:16 Malastare cib: [13946]: info: crm_update_peer: Node Malastare: id=16799754 state=member (new) addr=r(0) ip(10.88.0.1)  (new) votes=1 (new) born=1568 seen=1568 proc=00000000000000000000000000000000
Jul 05 15:30:16 Malastare pacemakerd: [13942]: notice: update_node_processes: 0x60e200 Node 33576970 now known as Vindemiatrix, was: 
Jul 05 15:30:17 Malastare crmd: [13951]: info: do_cib_control: CIB connection established
Jul 05 15:30:17 Malastare crmd: [13951]: info: get_cluster_type: Cluster type is: 'openais'
Jul 05 15:30:17 Malastare crmd: [13951]: notice: crm_cluster_connect: Connecting to cluster infrastructure: classic openais (with plugin)
Jul 05 15:30:17 Malastare crmd: [13951]: info: init_ais_connection_classic: Creating connection to our Corosync plugin
Jul 05 15:30:17 Malastare stonith-ng: [13947]: notice: setup_cib: Watching for stonith topology changes
Jul 05 15:30:17 Malastare stonith-ng: [13947]: info: main: Starting stonith-ng mainloop
Jul 05 15:30:17 Malastare stonith-ng: [13947]: info: crm_new_peer: Node Vindemiatrix now has id: 33576970
Jul 05 15:30:17 Malastare stonith-ng: [13947]: info: crm_new_peer: Node 33576970 is now known as Vindemiatrix
Jul 05 15:30:17 Malastare crmd: [13951]: info: init_ais_connection_classic: AIS connection established
Jul 05 15:30:17 corosync [pcmk  ] info: pcmk_ipc: Recorded connection 0x6b5a00 for crmd/0
Jul 05 15:30:17 corosync [pcmk  ] info: pcmk_ipc: Sending membership update 1568 to crmd
Jul 05 15:30:17 Malastare crmd: [13951]: info: get_ais_nodeid: Server details: id=16799754 uname=Malastare cname=pcmk
Jul 05 15:30:17 Malastare crmd: [13951]: info: init_ais_connection_once: Connection to 'classic openais (with plugin)': established
Jul 05 15:30:17 Malastare crmd: [13951]: info: crm_new_peer: Node Malastare now has id: 16799754
Jul 05 15:30:17 Malastare crmd: [13951]: info: crm_new_peer: Node 16799754 is now known as Malastare
Jul 05 15:30:17 Malastare crmd: [13951]: info: ais_status_callback: status: Malastare is now unknown
Jul 05 15:30:17 Malastare crmd: [13951]: info: do_ha_control: Connected to the cluster
Jul 05 15:30:17 Malastare crmd: [13951]: info: do_started: Delaying start, no membership data (0000000000100000)
Jul 05 15:30:17 Malastare crmd: [13951]: notice: ais_dispatch_message: Membership 1568: quorum acquired
Jul 05 15:30:17 Malastare crmd: [13951]: info: crm_new_peer: Node Vindemiatrix now has id: 33576970
Jul 05 15:30:17 Malastare crmd: [13951]: info: crm_new_peer: Node 33576970 is now known as Vindemiatrix
Jul 05 15:30:17 Malastare crmd: [13951]: info: ais_status_callback: status: Vindemiatrix is now unknown
Jul 05 15:30:17 Malastare crmd: [13951]: info: ais_status_callback: status: Vindemiatrix is now member (was unknown)
Jul 05 15:30:17 Malastare crmd: [13951]: info: crm_update_peer: Node Vindemiatrix: id=33576970 state=member (new) addr=r(0) ip(10.88.0.2)  votes=1 born=1568 seen=1568 proc=00000000000000000000000000000000
Jul 05 15:30:17 Malastare crmd: [13951]: info: ais_status_callback: status: Malastare is now member (was unknown)
Jul 05 15:30:17 Malastare crmd: [13951]: info: crm_update_peer: Node Malastare: id=16799754 state=member (new) addr=r(0) ip(10.88.0.1)  (new) votes=1 (new) born=1568 seen=1568 proc=00000000000000000000000000000000
Jul 05 15:30:17 Malastare crmd: [13951]: info: ais_dispatch_message: Membership 1568: quorum retained
Jul 05 15:30:17 Malastare crmd: [13951]: notice: crmd_peer_update: Status update: Client Vindemiatrix/crmd now has status [online] (DC=<null>)
Jul 05 15:30:17 Malastare crmd: [13951]: notice: crmd_peer_update: Status update: Client Malastare/crmd now has status [online] (DC=<null>)
Jul 05 15:30:17 Malastare crmd: [13951]: notice: do_started: The local CRM is operational
Jul 05 15:30:17 Malastare crmd: [13951]: notice: do_state_transition: State transition S_STARTING -> S_PENDING [ input=I_PENDING cause=C_FSA_INTERNAL origin=do_started ]
Jul 05 15:30:28 Malastare crmd: [13951]: info: do_election_count_vote: Election 2 (owner: Vindemiatrix) lost: vote from Vindemiatrix (Uptime)
Jul 05 15:30:28 Malastare crmd: [13951]: info: update_dc: Set DC to Vindemiatrix (3.0.6)
Jul 05 15:30:28 Malastare crmd: [13951]: info: erase_status_tag: Deleting xpath: //node_state[@uname='Malastare']/transient_attributes
Jul 05 15:30:28 Malastare crmd: [13951]: info: update_attrd: Connecting to attrd...
Jul 05 15:30:28 Malastare crmd: [13951]: notice: do_state_transition: State transition S_PENDING -> S_NOT_DC [ input=I_NOT_DC cause=C_HA_MESSAGE origin=do_cl_join_finalize_respond ]
Jul 05 15:30:28 Malastare attrd: [13949]: notice: attrd_local_callback: Sending full refresh (origin=crmd)
Jul 05 15:30:28 Malastare lrmd: [13948]: notice: lrmd_rsc_new(): No lrm_rprovider field in message
Jul 05 15:30:28 Malastare lrmd: [13948]: info: rsc:soapi-fencing-malastare probe[2] (pid 13962)
Jul 05 15:30:28 Malastare lrmd: [13948]: notice: lrmd_rsc_new(): No lrm_rprovider field in message
Jul 05 15:30:28 Malastare lrmd: [13948]: info: rsc:soapi-fencing-vindemiatrix probe[3] (pid 13963)
Jul 05 15:30:28 Malastare lrmd: [13948]: info: rsc:drbd_svn:0 probe[4] (pid 13964)
Jul 05 15:30:28 Malastare lrmd: [13948]: info: rsc:drbd_pgsql:0 probe[5] (pid 13979)
Jul 05 15:30:28 Malastare lrmd: [13948]: info: rsc:drbd_backupvi:0 probe[6] (pid 13981)
Jul 05 15:30:28 Malastare lrmd: [13948]: info: rsc:drbd_www:0 probe[7] (pid 13983)
Jul 05 15:30:28 Malastare attrd: [13949]: notice: attrd_trigger_update: Sending flush op to all hosts for: master-drbd_svn:0 (1000)
Jul 05 15:30:28 Malastare attrd: [13949]: notice: attrd_perform_update: Sent update 4: master-drbd_svn:0=1000
Jul 05 15:30:28 Malastare lrmd: [13948]: info: operation monitor[4] on drbd_svn:0 for client 13951: pid 13964 exited with return code 0
Jul 05 15:30:28 Malastare stonith-ng: [13947]: notice: stonith_device_action: Device soapi-fencing-malastare not found
Jul 05 15:30:28 Malastare stonith-ng: [13947]: info: stonith_command: Processed st_execute from lrmd: rc=-12
Jul 05 15:30:28 Malastare attrd: [13949]: notice: attrd_trigger_update: Sending flush op to all hosts for: master-drbd_pgsql:0 (1000)
Jul 05 15:30:28 Malastare attrd: [13949]: notice: attrd_perform_update: Sent update 7: master-drbd_pgsql:0=1000
Jul 05 15:30:28 Malastare attrd: [13949]: notice: attrd_trigger_update: Sending flush op to all hosts for: master-drbd_backupvi:0 (1000)
Jul 05 15:30:28 Malastare attrd: [13949]: notice: attrd_perform_update: Sent update 10: master-drbd_backupvi:0=1000
Jul 05 15:30:28 Malastare attrd: [13949]: notice: attrd_trigger_update: Sending flush op to all hosts for: master-drbd_www:0 (1000)
Jul 05 15:30:28 Malastare attrd: [13949]: notice: attrd_perform_update: Sent update 13: master-drbd_www:0=1000
Jul 05 15:30:28 Malastare lrmd: [13948]: info: operation monitor[2] on soapi-fencing-malastare for client 13951: pid 13962 exited with return code 7
Jul 05 15:30:28 Malastare lrmd: [13948]: info: operation monitor[5] on drbd_pgsql:0 for client 13951: pid 13979 exited with return code 0
Jul 05 15:30:28 Malastare crmd: [13951]: info: process_lrm_event: LRM operation drbd_svn:0_monitor_0 (call=4, rc=0, cib-update=8, confirmed=true) ok
Jul 05 15:30:28 Malastare lrmd: [13948]: info: operation monitor[6] on drbd_backupvi:0 for client 13951: pid 13981 exited with return code 0
Jul 05 15:30:28 Malastare lrmd: [13948]: info: operation monitor[7] on drbd_www:0 for client 13951: pid 13983 exited with return code 0
Jul 05 15:30:28 Malastare stonith-ng: [13947]: notice: stonith_device_action: Device soapi-fencing-vindemiatrix not found
Jul 05 15:30:28 Malastare stonith-ng: [13947]: info: stonith_command: Processed st_execute from lrmd: rc=-12
Jul 05 15:30:29 Malastare lrmd: [13948]: WARN: G_SIG_dispatch: Dispatch function for SIGCHLD was delayed 830 ms (> 100 ms) before being called (GSource: 0x618ee0)
Jul 05 15:30:29 Malastare lrmd: [13948]: info: G_SIG_dispatch: started at 1718211157 should have started at 1718211074
Jul 05 15:30:29 Malastare lrmd: [13948]: info: operation monitor[3] on soapi-fencing-vindemiatrix for client 13951: pid 13963 exited with return code 7
Jul 05 15:30:29 Malastare crmd: [13951]: info: process_lrm_event: LRM operation soapi-fencing-malastare_monitor_0 (call=2, rc=7, cib-update=9, confirmed=true) not running
Jul 05 15:30:29 Malastare crmd: [13951]: info: process_lrm_event: LRM operation drbd_pgsql:0_monitor_0 (call=5, rc=0, cib-update=10, confirmed=true) ok
Jul 05 15:30:29 Malastare crmd: [13951]: info: process_lrm_event: LRM operation drbd_backupvi:0_monitor_0 (call=6, rc=0, cib-update=11, confirmed=true) ok
Jul 05 15:30:29 Malastare crmd: [13951]: info: process_lrm_event: LRM operation drbd_www:0_monitor_0 (call=7, rc=0, cib-update=12, confirmed=true) ok
Jul 05 15:30:29 Malastare crmd: [13951]: info: process_lrm_event: LRM operation soapi-fencing-vindemiatrix_monitor_0 (call=3, rc=7, cib-update=13, confirmed=true) not running
Jul 05 15:30:29 Malastare lrmd: [13948]: info: rsc:fs_www probe[8] (pid 14088)
Jul 05 15:30:29 Malastare lrmd: [13948]: info: rsc:fs_pgsql probe[9] (pid 14089)
Jul 05 15:30:29 Malastare lrmd: [13948]: info: rsc:fs_svn probe[10] (pid 14090)
Jul 05 15:30:29 Malastare lrmd: [13948]: info: rsc:fs_backupvi probe[11] (pid 14091)
Jul 05 15:30:29 Malastare lrmd: [13948]: info: operation monitor[8] on fs_www for client 13951: pid 14088 exited with return code 7
Jul 05 15:30:29 Malastare lrmd: [13948]: info: operation monitor[9] on fs_pgsql for client 13951: pid 14089 exited with return code 7
Jul 05 15:30:29 Malastare lrmd: [13948]: info: operation monitor[10] on fs_svn for client 13951: pid 14090 exited with return code 7
Jul 05 15:30:29 Malastare lrmd: [13948]: info: operation monitor[11] on fs_backupvi for client 13951: pid 14091 exited with return code 7
Jul 05 15:30:29 Malastare crmd: [13951]: info: process_lrm_event: LRM operation fs_www_monitor_0 (call=8, rc=7, cib-update=14, confirmed=true) not running
Jul 05 15:30:29 Malastare crmd: [13951]: info: process_lrm_event: LRM operation fs_pgsql_monitor_0 (call=9, rc=7, cib-update=15, confirmed=true) not running
Jul 05 15:30:29 Malastare crmd: [13951]: info: process_lrm_event: LRM operation fs_svn_monitor_0 (call=10, rc=7, cib-update=16, confirmed=true) not running
Jul 05 15:30:29 Malastare crmd: [13951]: info: process_lrm_event: LRM operation fs_backupvi_monitor_0 (call=11, rc=7, cib-update=17, confirmed=true) not running
Jul 05 15:30:30 Malastare lrmd: [13948]: info: rsc:VirtualIP probe[12] (pid 14246)
Jul 05 15:30:30 Malastare lrmd: [13948]: info: rsc:OVHvIP probe[13] (pid 14247)
Jul 05 15:30:30 Malastare lrmd: [13948]: info: rsc:ProFTPd probe[14] (pid 14248)
Jul 05 15:30:30 Malastare lrmd: [13948]: info: operation monitor[14] on ProFTPd for client 13951: pid 14248 exited with return code 7
Jul 05 15:30:30 Malastare crmd: [13951]: info: process_lrm_event: LRM operation ProFTPd_monitor_0 (call=14, rc=7, cib-update=18, confirmed=true) not running
Jul 05 15:30:30 Malastare lrmd: [13948]: info: operation monitor[12] on VirtualIP for client 13951: pid 14246 exited with return code 7
Jul 05 15:30:30 Malastare crmd: [13951]: info: process_lrm_event: LRM operation VirtualIP_monitor_0 (call=12, rc=7, cib-update=19, confirmed=true) not running
Jul 05 15:30:31 Malastare lrmd: [13948]: info: operation monitor[13] on OVHvIP for client 13951: pid 14247 exited with return code 7
Jul 05 15:30:31 Malastare crmd: [13951]: info: process_lrm_event: LRM operation OVHvIP_monitor_0 (call=13, rc=7, cib-update=20, confirmed=true) not running
Jul 05 15:30:31 Malastare attrd: [13949]: notice: attrd_trigger_update: Sending flush op to all hosts for: probe_complete (true)
Jul 05 15:30:31 Malastare attrd: [13949]: notice: attrd_perform_update: Sent update 16: probe_complete=true
Jul 05 15:30:32 Malastare lrmd: [13948]: info: rsc:soapi-fencing-vindemiatrix start[15] (pid 14291)
Jul 05 15:30:32 Malastare lrmd: [13948]: info: rsc:ProFTPd start[16] (pid 14292)
Jul 05 15:30:32 Malastare lrmd: [13948]: info: rsc:drbd_svn:0 notify[17] (pid 14293)
Jul 05 15:30:32 Malastare lrmd: [13948]: info: rsc:drbd_pgsql:0 notify[18] (pid 14294)
Jul 05 15:30:32 Malastare lrmd: [13948]: info: rsc:drbd_backupvi:0 notify[19] (pid 14297)
Jul 05 15:30:32 Malastare lrmd: [13948]: info: RA output: (drbd_svn:0:notify:stdout) 

Jul 05 15:30:32 Malastare lrmd: [13948]: info: operation notify[17] on drbd_svn:0 for client 13951: pid 14293 exited with return code 0
Jul 05 15:30:32 Malastare lrmd: [13948]: info: RA output: (drbd_pgsql:0:notify:stdout) 

Jul 05 15:30:32 Malastare lrmd: [13948]: info: operation notify[18] on drbd_pgsql:0 for client 13951: pid 14294 exited with return code 0
Jul 05 15:30:32 Malastare crmd: [13951]: info: process_lrm_event: LRM operation drbd_svn:0_notify_0 (call=17, rc=0, cib-update=0, confirmed=true) ok
Jul 05 15:30:32 Malastare crmd: [13951]: info: process_lrm_event: LRM operation drbd_pgsql:0_notify_0 (call=18, rc=0, cib-update=0, confirmed=true) ok
Jul 05 15:30:32 Malastare lrmd: [13948]: info: RA output: (drbd_backupvi:0:notify:stdout) 

Jul 05 15:30:32 Malastare lrmd: [13948]: info: operation notify[19] on drbd_backupvi:0 for client 13951: pid 14297 exited with return code 0
Jul 05 15:30:32 Malastare crmd: [13951]: info: process_lrm_event: LRM operation drbd_backupvi:0_notify_0 (call=19, rc=0, cib-update=0, confirmed=true) ok
Jul 05 15:30:32 Malastare stonith-ng: [13947]: info: stonith_device_register: Added 'soapi-fencing-vindemiatrix' to the device list (1 active devices)
Jul 05 15:30:32 Malastare stonith-ng: [13947]: info: stonith_command: Processed st_device_register from lrmd: rc=0
Jul 05 15:30:32 Malastare stonith-ng: [13947]: info: stonith_command: Processed st_execute from lrmd: rc=-1
Jul 05 15:30:33 Malastare lrmd: [13948]: info: rsc:drbd_www:0 notify[20] (pid 14393)
Jul 05 15:30:33 Malastare lrmd: [13948]: info: RA output: (drbd_www:0:notify:stdout) drbdsetup 2 net ipv4:10.88.0.1:7791 ipv4:10.88.0.2:7791 A --set-defaults --create-device --data-integrity-alg=sha1 --after-sb-0pri=discard-younger-primary 

Jul 05 15:30:33 Malastare lrmd: [13948]: info: operation notify[20] on drbd_www:0 for client 13951: pid 14393 exited with return code 0
Jul 05 15:30:33 Malastare crmd: [13951]: info: process_lrm_event: LRM operation drbd_www:0_notify_0 (call=20, rc=0, cib-update=0, confirmed=true) ok
Jul 05 15:30:33 Malastare lrmd: [13948]: info: rsc:drbd_backupvi:0 notify[21] (pid 14429)
Jul 05 15:30:33 Malastare lrmd: [13948]: info: RA output: (drbd_backupvi:0:notify:stdout) 

Jul 05 15:30:33 Malastare lrmd: [13948]: info: operation notify[21] on drbd_backupvi:0 for client 13951: pid 14429 exited with return code 0
Jul 05 15:30:33 Malastare crmd: [13951]: info: process_lrm_event: LRM operation drbd_backupvi:0_notify_0 (call=21, rc=0, cib-update=0, confirmed=true) ok
Jul 05 15:30:33 Malastare lrmd: [13948]: info: rsc:drbd_pgsql:0 notify[22] (pid 14459)
Jul 05 15:30:33 Malastare lrmd: [13948]: info: rsc:drbd_svn:0 notify[23] (pid 14464)
Jul 05 15:30:34 Malastare lrmd: [13948]: info: RA output: (drbd_pgsql:0:notify:stdout) 

Jul 05 15:30:34 Malastare lrmd: [13948]: info: operation notify[22] on drbd_pgsql:0 for client 13951: pid 14459 exited with return code 0
Jul 05 15:30:34 Malastare crmd: [13951]: info: process_lrm_event: LRM operation drbd_pgsql:0_notify_0 (call=22, rc=0, cib-update=0, confirmed=true) ok
Jul 05 15:30:34 Malastare lrmd: [13948]: info: RA output: (drbd_svn:0:notify:stdout) 

Jul 05 15:30:34 Malastare lrmd: [13948]: info: operation notify[23] on drbd_svn:0 for client 13951: pid 14464 exited with return code 0
Jul 05 15:30:34 Malastare crmd: [13951]: info: process_lrm_event: LRM operation drbd_svn:0_notify_0 (call=23, rc=0, cib-update=0, confirmed=true) ok
Jul 05 15:30:34 Malastare lrmd: [13948]: info: operation start[16] on ProFTPd for client 13951: pid 14292 exited with return code 0
Jul 05 15:30:34 Malastare crmd: [13951]: info: process_lrm_event: LRM operation ProFTPd_start_0 (call=16, rc=0, cib-update=21, confirmed=true) ok
Jul 05 15:30:36 Malastare lrmd: [13948]: info: rsc:drbd_www:0 notify[24] (pid 14580)
Jul 05 15:30:36 Malastare lrmd: [13948]: info: RA output: (drbd_www:0:notify:stdout) 

Jul 05 15:30:36 Malastare lrmd: [13948]: info: operation notify[24] on drbd_www:0 for client 13951: pid 14580 exited with return code 0
Jul 05 15:30:36 Malastare crmd: [13951]: info: process_lrm_event: LRM operation drbd_www:0_notify_0 (call=24, rc=0, cib-update=0, confirmed=true) ok
Jul 05 15:30:37 Malastare stonith: [14388]: info: external/ovh device OK.
Jul 05 15:30:37 Malastare stonith-ng: [13947]: info: log_operation: soapi-fencing-vindemiatrix: Performing: stonith -t external/ovh -S
Jul 05 15:30:37 Malastare stonith-ng: [13947]: info: log_operation: soapi-fencing-vindemiatrix: success:  0
Jul 05 15:30:37 Malastare lrmd: [13948]: info: operation start[15] on soapi-fencing-vindemiatrix for client 13951: pid 14291 exited with return code 0
Jul 05 15:30:37 Malastare crmd: [13951]: info: process_lrm_event: LRM operation soapi-fencing-vindemiatrix_start_0 (call=15, rc=0, cib-update=22, confirmed=true) ok
Jul 05 15:30:38 Malastare lrmd: [13948]: info: rsc:ProFTPd monitor[25] (pid 14630)
Jul 05 15:30:38 Malastare lrmd: [13948]: info: rsc:drbd_svn:0 notify[26] (pid 14631)
Jul 05 15:30:38 Malastare lrmd: [13948]: info: rsc:drbd_pgsql:0 notify[27] (pid 14632)
Jul 05 15:30:38 Malastare lrmd: [13948]: info: rsc:drbd_backupvi:0 notify[28] (pid 14635)
Jul 05 15:30:38 Malastare lrmd: [13948]: info: operation monitor[25] on ProFTPd for client 13951: pid 14630 exited with return code 0
Jul 05 15:30:38 Malastare crmd: [13951]: info: process_lrm_event: LRM operation ProFTPd_monitor_60000 (call=25, rc=0, cib-update=23, confirmed=false) ok
Jul 05 15:30:38 Malastare lrmd: [13948]: info: operation notify[26] on drbd_svn:0 for client 13951: pid 14631 exited with return code 0
Jul 05 15:30:38 Malastare lrmd: [13948]: info: operation notify[27] on drbd_pgsql:0 for client 13951: pid 14632 exited with return code 0
Jul 05 15:30:38 Malastare crmd: [13951]: info: process_lrm_event: LRM operation drbd_svn:0_notify_0 (call=26, rc=0, cib-update=0, confirmed=true) ok
Jul 05 15:30:38 Malastare crmd: [13951]: info: process_lrm_event: LRM operation drbd_pgsql:0_notify_0 (call=27, rc=0, cib-update=0, confirmed=true) ok
Jul 05 15:30:38 Malastare lrmd: [13948]: info: operation notify[28] on drbd_backupvi:0 for client 13951: pid 14635 exited with return code 0
Jul 05 15:30:38 Malastare crmd: [13951]: info: process_lrm_event: LRM operation drbd_backupvi:0_notify_0 (call=28, rc=0, cib-update=0, confirmed=true) ok
Jul 05 15:30:38 Malastare lrmd: [13948]: info: rsc:drbd_svn:0 promote[30] (pid 14707)
Jul 05 15:30:38 Malastare lrmd: [13948]: info: rsc:drbd_pgsql:0 promote[31] (pid 14730)
Jul 05 15:30:39 Malastare lrmd: [13948]: info: rsc:drbd_backupvi:0 promote[32] (pid 14742)
Jul 05 15:30:39 Malastare lrmd: [13948]: info: RA output: (drbd_pgsql:0:promote:stdout) 

Jul 05 15:30:39 Malastare lrmd: [13948]: info: operation promote[31] on drbd_pgsql:0 for client 13951: pid 14730 exited with return code 0
Jul 05 15:30:39 Malastare crmd: [13951]: info: process_lrm_event: LRM operation drbd_pgsql:0_promote_0 (call=31, rc=0, cib-update=24, confirmed=true) ok
Jul 05 15:30:39 Malastare lrmd: [13948]: info: RA output: (drbd_svn:0:promote:stdout) 

Jul 05 15:30:39 Malastare lrmd: [13948]: info: RA output: (drbd_backupvi:0:promote:stdout) 

Jul 05 15:30:39 Malastare lrmd: [13948]: info: operation promote[30] on drbd_svn:0 for client 13951: pid 14707 exited with return code 0
Jul 05 15:30:39 Malastare lrmd: [13948]: info: operation promote[32] on drbd_backupvi:0 for client 13951: pid 14742 exited with return code 0
Jul 05 15:30:39 Malastare crmd: [13951]: info: process_lrm_event: LRM operation drbd_svn:0_promote_0 (call=30, rc=0, cib-update=25, confirmed=true) ok
Jul 05 15:30:39 Malastare crmd: [13951]: info: process_lrm_event: LRM operation drbd_backupvi:0_promote_0 (call=32, rc=0, cib-update=26, confirmed=true) ok
Jul 05 15:30:39 Malastare lrmd: [13948]: info: rsc:drbd_pgsql:0 notify[33] (pid 14795)
Jul 05 15:30:39 Malastare lrmd: [13948]: info: rsc:drbd_svn:0 notify[34] (pid 14818)
Jul 05 15:30:39 Malastare lrmd: [13948]: info: rsc:drbd_backupvi:0 notify[35] (pid 14820)
Jul 05 15:30:39 Malastare attrd: [13949]: notice: attrd_trigger_update: Sending flush op to all hosts for: master-drbd_pgsql:0 (10000)
Jul 05 15:30:39 Malastare attrd: [13949]: notice: attrd_perform_update: Sent update 27: master-drbd_pgsql:0=10000
Jul 05 15:30:39 Malastare lrmd: [13948]: info: RA output: (drbd_pgsql:0:notify:stdout) 

Jul 05 15:30:39 Malastare lrmd: [13948]: info: operation notify[33] on drbd_pgsql:0 for client 13951: pid 14795 exited with return code 0
Jul 05 15:30:39 Malastare crmd: [13951]: info: process_lrm_event: LRM operation drbd_pgsql:0_notify_0 (call=33, rc=0, cib-update=0, confirmed=true) ok
Jul 05 15:30:39 Malastare attrd: [13949]: notice: attrd_trigger_update: Sending flush op to all hosts for: master-drbd_svn:0 (10000)
Jul 05 15:30:39 Malastare attrd: [13949]: notice: attrd_perform_update: Sent update 29: master-drbd_svn:0=10000
Jul 05 15:30:39 Malastare lrmd: [13948]: info: RA output: (drbd_svn:0:notify:stdout) 

Jul 05 15:30:39 Malastare attrd: [13949]: notice: attrd_trigger_update: Sending flush op to all hosts for: master-drbd_backupvi:0 (10000)
Jul 05 15:30:39 Malastare lrmd: [13948]: info: operation notify[34] on drbd_svn:0 for client 13951: pid 14818 exited with return code 0
Jul 05 15:30:39 Malastare attrd: [13949]: notice: attrd_perform_update: Sent update 31: master-drbd_backupvi:0=10000
Jul 05 15:30:39 Malastare lrmd: [13948]: info: RA output: (drbd_backupvi:0:notify:stdout) 

Jul 05 15:30:39 Malastare crmd: [13951]: info: process_lrm_event: LRM operation drbd_svn:0_notify_0 (call=34, rc=0, cib-update=0, confirmed=true) ok
Jul 05 15:30:39 Malastare lrmd: [13948]: info: operation notify[35] on drbd_backupvi:0 for client 13951: pid 14820 exited with return code 0
Jul 05 15:30:39 Malastare crmd: [13951]: info: process_lrm_event: LRM operation drbd_backupvi:0_notify_0 (call=35, rc=0, cib-update=0, confirmed=true) ok
Jul 05 15:30:39 Malastare lrmd: [13948]: info: rsc:drbd_www:0 notify[29] (pid 14900)
Jul 05 15:30:39 Malastare lrmd: [13948]: info: operation notify[29] on drbd_www:0 for client 13951: pid 14900 exited with return code 0
Jul 05 15:30:39 Malastare crmd: [13951]: info: process_lrm_event: LRM operation drbd_www:0_notify_0 (call=29, rc=0, cib-update=0, confirmed=true) ok
Jul 05 15:30:40 Malastare lrmd: [13948]: info: rsc:fs_pgsql start[36] (pid 14923)
Jul 05 15:30:40 Malastare lrmd: [13948]: info: rsc:fs_svn start[37] (pid 14924)
Jul 05 15:30:40 Malastare lrmd: [13948]: info: rsc:fs_backupvi start[38] (pid 14925)
Jul 05 15:30:40 Malastare lrmd: [13948]: info: rsc:OVHvIP start[39] (pid 14928)
external/ovh:   2012/07/05_15:30:40 INFO: Running start for /dev/drbd/by-res/postgresql on /var/lib/postgresql
external/ovh:   2012/07/05_15:30:40 INFO: Running start for /dev/drbd/by-res/svn on /var/lib/svn
external/ovh:   2012/07/05_15:30:40 INFO: Running start for /dev/drbd/by-res/backupvi on /var/backupvi
external/ovh:   2012/07/05_15:30:40 INFO: Running start for /dev/drbd/by-res/postgresql on /var/lib/postgresql
external/ovh:   2012/07/05_15:30:40 INFO: Running start for /dev/drbd/by-res/svn on /var/lib/svn
external/ovh:   2012/07/05_15:30:40 INFO: Running start for /dev/drbd/by-res/backupvi on /var/backupvi
Jul 05 15:30:40 Malastare lrmd: [13948]: info: RA output: (fs_pgsql:start:stderr) FATAL: Could not load /lib/modules/3.2.13-grsec-xxxx-grs-ipv6-64/modules.dep: No such file or directory

Jul 05 15:30:40 Malastare lrmd: [13948]: info: RA output: (fs_svn:start:stderr) FATAL: Could not load /lib/modules/3.2.13-grsec-xxxx-grs-ipv6-64/modules.dep: No such file or directory

Jul 05 15:30:40 Malastare lrmd: [13948]: info: RA output: (fs_pgsql:start:stderr) FATAL: Could not load /lib/modules/3.2.13-grsec-xxxx-grs-ipv6-64/modules.dep: No such file or directory

Jul 05 15:30:40 Malastare lrmd: [13948]: info: RA output: (fs_svn:start:stderr) FATAL: Could not load /lib/modules/3.2.13-grsec-xxxx-grs-ipv6-64/modules.dep: No such file or directory

Jul 05 15:30:40 Malastare lrmd: [13948]: info: RA output: (fs_backupvi:start:stderr) FATAL: Could not load /lib/modules/3.2.13-grsec-xxxx-grs-ipv6-64/modules.dep: No such file or directory

Jul 05 15:30:40 Malastare lrmd: [13948]: info: RA output: (fs_backupvi:start:stderr) FATAL: Could not load /lib/modules/3.2.13-grsec-xxxx-grs-ipv6-64/modules.dep: No such file or directory

Jul 05 15:30:40 Malastare lrmd: [13948]: info: operation start[37] on fs_svn for client 13951: pid 14924 exited with return code 0
Jul 05 15:30:40 Malastare crmd: [13951]: info: process_lrm_event: LRM operation fs_svn_start_0 (call=37, rc=0, cib-update=27, confirmed=true) ok
Jul 05 15:30:40 Malastare lrmd: [13948]: info: operation start[36] on fs_pgsql for client 13951: pid 14923 exited with return code 0
Jul 05 15:30:40 Malastare crmd: [13951]: info: process_lrm_event: LRM operation fs_pgsql_start_0 (call=36, rc=0, cib-update=28, confirmed=true) ok
Jul 05 15:30:40 Malastare lrmd: [13948]: info: operation start[38] on fs_backupvi for client 13951: pid 14925 exited with return code 0
Jul 05 15:30:40 Malastare crmd: [13951]: info: process_lrm_event: LRM operation fs_backupvi_start_0 (call=38, rc=0, cib-update=29, confirmed=true) ok
Jul 05 15:30:41 Malastare lrmd: [13948]: info: rsc:drbd_www:0 notify[40] (pid 15102)
Jul 05 15:30:41 Malastare lrmd: [13948]: info: operation notify[40] on drbd_www:0 for client 13951: pid 15102 exited with return code 0
Jul 05 15:30:41 Malastare crmd: [13951]: info: process_lrm_event: LRM operation drbd_www:0_notify_0 (call=40, rc=0, cib-update=0, confirmed=true) ok
Jul 05 15:30:41 Malastare lrmd: [13948]: info: rsc:drbd_www:0 promote[41] (pid 15125)
Jul 05 15:30:41 Malastare lrmd: [13948]: info: RA output: (drbd_www:0:promote:stdout) 

Jul 05 15:30:41 Malastare lrmd: [13948]: info: operation promote[41] on drbd_www:0 for client 13951: pid 15125 exited with return code 0
Jul 05 15:30:41 Malastare crmd: [13951]: info: process_lrm_event: LRM operation drbd_www:0_promote_0 (call=41, rc=0, cib-update=30, confirmed=true) ok
Jul 05 15:30:41 Malastare lrmd: [13948]: info: rsc:drbd_www:0 notify[42] (pid 15154)
Jul 05 15:30:41 Malastare attrd: [13949]: notice: attrd_trigger_update: Sending flush op to all hosts for: master-drbd_www:0 (10000)
Jul 05 15:30:41 Malastare attrd: [13949]: notice: attrd_perform_update: Sent update 35: master-drbd_www:0=10000
Jul 05 15:30:41 Malastare lrmd: [13948]: info: RA output: (drbd_www:0:notify:stdout) 

Jul 05 15:30:41 Malastare lrmd: [13948]: info: operation notify[42] on drbd_www:0 for client 13951: pid 15154 exited with return code 0
Jul 05 15:30:41 Malastare crmd: [13951]: info: process_lrm_event: LRM operation drbd_www:0_notify_0 (call=42, rc=0, cib-update=0, confirmed=true) ok
Jul 05 15:30:43 Malastare lrmd: [13948]: info: operation start[39] on OVHvIP for client 13951: pid 14928 exited with return code 0
Jul 05 15:30:44 Malastare crmd: [13951]: info: process_lrm_event: LRM operation OVHvIP_start_0 (call=39, rc=0, cib-update=31, confirmed=true) ok
Jul 05 15:30:44 Malastare lrmd: [13948]: info: rsc:fs_www start[43] (pid 15191)
Jul 05 15:30:44 Malastare lrmd: [13948]: info: rsc:VirtualIP start[44] (pid 15192)
external/ovh:   2012/07/05_15:30:44 INFO: ip -f inet addr add 178.33.109.180/32 brd 178.33.109.180 dev eth0
external/ovh:   2012/07/05_15:30:44 INFO: ip -f inet addr add 178.33.109.180/32 brd 178.33.109.180 dev eth0
external/ovh:   2012/07/05_15:30:44 INFO: ip link set eth0 up
external/ovh:   2012/07/05_15:30:44 INFO: Running start for /dev/drbd/by-res/www on /var/www
external/ovh:   2012/07/05_15:30:44 INFO: ip link set eth0 up
external/ovh:   2012/07/05_15:30:44 INFO: Running start for /dev/drbd/by-res/www on /var/www
external/ovh:   2012/07/05_15:30:44 INFO: /usr/lib/heartbeat/send_arp -i 200 -r 5 -p /var/run/resource-agents/send_arp-178.33.109.180 eth0 178.33.109.180 auto not_used not_used
external/ovh:   2012/07/05_15:30:44 INFO: /usr/lib/heartbeat/send_arp -i 200 -r 5 -p /var/run/resource-agents/send_arp-178.33.109.180 eth0 178.33.109.180 auto not_used not_used
Jul 05 15:30:44 Malastare lrmd: [13948]: info: RA output: (fs_www:start:stderr) FATAL: Could not load /lib/modules/3.2.13-grsec-xxxx-grs-ipv6-64/modules.dep: No such file or directory

Jul 05 15:30:44 Malastare lrmd: [13948]: info: operation start[44] on VirtualIP for client 13951: pid 15192 exited with return code 0
Jul 05 15:30:44 Malastare lrmd: [13948]: info: RA output: (fs_www:start:stderr) FATAL: Could not load /lib/modules/3.2.13-grsec-xxxx-grs-ipv6-64/modules.dep: No such file or directory

Jul 05 15:30:44 Malastare crmd: [13951]: info: process_lrm_event: LRM operation VirtualIP_start_0 (call=44, rc=0, cib-update=32, confirmed=true) ok
Jul 05 15:30:44 Malastare lrmd: [13948]: info: operation start[43] on fs_www for client 13951: pid 15191 exited with return code 0
Jul 05 15:30:44 Malastare crmd: [13951]: info: process_lrm_event: LRM operation fs_www_start_0 (call=43, rc=0, cib-update=33, confirmed=true) ok


