[Pacemaker] Bringing node out of standby, gets stonith'd

Tony Atkinson tony.atkinson at dataproservices.co.uk
Wed Jul 16 13:23:18 CEST 2014


Hi,
I'm getting some weirdness from nodes coming back from standby, and was 
wondering if anyone could look over my config to see if there are 
any obvious errors.

2-node active/active cluster serving virtual machines (libvirt/KVM)
Storage is via DRBD

When I bring a node out of standby, it doesn't come back properly and 
gets STONITH'd.

*********************************

Normal operation
vm-a & vm-b up and running
Resources are distributed across them both
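
(For reference, the status listings below are crm_mon output; something 
like the following one-shot view should reproduce them, with inactive 
resources and fail counts included:)

# crm_mon -1rf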

*********************************

Online: [ vm-a vm-b ]

Full list of resources:

  cluster-ip     (ocf::heartbeat:IPaddr2):       Started vm-a
  stonith_external_aten_vm-a (stonith:external/aten-pe-snmp):        Started vm-b
  stonith_external_aten_vm-b (stonith:external/aten-pe-snmp):        Started vm-a
  vm_test1       (ocf::heartbeat:VirtualDomain): Started vm-b
  Master/Slave Set: ms_drbd_r0 [p_drbd_r0]
      Masters: [ vm-a vm-b ]
  Master/Slave Set: ms_drbd_r1 [p_drbd_r1]
      Masters: [ vm-a vm-b ]
  Clone Set: cl_clvm [p_clvm]
      Started: [ vm-a vm-b ]
  Clone Set: cl_dlm [p_dlm]
      Started: [ vm-a vm-b ]
  Clone Set: cl_fs_r0 [p_fs_r0]
      Started: [ vm-a vm-b ]
  Clone Set: cl_libvirt [p_libvirt]
      Started: [ vm-a vm-b ]
  Clone Set: cl_lvm_vm [p_lvm_vm]
      Started: [ vm-a vm-b ]
  vm_test2       (ocf::heartbeat:VirtualDomain): Started vm-a
  vm_test3       (ocf::heartbeat:VirtualDomain): Started vm-b
  vm_test4       (ocf::heartbeat:VirtualDomain): Started vm-a

*********************************

Put vm-b into standby

# crm node standby vm-b

All ok
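
(As a sanity check before bringing it back, vm-b can be verified to have 
released everything; rough checks, assuming DRBD 8.x and the dlm userland 
tools are installed:)

# cat /proc/drbd      (on vm-b: no configured resources should remain)
# dlm_tool ls         (on vm-b: no lockspaces, or a connect error if dlm_controld is stopped)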

*********************************

Node vm-b (168440322): standby
Online: [ vm-a ]

Full list of resources:

  cluster-ip     (ocf::heartbeat:IPaddr2):       Started vm-a
  stonith_external_aten_vm-a (stonith:external/aten-pe-snmp):        Stopped
  stonith_external_aten_vm-b (stonith:external/aten-pe-snmp):        Started vm-a
  vm_test1       (ocf::heartbeat:VirtualDomain): Started vm-a
  Master/Slave Set: ms_drbd_r0 [p_drbd_r0]
      Masters: [ vm-a ]
      Stopped: [ vm-b ]
  Master/Slave Set: ms_drbd_r1 [p_drbd_r1]
      Masters: [ vm-a ]
      Stopped: [ vm-b ]
  Clone Set: cl_clvm [p_clvm]
      Started: [ vm-a ]
      Stopped: [ vm-b ]
  Clone Set: cl_dlm [p_dlm]
      Started: [ vm-a ]
      Stopped: [ vm-b ]
  Clone Set: cl_fs_r0 [p_fs_r0]
      Started: [ vm-a ]
      Stopped: [ vm-b ]
  Clone Set: cl_libvirt [p_libvirt]
      Started: [ vm-a ]
      Stopped: [ vm-b ]
  Clone Set: cl_lvm_vm [p_lvm_vm]
      Started: [ vm-a ]
      Stopped: [ vm-b ]
  vm_test2       (ocf::heartbeat:VirtualDomain): Started vm-a
  vm_test3       (ocf::heartbeat:VirtualDomain): Started vm-a
  vm_test4       (ocf::heartbeat:VirtualDomain): Started vm-a

*********************************

Bring vm-b back online

# crm node online vm-b

The node only half comes back online.
DLM starts OK on vm-b, but after that, nothing happens.
It just sits there and resources begin to time out.

Cloned resources fail on vm-b, and the
master/slave resources are stuck in slave mode.
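
(While it's stuck like this, these are probably worth capturing on vm-b, 
assuming the standard corosync/dlm tooling:)

# corosync-quorumtool -s     (membership and quorum as corosync sees it)
# dlm_tool status            (dlm_controld state; can show nodes waiting on fencing)
# crm_mon -1rf               (which operations have failed or timed out so far)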

*********************************
Node vm-b (168440322): UNCLEAN (online)
Online: [ vm-a ]

Full list of resources:

  cluster-ip     (ocf::heartbeat:IPaddr2):       Started vm-a
  stonith_external_aten_vm-a (stonith:external/aten-pe-snmp):        Started vm-b FAILED
  stonith_external_aten_vm-b (stonith:external/aten-pe-snmp):        Started vm-a
  vm_test1       (ocf::heartbeat:VirtualDomain): Started vm-a
  Master/Slave Set: ms_drbd_r0 [p_drbd_r0]
      Masters: [ vm-a ]
      Slaves: [ vm-b ]
  Master/Slave Set: ms_drbd_r1 [p_drbd_r1]
      Masters: [ vm-a ]
      Slaves: [ vm-b ]
  Clone Set: cl_clvm [p_clvm]
      Started: [ vm-a ]
      Stopped: [ vm-b ]
  Clone Set: cl_dlm [p_dlm]
      Started: [ vm-a vm-b ]
  Clone Set: cl_fs_r0 [p_fs_r0]
      Started: [ vm-a ]
      Stopped: [ vm-b ]
  Clone Set: cl_libvirt [p_libvirt]
      Started: [ vm-a ]
      Stopped: [ vm-b ]
  Clone Set: cl_lvm_vm [p_lvm_vm]
      Started: [ vm-a ]
      Stopped: [ vm-b ]
  vm_test2       (ocf::heartbeat:VirtualDomain): Started vm-a
  vm_test3       (ocf::heartbeat:VirtualDomain): Started vm-a
  vm_test4       (ocf::heartbeat:VirtualDomain): Started vm-a

Failed actions:
     stonith_external_aten_vm-a_stop_0 (node=vm-b, call=-1, rc=1, status=Timed Out, last-rc-change=Wed Jul 16 11:59:57 2014, queued=0ms, exec=0ms): unknown error
     p_dlm_start_0 (node=vm-b, call=-1, rc=1, status=Timed Out, last-rc-change=Wed Jul 16 11:53:17 2014, queued=0ms, exec=0ms): unknown error
     p_drbd_r0_start_0 (node=vm-b, call=-1, rc=1, status=Timed Out, last-rc-change=Wed Jul 16 11:56:37 2014, queued=0ms, exec=0ms): unknown error
     p_drbd_r1_start_0 (node=vm-b, call=-1, rc=1, status=Timed Out, last-rc-change=Wed Jul 16 11:56:37 2014, queued=0ms, exec=0ms): unknown error
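
(If the stale failures themselves are what block a retry, they can be 
cleared per resource once the underlying problem is dealt with; crmsh 
syntax, using the resource/node names above:)

# crm resource cleanup p_dlm vm-b
# crm resource cleanup stonith_external_aten_vm-a vm-b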
*********************************

Eventually, after everything has timed out,
vm-b is STONITH'd.
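
(The fencing history for the node can be pulled afterwards, if the 
installed pacemaker supports it:)

# stonith_admin --history vm-b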

*********************************
Online: [ vm-a ]
OFFLINE: [ vm-b ]

Full list of resources:

  cluster-ip     (ocf::heartbeat:IPaddr2):       Started vm-a
  stonith_external_aten_vm-a (stonith:external/aten-pe-snmp):        Stopped
  stonith_external_aten_vm-b (stonith:external/aten-pe-snmp):        Started vm-a
  vm_test1       (ocf::heartbeat:VirtualDomain): Started vm-a
  Master/Slave Set: ms_drbd_r0 [p_drbd_r0]
      Masters: [ vm-a ]
      Stopped: [ vm-b ]
  Master/Slave Set: ms_drbd_r1 [p_drbd_r1]
      Masters: [ vm-a ]
      Stopped: [ vm-b ]
  Clone Set: cl_clvm [p_clvm]
      Started: [ vm-a ]
      Stopped: [ vm-b ]
  Clone Set: cl_dlm [p_dlm]
      Started: [ vm-a ]
      Stopped: [ vm-b ]
  Clone Set: cl_fs_r0 [p_fs_r0]
      Started: [ vm-a ]
      Stopped: [ vm-b ]
  Clone Set: cl_libvirt [p_libvirt]
      Started: [ vm-a ]
      Stopped: [ vm-b ]
  Clone Set: cl_lvm_vm [p_lvm_vm]
      Started: [ vm-a ]
      Stopped: [ vm-b ]
  vm_test2       (ocf::heartbeat:VirtualDomain): Started vm-a
  vm_test3       (ocf::heartbeat:VirtualDomain): Started vm-a
  vm_test4       (ocf::heartbeat:VirtualDomain): Started vm-a
*********************************

Once vm-b boots back up,
all seems fine again

*********************************
Online: [ vm-a vm-b ]

Full list of resources:

  cluster-ip     (ocf::heartbeat:IPaddr2):       Started vm-a
  stonith_external_aten_vm-a (stonith:external/aten-pe-snmp):        Started vm-b
  stonith_external_aten_vm-b (stonith:external/aten-pe-snmp):        Started vm-a
  vm_test1       (ocf::heartbeat:VirtualDomain): Started vm-b
  Master/Slave Set: ms_drbd_r0 [p_drbd_r0]
      Masters: [ vm-a vm-b ]
  Master/Slave Set: ms_drbd_r1 [p_drbd_r1]
      Masters: [ vm-a vm-b ]
  Clone Set: cl_clvm [p_clvm]
      Started: [ vm-a vm-b ]
  Clone Set: cl_dlm [p_dlm]
      Started: [ vm-a vm-b ]
  Clone Set: cl_fs_r0 [p_fs_r0]
      Started: [ vm-a vm-b ]
  Clone Set: cl_libvirt [p_libvirt]
      Started: [ vm-a vm-b ]
  Clone Set: cl_lvm_vm [p_lvm_vm]
      Started: [ vm-a vm-b ]
  vm_test2       (ocf::heartbeat:VirtualDomain): Started vm-a
  vm_test3       (ocf::heartbeat:VirtualDomain): Started vm-b
  vm_test4       (ocf::heartbeat:VirtualDomain): Started vm-a
*********************************

Any pointers as to why this is happening?

Full CIB config below

*********************************
node $id="168440321" vm-a \
         attributes standby="off"
node $id="168440322" vm-b \
         attributes standby="off"
primitive cluster-ip ocf:heartbeat:IPaddr2 \
         params ip="192.168.124.200" cidr_netmask="16" broadcast="192.168.255.255" nic="br0" \
         op monitor interval="10s"
primitive p_clvm ocf:lvm2:clvmd \
         params daemon_timeout="30" \
         op start interval="0" timeout="180" \
         op stop interval="0" timeout="180" \
         meta target-role="Started"
primitive p_dlm ocf:pacemaker:controld \
         operations $id="dlm" \
         op monitor interval="10" timeout="20" start-delay="0" \
         op start interval="0" timeout="180" \
         op stop interval="0" timeout="180" \
         params args="-q 0"
primitive p_drbd_r0 ocf:linbit:drbd \
         params drbd_resource="r0" \
         op start interval="0" timeout="380" \
         op stop interval="0" timeout="180" \
         op monitor interval="29s" role="Master" \
         op monitor interval="31s" role="Slave"
primitive p_drbd_r1 ocf:linbit:drbd \
         params drbd_resource="r1" \
         op start interval="0" timeout="380" \
         op stop interval="0" timeout="180" \
         op monitor interval="29s" role="Master" \
         op monitor interval="31s" role="Slave"
primitive p_fs_r0 ocf:heartbeat:Filesystem \
         params device="/dev/drbd0" directory="/replica" fstype="gfs2" \
         op start interval="0" timeout="180" \
         op stop interval="0" timeout="180" \
         op monitor interval="60" timeout="40"
primitive p_libvirt upstart:libvirt-bin \
         op start interval="0" timeout="180" \
         op stop interval="0" timeout="180" \
         op monitor timeout="15" interval="15" start-delay="15"
primitive p_lvm_vm ocf:heartbeat:LVM \
         params volgrpname="vm" \
         op start interval="0" timeout="380s" \
         op stop interval="0" timeout="180s" \
         op monitor interval="30" timeout="100" depth="0"
primitive stonith_external_aten_vm-a stonith:external/aten-pe-snmp \
         params hostname="vm-a" pduip="192.168.124.10" outlet="1" \
         operations $id="stonith_external_aten_vm-a-operations" \
         op start interval="0" timeout="60" \
         op stop interval="0" timeout="60" \
         op monitor interval="60" timeout="60" start-delay="0"
primitive stonith_external_aten_vm-b stonith:external/aten-pe-snmp \
         params hostname="vm-b" pduip="192.168.124.10" outlet="2" \
         operations $id="stonith_external_aten_vm-b-operations" \
         op start interval="0" timeout="60" \
         op stop interval="0" timeout="60" \
         op monitor interval="60" timeout="60" start-delay="0"
primitive vm_test1 ocf:heartbeat:VirtualDomain \
         params config="/etc/libvirt/qemu/test1.xml" hypervisor="qemu:///system" migration_transport="ssh" \
         meta allow-migrate="true" target-role="Started" \
         op start timeout="240s" interval="0" \
         op stop timeout="120s" interval="0" \
         op monitor timeout="30" interval="10" depth="0" \
         op migrate_from timeout="240s" interval="0" \
         op migrate_to timeout="720s" interval="0" \
         utilization cpu="1" hv_memory="1024"
primitive vm_test2 ocf:heartbeat:VirtualDomain \
         params config="/etc/libvirt/qemu/test2.xml" hypervisor="qemu:///system" migration_transport="ssh" \
         meta allow-migrate="true" target-role="Started" \
         op start timeout="240s" interval="0" \
         op stop timeout="120s" interval="0" \
         op monitor timeout="30" interval="10" depth="0" \
         op migrate_from timeout="240s" interval="0" \
         op migrate_to timeout="720s" interval="0" \
         utilization cpu="1" hv_memory="1024"
primitive vm_test3 ocf:heartbeat:VirtualDomain \
         params config="/etc/libvirt/qemu/test3.xml" hypervisor="qemu:///system" migration_transport="ssh" \
         meta allow-migrate="true" target-role="Started" \
         op start timeout="240s" interval="0" \
         op stop timeout="120s" interval="0" \
         op monitor timeout="30" interval="10" depth="0" \
         op migrate_from timeout="240s" interval="0" \
         op migrate_to timeout="720s" interval="0" \
         utilization cpu="1" hv_memory="1024"
primitive vm_test4 ocf:heartbeat:VirtualDomain \
         params config="/etc/libvirt/qemu/test4.xml" hypervisor="qemu:///system" migration_transport="ssh" \
         meta allow-migrate="true" target-role="Started" \
         op start timeout="240s" interval="0" \
         op stop timeout="120s" interval="0" \
         op monitor timeout="30" interval="10" depth="0" \
         op migrate_from timeout="240s" interval="0" \
         op migrate_to timeout="720s" interval="0" \
         utilization cpu="1" hv_memory="1024"
ms ms_drbd_r0 p_drbd_r0 \
         meta master-max="2" master-node-max="1" clone-max="2" clone-node-max="1" notify="true"
ms ms_drbd_r1 p_drbd_r1 \
         meta master-max="2" master-node-max="1" clone-max="2" clone-node-max="1" notify="true"
clone cl_clvm p_clvm \
         meta interleave="true"
clone cl_dlm p_dlm \
         meta interleave="true"
clone cl_fs_r0 p_fs_r0 \
         meta interleave="true"
clone cl_libvirt p_libvirt \
         meta interleave="true"
clone cl_lvm_vm p_lvm_vm \
         meta interleave="true"
location l_stonith_external_aten_vm-a stonith_external_aten_vm-a -inf: vm-a
location l_stonith_external_aten_vm-b stonith_external_aten_vm-b -inf: vm-b
colocation co_clvm_with_dlm inf: cl_clvm cl_dlm
colocation co_fs_with_drbd inf: cl_fs_r0 ms_drbd_r0:Master
colocation co_libvirt_with_lvm_vm inf: cl_libvirt cl_lvm_vm
colocation co_lvm_vm_with_clvm inf: cl_lvm_vm cl_clvm
colocation co_lvm_vm_with_drbd inf: cl_lvm_vm ms_drbd_r1:Master
colocation co_lvm_vm_with_fsr0 inf: cl_lvm_vm cl_fs_r0
colocation co_vm_test1_with_libvirt inf: vm_test1 cl_libvirt
colocation co_vm_test2_with_libvirt inf: vm_test2 cl_libvirt
colocation co_vm_test3_with_libvirt inf: vm_test3 cl_libvirt
colocation co_vm_test4_with_libvirt inf: vm_test4 cl_libvirt
order o_clvm_after_dlm inf: cl_dlm:start cl_clvm:start
order o_fs_after_drbd inf: ms_drbd_r0:promote cl_fs_r0:start
order o_libvirt_after_lvm_vm inf: cl_lvm_vm:start cl_libvirt:start
order o_lvm_vm_after_clvm inf: cl_clvm:start cl_lvm_vm:start
order o_lvm_vm_after_drbd inf: ms_drbd_r1:promote cl_lvm_vm:start
order o_lvm_vm_after_fsr0 inf: cl_fs_r0:start cl_lvm_vm:start
order o_vm_test1_after_libvirt inf: cl_libvirt:start vm_test1:start
order o_vm_test2_after_libvirt inf: cl_libvirt:start vm_test2:start
order o_vm_test3_after_libvirt inf: cl_libvirt:start vm_test3:start
order o_vm_test4_after_libvirt inf: cl_libvirt:start vm_test4:start
property $id="cib-bootstrap-options" \
         dc-version="1.1.10-42f2063" \
         cluster-infrastructure="corosync" \
         stonith-enabled="true" \
         no-quorum-policy="ignore" \
         last-lrm-refresh="1405506030" \
         maintenance-mode="false"
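
(One thing that may be worth double-checking in the constraints, purely as 
an illustration: the GFS2 mount cl_fs_r0 is only ordered after the DRBD 
promote, not after cl_dlm, even though GFS2 needs dlm_controld running on 
the node. If it should also follow the DLM clone, a sketch in crm syntax 
could look like this; the constraint names are made up:)

order o_fs_after_dlm inf: cl_dlm:start cl_fs_r0:start
colocation co_fs_with_dlm inf: cl_fs_r0 cl_dlm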


