[Pacemaker] 2-node active/active cluster serving virtual machines (KVM via libvirt)

Tony Atkinson tony.atkinson at dataproservices.co.uk
Mon Jun 30 08:45:29 EDT 2014


Hi all,
I'd really appreciate a helping hand here. I'm so close to getting what I
need, but I just seem to be falling at the last hurdle.

2-node active/active cluster serving virtual machines (KVM via libvirt).
The virtual machines need to be able to live-migrate between the cluster nodes.

The nodes have local storage only.
Node storage is replicated over DRBD (dual-primary).
LVM sits on top of DRBD, and the logical volumes are given to the virtual machines as their disks.
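
To illustrate, the stack on each node looks roughly like this (the VG name "vm" and the /dev/drbd devices come from the config at the end; the PV mapping and the notes in brackets are my own sketch, not pasted output):

# cat /proc/drbd        (r0 and r1 both Primary/Primary)
# pvs                   (/dev/drbd1 as the physical volume backing the VG - assumed)
# vgs vm                (the volume group handed to the cluster)
# lvs vm                (the logical volumes given to the virtual machines)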

Output from crm_mon is below (the full Pacemaker configuration is at the end).

*********

Online: [ vm-a vm-b ]

cluster-ip      (ocf::heartbeat:IPaddr2):       Started vm-a
  Clone Set: cl_dlm [p_dlm]
      Started: [ vm-a vm-b ]
  Master/Slave Set: ms_drbd_r0 [p_drbd_r0]
      Masters: [ vm-a vm-b ]
  Clone Set: cl_fs_r0 [p_fs_r0]
      Started: [ vm-a vm-b ]
  Clone Set: cl_clvm [p_clvm]
      Started: [ vm-a vm-b ]
  Master/Slave Set: ms_drbd_r1 [p_drbd_r1]
      Masters: [ vm-a vm-b ]
vm_test1        (ocf::heartbeat:VirtualDomain): Started vm-b
  Clone Set: cl_lvm_vm [p_lvm_vm]
      Started: [ vm-a vm-b ]
vm_test2        (ocf::heartbeat:VirtualDomain): Started vm-a

*********

Put node "vm-a" into standby.
The test VM running on that node is successfully live-migrated to the other node.
Working as expected.
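
For reference, that test is just the standard standby command, then watching crm_mon:

# crm node standby vm-a
# crm_mon -1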

Bring node "vm-a" back online.
Both VMs get rebooted, and Pacemaker fails to stop the LVM volume group on
"vm-b" (why is it trying to stop it in the first place?).
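
Bringing the node back is simply:

# crm node online vm-a

To check whether the VM logical volumes were still open on vm-b when the stop was attempted, something like the following should show it (illustrative only; I haven't captured that output here, and the vm-* device naming is an assumption):

# dmsetup info -c | grep '^vm-'   (the "Open" column shows whether each LV device is in use)
# lvs -o lv_name,lv_attr vm       (an 'o' as the 6th attribute character means the LV is open)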

*********

Failed actions:
     p_lvm_vm_stop_0 (node=vm-b, call=838, rc=1, status=complete, last-rc-change=Mon Jun 30 13:32:46 2014, queued=134ms, exec=0ms): unknown error

*********

From the logs around the failure (pengine messages, logged on vm-a as the DC):

Jun 30 13:32:50 vm-a pengine[1806]:   notice: unpack_config: On loss of CCM Quorum: Ignore
Jun 30 13:32:50 vm-a pengine[1806]:  warning: unpack_rsc_op: Processing failed op stop for p_lvm_vm:0 on vm-b: unknown error (1)
Jun 30 13:32:50 vm-a pengine[1806]:  warning: common_apply_stickiness: Forcing cl_lvm_vm away from vm-b after 1000000 failures (max=1000000)
Jun 30 13:32:50 vm-a pengine[1806]:  warning: common_apply_stickiness: Forcing cl_lvm_vm away from vm-b after 1000000 failures (max=1000000)

*********

Issuing a resource cleanup on vm-b puts things back to normal:
# crm resource cleanup p_lvm_vm vm-b

Online: [ vm-a vm-b ]

cluster-ip      (ocf::heartbeat:IPaddr2):       Started vm-b
  Clone Set: cl_dlm [p_dlm]
      Started: [ vm-a vm-b ]
  Master/Slave Set: ms_drbd_r0 [p_drbd_r0]
      Masters: [ vm-a vm-b ]
  Clone Set: cl_fs_r0 [p_fs_r0]
      Started: [ vm-a vm-b ]
  Clone Set: cl_clvm [p_clvm]
      Started: [ vm-a vm-b ]
  Master/Slave Set: ms_drbd_r1 [p_drbd_r1]
      Masters: [ vm-a vm-b ]
vm_test1        (ocf::heartbeat:VirtualDomain): Started vm-a
  Clone Set: cl_lvm_vm [p_lvm_vm]
      Started: [ vm-a vm-b ]
vm_test2        (ocf::heartbeat:VirtualDomain): Started vm-a

*********

I think the issue is with the LVM volume group resource definition.

How can I prevent the VMs from being rebooted when a node comes back online?
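
For what it's worth, the LVM agent itself can also be exercised by hand with the stock resource-agents tooling (note this runs start/stop/monitor cycles against the VG, so not something to do on a node with running VMs):

# ocf-tester -n p_lvm_vm -o volgrpname=vm /usr/lib/ocf/resource.d/heartbeat/LVM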

Any help would be greatly appreciated.

*********

node $id="168440321" vm-a \
         attributes standby="off"
node $id="168440322" vm-b \
         attributes standby="off"
primitive cluster-ip ocf:heartbeat:IPaddr2 \
         params ip="192.168.123.200" cidr_netmask="16" broadcast="192.168.255.255" nic="br0" \
         op monitor interval="10s"
primitive p_clvm ocf:lvm2:clvmd \
         params daemon_timeout="30" \
         meta target-role="Started"
primitive p_dlm ocf:pacemaker:controld \
         operations $id="dlm" \
         op monitor interval="10" timeout="20" start-delay="0" \
         params args="-q 0"
primitive p_drbd_r0 ocf:linbit:drbd \
         params drbd_resource="r0" \
         op start interval="0" timeout="240" \
         op stop interval="0" timeout="100" \
         op monitor interval="29s" role="Master" \
         op monitor interval="31s" role="Slave"
primitive p_drbd_r1 ocf:linbit:drbd \
         params drbd_resource="r1" \
         op start interval="0" timeout="330" \
         op stop interval="0" timeout="100" \
         op monitor interval="59s" role="Master" timeout="30s" \
         op monitor interval="60s" role="Slave" timeout="30s" \
         meta target-role="Master"
primitive p_fs_r0 ocf:heartbeat:Filesystem \
         params device="/dev/drbd0" directory="/replica" fstype="gfs2" \
         op start interval="0" timeout="60" \
         op stop interval="0" timeout="60" \
         op monitor interval="60" timeout="40"
primitive p_lvm_vm ocf:heartbeat:LVM \
         params volgrpname="vm" \
         op start interval="0" timeout="30s" \
         op stop interval="0" timeout="30s" \
         op monitor interval="30" timeout="100" depth="0"
primitive vm_test1 ocf:heartbeat:VirtualDomain \
         params config="/etc/libvirt/qemu/test1.xml" hypervisor="qemu:///system" migration_transport="ssh" \
         meta allow-migrate="true" target-role="Started" \
         op start timeout="240s" interval="0" \
         op stop timeout="120s" interval="0" \
         op monitor timeout="30" interval="10" depth="0" \
         utilization cpu="1" hv_memory="1024"
primitive vm_test2 ocf:heartbeat:VirtualDomain \
         params config="/etc/libvirt/qemu/test2.xml" hypervisor="qemu:///system" migration_transport="ssh" \
         meta allow-migrate="true" target-role="Started" \
         op start timeout="240s" interval="0" \
         op stop timeout="120s" interval="0" \
         op monitor timeout="30" interval="10" depth="0" \
         utilization cpu="1" hv_memory="1024"
ms ms_drbd_r0 p_drbd_r0 \
         meta master-max="2" master-node-max="1" clone-max="2" clone-node-max="1" notify="true"
ms ms_drbd_r1 p_drbd_r1 \
         meta master-max="2" master-node-max="1" clone-max="2" clone-node-max="1" notify="true"
clone cl_clvm p_clvm \
         meta interleave="true"
clone cl_dlm p_dlm \
         meta interleave="true"
clone cl_fs_r0 p_fs_r0 \
         meta interleave="true"
clone cl_lvm_vm p_lvm_vm \
         meta interleave="true"
colocation co_fs_with_drbd inf: cl_fs_r0 ms_drbd_r0:Master
order o_order_default Mandatory: cl_dlm ms_drbd_r0:promote cl_fs_r0 cl_clvm ms_drbd_r1:promote cl_lvm_vm:start vm_test1
order o_order_default2 Mandatory: cl_dlm ms_drbd_r0:promote cl_fs_r0 cl_clvm ms_drbd_r1:promote cl_lvm_vm:start vm_test2
property $id="cib-bootstrap-options" \
         dc-version="1.1.10-42f2063" \
         cluster-infrastructure="corosync" \
         stonith-enabled="false" \
         no-quorum-policy="ignore" \
         last-lrm-refresh="1404130649"
rsc_defaults $id="rsc-options" \
         resource-stickiness="100"



