[Pacemaker] 2-node active/active cluster serving virtual machines (KVM via libvirt)

Vladislav Bogdanov bubble at hoster-ok.com
Mon Jun 30 09:08:32 EDT 2014


30.06.2014 15:45, Tony Atkinson wrote:
> Hi all,
> I'd really appreciate a helping hand here
> I'm so close to getting what I need, but just seem to be falling short
> at the last hurdle.
> 
> 2-node active/active cluster serving virtual machines (KVM via libvirt)
> Virtual machines need to be able to live-migrate between cluster nodes.
> 
> Nodes have local storage only
> Node storage replicated over DRBD (dual primary)
> LVM on DRBD, volumes given to virtual machines
> 
> Output from crm_mon below (full pacemaker defs at end)

...

> I think the issue is with the LVM volume group definition.
> 
> How would I prevent the VMs rebooting when a node comes back online?
> 
> Any help would be greatly appreciated.
> 
> *********
> 
> node $id="168440321" vm-a \
>         attributes standby="off"
> node $id="168440322" vm-b \
>         attributes standby="off"
> primitive cluster-ip ocf:heartbeat:IPaddr2 \
>         params ip="192.168.123.200" cidr_netmask="16" broadcast="192.168.255.255" nic="br0" \
>         op monitor interval="10s"
> primitive p_clvm ocf:lvm2:clvmd \
>         params daemon_timeout="30" \
>         meta target-role="Started"
> primitive p_dlm ocf:pacemaker:controld \
>         operations $id="dlm" \
>         op monitor interval="10" timeout="20" start-delay="0" \
>         params args="-q 0"
> primitive p_drbd_r0 ocf:linbit:drbd \
>         params drbd_resource="r0" \
>         op start interval="0" timeout="240" \
>         op stop interval="0" timeout="100" \
>         op monitor interval="29s" role="Master" \
>         op monitor interval="31s" role="Slave"
> primitive p_drbd_r1 ocf:linbit:drbd \
>         params drbd_resource="r1" \
>         op start interval="0" timeout="330" \
>         op stop interval="0" timeout="100" \
>         op monitor interval="59s" role="Master" timeout="30s" \
>         op monitor interval="60s" role="Slave" timeout="30s" \
>         meta target-role="Master"
> primitive p_fs_r0 ocf:heartbeat:Filesystem \
>         params device="/dev/drbd0" directory="/replica" fstype="gfs2" \
>         op start interval="0" timeout="60" \
>         op stop interval="0" timeout="60" \
>         op monitor interval="60" timeout="40"
> primitive p_lvm_vm ocf:heartbeat:LVM \
>         params volgrpname="vm" \
>         op start interval="0" timeout="30s" \
>         op stop interval="0" timeout="30s" \
>         op monitor interval="30" timeout="100" depth="0"
> primitive vm_test1 ocf:heartbeat:VirtualDomain \
>         params config="/etc/libvirt/qemu/test1.xml" hypervisor="qemu:///system" migration_transport="ssh" \
>         meta allow-migrate="true" target-role="Started" \
>         op start timeout="240s" interval="0" \
>         op stop timeout="120s" interval="0" \
>         op monitor timeout="30" interval="10" depth="0" \
>         utilization cpu="1" hv_memory="1024"
> primitive vm_test2 ocf:heartbeat:VirtualDomain \
>         params config="/etc/libvirt/qemu/test2.xml" hypervisor="qemu:///system" migration_transport="ssh" \
>         meta allow-migrate="true" target-role="Started" \
>         op start timeout="240s" interval="0" \
>         op stop timeout="120s" interval="0" \
>         op monitor timeout="30" interval="10" depth="0" \
>         utilization cpu="1" hv_memory="1024"
> ms ms_drbd_r0 p_drbd_r0 \
>         meta master-max="2" master-node-max="1" clone-max="2" clone-node-max="1" notify="true"
> ms ms_drbd_r1 p_drbd_r1 \
>         meta master-max="2" master-node-max="1" clone-max="2" clone-node-max="1" notify="true"
> clone cl_clvm p_clvm \
>         meta interleave="true"
> clone cl_dlm p_dlm \
>         meta interleave="true"
> clone cl_fs_r0 p_fs_r0 \
>         meta interleave="true"
> clone cl_lvm_vm p_lvm_vm \
>         meta interleave="true"
> colocation co_fs_with_drbd inf: cl_fs_r0 ms_drbd_r0:Master
> order o_order_default Mandatory: cl_dlm ms_drbd_r0:promote cl_fs_r0 cl_clvm ms_drbd_r1:promote cl_lvm_vm:start vm_test1
> order o_order_default2 Mandatory: cl_dlm ms_drbd_r0:promote cl_fs_r0 cl_clvm ms_drbd_r1:promote cl_lvm_vm:start vm_test2

You'd split these two order constraints into pieces and add the matching
colocation constraints, so that the whole constraints block looks like this:

colocation co_clvm_with_dlm inf: cl_clvm cl_dlm
order o_clvm_after_dlm inf: cl_dlm:start cl_clvm:start
colocation co_fs_with_drbd inf: cl_fs_r0 ms_drbd_r0:Master
order o_fs_after_drbd inf: ms_drbd_r0:promote cl_fs_r0:start
colocation co_lvm_vm_with_clvm inf: cl_lvm_vm cl_clvm
order o_lvm_vm_after_clvm inf: cl_clvm:start cl_lvm_vm:start
colocation co_vm_test1_with_lvm_vm inf: vm_test1 cl_lvm_vm
order o_vm_test1_after_lvm_vm inf: cl_lvm_vm vm_test1
colocation co_vm_test2_with_lvm_vm inf: vm_test2 cl_lvm_vm
order o_vm_test2_after_lvm_vm inf: cl_lvm_vm vm_test2
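
The two original combined constraints would need to be dropped when loading
the split ones. Assuming you manage the configuration with the crm shell,
something like this should do (the constraint IDs are the ones from your
config, the commands themselves are just a sketch):

crm configure delete o_order_default o_order_default2
crm configure verify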

This (forgive me if I mistyped somewhere) should prevent p_lvm_vm from
being stopped when you don't want it to be.
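
To check the "node comes back online" case without actually rebooting
anything, crm_simulate can preview the transition against the live CIB.
This is only a generic illustration (option names may differ slightly
between pacemaker versions, see crm_simulate --help); vm-b is one of your
node names:

crm_simulate --live-check --show-scores
crm_simulate --simulate --live-check --node-up vm-b

With the split constraints in place, the transition summary should not show
vm_test1 or vm_test2 being restarted when the peer node rejoins.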

> property $id="cib-bootstrap-options" \
>         dc-version="1.1.10-42f2063" \
>         cluster-infrastructure="corosync" \
>         stonith-enabled="false" \
>         no-quorum-policy="ignore" \
>         last-lrm-refresh="1404130649"
> rsc_defaults $id="rsc-options" \
>         resource-stickiness="100"
> 
> _______________________________________________
> Pacemaker mailing list: Pacemaker at oss.clusterlabs.org
> http://oss.clusterlabs.org/mailman/listinfo/pacemaker
> 
> Project Home: http://www.clusterlabs.org
> Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
> Bugs: http://bugs.clusterlabs.org




