[Pacemaker] drbd + lvm

Lars Ellenberg lars.ellenberg at linbit.com
Fri Jun 13 01:50:01 CEST 2014


On Thu, Mar 13, 2014 at 03:57:28PM -0400, David Vossel wrote:
> 
> ----- Original Message -----
> > From: "Infoomatic" <infoomatic at gmx.at>
> > To: pacemaker at oss.clusterlabs.org
> > Sent: Thursday, March 13, 2014 2:26:00 PM
> > Subject: [Pacemaker] drbd + lvm
> > 
> > Hi list,
> > 
> > I am having trouble with pacemaker, lvm and stacked drbd resources.
> > The system consists of two Ubuntu 12 LTS servers. Each has two partitions
> > of an underlying RAID 1+0 that form a volume group, with one LV each
> > serving as a DRBD backing device. The goal is to host VMs and adjust the
> > disk space they need flexibly, so on top of the DRBD resources there is
> > one LV per VM. I created the stack with LCMC; it looks like this:
> > 
> > DRBD->LV->libvirt and
> > DRBD->LV->Filesystem->lxc
> > 
> > The problem: the system has "hiccups". When VM01 runs on HOST01 (the
> > DRBD primary) and HOST02 is rebooting, LVM is reloaded on HOST02 at boot
> > time and the LVs are activated there. This of course results in an error;
> > the log entry:
> > 
> > Mar 13 17:58:42 host01 pengine: [27563]: ERROR: native_create_actions:
> > Resource res_LVM_1 (ocf::LVM) is active on 2 nodes attempting recovery
> > 
> > Therefore, as configured, the resource is stopped and started again (on
> > only one node), and all VMs and containers relying on it are restarted as
> > well.
> > 
> > When I prevent the LVs that sit on the DRBD resource from being activated
> > at boot (lvm.conf: volume_list contains only the VG built from the
> > partitions of the RAID system), a reboot of the secondary does not restart
> > the VMs running on the primary. However, if the primary goes down (e.g. a
> > power interruption), the secondary cannot activate the VMs' LVs, because
> > they are not in the volume_list in lvm.conf.
> > 
> > Has anyone had this issue and resolved it? Any ideas? Thanks in advance!
> 
> Yep, I've hit this as well. Use the latest LVM agent; I already fixed all of this.

If you exclude the DRBD lower-level devices in your lvm.conf filter
(and update your initramfs so it carries a proper copy of that lvm.conf),
and only allow them to be accessed via DRBD,
LVM cannot possibly activate them "on boot",
but only after DRBD has been promoted,
which supposedly happens via pacemaker only.
And unless some udev rule auto-activates any VG it finds immediately,
the VG should only be activated via pacemaker as well.

So something like this should be in your lvm.conf:
  filter = [ "a|^/dev/your/system/PVs|", "a|^/dev/drbd|", "r|.|" ]
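
Keep in mind that the initramfs carries its own copy of lvm.conf, so the new
filter only takes effect at early boot once the initramfs has been rebuilt.
Roughly (a sketch assuming Ubuntu's update-initramfs; adjust to your
distribution and device names):

  # rebuild the initramfs so early boot uses the updated lvm.conf filter
  update-initramfs -u
  # afterwards, 'pvs' should list only the system PVs, plus /dev/drbdX
  # on the node where that DRBD resource is currently Primary
  pvs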

> https://github.com/ClusterLabs/resource-agents/blob/master/heartbeat/LVM
> 
> Keep your volume_list the way it is and use the 'exclusive=true' LVM
> option.   This will allow the LVM agent to activate volumes that don't
> exist in the volume_list.

That is a nice feature, but if I'm correct, it is unrelated here.
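
For completeness, exclusive activation with that agent would look roughly
like this (a sketch assuming the crm shell, a placeholder VG name and
example timeouts; adapt to your own configuration):

  primitive res_LVM_1 ocf:heartbeat:LVM \
      params volgrpname="vg_drbd0" exclusive="true" \
      op monitor interval="30s" timeout="30s"

colocated with, and ordered after, the DRBD Master role as usual.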

-- 
: Lars Ellenberg
: LINBIT | Your Way to High Availability
: DRBD/HA support and consulting http://www.linbit.com

DRBD® and LINBIT® are registered trademarks of LINBIT, Austria.


