[Pacemaker] DRBD+LVM+NFS problems
Dennis Jacobfeuerborn
dennisml at conversis.de
Mon Mar 25 12:09:28 UTC 2013
I just found the following in the dmesg output, which may or may not help
in understanding the problem:
device-mapper: table: 253:2: linear: dm-linear: Device lookup failed
device-mapper: ioctl: error adding target to table
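
I'm not sure yet which device 253:2 corresponds to; these are the commands
I've been using to dig into the device-mapper state (assuming the failing
linear target is backed by the DRBD device, since the volume group sits on
top of it):

    cat /proc/drbd      # is the resource Primary on this node at that moment?
    dmsetup info -c     # list dm devices with their major:minor numbers
    dmsetup table       # show the mappings, including the linear targets
    ls -l /dev/drbd*    # does the DRBD device node actually exist?

My guess is that the linear target can't be built because the underlying
DRBD device isn't available when the volume group is activated.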
Regards,
Dennis
On 25.03.2013 13:04, Dennis Jacobfeuerborn wrote:
> Hi,
> I'm currently trying to create a two-node redundant NFS setup on CentOS 6.4
> using Pacemaker and crmsh.
>
> I'm using this document as a starting point:
> https://www.suse.com/documentation/sle_ha/singlehtml/book_sleha_techguides/book_sleha_techguides.html
>
> The first issue: following these instructions I get the cluster up and
> running, but the moment I stop the pacemaker service on the current
> master node, several resources fail and everything goes pear-shaped.
>
> Since the problem seemed to be related to the NFS bits of the
> configuration, I removed those to get back to a minimal working setup,
> and am now adding things back piece by piece to find the source of the
> problem.
>
> I'm now at a point where I have only DRBD+LVM+Filesystem+IPaddr2
> configured, and LVM seems to be acting up.
>
> I can start the cluster and everything is fine, but the moment I stop
> pacemaker on the master I end up with this status:
>
> ===
> Node nfs2: standby
> Online: [ nfs1 ]
>
> Master/Slave Set: ms_drbd_nfs [p_drbd_nfs]
>      Masters: [ nfs1 ]
>      Stopped: [ p_drbd_nfs:1 ]
>
> Failed actions:
>     p_lvm_nfs_start_0 (node=nfs1, call=505, rc=1, status=complete): unknown error
> ===
>
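> (To get back to a clean state between attempts I clear the failed
> action with:
>
>     crm resource cleanup p_lvm_nfs
>
> and then try again.)
>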
> and in the log on nfs1 I see:
> LVM(p_lvm_nfs)[7515]: 2013/03/25_12:34:21 ERROR: device-mapper: reload ioctl on failed: Invalid argument
>     device-mapper: reload ioctl on failed: Invalid argument
>     2 logical volume(s) in volume group "nfs" now active
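>
> If I understand the LVM resource agent correctly, its start action
> essentially activates the volume group, so I should be able to
> reproduce this outside of pacemaker with something along the lines of:
>
>     vgchange -a y nfs
>
> (and deactivate again with "vgchange -a n nfs" before handing control
> back to the cluster).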
>
> However, running lvs in this state shows:
> [root@nfs1 ~]# lvs
>   LV      VG            Attr      LSize   Pool Origin Data%  Move Log
>   web1    nfs           -wi------   2,00g
>   web2    nfs           -wi------   2,00g
>   lv_root vg_nfs1.local -wi-ao---   2,45g
>   lv_swap vg_nfs1.local -wi-ao--- 256,00m
>
> So the volume group is present.
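>
> Note that the Attr column shows "-wi------" for web1/web2, i.e. no "a"
> flag, while lv_root/lv_swap show "-wi-ao---". So the LVs in "nfs" are
> present but not active, which suggests it's the activation step itself
> that fails. A single LV can presumably be activated by hand to test
> this:
>
>     lvchange -a y nfs/web1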
>
> My current configuration looks like this:
>
> node nfs1 \
>     attributes standby="off"
> node nfs2 \
>     attributes standby="on"
> primitive p_drbd_nfs ocf:linbit:drbd \
>     params drbd_resource="nfs" \
>     op monitor interval="15" role="Master" \
>     op monitor interval="30" role="Slave"
> primitive p_fs_web1 ocf:heartbeat:Filesystem \
>     params device="/dev/nfs/web1" \
>         directory="/srv/nfs/web1" \
>         fstype="ext4" \
>     op monitor interval="10s"
> primitive p_fs_web2 ocf:heartbeat:Filesystem \
>     params device="/dev/nfs/web2" \
>         directory="/srv/nfs/web2" \
>         fstype="ext4" \
>     op monitor interval="10s"
> primitive p_ip_nfs ocf:heartbeat:IPaddr2 \
>     params ip="10.99.0.142" cidr_netmask="24" \
>     op monitor interval="30s"
> primitive p_lvm_nfs ocf:heartbeat:LVM \
>     params volgrpname="nfs" \
>     op monitor interval="30s"
> group g_nfs p_lvm_nfs p_fs_web1 p_fs_web2 p_ip_nfs
> ms ms_drbd_nfs p_drbd_nfs \
>     meta master-max="1" \
>         master-node-max="1" \
>         clone-max="2" \
>         clone-node-max="1" \
>         notify="true"
> colocation c_nfs_on_drbd inf: g_nfs ms_drbd_nfs:Master
> property $id="cib-bootstrap-options" \
>     dc-version="1.1.8-7.el6-394e906" \
>     cluster-infrastructure="classic openais (with plugin)" \
>     expected-quorum-votes="2" \
>     stonith-enabled="false" \
>     no-quorum-policy="ignore" \
>     last-lrm-refresh="1364212090" \
>     maintenance-mode="false"
> rsc_defaults $id="rsc_defaults-options" \
>     resource-stickiness="100"
>
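> One thing I notice while re-reading the config: I only have the
> colocation, no order constraint that makes g_nfs wait for the DRBD
> promote. If the group can start before ms_drbd_nfs is promoted, the VG
> activation would presumably fail exactly like this. Something like the
> following (crmsh syntax) might be what's missing:
>
>     order o_drbd_before_nfs inf: ms_drbd_nfs:promote g_nfs:start
>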
> Any ideas why this isn't working?
>
> Regards,
> Dennis