[Pacemaker] cannot mount gfs2 filesystem

Soni Maula Harriz soni.harriz at sangkuriang.co.id
Mon Oct 29 01:22:08 EDT 2012


Dear all,
I configured Pacemaker and Corosync on two CentOS 6.3 servers by following the
instructions in 'Clusters from Scratch'. At the beginning I followed edition 5,
but since I use CentOS I switched to edition 3 to configure active/active
servers.
Now, on the first server (cluster1), the Filesystem resource cannot start: the
GFS2 filesystem can't be mounted.
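In case it helps, these are the checks I can run and share output from (a
command sketch from memory, using the resource names from the configuration
below):

[root@cluster1 ~]# crm_mon -1                       # overall cluster and resource status
[root@cluster1 ~]# crm resource status WebFSClone   # state of the Filesystem clone
[root@cluster1 ~]# cat /proc/drbd                   # DRBD role on each node (should be Primary/Primary for active/active)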

This is the crm configuration:
[root@cluster2 ~]# crm configure show
node cluster1 \
    attributes standby="off"
node cluster2 \
    attributes standby="off"
primitive ClusterIP ocf:heartbeat:IPaddr2 \
    params ip="xxx.xxx.xxx.229" cidr_netmask="32" clusterip_hash="sourceip" \
    op monitor interval="30s"
primitive WebData ocf:linbit:drbd \
    params drbd_resource="wwwdata" \
    op monitor interval="60s"
primitive WebFS ocf:heartbeat:Filesystem \
    params device="/dev/drbd/by-res/wwwdata" directory="/var/www/html" fstype="gfs2"
primitive WebSite ocf:heartbeat:apache \
    params configfile="/etc/httpd/conf/httpd.conf" statusurl="http://localhost/server-status" \
    op monitor interval="1min"
ms WebDataClone WebData \
    meta master-max="2" master-node-max="1" clone-max="2" clone-node-max="1" notify="true"
clone WebFSClone WebFS
clone WebIP ClusterIP \
    meta globally-unique="true" clone-max="2" clone-node-max="1" interleave="false"
clone WebSiteClone WebSite \
    meta interleave="false"
colocation WebSite-with-WebFS inf: WebSiteClone WebFSClone
colocation colocation-WebSite-ClusterIP-INFINITY inf: WebSiteClone WebIP
colocation fs_on_drbd inf: WebFSClone WebDataClone:Master
order WebFS-after-WebData inf: WebDataClone:promote WebFSClone:start
order WebSite-after-WebFS inf: WebFSClone WebSiteClone
order order-ClusterIP-WebSite-mandatory : WebIP:start WebSiteClone:start
property $id="cib-bootstrap-options" \
    dc-version="1.1.7-6.el6-148fccfd5985c5590cc601123c6c16e966b85d14" \
    cluster-infrastructure="cman" \
    expected-quorum-votes="2" \
    stonith-enabled="false" \
    no-quorum-policy="ignore"
rsc_defaults $id="rsc-options" \
    resource-stickiness="100"

When I try to mount the filesystem manually, this message appears:
[root@cluster1 ~]# mount /dev/drbd1 /mnt/
mount point already used or other mount in progress
error mounting lockproto lock_dlm

But when I check the mounts, there is no mount from DRBD.
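From what I understand, the lock_dlm error above can mean a DLM lockspace from
an earlier mount attempt is still registered. If it is useful, I can check
that with something like the following (the lockspace name is whatever
dlm_tool reports; 'web' appears in the reboot message further below):

[root@cluster1 ~]# dlm_tool ls                      # list active DLM lockspaces
[root@cluster1 ~]# ls /sys/kernel/dlm/              # the same information from sysfs
[root@cluster1 ~]# gfs2_tool sb /dev/drbd1 table    # show the clustername:fsname lock table stored in the superblock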

There is another strange thing: the first server (cluster1) cannot reboot. It
hangs with the message 'please standby while rebooting the system'. During the
reboot process there are two failed actions related to fencing, even though I
haven't configured any fencing yet. One of the failed actions is:
'stopping cluster
leaving fence domain .... found dlm lockspace /sys/kernel/dlm/web
fence_tool : cannot leave due to active system       [FAILED]'
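I understand that GFS2 and DLM require working fencing, so I know I will
eventually have to enable STONITH. This is a sketch of what I plan to add,
following the IPMI example in 'Clusters from Scratch' (the fence_ipmilan agent
and the ipaddr/login/passwd values are placeholders, nothing is configured
yet):

primitive ipmi-fencing stonith:fence_ipmilan \
    params pcmk_host_list="cluster1 cluster2" ipaddr="xxx.xxx.xxx.xxx" login="admin" passwd="secret" \
    op monitor interval="60s"
property stonith-enabled="true"

For now, stonith-enabled is still set to "false", as shown in the
configuration above.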

Please help me with this problem.

-- 
Best Regards,

Soni Maula Harriz
Database Administrator
PT. Data Aksara Sangkuriang