[Pacemaker] Configuring ocf:heartbeat:Filesystem monitoring
TigerInCanada
tigerincanada at gmail.com
Wed Apr 11 15:28:52 UTC 2012
Changing "depth" to "OCF_CHECK_LEVEL" does the trick, thank you!
Now I've just got to find a way to ensure the entire group stops once
the failure is detected.
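For anyone hitting the same problem: the fix amounts to replacing depth="20"
with OCF_CHECK_LEVEL="20" on the monitor op in the configuration quoted below.
A sketch of the corrected primitive (the added migration-threshold="1" is one
possible answer to the group question: it makes the first monitor failure move
the resource, and with it the rest of the group, to the other node):

    primitive p_fs_mysql ocf:heartbeat:Filesystem \
            params device="/dev/sdb1" directory="/mnt/mysql" fstype="ext3" \
                    options="noatime,nodiratime,noexec" \
            op start interval="0" timeout="60" \
            op stop interval="0" timeout="240" \
            op monitor interval="30s" OCF_CHECK_LEVEL="20" \
            meta target-role="Started" migration-threshold="1"

Since p_fs_mysql is the first member of g_mysql, the members after it
(ClusterIP, p_mysql) are ordered and colocated on it, so moving it takes the
whole group along.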
On 11 April 2012 10:25, emmanuel segura <emi2fast at gmail.com> wrote:
> vim /usr/lib/ocf/resource.d/heartbeat/Filesystem
> ===========================================
> <longdesc lang="en">
> Resource script for Filesystem. It manages a Filesystem on a
> shared storage medium.
>
> The standard monitor operation of depth 0 (also known as probe)
> checks if the filesystem is mounted. If you want deeper tests,
> set OCF_CHECK_LEVEL to one of the following values:
>
> 10: read first 16 blocks of the device (raw read)
>
> This doesn't exercise the filesystem at all, but the device on
> which the filesystem lives. This is noop for non-block devices
> such as NFS, SMBFS, or bind mounts.
>
> 20: test if a status file can be written and read
>
> The status file must be writable by root. This is not always the
> case with an NFS mount, as NFS exports usually have the
> "root_squash" option set. In such a setup, you must either use
> read-only monitoring (depth=10), export with "no_root_squash" on
> your NFS server, or grant world write permissions on the
> directory where the status file is to be placed.
> </longdesc>
> <shortdesc lang="en">Manages filesystem mounts</shortdesc>
> =======================================================
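A practical consequence of that description: one resource can run several
monitor operations at different check levels, provided each uses a distinct
interval (Pacemaker identifies operations by name plus interval). A sketch
combining a frequent raw-read check with a less frequent write test:

    primitive p_fs_mysql ocf:heartbeat:Filesystem \
            params device="/dev/sdb1" directory="/mnt/mysql" fstype="ext3" \
            op monitor interval="20s" OCF_CHECK_LEVEL="10" \
            op monitor interval="60s" OCF_CHECK_LEVEL="20"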
>
> On 11 April 2012 at 15:22, Terry Johnson <terry.johnson at scribendi.com>
> wrote:
>>
>> Hi –
>>
>> I’m working on a two-node cluster on CentOS 6.2, using iSCSI shared
>> storage, and I’m having difficulty detecting a lost connection to the
>> storage.
>>
>> Maybe this is a n00b issue, but I understand that monitor depth="20" in
>> ocf:heartbeat:Filesystem is supposed to create a test file, check whether
>> it can write to that file, and declare the resource failed if the
>> filesystem goes read-only. The test file does not get created, and I
>> can’t see where any errors are being logged.
>>
>> I’ve tested this configuration by disabling a switch port. The second
>> node picks up the services correctly, but the first node keeps running
>> them too, and does not notice that it no longer has a writable
>> filesystem. If the port is reconnected, both nodes have the same ext3
>> filesystem mounted at once, which makes a fine mess.
>>
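That "fine mess" is the expected outcome while fencing is disabled: with
stonith-enabled="false" (see the property block below), nothing ever shuts
down the node that lost its storage, so monitoring alone cannot prevent the
double mount. A minimal fencing sketch, assuming IPMI-capable nodes and the
fence_ipmilan agent (the address and credentials are hypothetical
placeholders):

    primitive st_dba stonith:fence_ipmilan \
            params pcmk_host_list="dba" ipaddr="10.0.0.1" \
                    login="admin" passwd="secret" \
            op monitor interval="60s"
    location l_st_dba st_dba -inf: dba
    property stonith-enabled="true"

A matching device would be needed for dbb; the location constraint keeps each
device off the node it is meant to fence.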
>> Here’s my current configuration. Have I missed some vital detail? Should
>> I be bringing the iSCSI connection into Pacemaker too? (A sketch of that
>> follows the configuration below.)
>>
>> node dba
>> node dbb
>> primitive ClusterIP ocf:heartbeat:IPaddr2 \
>>         params ip="10.232.169.130" cidr_netmask="24" \
>>         op monitor interval="30s" nic="eth0" \
>>         meta target-role="Started"
>> primitive p_fs_mysql ocf:heartbeat:Filesystem \
>>         params device="/dev/sdb1" directory="/mnt/mysql" fstype="ext3" \
>>                 options="noatime,nodiratime,noexec" \
>>         op start interval="0" timeout="60" \
>>         op stop interval="0" timeout="240" \
>>         op monitor interval="30s" depth="20" \
>>         meta target-role="Started"
>> primitive p_mysql lsb:mysql \
>>         op start interval="0" timeout="60s" \
>>         op stop interval="0" timeout="60s" \
>>         op monitor interval="15s" \
>>         meta target-role="Started"
>> group g_mysql p_fs_mysql ClusterIP p_mysql \
>>         meta target-role="Started"
>> property $id="cib-bootstrap-options" \
>>         dc-version="1.1.6-3.el6-a02c0f19a00c1eb2527ad38f146ebc0834814558" \
>>         cluster-infrastructure="openais" \
>>         expected-quorum-votes="2" \
>>         stonith-enabled="false" \
>>         no-quorum-policy="ignore"
>> rsc_defaults $id="rsc-options" \
>>         resource-stickiness="200"
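On the question of bringing the iSCSI connection into Pacemaker: the
resource-agents package ships an ocf:heartbeat:iscsi agent that logs the
session in and out, so the connection can be managed underneath the
filesystem and started and stopped with the rest of the group. A sketch with
hypothetical portal and target values, placed ahead of p_fs_mysql:

    primitive p_iscsi ocf:heartbeat:iscsi \
            params portal="10.232.169.2:3260" \
                    target="iqn.2012-04.com.example:mysql" \
            op monitor interval="30s"
    group g_mysql p_iscsi p_fs_mysql ClusterIP p_mysql

This is no substitute for fencing, but it gives the cluster a resource that
fails quickly when the storage path disappears.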
>>
>> corosync (1.4.1-4.el6_2.1)
>> pacemaker (1.1.6-3.el6)
>>
>> Any suggestions are appreciated.
>>
>> Terry.
>>
>
> --
> this is my life and I live it for as long as God wills
>
> _______________________________________________
> Pacemaker mailing list: Pacemaker at oss.clusterlabs.org
> http://oss.clusterlabs.org/mailman/listinfo/pacemaker
>
> Project Home: http://www.clusterlabs.org
> Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
> Bugs: http://bugs.clusterlabs.org
>