[Pacemaker] 2-node cluster with shared storage: what is current solution

Саша Александров shurrman at gmail.com
Thu Mar 20 16:09:59 EDT 2014


Hi!

Well, since I needed just one thing - that only one node ever starts the
database on the shared storage - I made an ugly, dirty hack :-) that seems
to work for me. I wrote a custom RA that relies on frequent 'monitor'
actions and simply writes a timestamp+hostname to a physical partition. If
it detects that someone else is writing to the same device, it considers
that it has to be stopped. Putting this RA first in the group prevents the
database from starting if the other node has started the group - even when
there is no network connectivity.

Probably needs more testing :-)

oralock_start() {
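        # If monitor already succeeds we are done; otherwise write our
        # record to the lock file and let monitor claim the shared device.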
        oralock_monitor ; rc=$?
        if [ $rc = $OCF_SUCCESS ]; then
                ocf_log info "oralock already running."
                exit $OCF_SUCCESS
        fi
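        # Pad/truncate the timestamp+hostname record to exactly 16 bytes so
        # that the lock file and the 16-byte read back from the device stay
        # directly comparable.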
        NEW=`printf '%-16.16s' "$(date +%s)$(hostname)"`
        printf '%s' "$NEW" > $LFILE
        oralock_monitor ; rc=$?
        if [ $rc = $OCF_SUCCESS ]; then
                ocf_log info "oralock started."
                exit $OCF_SUCCESS
        fi
        exit $rc
}

oralock_stop() {
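        # Remove the local lock file and report success; the record already
        # written to the shared device is simply left behind.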
        rm -f $LFILE
        exit $OCF_SUCCESS
}

oralock_monitor() {
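        # Compare our lock file with the first 16 bytes of the shared device.
        # If they differ, watch the device for 3 x 5 seconds: if its contents
        # keep changing, another node is writing to it, so give up the lock
        # and report not running. Otherwise refresh our record on both the
        # lock file and the device.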
        [[ ! -s $LFILE ]] && return $OCF_NOT_RUNNING
        PREV=`cat $LFILE`
        CURR=`dd if=$DEVICE of=/dev/stdout bs=16 count=1 2>/dev/null`
        ocf_log info "File: $PREV, device: $CURR"
        if [[ "$PREV" != "$CURR" ]]; then
            for i in 1 2 3; do
                sleep 5
                NCURR=`dd if=$DEVICE of=/dev/stdout bs=16 count=1 2>/dev/null`
                if [[ "$CURR" != "$NCURR" ]]; then
                    ocf_log err "Device changed: was $CURR, now: $NCURR! Someone is writing to device!"
                    rm -f $LFILE
                    return $OCF_NOT_RUNNING
                else
                    ocf_log info "Device not changed..."
                fi
            done
        fi
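        # Refresh our 16-byte record and push it to the shared device.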
        NEW=`printf '%-16.16s' "$(date +%s)$(hostname)"`
        printf '%s' "$NEW" > $LFILE
        dd if=$LFILE of=$DEVICE bs=16 count=1 2>/dev/null
        return $OCF_SUCCESS
}
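
To illustrate the "first in the group" part, a crmsh configuration along
these lines could be used. This is only a sketch: the resource IDs, the
'custom' provider name and the Filesystem/oracle primitives are
placeholders for whatever actually manages your database, and the monitor
interval has to stay well below the 15-second window the check above uses
to spot the other node writing.

# placeholder names and parameters - adapt to your environment
primitive p_oralock ocf:custom:oralock \
        op monitor interval="10s" timeout="60s"
primitive p_fs_oradata ocf:heartbeat:Filesystem \
        params device="/dev/mapper/mpath_data" directory="/u01/oradata" fstype="ext4"
primitive p_oradb ocf:heartbeat:oracle \
        params sid="ORCL"
group g_oracle p_oralock p_fs_oradata p_oradb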




2014-03-20 19:34 GMT+04:00 Саша Александров <shurrman at gmail.com>:

> Hi!
>
> I removed all cluster-related stuff and installed from
> http://download.opensuse.org/repositories/network:/ha-clustering:/Stable/CentOS_CentOS-6/x86_64/
> However, stonith-ng uses fence_* agents here... So I cannot put this into
> crmsh:
>
> primitive stonith_sbd stonith:external/sbd
>
> :-(
>
>
> 2014-03-19 20:14 GMT+04:00 Lars Marowsky-Bree <lmb at suse.com>:
>
> On 2014-03-19T19:20:35, Саша Александров <shurrman at gmail.com> wrote:
>>
>> > Now, we got shared storage over multipath FC there, so we need to move
>> > from drbd to shared storage. And I got totally confused now - I cannot
>> > find a guide on how to set things up. I see two options:
>> > - use gfs2
>> > - use ext4 with sbd
>>
>> If you don't need concurrent access from both nodes to the same file
>> system, using ext4/XFS in a fail-over configuration is to be preferred
>> over the complexity of a cluster file system like GFS2/OCFS2.
>>
>> RHT has chosen to not ship sbd, unfortunately, so you can't use this
>> very reliable fencing mechanism on CentOS/RHEL. Or you'd have to build
>> it yourself. Assuming you have hardware fencing right now, you can
>> continue to use that too.
>>
>>
>> Regards,
>>     Lars
>>
>> --
>> Architect Storage/HA
>> SUSE LINUX Products GmbH, GF: Jeff Hawn, Jennifer Guild, Felix
>> Imendörffer, HRB 21284 (AG Nürnberg)
>> "Experience is the name everyone gives to their mistakes." -- Oscar Wilde
>>
>>
>
>


-- 
Best regards, AAA.