[Pacemaker] Fw: Fw: Configuration for FS over DRBD over LVM

Bob Schatz bschatz at yahoo.com
Thu Jul 21 00:26:47 CET 2011


Okay, this configuration works on one node (I am waiting for a hardware problem to be fixed before testing with the second node):

node cnode-1-3-5
node cnode-1-3-6
primitive glance-drbd ocf:linbit:drbd \
        params drbd_resource="glance-repos-drbd" \
        op start interval="0" timeout="240" \
        op stop interval="0" timeout="100" \
        op monitor interval="59s" role="Master" timeout="30s" \
        op monitor interval="61s" role="Slave" timeout="30s"
primitive glance-fs ocf:heartbeat:Filesystem \
        params device="/dev/drbd1" directory="/glance-mount" fstype="ext4" \
        op start interval="0" timeout="60" \
        op monitor interval="60" timeout="60" OCF_CHECK_LEVEL="20" \
        op stop interval="0" timeout="120"
primitive glance-ip ocf:heartbeat:IPaddr2 \
        params ip="10.4.0.25" nic="br100:1" \
        op monitor interval="5s"
primitive glance-repos ocf:heartbeat:LVM \
        params volgrpname="glance-repos" exclusive="true" \
        op start interval="0" timeout="30" \
        op stop interval="0" timeout="30"
group glance-repos-fs-group glance-fs glance-ip \
        meta target-role="Started"
ms ms_drbd glance-drbd \
        meta master-node-max="1" clone-max="2" clone-node-max="1" globally-unique="false" notify="true" target-role="Master"
colocation coloc-rule-w-master inf: ms_drbd:Master glance-repos-fs-group
colocation coloc-rule-w-master2 inf: glance-repos ms_drbd:Master
order glance-order-fs-after-drbd inf: glance-repos:start ms_drbd:start
order glance-order-fs-after-drbd-stop inf: glance-repos-fs-group:stop ms_drbd:demote
order glance-order-fs-after-drbd-stop2 inf: ms_drbd:demote ms_drbd:stop
order glance-order-fs-after-drbd-stop3 inf: ms_drbd:stop glance-repos:stop
order glance-order-fs-after-drbd2 inf: ms_drbd:start ms_drbd:promote
order glance-order-fs-after-drbd3 inf: ms_drbd:promote glance-repos-fs-group:start
property $id="cib-bootstrap-options" \
        dc-version="1.0.8-042548a451fce8400660f6031f4da6f0223dd5dd" \
        cluster-infrastructure="openais" \
        expected-quorum-votes="1" \
        stonith-enabled="false" \
        no-quorum-policy="ignore" \
        last-lrm-refresh="1310768814"
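
A quick way to sanity-check the single-node state (crm_mon, drbdadm, and mount are the standard tools; the expected outputs are my assumptions, given that the second node is down):

        crm_mon -1                        # one-shot cluster status; ms_drbd should show one Master instance
        drbdadm role glance-repos-drbd    # should report Primary/Unknown while the peer is offline
        mount | grep glance-mount         # confirms the ext4 filesystem is mounted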

I will let everyone know how testing goes.


Thanks,

Bob

----- Forwarded Message -----
From: Bob Schatz <bschatz at yahoo.com>
To: "pacemaker at oss.clusterlabs.org" <pacemaker at oss.clusterlabs.org>
Sent: Wednesday, July 20, 2011 1:38 PM
Subject: [Pacemaker]  Fw:  Configuration for FS over DRBD over LVM


One correction:

I removed the "location" constraint and simply went with this:

      colocation coloc-rule-w-master inf: glance-repos ms_drbd:Master glance-repos-fs-group
      order glance-order-fs-after-drbd inf: glance-repos:start ms_drbd:promote glance-repos-fs-group:start
      order glance-order-fs-after-drbd2 inf: glance-repos-fs-group:stop ms_drbd:demote ms_drbd:stop glance-repos:stop

I explicitly called out the stop of DRBD before the stop of LVM. The syslog attached previously is for this configuration.


Thanks,

Bob


________________________________
From: Bob Schatz <bschatz at yahoo.com>
To: "pacemaker at oss.clusterlabs.org" <pacemaker at oss.clusterlabs.org>
Sent: Wednesday, July 20, 2011 11:32 AM
Subject: [Pacemaker] Fw:  Configuration for FS over DRBD over LVM


I tried another test based on this thread:

http://www.gossamer-threads.com/lists/linuxha/pacemaker/65928?search_string=lvm%20drbd;#65928

I removed the "location" constraint and simply went with this:

        colocation coloc-rule-w-master inf: glance-repos ms_drbd:Master glance-repos-fs-group
        order glance-order-fs-after-drbd inf: glance-repos:start ms_drbd:promote glance-repos-fs-group:start
        order glance-order-fs-after-drbd2 inf: glance-repos-fs-group:stop ms_drbd:demote glance-repos:stop


The stop actions were called in this order:

stop file system
demote DRBD
stop LVM   *****
stop DRBD *****

instead of:

stop file system
demote DRBD
stop DRBD ******
stop LVM ******
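
Presumably the demote alone does not imply a DRBD stop, so ms_drbd:stop has to be called out explicitly between the demote and the LVM stop; a sketch of the corrected constraint (this is exactly the change made in the follow-up message above):

        order glance-order-fs-after-drbd2 inf: glance-repos-fs-group:stop ms_drbd:demote ms_drbd:stop glance-repos:stop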

I see these messages in the log; based on reading other threads, I believe they are just debug messages:

        pengine: [21021]: debug: text2task: Unsupported action: glance-order-fs-after-drbd-0-start-begin
        pengine: [21021]: debug: text2task: Unsupported action: glance-order-fs-after-drbd-0-start-end
        pengine: [21021]: debug: text2task: Unsupported action: glance-order-fs-after-drbd-0-stop-begin
        pengine: [21021]: debug: text2task: Unsupported action: glance-order-fs-after-drbd-0-stop-end
        pengine: [21021]: debug: text2task: Unsupported action: glance-order-fs-after-drbd-1-promote-begin
        pengine: [21021]: debug: text2task: Unsupported action: glance-order-fs-after-drbd-1-promote-end
        pengine: [21021]: debug: text2task: Unsupported action: glance-order-fs-after-drbd-1-demote-begin
        pengine: [21021]: debug: text2task: Unsupported action: glance-order-fs-after-drbd-1-demote-end
        pengine: [21021]: debug: text2task: Unsupported action: glance-order-fs-after-drbd-2-start-begin
        pengine: [21021]: debug: text2task: Unsupported action: glance-order-fs-after-drbd-2-start-end
        pengine: [21021]: debug: text2task: Unsupported action: glance-order-fs-after-drbd-2-stop-begin
        pengine: [21021]: debug: text2task: Unsupported action: glance-order-fs-after-drbd-2-stop-end
        pengine: [21021]: debug: text2task: Unsupported action: glance-order-fs-after-drbd2-0-stop-begin
        pengine: [21021]: debug: text2task: Unsupported action: glance-order-fs-after-drbd2-0-stop-end
        pengine: [21021]: debug: text2task: Unsupported action: glance-order-fs-after-drbd2-0-start-begin
        pengine: [21021]: debug: text2task: Unsupported action: glance-order-fs-after-drbd2-0-start-end
        pengine: [21021]: debug: text2task: Unsupported action: glance-order-fs-after-drbd2-1-demote-begin
        pengine: [21021]: debug: text2task: Unsupported action: glance-order-fs-after-drbd2-1-demote-end
        pengine: [21021]: debug: text2task: Unsupported action: glance-order-fs-after-drbd2-1-promote-begin
        pengine: [21021]: debug: text2task: Unsupported action: glance-order-fs-after-drbd2-1-promote-end
        pengine: [21021]: debug: text2task: Unsupported action: glance-order-fs-after-drbd2-2-stop-begin
        pengine: [21021]: debug: text2task: Unsupported action: glance-order-fs-after-drbd2-2-stop-end
        pengine: [21021]: debug: text2task: Unsupported action: glance-order-fs-after-drbd2-2-start-begin
        pengine: [21021]: debug: text2task: Unsupported action: glance-order-fs-after-drbd2-2-start-end

I have attached a syslog-pacemaker log of the "/etc/init.d/corosync start" through "/etc/init.d/corosync stop" sequence.


Thanks,

Bob

----- Forwarded Message -----
From: Bob Schatz <bschatz at yahoo.com>
To: "pacemaker at oss.clusterlabs.org" <pacemaker at oss.clusterlabs.org>
Sent: Tuesday, July 19, 2011 4:38 PM
Subject: [Pacemaker] Configuration for FS over DRBD over LVM


Hi,

I am trying to configure a filesystem (FS) running on top of DRBD on top of LVM, that is:

     FS
     |
    DRBD
     |
    LVM

I am using Pacemaker 1.0.8, Ubuntu 10.04 and DRBD 8.3.7.
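
For context, the underlying DRBD 8.3 resource definition would look roughly like this; only the resource name, the /dev/drbd1 minor, and the glance-repos volume group appear in my cluster configuration, so the logical volume name and the replication addresses/port below are placeholders:

        resource glance-repos-drbd {
            on cnode-1-3-5 {
                device    /dev/drbd1;
                disk      /dev/glance-repos/glance-lv;   # backing LV in the glance-repos VG (LV name assumed)
                address   192.168.1.5:7789;              # replication IP/port are placeholders
                meta-disk internal;
            }
            on cnode-1-3-6 {
                device    /dev/drbd1;
                disk      /dev/glance-repos/glance-lv;
                address   192.168.1.6:7789;
                meta-disk internal;
            }
        }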

After reviewing the manuals (Pacemaker Explained 1.0, the DRBD 8.4 User's Guide, etc.), I came up with this Pacemaker configuration:

node cnode-1-3-5
node cnode-1-3-6
primitive glance-drbd ocf:linbit:drbd \
        params drbd_resource="glance-repos-drbd" \
        op start interval="0" timeout="240" \
        op stop interval="0" timeout="100" \
        op monitor interval="59s" role="Master" timeout="30s" \
        op monitor interval="61s" role="Slave" timeout="30s"
primitive glance-fs ocf:heartbeat:Filesystem \
        params device="/dev/drbd1" directory="/glance-mount" fstype="ext4" \
        op start interval="0" timeout="60" \
        op monitor interval="60" timeout="60" OCF_CHECK_LEVEL="20" \
        op stop interval="0" timeout="120"
primitive glance-repos ocf:heartbeat:LVM \
        params volgrpname="glance-repos" exclusive="true" \
        op start interval="0" timeout="30" \
        op stop interval="0" timeout="30"
group glance-repos-fs-group glance-fs
ms ms_drbd glance-drbd \
        meta master-node-max="1" clone-max="2" clone-node-max="1" globally-unique="false" notify="true" target-role="Master"
location drbd_on_node1 ms_drbd \
        rule $id="drbd_on_node1-rule" $role="Master" 100: #uname eq cnode-1-3-5
colocation coloc-rule-w-master inf: glance-repos ms_drbd:Master
order glance-order-fs-after-drbd inf: glance-repos:start ms_drbd:promote glance-repos-fs-group:start
property $id="cib-bootstrap-options" \
        dc-version="1.0.8-042548a451fce8400660f6031f4da6f0223dd5dd" \
        cluster-infrastructure="openais" \
        expected-quorum-votes="1" \
        stonith-enabled="false" \
        no-quorum-policy="ignore" \
        last-lrm-refresh="1310768814"

On one node, things come up cleanly. In fact, debug messages in the agents show that the start() functions are called and exit in the expected order (LVM start, DRBD start, then Filesystem start).

The problem occurs when I run "/etc/init.d/corosync stop" on a single node. The stop() functions are called in this order:

1. LVM stop
2. Filesystem stop
3. DRBD stop

What I have tried:

1. I tried setting the score of the order constraint to "500", assuming the colocation rule would then take effect first. Still the same problem.
2. I tried leaving the ":start" and ":promote" actions off the "order" line. The stop order was still LVM, Filesystem, DRBD.
3. I tried adding another colocation rule, "colocation coloc-rule-w-master2 inf: ms_drbd:Master glance-repos-fs-group", to tie glance-repos-fs-group to the same node as DRBD. Stop still had the same issue. I assume I will still need this rule when I add the second node to the test.

Any suggestions would be appreciated.

As a side note, the reason I have a group for the filesystem is that I would like to add an application and an IP address to the group once I get this working. Also, the reason I have LVM under DRBD is that I want to be able to grow the LVM volume as needed and then expand the DRBD volume on top of it.
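
That grow path would look something like this, assuming internal metadata and the same hypothetical LV name as above (lvextend, drbdadm resize, and resize2fs are the standard commands; drbdadm resize is run on the Primary once the LV has been grown on both nodes):

        lvextend -L +10G /dev/glance-repos/glance-lv   # grow the backing LV on each node (LV name assumed)
        drbdadm resize glance-repos-drbd               # have DRBD take up the new space
        resize2fs /dev/drbd1                           # grow the ext4 filesystem online, on the Primary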


Thanks in advance,

Bob
_______________________________________________
Pacemaker mailing list: Pacemaker at oss.clusterlabs.org
http://oss.clusterlabs.org/mailman/listinfo/pacemaker

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://developerbugs.linux-foundation.org/enter_bug.cgi?product=Pacemaker

