[Pacemaker] VIP cannot return to active/active status after failover

Andrew Beekhof <andrew@beekhof.net>
Tue Aug 27 01:54:57 UTC 2013


On 20/08/2013, at 9:48 PM, WenWen <deutschland.gray@gmail.com> wrote:

> Hi, 
> I am building an active/active cluster with DRBD, Pacemaker, and Apache.
> When I start the cman and pacemaker services on both nodes for the first
> time, everything is fine.
> 
> Then I set node1 to standby and bring it online again.
> Apache and the VIP do not go back to active/active.
> 
> The VIP and Apache services now run on only one node, because we set a
> colocation constraint between the VIP and Apache resources.
> 
> Is this by design?
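
For reference, the standby cycle described above corresponds to crm shell
commands along these lines (a sketch; the node name is taken from the
configuration below):

    # put node1 into standby, then bring it back online
    crm node standby node1.test.com
    crm node online node1.test.com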

Not intentionally.
Can you send us the output of 'cibadmin -Ql' while the cluster is in that
second state, so I can check whether newer versions behave any better?
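
The requested dump can be captured to a file for attaching (the output file
name here is arbitrary):

    # -Q queries the CIB, -l reads the local node's copy
    cibadmin -Ql > cib-after-failover.xml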

> 
> I hope somebody can help me.
> Here are my configuration files.
> 
> ----on node1-----
> Online: [ node1.test.com node2.test.com ]
> 
> Master/Slave Set: WebDataClone [WebData]
>     Masters: [ node1.test.com node2.test.com ]
> Clone Set: WebFSClone [WebFS]
>     Started: [ node1.test.com node2.test.com ]
> Clone Set: WebSiteClone [WebSite]
>     Started: [ node2.test.com ]
>     Stopped: [ WebSite:1 ]
> Clone Set: WebIP [ClusterIP] (unique)
>     ClusterIP:0        (ocf::heartbeat:IPaddr2):       Started node2.test.com
>     ClusterIP:1        (ocf::heartbeat:IPaddr2):       Started node2.test.com
> stonith_fence_virsh_node1      (stonith:fence_virsh):  Started node2.test.com
> stonith_fence_virsh_node2      (stonith:fence_virsh):  Started node1.test.com
> ------------------------------------
> -----------on node2 -----------------------
> Online: [ node1.test.com node2.test.com ]
> 
> Master/Slave Set: WebDataClone [WebData]
>     Masters: [ node1.test.com node2.test.com ]
> Clone Set: WebFSClone [WebFS]
>     Started: [ node1.test.com node2.test.com ]
> Clone Set: WebSiteClone [WebSite]
>     Started: [ node2.test.com ]
>     Stopped: [ WebSite:1 ]
> Clone Set: WebIP [ClusterIP] (unique)
>     ClusterIP:0        (ocf::heartbeat:IPaddr2):       Started node2.test.com
>     ClusterIP:1        (ocf::heartbeat:IPaddr2):       Started node2.test.com
> stonith_fence_virsh_node1      (stonith:fence_virsh):  Started node2.test.com
> stonith_fence_virsh_node2      (stonith:fence_virsh):  Started node1.test.com
> [root@node2 htdocs]#
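
This status output looks like it came from crm_mon; assuming so, the same
one-shot snapshot can be taken at any time with:

    # print the cluster status once and exit
    crm_mon -1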
> 
> ------- configuration ----
> [root@node1 ~]# crm configure show
> node node1.test.com \
>        attributes standby="off"
> node node2.test.com \
>        attributes standby="off"
> primitive ClusterIP ocf:heartbeat:IPaddr2 \
>        params ip="192.168.119.143" cidr_netmask="24" clusterip_hash="sourceip" \
>        op monitor interval="30s"
> primitive WebData ocf:linbit:drbd \
>        params drbd_resource="wwwdata" \
>        op start interval="0" timeout="240" \
>        op stop interval="0" timeout="100" \
>        op monitor interval="29s" role="Master" \
>        op monitor interval="31s" role="Slave"
> primitive WebFS ocf:heartbeat:Filesystem \
>        params device="/dev/drbd0" directory="/web" fstype="gfs2" \
>        op start interval="0" timeout="60" \
>        op stop interval="0" timeout="60" \
>        op monitor interval="60" timeout="40"
> primitive WebSite lsb:httpd \
>        op start interval="0" timeout="30" \
>        op stop interval="0" timeout="30" \
>        op monitor interval="30" timeout="20"
> primitive stonith_fence_virsh_node1 stonith:fence_virsh \
>        params action="reboot" ipaddr="192.168.119.141" login="root" \
>        identity_file="/root/.ssh/id_rsa" port="node1.test.com"
> primitive stonith_fence_virsh_node2 stonith:fence_virsh \
>        params action="reboot" ipaddr="192.168.119.142" login="root" \
>        identity_file="/root/.ssh/id_rsa" port="node2.test.com"
> ms WebDataClone WebData \
>        meta master-max="2" master-node-max="1" clone-max="2" clone-node-max="1" notify="true"
> clone WebFSClone WebFS \
>        meta interleave="true"
> clone WebIP ClusterIP \
>        meta globally-unique="true" clone-max="2" clone-node-max="2"
> clone WebSiteClone WebSite
> location l_stonith_fence_virsh_node1_noton_node1 stonith_fence_virsh_node1 -inf: node1.test.com
> location l_stonith_fence_virsh_node2_noton_node2 stonith_fence_virsh_node2 -inf: node2.test.com
> colocation WebSite-with-WebFS inf: WebSiteClone WebFSClone
> colocation fs_on_drbd inf: WebFSClone WebDataClone:Master
> colocation website-with-ip inf: WebSiteClone WebIP
> order WebFS-after-WebData inf: WebDataClone:promote WebFSClone:start
> order WebSite-after-WebFS inf: WebFSClone WebSiteClone
> order apache-after-vip inf: WebIP WebSiteClone
> property $id="cib-bootstrap-options" \
>        dc-version="1.1.8-7.el6-394e906" \
> ------------------------------------------------
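
To see why the cluster keeps both ClusterIP instances on node2, the
allocation scores can be dumped from the live CIB with crm_simulate
(a diagnostic sketch; crm_simulate ships with Pacemaker):

    # show cluster status plus allocation scores, using the live CIB
    crm_simulate -sL

    # dry-run a transition with node1 online, without touching the cluster
    crm_simulate -sL --node-up node1.test.com

Note that with clone-node-max="2" on WebIP, running both instances on a
single node is a legal placement; the scores show why it is the preferred one.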
> 
> ----/etc/drbd.d/global_common.conf------------
> 
> global {
>        usage-count yes;
> }
> 
> common {
>        handlers {
>        }
> 
>        startup {
>            become-primary-on both;
>        }
> 
>        options {
>        }
> 
>        disk {
>            fencing resource-and-stonith;
>        }
> 
>        net {
>            allow-two-primaries;
>            after-sb-0pri discard-zero-changes;
>            after-sb-1pri discard-secondary;
>            after-sb-2pri disconnect;
>            protocol C;
>        }
> }
> -----------------------------
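
Given the after-split-brain policies above, it is worth confirming that the
peers are still connected after the failover (standard drbdadm queries; the
resource name is from the section below):

    # should report "Connected" while replication is healthy
    drbdadm cstate wwwdata

    # current disk states, e.g. "UpToDate/UpToDate"
    drbdadm dstate wwwdata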
> ----drbd resource----
> resource wwwdata {
>  device    /dev/drbd0;
>  disk      /dev/vg02/webdata;
>  meta-disk internal;
>  on node1.test.com {
>    address   192.168.119.141:7789;
>  }
>  on node2.test.com {
>    address   192.168.119.142:7789;
>  }
> }
> ---------
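
With allow-two-primaries set, dual-primary operation can be sanity-checked
outside Pacemaker (DRBD 8.x interfaces):

    # each node should report "Primary/Primary" once both are promoted
    drbdadm role wwwdata

    # roles and connection state for all configured resources
    cat /proc/drbd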
> 
