[Pacemaker] VIP on Active/Active cluster
Jake Smith
jsmith at argotec.com
Mon May 14 16:28:52 UTC 2012
----- Original Message -----
> From: "Paul Damken" <zen.suite at gmail.com>
> To: pacemaker at clusterlabs.org
> Sent: Monday, May 14, 2012 9:45:30 AM
> Subject: Re: [Pacemaker] VIP on Active/Active cluster
>
> Jake Smith <jsmith at ...> writes:
>
> >
> >
> > clone-node-max="2" should only be one. How about the output from
> > "crm_mon -fr1" and "ip a s" on each node?
> >
> > Jake
> >
> > ----- Reply message -----
> > From: "Paul Damken" <zen.suite <at> gmail.com>
> > To: <pacemaker <at> oss.clusterlabs.org>
> > Subject: [Pacemaker] VIP on Active/Active cluster
> > Date: Sat, May 12, 2012 2:49 pm
> >
> >
> >
> >
> > _______________________________________________
> > Pacemaker mailing list: Pacemaker at ...
> > http://oss.clusterlabs.org/mailman/listinfo/pacemaker
> >
> > Project Home: http://www.clusterlabs.org
> > Getting started:
> > http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
> > Bugs: http://bugs.clusterlabs.org
> >
>
> Jake, thanks. Here is the full info. Same behavior: the VIP is neither
> pingable nor reachable.
>
> Do you think that Share VIP should work on SLES 11 SP1 HAE?
> I cannot get this VIP to work.
I use Ubuntu so I can't say 100%, but I would expect so... I use IPaddr2 successfully in my cluster, so I know it *can* work in general.
Your cidr_netmask looks odd to me given the broadcast address: with a /22 the broadcast would be 192.168.3.255, while a broadcast of 192.168.1.255 implies a /24. Should it be 24 (or 23), not 22?
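For reference, you can confirm the mismatch with Python's stdlib ipaddress module (a quick check sketch, nothing cluster-specific):

```python
import ipaddress

# The VIP as configured: 192.168.1.100 with cidr_netmask="22"
vip_22 = ipaddress.ip_interface("192.168.1.100/22")
print(vip_22.network.broadcast_address)  # 192.168.3.255 -- not the configured 192.168.1.255

# The same VIP with a /24 mask matches the broadcast in the config
vip_24 = ipaddress.ip_interface("192.168.1.100/24")
print(vip_24.network.broadcast_address)  # 192.168.1.255
```

So either the broadcast parameter or the cidr_netmask is wrong; they need to agree.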
>
> Resources:
>
> primitive ip_vip ocf:heartbeat:IPaddr2 \
>         params ip="192.168.1.100" nic="bond0" cidr_netmask="22" \
>         broadcast="192.168.1.255" clusterip_hash="sourceip-sourceport" \
>         iflabel="VIP1" \
>         op start interval="0" timeout="20" \
>         op stop interval="0" timeout="20" \
>         op monitor interval="10" timeout="20" start-delay="0"
>
> clone cl_vip ip_vip \
>         meta interleave="true" globally-unique="true" clone-max="2" \
>         clone-node-max="1" target-role="Started" is-managed="true"
You don't really need any of those meta parameters... just "clone cl_vip ip_vip" and nothing else. globally-unique could be part of the problem too:

* interleave defaults to false if not defined, and I'm pretty sure you want it false
* globally-unique defaults to false and should not be true for your use case
* clone-max defaults to the number of nodes in the cluster, so with 2 nodes you get 2 clones
* clone-node-max defaults to 1
* target-role and is-managed are auto-generated by certain cluster actions and are fine either as-is or removed
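In other words, the whole clone definition can shrink to something like this (a sketch against your resource names; the commented defaults are what you get implicitly):

```
clone cl_vip ip_vip
# implied defaults:
#   interleave=false  globally-unique=false
#   clone-max=<number of nodes>  clone-node-max=1
```

With globally-unique gone, IPaddr2 no longer sets up the iptables CLUSTERIP machinery, which is the usual suspect when a cloned VIP stops answering pings.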
>
> crm_mon:
>
> ============
> Last updated: Mon May 14 08:27:50 2012
> Stack: openais
> Current DC: hanode1 - partition with quorum
> Version: 1.1.5-5bd2b9154d7d9f86d7f56fe0a74072a5a6590c60
> 2 Nodes configured, 2 expected votes
> 37 Resources configured.
> ============
>
> Online: [ hanode2 hanode1 ]
>
> Full list of resources:
>
> cluster_mon (ocf::pacemaker:ClusterMon): Started hanode1
> Clone Set: HASI [HASI_grp]
> Started: [ hanode2 hanode1 ]
> hanode1-stonith (stonith:external/ipmi-operator): Started hanode2
> hanode2-stonith (stonith:external/ipmi-operator): Started hanode1
> vghanode1 (ocf::heartbeat:LVM): Started hanode1
> vghanode2 (ocf::heartbeat:LVM): Started hanode2
> Clone Set: ora [ora_grp]
> Started: [ hanode2 hanode1 ]
> Clone Set: cl_vip [ip_vip] (unique)
> ip_vip:0 (ocf::heartbeat:IPaddr2): Started hanode2
> ip_vip:1 (ocf::heartbeat:IPaddr2): Started hanode1
>
should not be (unique) as I stated above
>
>
> hanode1:~ # ip a s
> 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
> link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
> inet 127.0.0.1/8 brd 127.255.255.255 scope host lo
> inet 127.0.0.2/8 brd 127.255.255.255 scope host secondary lo
> inet6 ::1/128 scope host
> valid_lft forever preferred_lft forever
> 2: eth0: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master bond0 state UP qlen 1000
> link/ether 9c:8e:99:24:72:a0 brd ff:ff:ff:ff:ff:ff
> 3: eth1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master bond0 state UP qlen 1000
> link/ether 9c:8e:99:24:72:a0 brd ff:ff:ff:ff:ff:ff
> 4: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
> link/ether 9c:8e:99:24:72:a0 brd ff:ff:ff:ff:ff:ff
> inet 192.168.1.58/22 brd 192.168.1.255 scope global bond0
> inet 192.168.1.100/22 brd 192.168.1.255 scope global secondary bond0:VIP1
> inet6 fe80::9e8e:99ff:fe24:72a0/64 scope link
> valid_lft forever preferred_lft forever
>
I would try the changes above to the clone and (possibly) the netmask.
Then, if it's still not pingable, I would temporarily stop any firewall on the servers and test again, just to rule the firewall out.
If that doesn't work, how about the output from "crm_mon -fr1", "crm configure show", and "ip a s" from each node?
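Something like the following covers those checks (a sketch; the firewall command assumes SLES's SuSEfirewall2, adjust for your distro, and re-enable the firewall afterwards):

```shell
# temporarily stop the firewall to rule it out
rcSuSEfirewall2 stop

# test the VIP from another host on the same subnet
ping -c 3 192.168.1.100
arping -I bond0 -c 3 192.168.1.100   # checks ARP resolution on the local segment

# gather the state to post back to the list, on each node
crm_mon -fr1
crm configure show
ip a s
```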
HTH
Jake