[Pacemaker] How does fail over IP work

Carlos G Mendioroz tron at huapi.ba.ar
Thu Mar 24 05:30:41 EDT 2011


IPaddr2 (the RA) uses iproute2 to maintain a secondary address on the
interface. ifconfig does not show addresses added that way (it only
lists labelled eth0:0 style aliases); use "ip addr ls" to list all
addresses on the interface.
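
On the node currently holding ClusterIP you should see something like
this (eth0 and the exact flags are assumptions based on your ifconfig
listing, output abridged):

# ip addr ls dev eth0
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 state UP
    inet 192.168.0.150/24 brd 192.168.0.255 scope global eth0
    inet 192.168.0.155/32 scope global eth0

The 192.168.0.155/32 entry comes from your cidr_netmask="32" parameter.
Under the hood the RA start operation does roughly the equivalent of
"ip addr add 192.168.0.155/32 dev eth0", and the stop operation removes
it again, so the address moves with the resource even though ifconfig
never lists it.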

-Carlos

Brent Bolin @ 24/03/2011 00:38 -0300 wrote:
> Have successfully set up CentOS 5.5 -
> 
> # crm status
> ============
> Last updated: Wed Mar 23 22:32:40 2011
> Stack: openais
> Current DC: cluster1 - partition with quorum
> Version: 1.0.10-da7075976b5ff0bee71074385f8fd02f296ec8a3
> 2 Nodes configured, 2 expected votes
> 2 Resources configured.
> ============
> Online: [ cluster1 cluster2 ]
>  ClusterIP (ocf::heartbeat:IPaddr2): Started cluster1
>  WebSite (ocf::heartbeat:apache): Started cluster1
> 
> Everything appears to work fine when failing over.  However, I don't
> understand why it works:
> 
> crm configure show
> node cluster1 \
>         attributes standby="off"
> node cluster2 \
>         attributes standby="off"
> primitive ClusterIP ocf:heartbeat:IPaddr2 \
>         params ip="192.168.0.155" cidr_netmask="32" \
>         op monitor interval="30s"
> primitive WebSite ocf:heartbeat:apache \
>         params configfile="/etc/httpd/conf/httpd.conf" \
>         op monitor interval="1min"
> colocation website-with-ip inf: WebSite ClusterIP
> property $id="cib-bootstrap-options" \
>         dc-version="1.0.10-da7075976b5ff0bee71074385f8fd02f296ec8a3" \
>         cluster-infrastructure="openais" \
>         expected-quorum-votes="2" \
>         stonith-enabled="false" \
>         no-quorum-policy="ignore"
> rsc_defaults $id="rsc-options" \
>         resource-stickiness="100"
> 
> cluster1 IP is 192.168.0.150
> cluster2 IP is 192.168.0.151
> 
> The cluster failover IP is 192.168.0.155
> 
> I don't see any IP alias -
> 
> [root at cluster1 ~]# ifconfig
> eth0      Link encap:Ethernet  HWaddr 00:0C:29:8B:71:2C
>           inet addr:192.168.0.150  Bcast:192.168.0.255  Mask:255.255.255.0
>           inet6 addr: fe80::20c:29ff:fe8b:712c/64 Scope:Link
>           UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
>           RX packets:20758 errors:0 dropped:0 overruns:0 frame:0
>           TX packets:28449 errors:0 dropped:0 overruns:0 carrier:0
>           collisions:0 txqueuelen:1000
>           RX bytes:2319277 (2.2 MiB)  TX bytes:3419927 (3.2 MiB)
> 
> lo        Link encap:Local Loopback
>           inet addr:127.0.0.1  Mask:255.0.0.0
>           inet6 addr: ::1/128 Scope:Host
>           UP LOOPBACK RUNNING  MTU:16436  Metric:1
>           RX packets:584 errors:0 dropped:0 overruns:0 frame:0
>           TX packets:584 errors:0 dropped:0 overruns:0 carrier:0
>           collisions:0 txqueuelen:0
>           RX bytes:307860 (300.6 KiB)  TX bytes:307860 (300.6 KiB)
> 
> [root at cluster1 ~]# ping 192.168.0.155
> PING 192.168.0.155 (192.168.0.155) 56(84) bytes of data.
> 64 bytes from 192.168.0.155: icmp_seq=1 ttl=64 time=0.021 ms
> 64 bytes from 192.168.0.155: icmp_seq=2 ttl=64 time=0.025 ms
> 
> [root at cluster1 ~]# ssh 192.168.0.155
> root at 192.168.0.155's password:
> Last login: Wed Mar 23 22:15:11 2011 from cluster2
> [root at cluster1 ~]#
> 
> 
> 
> How does this work?
> 

-- 
Carlos G Mendioroz  <tron at huapi.ba.ar>  LW7 EQI  Argentina



