[Pacemaker] Route OCF RA and Failover IP
Billy Guthrie
bguthrie at billyguthrie.com
Sat Nov 28 21:00:10 UTC 2009
list,
I am trying to add a route with the Route OCF RA. The resource starts fine on the active node. However, on the passive node, the resource
is reported as not installed.
============
Last updated: Sat Nov 28 15:43:26 2009
Stack: openais
Current DC: node1 - partition with quorum
Version: 1.0.6-cebe2b6ff49b36b29a3bd7ada1c4701c7470febe
2 Nodes configured, 2 expected votes
2 Resources configured.
============
Online: [ node1 node2 ]

 Master/Slave Set: ms_drbd_pbx
     Masters: [ node1 ]
     Slaves: [ node2 ]
 Resource Group: pbx
     fs_pbx      (ocf::heartbeat:Filesystem):   Started node1
     ip_pbx      (ocf::heartbeat:IPaddr):       Started node1
     pbxd        (lsb:pbx):                     Started node1
     pingd       (ocf::pacemaker:pingd):        Started node1
     gwsrc_route (ocf::heartbeat:Route):        Started node1

Failed actions:
    gwsrc_route_monitor_0 (node=telego5, call=106, rc=5, status=complete): not installed
node1:~# crm configure show
node node1 \
        attributes standby="off"
node node2 \
        attributes standby="off"
primitive drbd_pbx ocf:linbit:drbd \
        params drbd_resource="r0" \
        meta target-role="Started"
primitive fs_pbx ocf:heartbeat:Filesystem \
        params device="/dev/drbd0" directory="/usr/local/pbx" fstype="ext3"
primitive gwsrc_route ocf:heartbeat:Route \
        params destination="0.0.0.0/0" gateway="24.10.10.113" source="24.10.10.118" \
        meta target-role="Started"
primitive ip_pbx ocf:heartbeat:IPaddr \
        params ip="24.10.10.118" \
        op monitor interval="10s"
primitive pbxd lsb:pbx
primitive pingd ocf:pacemaker:pingd \
        params host_list="24.10.10.113" multiplier="100" \
        op monitor interval="15s" timeout="5s"
group pbx fs_pbx ip_pbx pbxd pingd gwsrc_route
ms ms_drbd_pbx drbd_pbx \
        meta master-max="1" master-node-max="1" clone-max="2" clone-node-max="1" notify="true"
location cli-prefer-ip_pbx ip_pbx \
        rule $id="cli-prefer-rule-ip_pbx" inf: #uname eq node1
colocation pbx_on_drbd inf: pbx ms_drbd_pbx:Master
order pbx_after_drbd inf: ms_drbd_pbx:promote pbx:start
node1:~# ip route
24.10.10.112/28 dev eth0 proto kernel scope link src 24.10.10.116
10.137.136.0/24 dev eth1 proto kernel scope link src 10.137.136.116
default via 174.137.136.113 dev eth0 src 24.10.10.118
default via 174.137.136.113 dev eth0 metric 5
The route is being added:
default via 174.137.136.113 dev eth0 src 24.10.10.118
because 24.10.10.118 exists:
node1:~# ifconfig eth0:0
eth0:0 Link encap:Ethernet HWaddr 00:29:48:30:e8:e0
inet addr:24.10.10.118 Bcast:24.10.10.127 Mask:255.255.255.240
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
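For reference, what the cluster is doing here can be reproduced by hand. A rough sketch using the addresses from the output above (the RAs' exact ip invocations may differ):

```shell
# Bring up the failover IP as an alias on eth0
# (normally done by the IPaddr RA for ip_pbx)
ip addr add 24.10.10.118/28 dev eth0 label eth0:0

# Add the source-pinned default route (what the Route RA does with
# destination="0.0.0.0/0" plus the gateway and source params); this
# only works while 24.10.10.118 exists on the node
ip route replace default via 174.137.136.113 dev eth0 src 24.10.10.118
```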
On node2, eth0:0 obviously does not exist, so I think that is why it is erroring out:
gwsrc_route_monitor_0 (node=telego5, call=106, rc=5, status=complete): not installed
This is causing issues when I fail over. Is there any way to keep gwsrc_route_monitor_0 from running on node2 while node1
is active? The resource does need to start on node2 when it fails over. It appears the RA's validation fails because 24.10.10.118 does not exist on node2.
I need all my traffic sourced from 24.10.10.118 and not from the physical interface eth0, which is 24.10.10.116.
There are 2 default routes while a node is active/primary. When eth0:0 is removed here and brought up on the standby/secondary
node, the route "default via 174.137.136.113 dev eth0 src 24.10.10.118" disappears with it, and the second route with metric 5
takes over directing the traffic.
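The fallback described above can be sketched as follows; these are hypothetical commands, assuming the metric-5 route is configured statically outside the cluster:

```shell
# Static backup default route, always present, metric 5
ip route add default via 174.137.136.113 dev eth0 metric 5

# While the node is active, the RA-managed metric-0 route wins:
#   default via 174.137.136.113 dev eth0 src 24.10.10.118
# On failover that route is removed along with eth0:0, and the
# metric-5 route takes over:
ip route del default via 174.137.136.113 dev eth0 src 24.10.10.118
ip route show default
```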
Thanks for your time.
Billy