[Pacemaker] pingd problems
Dejan Muhamedagic
dejanmm at fastmail.fm
Tue Jun 8 17:08:35 UTC 2010
Hi,
On Tue, Jun 08, 2010 at 06:43:11PM +0200, Dalibor Dukic wrote:
> On Sat, 2010-06-05 at 15:36 +0200, Dalibor Dukic wrote:
> > I have a problem with the ping RA not correctly updating the CIB with the
> > appropriate attributes on a fresh start, so afterwards the IPaddr2
> > resources won't start.
>
> Has anyone had a chance to take a look at this?
>
> My setup consists of two nodes running an active/active solution for the
> yate IVR service (Heartbeat with Pacemaker 1.0.8).
>
> I have a ping monitor to the default gateway, which is cloned across the
> nodes, and each node has its own floating address (VIP1 and VIP2).
>
> If a node fails or its ping monitor fails, the other node takes over the
> floating address.
>
> primitive L3_ping ocf:pacemaker:ping \
> params host_list="10.63.97.25" multiplier="100" \
> op monitor interval="10s" timeout="5s" on-fail="standby"
This timeout may be too short.
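For instance, something along these lines (untested, the values are only a
guess, adjust them to what your network can tolerate):

  primitive L3_ping ocf:pacemaker:ping \
    params host_list="10.63.97.25" multiplier="100" dampen="5s" \
    op monitor interval="10s" timeout="60s" on-fail="standby"

dampen is optional; it just delays attribute updates a bit so that a single
lost ping doesn't immediately move resources around.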
> clone L3_ping_clone L3_ping \
> meta globally-unique="false" target-role="Started"
> primitive VIP1 ocf:heartbeat:IPaddr2 \
> params ip="10.63.97.28" \
> op monitor interval="15s"
> primitive VIP2 ocf:heartbeat:IPaddr2 \
> params ip="10.63.97.29" \
> op monitor interval="15s"
>
> The yate LSB service is also cloned across the nodes.
>
> primitive Yate lsb:yate \
> op monitor on-fail="standby" interval="15s"
> clone Yate_clone Yate \
> meta globally-unique="false" target-role="Started"
>
>
> So, the problem lies in the location constraints:
>
> location LOC_VIP1 VIP1 \
> rule $id="LOC_VIP1-rule" 100: #uname eq 7it-ivr-1
> location LOC_VIP1_CONNECTED VIP1 \
> rule $id="LOC_VIP1_CONNECTED-rule" -inf: not_defined L3_ping or L3_ping number:lte 0
> location LOC_VIP2 VIP2 \
> rule $id="LOC_VIP2-rule" 100: #uname eq 7it-ivr-2
> location LOC_VIP2_CONNECTED VIP2 \
> rule $id="LOC_VIP2_CONNECTED-rule" -inf: not_defined L3_ping or L3_ping number:lte 0
Not sure, but I think that the default for the attribute name is
"pingd". Try changing L3_ping to pingd in the constraints.
Thanks,
Dejan
> After configuring the location constraints to tell the cluster to only run
> the floating addresses on a node with a working network connection to the
> default gateway, my IPaddr resources won't start on any cluster
> node.
>
> Node 7it-ivr-1 (5f783be2-eff1-4db7-9b94-0b13a4670bb4): online
> Yate:0 (lsb:yate) Started
> L3_ping:0 (ocf::pacemaker:ping) Started
> Node 7it-ivr-2 (e8b2c3be-1d32-43d8-9876-d73642693ccf): online
> L3_ping:1 (ocf::pacemaker:ping) Started
> Yate:1 (lsb:yate) Started
>
>
> Current allocation scores:
>
> root@7it-ivr-1:~# ptest -sL
> Allocation scores:
> native_color: VIP2 allocation score on 7it-ivr-1: -1000000
> native_color: VIP2 allocation score on 7it-ivr-2: -1000000
> clone_color: Yate_clone allocation score on 7it-ivr-1: 0
> clone_color: Yate_clone allocation score on 7it-ivr-2: 0
> clone_color: Yate:0 allocation score on 7it-ivr-1: 1
> clone_color: Yate:0 allocation score on 7it-ivr-2: 0
> clone_color: Yate:1 allocation score on 7it-ivr-1: 0
> clone_color: Yate:1 allocation score on 7it-ivr-2: 1
> native_color: Yate:0 allocation score on 7it-ivr-1: 1
> native_color: Yate:0 allocation score on 7it-ivr-2: 0
> native_color: Yate:1 allocation score on 7it-ivr-1: -1000000
> native_color: Yate:1 allocation score on 7it-ivr-2: 1
> native_color: VIP1 allocation score on 7it-ivr-1: -1000000
> native_color: VIP1 allocation score on 7it-ivr-2: -1000000
> clone_color: L3_ping_clone allocation score on 7it-ivr-1: 0
> clone_color: L3_ping_clone allocation score on 7it-ivr-2: 0
> clone_color: L3_ping:0 allocation score on 7it-ivr-1: 1
> clone_color: L3_ping:0 allocation score on 7it-ivr-2: 0
> clone_color: L3_ping:1 allocation score on 7it-ivr-1: 0
> clone_color: L3_ping:1 allocation score on 7it-ivr-2: 1
> native_color: L3_ping:0 allocation score on 7it-ivr-1: 1
> native_color: L3_ping:0 allocation score on 7it-ivr-2: 0
> native_color: L3_ping:1 allocation score on 7it-ivr-1: -1000000
> native_color: L3_ping:1 allocation score on 7it-ivr-2: 1
>
>
> From this output I can see that the VIP1 and VIP2 resources are not started
> because of the wrong scores.
>
> native_color: VIP1 allocation score on 7it-ivr-1: -1000000
> native_color: VIP2 allocation score on 7it-ivr-2: -1000000
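That -1000000 is the -inf rule firing: if no node has an attribute named
"L3_ping", the not_defined clause matches everywhere. You can check what the
ping clone actually writes with something like (just a quick look, adjust to
taste):

  cibadmin -Q -o status | grep -i ping

If the attribute shows up as "pingd" rather than "L3_ping", that confirms
the mismatch.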
>
> It is very annoying to have a cluster but not be able to use a ping monitor
> to test default gateway reachability.
>
> I would greatly appreciate it if someone could help me resolve the
> problem described above.
>
> best regards, Dalibor
>
>
> _______________________________________________
> Pacemaker mailing list: Pacemaker at oss.clusterlabs.org
> http://oss.clusterlabs.org/mailman/listinfo/pacemaker
>
> Project Home: http://www.clusterlabs.org
> Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
> Bugs: http://developerbugs.linux-foundation.org/enter_bug.cgi?product=Pacemaker