[Pacemaker] Problem with pingd.

Jayakrishnan jayakrishnanlll at gmail.com
Mon Feb 22 17:46:49 UTC 2010


Sir,
I have set up a 2-node cluster with Heartbeat 2.99 and Pacemaker 1.0.5. I am
using Ubuntu 9.10. Both packages are installed from the Ubuntu Karmic
repository.
My packages are:

heartbeat                            2.99.2+sles11r9-5ubuntu1
heartbeat-common                     2.99.2+sles11r9-5ubuntu1
heartbeat-common-dev                 2.99.2+sles11r9-5ubuntu1
heartbeat-dev                        2.99.2+sles11r9-5ubuntu1
libheartbeat2                        2.99.2+sles11r9-5ubuntu1
libheartbeat2-dev                    2.99.2+sles11r9-5ubuntu1
pacemaker-heartbeat                  1.0.5+hg20090813-0ubuntu4
pacemaker-heartbeat-dev              1.0.5+hg20090813-0ubuntu4


My ha.cf file and CRM configuration are both included below.

I am building a PostgreSQL database cluster with Slony replication. eth1 is
my heartbeat link; a crossover cable connects the two servers on eth1. eth0
is my external network, where the cluster IP gets assigned.

server1 --> hostname node1
  eth1: 192.168.10.129
  eth0: 192.168.1.1

server2 --> hostname node2
  eth1: 192.168.10.130
  eth0: 192.168.1.2

Now, when I pull out my eth1 cable, I need failover to the other node. For
that I have configured pingd as follows, but it is not working. My resources
do not start at all when I give the rule as:

rule -inf: not_defined pingd or pingd lte 0

I tried changing the -inf: to inf:; the resources then started, but resource
failover does not take place when I pull out the eth1 cable.
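
For reference, this is the full location constraint as I had it with the
-inf: score (my reading of the Pacemaker documentation is that a -INF score
should push vir-ip off any node where the pingd attribute is undefined or
zero):

location vir-ip-with-pingd vir-ip \
        rule $id="vir-ip-with-pingd-rule" -inf: not_defined pingd or pingd lte 0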

Please check my configuration and kindly point out what I am missing. Please
note that I am using a default resource stickiness of INFINITY, which is
compulsory for Slony replication.

My ha.cf file
------------------------------------------------------------------

# cluster membership and timing
autojoin none
keepalive 2
deadtime 15
warntime 10
initdead 64
# heartbeat link: broadcast on eth1
bcast eth1
auto_failback off
node node1
node node2
# run the Pacemaker CRM (respawn it if it dies) and log via logd
crm respawn
use_logd yes
____________________________________________
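
To verify the heartbeat link state during the cable-pull test, I have been
using heartbeat's cl_status tool (this is my understanding of its usage; run
on node1 to check the links towards node2):

cl_status listhblinks node2
cl_status hblinkstatus node2 eth1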

My crm configuration

node $id="3952b93e-786c-47d4-8c2f-a882e3d3d105" node2 \
        attributes standby="off"
node $id="ac87f697-5b44-4720-a8af-12a6f2295930" node1 \
        attributes standby="off"
primitive pgsql lsb:postgresql-8.4 \
        meta target-role="Started" resource-stickiness="inherited" \
        op monitor interval="15s" timeout="25s" on-fail="standby"
primitive pingd ocf:pacemaker:pingd \
        params name="pingd" hostlist="192.168.10.1 192.168.10.75" \
        op monitor interval="15s" timeout="5s"
primitive slony-fail lsb:slony_failover \
        meta target-role="Started"
primitive slony-fail2 lsb:slony_failover2 \
        meta target-role="Started"
primitive vir-ip ocf:heartbeat:IPaddr2 \
        params ip="192.168.10.10" nic="eth0" cidr_netmask="24" broadcast="192.168.10.255" \
        op monitor interval="15s" timeout="25s" on-fail="standby" \
        meta target-role="Started"
clone pgclone pgsql \
        meta notify="true" globally-unique="false" interleave="true" target-role="Started"
clone pingclone pingd \
        meta globally-unique="false" clone-max="2" clone-node-max="1"
location vir-ip-with-pingd vir-ip \
        rule $id="vir-ip-with-pingd-rule" inf: not_defined pingd or pingd lte 0
colocation ip-with-slony inf: slony-fail vir-ip
colocation ip-with-slony2 inf: slony-fail2 vir-ip
order ip-b4-slony2 inf: vir-ip slony-fail2
order slony-b4-ip inf: vir-ip slony-fail
property $id="cib-bootstrap-options" \
        dc-version="1.0.5-3840e6b5a305ccb803d29b468556739e75532d56" \
        cluster-infrastructure="Heartbeat" \
        no-quorum-policy="ignore" \
        stonith-enabled="false" \
        last-lrm-refresh="1266851027"
rsc_defaults $id="rsc-options" \
        resource-stickiness="INFINITY"
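
To see whether the pingd attribute is actually being set on each node, I have
been querying the transient (status) attributes directly (my understanding of
crm_attribute usage; -t status targets the per-node status section):

crm_attribute -t status -N node1 -n pingd -G
crm_attribute -t status -N node2 -n pingd -G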

_____________________________________

My crm status:
__________________________

crm(live)# status


============
Last updated: Mon Feb 22 23:15:56 2010
Stack: Heartbeat
Current DC: node2 (3952b93e-786c-47d4-8c2f-a882e3d3d105) - partition with quorum
Version: 1.0.5-3840e6b5a305ccb803d29b468556739e75532d56
2 Nodes configured, unknown expected votes
5 Resources configured.
============

Online: [ node2 node1 ]

Clone Set: pgclone
    Started: [ node1 node2 ]
Clone Set: pingclone
    Started: [ node2 node1 ]

============================
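
While testing, I also watch a one-shot crm_mon that lists inactive resources
and fail counts (standard crm_mon flags, as far as I know):

crm_mon -1 -r -f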

Please help me out.
--
Regards,

Jayakrishnan. L

Visit: www.jayakrishnan.bravehost.com