[Pacemaker] clone ip definition and location stops my resources...

Gianluca Cecchi gianluca.cecchi@gmail.com
Tue May 11 06:49:19 EDT 2010


On Tue, May 11, 2010 at 11:58 AM, Dejan Muhamedagic <dejanmm@fastmail.fm> wrote:

> Do you see the attribute set in the status section (cibadmin -Ql
> | grep -w pingd)? If not, then the problem is with the resource.


[root@ha1 ~]# cibadmin -Ql | grep -w pingd
          <expression attribute="pingd" id="nfs-group-with-pinggw-expression" operation="not_defined"/>
          <expression attribute="pingd" id="nfs-group-with-pinggw-expression-0" operation="lte" value="0"/>
          <nvpair id="status-ha1-pingd" name="pingd" value="100"/>
          <nvpair id="status-ha2-pingd" name="pingd" value="100"/>


I tried to change from the pacemaker:ping RA to the pacemaker:pingd RA
(even though I read that the former should be preferred),
while the iptables rule is still in place and prevents ha1 from reaching the gateway.

[root@ha1 ~]# crm resource stop cl-pinggw
--> services go down (OK, expected)

[root@ha1 ~]# crm configure delete nfs-group-with-pinggw
[root@ha1 ~]# crm configure delete cl-pinggw
[root@ha1 ~]# crm resource delete nfs-group-with-pinggw
--> services restart

[root@ha1 ~]# crm resource stop pinggw
[root@ha1 ~]# crm configure delete pinggw
[root@ha1 ~]# crm configure primitive pinggw ocf:pacemaker:pingd \
> params host_list="192.168.101.1" multiplier="100" \
> op start interval="0" timeout="90" \
> op stop interval="0" timeout="100"
[root@ha1 ~]# crm configure clone cl-pinggw pinggw meta globally-unique="false"

Now I correctly have:

Migration summary:
* Node ha1:  pingd=0
* Node ha2:  pingd=100
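(The "Migration summary" block above is the tail of crm_mon output; to get it in one shot I run the following, where, as far as I can tell, -f is the flag that adds the fail counts / migration summary:

[root@ha1 ~]# crm_mon -1 -f
)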

[root@ha1 ~]# crm configure location nfs-group-with-pinggw nfs-group rule -inf: not_defined pinggw or pinggw lte 0

--> all resources stop
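(Looking at this again, I wonder if the rule itself is at fault, just a guess: it tests an attribute named "pinggw", while cibadmin and the migration summary show the attribute is actually called "pingd", the RA's default since I didn't set the name parameter. If so, "not_defined pinggw" would match on every node and give -inf everywhere, which alone would explain everything stopping. The constraint would then have to reference pingd instead, something like:

[root@ha1 ~]# crm configure location nfs-group-with-pinggw nfs-group rule -inf: not_defined pingd or pingd lte 0

I haven't verified this yet.)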

There is also another problem I'm trying to solve: it seems to me that
with a group where an IPaddr2 resource comes before a linbit:drbd
resource in the ordering, I don't get the failover, in the sense that
node ha1 remains DRBD primary and there is no demote/promote...
I will eventually post about it in a separate e-mail.

It seems from this test that the pacemaker:ping RA doesn't work for me... I
will stay with pingd for the moment.
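My understanding (unverified) is that ocf:pacemaker:ping updates the attribute only from its monitor action, so a clone defined without a monitor op never refreshes the value; that could be exactly why it "didn't work" here. If I retry ping, I would define it with a monitor op, something like this (the dampen value is just a guess):

[root@ha1 ~]# crm configure primitive pinggw ocf:pacemaker:ping \
> params host_list="192.168.101.1" multiplier="100" dampen="5s" \
> op monitor interval="10s" timeout="60" \
> op start interval="0" timeout="60" \
> op stop interval="0" timeout="20"
[root@ha1 ~]# crm configure clone cl-pinggw pinggw meta globally-unique="false"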


> > Probably I didn't understand correctly what described at the link:
> >
> http://www.clusterlabs.org/wiki/Pingd_with_resources_on_different_networks
>  [1]
> > or it is outdated now... and instead of defining two clones it is better
> > (aka works) to populate the host_list parameter as described here in case
> of
> > more networks connected:
> >
> >
> http://www.clusterlabs.org/doc/en-US/Pacemaker/1.1/html/Pacemaker_Explained/ch09s03s03.html
>  [2]
>
> The former is when you need to test connectivity on different
> networks. I don't know if you need that.
>
>
Ok. In [1] above, it makes sense if I have different resources bound to
different networks and I want to prevent the loss of one network from causing
unnecessary failover of the resources on the other...
In the case where for some reason I have a single resource that depends on
two networks, I can instead simply use [2] with only one clone resource and
an extended host_list, as sketched below.
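To make that concrete, a single clone watching both gateways would look something like this (the second gateway address and the resource names are made-up examples):

[root@ha1 ~]# crm configure primitive ping-nets ocf:pacemaker:pingd \
> params host_list="192.168.101.1 192.168.102.1" multiplier="100" \
> op start interval="0" timeout="90" \
> op stop interval="0" timeout="100"
[root@ha1 ~]# crm configure clone cl-ping-nets ping-nets meta globally-unique="false"

Since the attribute value is the multiplier times the number of reachable hosts, a node that reaches both gateways scores 200, one gateway 100, and none 0, so a single location rule can tell partial from total connectivity loss.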

