[Pacemaker] Resources move on Pacemaker + Corosync cluster with set stickiness
Andrew Beekhof
andrew at beekhof.net
Tue May 27 21:13:05 UTC 2014
On 27 May 2014, at 8:23 pm, Danilo Malcangio <d.malcangio at eletech.it> wrote:
> I've removed the location constraint and it seems the resources don't move anymore if I reboot BX-1.
> During the reboot I noticed in crm_mon that the resources appeared offline for a second and then stayed on BX-2. Does anyone know why that happened?
This could just be an artefact of the status section being refreshed - which happens as a delete + repopulate.
On the other hand, it could also be us recovering from a situation where the resource was active on both nodes.
grep for pengine in the logs - that will tell you if anything was moved/restarted.
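For example (the log path and exact message text are assumptions here; pacemaker 1.1 logs pengine decisions as "LogActions" lines, and corosync may write to /var/log/syslog or its own logfile depending on corosync.conf):

```shell
# Pattern that matches pengine decision lines in pacemaker 1.1 logs.
pattern='pengine.*(LogActions|Move|Restart|Recover)'

# Illustrative sample line (not taken from your logs):
sample='May 27 21:10:01 BX-2 pengine: [1234]: notice: LogActions: Move    cluster-ip (Started BX-1 -> BX-2)'
echo "$sample" | grep -E "$pattern"

# On a real node, run the same pattern over the system log:
# grep -E "$pattern" /var/log/syslog
```

If nothing matches around the reboot, the flicker you saw was just the status refresh; if you see Move/Restart lines, something really was recovered.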
>
> I've tried reconfiguring my cluster following chapter 5, "Creating an Active/Passive Cluster", of the official guide (http://clusterlabs.org/doc/en-US/Pacemaker/1.1-plugin/html/Clusters_from_Scratch/index.html),
> but I still get the same problem: at reboot the resources move back to the preferred node BX-1, which is exactly the opposite of what the guide states.
>
> Still wondering how the location constraint and stickiness work.
>
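The short version: the policy engine just adds the scores up, and the resource stays wherever the total is highest. In your config each group member gets stickiness 101 on its current node, while the location constraint only adds 100 for BX-1, so once the group is running on BX-2 it should stay there. A toy sketch of that comparison (numbers taken from your posted config; the comparison mirrors what ptest -sL reports as allocation scores):

```shell
# Toy score comparison: does stickiness beat the location preference?
STICKINESS=101     # meta resource-stickiness on cluster-group
LOCATION_PREF=100  # location prefer-et-ipbx-1 cluster-group 100: BX-1

if [ "$STICKINESS" -gt "$LOCATION_PREF" ]; then
  echo "group stays on the current node"
else
  echo "group moves back to BX-1"
fi
```

If resources still fail back, the scores are not what's moving them - something outside pacemaker is.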
>> Try crm_mon -o and look for monitor operations that return 0 instead of 7
>>
>
> I am sorry Andrew, I tried to follow your advice but didn't quite catch what you wanted me to look for :(
Failing monitor actions that indicate the resource was already started without pacemaker telling it to do so.
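Concretely: every operation in crm_mon -o output carries an rc= code, and these follow the OCF convention. A probe/monitor on a node where the resource should be idle must return 7 (not running); a 0 there means something started the resource behind pacemaker's back. As a mnemonic only (the helper function below is made up for illustration, not a real tool):

```shell
# Meaning of the OCF return codes shown in the rc= field of crm_mon -o.
rc_meaning() {
  case "$1" in
    0) echo "running" ;;       # OCF_SUCCESS - monitor found the resource active
    7) echo "not running" ;;   # OCF_NOT_RUNNING - the expected result on an idle node
    *) echo "error ($1)" ;;    # anything else is a genuine failure
  esac
}

rc_meaning 0
rc_meaning 7
```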
>
>
> Il 23/05/2014 2.42, Andrew Beekhof ha scritto:
>> On 22 May 2014, at 9:00 pm, Danilo Malcangio <d.malcangio at eletech.it> wrote:
>>
>>
>>> Hi Andrew, first of all thanks for answering.
>>>
>>>
>>>> Almost certainly the node is configured to start those resources at bootup.
>>>> Don't do that :)
>>>>
>>>>
>>> Are you advising me to delete the location constraint? (location prefer-et-ipbx-1 cluster-group 100: BX-1)
>>> Or is it something else that starts the resources on node BX-1?
>>>
>> By the looks of it, yes.
>> Try crm_mon -o and look for monitor operations that return 0 instead of 7
>>
>>
>>> I've removed all the scripts from the startup sequence with "update-rc.d -f SCRIPT_NAME remove"
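Worth double-checking that nothing survived: on Debian you can list any remaining sysvinit start/kill symlinks directly (service names taken from your config; empty output means pacemaker is the only thing starting them):

```shell
# List any remaining start/kill symlinks for the clustered services.
ls -l /etc/rc?.d/ 2>/dev/null | grep -E 'asterisk|apache2|tftpd-hpa|ntp'
```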
>>>
>>> Thank you
>>>
>>> Il 22/05/2014 12.25, Andrew Beekhof ha scritto:
>>>
>>>> On 22 May 2014, at 5:31 pm, Danilo Malcangio <d.malcangio at eletech.it> wrote:
>>>>
>>>>> Hi everyone,
>>>>> I've created an active/passive 2 node cluster following the documentation on clusterlabs.
>>>>> My cluster has the following characteristics:
>>>>> Debian Wheezy 7.2.0
>>>>> Pacemaker 1.1.7
>>>>> Corosync 1.4.2
>>>>>
>>>>> I built it with the following configuration:
>>>>>
>>>>> node BX-1
>>>>> node BX-2
>>>>> primitive cluster-apache2 ocf:heartbeat:apache \
>>>>> params configfile="/etc/apache2/apache2.conf" httpd="/usr/sbin/apache2" port="80" \
>>>>> op monitor interval="10s" timeout="60s" \
>>>>> op start interval="0" timeout="40s" \
>>>>> op stop interval="0" timeout="60s"
>>>>> primitive cluster-asterisk lsb:asterisk \
>>>>> op monitor interval="30" \
>>>>> op start interval="0" timeout="120s" \
>>>>> op stop interval="0" timeout="120s"
>>>>> primitive cluster-ip ocf:heartbeat:IPaddr2 \
>>>>> params ip="10.2.30.10" cidr_netmask="20" \
>>>>> op monitor interval="10s" timeout="20s" \
>>>>> op start interval="0" timeout="20s" \
>>>>> op stop interval="0" timeout="20s"
>>>>> primitive cluster-ntp lsb:ntp \
>>>>> op monitor interval="30" \
>>>>> op start interval="0" timeout="120s" \
>>>>> op stop interval="0" timeout="120s"
>>>>> primitive cluster-tftp lsb:tftpd-hpa \
>>>>> op monitor interval="30" \
>>>>> op start interval="0" timeout="120s" \
>>>>> op stop interval="0" timeout="120s"
>>>>> group cluster-group cluster-ip cluster-asterisk cluster-apache2 cluster-tftp cluster-ntp \
>>>>> meta resource-stickiness="101"
>>>>> location prefer-et-ipbx-1 cluster-group 100: BX-1
>>>>> colocation cluster-dependency inf: cluster-ip cluster-asterisk cluster-apache2 cluster-tftp cluster-ntp
>>>>> property $id="cib-bootstrap-options" \
>>>>> dc-version="1.1.7-ee0730e13d124c3d58f00016c3376a1de5323cff" \
>>>>> cluster-infrastructure="openais" \
>>>>> expected-quorum-votes="2" \
>>>>> stonith-enabled="false" \
>>>>> no-quorum-policy="ignore" \
>>>>> default-resource-stickiness="1"
>>>>> rsc_defaults $id="rsc-options" \
>>>>> failure-timeout="60s"
>>>>>
>>>>> I've set a location constraint to make BX-1 the preferred node at cluster startup, and group stickiness to 101 to avoid moving resources back when the preferred node comes up again (following this guide:
>>>>> http://foaa.de/old-blog/2010/10/intro-to-pacemaker-part-2-advanced-topics/trackback/index.html ).
>>>>>
>>>>> I've got the following problem: resources move when I reboot nodes.
>>>>>
>>>>> If I stop corosync on BX-1, resources move to BX-2, and when I restart corosync on BX-1 they stay on BX-2 (as I expected).
>>>>> But none of this holds when I reboot BX-1.
>>>>>
>>>>>
>>>> Almost certainly the node is configured to start those resources at bootup.
>>>> Don't do that :)
>>>>
>>>>
>>>>
>>>>> After the reboot of BX-1, resources move to BX-2, and when BX-1 comes back up they move back to BX-1.
>>>>>
>>>>> What am I missing in the configuration?
>>>>>
>>>>> Thank you very much for the support
>>>>>
>>>>>
>>>>>
>>>>> P.S. I also attach the allocation scores obtained with ptest -sL
>>>>>
>>>>> BX-1 has resources
>>>>>
>>>>> Allocation scores:
>>>>> group_color: et-cluster allocation score on ET-IPBX-1: 100
>>>>> group_color: et-cluster allocation score on ET-IPBX-2: 0
>>>>> group_color: cluster-ip allocation score on ET-IPBX-1: 201
>>>>> group_color: cluster-ip allocation score on ET-IPBX-2: 0
>>>>> group_color: cluster-asterisk allocation score on ET-IPBX-1: 101
>>>>> group_color: cluster-asterisk allocation score on ET-IPBX-2: 0
>>>>> group_color: cluster-apache2 allocation score on ET-IPBX-1: 101
>>>>> group_color: cluster-apache2 allocation score on ET-IPBX-2: 0
>>>>> group_color: cluster-tftp allocation score on ET-IPBX-1: 101
>>>>> group_color: cluster-tftp allocation score on ET-IPBX-2: 0
>>>>> group_color: cluster-ntp allocation score on ET-IPBX-1: 101
>>>>> group_color: cluster-ntp allocation score on ET-IPBX-2: 0
>>>>> native_color: cluster-ip allocation score on ET-IPBX-1: 3231
>>>>> native_color: cluster-ip allocation score on ET-IPBX-2: 0
>>>>> native_color: cluster-asterisk allocation score on ET-IPBX-1: 1515
>>>>> native_color: cluster-asterisk allocation score on ET-IPBX-2: -INFINITY
>>>>> native_color: cluster-apache2 allocation score on ET-IPBX-1: 707
>>>>> native_color: cluster-apache2 allocation score on ET-IPBX-2: -INFINITY
>>>>> native_color: cluster-tftp allocation score on ET-IPBX-1: 303
>>>>> native_color: cluster-tftp allocation score on ET-IPBX-2: -INFINITY
>>>>> native_color: cluster-ntp allocation score on ET-IPBX-1: 101
>>>>> native_color: cluster-ntp allocation score on ET-IPBX-2: -INFINITY
>>>>>
>>>>> BX-2 has resources (while BX-1 reboot)
>>>>>
>>>>> Allocation scores:
>>>>> group_color: et-cluster allocation score on ET-IPBX-1: -INFINITY
>>>>> group_color: et-cluster allocation score on ET-IPBX-2: 0
>>>>> group_color: cluster-ip allocation score on ET-IPBX-1: -INFINITY
>>>>> group_color: cluster-ip allocation score on ET-IPBX-2: 101
>>>>> group_color: cluster-asterisk allocation score on ET-IPBX-1: 0
>>>>> group_color: cluster-asterisk allocation score on ET-IPBX-2: 101
>>>>> group_color: cluster-apache2 allocation score on ET-IPBX-1: 0
>>>>> group_color: cluster-apache2 allocation score on ET-IPBX-2: 101
>>>>> group_color: cluster-tftp allocation score on ET-IPBX-1: 0
>>>>> group_color: cluster-tftp allocation score on ET-IPBX-2: 101
>>>>> group_color: cluster-ntp allocation score on ET-IPBX-1: 0
>>>>> group_color: cluster-ntp allocation score on ET-IPBX-2: 101
>>>>> native_color: cluster-ip allocation score on ET-IPBX-1: -INFINITY
>>>>> native_color: cluster-ip allocation score on ET-IPBX-2: 3131
>>>>> native_color: cluster-asterisk allocation score on ET-IPBX-1: -INFINITY
>>>>> native_color: cluster-asterisk allocation score on ET-IPBX-2: 1515
>>>>> native_color: cluster-apache2 allocation score on ET-IPBX-1: -INFINITY
>>>>> native_color: cluster-apache2 allocation score on ET-IPBX-2: 707
>>>>> native_color: cluster-tftp allocation score on ET-IPBX-1: -INFINITY
>>>>> native_color: cluster-tftp allocation score on ET-IPBX-2: 303
>>>>> native_color: cluster-ntp allocation score on ET-IPBX-1: -INFINITY
>>>>> native_color: cluster-ntp allocation score on ET-IPBX-2: 101
>>>>> _______________________________________________
>>>>> Pacemaker mailing list: Pacemaker at oss.clusterlabs.org
>>>>> http://oss.clusterlabs.org/mailman/listinfo/pacemaker
>>>>>
>>>>> Project Home: http://www.clusterlabs.org
>>>>> Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
>>>>> Bugs: http://bugs.clusterlabs.org