[Pacemaker] Breaking dependency loop && stonith
Andrew Beekhof
andrew at beekhof.net
Thu Jan 9 22:30:30 UTC 2014
On 9 Jan 2014, at 5:05 pm, Andrey Groshev <greenx at yandex.ru> wrote:
>
>
> 08.01.2014, 06:15, "Andrew Beekhof" <andrew at beekhof.net>:
>> On 27 Nov 2013, at 12:26 am, Andrey Groshev <greenx at yandex.ru> wrote:
>>
>>> Hi, ALL.
>>>
>>> I want to clarify two more questions.
>>> After a stonith reboot, the node hangs with the status "pending".
>>> I found these lines in the logs:
>>>
>>> info: rsc_merge_weights: pgsql:1: Breaking dependency loop at msPostgresql
>>> info: rsc_merge_weights: pgsql:2: Breaking dependency loop at msPostgresql
>>>
>>> Does this mean the dependency search stops because there are no more dependencies?
>>> Or that an infinite loop in the dependency search was interrupted?
>>
>> The second one, but it has nothing to do with a node being in the "pending" state.
>> Where did you see this?
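(For context: one simple way to get that message is a pair of constraints that chase each other, e.g. with purely hypothetical resources rscA and rscB:

    <rsc_colocation id="col-a-with-b" rsc="rscA" with-rsc="rscB" score="INFINITY"/>
    <rsc_colocation id="col-b-with-a" rsc="rscB" with-rsc="rscA" score="INFINITY"/>

When the policy engine follows the colocation chain and arrives back at a resource it is already processing, it logs "Breaking dependency loop at ..." and stops there instead of recursing forever. With clones the loop can also come through the clone's own internal constraints, so the message itself is only informational.)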
>
> Ok, I've already understood the problem.
> I have a "location" constraint for promoting/demoting the resource on the right node.
> And the same logic again through "collocation"/"order" constraints.
> I thought they do the same thing
No, collocation and ordering are orthogonal concepts and do not at all do the same thing.
See the docs.
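Roughly: colocation decides *where* resources may run relative to each other, ordering decides *when* their actions happen relative to each other. For a master/slave database plus a floating IP you normally want both, along the lines of this sketch (resource names taken from your output; roles and scores are only a guess at your intent, not a drop-in config):

    <constraints>
      <!-- "where": keep the VIP on the node hosting the PostgreSQL master -->
      <rsc_colocation id="col-vip-with-master" rsc="VirtualIP"
                      with-rsc="msPostgresql" with-rsc-role="Master" score="INFINITY"/>
      <!-- "when": start the VIP only after the promote has completed -->
      <rsc_order id="ord-promote-then-vip" first="msPostgresql" first-action="promote"
                 then="VirtualIP" then-action="start"/>
    </constraints>

Neither constraint implies the other, which is why you need both if you want both behaviours.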
> and that collisions should not happen.
> At least on the old cluster it works :)
> Now I have removed everything unnecessary.
>
>
>>
>>> And the second question.
>>> Do I need to clone the stonith resource now (In PCMK 1.1.11)?
>>
>> No.
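(To expand: a fencing device does not need to be running on every node to be usable for fencing; starting it on one node mainly gives you the recurring monitor. So a single, un-cloned primitive is enough, roughly:

    <primitive id="st1" class="stonith" type="external/sshbykey">
      <!-- agent-specific parameters go in instance_attributes here -->
      <operations>
        <op id="st1-monitor" name="monitor" interval="60s"/>
      </operations>
    </primitive>

with no <clone> element wrapped around it.)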
>>
>>> On the one hand, I see this resource on all nodes through this command:
>>> # cibadmin -Q|grep stonith
>>> <nvpair name="stonith-enabled" value="true" id="cib-bootstrap-options-stonith-enabled"/>
>>> <primitive id="st1" class="stonith" type="external/sshbykey">
>>> <lrm_resource id="st1" type="external/sshbykey" class="stonith">
>>> <lrm_resource id="st1" type="external/sshbykey" class="stonith">
>>> <lrm_resource id="st1" type="external/sshbykey" class="stonith">
>>> (no entry for the pending node)
>>
>> Like all resources, we check all nodes at startup to see if it is already active.
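(In other words, the extra lrm_resource entries are just each node's probe history, not extra running copies. To confirm where the device is actually started, something like

    # crm_resource --resource st1 --locate

should report only the one node, dev-cluster2-node4 in your output.)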
>>
>>> On the other hand, with another command I see only one instance, on one particular node:
>>> # crm_verify -LVVVV
>>> info: main: =#=#=#=#= Getting XML =#=#=#=#=
>>> info: main: Reading XML from: live cluster
>>> info: validate_with_relaxng: Creating RNG parser context
>>> info: determine_online_status_fencing: Node dev-cluster2-node4 is active
>>> info: determine_online_status: Node dev-cluster2-node4 is online
>>> info: determine_online_status_fencing: - Node dev-cluster2-node1 is not ready to run resources
>>> info: determine_online_status_fencing: Node dev-cluster2-node2 is active
>>> info: determine_online_status: Node dev-cluster2-node2 is online
>>> info: determine_online_status_fencing: Node dev-cluster2-node3 is active
>>> info: determine_online_status: Node dev-cluster2-node3 is online
>>> info: determine_op_status: Operation monitor found resource pingCheck:0 active on dev-cluster2-node4
>>> info: native_print: VirtualIP (ocf::heartbeat:IPaddr2): Started dev-cluster2-node4
>>> info: clone_print: Master/Slave Set: msPostgresql [pgsql]
>>> info: short_print: Masters: [ dev-cluster2-node4 ]
>>> info: short_print: Slaves: [ dev-cluster2-node2 dev-cluster2-node3 ]
>>> info: short_print: Stopped: [ dev-cluster2-node1 ]
>>> info: clone_print: Clone Set: clnPingCheck [pingCheck]
>>> info: short_print: Started: [ dev-cluster2-node2 dev-cluster2-node3 dev-cluster2-node4 ]
>>> info: short_print: Stopped: [ dev-cluster2-node1 ]
>>> info: native_print: st1 (stonith:external/sshbykey): Started dev-cluster2-node4
>>> info: native_color: Resource pingCheck:3 cannot run anywhere
>>> info: native_color: Resource pgsql:3 cannot run anywhere
>>> info: rsc_merge_weights: pgsql:1: Breaking dependency loop at msPostgresql
>>> info: rsc_merge_weights: pgsql:2: Breaking dependency loop at msPostgresql
>>> info: master_color: Promoting pgsql:0 (Master dev-cluster2-node4)
>>> info: master_color: msPostgresql: Promoted 1 instances of a possible 1 to master
>>> info: LogActions: Leave VirtualIP (Started dev-cluster2-node4)
>>> info: LogActions: Leave pgsql:0 (Master dev-cluster2-node4)
>>> info: LogActions: Leave pgsql:1 (Slave dev-cluster2-node2)
>>> info: LogActions: Leave pgsql:2 (Slave dev-cluster2-node3)
>>> info: LogActions: Leave pgsql:3 (Stopped)
>>> info: LogActions: Leave pingCheck:0 (Started dev-cluster2-node4)
>>> info: LogActions: Leave pingCheck:1 (Started dev-cluster2-node2)
>>> info: LogActions: Leave pingCheck:2 (Started dev-cluster2-node3)
>>> info: LogActions: Leave pingCheck:3 (Stopped)
>>> info: LogActions: Leave st1 (Started dev-cluster2-node4)
>>>
>>> However, if I make it a "clone", I get the same garbage.
>>>