[ClusterLabs] Re: About globally unique resource instances distribution per node

Daniel Hernández danyboy1104 at gmail.com
Mon Jan 11 15:57:56 UTC 2016


Hi Ulrich, thanks for your response.

I have already tried one configuration variant using that approach
(rules). When I used a positive colocation rule with a score of 10000
between a single resource and a clone of another resource, one
inconvenience was that some instances of the clone started on other
nodes instead of all of them starting on the node with the single
resource, as the rule specifies; I don't know if this is a bug.
However, when I use a colocation rule with a negative score of -10000,
the clone instances start on nodes other than the node with the single
resource of the rule. This way (using anti-colocation rules) I could
make it work, as sketched below.
But that way (using rules), I cannot control the number of clone
instances that start on one node or the others.
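For reference, the two variants looked roughly like this in crm shell
(resource names are placeholders for our real single resource and clone,
and each constraint was configured on its own, never both together):

crm configure colocation with_single 10000: clone_example1 single_resource
crm configure colocation not_with_single -10000: clone_example1 single_resource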

We have different cluster deployments with different numbers of nodes
and different numbers of clone instances. The number of clone instances
is calculated by a program that takes a quality of service as a
parameter, so we cannot modify the number of clone instances to start.
We are trying to avoid the situation in which a clone exceeds one node's
capacity while another node has free space, as in the following simple
example:

node1: capacity 8, running a clone of 5 instances, leaving 3 units free
node2: capacity 8, assigned another clone of 11 instances (3 more than fit)

clone 1 + clone 2 = 5 + 11 = 16
capacity of node1 + capacity of node2 = 8 + 8 = 16
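
In utilization terms, that scenario corresponds roughly to the following
crm shell configuration (node, primitive and clone names are placeholders):

crm node utilization node1 set cpu 8
crm node utilization node2 set cpu 8
crm resource utilization worker1 set cpu 1
crm resource utilization worker2 set cpu 1
crm configure clone clone1 worker1 \
	meta globally-unique="true" clone-max="5" clone-node-max="5"
crm configure clone clone2 worker2 \
	meta globally-unique="true" clone-max="11" clone-node-max="11"

With placement-strategy="balanced" one would hope the 16 instances would
exactly fill the 16 units of capacity, but the best-effort placement does
not guarantee it.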

Regards.


On 1/11/16, Ulrich Windl <Ulrich.Windl at rz.uni-regensburg.de> wrote:
>>>> Daniel Hernández <danyboy1104 at gmail.com> wrote on 30.12.2015 at 19:43
>>>> in
> message
> <CAMhkz_DVOvO=SOrCDKTqERFDwoVfAWDd6=WfHvggXLKCkksUwQ at mail.gmail.com>:
>> On 12/30/15, Ulrich Windl <Ulrich.Windl at rz.uni-regensburg.de> wrote:
>>> Hi!
>>>
>>> I would expect that if you set the cpu utilization per primitive (used
>>> in the clone) to one, and the cpu capacity per node to the correct
>>> number, then no node gets more primitives than its cpu count allows,
>>> and the primitives are distributed among all available nodes. Isn't
>>> that true in your case?
>>>
>>> What exactly does not work in your opinion?
>>>
>>> Regards,
>>> Ulrich
>>>
>>>>>> Daniel Hernández <danyboy1104 at gmail.com> wrote on 29.12.2015 at
>>>>>> 16:21
>>> in
>>> message
>>> <CAMhkz_BUEjidSkeJ7uJrTJ1v-vkA+s2YWR69PF6=gOwrOSFwUg at mail.gmail.com>:
>>>> Good day. I work at Datys Soluciones Tecnológicas; we have been using
>>>> Corosync and Pacemaker in production to run a service infrastructure
>>>> for the past three years. The versions used are CentOS 6.3, corosync
>>>> 1.4.1 and pacemaker 1.1.7. We have a web server, a gearman job
>>>> manager, and globally unique resource clones as gearman workers to
>>>> balance the load distributed by gearman. My question is whether there
>>>> is a way, or a workaround, to configure how many instances of a
>>>> globally unique resource clone start on each node. For example, say we
>>>> have 3 nodes, node1, node2 and node3, and a globally unique resource
>>>> clone of 6 instances named clone_example, and we want to start 1
>>>> instance on node1, 2 instances on node2 and 3 instances on node3, as
>>>> the following example shows.
>>>>
>>>> Clone Set: clone_example [example] (unique)
>>>>          example:0 (ocf:heartbeat:example): Started node3
>>>>          example:1 (ocf:heartbeat:example): Started node2
>>>>          example:2 (ocf:heartbeat:example): Started node2
>>>>          example:3 (ocf:heartbeat:example): Started node1
>>>>          example:4 (ocf:heartbeat:example): Started node3
>>>>          example:5 (ocf:heartbeat:example): Started node3
>>>>
>>>> The reason we want to configure the resource this way is that one
>>>> resource clone instance consumes one node CPU, and the nodes have
>>>> different numbers of CPUs:
>>>> node1 = 1 cpu, node2 = 2 cpus, node3 = 3 cpus in the example.
>>>>
>>>> I read Clusters from Scratch and Pacemaker Explained to find a way,
>>>> and saw Chapter 11, Utilization and Placement Strategy. I made a test
>>>> with clones and resources, but the clones were not distributed as I
>>>> expected and some instances were not started; I tried the three
>>>> placement strategies with similar behaviour. I know the cluster uses
>>>> a best-effort algorithm to distribute the resources when this option
>>>> is used, and maybe that's the reason, so I am searching for a way to
>>>> do it. I browsed the mailing list archives for a similar post on this
>>>> topic and couldn't find one; maybe I missed it. Any response will be
>>>> appreciated.
>>>> Thanks for your time
>>>>
>>
>> Hi Ulrich, thanks for your response. I took your suggestion and tested
>> my example.
>> I created a 3-node cluster with one dummy resource, using the following
>> commands.
>>
>> crm configure primitive example1 ocf:heartbeat:Dummy \
>> op monitor interval=30s
>>
>> crm configure clone clone_example1 example1 \
>> 	meta globally-unique="true" clone-max="6" clone-node-max="6"
>>
>>
>> crm node utilization node1 set cpu 1
>> crm node utilization node2 set cpu 2
>> crm node utilization node3 set cpu 3
>> crm resource utilization example1 set cpu 1
>> crm configure property placement-strategy="balanced"
>>
>> Online: [ node1 node2 node3 ]
>>
>>  Clone Set: clone_example1 [example1] (unique)
>>      example1:0	(ocf::heartbeat:Dummy):	Started node1
>>      example1:1	(ocf::heartbeat:Dummy):	Started node2
>>      example1:2	(ocf::heartbeat:Dummy):	Started node3
>>      example1:3	(ocf::heartbeat:Dummy):	Started node3
>>      example1:4	(ocf::heartbeat:Dummy):	Started node2
>>      example1:5	(ocf::heartbeat:Dummy):	Started node3
>>
>> That worked, though I had not expected it to; it behaves differently in
>> another scenario, which is really why my question arose.
>> I tested a scenario in which I want to run 4 instances of resource
>> example1 on node1, 3 instances on node2 and 5 instances on node3. The
>> cpu capacity per node is 6 on node1, 9 on node2 and 8 on node3,
>> because I will have other resources besides example1.
>>
>> With the following cluster configuration:
>>
>> crm node utilization node1 set cpu 6
>> crm node utilization node2 set cpu 9
>> crm node utilization node3 set cpu 8
>>
>> crm resource meta clone_example1 set clone-max 12
>> crm resource meta clone_example1 set clone-node-max 12
>>
>> The result in the cluster is:
>> Online: [ node1 node2 node3 ]
>>
>>  Clone Set: clone_example1 [example1] (unique)
>>      example1:0	(ocf::heartbeat:Dummy):	Started node3
>>      example1:1	(ocf::heartbeat:Dummy):	Started node2
>>      example1:2	(ocf::heartbeat:Dummy):	Started node3
>>      example1:3	(ocf::heartbeat:Dummy):	Started node1
>>      example1:4	(ocf::heartbeat:Dummy):	Started node2
>>      example1:5	(ocf::heartbeat:Dummy):	Started node3
>>      example1:6	(ocf::heartbeat:Dummy):	Started node2
>>      example1:7	(ocf::heartbeat:Dummy):	Started node2
>>      example1:8	(ocf::heartbeat:Dummy):	Started node1
>>      example1:9	(ocf::heartbeat:Dummy):	Started node3
>>      example1:10	(ocf::heartbeat:Dummy):	Started node2
>>      example1:11	(ocf::heartbeat:Dummy):	Started node1
>>
>> The cluster starts 3 instances of example1 on node1, not 4 as I
>> want. That happens when I have more than one resource to allocate. I
>> also notice that I am not actually telling the cluster how many
>> instances of example1 to start on node1. Is there any way to do that?
>
> Hi!
>
> Sorry for the late reply, but if you want a node to be preferred and get
> more primitives than another, you could try adding a location constraint
> for the nodes with the most cpus. I haven't tried that, but it should
> work.
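>
> A minimal, untested sketch using the node names from your example (the
> scores are only illustrative and would need tuning against your other
> constraints):
>
> crm configure location prefer_node2 clone_example1 200: node2
> crm configure location prefer_node3 clone_example1 300: node3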
>
> Regards,
> Ulrich
>



