[ClusterLabs] IP clone issue

emmanuel segura emi2fast at gmail.com
Tue Sep 5 11:29:19 EDT 2017


If you have two copies of the clone on the same node it cannot work, because
it is like having a duplicate IP on the same node, and that is what you
allow by using clone-node-max="2"
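If the intent is one VIP instance per node, a hedged sketch of the change
(assuming the ClusterIP-clone name from the original post and the pcs 0.9
syntax shipped with CentOS 7):

```shell
# Inspect the clone's current meta attributes (clone name is taken from
# the original post; adjust if yours differs):
pcs resource show ClusterIP-clone

# Limit placement to one instance per node so the two copies spread out:
pcs resource update ClusterIP-clone meta clone-node-max=1
```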

2017-09-05 16:15 GMT+02:00 Octavian Ciobanu <coctavian1979 at gmail.com>:

> Based on the ocf:heartbeat:IPaddr2 man page it can be used without a static
> IP address if the kernel has net.ipv4.conf.all.promote_secondaries=1.
>
> "There must be at least one static IP address, which is not managed by the
> cluster, assigned to the network interface. If you can not assign any
> static IP address on the interface, modify this kernel parameter: sysctl -w
> net.ipv4.conf.all.promote_secondaries=1 (or per device)"
>
> This kernel parameter is set by default in CentOS 7.3.
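A minimal sketch of setting and persisting that sysctl (the drop-in file
name below is an assumption; any name under /etc/sysctl.d works):

```shell
# Enable promotion of secondary addresses now (needs root):
sysctl -w net.ipv4.conf.all.promote_secondaries=1

# Persist it across reboots via a sysctl drop-in (file name is arbitrary):
echo 'net.ipv4.conf.all.promote_secondaries = 1' \
    > /etc/sysctl.d/99-promote-secondaries.conf

# Verify the running value:
sysctl net.ipv4.conf.all.promote_secondaries
```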
>
> With clone-node-max="1" it works as it should, but with
> clone-node-max="2" both instances of the VIP are started on the same node
> even if the other node is online.
>
> Pacemaker 1.1 Cluster from Scratch say that
> "clone-node-max=2 says that one node can run up to 2 instances of the
> clone. This should also equal the number of nodes that can host the IP, so
> that if any node goes down, another node can take over the failed node’s
> "request bucket". Otherwise, requests intended for the failed node would be
> discarded."
>
> To get this behavior, do I have to set a static IP on the
> interfaces?
>
>
>
> On Tue, Sep 5, 2017 at 4:54 PM, emmanuel segura <emi2fast at gmail.com>
> wrote:
>
>> I never tried to set a virtual IP on an interface without an IP, because
>> the VIP is a secondary IP that switches between nodes, not a primary IP
>>
>> 2017-09-05 15:41 GMT+02:00 Octavian Ciobanu <coctavian1979 at gmail.com>:
>>
>>> Hello all,
>>>
>>> I've encountered an issue with IP cloning.
>>>
>>> Based on "Pacemaker 1.1 Clusters from Scratch" I've set up a test
>>> configuration with 2 nodes running CentOS 7.3. The nodes have 2 Ethernet
>>> cards: one for cluster communication on a private IP network, and a second
>>> for public access to services. The public Ethernet interface has no IP
>>> assigned at boot.
>>>
>>> I've created an IP resource with a clone using the following command:
>>>
>>> pcs resource create ClusterIP ocf:heartbeat:IPaddr2 params nic="ens192"
>>> ip="xxx.yyy.zzz.www" cidr_netmask="24" clusterip_hash="sourceip" op start
>>> interval="0" timeout="20" op stop interval="0" timeout="20" op monitor
>>> interval="10" timeout="20" meta resource-stickiness=0 clone meta
>>> clone-max="2" clone-node-max="2" interleave="true" globally-unique="true"
>>>
>>> The xxx.yyy.zzz.www is a public IP, not a private one.
>>>
>>> With the above command the IP clone is created, but it is started only on
>>> one node. This is the output of the pcs status command:
>>>
>>> Clone Set: ClusterIP-clone [ClusterIP] (unique)
>>>      ClusterIP:0    (ocf::heartbeat:IPaddr2):    Started node02
>>>      ClusterIP:1    (ocf::heartbeat:IPaddr2):    Started node02
>>>
>>> If I modify clone-node-max to 1, then the resource is started on both
>>> nodes, as seen in this pcs status output:
>>>
>>> Clone Set: ClusterIP-clone [ClusterIP] (unique)
>>>      ClusterIP:0    (ocf::heartbeat:IPaddr2):    Started node02
>>>      ClusterIP:1    (ocf::heartbeat:IPaddr2):    Started node01
>>>
>>> But if one node fails, the IP resource is not migrated to the active node
>>> as the documentation says it should be.
>>>
>>> Clone Set: ClusterIP-clone [ClusterIP] (unique)
>>>      ClusterIP:0    (ocf::heartbeat:IPaddr2):    Started node02
>>>      ClusterIP:1    (ocf::heartbeat:IPaddr2):    Stopped
>>>
>>> When the IP is active on both nodes the services are accessible, so the
>>> issue is not that the interface does not have an IP allocated at boot.
>>> The gateway is set with another pcs command and it is working.
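For reference, a cluster-managed gateway could look roughly like this,
assuming the ocf:heartbeat:Route agent; the resource name and gateway
address are placeholders, not the poster's actual command:

```shell
# Hypothetical default-route resource (name and address are placeholders):
pcs resource create DefaultGW ocf:heartbeat:Route \
    destination="default" gateway="xxx.yyy.zzz.1" \
    op monitor interval="10" timeout="20"
```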
>>>
>>> Thanks in advance for any info.
>>>
>>> Best regards
>>> Octavian Ciobanu
>>>
>>> _______________________________________________
>>> Users mailing list: Users at clusterlabs.org
>>> http://lists.clusterlabs.org/mailman/listinfo/users
>>>
>>> Project Home: http://www.clusterlabs.org
>>> Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
>>> Bugs: http://bugs.clusterlabs.org
>>>
>>>
>


-- 
  .~.
  /V\
 //  \\
/(   )\
^`~'^