[Pacemaker] Moving cloned resources
Matias R. Cuenca del Rey
maticue at gmail.com
Thu Sep 12 03:03:06 UTC 2013
It worked!! Thank you Andrew :D
Matías R. Cuenca del Rey
On Tue, Sep 10, 2013 at 11:58 AM, Matias R. Cuenca del Rey <
maticue at gmail.com> wrote:
> Thank you Andrew,
>
> If I can't move instances of a clone, how can I rebalance my IP
> resource across the three nodes?
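>
> (A sketch of the direction I was thinking in, assuming the
> resource-stickiness of 100 in my config is what keeps two instances pinned
> to www-proxylb01 after a reboot, and that crm_resource's
> --meta/--set-parameter options work this way in pacemaker 1.1.8: lower the
> stickiness on the clone so the policy engine is free to spread its
> instances again, then remove the override once they are balanced.)
>
> # let the policy engine redistribute the clone instances
> crm_resource --resource ip-xxx.xxx.xxx.xxx-clone --meta \
>     --set-parameter resource-stickiness --parameter-value 0
> # once the instances are spread across all three nodes, drop the override
> crm_resource --resource ip-xxx.xxx.xxx.xxx-clone --meta \
>     --delete-parameter resource-stickiness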
>
> Thanks,
>
> Matías R. Cuenca del Rey
>
>
> On Fri, Aug 9, 2013 at 12:15 PM, Matias R. Cuenca del Rey <
> maticue at gmail.com> wrote:
>
>> [root@www-proxylb01 ~]# rpm -qi pcs
>> Name        : pcs                           Relocations: (not relocatable)
>> Version     : 0.9.26                             Vendor: CentOS
>> Release     : 10.el6_4.1                     Build Date: Mon 18 Mar 2013 11:39:25 AM ART
>> Install Date: Tue 04 Jun 2013 05:19:49 PM ART Build Host: c6b10.bsys.dev.centos.org
>> Group       : System Environment/Base        Source RPM: pcs-0.9.26-10.el6_4.1.src.rpm
>> Size        : 254791                            License: GPLv2
>> Signature   : RSA/SHA1, Mon 18 Mar 2013 12:01:56 PM ART, Key ID 0946fca2c105b9de
>> Packager    : CentOS BuildSystem <http://bugs.centos.org>
>> URL         : http://github.com/feist/pcs
>> Summary     : Pacemaker Configuration System
>> Description :
>> pcs is a corosync and pacemaker configuration tool. It permits users to
>> easily view, modify and created pacemaker based clusters.
>>
>>
>> [root@www-proxylb01 ~]# rpm -qi pacemaker
>> Name        : pacemaker                     Relocations: (not relocatable)
>> Version     : 1.1.8                              Vendor: CentOS
>> Release     : 7.el6                          Build Date: Fri 22 Feb 2013 02:07:28 AM ART
>> Install Date: Tue 04 Jun 2013 04:32:24 PM ART Build Host: c6b9.bsys.dev.centos.org
>> Group       : System Environment/Daemons     Source RPM: pacemaker-1.1.8-7.el6.src.rpm
>> Size        : 1269655                           License: GPLv2+ and LGPLv2+
>> Signature   : RSA/SHA1, Sat 23 Feb 2013 02:41:32 PM ART, Key ID 0946fca2c105b9de
>> Packager    : CentOS BuildSystem <http://bugs.centos.org>
>> URL         : http://www.clusterlabs.org
>> Summary     : Scalable High-Availability cluster resource manager
>> Description :
>> Pacemaker is an advanced, scalable High-Availability cluster resource
>> manager for Linux-HA (Heartbeat) and/or Corosync.
>>
>> It supports "n-node" clusters with significant capabilities for
>> managing resources and dependencies.
>>
>> It will run scripts at initialization, when machines go up or down,
>> when related resources fail and can be configured to periodically check
>> resource health.
>>
>> Available rpmbuild rebuild options:
>>   --with(out) : heartbeat cman corosync doc publican snmp esmtp pre_release
>>
>>
>> [root@www-proxylb01 ~]# rpm -qi corosync
>> Name        : corosync                      Relocations: (not relocatable)
>> Version     : 1.4.1                              Vendor: CentOS
>> Release     : 15.el6_4.1                     Build Date: Tue 14 May 2013 06:09:27 PM ART
>> Install Date: Tue 04 Jun 2013 04:32:24 PM ART Build Host: c6b7.bsys.dev.centos.org
>> Group       : System Environment/Base        Source RPM: corosync-1.4.1-15.el6_4.1.src.rpm
>> Size        : 438998                            License: BSD
>> Signature   : RSA/SHA1, Tue 14 May 2013 08:03:55 PM ART, Key ID 0946fca2c105b9de
>> Packager    : CentOS BuildSystem <http://bugs.centos.org>
>> URL         : http://ftp.corosync.org
>> Summary     : The Corosync Cluster Engine and Application Programming Interfaces
>> Description :
>> This package contains the Corosync Cluster Engine Executive, several default
>> APIs and libraries, default configuration files, and an init script.
>>
>>
>> Thanks,
>>
>>
>> Matías R. Cuenca del Rey
>>
>>
>> On Thu, Aug 8, 2013 at 7:44 PM, Chris Feist <cfeist at redhat.com> wrote:
>>
>>> On 08/08/2013 01:25 PM, Matias R. Cuenca del Rey wrote:
>>>
>>>> Hi,
>>>>
>>>> This is my first mail. I'm playing with active/active cluster with
>>>> cman+pacemaker
>>>> I have 3 nodes working great. When I reboot one node, my IP resource
>>>> move to
>>>> another node, but when the rebooted node comes back, my IP resource
>>>> doesn't move
>>>> in again. I tried to move mannualy with pcs but I get the following
>>>> error:
>>>>
>>>> [root@www-proxylb01 ~]# pcs config
>>>> Corosync Nodes:
>>>>
>>>> Pacemaker Nodes:
>>>> www-proxylb01 www-proxylb02 www-proxylb03
>>>>
>>>> Resources:
>>>>  Clone: ip-xxx.xxx.xxx.xxx-clone
>>>>   Resource: ip-xxx.xxx.xxx.xxx (provider=heartbeat type=IPaddr2 class=ocf)
>>>>    Attributes: ip=xxx.xxx.xxx.xxx cidr_netmask=32 clusterip_hash=sourceip-sourceport
>>>>    Operations: monitor interval=30s
>>>>  Clone: fs-usr.share.haproxy-clone
>>>>   Resource: fs-usr.share.haproxy (provider=heartbeat type=Filesystem class=ocf)
>>>>    Attributes: device=/dev/xvdc directory=/usr/share/haproxy/ fstype=gfs2
>>>>  Clone: haproxy-xxx.xxx.xxx.xxx-clone
>>>>   Resource: haproxy-xxx.xxx.xxx.xxx (provider=heartbeat type=haproxy class=ocf)
>>>>    Attributes: conffile=/etc/haproxy/haproxy.cfg
>>>>    Operations: monitor interval=30s
>>>>
>>>> Location Constraints:
>>>> Ordering Constraints:
>>>> ip-xxx.xxx.xxx.xxx-clone then haproxy-xxx.xxx.xxx.xxx-clone
>>>> fs-usr.share.haproxy-clone then haproxy-xxx.xxx.xxx.xxx-clone
>>>> Colocation Constraints:
>>>> haproxy-xxx.xxx.xxx.xxx-clone with ip-xxx.xxx.xxx.xxx-clone
>>>> haproxy-xxx.xxx.xxx.xxx-clone with fs-usr.share.haproxy-clone
>>>> fs-usr.share.haproxy-clone with ip-xxx.xxx.xxx.xxx-clone
>>>>
>>>> Cluster Properties:
>>>> dc-version: 1.1.8-7.el6-394e906
>>>> cluster-infrastructure: cman
>>>> expected-quorum-votes: 2
>>>> stonith-enabled: false
>>>> resource-stickiness: 100
>>>>
>>>>
>>>> [root@www-proxylb01 ~]# pcs status
>>>> Last updated: Thu Aug 8 15:17:09 2013
>>>> Last change: Wed Aug 7 16:32:10 2013 via crm_attribute on www-proxylb01
>>>> Stack: cman
>>>> Current DC: www-proxylb03 - partition with quorum
>>>> Version: 1.1.8-7.el6-394e906
>>>> 3 Nodes configured, 2 expected votes
>>>> 9 Resources configured.
>>>>
>>>>
>>>> Online: [ www-proxylb01 www-proxylb02 www-proxylb03 ]
>>>>
>>>> Full list of resources:
>>>>
>>>>  Clone Set: ip-xxx.xxx.xxx.xxx-clone [ip-xxx.xxx.xxx.xxx] (unique)
>>>>      ip-xxx.xxx.xxx.xxx:0  (ocf::heartbeat:IPaddr2):  Started www-proxylb01
>>>>      ip-xxx.xxx.xxx.xxx:1  (ocf::heartbeat:IPaddr2):  Started www-proxylb01
>>>>      ip-xxx.xxx.xxx.xxx:2  (ocf::heartbeat:IPaddr2):  Started www-proxylb03
>>>>  Clone Set: fs-usr.share.haproxy-clone [fs-usr.share.haproxy]
>>>>      Started: [ www-proxylb01 www-proxylb03 ]
>>>>      Stopped: [ fs-usr.share.haproxy:2 ]
>>>>  Clone Set: haproxy-xxx.xxx.xxx.xxx-clone [haproxy-xxx.xxx.xxx.xxx]
>>>>      Started: [ www-proxylb01 www-proxylb03 ]
>>>>      Stopped: [ haproxy-xxx.xxx.xxx.xxx:2 ]
>>>>
>>>> [root@www-proxylb01 ~]# pcs resource move ip-xxx.xxx.xxx.xxx:1 www-proxylb02
>>>>
>>>
>>> Which version of pcs, pacemaker and corosync are you running?
>>>
>>>> Error moving/unmoving resource
>>>> Error performing operation: Update does not conform to the configured
>>>> schema
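>>>>
>>>> (A guess on my part, not something confirmed in this thread: the ":1"
>>>> instance suffix seems to be what the CIB update is rejecting. The only
>>>> form I know pcs documents is moving a resource by its plain name, e.g.
>>>>
>>>> pcs resource move ip-xxx.xxx.xxx.xxx-clone www-proxylb02
>>>>
>>>> and even if this version accepts a clone there, it would only record a
>>>> location preference for the clone as a whole, not relocate a single
>>>> instance.)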
>>>>
>>>> Thanks a lot in advance
>>>>
>>>>
>>>> Matías R. Cuenca del Rey
>>>>
>>>>
>>>> _______________________________________________
>>>> Pacemaker mailing list: Pacemaker at oss.clusterlabs.org
>>>> http://oss.clusterlabs.org/mailman/listinfo/pacemaker
>>>>
>>>> Project Home: http://www.clusterlabs.org
>>>> Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
>>>> Bugs: http://bugs.clusterlabs.org
>>>>
>>>>
>>>
>>
>