[ClusterLabs] Clusvcadm -Z substitute in Pacemaker

Ken Gaillot kgaillot at redhat.com
Wed Jul 13 10:48:51 EDT 2016


On 07/13/2016 05:50 AM, emmanuel segura wrote:
> using pcs resource unmanage leaves the monitor operation active, so I
> usually set the monitor interval=0 :)

Yep :)

An easier way is to set "enabled=false" on the monitor, so you don't
have to remember what your interval was later. You can set it in the
op_defaults section to disable all operations at once (assuming no
operation has "enabled=true" explicitly set).
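
For example (a sketch only; the resource name and interval are hypothetical,
and the op defaults form is the pcs 0.9.x syntax shipped with RHEL 7):

# pcs resource update my_db op monitor interval=30s enabled=false
# pcs resource op defaults enabled=false

The first command disables only my_db's monitor (the interval should match the
existing monitor op); the second sets enabled=false in op_defaults, so every
operation without an explicit enabled=true is skipped.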

Similarly, you can set is-managed=false in rsc_defaults to unmanage all
resources (that don't have "is-managed=true" explicitly set).
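
As a rough sketch (older pcs syntax; newer pcs releases spell this
"pcs resource defaults update" instead):

# pcs resource defaults is-managed=false

and, once you're done, put everything back under cluster control with:

# pcs resource defaults is-managed=true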

> 2016-07-11 10:43 GMT+02:00 Tomas Jelinek <tojeline at redhat.com>:
>> On 9 Jul 2016 at 06:39, jaspal singla wrote:
>>>
>>> Hello Everyone,
>>>
>>> I need a little help; if anyone can give some pointers, it would help me a
>>> lot.
>>>
>>> In RHEL-7.x:
>>>
>>> There is the concept of Pacemaker, and when I use the commands below to
>>> freeze my resource group, they actually stop all of the resources
>>> associated with the resource group.
>>>
>>> # pcs cluster standby <node>
>>>
>>> # pcs cluster unstandby <node>
>>>
>>> Result: This actually stops all of the resource groups on that node
>>> (ctm_service is one of the resource groups that gets stopped, including the
>>> database, which goes to MOUNT mode)
>>
>>
>> Hello Jaspal,
>>
>> that's what it's supposed to do. Putting a node into standby means the node
>> cannot host any resources.
>>
>>>
>>> However, with the clusvcadm command on RHEL-6.x, it doesn't stop
>>> ctm_service there and my database stays in RW mode.
>>>
>>> # clusvcadm -Z ctm_service
>>>
>>> # clusvcadm -U ctm_service
>>>
>>> So my concern here is that freezing/unfreezing should not affect the status
>>> of the group. Is there any way to achieve the same thing in RHEL-7.x that
>>> clusvcadm provided on RHEL 6?
>>
>>
>> Maybe you are looking for
>> # pcs resource unmanage <resource>
>> and
>> # pcs resource manage <resource>
>>
>> Regards,
>> Tomas
>>
>>>
>>> Thanks
>>>
>>> Jaspal



