[Pacemaker] no-quorum-policy = demote?

Gao,Yan ygao at suse.com
Tue May 27 08:28:36 CEST 2014


On 05/27/14 13:34, Andrew Beekhof wrote:
> 
> On 27 May 2014, at 3:12 pm, Gao,Yan <ygao at suse.com> wrote:
> 
>> On 05/27/14 08:07, Andrew Beekhof wrote:
>>>
>>> On 26 May 2014, at 10:47 pm, Christian Ciach <dereineda at gmail.com> wrote:
>>>
>>>> I am sorry to get back to this topic, but I'm genuinely curious:
>>>>
>>>> Why is "demote" an option for the ticket "loss-policy" for multi-site-clusters but not for the normal "no-quorum-policy" of local clusters? This seems like a missing feature to me.
>>>
>>> Or one feature too many.
>>> Perhaps Yan can explain why he wanted demote as an option for the loss-policy.
>> Loss-policy="demote" is a kind of natural default if the "Master" mode
>> of a resource requires a ticket like:
>> <rsc_ticket rsc="ms1" rsc-role="Master" ticket="ticketA"/>
>>
>> The idea is for running stateful resource instances across clusters. And
>> loss-policy="demote" provides the possibility if there's the need to
>> still run the resource in slave mode for any reason when losing the
>> ticket, rather than stopping it or fencing the node hosting it.
> 
> I guess the same logic applies to the single cluster use-case too and we should allow no-quorum-policy=demote.
> 
> One question though... do we still stop non-master/slave resources for loss-policy=demote?
Yes, we currently "demote" them from "Started" to "Stopped" ;-)
Loss-policy is specified per dependency (rsc_ticket). Strictly
speaking, loss-policy=demote doesn't make sense for a non-m/s
resource, but if the user specifies it anyway, we assume they want
some sort of "demote", which for such a resource means stopping it.
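
For illustration, a rough sketch of what such per-dependency
constraints could look like in the CIB (the ids and the plain
resource "rsc1" are made up for the example, not from this thread):

```xml
<constraints>
  <!-- Master role of ms1 depends on ticketA; on ticket loss,
       demote the masters to slaves rather than stopping them -->
  <rsc_ticket id="ms1-req-ticketA" rsc="ms1" rsc-role="Master"
              ticket="ticketA" loss-policy="demote"/>
  <!-- For a plain (non-m/s) resource, "demote" effectively
       means "stop" -->
  <rsc_ticket id="rsc1-req-ticketA" rsc="rsc1"
              ticket="ticketA" loss-policy="demote"/>
</constraints>
```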

Hmm, I'm not quite sure what would be best for
no-quorum-policy="demote" though, since it would be global rather than
per-dependency. Either way, it seems "demote" should behave
consistently in both places.
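
For comparison, no-quorum-policy is a cluster-wide property. A sketch
of how a "demote" value would sit in the CIB, assuming it were
implemented (at the time of writing the supported values are stop,
ignore, freeze and suicide; the nvpair id below is illustrative):

```xml
<crm_config>
  <cluster_property_set id="cib-bootstrap-options">
    <!-- "demote" is hypothetical here; it would apply to every
         promotable resource in the cluster at once -->
    <nvpair id="opt-no-quorum-policy"
            name="no-quorum-policy" value="demote"/>
  </cluster_property_set>
</crm_config>
```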

Regards,
  Yan



> 
>>
>> Regards,
>>  Yan
>>
>>>
>>>>
>>>> Best regards
>>>> Christian
>>>>
>>>>
>>>> 2014-04-07 9:54 GMT+02:00 Christian Ciach <dereineda at gmail.com>:
>>>> Hello,
>>>>
>>>> I am using Corosync 2.0 with Pacemaker 1.1 on Ubuntu Server 14.04 (daily builds until final release).
>>>>
>>>> My problem is as follows: I have a 2-node (plus a quorum-node) cluster to manage a multistate-resource. One node should be the master and the other one the slave. It is absolutely not allowed to have two masters at the same time. To prevent a split-brain situation, I am also using a third node as a quorum-only node (set to standby). There is no redundant connection because the nodes are connected over the internet.
>>>>
>>>> If one of the two nodes managing the resource becomes disconnected, it loses quorum. In this case, I want this resource to become a slave, but the resource should never be stopped completely! This leaves me with a problem: "no-quorum-policy=stop" will stop the resource, while "no-quorum-policy=ignore" will keep this resource in a master state. I already tried to demote the resource manually inside the monitor action of the OCF agent, but Pacemaker will immediately promote the resource again.
>>>>
>>>> I am aware that I am trying to manage a multi-site cluster and there is something like the booth daemon, which sounds like the solution to my problem. But unfortunately I need the location constraints of Pacemaker based on the score of the OCF agent. As far as I know, location constraints are not possible when using booth, because the 2-node cluster is essentially split into two 1-node clusters. Is this correct?
>>>>
>>>> To conclude: Is it possible to demote a resource on quorum loss instead of stopping it? Is booth an option if I need to manage the location of the master based on the score returned by the OCF-agent?
>>>>
>>>>
>>>> _______________________________________________
>>>> Pacemaker mailing list: Pacemaker at oss.clusterlabs.org
>>>> http://oss.clusterlabs.org/mailman/listinfo/pacemaker
>>>>
>>>> Project Home: http://www.clusterlabs.org
>>>> Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
>>>> Bugs: http://bugs.clusterlabs.org
>>>
>>
>> -- 
>> Gao,Yan <ygao at suse.com>
>> Software Engineer
>> China Server Team, SUSE.
>>
> 

-- 
Gao,Yan <ygao at suse.com>
Software Engineer
China Server Team, SUSE.


