[Pacemaker] [Question] About "quorum-policy=freeze" and "promote".

emmanuel segura emi2fast at gmail.com
Thu May 8 09:24:22 EDT 2014


Why are you using ssh as stonith? I don't think fencing is actually
working, because your nodes are left in the UNCLEAN state.
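external/ssh can only "fence" a node whose OS is still reachable over ssh,
so it is really only meant for testing; with a real power- or BMC-based
fence agent the surviving partition can confirm the kill and the peers do
not stay UNCLEAN. As a rough sketch only (the agent and its parameters
depend on your hardware, and the names, addresses and credentials below
are placeholders), one device per node would look something like:

  crm configure primitive p_fence_srv01 stonith:fence_ipmilan \
        params ipaddr=10.0.0.101 login=admin passwd=secret \
               pcmk_host_list=srv01 \
        op monitor interval=60s
  crm configure location l_fence_srv01 p_fence_srv01 -inf: srv01

and the same for srv02 and srv03.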


2014-05-08 5:37 GMT+02:00 <renayama19661014 at ybb.ne.jp>:

> Hi All,
>
> I have configured a Master/Slave resource on a three-node cluster with
> no-quorum-policy="freeze".
> (I use the Stateful RA for the Master/Slave resource.)
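> For reference, the relevant parts of the configuration look roughly like
> this (a sketch, not my exact CIB; names and intervals are illustrative):
>
>   property no-quorum-policy="freeze" stonith-enabled="true"
>   primitive pgsql ocf:pacemaker:Stateful \
>         op monitor interval=10s role=Master \
>         op monitor interval=11s role=Slave
>   ms msPostgresql pgsql \
>         meta master-max=1 master-node-max=1 clone-max=3 notify=true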
>
> ---------------------------------
> Current DC: srv01 (3232238280) - partition with quorum
> Version: 1.1.11-830af67
> 3 Nodes configured
> 9 Resources configured
>
>
> Online: [ srv01 srv02 srv03 ]
>
>  Resource Group: grpStonith1
>      prmStonith1-1      (stonith:external/ssh): Started srv02
>  Resource Group: grpStonith2
>      prmStonith2-1      (stonith:external/ssh): Started srv01
>  Resource Group: grpStonith3
>      prmStonith3-1      (stonith:external/ssh): Started srv01
>  Master/Slave Set: msPostgresql [pgsql]
>      Masters: [ srv01 ]
>      Slaves: [ srv02 srv03 ]
>  Clone Set: clnPingd [prmPingd]
>      Started: [ srv01 srv02 srv03 ]
> ---------------------------------
>
>
> The Master resource gets promoted on every node when I interrupt the
> cluster communication between all of the nodes.
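> (For example, the interruption can be simulated on each node roughly like
> this, assuming corosync on its default UDP port 5405:
>
>   iptables -A INPUT  -p udp --dport 5405 -j DROP
>   iptables -A OUTPUT -p udp --dport 5405 -j DROP
>
> After that, each node sees the other two as UNCLEAN, as shown below.)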
>
> ---------------------------------
> Node srv02 (3232238290): UNCLEAN (offline)
> Node srv03 (3232238300): UNCLEAN (offline)
> Online: [ srv01 ]
>
>  Resource Group: grpStonith1
>      prmStonith1-1      (stonith:external/ssh): Started srv02
>  Resource Group: grpStonith2
>      prmStonith2-1      (stonith:external/ssh): Started srv01
>  Resource Group: grpStonith3
>      prmStonith3-1      (stonith:external/ssh): Started srv01
>  Master/Slave Set: msPostgresql [pgsql]
>      Masters: [ srv01 ]
>      Slaves: [ srv02 srv03 ]
>  Clone Set: clnPingd [prmPingd]
>      Started: [ srv01 srv02 srv03 ]
> (snip)
> Node srv01 (3232238280): UNCLEAN (offline)
> Node srv03 (3232238300): UNCLEAN (offline)
> Online: [ srv02 ]
>
>  Resource Group: grpStonith1
>      prmStonith1-1      (stonith:external/ssh): Started srv02
>  Resource Group: grpStonith2
>      prmStonith2-1      (stonith:external/ssh): Started srv01
>  Resource Group: grpStonith3
>      prmStonith3-1      (stonith:external/ssh): Started srv01
>  Master/Slave Set: msPostgresql [pgsql]
>      Masters: [ srv01 srv02 ]
>      Slaves: [ srv03 ]
>  Clone Set: clnPingd [prmPingd]
>      Started: [ srv01 srv02 srv03 ]
> (snip)
> Node srv01 (3232238280): UNCLEAN (offline)
> Node srv02 (3232238290): UNCLEAN (offline)
> Online: [ srv03 ]
>
>  Resource Group: grpStonith1
>      prmStonith1-1      (stonith:external/ssh): Started srv02
>  Resource Group: grpStonith2
>      prmStonith2-1      (stonith:external/ssh): Started srv01
>  Resource Group: grpStonith3
>      prmStonith3-1      (stonith:external/ssh): Started srv01
>  Master/Slave Set: msPostgresql [pgsql]
>      Masters: [ srv01 srv03 ]
>      Slaves: [ srv02 ]
>  Clone Set: clnPingd [prmPingd]
>      Started: [ srv01 srv02 srv03 ]
> ---------------------------------
>
> I think that promoting the Master/Slave resource even after the cluster
> has lost quorum is Pacemaker's specified behavior.
>
> Is it the responsibility of the resource agent to prevent this
> multiple-Master state?
>  * I think the drbd RA has such a mechanism.
>  * The Stateful RA, however, has no such mechanism.
>  * So, as an example, I think a mechanism like drbd's is absolutely
> necessary whenever a new Master/Slave resource agent is written (see the
> sketch below).
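> For instance (a rough sketch only, not the actual drbd code; the helper
> functions are hypothetical), I imagine the agent itself has to guard its
> promote action and only advertise a master preference while replication
> is known to be healthy:
>
>   promote() {
>       # Refuse to become Master unless the peer's state can be verified
>       # (e.g. replication connected, or the peer known to be outdated).
>       if ! peer_state_is_safe; then     # hypothetical helper
>           ocf_log err "peer state unknown, refusing to promote"
>           return $OCF_ERR_GENERIC
>       fi
>       do_local_promote                  # hypothetical helper
>       return $OCF_SUCCESS
>   }
>
>   monitor() {
>       # Only advertise a promotion score while replication is healthy,
>       # otherwise withdraw it so Pacemaker will not promote this node.
>       if replication_is_healthy; then   # hypothetical helper
>           crm_master -l reboot -v 100
>       else
>           crm_master -l reboot -D
>       fi
>       do_local_monitor                  # hypothetical helper
>   }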
>
> Is my understanding wrong?
>
> Best Regards,
> Hideo Yamauchi.
>
>
> _______________________________________________
> Pacemaker mailing list: Pacemaker at oss.clusterlabs.org
> http://oss.clusterlabs.org/mailman/listinfo/pacemaker
>
> Project Home: http://www.clusterlabs.org
> Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
> Bugs: http://bugs.clusterlabs.org
>



-- 
this is my life and I live it as long as God wills