[Pacemaker] [Question] About "quorum-policy=freeze" and "promote".

renayama19661014 at ybb.ne.jp
Wed May 7 23:37:45 EDT 2014


Hi All,

I have configured a Master/Slave resource on a three-node cluster with no-quorum-policy="freeze".
(The Master/Slave resource uses the Stateful resource agent.)
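
For reference, a minimal crm shell sketch of the kind of configuration I mean (this is my reconstruction, not the exact configuration: it assumes crm shell syntax and ocf:pacemaker:Stateful behind the pgsql primitive, with names taken from the crm_mon output below):

---------------------------------
property no-quorum-policy="freeze"
primitive pgsql ocf:pacemaker:Stateful
ms msPostgresql pgsql \
    meta master-max="1" master-node-max="1" \
         clone-max="3" clone-node-max="1" notify="true"
---------------------------------

The resulting cluster state: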

---------------------------------
Current DC: srv01 (3232238280) - partition with quorum
Version: 1.1.11-830af67
3 Nodes configured
9 Resources configured


Online: [ srv01 srv02 srv03 ]

 Resource Group: grpStonith1
     prmStonith1-1      (stonith:external/ssh): Started srv02 
 Resource Group: grpStonith2
     prmStonith2-1      (stonith:external/ssh): Started srv01 
 Resource Group: grpStonith3
     prmStonith3-1      (stonith:external/ssh): Started srv01 
 Master/Slave Set: msPostgresql [pgsql]
     Masters: [ srv01 ]
     Slaves: [ srv02 srv03 ]
 Clone Set: clnPingd [prmPingd]
     Started: [ srv01 srv02 srv03 ]
---------------------------------


When I interrupt the internal cluster communication between all of the nodes, the Master resource ends up promoted on every node.
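
(One common way to induce this kind of split is to block the cluster traffic on every node; this is only an assumption about the test method, shown here for illustration, assuming corosync on its default UDP port 5405.)

---------------------------------
# Run on each node to isolate it from the others by dropping
# corosync traffic in both directions (default UDP port 5405).
iptables -A INPUT  -p udp --dport 5405 -j DROP
iptables -A OUTPUT -p udp --dport 5405 -j DROP
---------------------------------

Each isolated node then reports the other two as UNCLEAN: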

---------------------------------
Node srv02 (3232238290): UNCLEAN (offline)
Node srv03 (3232238300): UNCLEAN (offline)
Online: [ srv01 ]

 Resource Group: grpStonith1
     prmStonith1-1      (stonith:external/ssh): Started srv02 
 Resource Group: grpStonith2
     prmStonith2-1      (stonith:external/ssh): Started srv01 
 Resource Group: grpStonith3
     prmStonith3-1      (stonith:external/ssh): Started srv01 
 Master/Slave Set: msPostgresql [pgsql]
     Masters: [ srv01 ]
     Slaves: [ srv02 srv03 ]
 Clone Set: clnPingd [prmPingd]
     Started: [ srv01 srv02 srv03 ]
(snip)
Node srv01 (3232238280): UNCLEAN (offline)
Node srv03 (3232238300): UNCLEAN (offline)
Online: [ srv02 ]

 Resource Group: grpStonith1
     prmStonith1-1      (stonith:external/ssh): Started srv02 
 Resource Group: grpStonith2
     prmStonith2-1      (stonith:external/ssh): Started srv01 
 Resource Group: grpStonith3
     prmStonith3-1      (stonith:external/ssh): Started srv01 
 Master/Slave Set: msPostgresql [pgsql]
     Masters: [ srv01 srv02 ]
     Slaves: [ srv03 ]
 Clone Set: clnPingd [prmPingd]
     Started: [ srv01 srv02 srv03 ]
(snip)
Node srv01 (3232238280): UNCLEAN (offline)
Node srv02 (3232238290): UNCLEAN (offline)
Online: [ srv03 ]

 Resource Group: grpStonith1
     prmStonith1-1      (stonith:external/ssh): Started srv02 
 Resource Group: grpStonith2
     prmStonith2-1      (stonith:external/ssh): Started srv01 
 Resource Group: grpStonith3
     prmStonith3-1      (stonith:external/ssh): Started srv01 
 Master/Slave Set: msPostgresql [pgsql]
     Masters: [ srv01 srv03 ]
     Slaves: [ srv02 ]
 Clone Set: clnPingd [prmPingd]
     Started: [ srv01 srv02 srv03 ]
---------------------------------

My understanding is that promoting the Master/Slave resource even after the cluster has lost quorum is Pacemaker's specified behaviour.

Is it the responsibility of the resource agent to prevent this multiple-Master state?
 * I believe the drbd RA has such a safeguard.
 * However, the Stateful RA has no such function.
 * So, for example, I think a mechanism like drbd's is always necessary when writing a new Master/Slave resource agent (a rough sketch of what I mean follows below).
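
A rough illustration, not taken from any real agent, of the kind of guard I mean: an OCF promote action that refuses to promote while its partition has no quorum (assuming the standard OCF shell helpers and the crm_node tool):

---------------------------------
#!/bin/sh
# Illustrative sketch only -- not the actual drbd RA logic.
: ${OCF_FUNCTIONS_DIR=${OCF_ROOT}/lib/heartbeat}
. ${OCF_FUNCTIONS_DIR}/ocf-shellfuncs

stateful_promote() {
    # crm_node -q prints 1 when the local partition holds quorum.
    if [ "$(crm_node -q)" != "1" ]; then
        ocf_log err "refusing to promote: partition has no quorum"
        return $OCF_ERR_GENERIC
    fi

    # ... application-specific promotion would go here ...
    return $OCF_SUCCESS
}
---------------------------------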

Is my understanding wrong?

Best Regards,
Hideo Yamauchi.




