[ClusterLabs] MySQL resource causes error "0_monitor_20000".
Andrei Borzenkov
arvidjaar at gmail.com
Tue Aug 18 06:58:36 UTC 2015
On Tue, Aug 18, 2015 at 9:15 AM, Kiwamu Okabe <kiwamu at debian.or.jp> wrote:
> Hi Andrei,
>
> On Tue, Aug 18, 2015 at 2:24 PM, Andrei Borzenkov <arvidjaar at gmail.com> wrote:
>>> I set up master-master replication under Pacemaker.
>>> But it causes the error "0_monitor_20000".
>>
>> It's not an error, it is just the operation name.
>
> Sorry, I'm confused.
>
>>> If one of them starts Heartbeat and the other doesn't, the error doesn't occur.
>>>
>>> What should I check?
>>
>> Probably you have to allow more than one master (the default is just one); see the description of the master-max resource option.
>
> I used the following settings:
>
> ```
> centillion.db01# crm configure
> crm(live)configure# primitive vip_192.168.10.200 ocf:heartbeat:IPaddr2 \
>     params ip="192.168.10.200" cidr_netmask="24" nic="eth0"
> crm(live)configure# property no-quorum-policy="ignore" stonith-enabled="false"
> crm(live)configure# node centillion.db01
> crm(live)configure# node centillion.db02
> crm(live)configure# commit
> crm(live)configure# quit
> centillion.db01# crm
> crm(live)# cib new mysql_repl
> crm(mysql_repl)# configure primitive mysql ocf:heartbeat:mysql \
>     params binary=/usr/local/mysql/bin/mysqld_safe datadir=/data/mysql \
>     pid=/data/mysql/mysql.pid socket=/tmp/mysql.sock \
>     log=/data/mysql/centillion.db.err \
>     replication_user=repl replication_passwd=slavepass \
>     op start interval=0 timeout=120s \
>     op stop interval=0 timeout=120s \
>     op monitor interval=20s timeout=30s \
>     op monitor interval=10s role=Master timeout=30s \
>     op monitor interval=30s role=Slave timeout=30s \
>     op promote interval=0 timeout=120s \
>     op demote interval=0 timeout=120s \
>     op notify interval=0 timeout=90s
> crm(mysql_repl)# cib commit mysql_repl
> crm(mysql_repl)# quit
> centillion.db01# crm configure ms mysql-clone mysql \
>     meta master-max=2 master-node-max=1 clone-max=2 clone-node-max=1 notify=true
> centillion.db01# crm configure colocation vip_on_mysql \
>     inf: vip_192.168.10.200 mysql-clone:Master
> centillion.db01# crm configure order vip_after_mysql \
>     inf: mysql-clone:promote vip_192.168.10.200:start
> ```
>
> Then I got the following result:
>
> ```
> ============
> Last updated: Tue Aug 18 14:42:37 2015
> Stack: Heartbeat
> Current DC: centillion.db02 (0302e3d0-df06-4847-b0f9-9ebddfb6aec7) - partition with quorum
> Version: 1.0.13-a83fae5
> 2 Nodes configured, unknown expected votes
> 2 Resources configured.
> ============
>
> Online: [ centillion.db01 centillion.db02 ]
>
> vip_192.168.10.200 (ocf::heartbeat:IPaddr2): Started centillion.db01
> Master/Slave Set: mysql-clone
> Masters: [ centillion.db01 centillion.db02 ]
>
> Failed actions:
> mysql:0_demote_0 (node=centillion.db01, call=11, rc=7, status=complete): not running
> ```
>
> There is no error now. But what I mean by "master-master replication" is:
>
> A. If both nodes are alive, one of them becomes master and the other
> becomes slave.
> B. If only one node is alive, that node becomes master.
> C. If a node joins, it becomes slave.
>
Oh, sorry, I misunderstood you. What you describe falls under
"master-slave" in my vocabulary :)
> How can I configure the nodes to behave like that?
>
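In pacemaker terms that is an ordinary master/slave (ms) resource with
master-max=1, which is also the default: only one instance is promoted
at a time, the surviving clone gets promoted if the master's node
fails, and a joining node starts as slave. A rough, untested sketch
using the same names as in your configuration (since mysql-clone
already exists in your CIB, you would change the meta attribute on the
existing resource, e.g. via crm configure edit, rather than re-create
it):

```
# sketch only: the same ms resource, but with the default master-max=1
# so pacemaker keeps exactly one promoted (master) instance
crm configure ms mysql-clone mysql \
    meta master-max=1 master-node-max=1 clone-max=2 clone-node-max=1 notify=true
```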
Did you set up MySQL replication by hand before bringing it under
pacemaker control? If not, my guess is that the resource agent sees
both instances as independent and hence treats both as masters.
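In that case it is worth getting replication working first and only
then handing the instances over to pacemaker. As a rough, untested
sketch of what the agent's replication_user/replication_passwd
parameters assume already exists on both nodes (adjust host and
privileges to your environment):

```
# sketch only: create the replication account that the resource agent's
# replication_user/replication_passwd parameters refer to
mysql -u root -e "GRANT REPLICATION SLAVE, REPLICATION CLIENT ON *.* \
  TO 'repl'@'%' IDENTIFIED BY 'slavepass';"

# and, before (or after) handing control to pacemaker, check that the
# second node really is replicating from the first
mysql -u root -e "SHOW SLAVE STATUS\G"
```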
> Thanks for your advice.
> --
> Kiwamu Okabe at METASEPI DESIGN
>