[Pacemaker] MySQL HA-Cluster, error on move
Patric Falinder
patric.falinder at omg.nu
Mon Apr 11 22:14:44 CET 2011
Hi,
I have a MySQL HA cluster set up and it's working fairly well, except
for one thing that I can't figure out how to fix.
I'm pretty new to the subject and got the configuration pretty much
from howtos on the internet, though I have set up an HA cluster before,
just to learn a bit more about clustering.
The problem is that when I migrate/move the resources to the
other node, or unmove them, I get this error message and mysqld won't
start/move properly:
Resource Group: mysql
    fs_mysql  (ocf::heartbeat:Filesystem):  Started dbcluster1
    ip_mysql  (ocf::heartbeat:IPaddr2):     Started dbcluster1
    mysqld    (lsb:mysql):                  Started dbcluster2 (unmanaged) FAILED
Master/Slave Set: ms_drbd_mysql
    Masters: [ dbcluster1 ]
    Slaves:  [ dbcluster2 ]

mysqld_stop_0 (node=dbcluster2, call=23, rc=1, status=complete): unknown error
I fix it by simply running a cleanup on the mysqld resource, like this:
# crm resource cleanup mysqld
and then it starts just fine.
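For completeness, this is the full sequence I use to recover (the
failcount check is just me verifying that the failure counter really is
reset afterwards; resource and node names are from my setup):

# crm resource cleanup mysqld
# crm resource failcount mysqld show dbcluster2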
Here is my configuration:
crm(live)configure# show
node dbcluster1 \
attributes standby="off"
node dbcluster2
primitive drbd_mysql ocf:linbit:drbd \
params drbd_resource="dbcluster" \
op monitor interval="15s"
primitive fs_mysql ocf:heartbeat:Filesystem \
        params device="/dev/drbd/by-res/dbcluster" \
        directory="/mnt/mysql/" fstype="ext3"
primitive ip_mysql ocf:heartbeat:IPaddr2 \
params ip="10.0.0.203" nic="eth1"
primitive mysqld lsb:mysql
group mysql fs_mysql ip_mysql mysqld \
meta target-role="Started" is-managed="true"
ms ms_drbd_mysql drbd_mysql \
        meta master-max="1" master-node-max="1" clone-max="2" \
        clone-node-max="1" notify="true"
colocation mysql_on_drbd inf: mysql ms_drbd_mysql:Master
order mysql_after_drbd inf: ms_drbd_mysql:promote mysql:start
property $id="cib-bootstrap-options" \
no-quorum-policy="ignore" \
stonith-enabled="false" \
expected-quorum-votes="2" \
dc-version="1.0.9-74392a28b7f31d7ddc86689598bd23114f58978b" \
cluster-infrastructure="openais" \
last-lrm-refresh="1302503942"
rsc_defaults $id="rsc-options" \
resource-stickiness="100"
I know there may be some stuff in the configuration that I don't need!?
But I'm still learning, so feel free to give me pointers.
Right now I don't have STONITH configured because the nodes run on two
different VMware ESXi hosts and we don't have a license for them yet,
so I can't use libvirt to reboot them, but we are going to fix that
soon.
So all I want is to have dbcluster1 as the "master" node, where the
resources are started by default, and when it fails have them migrate
over to dbcluster2, then maybe have them migrate back when dbcluster1
is back online, or do that manually!?
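From what I've read, a location constraint like the one below should
express that preference; this is an untested guess on my part, so
please correct me if it's wrong. With my resource-stickiness of 100, a
score of 50 should mean the resources stay on dbcluster2 after a
failover, while a score higher than 100 should make them fail back to
dbcluster1 automatically:

# crm configure location prefer_dbcluster1 mysql 50: dbcluster1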
I also have one more question. When I run "move mysql dbcluster2", for
example, I need to run "unmove mysql" afterwards, otherwise the
resources won't migrate back if that server fails, right!? Is there
somewhere I can see whether I still need to run "unmove", or whether
failover will work as it should?
Info about the nodes:
Both run Debian 5.0
Corosync 1.2.1-1~bpo50+1
Pacemaker 1.0.9.1+hg15626-1~bpo50+1
I share the data with DRBD
DRBD 2:8.3.7-1~bpo50+1+2.6.26-26lenny1
DRBD8-source 2:8.3.7-1~bpo50+1
MySQL 5.0.51a-24+lenny4
Just tell me if you need more info.
Thanks,
-Patric F.