[Pacemaker] Master Slave
Freddie Sessler
nanogger at gmail.com
Fri Jul 23 15:30:16 UTC 2010
Dan, thanks for your reply and the documentation. That all makes sense. I
guess what I am still confused about is how to tell Pacemaker to start mysql
on both servers. My limited experience so far has been that Pacemaker starts
mysql on the primary and, in the event of a failure, moves the virtual IP to
the secondary node and starts the mysql service there. The problem is that
mysql is running in master/slave mode for replication and therefore needs to
be running all the time on the secondary. Do I need a clone resource for
this? Thanks again.
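
Something along these lines is what I have in mind -- completely untested,
and the clone and constraint names are just placeholders:

# wrap the existing mysql primitive in a clone so it runs on both nodes
clone cl_mysql p_mysql \
        meta clone-max="2" clone-node-max="1"
# keep the VIP on a node that has a running mysqld
colocation vip_with_mysql inf: p_clusterip cl_mysql
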
F.
On Fri, Jul 23, 2010 at 3:33 AM, Dan Frincu <dfrincu at streamwide.ro> wrote:
> First, take a look at this:
> http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
> It contains all you need for this kind of setup. I'm not sure whether the
> M/S relationship extends to resources other than DRBD, but in this case you
> don't actually need an M/S relationship (from my point of view).
>
> 1. Read about 'symmetric-cluster' in the document above.
> 2. Based on the result of point 1, decide whether resources are allowed to
> run anywhere by default or not, and add location constraints for each
> primitive.
> 3. Give the mysql primitive a higher location score on the node you wish to
> use as the Primary node and a lower score on the Secondary node (if the
> Primary node fails, the resource will fail over to the Secondary node).
> 4. Define a colocation constraint so that the VIP will always run where the
> mysql resource runs (a rough sketch of points 2-4 follows after this list).
> 5. (Optional) Define mail alerts for resource failures, and decide what the
> mysql resource should do when the Primary node recovers: should it remain
> on the Secondary node until moved manually, or fail back to the Primary
> node by itself? These are things you might want to consider, but they are
> not essential to the question asked, hence the "optional".
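>
> A rough, untested sketch of points 2-4 in crm syntax (the scores and
> constraint names are only for illustration):
>
> # with symmetric-cluster=false, explicitly allow each primitive on both
> # nodes, preferring sipl-mysql-109
> location l_mysql_on_109 p_mysql 100: sipl-mysql-109
> location l_mysql_on_209 p_mysql 50: sipl-mysql-209
> location l_vip_on_109 p_clusterip 100: sipl-mysql-109
> location l_vip_on_209 p_clusterip 50: sipl-mysql-209
> # keep the VIP where the mysql resource is running
> colocation vip_with_mysql inf: p_clusterip p_mysql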
>
> Regards,
> Dan
>
> Freddie Sessler wrote:
>
> I have a quick question: is the Master/Slave setting in Pacemaker only
> allowed for a DRBD device, or can you use it to create other Master/Slave
> relationships? Do all resource agents potentially involved need to be aware
> of the Master/Slave relationship? I am trying to set up a pair of mysql
> servers, one replicating from the other (handled within mysql's my.cnf). I
> basically want to fail over the VIP from the primary node to the secondary
> node (which also happens to be the mysql slave) in the event that the
> primary has its mysql server stopped. I am not using DRBD at all. My config
> looks like the following:
>
> node $id="0cd2bb09-00b6-4ce4-bdd1-629767ae0739" sipl-mysql-109
> node $id="119fc082-7046-4b8d-a9a3-7e777b9ddf60" sipl-mysql-209
> primitive p_clusterip ocf:heartbeat:IPaddr2 \
>         params ip="10.200.131.9" cidr_netmask="32" \
>         op monitor interval="30s"
> primitive p_mysql ocf:heartbeat:mysql \
>         op start interval="0" timeout="120" \
>         op stop interval="0" timeout="120" \
>         op monitor interval="10" timeout="120" depth="0"
> ms ms_mysql p_mysql \
>         meta master-max="1" master-node-max="1" clone-max="2" clone-node-max="1"
> location l_master ms_mysql \
>         rule $id="l_master-rule" $role="Master" 100: #uname eq sipl-mysql-109
> colocation mysql_master_on_ip inf: p_clusterip ms_mysql:Master
> property $id="cib-bootstrap-options" \
>         stonith-enabled="false" \
>         no-quorum-policy="ignore" \
>         start-failure-is-fatal="false" \
>         expected-quorum-votes="2" \
>         symmetric-cluster="false" \
>         dc-version="1.0.9-89bd754939df5150de7cd76835f98fe90851b677" \
>         cluster-infrastructure="Heartbeat"
> rsc_defaults $id="rsc-options" \
>         resource-stickiness="100"
>
>
> What's happening is that mysql is never brought up due to the following
> errors:
>
> Jul 22 16:15:07 sipl-mysql-109 pengine: [22890]: info: native_color:
> Resource p_mysql:0 cannot run anywhere
> Jul 22 16:15:07 sipl-mysql-109 pengine: [22890]: info: native_color:
> Resource p_mysql:1 cannot run anywhere
> Jul 22 16:15:07 sipl-mysql-109 pengine: [22890]: info:
> native_merge_weights: ms_mysql: Rolling back scores from p_clusterip
> Jul 22 16:15:07 sipl-mysql-109 pengine: [22890]: info: master_color:
> ms_mysql: Promoted 0 instances of a possible 1 to master
> Jul 22 16:15:07 sipl-mysql-109 pengine: [22890]: info: native_color:
> Resource p_clusterip cannot run anywhere
> Jul 22 16:15:07 sipl-mysql-109 pengine: [22890]: info: master_color:
> ms_mysql: Promoted 0 instances of a possible 1 to master
> Jul 22 16:15:07 sipl-mysql-109 pengine: [22890]: notice: LogActions: Leave
> resource p_clusterip (Stopped)
> Jul 22 16:15:07 sipl-mysql-109 pengine: [22890]: notice: LogActions: Leave
> resource p_mysql:0 (Stopped)
> Jul 22 16:15:07 sipl-mysql-109 pengine: [22890]: notice: LogActions: Leave
> resource p_mysql:1 (Stopped)
>
>
> I thought I might have overcome this with my location and colocation
> directives, but it failed. Could someone give me some feedback on what I am
> trying to do, my config, and the resulting errors?
>
> Thanks
> F.
>
> --
> Dan FRINCU
> Systems Engineer
> CCNA, RHCE
> Streamwide Romania
>
>
> _______________________________________________
> Pacemaker mailing list: Pacemaker at oss.clusterlabs.org
> http://oss.clusterlabs.org/mailman/listinfo/pacemaker
>
> Project Home: http://www.clusterlabs.org
> Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
> Bugs:
> http://developerbugs.linux-foundation.org/enter_bug.cgi?product=Pacemaker
>
>